Miranda Barbour, a teenager arrested along with her husband, Elytte, for the murder of a man she allegedly met for sex through Craigslist, has confessed to killing countless others as part of a Satanic cult.
According to an NBC News story, “A teen satanist in a Pennsylvania prison claims she has killed nearly two dozen people in different parts of the country, according to a report in a local newspaper. ‘When I hit 22, I stopped counting,’ Miranda Barbour said in a jailhouse interview with the local newspaper the Daily Item in Sunbury. She added in the interview, which ran on Saturday, that she just wanted to be honest.”
In the interview Barbour claims to have killed people in five states. Barbour’s sensational claims have of course made national news, with salacious details of sex, murder and Satan. But are they true? Many experts, as well as Barbour’s father, doubt it.
False Confessions
But why confess to something you didn’t do? Sometimes criminals will falsely confess to crimes they didn’t commit in order to confuse police or as a stalling tactic. Some do it for the notoriety, or simply to taunt police. By giving or withholding what police investigators want — in this case, information about other crimes, whether true or false — a criminal can have a measure of power or control.
Serial killer Henry Lee Lucas, who is known to have killed about a dozen people before being caught, gave wildly varying numbers when asked how many people he’d murdered, ranging from dozens to hundreds to thousands. In the end it is not known exactly how many murders Lucas committed, but it is clear that he confessed to killing far more people than he actually did.
False confessions can also be elicited, intentionally or otherwise, from suspects under interrogation. One example of this is the infamous Central Park Five case in which five teenagers were arrested for the brutal 1989 rape and assault of a jogger in New York’s Central Park. The boys, who were in the park at the time of the assault, were rounded up and arrested. All of them denied attacking the woman, but later confessed after hours of interrogation.
The confessions were very persuasive to the jury, and all five were convicted and sentenced to prison terms of between six and eleven years. Yet the men were finally exonerated in 2002, when a convicted rapist and murderer admitted that he had assaulted the woman, acting alone. But what about the false confessions that led to their convictions? Confessions need not be beaten or tortured out of a person; sometimes they come after hours of psychological pressure and exhaustion. The five were scared teenagers who were promised that they could go home if they just told police what they wanted to hear.
Some people have even falsely confessed to crimes without being asked or even questioned: In 2006 an Atlanta man named John Mark Karr confessed to the unsolved murder of 6-year-old beauty pageant queen JonBenet Ramsey, whose 1996 death became a sensational homicide case. Karr was arrested by police but soon released when they realized that the details of his confession were implausible, and in some cases impossible. Whatever caused his false confession — whether a mental illness, a desire for attention, or other factors — he could not have committed the crime he confessed to.
It is plausible that Karr sincerely believed he killed Ramsey, and that Miranda Barbour sincerely believes she killed dozens of people.
In their 2004 article in the journal Applied Cognitive Psychology, researchers Linda Henkel and Kimberly Coffman analyzed this type of false confession: “Suspects who offer coerced-internalized false confessions do so for crimes that they are innocent of but come to falsely believe that they committed. These suspects sometimes come to ‘remember’ their participation, later relating the events in vivid, mental scenarios.
“Although the precise frequency with which such false confessions occur has not been officially determined, the literature is replete with case studies and compendiums of reports showing that innocent people can come to believe in their own guilt and even create ‘memories’ for their alleged crimes, with their innocence established later through additional evidence, such as DNA tests or confessions by the true guilty parties.”
In other words, some people confess because they really think that they committed the crime, or mistakenly “remember” it from seeing television re-enactments or descriptions of a crime. Sometimes these individuals have learning disabilities, though not always; under the right conditions (for example, being exhausted, denied sleep, or interrogated for hours), just about anyone could potentially come to believe that they committed a crime they did not.
In fact, as Henkel and Coffman note, “Both victims and eyewitnesses can have vivid and detailed recollections that they are quite confident about, and yet these recollections are erroneous … memories for entire events that never took place can be ‘implanted’ through suggestion and other manipulations and be remembered with confidence and vivid detail.”
The Satanic Panic
The fact that Barbour claims to have committed the killings as part of an organized Satanic cult severely damages her credibility. While Satanists do exist, they bear little resemblance to the evil, bloodthirsty cults that populate Grade-B horror films. Satanism encompasses a variety of beliefs, but most forms of Satanism are related to pagan traditions and, like the beliefs of witches (practitioners of Wicca, officially recognized as a legitimate religion in 1986), involve the worship of nature, magic and many New Age beliefs.
James Lewis, a religious studies professor at the University of Wisconsin, notes in his book “Satanism Today: An Encyclopedia of Religion, Folklore, and Popular Culture” that “A significant aspect of the stereotype of Satanism is that it always involves some kind of blood sacrifice — often animal, but also human. Most modern Satanists, however, are completely opposed to such acts. In the influential ‘Satanic Bible,’ for example, Anton LaVey describes this stereotype and rejects it as part of Satanism.”
Thus it seems that Barbour’s ideas about Satanic cults came not from any personal experience in one, but instead from watching false, stereotyped caricatures of them in horror movies and sensationalized television shows. For another excellent in-depth look at Satanism in popular culture, see folklorist Bill Ellis’s “Raising the Devil: Satanism, New Religions, and the Media” (2000, University Press of Kentucky).
Not only is Barbour not the first person to claim to have killed far more people than she likely did, but she’s also not the first to have falsely claimed to have participated in Satanic serial killings.
A woman writing under the name Lauren Stratford authored a best-selling 1991 book titled “Satan’s Underground,” in which she described, in gory confessional detail, her first-hand experience inside a Satanic cult. Stratford admitted to horrific acts, including torture killings and killing babies in the name of the Devil.
The book was enormously popular and influential, especially in Christian circles, during the “Satanic panic” hysteria that swept across America in the late 1980s and early 1990s. Later investigation revealed that Stratford’s confession was completely false; she had never joined any Satanic, serial-killing cult. It was all made up for attention.
Read more at Discovery News
Feb 22, 2014
Essential step toward printing living human tissues
A new bioprinting method developed at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard School of Engineering and Applied Sciences (SEAS) creates intricately patterned 3D tissue constructs with multiple types of cells and tiny blood vessels. The work represents a major step toward a longstanding goal of tissue engineers: creating human tissue constructs realistic enough to test drug safety and effectiveness.
The method also represents an early but important step toward building fully functional replacements for injured or diseased tissue that can be designed from CAT scan data using computer-aided design (CAD), printed in 3D at the push of a button, and used by surgeons to repair or replace damaged tissue.
"This is the foundational step toward creating 3D living tissue," said Jennifer Lewis, Ph.D., senior author of the study, who is a Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University, and the Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard SEAS. Along with lead author David Kolesky, a graduate student in SEAS and the Wyss Institute, her team reported the results February 18 in the journal Advanced Materials.
Tissue engineers have tried for years to produce lab-grown vascularized human tissues robust enough to serve as replacements for damaged human tissue. Others have printed human tissue before, but they have been limited to thin slices of tissue about a third as thick as a dime. When scientists try to print thicker layers of tissue, cells on the interior starve for oxygen and nutrients, and have no good way of removing carbon dioxide and other waste. So they suffocate and die.
Nature gets around this problem by permeating tissue with a network of tiny, thin-walled blood vessels that nourish the tissue and remove waste, so Kolesky and Lewis set out to mimic this key function.
3D printing excels at creating intricately detailed 3D structures, typically from inert materials like plastic or metal. In the past, Lewis and her team have pioneered a broad range of novel inks that solidify into materials with useful electrical and mechanical properties. These inks enable 3D printing to go beyond form to embed functionality.
To print 3D tissue constructs with a predefined pattern, the researchers needed functional inks with useful biological properties, so they developed several "bio-inks" -- tissue-friendly inks containing key ingredients of living tissues. One ink contained extracellular matrix, the biological material that knits cells into tissues. A second ink contained both extracellular matrix and living cells.
To create blood vessels, they developed a third ink with an unusual property: it melts as it cools, rather than as it warms. This allowed the scientists to first print an interconnected network of filaments, then melt them by chilling the material and suction the liquid out to create a network of hollow tubes, or vessels.
The Harvard team then road-tested the method to assess its power and versatility. They printed 3D tissue constructs with a variety of architectures, culminating in an intricately patterned construct containing blood vessels and three different types of cells -- a structure approaching the complexity of solid tissues.
Moreover, when they injected human endothelial cells into the vascular network, those cells regrew the blood-vessel lining. Keeping cells alive and growing in the tissue construct represents an important step toward printing human tissues. "Ideally, we want biology to do as much of the job as possible," Lewis said.
Lewis and her team are now focused on creating functional 3D tissues that are realistic enough to screen drugs for safety and effectiveness. "That's where the immediate potential for impact is," Lewis said.
Read more at Science Daily
Feb 21, 2014
Oldest fortified settlement ever found in North America
In an announcement likely to rewrite the book on early colonization of the New World, two researchers today said they have discovered the oldest fortified settlement ever found in North America. Speaking at an international conference on France at Florida State University, the pair announced that they have located Fort Caroline, a long-sought fort built by the French in 1564.
"This is the oldest fortified settlement in the present United States," said historian and Florida State University alumnus Fletcher Crowe. "This fort is older than St. Augustine, considered to be the oldest continuously inhabited city in America. It's older than the Lost Colony of Virginia by 21 years; older than the 1607 fort of Jamestown by 45 years; and predates the landing of the Pilgrims in Massachusetts in 1620 by 56 years."
Announcement of the discovery of Fort Caroline was made during "La Floride Française: Florida, France, and the Francophone World," a conference hosted by FSU's Winthrop-King Institute for Contemporary French and Francophone Studies and its Institute on Napoleon and the French Revolution. The conference commemorates the cultural relations between France and Florida since the 16th century.
Researchers have been searching for the remains of Fort Caroline for more than 150 years but had not found the actual site until now, Crowe said. The fort was long thought to be located east of downtown Jacksonville, Fla., on the south bank of the St. Johns River. The Fort Caroline National Memorial is located just east of Jacksonville's Dames Point Bridge, which spans the river.
However, Crowe and his co-author, Anita Spring, a professor emeritus of anthropology at the University of Florida, say that the legendary fort is actually located on an island at the mouth of the Altamaha River, two miles southeast of the city of Darien, Ga. Darien is located near the Georgia coast between Brunswick and Savannah, approximately 70 miles from the Jacksonville site.
"This really is a momentous finding, and what a great honor it is for it to be announced at a conference organized by the Winthrop-King Institute," said Martin Munro, a professor in FSU's Department of Modern Languages and Linguistics and director of the Winthrop-King Institute. "It demonstrates the pre-eminence of the institute and recognizes the work we do in promoting French and Francophone culture in Florida, the United States and internationally."
Darrin McMahon, the Ben Weider Professor of History and a faculty member with the Institute on Napoleon and the French Revolution, observed that Crowe and Spring's finding -- like the conference itself -- highlights France's longstanding presence in Florida and the Southeast. "From the very beginning, down to the present day, French and Francophone peoples have played an important role in this part of the world," McMahon said. "Our conference aims to draw attention to that fact."
To make the discovery, Crowe, who received his Ph.D. in history from Florida State in 1973, flew to Paris and conducted research at the Bibliothèque Nationale de France, the French equivalent of the U.S. Library of Congress. There he found a number of 16th-century maps that pinpointed the location of Fort Caroline. Some of the maps were in 16th-century French, some in Latin, some in Spanish, and some were even in English.
Francois Dupuigrenet Desroussilles, a professor of Christianity in the FSU Department of Religion and for 20 years the curator of rare books in the Bibliothèque Nationale de France, underlined the fraternal attitude of the French Protestant settlers at Fort Caroline toward Native Americans, a rare occurrence among Western colonists, and the new perspectives the discovery opens on the relationship between Huguenots and Indian tribes.
Crowe was able to match French maps from the 16th to 18th centuries of what is today the southeastern coast of the United States with coastal charts of the United States published by the National Oceanic and Atmospheric Administration, and with maps published by the U.S. Geological Survey.
One reason scholars claimed that Fort Caroline was located near Jacksonville is because, they believed, the local Indian tribes surrounding the fort spoke the Timucuan language, the Native American language of Northeast Florida.
"We proved that the Native Americans living near the fort spoke a language called Guale (pronounced "WAH-lay")," Spring said. "The Guale speakers lived near Darien, Ga. They did not live in Northeast Florida, where Jacksonville is."
The two scholars believe that Fort Caroline lies on Rhetts Island, southeast of Darien.
"The fort appears to be situated in an impoundment used for duck hunting in the fall," said Crowe, "and thankfully, the site is protected by the Georgia Department of Natural Resources."
"The frustrating and often acrimonious quest to find the fort has become a sort of American quest for the Holy Grail by archaeologists, historians and other scholars," he noted. "The inability to find the fort has made some wonder if it ever existed."
In 1565, Spanish soldiers under Pedro Menéndez marched into Fort Caroline and slaughtered some 143 men and women who were living there at the time. After the massacre, Menéndez wrote the king of Spain that he had discovered the French fort at "31 degrees North latitude." Using Google Earth, Crowe found the fort close to where the Spanish general had reported.
"The actual latitude of what we believe is Fort Caroline is well within the margin of error of 16th-century navigational instruments, about 17 miles," Crowe said.
French colonists at Fort Caroline were astonished by the dazzling amounts of gold and silver worn by the Indians near the fort. These reports were dismissed as fiction by previous researchers, who argued that North Florida has no deposits of either precious metal.
"We studied the trade routes of the Guale Indians and found that they led directly to the gold and silver deposits near Dahlonega, Ga.," Spring said. In 1828, Dahlonega became the site of America's first mint, and over the years about $600 million worth of gold, in 2013 dollars, has been recovered there.
Read more at Science Daily
This Fish Swims Up a Sea Cucumber’s Butt and Eats Its Gonads
If Buddhists are right about that whole reincarnation thing, it’d be hard to imagine what you’d have to do wrong to die and come back as a sea cucumber. One minute you’re human and the next you’re crawling around the seafloor as what is essentially a mobile intestine, hoovering up food at one end and expelling it through the other.
And then, inevitably, the pearlfish would find you.
You’re breathing through your anus, by the way, and when you take a breath, the pearlfish strikes — squirming up your butt, making itself comfortable in your respiratory organ, and eating your gonads. Or, they’ll go up in pairs and have sex in your body cavity. And that’s when you realize that you must have been a really awful human being in a past life. Like, the type of person who talks on their phone in a movie theater kind of awful.
Such pearlfishes come in a range of species, and don’t necessarily limit themselves to invading sea cucumbers. They’ll also work their way into sea stars, and are so named because they’ve been found dead inside oysters, completely coated in mother-of-pearl. Beautiful, really, though I reckon the pearlfish would beg to differ.
This behavior is the strange product of a housing crisis. You see, shelter is in short supply on many seafloors, particularly those that lack reefs. And there are few better shelters than sea cucumbers, little mobile homes that pearlfishes will enter pretty much as they please, leaving to hunt and returning for protection. If they can’t return to the same one, no worries at all. There’s plenty of decent housing squirming around the seafloor — if you’re willing to live in a sea cucumber’s bum.
The pearlfish finds its reluctant host likely by smell, according to biologist Eric Parmentier of Belgium’s University of Liège. It then must choose the right end to enter, using its lateral line — sensory organs that detect movements in water — to home in on the outflow from the respiratory tree at the anus.
“Two strategies are observed for entering,” Parmentier said. “One, head first by propelling itself with violent strokes of the tail; two, tail first by placing the head at the cloaca of the sea cucumber and moving the thin tail forward alongside its own body at the level of the lateral line,” then slowly backing into the host, though not yet all the way.
“The reason for this second strategy,” Parmentier said, “is that the host has detected the presence of the fish and, in response, closes its anus. But the host has to breathe, so it has to dilate the anus to realize the water flow. The fish blocks the aperture and the host has to enlarge this opening more and more.”
Depending on what species it is, the pearlfish initiates one of two relationships once inside: a commensal one, in which it simply takes up space without either helping or adversely affecting the sea cucumber, or a rather more parasitic one, in which it chows down on its host’s gonads.
The sea cucumber, though, has a trick up its sleeve. Remarkably, it can regenerate complex body parts like intestines and, yes, gonads. And it’s a damn good thing it can, because sea cucumbers defend themselves in what might be described as a fairly unorthodox manner.
“Probably the best thing that sea cucumbers are known for is evisceration,” said marine biologist Christopher Mah, “which is tossing their guts out at predators when they are harassed by them. So you have a crab or a fish or something and what they’ll do is literally eviscerate, just take a good chunk of their intestine that will spool out of their body and get shot out at the predator or whatever as a distraction.”
So like a disgraced samurai disemboweling himself, the sea cucumber gifts the world with its intestines, whether the world wants them or not. Interestingly, though, the pearlfish doesn’t itself seem to trigger this reaction for reasons that aren’t yet clear. And it’s important to consider that the fish in fact benefits from the evisceration, because by using the sea cucumber as a home, it necessarily adopts its host’s predators. Its survival depends on the sea cucumber’s ability to defend itself, which is quite intriguing from an evolutionary perspective.
“Is it possible to see here a result of natural selection, in which the choice of a host equipped with a defense system could minimize the risk of predation?” Parmentier asked in a 2005 paper.
Some sea cucumber species even go beyond firing their intestines at predators. They’re equipped with hundreds of Cuvierian tubules — sticky, toxic tubes that spray out of the cloaca (an all-purpose opening in creatures like birds and reptiles and some invertebrates that releases both waste and reproductive elements), clinging to attackers and immobilizing them. Yet not only does the pearlfish fail to trip this defense when it enters the sea cucumber, it seems to be immune to its toxins while occupying the host, which Parmentier says may be attributable to the unusual amount of mucus coating the fish’s body.
Read more at Wired Science
'Pompeii:' 10 Strange Facts About the Roman Empire
Strange Finds and Other Buried Facts
The historical action movie "Pompeii," opening Friday in theaters, is actually two movies rolled into one. The first film is a standard-issue gladiator picture, with our hero Milo the Celt (Kit Harington) fighting his way through a procession of increasingly scary bad guys. Milo's adventures take place in the slave pits and arenas of Pompeii, the ancient Roman city that was famously buried in volcanic ash around 79 A.D.
The second movie kicks in about halfway through, when nearby Mount Vesuvius erupts in a spectacular display that provides all that historically accurate ash. Also: pillars of fire, rivers of lava, flaming boulders, several earthquakes and even a giant Mediterranean tidal wave. What began as a B-movie gladiator flick ends as a disaster picture of epic proportions, with eye-popping 3-D effects.
History nerds should enjoy all the big-budget production values detailing the ancient Roman Empire. Before the fiery destruction, the movie depicts life at the height of the Pax Romana era -- the period of relative peace after Rome's initial expansion and before its eventual decline.
Watch the corners of the frame in "Pompeii" and you can glean some interesting tidbits -- for instance, some colosseums had a kind of partial and primitive retractable roof for shading the VIPs. Here are 10 more details about the ancient Roman empire that you might not know.
Those Roman Colosseums Were Built to Last
The amphitheater of Pompeii is among the oldest surviving pieces of ancient Roman architecture. As depicted in the film, the colosseum was made of stone and plaster -- same as the larger Roman Colosseum -- and was designed to safely facilitate the gathering of large crowds for sporting events. That didn't always work to calm the hooligans, though. The Roman historian Tacitus writes of a huge riot in 59 C.E., between the Pompeians and visitors from the neighboring city of Nuceria, that resulted in a ban on colosseum events for several years.
Roman Buildings Had Central Heating
Rome's famous public baths and many private villas of the rich were heated by what's called a "hypocaust" system. The floor of the building was raised off the ground with pillars and the space below sealed off and insulated with ceramic tiles. Hot air from the furnace or fireplace was routed into the enclosed space beneath the floor, and sometimes into hollowed-out walls. A system of flues circulated the hot air and vented out the smoke.
The Toga Was a Status Symbol
While the college toga party may be an egalitarian affair, in ancient Rome the toga couldn't be worn by just anyone. In fact, the toga was restricted to Roman citizens -- a status governed by a complex system of laws. Togas weren't just sheets, either. The material, usually wool, was semi-circular in shape and draped by way of a complex method of tucks and folds. In later years, particular patterns and colors signified specific ranks and functions in Roman society.
Romans Wore Underwear, Too
Romans seldom went commando under those togas. Both men and women wore a loincloth called a subligaculum, made from wool or linen, although silken undergarments were prized by the wealthy. Women also sometimes wore a kind of strapless proto-brassiere called a mamillare or strophium. It was common for younger women especially to bind their breasts tightly, sometimes with soft leather.
Read more at Discovery News
Speeding Star Shocks Interstellar Space
Astronomers using NASA's Spitzer Space Telescope have spotted a star ripping through space, generating a violent bow shock ahead of its relentless rampage through the interstellar medium.
Kappa Cassiopeiae (κ Cass) is a hypervelocity blue supergiant star over 40 times the size of our sun that is barreling through space at the breakneck speed of 2.5 million miles per hour (or 1,100 kilometers per second) relative to its neighboring stars. At those kinds of velocities, you'd expect to see something dramatic and κ Cass doesn't disappoint.
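As a quick unit-conversion sanity check (my arithmetic, not from the article), 1,100 kilometers per second does indeed work out to roughly 2.5 million miles per hour:

```python
# Convert the quoted stellar speed from km/s to mph.
KM_PER_MILE = 1.609344
SECONDS_PER_HOUR = 3600

def km_per_s_to_mph(speed_km_s: float) -> float:
    """Convert a speed in kilometers per second to miles per hour."""
    return speed_km_s * SECONDS_PER_HOUR / KM_PER_MILE

print(f"1,100 km/s ≈ {km_per_s_to_mph(1100):,.0f} mph")
# Output: 1,100 km/s ≈ 2,460,632 mph -- i.e. about 2.5 million miles per hour.
```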
Spitzer's infrared optics have picked out κ Cass' huge bow shock as the star's magnetic field and stellar wind particles slam into the gases and dust filling the interstellar medium, heating it up. Bow shocks are often found in front of some of the speediest stars in our galaxy.
Bow shocks are useful as they act as a remote sensor of sorts, allowing astronomers to understand the characteristics of the environment the star is traveling through.
In this image, the red bow shock exhibits some fine structure that is possibly linked to the magnetic field that threads throughout the Milky Way shaping the gas and dust. By zooming in on these hypervelocity stars that sport bow shocks, astronomers are allowed a rare look into the structure of this normally invisible field that is thought to permeate our entire galaxy.
The wispy green clouds throughout the image are caused by polycyclic aromatic hydrocarbons illuminated by starlight that are located along Spitzer's line of sight to κ Cass.
Read more at Discovery News
Feb 20, 2014
Dogs and Humans Are Hardwired to Listen
Dog and human brains turn out to be surprisingly similar, at least where communication and emotions are concerned, a new study finds.
The research, published in the journal Current Biology, is the first to compare brain functions between humans and any non-primate animal. It found that both dogs and humans evolved to listen for emotion when someone communicates.
We humans can tell if a person or dog sounds happy or sad, for example, or if he or she is ready to fight. Dogs can do the same.
“Dogs and humans share a similar social environment,” co-author Attila Andics, of the Hungarian Academy of Sciences, said in a press release. “Our findings suggest that they also use similar brain mechanisms to process social information. This may support the successfulness of vocal communication between the two species.”
If you say something to your dog and he looks as though he understands, there’s a good chance he really does — at least in terms of the emotions you are conveying. This is probably one reason why dogs are so good at reading us. They are super sensitive to how you are really feeling, as opposed to focusing on what you are saying.
You might, for example, respond, “fine,” when a housemate asks how you’re feeling, but if you are under the weather, your dog likely senses the change.
For the study, Andics and colleagues trained 11 dogs to lie motionless in an fMRI brain scanner. Human test subjects did the same. The researchers then monitored brain activity while the dogs and people listened to nearly 200 dog and human sounds, ranging from whining or crying to playful barking or laughing.
In dogs and humans, images show hearing a voice activates similar areas of the brain. The brains of dogs are more tuned to their own species. (I’ll bet experience can change that. If a person spends a lot of time around dogs, for example, they will fine-tune their doggy perception skills. Dogs surely do the same.)
An interesting difference, noted in the study, is that in dogs, 48 percent of all sound-sensitive brain regions respond more strongly to sounds other than voices. That’s in contrast to humans, in which only 3 percent of sound-sensitive brain regions show greater response to non-vocal sounds.
We humans therefore pay more attention to people talking than to, say, the sound of a squirrel chattering outside. Dogs still retain more of their wild ways, so the latter would be just as important to them.
Read more at Discovery News
Ants Build Raft to Escape Flood, Protect Queen
Ants may be small, but they’re certainly not stupid, as evidenced by the discovery that they build rafts to save themselves and their queen during floods.
What’s more, they construct the rafts using themselves — living ants — linked together to form a nearly waterproof buoyant vessel, according to a study published in the latest issue of PLOS ONE.
“Social organisms have an advantage when responding to ecological adversity: They can react in a collective and organized way, working together to perform tasks that a solitary individual could not achieve,” Jessica Purcell from the University of Lausanne and her colleagues wrote.
Formica selysi
Surprisingly, baby ants were used to form the base of the raft. Worker adult ants then joined together to form the rest of the structure. The queen was always placed in the safest spot — right at the center of the raft.
“We expected that individuals submerged on the base of the raft would face the highest cost, so we were astonished to see the ants systematically place the youngest colony members in that position,” Purcell said in a press release.
She continued, “Further experiments revealed that the brood are the most buoyant members of the society and that rafting does not decrease their survival … this configuration benefits the group at minimal cost.”
Who knew that baby ants float? Well, we do now. It’s no wonder that ants so often outsmart humans, foiling extermination attempts.
Ants are up there with cockroaches and other tough creatures that are true survivors. They aren’t the only non-human organisms, though, to join forces for self-preservation.
Read more at Discovery News
Gourd Invasion Beat Europeans Across Atlantic
Bottle gourds’ wild ancestors may have crossed the Atlantic Ocean on the currents that run from western Africa to the Caribbean and South America thousands of years before Europeans made similar voyages.
A recent genetic analysis found that ancient gourds (Lagenaria siceraria) from the western hemisphere carried the DNA signature of African wild gourds. Those hard-skinned fruits could have floated across the Atlantic in as little as 100 days, with an average sea voyage of approximately nine months. The Proceedings of the National Academy of Sciences (PNAS) published a study that presented both the genetic analysis and travel time estimates.
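As a rough plausibility check (my own arithmetic, assuming the narrowest Africa-to-South-America crossing is on the order of 2,900 kilometers, a figure not taken from the study), the quoted travel times imply drift speeds well within the range of ordinary surface currents:

```python
# Implied drift speed for a floating gourd, under an assumed ~2,900 km crossing.
CROSSING_KM = 2_900  # assumption: rough width of the Atlantic at its narrowest
HOURS_PER_DAY = 24

def drift_speed_km_per_hour(days: float, distance_km: float = CROSSING_KM) -> float:
    """Average speed needed to cover the crossing in the given number of days."""
    return distance_km / (days * HOURS_PER_DAY)

for label, days in (("fastest quoted crossing", 100), ("~nine-month average", 9 * 30)):
    print(f"{label}: {drift_speed_km_per_hour(days):.2f} km/h")
# Roughly 1.2 km/h for the 100-day crossing and about 0.45 km/h for the
# nine-month average -- speeds comparable to typical open-ocean surface currents.
```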
The wild gourds simply floated from their homeland in Africa to the Western Hemisphere. Once the floating fruit reached new lands, the seeds within sprouted and animals distributed the plants even further. Wild gourd seeds have been found in mastodon dung in Florida, noted the study’s authors, led by post-doctoral researcher Logan Kistler of Penn State.
After the wild gourds established themselves on dry land, humans may have domesticated the plant in multiple places independently, the study authors suggested. The fruits’ seagoing ability allowed bottle gourds to become the only domesticated crop with a global distribution before the 1500s. Domesticated gourds first appear in archaeological finds in the Western Hemisphere from about 10,000 years ago. People used the dried shells of the gourds to make water bottles, spoons, bowls and other items. In some places, such as the coast of western South America, people likely used gourds as containers before inventing ceramic pots.
Earlier studies suggested that bottle gourds may have accompanied prehistoric people as they crossed the now-submerged land bridge from Asia to North America in the Arctic. The PNAS study authors pointed out that the early Native Americans would have needed to cultivate the gourds in the frigid north as they migrated. However, gourds need warm weather to grow.
Read more at Discovery News
'Sloshing' Supernova Sheds Light on Star's Death
By tracing radioactive material in the remains of a nearby exploded star, scientists have a new understanding of what happened in the star’s final moments and how similar explosions create the calcium, gold, iron and other elements spread throughout the cosmos.
The discovery comes from NASA’s Nuclear Spectroscopic Telescope Array, or NuSTAR, which was launched in 2012 to home in on the highest energy X-ray radiation emanating from celestial objects.
Astronomers took a look at a popular target, Cassiopeia A, which is the remnant of a star that exploded some 11,000 years ago. In visible light, Cas A is an expanding spherical cloud of debris stretching 10 light years, or some 60 trillion miles, across the sky.
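For the record, the mileage figure checks out (my arithmetic, using the standard constants below, not numbers from the article):

```python
# Convert 10 light-years to miles, using c = 299,792.458 km/s and a Julian year.
C_KM_PER_S = 299_792.458
SECONDS_PER_YEAR = 365.25 * 24 * 3600
KM_PER_MILE = 1.609344

def light_years_to_miles(light_years: float) -> float:
    """Distance in miles corresponding to the given number of light-years."""
    return light_years * C_KM_PER_S * SECONDS_PER_YEAR / KM_PER_MILE

print(f"10 light-years ≈ {light_years_to_miles(10):.2e} miles")
# Output: 10 light-years ≈ 5.88e+13 miles -- i.e. "some 60 trillion miles."
```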
NuSTAR’s X-ray eye shows a different scene. The explosion that marked the star’s death was not symmetrical. Rather than jets — generated by the spinning star’s collapse — as some computer models predict, the detonation more likely was triggered by its core sloshing around, similar to the disrupted surface of a pot of boiling water.
“Stars are spherical balls of gas, and so you might think that when they end their lives and explode, that explosion would look like a uniform ball expanding out with great power,” NuSTAR’s lead scientist Fiona Harrison, with the California Institute of Technology, said in a statement.
“Our new results show how the explosion’s heart, or engine, is distorted, possibly because the inner regions literally slosh around before detonating,” she said.
When a massive star runs out of hydrogen for nuclear fusion, gravity eventually takes the upper hand and begins crushing it, building up pressure inside and fusing together even heavier elements.
When there is nothing left to fuse, at the very center of the star tiny particles called neutrinos form and start heating material just behind the shock wave.
Astrophysicist Brian Grefenstette, also with Caltech, likens the process to boiling water.
“You’re heating up the water. That makes bubbles that rise up and the top of your boiling water sloshes around a little bit,” Grefenstette told reporters.
Neutrinos cause a similar phenomenon in the heart of a collapsing star.
“That’s where you get your big bubbles,” Grefenstette said. “They come up and they make ripples in the shock wave.”
“It sort of pushes the material out of the way, just like the bubbles in your pot (of boiling water.) In this case, they’re letting the shock wave out and the shock wave tears apart the rest of the star,” he said.
Read more at Discovery News
Feb 19, 2014
Quantum Microscope May Be Able to See Inside Living Cells
By combining quantum mechanical quirks of light with a technique called photonic force microscopy, scientists can now probe detailed structures inside living cells like never before. This ability could bring into focus previously invisible processes and help biologists better understand how cells work.
Photonic force microscopy is similar to atomic force microscopy, where a fine-tipped needle is used to scan the surface of something extremely small such as DNA. Rather than a needle, researchers used extremely tiny fat granules about 300 nanometers in diameter to map out the flow of cytoplasm inside yeast cells with high precision.
To see where these minuscule fat particles were, they shined a laser on them. Here, the researchers had to rely on what’s known as squeezed light. Photons of light are inherently noisy and because of this, a laser beam’s light particles won’t all hit a detector at the same time. There is a slight randomness to their arrival that makes for a fuzzy picture. But squeezed light uses quantum mechanical tricks to reduce this noise and clear up the fuzziness.
“The essential idea was to use this noise-reduced light to locate the nano-particles inside a cell,” said physicist Warwick Bowen of the University of Queensland in Australia, co-author of a paper that came out Feb. 4 in Physical Review X.
The reason behind all this was to overcome a fundamental optical limit that has always caused headaches for biologists. The diffraction limit of light puts a constraint on the size of something you can resolve with a microscope for a given wavelength of light. For visible wavelengths, this limit is about 250 nanometers. Anything smaller can’t be easily seen. The trouble is, a lot of structures inside of cells, including organelles, cytoskeletons, and individual proteins, are much smaller than this.
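The constraint being described is usually written as the Abbe diffraction formula; the numbers below are the generic textbook version, not values taken from the paper:

\[ d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \text{nm}}{2 \times 1} \approx 250\ \text{nm}, \]

where \( \lambda \) is the wavelength of the light and \( \mathrm{NA} \) is the numerical aperture of the microscope objective (close to 1 for a good lens in air), which is why features much smaller than a couple of hundred nanometers blur together.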
Scientists have come up with clever ways to get around the diffraction limit and resolve things as small as 20 nanometers. But the new quantum technique has pushed that limit even farther. Instead of using light, Bowen’s team passed a nano-particle over the surface of cellular structures, sort of like running your finger over a bumpy surface. They held onto their fat granule probe using optical tweezers, which are basically a nanoscale version of a tractor beam. In an optical tweezer, scientists create a laser beam with an electromagnetic field along its length. The field is strongest at the center of the beam, allowing tiny objects to be drawn to this point and held there.
Because the fat granules occur naturally, the cells don’t need to be prepared like they would for atomic force microscopy, which generally involves killing the cells. That’s a big deal because it means photonic force microscopy can be used to visualize processes inside living cells. The team has tracked these granules with a resolution of about 10 nanometers.
To get to this resolution, the researchers needed to see exactly where the fat globules were. For this they needed the quantum mechanical squeezed light because it provided greater clarity than would be possible with fuzzy classical light. Squeezed light relies on a quantum mechanical law known as the Heisenberg uncertainty principle. At the subatomic level, there are limits to the amount of knowledge we can have about particles. You might already know that Heisenberg showed that both the position and speed of a particle can’t be perfectly known at the same time. There is an equivalent relationship between the intensity of photons and their phase.
Light can be thought of as both a wave and a particle. The phase of a wave is the point in its cycle at which it arrives, whether at its peak, its trough, or somewhere in between. The fuzziness of classical light comes from the fact that the phases of its photons don’t all line up. Some are arriving at a detector while near the top of their wave, others while near the bottom. Squeezed light accepts more uncertainty in the light’s intensity in exchange for forcing the photons toward a similar phase. It’s kind of like letting all of the photons out from the starting gate at the same time.
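One standard way to write the trade-off mentioned above uses the two quadratures of the light field (this is the generic quantum-optics relation, not an expression taken from the paper):

\[ X_1 = \tfrac{1}{2}\left(a + a^{\dagger}\right), \qquad X_2 = \tfrac{1}{2i}\left(a - a^{\dagger}\right), \qquad \Delta X_1\,\Delta X_2 \ge \tfrac{1}{4}. \]

An ordinary laser (coherent) state has \( \Delta X_1 = \Delta X_2 = \tfrac{1}{2} \), sharing the noise equally; a squeezed state pushes the noise in one quadrature below \( \tfrac{1}{2} \) at the cost of extra noise in the other, which is what sharpens the position readout of the nano-particle.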
This squeezed beam allows the researchers to get a very good read on where their nano-particle is. Though the recent experiments have achieved resolutions of around 10 nanometers, Bowen thinks they can get down to a nanometer or less with better squeezing of the light.
Using this method, the team was able to follow their fat globule and measure the viscosity of cytoplasm inside of yeast cells. For now, they can only see how the nano-particles travel in one dimension. If they can track them in three dimensions, they could better map out particular cellular structures, such as actin filaments, or tiny pores that open and close on cell walls to allow nutrients to flow in and out.
Read more at Wired Science
Bears Use Wildlife Crossings to Find New Mates
As more and more roads cut across the territories of wild animals, wildlife crossings are being built to bridge these barriers. But there has been little evidence that animals actually use the crossings.
Now, a team of researchers at Montana State University has compared the genetics of grizzly bears and black bears at road crossings in the Canadian Rockies, finding the bears do indeed move across the Trans-Canada Highway, and breed with mates on the other side.
The study provides the first proof that wildlife crossings maintain genetic diversity, the researchers say.
"Roads connect human populations, but fragment wildlife populations," wrote the authors of the study, detailed today (Feb. 18) in the journal Proceedings of the Royal Society B.
Busy roads can lead to deaths or deter animals trying to cross the pathways. This prevents gene flow — the transfer of genes from one population to another — reducing genetic diversity and making it harder for the animals to adapt to a changing environment.
The effects will only worsen with climate change, the researchers added.
Wildlife biologist Michael Sawaya of Montana State University and his colleagues conducted a three-year study of grizzly (Ursus arctos) and black bears (Ursus americanus) at Banff National Park, Canada, to test how effectively wildlife crossing structures actually bridged bear populations.
The researchers set up barbed-wire hair traps on highway underpasses and overpasses, and sequenced the DNA from fur left behind by passing bears. The scientists compared genetic data from the wildlife crossings with data from bear populations in surrounding areas.
Results showed a genetic discontinuity — a division between two distinct populations — at the Trans-Canada Highway for grizzly bears, but not for black bears. Genetic tests revealed that 47 percent of black bears and 27 percent of grizzly bears that used the crossings (including males and females) bred successfully.
The findings are good news for bears and other animals whose territories are increasingly divided by highways. "It is clear that male and female individuals using crossing structures are successfully migrating, breeding and moving genes across the roadway," the researchers wrote.
Read more at Discovery News
Shark Attack Stats: Why So Unpredictable?
A number of factors are affecting how many shark attacks and fatalities occur each year, and most of them have little to do with sharks and more to do with humans, according to shark experts.
The reasons help to explain why shark attack statistics fluctuate so much from year to year. A report released by the University of Florida’s International Shark Attack File earlier this week, for example, found that there were 10 human fatalities worldwide due to shark attacks in 2013, higher than the 10-year average for 2003-2012.
The U.S., on the other hand, had only one fatality, in Hawaii, and 47 shark attacks nationwide. This was lower than the 2012 total of 54.
“Shark attack rates, in general, have been rising every decade since the 1900s and yet there has been a sharp decline in the shark population,” George Burgess, curator of the Shark Attack File, told Discovery News. “Sharks are highly migratory animals that act predictably, so other forces are at work.”
Social and economic factors are two big drivers. A lousy economy usually helps keep shark attacks down.
“If the economy’s bad,” Burgess explained, “people won’t have money for vacations at the beach, and they won’t be as likely to gas up their car to go surfing.”
Our population continues to rise, however, as does our mobility.
“The more off the beaten path we go, the more likely shark attacks will occur,” he said.
Globalization, tourism and population growth worldwide have all led to shark attacks in historically low-contact areas. These include places like Reunion Island, Papua New Guinea, Madagascar, the Solomon Islands and the small island of Diego Garcia in the Indian Ocean. The latter saw its first recorded shark attack in 2013.
Cage diving, where tour organizers attract sharks with bait, also can increase the chances for attack.
“We have previously analyzed data to see which sharks are hanging around shark tours with cage divers on Oahu, and one of the things we noticed was that you’d get a spike in how many tiger sharks are seen in October, which would match our predicted model that you’re having an influx of big, pregnant females coming from the northwestern Hawaiian Islands,” said Yannis Papastamatiou, a marine biologist with the Florida Museum of Natural History.
Tiger sharks are one of the big three sharks of concern to experts because they tend to be large, with huge serrated teeth that can lead to serious injuries and fatalities. The other two in the big three are great whites and bull sharks. Most attacks, however, are by smaller whitetip and blacktip sharks, particularly in waters off of Florida.
Ocean current changes and climate greatly influence shark attack stats.
“If there’s a hurricane, obviously most people aren’t going to be flocking to the beach,” Burgess said.
Read more at Discovery News
Electron Mass Measured to Record-Breaking Precision
Scientists in Germany said Wednesday they had made the most precise measurement yet of the mass of the electron, one of the building blocks of matter.
The feat should provide a useful tool for scientists testing the "Standard Model" of physics -- the most widely-accepted theory of the particles and forces that comprise the Universe, they said.
Electrons are particles with a negative electrical charge that orbit the nucleus of an atom.
They were discovered in 1897 by Britain's Joseph John ("J.J.") Thomson, who dubbed them "corpuscles" -- a name later changed to "electron" because of its connection with electrical charge.
A team led by Sven Sturm of the Max Planck Institute for Nuclear Physics in Heidelberg "weighed" electrons using a device called a Penning trap, which stores charged particles in a combination of magnetic and electrical fields.
They measured a single electron that was bound to a carbon nucleus whose mass was already known.
According to the calculation, which accounts for statistical and experimental uncertainties, the electron has a mass of 0.000548579909067 atomic mass units, the standard unit for particle masses.
The estimate is a 13-fold improvement in accuracy on previous attempts at determining the electron's mass.
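For a sense of scale, that figure converts to kilograms with a single multiplication; the sketch below uses the standard conversion factor for the atomic mass unit, which is quoted for illustration and is not part of the Nature study itself:

    # Convert the measured electron mass from atomic mass units (u) to kilograms.
    # The conversion factor is the standard value for 1 u, included here only
    # for illustration.
    electron_mass_u = 0.000548579909067   # measured value, in atomic mass units
    kg_per_u = 1.66053906660e-27          # kilograms per atomic mass unit

    electron_mass_kg = electron_mass_u * kg_per_u
    print(f"Electron mass = {electron_mass_kg:.4e} kg")  # roughly 9.109e-31 kg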
"This result lays the foundation for future fundamental physics experiments and precision tests of the Standard Model," according to the study published in the journal Nature.
Taken from Discovery News
Feb 18, 2014
When a black hole shreds a star, a bright flare tells the story
Enrico Ramirez-Ruiz uses computer simulations to explore the universe's most violent events, so when the first detailed observations of a star being ripped apart by a black hole were reported in 2012 (Gezari et al., Nature), he was eager to compare the data with his simulations. He was also highly skeptical of one of the published conclusions: that the disrupted star was a rare helium star.
"I was sure it was a normal hydrogen star and we were just not understanding what's going on," said Ramirez-Ruiz, a professor of astronomy and astrophysics at the University of California, Santa Cruz.
In a paper accepted for publication in the Astrophysical Journal and available online at arXiv.org, Ramirez-Ruiz and his students explain what happens during the disruption of a normal sun-like star by a supermassive black hole, and they show why observers might fail to see evidence of the hydrogen in the star. First author and UCSC graduate student James Guillochon (now an Einstein Fellow at Harvard University) and undergraduate Haik Manukian worked with Ramirez-Ruiz to run a series of detailed computer simulations of encounters between stars and black holes.
Supermassive black holes are thought to lurk at the centers of most galaxies. Some (known as active galactic nuclei) are very bright, emitting intense radiation from superheated gas falling into the black hole. But the central black holes of most galaxies in the local universe have run out of gas and are quiescent. Only when an unlucky star approaches too close and gets shredded by the black hole's powerful tidal forces does the galactic center emit a bright flare of light. Astronomers call this a "tidal disruption event" (TDE), and in a typical galaxy it happens about once every 10,000 years.
"That means you have to survey the nearest 10,000 galaxies in order to see one event, so for many years this was very much a theoretical field," Ramirez-Ruiz said.
Then came Pan-STARRS (Panoramic Survey Telescope and Rapid Response System), which is surveying the sky on a continual basis and has begun detecting and recording observations of these very rare events. The first one, known as PS1-10jh, was detected in 2010 and published in 2012. Astronomers recorded the light curve (the rise and fall in brightness over time) and took a spectrum at peak brightness to study the different wavelengths of light.
Something missing
The spectrum of an active galactic nucleus (AGN) shows characteristic "emission lines" at specific wavelengths corresponding to the most common elements such as hydrogen and helium. These emission lines appear as spikes of increased intensity in a continuous spectrum. The shocking thing about PS1-10jh was the absence of a hydrogen line in the spectrum.
"It's very unusual to have seen helium and not hydrogen. Stars are mainly made of hydrogen, and stars made only of helium are extremely rare, so this was a huge issue," Guillochon said. "People said maybe it was a giant star with a helium core and a hydrogen envelope, and the black hole removed the hydrogen first and then the helium core in a second pass."
Guillochon began to explore the possibilities using computer simulations. The results provide a new understanding of the origin of the emission lines in a tidal disruption event. They show that the flare of light from a tidal disruption contains information about the type of star and the size of the black hole. And they show that PS1-10jh involved the most common type of star (a main-sequence star much like our sun) and a relatively small supermassive black hole.
When a star gets disrupted by a supermassive black hole, the tidal forces first stretch the star into an elongated blob before shredding it. In a full disruption, about half of the star's mass gets ejected and the other half remains bound in elliptical trajectories, eventually forming an "accretion disk" of material spiraling into the black hole.
Previously, researchers had thought that the unbound material formed a wide "fan," and that this fan of ejected material was the main source of emission lines. But in Guillochon's simulations, the unbound material is confined by self-gravity into a narrow band that doesn't have enough surface area to be the source of the emission lines. Instead, the emission lines must come from the accretion disk. The simulations show how this disk forms over time, starting with the inner part and growing outward.
Birth of an AGN
According to Ramirez-Ruiz, it is like watching the birth of an active galactic nucleus. The emission lines in a TDE correspond to the well-studied "broad line region" of AGNs. In an AGN, the emission lines of different elements are produced at different distances from the central black hole. Helium lines are produced deep in, while hydrogen lines are produced farther out where the intensity of ionizing radiation is slightly lower. When the spectrum of PS1-10jh was taken, the accretion disk simply had not grown big enough to reach the distance where hydrogen starts to produce an emission line.
"The hydrogen is there, you just don't see it because it is so highly ionized. The way to understand the spectrum of a TDE is to think of it as an AGN with a truncated disk, because the disk is still growing," Guillochon said. "In an AGN, the emission is steady because the disk is established. In our model of tidal disruption, you are seeing the broad line region being built."
More recently, another TDE was detected (PS1-11af), and its spectrum had neither hydrogen nor helium emission lines. "Our model tells us that this would have to be a smaller black hole, and when the spectrum was taken the disk was so small you would not expect to see either hydrogen or helium," Guillochon said.
The new paper also shows how the light curve of a TDE can yield information about the masses of both the star and the black hole. The light curves derived from the simulations match the observed light curves remarkably well. "With this simple model, we get a perfect fit to the data, and we're able to explain the light curve in multiple color bands," Ramirez-Ruiz said. "The type of star and the size of the black hole are imprinted in the light curve."
According to Ramirez-Ruiz, Pan-STARRS is expected to detect dozens of tidal disruptions, and the planned Large Synoptic Survey Telescope (LSST) could detect thousands per year. This means that astronomers will be able to study quiescent black holes at the centers of local galaxies that would otherwise be difficult if not impossible to detect. If it is not emitting light, a supermassive black hole reveals its presence only through its effects on the motions of stars, and the smaller the black hole, the harder it is to see those effects.
Read more at Science Daily
"I was sure it was a normal hydrogen star and we were just not understanding what's going on," said Ramirez-Ruiz, a professor of astronomy and astrophysics at the University of California, Santa Cruz.
In a paper accepted for publication in the Astrophysical Journal and available online at arXiv.org, Ramirez-Ruiz and his students explain what happens during the disruption of a normal sun-like star by a supermassive black hole, and they show why observers might fail to see evidence of the hydrogen in the star. First author and UCSC graduate student James Guillochon (now an Einstein Fellow at Harvard University) and undergraduate Haik Manukian worked with Ramirez-Ruiz to run a series of detailed computer simulations of encounters between stars and black holes.
Supermassive black holes are thought to lurk at the centers of most galaxies. Some (known as active galactic nuclei) are very bright, emitting intense radiation from superheated gas falling into the black hole. But the central black holes of most galaxies in the local universe have run out of gas and are quiescent. Only when an unlucky star approaches too close and gets shredded by the black hole's powerful tidal forces does the galactic center emit a bright flare of light. Astronomers call this a "tidal disruption event" (TDE), and in a typical galaxy it happens about once every 10,000 years.
"That means you have to survey the nearest 10,000 galaxies in order to see one event, so for many years this was very much a theoretical field," Ramirez-Ruiz said.
Then came Pan-STARRS (Panoramic Survey Telescope and Rapid Response System), which is surveying the sky on a continual basis and has begun detecting and recording observations of these very rare events. The first one, known as PS1-10jh, was detected in 2010 and published in 2012. Astronomers recorded the light curve (the rise and fall in brightness over time) and took a spectrum at peak brightness to study the different wavelengths of light.
Something missing
The spectrum of an active galactic nucleus (AGN) shows characteristic "emission lines" at specific wavelengths corresponding to the most common elements such as hydrogen and helium. These emission lines appear as spikes of increased intensity in a continuous spectrum. The shocking thing about PS1-10jh was the absence of a hydrogen line in the spectrum.
"It's very unusual to have seen helium and not hydrogen. Stars are mainly made of hydrogen, and stars made only of helium are extremely rare, so this was a huge issue," Guillochon said. "People said maybe it was a giant star with a helium core and a hydrogen envelope, and the black hole removed the hydrogen first and then the helium core in a second pass."
Guillochon began to explore the possibilities using computer simulations. The results provide a new understanding of the origin of the emission lines in a tidal disruption event. They show that the flare of light from a tidal disruption contains information about the type of star and the size of the black hole. And they show that PS1-10jh involved the most common type of star (a main-sequence star much like our sun) and a relatively small supermassive black hole.
When a star gets disrupted by a supermassive black hole, the tidal forces first stretch the star into an elongated blob before shredding it. In a full disruption, about half of the star's mass gets ejected and the other half remains bound in elliptical trajectories, eventually forming an "accretion disk" of material spiraling into the black hole.
Previously, researchers had thought that the unbound material formed a wide "fan," and that this fan of ejected material was the main source of emission lines. But in Guillochon's simulations, the unbound material is confined by self-gravity into a narrow band that doesn't have enough surface area to be the source of the emission lines. Instead, the emission lines must come from the accretion disk. The simulations show how this disk forms over time, starting with the inner part and growing outward.
Birth of an AGN
According to Ramirez-Ruiz, it is like watching the birth of an active galactic nucleus. The emission lines in a TDE correspond to the well-studied "broad line region" of AGNs. In an AGN, the emission lines of different elements are produced at different distances from the central black hole. Helium lines are produced deep in, while hydrogen lines are produced farther out where the intensity of ionizing radiation is slightly lower. When the spectrum of PS1-10jh was taken, the accretion disk simply had not grown big enough to reach the distance where hydrogen starts to produce an emission line.
"The hydrogen is there, you just don't see it because it is so highly ionized. The way to understand the spectrum of a TDE is to think of it as an AGN with a truncated disk, because the disk is still growing," Guillochon said. "In an AGN, the emission is steady because the disk is established. In our model of tidal disruption, you are seeing the broad line region being built."
More recently, another TDE was detected (PS1-11af), and its spectrum had neither hydrogen nor helium emission lines. "Our model tells us that this would have to be a smaller black hole, and when the spectrum was taken the disk was so small you would not expect to see either hydrogen or helium," Guillochon said.
The new paper also shows how the light curve of a TDE can yield information about the masses of both the star and the black hole. The light curves derived from the simulations match the observed light curves remarkably well. "With this simple model, we get a perfect fit to the data, and we're able to explain the light curve in multiple color bands," Ramirez-Ruiz said. "The type of star and the size of the black hole are imprinted in the light curve."
According to Ramirez-Ruiz, Pan-STARRS is expected to detect dozens of tidal disruptions, and the planned Large Synoptic Survey Telescope (LSST) could detect thousands per year. This means that astronomers will be able to study quiescent black holes at the centers of local galaxies that would otherwise be difficult if not impossible to detect. If it is not emitting light, a supermassive black hole reveals its presence only through its effects on the motions of stars, and the smaller the black hole, the harder it is to see those effects.
Read more at Science Daily
Ancient Rural Town Uncovered in Israel
On the outskirts of Jerusalem, archaeologists have discovered the remains of a 2,300-year-old rural village that dates back to the Second Temple period, the Israel Antiquities Authority (IAA) announced.
Trenches covering some 8,000 square feet (750 square meters) revealed narrow alleys and a few single-family stone houses, each containing several rooms and an open courtyard. Among the ruins, archaeologists also found dozens of coins, cooking pots, milling tools and jars for storing oil and wine.
"The rooms generally served as residential and storage rooms, while domestic tasks were carried out in the courtyards," Irina Zilberbod, the excavation director for the IAA, explained in a statement.
Archaeologists don't know what the town would have been called in ancient times, but it sits near the legendary Burma Road, a route that allowed supplies and food to flow into Jerusalem during the 1948 Arab-Israeli War. The rural village sat on a ridge with a clear view of the surrounding countryside, and people inhabiting the region during the Second Temple period likely cultivated orchards and vineyards to make a living, IAA officials said.
The Second Temple period (538 B.C. to A.D. 70) refers to the lifetime of the Jewish temple that was built on Jerusalem's Temple Mount to replace the First Temple after it was destroyed. Archaeological evidence suggests this provincial village hit its peak during the third century B.C., when Judea was under the control of the Seleucid monarchy after the breakup of Alexander the Great's empire. Residents seem to have abandoned the town at the end of the Hasmonean dynasty — when Herod the Great came into power in 37 B.C. — perhaps to chase better job opportunities in the city amid an economic downturn.
"The phenomenon of villages and farms being abandoned at the end of the Hasmonean dynasty or the beginning of Herod the Great's succeeding rule is one that we are familiar with from many rural sites in Judea," archaeologist Yuval Baruch explained in a statement. "And it may be related to Herod's massive building projects in Jerusalem, particularly the construction of the Temple Mount, and the mass migration of villagers to the capital to work on these projects."
Read more at Discovery News
Serpent-Handling Pastor Killed by Snake: Where Was God?
A snake-handling preacher who survived nine previous bites succumbed to his final, fatal bite in Kentucky over the weekend.
As CNN reported, Jamie Coots, a Pentecostal believer who stars in a reality show, "Snake Salvation," died Saturday evening. CNN said Coots believed that a passage in the Bible suggests poisonous snakebites will not harm believers as long as they are anointed by God.
Evangelical preachers like Coots not only handle venomous snakes but also engage in other dangerous activities such as drinking poison. They base their faith on Biblical verses in Mark 16: "And these signs will follow those who believe: in My name they will cast out demons; they will speak with new tongues; they will take up serpents; and if they drink anything deadly, it will by no means hurt them; they will lay their hands on the sick, and they will recover."
Pastor Coots and his followers are Biblical literalists, believing that each and every word in the Bible is the true and inerrant word of God. This is a position that Bill Nye "The Science Guy" took creationist Ken Ham to task about during their debate last month, when Nye described the Bible as "verses translated into English over 30 centuries."
Even assuming that God wrote the Bible through men, all that copying and translating, Nye noted, leaves many opportunities for errors to creep into the verses. Thus the Mark 16 reference to snakes may simply be a metaphor, part of a well-known tradition of depicting Satan or evil in the form of serpents. Many evangelicals, however, take it literally.
The premise behind snake handling is for believers to demonstrate their faith, both to themselves and as an inspiration to others, by doing something dangerous. It happens to involve serpents because of a Bible passage, but in theory the same ritual role could be fulfilled by drunk bullfighting or playing Russian roulette.
Seeking medical attention for a snake bite is seen as a lack of faith in God's ability to heal, a belief that can also be found in other religions, including Christian Science and Scientology. In many cases children have even died because their devout parents refused to take them to a doctor.
Coots, though well-known because of his high-profile status on a popular television show, is far from alone in this practice. Though not common (and in fact illegal in many places), snake handling at evangelical events occurs on a regular basis. It's not clear how many people have died from it -- since official numbers are not kept and only high-profile deaths such as Coots's are likely to make the news -- but the victims likely number in the hundreds.
The No-Lose Psychology of Salvation
Many wonder what effect Coots's death will have on his followers. The most likely answer, surprisingly, is none.
Their religious belief is what in logic is called non-falsifiable; that is, it can't be proven wrong or false. No matter the outcome of snake handling, it's God's will: if he gets bitten and dies, it's fine because God called him home and it was his time to pass, and if he doesn't get bitten (or survives the bite) it's because God protected him. It's framed as a win-win situation, so no matter the outcome it reinforces their religious beliefs.
In fact it would be more surprising if Coots's followers' faith was shaken: After all, the whole point of serpent handling is about affirmation of faith; for them to lose faith because of what happened to him would be the ultimate betrayal.
It's not clear whether Jamie Coots's son, Little Cody, will keep up the snake-handling tradition that killed his father, but it seems likely. In 2012 another well-known Pentecostal serpent handler, Mack Wolford, was killed in his West Virginia church after being fatally bitten by one of his snakes. Wolford's father was also a snake handler, and he, also, was killed by a snake in 1983.
Read more at Discovery News
Kepler's Laws Govern Awesome Comet Mission
I am sure that you, like me, followed the 'Wake Up Rosetta' campaign with great interest, as the tiny European explorer was awoken from its 31-month slumber. It is great to see that after all that time the systems have come back online and the craft is fully functional, ready for its rendezvous with Comet 67P/Churyumov-Gerasimenko this November. I only wish my car was so reliable.
As I followed the updates, my mind drifted off to how wonderful it is that we can actually send man-made objects many millions of kilometers to tiny pieces of rock and actually arrive in the right place at the right time -- particularly as the targets are often moving at many thousands of kilometers per hour themselves. Navigating around the solar system is a tricky business, but it was made a whole lot easier with the 'discovery' of three laws that govern planetary motion.
It was back in the 1600s that Johannes Kepler published his three laws of planetary motion, and they are still as relevant today as they were over 400 years ago. Not only do they govern the motion of the planets around the sun, but they also govern the motion of moons around planets and even exoplanets around distant stars. The laws have been invaluable not only in understanding the movements of the planets in our own solar system but also in learning about families of new planets in the depths of our galaxy.
The first of the laws states that all planets in our solar system move in elliptical orbits with the sun at one of the two focal points of the ellipse. That is perhaps not surprising, as many of us have grown up knowing that the Earth's orbit, and indeed the orbits of all the planets, are elliptical.
An ellipse is essentially a squashed circle, and you can imagine how it might have two points of focus if you first visualize a circle with a point at its center. If you were to squash the circle from top and bottom, the central dot would split in two and both points would move outward. In the case of the planets in the solar system, the sun is found at one of these points, and it is that point that they all appear to orbit.
Kepler's second law states that a line joining the sun to a planet, known as the radius vector, sweeps out equal areas of space over equal time intervals. Put another way, planets move faster when they are closer to the sun and slower when further away. Kepler's third and final law, published ten years after the first two, describes the mathematical relationship between the time it takes for a planet to complete an orbit and its distance from the sun. In the words of Kepler, "...the square of the orbital period of a planet is directly proportional to the cube of its mean distance from the sun." This means that we can measure how long an object takes to orbit the sun from simple observation, and by knowing that, we can calculate its average distance with some accuracy.
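To see how handy the third law is in practice, here is a small worked example in solar-system units, where the law reduces to T squared equals a cubed (the distances below are rounded textbook values, not figures from the article):

    # Kepler's third law in solar-system units: with the orbital period T in years
    # and the mean distance a in astronomical units, T**2 = a**3 for anything
    # orbiting the sun.
    def orbital_period_years(mean_distance_au):
        return mean_distance_au ** 1.5

    for name, a in [("Mercury", 0.39), ("Mars", 1.52), ("Jupiter", 5.20)]:
        print(f"{name}: a = {a} AU -> T = {orbital_period_years(a):.2f} years")

Running this gives roughly 0.24 years for Mercury, 1.87 for Mars and 11.9 for Jupiter, which is why a single measurement of an orbital period pins down an object's average distance.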
Read more at Discovery News
Feb 17, 2014
Why does the brain remember dreams?
The reason for dreaming is still a mystery for the researchers who study the difference between "high dream recallers," who recall dreams regularly, and "low dream recallers," who recall dreams rarely. In January 2013 (in work published in the journal Cerebral Cortex), the team led by Perrine Ruby, an Inserm researcher at the Lyon Neuroscience Research Center, made two observations: "high dream recallers" have twice as much wakefulness during sleep as "low dream recallers," and their brains are more reactive to auditory stimuli during both sleep and wakefulness. This increased brain reactivity may promote awakenings during the night, and may thus facilitate memorization of dreams during brief periods of wakefulness.
In this new study, the research team sought to identify which areas of the brain differentiate high and low dream recallers. They used positron emission tomography (PET) to measure the spontaneous brain activity of 41 volunteers during wakefulness and sleep. The volunteers were classified into two groups: 21 "high dream recallers," who recalled dreams 5.2 mornings per week on average, and 20 "low dream recallers," who reported two dreams per month on average. High dream recallers, both while awake and while asleep, showed stronger spontaneous brain activity in the medial prefrontal cortex (mPFC) and in the temporo-parietal junction (TPJ), an area of the brain involved in orienting attention toward external stimuli.
"This may explain why high dream recallers are more reactive to environmental stimuli, awaken more during sleep, and thus better encode dreams in memory than low dream recallers. Indeed the sleeping brain is not capable of memorizing new information; it needs to awaken to be able to do that," explains Perrine Ruby, Inserm Research Fellow.
The South African neuropsychologist Mark Solms had observed in earlier studies that lesions in these two brain areas led to a cessation of dream recall. The originality of the French team's results is to show brain activity differences between high and low dream recallers during sleep and also during wakefulness.
"Our results suggest that high and low dream recallers differ in dream memorization, but do not exclude that they also differ in dream production. Indeed, it is possible that high dream recallers produce a larger amount of dreaming than low dream recallers" concludes the research team.
Taken from Science Daily
Study on flu evolution may change textbooks, history books
A new study reconstructing the evolutionary tree of flu viruses challenges conventional wisdom and solves some of the mysteries surrounding flu outbreaks of historical significance.
The study, published in the journal Nature, provides the most comprehensive analysis to date of the evolutionary relationships of influenza virus across different host species over time. In addition to dissecting how the virus evolves at different rates in different host species, the study challenges several tenets of conventional wisdom -- for example, the notion that the virus moves largely unidirectionally from wild birds to domestic birds rather than with spillover in the other direction. It also helps resolve the origin of the virus that caused the unprecedentedly severe influenza pandemic of 1918.
The new research is likely to change how scientists and health experts look at the history of influenza virus, how it has changed genetically over time and how it has jumped between different host species. The findings may have implications ranging from the assessment of health risks for populations to developing vaccines.
"We now have a really clear family tree of theses viruses in all those hosts -- including birds, humans, horses, pigs -- and once you have that, it changes the picture of how this virus evolved," said Michael Worobey, a professor of ecology and evolutionary biology at the University of Arizona, who co-led the study with Andrew Rambaut, a professor at the Institute of Evolutionary Biology at the University of Edinburgh. "The approach we developed works much better at resolving the true evolution and history than anything that has previously been used."
Worobey explained that "if you don't account for the fact that the virus evolves at different rates in each host species, you can get nonsense -- nonsensical results about when and from where pandemic viruses emerged."
"Once you resolve the evolutionary trees for these viruses correctly, everything snaps into place and makes much more sense," Worobey said, adding that the study originated at his kitchen table.
"I had a bunch of those evolutionary trees printed out on paper in front of me and started measuring the lengths of the branches with my daughter's plastic ruler that happened to be on the table. Just like branches on a real tree, you can see that the branches on the evolutionary tree grow at different rates in humans versus horses versus birds. And I had a glimmer of an idea that this would be important for our public health inferences about where these viruses come from and how they evolve."
"My longtime collaborator Andrew Rambaut implemented in the computer what I had been doing with a plastic ruler. We developed software that allows the clock to tick at different rates in different host species. Once we had that, it produces these very clear and clean results."
The team analyzed a dataset with more than 80,000 gene sequences representing the global diversity of the influenza A virus and analyzed them with their newly developed approach. The influenza A virus is subdivided into 17 so-called HA subtypes -- H1 through H17 -- and 10 subtypes of NA, N1-N10. These mix and match, for example H1N1, H7N9, with the greatest diversity seen in birds.
Using the new family tree of the flu virus as a map showed which viruses moved into which host species, and when. It revealed that, for several of its eight genomic segments, avian influenza virus is not nearly as ancient as often assumed.
"What we're finding is that the avian virus has an extremely shallow history in most genes, not much older than the invention of the telephone," Worobey explained.
The research team, which included UA graduate student Guan-Zhu Han as well as Rambaut, who is also affiliated with the U.S. National Institutes of Health, found a strong signature in the data suggesting that something revolutionary happened to avian influenza virus, with the majority of its genetic diversity being replaced by some new variant in an extremely synchronous selective sweep.
Worobey said the timing is provocative because of the correlation of that sudden shift in the flu virus' evolution with historical events in the late nineteenth century.
"In the 1870s, an immense horse flu outbreak swept across North America," Worobey said, "City by city and town by town, horses got sick and perhaps five percent of them died. Half of Boston burned down during the outbreak, because there were no horses to pull the pump wagons. Out here in the West, the U.S. Cavalry was fighting the Apaches on foot because all the horses were sick. This happened at a time when horsepower was actual horse power. The horse flu outbreak pulled the rug out from under the economy."
According to Worobey, the newly generated evolutionary trees show a global replacement of the genes in the avian flu virus coinciding closely with the horse flu outbreak, which the analyses also reveal to be the closest relative to the avian virus.
"Interestingly, a previous research paper analyzing old newspaper records reported that in the days following the horse flu outbreak, there were repeated outbreaks described at the time as influenza killing chickens and other domestic birds," Worobey said. "That's another unexpected link in the history, and the there is a possibility that the two might be connected, given what we see in our trees."
He added that the evolutionary results didn't allow for a definitive determination of whether the virus jumped from horses to birds or vice versa, but a close relationship between the two virus species is clearly there.
With regard to humans, the research sheds light on a longstanding mystery. Ever since the influenza pandemic of 1918, it has not been possible to narrow down even to a hemisphere the geographic origins of any of the genes of the pandemic virus.
Read more at Science Daily
OCD Genes Found In Dogs
Incessant tail chasing, repetitive shadow stalking, relentless paw chewing for hours and hours every day: Dogs can suffer from obsessive compulsive disorder, too. And a new study helps explain why.
Researchers have zeroed in on four genes that are connected to OCD in dogs. If the same genes turn out to be malfunctioning in the human version of the disorder -- and there are clues that they do -- this line of research may eventually help scientists develop better drugs for a human disease that is notoriously difficult to treat.
"This is really exciting because psychiatric diseases tend to be very heritable, but finding genes associated with psychiatric diseases in humans has been really difficult," said Elinor Karlsson, a computational biologist at the Broad Institute at Harvard University.
The antidepressant medications that are currently available for OCD only help about 50 percent of people and dogs that use them, she added, and the medicines can cause unwanted side effects.
"The question is: can we use genetics to pinpoint what the brain pathways are that are going wrong in these diseases? And can we design drugs that target those pathways in ways that are much more specific than we are doing now?," she added. "Anything we can use to pick apart exactly what is going wrong so we can treat these diseases is going to be a huge benefit."
Instead of repetitive hand washing or hoarding, dogs with OCD may chew blankets or chase their tails way more than normal. Owners often say they can't distract their pets from their obsessive tasks.
A few breeds of dogs exhibit particularly high rates of OCD, including Doberman Pinschers. And because dogs are genetically simpler than people, Karlsson and colleagues turned to these dogs in their search for OCD-related genes.
The team began by sequencing and comparing a large section of the genomes of 90 Dobermans that had OCD with 60 that didn't.
They searched for regions that looked different between sick and healthy dogs. They also searched for genes that looked the same in all of the Dobermans but that differed between that breed and others.
When they had zeroed in on several suspicious areas of the genome, the researchers compared the suspect Doberman genes with genes from a sample of bull terriers, Shetland sheepdogs and German shepherds -- three other breeds that also suffer higher-than-usual rates of OCD.
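The statistical idea behind this kind of case-control comparison can be sketched in a few lines. The counts below are invented for illustration, and this is a simplified single-variant test, not the authors' actual analysis pipeline:

    # Toy case-control test at one genetic variant: do affected dogs carry the
    # variant more often than unaffected dogs? All counts are hypothetical.
    from scipy.stats import fisher_exact

    cases = [62, 28]      # [carriers, non-carriers] among 90 Dobermans with OCD
    controls = [21, 39]   # [carriers, non-carriers] among 60 unaffected Dobermans

    odds_ratio, p_value = fisher_exact([cases, controls])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")

In a real genome-wide scan, a test like this is repeated across huge numbers of variants, which is why such studies demand stringent significance thresholds.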
Those analyses pinpointed four genes that have unusually high rates of mutations in dogs with obsessive and compulsive behaviors, the team reported Sunday in the journal Genome Biology. The researchers also found OCD-linked mutations in a tiny piece of the genome, more than a million bases away from any gene, that likely plays a role in regulating the genes that contribute to the disease.
The genes implicated in the new study play roles in pathways that have also been connected to human OCD, Karlsson said, suggesting that dogs could provide a helpful model system for developing better treatments for people.
Read more at Discovery News
Under Active Volcanoes, Cold Magma Waits for Heat
Strike that iconic image of a tall, snow-capped volcano sitting atop a liquid pool of hot, molten magma. It turns out that many volcanoes prefer cold storage, a new study suggests.
The findings come from a detailed study of crystals in lavas at Oregon's Mount Hood, from two different eruptions 220 years ago and about 1,500 years ago. These crystals formed inside the volcano's magma chamber, and provide a chronology and a temperature history.
The crystals told a fairy tale story -- they were trapped beneath the volcano, at surprisingly cold temperatures, for as long as 100,000 years. No boiling super-villain's lair for these tiny pieces of plagioclase. Instead, the magma was so cold it was like a jar of old honey from the fridge -- sticky and full of crystals. That means, most of the time, it was too sluggish to erupt. The researchers think that it took a hot kiss of fresh magma, rising from deep in Earth, to reheat the molten rock until it was thin enough to blast into the sky.
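The "chronology and temperature history" comes from diffusion: compositional zoning in a crystal smears out quickly in hot magma and extremely slowly in cold, crystal-rich magma. The sketch below works through that scaling with an Arrhenius diffusivity; D0, Ea and the zone width are assumed, order-of-magnitude values rather than the study's measured parameters, so the point is the steep temperature dependence, not the absolute numbers.

```python
# Why cold storage preserves the crystal record: the time for compositional
# zoning to smear out scales as t ~ L^2 / D(T), and the diffusivity D drops
# steeply (Arrhenius law) as the magma cools. D0, Ea and the zone width are
# assumed, order-of-magnitude values, not the study's measured parameters.
import math

R = 8.314      # gas constant, J/(mol K)
D0 = 1e-6      # Arrhenius pre-exponential factor, m^2/s (assumed)
Ea = 250e3     # activation energy, J/mol (assumed)
L = 30e-6      # width of a compositional zone in the crystal, m (assumed)

def zoning_survival_years(temp_kelvin):
    D = D0 * math.exp(-Ea / (R * temp_kelvin))   # diffusivity at this temperature
    return (L ** 2 / D) / (3600 * 24 * 365.25)   # t ~ L^2 / D, converted to years

for T in (1000.0, 1100.0, 1200.0):
    print(f"{T:.0f} K: zoning survives roughly {zoning_survival_years(T):,.0f} years")
```

With these assumed numbers, a couple hundred degrees of cooling stretches the survival time of the zoning by more than a hundredfold, which is why a long record can sit intact in cold, sticky magma.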
"This tells us that the standard state of magma for this system is that it can't be erupted," said Kari Cooper, a geochemist at the University of California, Davis. "That means that having a magma that can erupt is a special condition. Our expectation is that there's a lot of volcanoes that behave this way."
The findings were published on Sunday in the journal Nature.
The results suggest that monitoring volcanoes for liquid magma could warn of coming eruptions. Not all kinds of volcanoes behave like Mount Hood -- Hawaii, for instance, is built differently, atop a giant hot spot -- but most of the world's most active volcanoes are in similar settings.
"If you can see a body of magma that has a high amount of liquid, perhaps this magma is getting ready to erupt or at least has some potential to erupt," said study co-author Adam Kent, a geologist at Oregon State University. "It wouldn't be a slam-dunk guarantee."
The cut-off for eruptible magma is about 50 percent crystals, the researchers said. With more crystals than that, the magma is too thick to squeeze through the fractures leading to the surface.
In the cold zone
Mount Hood is a subduction zone volcano: it sits above a collision where one of Earth's tectonic plates slides beneath another and down into the mantle, the hotter layer below Earth's crust. Fluids released from the descending plate melt the rock above it, and that molten rock rises toward the surface, eventually feeding volcanoes.
Looking at the "Ring of Fire" around the Pacific Ocean reveals the link between subduction zones and volcanoes. Inland of each subduction zone lies a chain of spouting volcanoes called a volcanic arc, such as Oregon's Cascades, Alaska's Aleutian Islands and Indonesia's 130 active volcanoes.
Read more at Discovery News
Feb 16, 2014
Jet Stream Shift Could Mean Harsher Winters
A warmer Arctic could permanently affect the pattern of the high-altitude polar jet stream, resulting in longer and colder winters over North America and northern Europe, U.S. scientists say.
The jet stream, a ribbon of high-altitude, high-speed wind in northern latitudes that blows from west to east, is formed when the cold Arctic air clashes with warmer air from further south.
The greater the difference in temperature, the faster the jet stream moves.
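That link between temperature contrast and wind speed can be made concrete with the thermal-wind relation, which ties the west-to-east wind gained with height to the north-south temperature gradient. A back-of-the-envelope sketch with illustrative round numbers, not data from the work discussed here:

```python
# Back-of-the-envelope thermal-wind estimate: the west-to-east wind gained
# between the surface and jet level scales with the north-south temperature
# contrast. All values are illustrative round numbers, not data from the
# studies discussed here.
g = 9.81        # gravity, m/s^2
f = 1.0e-4      # Coriolis parameter at mid-latitudes, 1/s
T_mean = 260.0  # mean temperature of the air column, K
H = 1.0e4       # depth of the column, surface to jet level, m
L = 3.0e6       # north-south distance over which the contrast acts, m

def jet_speed(delta_T):
    """Thermal wind scaling: u ~ (g / (f * T_mean)) * (delta_T / L) * H."""
    return (g / (f * T_mean)) * (delta_T / L) * H

for dT in (30.0, 20.0):   # pole-to-midlatitude temperature contrast, K
    print(f"contrast {dT:.0f} K -> jet speed ~ {jet_speed(dT):.0f} m/s")
```

In this linear scaling, shrinking the Arctic-to-midlatitude contrast by a third knocks roughly a third off the jet speed, which is the weakening Francis describes.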
According to Jennifer Francis, a climate expert at Rutgers University, the Arctic air has warmed in recent years as a result of melting polar ice caps, meaning there is now less of a difference in temperatures when it hits air from lower latitudes.
"The jet stream is a very fast-moving river of air over our head," she said Saturday at a meeting of the American Association for the Advancement of Science.
"But over the past two decades the jet stream has weakened. This is something we can measure," she said.
As a result, instead of circling the earth in the far north, the jet stream has begun to meander, like a river heading off course.
This has brought chilly Arctic weather further south than normal, and warmer temperatures up north. Perhaps most disturbingly, it remains in place for longer periods of time.
The United States is currently enduring an especially bitter winter, with the midwestern and southern US states experiencing unusually low temperatures.
In contrast, far northern regions like Alaska are going through an unusually warm winter this year.
This suggests "that weather patterns are changing," Francis said. "We can expect more of the same and we can expect it to happen more frequently."
Temperatures in the Arctic have been rising "two to three times faster than the rest of the planet," said James Overland, a weather expert with the National Oceanic and Atmospheric Administration (NOAA).
Francis said it is premature to blame humans for these changes.
"Our data to look at this effect is very short and so it is hard to get very clear signal," she said. "But as we have more data I do think we will start to see the influence of climate change."
Dire impact on agriculture
The meandering jet stream phenomenon, sometimes called "Santa's Revenge," remains a controversial idea.
"There is evidence for and against it," said Mark Serreze, director of the National Snowland Ice Data Center in Boulder, Colo.
But he said rising Arctic temperatures are directly linked to melting ice caps.
"The sea ice cover acts as a lid which separates the ocean from a colder atmosphere," Serreze told the conference.
But if the lid is removed, then warmth contained in the water rises into the atmosphere.
This warming trend and the shifting jet stream will have a dire impact on agriculture, especially in the farm-rich middle latitudes of the United States.
"We are going to see changes in patterns of precipitation, of temperatures that might be linked to what is going on in the far north," said Serreze.
Jerry Hatfield, head of the National Laboratory for Agriculture and Environment in the midwestern state of Iowa, warned that this is not a phenomenon that affects only the United States.
"Look around the world -- we produce the bulk of our crops around this mid-latitude area," he said.
Read more at Discovery News
Arctic biodiversity under serious threat from climate change
Unique and irreplaceable Arctic wildlife and landscapes are critically at risk from global warming caused by human activities, according to the Arctic Biodiversity Assessment (ABA), a new report prepared by 253 scientists from 15 countries under the auspices of the Conservation of Arctic Flora and Fauna (CAFF), the biodiversity working group of the Arctic Council.
"An entire bio-climatic zone, the high Arctic, may disappear. Polar bears and the other highly adapted organisms cannot move further north, so they may go extinct. We risk losing several species forever," says Hans Meltofte of Aarhus University, chief scientist of the report.
From the iconic polar bear and elusive narwhal to the tiny Arctic flowers and lichens that paint the tundra in the summer months, the Arctic is home to a diversity of highly adapted animal, plant, fungal and microbial species. All told, there are more than 21,000 species.
Maintaining biodiversity in the Arctic is important for many reasons. For Arctic peoples, biodiversity is a vital part of their material and spiritual existence. Arctic fisheries and tourism have global importance and represent immense economic value. Millions of Arctic birds and mammals that migrate and connect the Arctic to virtually all parts of the globe are also at risk from climate change in the Arctic as well as from development and hunting in temperate and tropical areas. Marine and terrestrial ecosystems such as vast areas of lowland tundra, wetlands, mountains, extensive shallow ocean shelves, millennia-old ice shelves and huge seabird cliffs are characteristic of the Arctic. These are now at stake, according to the report.
"Climate change is by far the worst threat to Arctic biodiversity. Temperatures are expected to increase more in the Arctic compared to the global average, resulting in severe disruptions to Arctic biodiversity some of which are already visible," warns Meltofte.
A planetary temperature increase of 2 °C, the internationally agreed limit of acceptable warming, is projected to produce far greater warming in the Arctic, with anticipated temperature increases of 2.8-7.8 °C this century. Such dramatic changes will likely result in severe damage to Arctic biodiversity.
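A quick bit of arithmetic makes the amplification explicit: the projected Arctic warming of 2.8-7.8 °C against a 2 °C global rise corresponds to a factor of roughly 1.4 to 3.9.

```python
# Arctic amplification implied by the report's figures: 2.8-7.8 C of Arctic
# warming against the 2 C global limit.
global_rise = 2.0
for arctic_rise in (2.8, 7.8):
    print(f"{arctic_rise} C Arctic / {global_rise} C global = "
          f"{arctic_rise / global_rise:.1f}x amplification")
```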
Climate change impacts are already visible in several parts of the Arctic. These include northward range expansions of many species, earlier snow melt, earlier sea ice break-up and melting permafrost together with development of new oceanic current patterns.
Climate change is expected to shrink Arctic ecosystems on land, as northward-moving ecosystems are pressed against the boundary of the Arctic Ocean: the so-called "Arctic squeeze." As a result, Arctic terrestrial ecosystems may disappear in many places, or survive only in alpine or island refuges.
Disappearing sea ice is affecting marine species, changing the dynamics of the marine food web and the productivity of the sea. Many species found only in the Arctic rely on this ice to hunt, rest, breed and/or escape predators.
Other key findings
- Generally speaking, overharvest is no longer a primary threat, although pressures on some populations remain a serious problem.
- A variety of contaminants have bioaccumulated in several Arctic predator species to levels that threaten the health and reproductive ability of both animals and humans. However, it is not clear whether this is affecting entire populations.
- Arctic habitats are among the least anthropogenically disturbed on Earth, and huge tracts of almost pristine tundra, mountain, freshwater and marine habitats still exist.
- Regionally, ocean bottom trawling, non-renewable resource development and other intensive forms of land use pose serious challenges to Arctic biodiversity.
- Pollution from oil spills at sites of oil and gas development and from oil transport is a serious local level threat particularly in coastal and marine ecosystems.
- Uptake of CO2 in seawater is more pronounced in cold Arctic waters than elsewhere, and the resulting acidification of Arctic seas threatens calcifying organisms and perhaps even fisheries.