A compelling new genetic study tracing the origins of immature egg cells, or 'oocytes', from the embryonic period through adulthood adds new information to a growing controversy. The notion of a "biological clock" in women arises from the fact that oocytes progressively decline in number as females get older, along with the decades-old dogma that oocytes cannot be renewed in mammals after birth.
After careful assessment of data from a recent study published in PLoS Genetics, scientists from Massachusetts General Hospital and the University of Edinburgh argue that the findings support the formation of new eggs during adult life, a topic that has been historically controversial and has sparked considerable debate in recent years.
Eggs are formed from progenitor germ cells that exit the mitotic cycle, thereby ending their ability to proliferate through cell division, and subsequently enter meiosis, a process unique to the formation of eggs and sperm which removes one half of the genetic material from each type of cell prior to fertilization.
While traditional thinking has held that female mammals are born with all of the eggs they will ever have, newer research has demonstrated that adult mouse and human ovaries contain a rare population of progenitor germ cells called oogonial stem cells capable of dividing and generating new oocytes. Using a powerful new genetic tool that traces the number of divisions a cell has undergone with age (its 'depth'), Shapiro and colleagues counted the number of times progenitor germ cells divided before becoming oocytes; their study was published in PLoS Genetics in February this year.
If traditional thinking held true, all divisions would have occurred prior to birth, and thus all oocytes would exhibit the same depth regardless of age. However, the opposite was found -- eggs showed a progressive increase in depth as the female mice grew older.
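To make the logic of that prediction concrete, here is a minimal toy simulation of the two hypotheses (purely illustrative; all numbers are invented, and the real study used a molecular lineage-tracing assay, not anything like this):

```python
import random

def mean_depth(age_in_division_rounds, prenatal_divisions=20, n_oocytes=1000):
    """Toy model of oocyte 'depth' (divisions in a cell's lineage).

    Under the fixed-pool hypothesis, age_in_division_rounds is 0 and
    depth never changes. Under ongoing oogenesis, progenitors keep
    dividing after birth, so older animals yield deeper oocytes.
    """
    depths = [
        prenatal_divisions + random.randint(0, age_in_division_rounds)
        for _ in range(n_oocytes)
    ]
    return sum(depths) / len(depths)

print(mean_depth(0))    # fixed pool: mean depth stays at 20 at any age
for age in (2, 5, 10):  # ongoing oogenesis: mean depth climbs with age
    print(age, mean_depth(age))
```

A flat depth-versus-age curve would support the traditional view; the rising curve reported by Shapiro and colleagues is what the ongoing-oogenesis hypothesis predicts.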
In their assessment of the work by Shapiro and colleagues -- published recently in a PLoS Genetics Perspective article -- reproductive biologists Dori Woods, Evelyn Telfer and Jonathan Tilly conclude that the most plausible explanation for these findings is that progenitor germ cells in ovaries continue to divide throughout reproductive life, resulting in production of new oocytes with greater depth as animals age.
Read more at Science Daily
Jul 28, 2012
Science of Eyewitness Memory Enters Courtroom
Science has prevailed over injustice in the state of New Jersey, where all jurors will soon learn about memory’s unreliability and the limits of eyewitness testimony.
According to instructions issued July 19 by New Jersey’s Supreme Court, judges must tell jurors that “human memory is not foolproof,” and enumerate the many ways in which eyewitness recall can be distorted or mistaken.
Cognitive scientists who study memory have celebrated the new requirements.
“Eyewitness identification evidence is seen by jurors as being trustworthy and reliable,” said psychologist Charles Brainerd of Cornell University, who specializes in memory. “The science shows exactly the opposite.”
The guidelines were prompted by State v. Henderson, in which the New Jersey Supreme Court overturned the conviction of Larry Henderson, an accused murder accomplice whose identification from a lineup was unduly influenced by police.
Though egregiously unjust, Henderson’s case was hardly unusual: Eyewitness misidentification is the most common cause of wrongful conviction in the United States. Of prisoners exonerated by DNA testing, some 75 percent were mistakenly identified.
“In the U.S., in 95 percent of felony cases, there’s no evidence other than what people say and report,” Brainerd said. “Over the years, we’ve had thousands of experiments on eyewitness identification. Under the best conditions, people have about a 50/50 chance of getting it right.”
Rarely do the best possible conditions prevail. The vast majority of eyewitness identifications are based on police presentations of photographic lineups, which may be constructed so as to point subtly at a lead suspect.
A witness may be shown a suspect’s mug shot, for example, while other photographs in an array come from driver’s licenses. And while arrays should ideally contain a group of similar-looking people, some may obviously not be suspects.
Sometimes the bias isn’t subtle, but blatant. “I had one case in Missouri in which I was given the six-person photo spread used in the case, and there was a checkmark under the suspect,” said Brainerd.
Psychologist Gary Wells of Iowa State University, who served as an expert in State v. Henderson, called New Jersey’s new rules “a great advance.” But he warned that jury instructions aren’t always effective.
“There’s no substitute for trying to prevent mistaken identifications from occurring in the first place,” he said, calling for the development of “even better lineup procedures and safeguards.” These can include computer-generated lineup arrays designed to minimize bias.
In stark contrast to New Jersey’s example, some states don’t allow jurors to hear criticisms of eyewitness fallibility from experts like Brainerd and Wells. Cognitive scientists may also be overruled.
“In the few trials where I testified on eyewitness reliability, the introduction to the jury directed them to place confidence in eyewitnesses,” said psychologist Barbara Tversky of Stanford University. “That certainly disturbed me as someone who is all too aware of the fallibility and malleability of memory.”
Another example of reform comes from the United Kingdom, where an especially egregious case of mistaken identification prompted the country to forbid convictions based solely on eyewitness identification. While that won’t likely occur anytime soon in the United States, other states may follow New Jersey’s lead.
Read more at Wired Science
Jul 27, 2012
The Longer You're Awake, the Slower You Get
Anyone who has ever had trouble sleeping can attest to the difficulties at work the following day. Experts recommend eight hours of sleep per night for ideal health and productivity, but what if five to six hours of sleep is your norm? Is your work still negatively affected? A team of researchers at Brigham and Women's Hospital (BWH) has discovered that regardless of how tired you perceive yourself to be, lack of sleep can influence the way you perform certain tasks.
This finding is published in the July 26, 2012 online edition of The Journal of Vision.
"Our team decided to look at how sleep might affect complex visual search tasks, because they are common in safety-sensitive activities, such as air-traffic control, baggage screening, and monitoring power plant operations," explained Jeanne F. Duffy, PhD, MBA, senior author on this study and associate neuroscientist at BWH. "These types of jobs involve processes that require repeated, quick memory encoding and retrieval of visual information, in combination with decision making about the information."
Researchers collected and analyzed data from visual search tasks from 12 participants over a one-month study. In the first week, all participants were scheduled to sleep 10-12 hours per night to make sure they were well-rested. For the following three weeks, the participants were scheduled to sleep the equivalent of 5.6 hours per night, and also had their sleep times scheduled on a 28-hour cycle, mirroring chronic jet lag. The research team gave the participants computer tests that involved visual search tasks and recorded how quickly the participants could find important information, and how accurate they were in identifying it. The researchers report that the longer the participants were awake, the more slowly they identified the important information in the test. Additionally, during the biological nighttime, 12 a.m. to 6 a.m., participants (who were unaware of the time throughout the study) performed the tasks more slowly than they did during the daytime.
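As a rough aside, the "equivalent of 5.6 hours per night" on a 28-hour schedule can be pro-rated into a per-cycle sleep allotment. This is a minimal sketch of that arithmetic, assuming the equivalence is a simple proportion to a 24-hour day (our assumption, not a detail stated in the article):

```python
# Pro-rate the reported sleep restriction onto the 28-hour schedule.
# Assumption: "equivalent of 5.6 hours per night" means 5.6 h per 24 h,
# scaled proportionally to each 28-hour scheduled "day".
equivalent_sleep_per_24h = 5.6   # hours, as reported
cycle_length_h = 28.0            # hours per scheduled cycle

sleep_per_cycle = equivalent_sleep_per_24h * cycle_length_h / 24.0
print(f"Scheduled sleep per 28-hour cycle: {sleep_per_cycle:.2f} h")  # ~6.53 h
```

In other words, participants would have been allotted roughly six and a half hours of sleep in each 28-hour cycle, which pro-rates to the reported 5.6 hours per conventional day.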
"This research provides valuable information for workers, and their employers, who perform these types of visual search tasks during the night shift, because they will do it much more slowly than when they are working during the day," said Duffy. "The longer someone is awake, the more the ability to perform a task, in this case a visual search, is hindered, and this impact of being awake is even stronger at night."
While the participants' accuracy stayed fairly constant, they were slower to identify the relevant information as the weeks went on. Their self-ratings of sleepiness got only slightly worse during the second and third weeks on the study schedule, yet the data show that they were performing the visual search tasks significantly more slowly than in the first week. This finding suggests that someone's perception of how tired they are does not always match their performance ability, explains Duffy.
Read more at Science Daily
New Research Method Provides Better Insights Into the World of Microbes
A team of Luxembourg-based researchers, working at the Luxembourg Centre for Systems Biomedicine, has developed a research method that will allow scientists to study microbes in more depth than ever before.
Microbes, such as bacteria, fungi and viruses, are invisible to the human eye but have been identified just about everywhere on earth: in soil, water, and air. Many different types of microbes usually live closely together in 'microbial communities'.
Microbial communities also reside in the human body, which is home to trillions of these tiny organisms. Biologists have long suggested that microbes in the human body can cause or contribute to some human diseases, such as diabetes. The new research method might soon make it possible to test this hypothesis.
"In order for us to understand the impact that microbes might have on human health, we need to be able to measure the biomolecular information contained within the DNA, RNA (ribonucleic acid), proteins, and small molecules of microbes in a truly systematic way. This was not possible until now," explains Dr. Paul Wilmes, who runs the Eco-Systems Biology lab at the LCSB.
Using the method described by Wilmes, researchers can, for the first time, measure and integrate all the important biomolecular information from a single sample, providing scientists with high-resolution molecular snapshots of microbial communities in, for example, the human stomach and intestine.
The research method developed by Wilmes can also be applied to microbial communities found in other environments. In this particular study, the authors used, for example, microbes that grow in wastewater treatment plants and accumulate large amounts of high-energy fats. With Wilmes' new research method, scientists can do in-depth studies of these communities. "Once these microbial communities are better understood, we might be able to exploit these communities for the comprehensive reclamation of energy-rich fats from wastewater," says Dr. Paul Wilmes.
Read more at Science Daily
Live Spider Exhibit Will Make Your Skin Crawl
Regardless of whether you love them or fear them, spiders are pretty much always fascinating.
A new exhibit called Spiders Alive!, opening July 28 at the American Museum of Natural History in New York City, will give visitors the chance to see spiders and their relatives live and up close.
The exhibit will feature important information about spiders, detailing their anatomy, defensive behavior, hunting strategies, life cycles, and diversity. Approximately 20 different species will be on display alongside spider models, videos, and fossils. For those who aren’t too squeamish, museum staff will be handling live arachnids, giving museum-goers a chance to see them up close. On display will be some of the most impressive arachnid species, including the Goliath bird eater, one of the largest spiders in the world, and the western black widow, one of the most venomous spiders in North America.
For those not traveling to New York anytime soon, or too arachnophobic to check out the spiders live, here's a look at these species from the comfort of your computer.
Captions courtesy of the American Museum of Natural History
Above:
Funnel-web wolf spider
This spider spins a sheet-like web attached to a narrow tube, or funnel. Sitting at the mouth of the tube, the spider waits to strike after feeling vibrations of prey crossing the web.
Read more at Wired Science
Vampire Stars Suck Life Out Of Stellar Partners
A surprising number of massive stars in our Milky Way galaxy are part of close stellar duos, a new study finds, but most of these companion stars have turbulent relationships -- with one "vampire star" sucking gas from the other, or the two stars violently merging to form a single star.
Astronomers using the European Southern Observatory's Very Large Telescope in Chile studied massive O-type stars, which are very hot and incredibly bright. These stars, which have surface temperatures of more than 54,000 degrees Fahrenheit (30,000 degrees Celsius), live short, violent lives, but they play key roles in the evolution of galaxies.
The researchers discovered that more than 70 percent of these massive stars have close companions, making up so-called binary systems in which two stars orbit each other.
While this percentage is far more than was previously expected, the astronomers were more surprised to find that the majority of these stellar pairs have tumultuous relationships with one another, said study co-author Selma de Mink, of the Space Telescope Science Institute in Baltimore.
"We already knew that massive stars are very often in binaries," de Mink told SPACE.com. "What is very surprising to us is that they're so close, and such a large fraction is interacting. If a star has a companion so close next to it, it will have a very different evolutionary path. Before, this was very complicated for us to model, so we were hoping it was a minority of stars. But, if 70 percent of massive stars are behaving like this, we really need to change how we view these stars."
Studying Stellar Behemoths
Type O stars drive galaxy evolution, but these stellar giants can also exhibit extreme behavior, garnering the nickname "vampire stars" for the way they suck matter from neighboring companions.
"These stars are absolute behemoths," study lead author Hugues Sana, of the University of Amsterdam in the Netherlands, said in a statement. "They have 15 or more times the mass of our sun and can be up to a million times brighter."
These massive stars typically end their lives in violent explosions, such as core-collapse supernovas or gamma-ray bursts, which are so luminous they can be observed throughout most of the universe.
For the new study, the astronomers analyzed the light coming from 71 O-type stars -- a mix of single and binary stars -- in six different star clusters, all located roughly 6,000 light-years away.
The researchers found that almost three-quarters of these stars have close companions. Most of these pairs are also close enough to interact with one another, with mass being transferred from one star to the other in a sort of stellar vampirism. About one-third of these binary systems are even expected to eventually merge to form a single star, the researchers said.
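Translating those reported fractions into rough counts for the sample described above (illustrative arithmetic only; the paper's own tallies may differ):

```python
# Naive counts implied by the reported fractions for the 71-star sample.
# The fractions below come from the article's wording ("almost
# three-quarters", "about one-third"), not from the paper itself.
n_stars = 71
companion_fraction = 0.71    # "more than 70 percent" have close companions
merger_fraction = 1 / 3      # "about one-third" of the binaries should merge

n_with_companions = round(n_stars * companion_fraction)         # ~50 stars
n_expected_mergers = round(n_with_companions * merger_fraction)  # ~17 pairs
print(n_with_companions, n_expected_mergers)
```

So on the order of fifty of the surveyed stars would sit in close, interacting pairs, with perhaps fifteen to twenty of those pairs destined to merge.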
The results of the study indicate that massive stars with companions are more common than was once thought, and that these heavyweights in binary systems evolve differently than single stars -- a fact that has implications for how scientists understand galaxy evolution.
"It makes a big difference for understanding the life of massive stars and how they impact the whole universe," said de Mink.
Big Stars with a Big Impact
Type O stars make up less than 1 percent of the stars in the universe, but they have powerful effects on their surroundings. The winds and shocks from these stars can both trigger and halt star formation processes, the researchers said.
Over the course of their lives, culminating in the supernova explosions that signal their death, these massive stars also produce many of the heavy elements in the universe. These elements enrich galaxies and are crucial for life.
Read more at Discovery News
Jul 26, 2012
In Early Italy: Ice Cream All the Rage
Ice cream was enjoyed in Italy by rich and poor alike long before freezing technology brought iced products to the masses, says new research.
Long portrayed as a luxury product that in its early days would have been enjoyed primarily by a social elite, ice cream was sold on the streets of Naples as long as 300 years ago.
At that time, making ice cream was a laborious task which relied on large amounts of ice. Until the late 19th century, when industrial refrigeration eliminated the need for ice houses, ice was harvested from glaciers in the mountains and transported to towns and cities where it was used to cool buckets of mixes.
Snow too was gathered and made into ice by being compressed in pits, where it was kept cold for months.
In fact, the early history of ice cream-making suggests that snow often inspired frozen treats.
Popular lore has attributed the creation of iced desserts to the Roman Emperor Nero, who had slaves bring buckets of snow from the mountains to mix with fruit and honey. There was also Marco Polo, who brought from China a recipe closely resembling sherbet. The royal chef of England's King Charles I apparently made frozen treats, and Catherine de' Medici supposedly brought Italian chefs able to make "flavored ice" to France when she became the wife of Henry II.
Others credit Bernardo Buontalenti, an architect and engineer at the Medici court in Florence, with making the first gelato. At the 1600 wedding of Maria de' Medici and Henry IV of France, he conceived "marvels of gelati" made of snow, salt, lemons, sugar, egg whites and milk.
Ice cream, whoever invented it, was long associated with wealthy tables, as surviving royal porcelain artifacts relating to the consumption of iced desserts testify.
But according to Melissa Calaresu, a historian at Cambridge University, U.K., ice cream was not reserved for the elite -- at least not in southern Italy.
"Contemporary sources suggest that there was much greater intermingling and overlapping of social milieus in cities such as Naples than historians have thought," Calaresu said.
The stifling heat of the Neapolitan summer created a lucrative market for cool refreshments, a market that would have included both rich and poor, said Calaresu.
The consumption of ice in Naples was so great that ice was soon considered an important commodity. Official records show that, like other staples such as grain and oil, ice was taxed and its prices were recorded and regulated.
"The passion for iced water is so great and so general in Naples, that none but mere beggars would drink it in its natural state; and, I believe, a scarcity of bread would not be more severely felt than a failure of snow," the English travel writer Henry Swinburne, who traveled to Naples in the 1780s, wrote.
According to Calaresu, in 18th-century Naples iced treats were enjoyed by the lazzaroni, the Neapolitan lower classes, as well as by the aristocrats. At that time, the city was the third largest in Europe and a stopping point on the Grand Tour, a rite of passage undertaken by middle- and upper-class young men from Northern Europe.
Evidence for the social power of ice creams came from a number of prints sold as souvenirs to the Grand Tour travelers.
For example, an engraving by Achille Vianelli shows a sorbet vendor with a long apron selling his wares from a table set near the Castel Nuovo.
"Two gentlemen in top hats and fitted jackets scoop their sorbets from small pots while a rascally-looking fellow with bare feet and a missing trouser leg tips his sorbet straight into his open mouth," Calaresu wrote.
An engraving of a similar scene by Pietro Fabris shows a couple of barefoot boys reaching out to lick the spoon of an ice-cream seller who has stationed himself and his wooden pails in a square beside Naples' Angevin Castle.
Calaresu also found that many travelers noted the dependence of the poor as well as the rich on iced products. John Moore, an English doctor living in Naples in the 1780s, even described cold drinks as a threat to social order.
Read more at Discovery News
Discover Your Own Tiny Galaxy
Think galaxies, think big!
That's what most people conclude when talk turns to galaxies, yet within this family of Goliaths there are big ones and there are tiny ones.
Among the smallest of galaxies are the so-called "dwarf galaxies" that typically provide a gravitational home for a few billion stars as opposed to several hundred billion stars -- like the Milky Way or Andromeda Galaxy. This delineation is vague (at best) but there are a number of dwarf galaxies within the grasp of amateur astronomers, despite their feeble output of light.
Like most galaxies, dwarf galaxies formed out of the gentle variations in the distribution of matter in the early Universe. Studies have shown that, while the rate of galaxy formation has slowed dramatically since the boom just after the Big Bang, a handful of new galaxies, including dwarfs, are still slowly forming. More specific to dwarf galaxy formation is the process around larger galaxies, where dark matter and ordinary "baryonic" material slowly coalesce over billions of years to form new satellite galaxies.
For the amateur astronomer, there are some great examples of dwarf galaxies that can be seen in the sky, some requiring a good pair of binoculars and a couple just the naked eye.
Northern Hemisphere observers can see two great examples of dwarf galaxies in orbit around the Andromeda Galaxy. Carrying the catalog designations M32 (Messier 32) and M110 (Messier 110), they were both recorded by Charles Messier, the 18th-century comet hunter. Both dwarf galaxies shine at around 8th magnitude, which means a good pair of binoculars and dark skies can just about detect them.
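For a sense of what "8th magnitude" means in practice, the standard astronomical magnitude relation converts a magnitude difference into a brightness ratio. Here is a minimal sketch, assuming a naked-eye limit of roughly magnitude 6 under dark skies (a conventional rule of thumb, not a figure from the article):

```python
# Brightness ratio from the standard (Pogson) magnitude relation:
# flux_ratio = 10 ** (0.4 * (m_faint - m_bright))
m_galaxy = 8.0            # approximate magnitude of M32 and M110
m_naked_eye_limit = 6.0   # assumed dark-sky naked-eye limit

flux_ratio = 10 ** (0.4 * (m_galaxy - m_naked_eye_limit))
print(f"~{flux_ratio:.1f}x fainter than the naked-eye limit")  # ~6.3x
```

A factor of roughly six below the naked-eye threshold is exactly the regime where binoculars, which gather far more light than the eye alone, make the difference.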
M32 is the brighter of the two and is classed as a "dwarf elliptical galaxy," 2.65 million light-years from the Milky Way. Due to its structure -- the presence of old red and yellow stars and a hint of stellar formation in its core -- it is now believed it may have once been a spiral galaxy that had its outer spiral arms stripped off by the immense gravity of the mighty Andromeda Galaxy. M110, on the other hand, is a "dwarf spheroidal galaxy," similar to M32 but with a more spherical shape; it too may once have been a spiral that suffered a similar fate.
Southern Hemisphere observers have a much better view of what are arguably the best dwarf galaxies in the sky: the Magellanic Clouds. Both are classed as "irregular dwarf galaxies" and were once thought to be in orbit around the Milky Way.
Both are visible as what seem to be detached portions of the Milky Way glowing faintly to its west. The Large Magellanic Cloud is the more prominent and brighter of the two and sits on the border of the constellations Dorado and Mensa.
The aptly-named Small Magellanic Cloud is found a little further to the west in the constellation of Tucana. Both galaxies are now thought to have once been "barred spiral galaxies" that had their shape distorted by the gravity of the Milky Way.
Read more at Discovery News
Strange Underwater Tower Off California
While mapping the seafloor off San Diego, researchers found something odd: a seafloor mound about the height of a two-story building and the size of a city block.
Further investigation found evidence the formation was caused by methane leaking out of the seafloor, which would make it the first so-called "methane seep" in San Diego County, the Scripps Institution of Oceanography at UC San Diego announced yesterday (July 25).
The Scripps researchers took samples from 3,400 feet (1,036 meters) below the surface and brought up strange worms and clams that likely live off symbiotic bacteria that break down the clear, flammable gas.
Such sites are important oases of life on the dark seafloor, with the methane-eating bacteria at the base of a rich and productive community that helps sustain the surrounding deep-sea ecosystem.
"These chemosynthetic ecosystems are considered 'hot spots' of life on the seafloor in an otherwise desert-like landscape," said expedition member Alexis Pasulka, a Scripps biological oceanography graduate student, in a statement. "New forms of life are continuously being discovered in these environments."
Organisms collected from the newly discovered site include thread-like tubeworms called siboglinids and several clams. Siboglinids lack a mouth and digestive system and gain nutrition from a symbiotic relationship with bacteria living inside them, while many clams at seeps get some of their food from sulfide-loving bacteria living on their gills.
The scientists found the site 20 miles (32 kilometers) west of Del Mar, Calif. It's centered on the San Diego Trough fault zone. Methane, or natural gas, exists in the Earth's crust under the seafloor along many of the world's continental margins. Faults can provide a pathway for methane to "seep" upward toward the seafloor.
Read more at Discovery News
'Grand Canyon' Discovered Beneath Antarctic Ice
A dramatic gash in the surface of the Earth that could rival the majesty of the Grand Canyon has been discovered secreted beneath Antarctica's vast, featureless ice sheet.
Dubbed the Ferrigno Rift for the glacier that fills it, the chasm's steep walls plunge nearly a mile (1.5 kilometers) at their deepest. It is roughly 6 miles (10 km) across and at least 62 miles (100 km) long, possibly far longer if it extends into the sea.
The rift was discovered during a grueling 1,500-mile (2,400 km) trek that, save for a few modern conveniences, hearkens back to the days of early Antarctic exploration. And it came as a total surprise, according to the man who first sensed that something incredible was literally underfoot, hidden by more than a half-mile (1 km) of ice.
Old-school exploration
Robert Bingham, a glaciologist at the University of Aberdeen, along with field assistant Chris Griffiths, had embarked on a nine-week trip during the 2009-2010 field season to survey the Ferrigno Glacier, a region humans had visited only once before, 50 years earlier. Over the last decade, satellites have revealed the glacier is the site of the most dramatic ice loss in its West Antarctica neighborhood, a fringe of coastline just west of the Antarctic Peninsula — the narrow finger of land that points toward South America.
The two-man team set out aboard snowmobiles, dragging radar equipment behind them to measure the topography of the rock beneath the windswept ice, in a region notorious for atrocious weather. Though they were braced for arduous yet uneventful fieldwork, the surprise came right away.
"It was literally one of the first days that we were driving across the ice stream, doing what we thought was a pretty standard survey, that I saw the bed of the ice just dropping away," Bingham said.
The drop was so sudden and so deep that Bingham drove back and forth across the area two or three more times to check the data, and saw the same pattern. "We got the sense that there was something really exciting under there," he told OurAmazingPlanet. "It was one of the most exciting science missions I've ever had."
Slippery implications
Bingham compared the hidden chasm to the Grand Canyon in scale, but said that tectonic forces of continental rifting — in contrast to erosion — created the Ferrigno Rift, wrenching the fissure's walls apart probably tens of millions of years ago, when Antarctica was ice-free.
Excitement surrounding the discovery has deeper implications than the mere gee-whiz factor of finding such a massive feature. The Ferrigno Rift's "existence profoundly affects ice loss," Bingham and co-authors from the British Antarctic Survey wrote in a paper published in Nature today (July 25).
"The geology and topography under the ice controls how the ice flows," said Robin Bell, a geophysicist and professor at Columbia University's Lamont-Doherty Earth Observatory, who was not associated with the research. "Ice will flow faster over sediments, like those found in rifts," said Bell, a veteran Antarctic researcher, who has long studied yet another dramatic, yet invisible geological feature, the hidden Gamburtsev Mountains in East Antarctica.
In addition, the study authors write, the rift is providing a channel for warm ocean water to creep toward the interior of the West Antarctic Ice Sheet, gnawing away at the Ferrigno Glacier from below.
Together, these two factors could be speeding the glacier's march to the sea, and the overall effects could have implications for the stability of the West Antarctic Ice Sheet, which currently accounts for about 10 percent of global sea level rise.
Scientists are still only just beginning to understand the myriad mechanisms that control the seemingly dramatic melting observed in regions of West Antarctica, and how climate change is affecting all the moving parts.
Read more at Discovery News
Jul 25, 2012
Ancient Life-Size Lion Statues Baffle Scientists
Two sculptures of life-size lions, each weighing about 5 tons in antiquity, have been discovered in what is now Turkey, with archaeologists perplexed over what the granite cats were used for.
One idea is that the statues, created between 1400 and 1200 B.C., were meant to be part of a monument for a sacred water spring, the researchers said.
The lifelike lions were created by the Hittites, who controlled a vast empire in the region at a time when the Asiatic lion roamed the foothills of Turkey.
"The lions are prowling forward, their heads slightly lowered; the tops of their heads are barely higher than the napes," write Geoffrey Summers, of the Middle East Technical University, and researcher Erol Özen in an article published in the most recent edition of the American Journal of Archaeology.
The two lion sculptures have stylistic differences and were made by different sculptors. The lion sculpture found in the village of Karakiz is particularly lifelike, with rippling muscles and a tail that curves around the back of the granite boulder.
"The sculptors certainly knew what lions looked like," Summers told LiveScience in an interview. He said that both archaeological and ancient written records indicate that the Asiatic lion, now extinct in Turkey, was still very much around, some even being kept by the Hittites in pits.
Curiously, the sculpture at Karakiz has an orange color caused by the oxidation of minerals in the stone. Summers said that he doesn't believe it had this color when it was first carved.
The story of the discovery of the massive lions began in 2001, when Özen, at the time director of the Yozgat Museum, was alerted to the existence of the ancient quarry by a man from Karakiz village and an official from the Ministry of Culture. An extensive search of the area was undertaken in spring 2002 with fieldwork occurring in the following years.
Looters, however, beat the archaeologists to the catch. The Karakiz lion was found dynamited in two, likely in the mistaken belief that it contained hidden treasure. "There's this belief that monuments like this contain treasure," said Summers, explaining that the dynamiting of monuments is a problem in Turkey. "It makes the Turkish newspapers every month or so."
The second lion, found to the northeast of the village, had also been split in two. As a result of this destruction, both sculptures, each of which originally depicted a pair of lions, now have only one lion mostly intact.
The danger of new looting loomed over the researchers while they went about their work. In the summer of 2008 evidence of "fresh treasure hunting" was found at the ancient quarry along with damage to a drum-shaped rock that, in antiquity, was in the process of being carved.
What were they intended for?
The discovery of the massive lions, along with other pieces in the quarry, such as a large stone basin about 7 feet (2 meters) in diameter, left the archaeologists with a mystery — what were they intended for?
A search of the surrounding area revealed no evidence of a Hittite settlement dating back to the time of the statues. Also, the sheer size of the sculptures meant that the sculptors likely did not intend to move them very far.
Summers hypothesizes that, rather than being meant for a palace or a great city, the lions were being created for a monument to mark something else – water.
"I think it's highly likely that that monument was going to be associated with one of the very copious springs that are quite close," he said in the interview. "There are good parallels for associations of Hittite sculptural traditions with water sources."
Indeed one well-known monument site, known as Eflatun Pınar, holds a sacred pool that "is fed by a spring beneath the pool itself," write Yiğit Erbil and Alice Mouton in an article that was published in the most recent edition of the Journal of Near Eastern Studies. The two researchers were writing about water religions in ancient Anatolia (Turkey).
Read more at Discovery News
500-Million-Year-Old 'Mistake' Led to Humans
More than 500 million years ago, a spineless creature on the ocean floor experienced two successive doublings in the amount of its DNA, a "mistake" that eventually triggered the evolution of humans and many other animals, says a new study.
The good news is that these ancient DNA doublings boosted cellular communication systems, so that our body cells are now better at integrating information than even the smartest smartphones. The bad part is that communication breakdowns, traced back to the very same genome duplications of the Cambrian Period, can cause diabetes, cancer and neurological disorders.
"Organisms that reproduce sexually usually have two copies of their entire genome, one inherited from each of the two parents," co-author Carol MacKintosh explained to Discovery News. "What happened over 500 million years ago is that this process 'went wrong' in an invertebrate animal, which somehow inherited twice the usual number of genes. In a later generation, the fault recurred, doubling the number of copies of each gene once again."
MacKintosh, a professor in the College of Life Sciences at the University of Dundee, said that such duplications also happened in plant evolution. As for the progeny of the newly formed animal, they remarkably survived and thrived.
"The duplications were not stable, however, and most of the resulting gene duplicates were lost quickly -- long before humans evolved," she continued. But some did survive, as MacKintosh and her team discovered.
Her research group studies a network of several hundred proteins that work inside human cells to coordinate their responses to growth factors and to insulin, a hormone. Key proteins involved in this process are called 14-3-3.
For this latest study, the scientists mapped, classified and biochemically analyzed the proteins, finding that they date back to the genome duplications that occurred during the Cambrian.
The first animal to carry them remains unknown, but gene sequencing shows that a modern day invertebrate known as amphioxus "is most similar to the original spineless creature before the two rounds of whole genome duplication," MacKintosh said. "Amphioxus can therefore be regarded as a ‘very distant cousin’ to all the vertebrate (backboned) species."
The inherited proteins appear to have evolved to make a "team" that can tune into more growth factor instructions than would be possible with a single protein.
"These systems inside human cells therefore behave like the signal multiplexing systems that enable our smartphones to pick up multiple messages," MacKintosh shared.
The teamwork may not always be a good thing, though. The researchers propose that if a critical function were performed by a single protein, as in amphioxus, then its loss or mutation would likely be lethal, resulting in no disease.
If multiple proteins are working as a team, however, and one or more becomes lost or mutated, the individual may survive, but could still wind up with a debilitating disorder. Such breakdowns could help to explain how diseases, such as diabetes and cancer, are so entrenched in humans.
"In type 2 diabetes, muscle cells lose their ability to absorb sugars in response to insulin," MacKintosh said. "In contrast, greedy cancer cells don't await instructions, but scavenge nutrients and grow out of control."
Read more at Discovery News
Kepler Spots 'Perfectly Aligned' Alien Worlds
When NASA's Kepler space telescope started finding planets at odd angles to their parent stars, scientists wondered if our solar system's tidy geometry, with the planets neatly orbiting around the sun's equator, was an exception to the rule.
That idea can be laid to rest thanks to an innovative use of the Kepler data, which tracked three planets circling the sun-like star Kepler-30 as they passed in front of a giant spot on the star's surface.
The study showed the trio of planets orbiting within one degree, relative to each other and relative to the star's equator. That finding is an indication that Kepler-30, like our own solar system, formed from a rotating disk of gas.
"The planets themselves are not all that remarkable -- two giant Jupiters and one super-Earth -- but what is remarkable is that they aligned so perfectly," astronomer Drake Deming, with the University of Maryland, told Discovery News.
"The dynamics of the system are important for the possible development of life," he added.
The alignment of the Kepler-30 brood is the most precise found yet.
The Kepler telescope is studying about 150,000 sun-like stars for signs of Earth-like planets. Multi-planet systems would have to be somewhat aligned to fall into the telescope's narrow and deep field of view.
Kepler's targets are all hundreds to thousands of light-years away. One light-year is about 5.9 trillion miles.
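That figure is easy to verify: a light-year is just the speed of light multiplied by the number of seconds in a year, converted to miles. A quick Python check:

```python
SPEED_OF_LIGHT_MPS = 299_792_458        # meters per second (exact, by definition)
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year
METERS_PER_MILE = 1609.344

light_year_miles = SPEED_OF_LIGHT_MPS * SECONDS_PER_YEAR / METERS_PER_MILE
print(f"One light-year is about {light_year_miles / 1e12:.2f} trillion miles")
# -> about 5.88 trillion miles, matching the ~5.9 trillion quoted above
```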
Read more at Discovery News
'Seeds' of Supermassive Black Holes Discovered
We've found small black holes and we've found really, really big black holes. But what about the "inbetweener" black holes?
The very existence of this class of black hole is disputed, but a group of Japanese astronomers has found the potential locations of three intermediate-mass black hole (IMBH) candidates inside previously unknown star clusters near the center of the Milky Way.
But what are IMBHs and why are they so important?
Conventional stellar-mass black holes are the ones we learn about at school when discussing the life cycles of massive stars. When a star more than ten times the mass of our sun runs out of fuel, its death throes culminate in a supernova. This powerful explosion creates the extreme gravitational conditions ripe for a stellar black hole to form.
At the other end of the black hole spectrum are the supermassive ones. As the superlative suggests, these black holes are monsters. We know that the majority of galaxies we can observe -- including our own -- play host to supermassive black holes in their cores. These black holes are very different from their stellar tiddler counterparts; supermassive black holes grow from tens of thousands to billions of times the mass of our sun.
But big questions about where supermassive black holes come from have vexed astrophysicists. How did they become so massive? What's the link between stellar-mass black holes and supermassive black holes? And is there an "intermediate" black hole phase?
Logic would dictate that IMBHs should be out there, but there were few candidates until the discovery of Hyper-Luminous X-ray Source 1 (HLX-1) was confirmed earlier this year by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) telescope in Australia. The apparent dearth of the objects, however, is causing some puzzlement.
When Black Holes Unite
Black hole formation theories suggest that supermassive black holes were formed through the agglomeration of many intermediate-sized black holes. Therefore, it stands to reason that there should at least be some IMBHs near the centers of galaxies.
One environment that may be fertile for the growth of IMBHs is that of densely packed star clusters surrounding the galactic center -- these clusters could be dense enough to regularly kick off supernovae, creating a supply of stellar black holes that accumulate and grow into intermediate-mass black holes.
So, with this theory in mind, using the 10-meter Atacama Submillimeter Telescope Experiment (ASTE) in the Atacama Desert, Chile, and the 45-meter Nobeyama Radio Observatory (NRO) in Japan, a research group headed by Keio University's Tomoharu Oka hunted for the emissions from molecular gases associated with supernovae in star clusters.
"Huge star clusters at the center of the Milky Way Galaxy have an important role related to formation and growth of the Milky Way Galaxy's nucleus," said Oka.
Find the Gas; Find the Cluster
One would think that finding giant star clusters is easy, but as we look through the Milky Way's disk toward the galactic center (30,000 light-years away), it is hard to see the star clusters through the gas, dust and stars in front. It's the cosmic equivalent of "you can't see the wood for the trees" -- we can't see the star clusters for the stars (and dust)!
"The huge amount of gas and dust lying between the solar system and the center of the Milky Way Galaxy prevent not only visible light, but also infrared light, from reaching the Earth," said Oka. "Moreover, innumerable stars in the bulge and disc of the Milky Way Galaxy lie in the line of sight. Therefore, no matter how large the star cluster is, it is very difficult to directly see the star cluster at the center of the Milky Way Galaxy."
So to detect the clusters, Oka and his team surveyed the center of our galaxy for emissions from molecular clouds -- particularly the millimeter-wavelength emission from carbon monoxide. This wavelength can penetrate the obscuring galactic disk, providing the researchers with a window into the core of our galaxy. They could then map the distribution of warm gas -- warmer than 50 Kelvin (-370 degrees Fahrenheit/-223 degrees Celsius) -- with densities of more than 10,000 hydrogen molecules per cubic centimeter.
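The temperature conversions quoted above follow from the standard Kelvin-to-Celsius and Celsius-to-Fahrenheit formulas, which a few lines of Python confirm:

```python
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

gas_temp_k = 50.0
c = kelvin_to_celsius(gas_temp_k)       # -223.15 C
f = celsius_to_fahrenheit(c)            # -369.67 F
print(f"{gas_temp_k} K = {c:.0f} C = {f:.0f} F")  # matches the rounded values above
```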
The group managed to find three previously unknown warm clumps of gas, all of which exhibit signs of rapid expansion. A fourth clump was found in the location of Sagittarius A (Sgr A), a very well-known radio source and lair of Sagittarius A* (Sgr A*) -- the Milky Way's very own supermassive black hole with a mass of 4 million suns.
"It can be inferred that the gas clump 'Sgr A' has a disk-shaped structure with radius of 25 light-years and revolves around the supermassive black hole (Sgr A*) at a very fast speed," added Oka.
According to the National Astronomical Observatory of Japan press release, the expansion detected inside the other three previously unknown clumps of molecular gas can be attributed to recent supernova activity. The researchers believe these clumps therefore correspond to clusters of stars, where one of the clusters is comparable to the largest known star cluster in the Milky Way, with a mass of around 100,000 solar masses.
Seeds of the Supermassive
Where supernovae regularly pop off inside a cluster, stellar black holes are forming. In one of the most active clusters, there is evidence to suggest that, on average, one supernova detonates every 300 years. The other two clusters also show signs of recent supernova activity.
According to theory, inside these dense, violent supernova pressure-cookers of star clusters, stellar black holes are being born, merging and then bulking up to form IMBHs. Oka's team predicts that there should be an IMBH inside each of these three clusters, weighing in at several hundred solar masses. On the grand galactic scale, these IMBHs would eventually sink into the center of the galaxy and be swallowed by Sgr A*, potentially explaining how the supermassive black holes in the cores of galaxies get so massive.
Read more at Discovery News
Jul 24, 2012
Humans Blamed for Neanderthal Extinction
About 40,000 years ago, a huge volcanic eruption west of what is now Naples, Italy, showered ash over much of central and Eastern Europe. Some researchers have suggested that this super-eruption, combined with a sharp cold spell that hit the Northern Hemisphere at the same time, created a “volcanic winter” that did in the Neandertals. But a new study of microscopic particles of volcanic glass left behind by the explosion concludes that the eruption happened after the Neandertals were already mostly gone, putting the blame for their extinction on competition with modern humans.
Why the Neandertals disappeared is one of archaeology’s longest-running debates. Over the years, opinions have shifted back and forth between climate change, competition with modern humans, and combinations of the two. Earlier this year, the climate change contingent got a boost when a European team determined that the Italian eruption, known as the Campanian Ignimbrite (CI), was two to three times larger than previous estimates. The researchers calculated that ash and chemical aerosols released into the atmosphere by the eruption cooled the Northern Hemisphere by as much as 2°C for up to 3 years.
Modern humans entered Europe from Africa and possibly the Middle East around the time of the eruption and Neandertals’ demise, give or take several thousand years. The timing is critical. If Neandertals began disappearing before the eruption, it could not be responsible for their extinction; if their demise began at the same time or shortly afterward, the correlation with climate might still hold.
With these issues in mind, a team of more than 40 researchers from across Europe, led by geographer John Lowe of Royal Holloway, University of London in Egham, U.K., used a new technique for detecting volcanic ash across a much larger area than previously possible. The new method relies on deposits of cryptotephra, tiny particles of volcanic glass that are invisible to the naked eye. Unlike visible ash deposits, which are found over a more limited range, the much lighter cryptotephra can penetrate and be recovered from far-flung archaeological sites as well as marine, lake, and marsh environments. Moreover, by analyzing the chemical composition of the microscopic particles, researchers can trace them back to specific volcanic eruptions, in this case the CI.
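The attribution step works, in essence, by comparing each glass shard's measured chemistry with reference compositions from known eruptions. A minimal sketch of that idea in Python -- the oxide values below are invented placeholders, not measurements from the study, and real tephra correlation uses richer statistical comparisons than a simple nearest match:

```python
import numpy as np

# Hypothetical reference compositions (weight % of selected oxides) for
# known eruptions; the numbers are illustrative placeholders only.
REFERENCES = {
    "Campanian Ignimbrite": np.array([61.0, 18.5, 6.8]),   # SiO2, Al2O3, K2O
    "Other eruption A":     np.array([72.0, 13.0, 4.1]),
    "Other eruption B":     np.array([55.0, 17.0, 2.5]),
}

def attribute_shard(shard_composition: np.ndarray) -> str:
    """Assign a glass shard to the reference eruption with the closest chemistry."""
    return min(REFERENCES,
               key=lambda name: np.linalg.norm(REFERENCES[name] - shard_composition))

shard = np.array([60.5, 18.7, 6.6])  # measured composition of one cryptotephra shard
print(attribute_shard(shard))        # -> "Campanian Ignimbrite"
```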
The team collected samples containing CI cryptotephra from four central European caves where stone tools and other artifacts typical of Neandertals and modern humans have been found. They also gathered the particles from a modern human site in Libya and from marshland and marine sites in Greece and the Aegean Sea. The results, the team argues in a paper published online this week in the Proceedings of the National Academy of Sciences, are incompatible with the hypothesis that the CI was responsible for Neandertal extinction, at least in central Europe. The CI cryptotephra lie above, and so postdate, the transition from Neandertal to modern human stone tool types at all four central European sites, indicating that modern humans had replaced Neandertals before the catastrophic events of 40,000 years ago.
Moreover, analysis of tree pollen and other climatic indicators from the marsh and marine sediments confirmed that the CI was contemporaneous with a sharp cold spell called a Heinrich event, which is also often cited as a contributor to Neandertal extinction. So the data suggest that the eruption and the cold snap happened after the Neandertals had already vanished from central Europe.
“Climate was probably not directly responsible for Neandertal extinction, and catastrophic events most certainly were not,” says co-author William Davies, an archaeologist at the University of Southampton, Avenue Campus, in the United Kingdom. That leaves competition with modern humans as the most likely culprit, the team contends.
Nevertheless, the authors concede that their results are only directly applicable to central and probably Eastern Europe, and not to Western Europe, where some researchers have claimed that Neandertals hung on until at least 35,000 years ago in Portugal and Spain. Because the team has not been able to find cryptotephra that far west, “we cannot rule out the survival of Neandertals post-CI and post Heinrich … in refugia like the Iberian Peninsula,” says co-author Chris Stringer of the Natural History Museum in London. “But it must have been a very limited survival at best, as they headed to physical extinction.”
Read more at Wired Science
Pharaoh Snefru's Playground In the Desert
Pharaoh Snefru, the "King of the Pyramids," developed his building skills over a 2.3 square mile playground in the desert, according to a new study into the geology of the Dahshur royal necropolis in Egypt.
The first king of the 4th dynasty, Snefru (reigned 2575-2551 B.C.), built Egypt's first true pyramid at Dahshur after a couple of failures. His achievement was later overshadowed by his son Khufu, or Cheops, who built the Great Pyramid at Giza.
More than 3.5 million cubic meters (123 million cubic feet) of building material were mined and transported at Dahshur, some 20 miles from Cairo, yet very little evidence remains of what went on at the pyramid practice site some 4,500 years ago. Nature has wiped away virtually every trace of human activity.
To expose the ancient pyramid playground, a team of Earth scientists from Germany turned to fractals.
Fractals are natural or artificially created geometric patterns that appear to repeat themselves over and over when magnified.
Deltas created where rivers meet the ocean often display fractal properties. Dissected by river channels that drain into the floodplain of the Nile, the area around Dahshur was expected to show an abundance of natural fractals. The new study showed that wasn't really the case.
Arne Ramisch of the Freie Universität Berlin in Germany and colleagues from the German Archaeological Institute in Egypt created a digital model of the topography around Dahshur and investigated the region using fractal pattern recognition analysis.
The researchers discovered distinct differences "between natural and human-shaped areas," they wrote in the journal Quaternary International.
In particular, the researchers identified a huge non-fractal footprint around the pyramids.
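The article doesn't detail the team's pattern-recognition method, but a standard way to quantify how "fractal" a landscape feature is, is box counting: cover the feature with grids of decreasing cell size, count the occupied cells at each scale, and read the fractal dimension off the log-log slope. A minimal sketch, assuming a rasterized binary map of drainage channels as input:

```python
import numpy as np

def box_counting_dimension(binary_map: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a 2D binary feature map by box counting."""
    counts = []
    for size in sizes:
        # Trim the map so it tiles evenly into size x size boxes.
        h = (binary_map.shape[0] // size) * size
        w = (binary_map.shape[1] // size) * size
        trimmed = binary_map[:h, :w]
        # Count boxes containing at least one feature pixel.
        boxes = trimmed.reshape(h // size, size, w // size, size).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Dimension = slope of log(count) versus log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy example: a random "channel network" (real input would be a rasterized
# map of the river channels dissecting the Dahshur floodplain).
rng = np.random.default_rng(0)
channels = rng.random((256, 256)) < 0.05
print(f"Estimated fractal dimension: {box_counting_dimension(channels):.2f}")
```

In this framing, naturally eroded terrain yields consistent fractal statistics across scales, while quarried and leveled ground around the pyramids would stand out as a non-fractal anomaly.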
Read more at Discovery News
Did Ancient Warming Reunite Polar and Brown Bears?
Polar bears' past may echo their future, indicates a genetic study that finds the white-furred, sea ice-dwelling bears interbred with brown bears long after the two species separated as much as 5 million years ago.
Climate change likely drove this mixing among bears, writes the research team, noting there is evidence this is happening again.
"Maybe we're seeing a hint that in really warm times, polar bears changed their lifestyle and came into contact, and indeed interbred, with brown bears," said study researcher Stephan Schuster, a professor of biochemistry and molecular biology at Pennsylvania State University, and a research scientist at Nanyang Technological University in Singapore, in a statement.
The study estimates polar bears split from brown bears between 4 million and 5 million years ago, after which they endured fluctuations in climate, including ice ages and warmer times.
Polar bears are currently facing the effects of climate change, this time caused by humans, as the Arctic sea ice upon which they live recedes to unprecedented levels.
"If this trend continues, it is possible that future [polar bears] throughout most of their range may be forced to spend increasingly more time on land, perhaps even during the breeding season, and therefore come into contact with brown bears more frequently," the researchers write in results published today (July 23) in the journal Proceedings of the National Academy of Sciences.
"Recently, wild hybrids and even second-generation offspring have been documented in the Northern Beaufort Sea of Arctic Canada where the ranges of brown bears and [polar bears] appear to overlap, perhaps as a recent response to climatic changes," they write.
Schuster and colleagues sequenced the genomes (the complete genetic blueprint) of three brown bears and a black bear and compared them with the genomes of two polar bears, one modern and one obtained from 120,000-year-old remains.
Read more at Discovery News
Mysterious, Colorful Lobsters Being Caught
Lobsters sporting rare, unexpected colors and patterns are becoming more common in catches, and no one knows why.
Blue, pink, orange and even calico lobsters are winding up in traps. The orange ones are perhaps causing the most problems, since some chefs think they've already been cooked. But then the live, snapping crustacean reminds them otherwise.
Maybe social media is partly to blame?
"Are we seeing more because the Twitter sphere is active and people get excited about colorful lobsters?" Michael Tlusty, research director at the New England Aquarium in Boston, told Associated Press. "Is it because we're actually seeing an upswing in them? Is it just that we're catching more lobsters so we have the opportunity to see more?"
He added, "Right now you can make a lot of explanations, but the actual data to find them out just isn't there."
Information from NOAA points out that lobsters sometimes turn an odd color when they eat a single type of food. (That reminds me of Willy Wonka's Violet, the Blueberry Girl.) For lobsters, however, that phenomenon usually only happens in the lab.
In the ocean, blue lobsters appear as a genetic anomaly. I'm guessing the calico and other colored/patterned lobsters do as well. Supposedly, once cooked, they look and taste the same as a regular-hued lobster.
But why are there so many unusual colored ones now?
As AP mentions:
The odds of catching a blue lobster are 1-in-2 million, while orange comes in at 1-in-10 million. Yellow and orange-and-black calico lobsters have been pegged at 1-in-30 million, split-colored varieties at 1-in-50 million, and white -- the rarest of all -- at 1-in-100 million.
Such off-colored lobsters look as bizarre to other marine life as they do to us, so they are more visible to predators.
"But with the predator population down, notably cod, there might be greater survival rates among these color morphs that are visually easier to pick out," said Diana Cowan, executive director of The Lobster Conservancy.
Read more at Discovery News
Jul 23, 2012
Why Does a Vivid Memory 'Feel So Real?'
Neuroscientists have found strong evidence that vivid memory and directly experiencing the real moment can trigger similar brain activation patterns.
The study, led by Baycrest's Rotman Research Institute (RRI), in collaboration with the University of Texas at Dallas, is one of the most ambitious and complex yet for elucidating the brain's ability to evoke a memory by reactivating the parts of the brain that were engaged during the original perceptual experience. Researchers found that vivid memory and real perceptual experience share "striking" similarities at the neural level, although they are not "pixel-perfect" brain pattern replications.
The study appears online this month in the Journal of Cognitive Neuroscience, ahead of print publication.
"When we mentally replay an episode we've experienced, it can feel like we are transported back in time and re-living that moment again," said Dr. Brad Buchsbaum, lead investigator and scientist with Baycrest's RRI. "Our study has confirmed that complex, multi-featured memory involves a partial reinstatement of the whole pattern of brain activity that is evoked during initial perception of the experience. This helps to explain why vivid memory can feel so real."
But vivid memory rarely fools us into believing we are in the real, external world -- and that in itself offers a very powerful clue that the two cognitive operations don't work exactly the same way in the brain, he explained.
In the study, Dr. Buchsbaum's team used functional magnetic resonance imaging (fMRI), a powerful brain scanning technology that constructs computerized images of brain areas that are active when a person is performing a specific cognitive task. A group of 20 healthy adults (aged 18 to 36) were scanned while they watched 12 video clips, each nine seconds long, sourced from YouTube.com and Vimeo.com. The clips contained a diversity of content -- such as music, faces, human emotion, animals, and outdoor scenery. Participants were instructed to pay close attention to each of the videos (which were repeated 27 times) and informed they would be tested on the content of the videos after the scan.
A subset of nine participants from the original group were then selected to complete intensive and structured memory training over several weeks that required practicing over and over again the mental replaying of videos they had watched from the first session. After the training, this group was scanned again as they mentally replayed each video clip. To trigger their memory for a particular clip, they were trained to associate a particular symbolic cue with each one. Following each mental replay, participants would push a button indicating on a scale of 1 to 4 (1 = poor memory, 4 = excellent memory) how well they thought they had recalled a particular clip.
Dr. Buchsbaum's team found "clear evidence" that patterns of distributed brain activation during vivid memory mimicked the patterns evoked during sensory perception when the videos were viewed -- by a correspondence of 91% after a principal components analysis of all the fMRI imaging data.
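The paper's full pipeline isn't described here, but the core idea of pattern similarity can be sketched as correlating the multi-voxel activity pattern evoked by watching a clip with the pattern evoked by mentally replaying it. The toy sketch below uses simulated data and a plain Pearson correlation; the actual study worked on preprocessed fMRI volumes and applied a principal components analysis first:

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels = 5000
perception = rng.standard_normal(n_voxels)   # pattern while watching a clip
# Simulate recall as a noisy, partial reinstatement of the perception pattern.
recall = 0.8 * perception + 0.6 * rng.standard_normal(n_voxels)

# Pearson correlation between the two multi-voxel patterns.
similarity = np.corrcoef(perception, recall)[0, 1]
print(f"Perception-recall pattern similarity: r = {similarity:.2f}")
```

A high but imperfect correlation, as here, is the signature of "partial reinstatement": recall reactivates much of the perceptual pattern without replicating it pixel-perfectly.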
The so-called "hot spots" -- the areas of greatest pattern similarity -- occurred in sensory and motor association areas of the cerebral cortex, a region that plays a key role in memory, attention, perceptual awareness, thought, language and consciousness.
Dr. Buchsbaum suggested the imaging analysis used in his study could potentially add to the current battery of memory assessment tools available to clinicians. Brain activation patterns from fMRI data could offer an objective way of quantifying whether a patient's self-report of their memory as "being good or vivid" is accurate or not.
Read more at Science Daily
Infants Can Use Language to Learn About People's Intentions
Infants are able to detect how speech communicates unobservable intentions, researchers at New York University and McGill University have found in a study that sheds new light on how early in life we can rely on language to acquire knowledge about matters that go beyond first-hand experiences.
Their findings appear in the Proceedings of the National Academy of Sciences (PNAS).
"Much of what we know about the world does not come from our own experiences, so we have to obtain this information indirectly -- from books, the news media, and conversation," explained Athena Vouloumanos, an assistant professor at NYU and one of the study's co-authors. "Our results show infants can acquire knowledge in much the same way -- through language, or, specifically, spoken descriptions of phenomena they haven't -- or that can't be -- directly observed."
The study's other co-authors were Kristine Onishi, an associate professor in the Department of Psychology at Canada's McGill University, and Amanda Pogue, a former research assistant at NYU who is now a graduate student at the University of Waterloo.
Previous scholarship has established that infants seem to understand that speech can be used to categorize and communicate about observable entities such as objects and people. But no study has directly examined whether infants recognize that speech can communicate about unobservable aspects.
In the PNAS study, the researchers sought to determine if one-year-old infants could recognize that speech can communicate about one unobservable phenomenon that is crucial for understanding social interactions: a person's intentions.
To explore this question, the researchers had adults act out short scenarios for the infants. Some scenes ended predictably (that is, with an ending that is congruent with our understanding of the world) while others ended unpredictably (that is, incongruently).
The researchers employed a commonly used method to measure infants' detection of incongruent scenes: looking longer at an incongruent scene.
Infants saw an adult actor (the communicator) attempt, but fail, to stack a ring on a funnel because the funnel was just out of reach. Previous research showed that infants would interpret the actor's failed behavior as signaling the actor's underlying intention to stack the ring. The experimenters then introduced a second actor (the recipient) who was able to reach all the objects. In the key test scene, the communicator turned to the recipient and uttered either a novel word unknown to infants ("koba") or coughed.
Although infants always knew the communicator's intention (through observing her prior failed stacking attempts), the recipient only sometimes had the requisite information to accomplish the communicator's intended action -- specifically, when the communicator vocalized appropriately using speech, but not when she coughed.
If infants understood that speech -- but not non-speech -- could transfer information about an intention, when the communicator used speech and the recipient responded by stacking the ring on the funnel, infants should treat this as a congruent outcome. Results confirmed this prediction. The infants looked longer when the recipient performed a different action, such as imitating the communicators' prior failed movements or stacking the ring somewhere other than on the funnel, suggesting they treated these as incongruent, or surprising, outcomes.
Because coughing doesn't communicate intentions, infants looked equally no matter what the recipient's response was.
"As adults, when we hear people speaking, we have the intuition that they're providing information to one another, even when we don't understand the language being spoken. And it's the same for infants," Onishi said. "Even when they don't understand the meaning of the specific words they hear, they realize that words -- like our nonsense word 'koba' -- can provide information in a way that coughing cannot."
"What's significant about this is it tells us that infants have access to another channel of communication that we previously didn't know they had," added Vouloumanos.
Read more at Science Daily
Undead: The Rabies Virus Remains a Medical Mystery
Today, though, Precious is back just to visit. In the halls of the pediatric ward, where zoo animals cavort in backlit photos, doing their best to dispel the hospital pall, the nurses who treated Precious greet her with delight. She does not remember them at all. But she speaks shyly to each, listening as they recount to her, in turn, their roles in rescuing her. She grows more talkative when describing the life she has resumed back in Willow Creek, in the wilds of California’s Humboldt County. To get in shape for the peewee wrestling season, Precious has been running laps in the long driveway of the farm where she lives with her siblings and grandparents. She also has resumed her pursuit of “mutton bustin’,” a sport in which kids ride rodeo-style on the backs of frantic sheep for as long as they can; at a recent match, she took home the third-place purse of $23.
Precious’ brush with death began with a simple flu-like illness that soon was accompanied by some odd symptoms: head and neck pain, weakness in her legs. At the hospital, a nurse asked her to drink something, but she choked, unable to swallow the fluid. “She looked at me like ‘Grandma, please help,’” her grandmother, Shirlee Roby, recalls. “I could tell this was no damn flu.” Her symptoms were so severe that the local hospital decided to transfer her by helicopter to UC Davis. When the state health department heard the symptoms and the fact that the patient had come from rural Humboldt County, it immediately suspected rabies. Lab tests confirmed the diagnosis: Precious had antibodies against the disease in her blood serum and cerebrospinal fluid, an impossibility in the absence of infection or vaccination. As it turned out, a feral cat had bitten her a few weeks before as she played outside her elementary school. But no one had thought to treat her at the time, and now it was too late for the standard intervention against rabies—a vaccine, administered in multiple shots over the course of two weeks, that allows the body to mount an immune response before the virus reaches the brain. In Precious’ case, it was clear that her brain had already been infected.
Not long ago, the medical response to this grim situation would have been little more than “comfort care”: administration of sedatives and painkillers to ease the suffering. Untreated, this suffering can be unbearable to watch, let alone experience. That telltale difficulty in swallowing, known as hydrophobia, results in desperately thirsty patients whose bodies rebel involuntarily whenever drink is brought to their lips. Soon fevers spike, and the victims are subject to violent convulsions as well as sudden bouts of aggression; their cries of agony, as expressed through a spasming throat, can produce the impression of an almost animal bark. Eventually the part of the brain that controls autonomic functions, like respiration and circulation, stops working, and the patients either suffocate or die in cardiac arrest. A decade ago, the only choice was to sedate them so their deaths would arrive with as little misery as possible.
But today, after millennia of futility, hospitals have an actual treatment to try. It was developed in 2004 by a pediatrician in Milwaukee named Rodney Willoughby, who, like the vast majority of American doctors, had never seen a case of rabies before. (In the US, there are usually fewer than five per year.) Yet Willoughby managed to save a young rabies patient, a girl of 15, by using drugs to induce a deep, week-long coma and then carefully bringing her out of it. It was the first documented case of a human surviving rabies without at least some vaccination before the onset of symptoms. Soon Willoughby posted his regimen online, and he worked with hospitals around the world to repeat and refine its use. Now referred to as the Milwaukee protocol, his methodology has continued to show limited success: Of 41 attempts worldwide, five more patients have pulled through, including Precious, whose recovery has been the most impressive of any victim to date.
Read more at Wired Science
The Pioneer Anomaly: a Wild Goose Chase?
Everyone loves a good mystery. And some science mysteries are so strange that they take on legendary status.
The so-called Pioneer Anomaly -- which at first seemed to challenge the laws of physics -- is a case study of when it's best to bank on the simplest explanation for even the weirdest of observations.
Small yet odd perturbations in the velocities of a pair of Pioneer spacecraft have spawned numerous science papers and conference discussions over the past two decades. In the end, it looks like the solution is rather mundane.
Nevertheless, the spooky Pioneer story became a magnet for exotic as well as plain kooky ideas. Commentators in some discussion groups have even tried to link it to Earth's ice ages and to an ad hoc idea called "fractal gravity." Creationists have glommed onto the mystery to try to demonstrate that "secular" scientists are wrong to ignore so-called biblical cosmology.
The pair of Pioneer spacecraft, launched in the early 1970s to explore the outer solar system, are among an exclusive NASA fleet of five robotic "starships" that are moving fast enough to escape the sun's gravitational pull and drift through our galaxy forever.
Pioneer even made a cameo appearance in a Star Trek movie, in which Klingons find it in interstellar space and shoot it for target practice.
Now over 7 billion miles from Earth (10 light-hours), Pioneer 10 and 11 serve as "test particles" for measuring the effects of gravity on manmade objects over very large distances. Such a test has never before been possible.
In the 1980s several research teams independently measured what was interpreted as an infinitesimal deceleration of both Pioneers, which are streaking away from us in nearly opposite directions. The amount was inconsequential by engineering standards, but it represented a huge discrepancy from the predictions of the laws of gravity.
The direction of the anomalous force had also come under question: is it really in the sun's direction, or Earth's, or along the spacecraft's spin axis or velocity direction?
Scientists began toying with exotic theories to explain the anomaly. Perhaps the laws of gravity needed to be modified. Was dark matter in our local neighborhood tugging on the Pioneers? Or did it have an even deeper implication for cosmology? One idea was that a localized blob of dark matter could be trapped in the sun's gravitational field. Any effects from dark energy would be far too small to explain the Pioneer motion.
The anomalous acceleration is nearly equal to the value calculated (in the same units) by multiplying the speed of light by the expansion rate of the universe. Without any clear causal link, the mathematical tie can best be dismissed as coincidence. It's just pseudo-scientific numerology. For example, the ratio of the perimeter to twice the altitude of the Great Pyramid of Giza is equal to the mathematical value of Pi. The height of the pyramid multiplied by 100 million yields the distance from Earth to the sun. So what?
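The arithmetic behind the supposed coincidence is easy to check. Here is a quick sketch in Python, assuming the commonly cited anomaly value of about 8.74 x 10^-10 m/s^2 and a Hubble constant near 70 km/s/Mpc (neither figure appears in the article itself):

# Multiplying c by the Hubble constant gives a quantity with units of
# acceleration, in the same ballpark as the Pioneer anomaly -- and that
# order-of-magnitude agreement is the entire "coincidence."
c  = 2.998e8                  # speed of light, m/s
H0 = 70 * 1000 / 3.086e22     # ~70 km/s/Mpc converted to 1/s
a_pioneer = 8.74e-10          # reported anomalous acceleration, m/s^2

print(c * H0)                 # ~6.8e-10 m/s^2
print(c * H0 / a_pioneer)     # ~0.78 -- close, but hardly an exact match

The two numbers differ by more than 20 percent, precisely the kind of loose agreement that coincidences routinely produce.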
Creation scientist Russell Humphreys has written extensively that the Pioneer Anomaly bolsters biblical scripture by demonstrating there really is a center to the universe, and that Earth must be near it. He reasons that the starbound Pioneers are being pulled back toward the center of the universe, like a hiker struggling up a steep slope. He envisions the space-time fabric of an 8,000-year-old universe relaxing like a worn bed mattress, with the Pioneer velocity change reflecting this. However, a century's worth of cosmological observation demonstrates that the universe has no center, and the idea is anti-Copernican to boot.
Simple explanations for the Pioneer Anomaly, dating back to the late 1990s, looked at non-gravitational forces produced by the spacecraft itself from thermal and electrical sources. Pioneer's electronics give off about 100 watts of heat, and the radioactive plutonium-238 power source puts out roughly 2.5 kilowatts. The nuclear "battery" sits on a boom extending from one side of the 550-pound spacecraft, so one side of the vehicle is slightly warmed and the escaping heat imparts a tiny "thermal recoil" force.
Slava Turyshev of NASA’s Jet Propulsion Laboratory proposed the recoil theory several years ago. Since then he has extracted more archival data from Pioneer's tracking. The smoking gun, described in a recently published paper, is that the data show a gradual drop in Pioneer's anomalous motion -- exactly what would be predicted if thermal recoil is the culprit, since the roughly 10 pounds of plutonium aboard each Pioneer cools as it decays exponentially.
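A back-of-the-envelope estimate shows why waste heat is sufficient. This sketch assumes the commonly cited anomaly value and treats the recoil as if the excess heat were radiated in a single direction; the published analysis rests on a far more detailed thermal model.

import math

m = 550 * 0.4536      # spacecraft mass in kg, from the 550-pound figure (~250 kg)
c = 2.998e8           # speed of light, m/s
a = 8.74e-10          # anomalous acceleration, m/s^2 (commonly cited value)

# Photons carry momentum, so radiating power P in one direction pushes the
# spacecraft with force P/c. Anisotropic power needed to produce the anomaly:
p_needed = a * m * c
print(p_needed)                   # ~65 watts
print(p_needed / (2500 + 100))    # ~2.5% of the RTG-plus-electronics heat budget

# Plutonium-238 has an 87.7-year half-life, so the recoil force -- and the
# anomaly -- should fade on the same timescale, as the tracking data show:
print(math.exp(-math.log(2) * 30 / 87.7))   # ~0.79 of original output after 30 years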
Both Pioneer 10 and 11 would show exactly the same anomaly because they are identically built. But what about testing other spacecraft?
The New Horizons probe blazing its way to Pluto should also show small peculiarities from the heat of its nuclear power source, though its tracking is not as precise as the Pioneers'. The two Voyager spacecraft are less sensitive to the effect seen on Pioneer because thrusters stabilize them along three axes, whereas the Pioneers rely on spinning to stay stable. Other solar system spacecraft are in the wrong orbits, have larger nuclear power sources, or perform frequent maneuvers.
"For the foreseeable future Pioneer 10 and 11 remain the largest scale precision gravitational experiment ever conducted," wrote Victor Toth (Perimeter Institute for Theoretical Physics, Waterloo, Ontario Canada) in 2011. "Far more likely this (Pioneer anomaly) was just a wild goose chase."
The lessons learned: there are limits to our tracking and navigational accuracy, long-term archiving of spacecraft data is critical, and estimates of the small forces acting on a spacecraft must be made with great care.
Read more at Discovery News
Jul 22, 2012
New Clues to the Early Solar System from Ancient Meteorites
In order to understand Earth's earliest history -- its formation from Solar System material into the present-day layering of metal core, mantle, and crust -- scientists look to meteorites. New research from a team including Carnegie's Doug Rumble and Liping Qin focuses on one particularly old type of meteorite called diogenites. These samples were examined using an array of techniques, including precise analysis of certain elements, for important clues to some of the Solar System's earliest chemical processing.
Their work is published online July 22 by Nature Geoscience.
At some point after terrestrial planets or large bodies accreted from surrounding Solar System material, they differentiated into a metallic core, a silicate mantle, and a crust. This involved a great deal of heating. The sources of this heat were the decay of short-lived radioisotopes, the energy released when dense metals physically separated from lighter silicates, and the impacts of large objects. Studies indicate that the Earth's and Moon's mantles may have formed more than 4.4 billion years ago, and Mars's more than 4.5 billion years ago.
Theoretically, when a planet or large body differentiates enough to form a core, certain elements including osmium, iridium, ruthenium, platinum, palladium, and rhenium -- known as highly siderophile elements -- are segregated into the core. But studies show that the mantles of Earth, the Moon, and Mars contain more of these elements than they should. Scientists have several theories about why this is the case, and the research team -- which included lead author James Day of Scripps Institution of Oceanography and Richard Walker of the University of Maryland -- set out to explore these theories by looking at diogenite meteorites.
Diogenites are a kind of meteorite that may have come from the asteroid Vesta or a similar body. They represent some of the Solar System's oldest surviving examples of heat-related chemical processing. What's more, Vesta and the other parent bodies were large enough to have undergone a degree of differentiation similar to Earth's, thus forming a kind of scale model of a terrestrial planet.
The team examined seven diogenites from Antarctica and two that landed in the African desert. They were able to confirm that these samples came from no fewer than two parent bodies and that the crystallization of their minerals occurred about 4.6 billion years ago, only 2 million years after condensation of the oldest solids in the Solar System.
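This summary does not say which isotope system dated the crystallization, but the arithmetic of any short-lived chronometer is the same. As an illustration only, here is a sketch assuming an aluminum-26-style clock, with its roughly 0.72-million-year half-life and made-up isotope ratios:

import math

t_half = 0.72e6                 # aluminum-26 half-life in years (a known value)
r_initial = 5.0e-5              # canonical initial 26Al/27Al ratio in the oldest solids
r_sample  = 0.25 * r_initial    # hypothetical ratio inferred for the meteorite minerals

# A lower surviving ratio means more time elapsed since the oldest solids formed:
dt = (t_half / math.log(2)) * math.log(r_initial / r_sample)
print(dt / 1e6)                 # ~1.4 million years after the oldest solids

With real measured ratios, the same decay law is what pins the diogenites' crystallization to about 2 million years after the Solar System's first solids.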
Examination of the samples determined that the highly siderophile elements present in the diogenite meteorites were there when the rocks formed, which could only occur if late addition, or 'accretion,' of these elements took place after core formation. This timing of late accretion is earlier than previously thought, and much earlier than similar processes are thought to have occurred on Earth, Mars, or the Moon.
Remarkably, these results demonstrate that accretion, core formation, primary differentiation, and late accretion were all accomplished in just over 2 to 3 million years on some parent bodies. In the case of Earth, there followed crust formation, the development of an atmosphere, and plate tectonics, among other geologic processes, so the evidence for this early period is no longer preserved.
Read more at Science Daily
Behold, the Artificial Jellyfish: Researchers Create Moving Model, Using Silicone Polymer and Heart Muscle Cells
Using recent advances in marine biomechanics, materials science, and tissue engineering, a team of researchers at Harvard University and the California Institute of Technology (Caltech) has turned inanimate silicone and living cardiac muscle cells into a freely swimming "jellyfish."
The finding serves as a proof of concept for reverse engineering a variety of muscular organs and simple life forms. It also suggests a broader definition of what counts as synthetic life in an emerging field that has primarily focused on replicating life's building blocks.
The researchers' method for building the tissue-engineered jellyfish, dubbed "Medusoid," was published in a Nature Biotechnology paper on July 22.
An expert in cell- and tissue-powered actuators, coauthor Kevin Kit Parker has previously demonstrated bioengineered constructs that can grip, pump, and even walk. The inspiration to raise the bar and mimic a jellyfish came out of his own frustration with the state of the cardiac field.
Similar to the way a human heart moves blood throughout the body, jellyfish propel themselves through the water by pumping. In figuring out how to take apart and then rebuild the primary motor function of a jellyfish, the aim was to gain new insights into how such pumps really worked.
"It occurred to me in 2007 that we might have failed to understand the fundamental laws of muscular pumps," says Parker, Tarr Family Professor of Bioengineering and Applied Physics at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard. "I started looking at marine organisms that pump to survive. Then I saw a jellyfish at the New England Aquarium and I immediately noted both similarities and differences between how the jellyfish and the human heart pump."
To build the Medusoid, Parker collaborated with Janna Nawroth, a doctoral student in biology at Caltech and lead author of the study, who performed the work as a visiting researcher in Parker's lab. They also worked with Nawroth's adviser, John Dabiri, a professor of aeronautics and bioengineering at Caltech, who is an expert in biological propulsion.
"A big goal of our study was to advance tissue engineering," says Nawroth. "In many ways, it is still a very qualitative art, with people trying to copy a tissue or organ just based on what they think is important or what they see as the major components -- without necessarily understanding if those components are relevant to the desired function or without analyzing first how different materials could be used."
It turned out that jellyfish, believed to be the oldest multi-organ animals in the world, were an ideal subject, as they use muscles to pump their way through water, and their basic morphology is similar to that of a beating human heart.
To reverse engineer a medusa jellyfish, the investigators used analysis tools borrowed from the fields of law enforcement biometrics and crystallography to make maps of the alignment of subcellular protein networks within all of the muscle cells within the animal. They then conducted studies to understand the electrophysiological triggering of jellyfish propulsion and the biomechanics of the propulsive stroke itself.
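The summary does not spell out the mapping pipeline, but a standard way to quantify such alignment is an orientational order parameter: 1.0 for perfectly parallel fibers, near 0.0 for randomly oriented ones. A minimal sketch with invented orientation data:

import math

# Hypothetical fiber orientations in radians, e.g. extracted from stained images.
angles = [0.05, -0.10, 0.00, 0.12, -0.07]

def order_parameter(angles):
    # Magnitude of the mean doubled-angle vector; doubling the angles makes
    # the measure insensitive to fiber polarity (0 and pi count as aligned).
    cx = sum(math.cos(2 * a) for a in angles) / len(angles)
    sx = sum(math.sin(2 * a) for a in angles) / len(angles)
    return math.hypot(cx, sx)

print(order_parameter(angles))   # ~0.99 -- these nearly parallel fibers are highly aligned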
Based on this understanding, it turned out that a sheet of cultured rat heart muscle tissue, which contracts when electrically stimulated in a liquid environment, was the perfect raw material for an ersatz jellyfish. The team then fashioned the silicone polymer that forms the body of the artificial creature into a thin membrane resembling a small jellyfish, with eight arm-like appendages.
Using the same analysis tools, the investigators were able to quantitatively match the subcellular, cellular, and supracellular architecture of the jellyfish musculature with the rat heart muscle cells.
The artificial construct was placed in a container of ocean-like salt water and shocked into swimming with synchronized muscle contractions that mimic those of real jellyfish. (In fact, the muscle cells started to contract a bit on their own even before the electrical current was applied.)
"I was surprised that with relatively few components -- a silicone base and cells that we arranged -- we were able to reproduce some pretty complex swimming and feeding behaviors that you see in biological jellyfish," says Dabiri.
Their design strategy, they say, will be broadly applicable to the reverse engineering of muscular organs in humans.
"As engineers, we are very comfortable with building things out of steel, copper, concrete," says Parker. "I think of cells as another kind of building substrate, but we need rigorous quantitative design specs to move tissue engineering to a reproducible type of engineering. The jellyfish provides a design algorithm for reverse engineering an organ's function and developing quantitative design and performance specifications. We can complete the full exercise of the engineer's design process: design, build, and test."
In addition to advancing the field of tissue engineering, Parker adds that he took on the challenge of building a creature to challenge the traditional view of synthetic biology, which is "focused on genetic manipulations of cells." Instead of building just a cell, he sought to "build a beast."
Looking forward, the researchers aim to further evolve the artificial jellyfish, allowing it to turn and move in a particular direction, and even incorporating a simple "brain" so it can respond to its environment and replicate more advanced behaviors like heading toward a light source and seeking energy or food.
Read more at Science Daily