Aug 4, 2012

Hubble Sees a Ten Billion Year Stellar Dance

The NASA/ESA Hubble Space Telescope offers a delightful view of the crowded stellar encampment called Messier 68, a spherical, star-filled region of space known as a globular cluster. Mutual gravitational attraction amongst a cluster's hundreds of thousands or even millions of stars keeps stellar members in check, allowing globular clusters to hang together for many billions of years.

Astronomers can measure the ages of globular clusters by looking at the light of their constituent stars. The chemical elements leave signatures in this light, and the starlight reveals that globular clusters' stars typically contain fewer heavy elements, such as carbon, oxygen and iron, than stars like the Sun. Since successive generations of stars gradually create these elements through nuclear fusion, stars having fewer of them are relics of earlier epochs in the Universe. Indeed, the stars in globular clusters rank among the oldest on record, dating back more than 10 billion years.

More than 150 of these objects surround our Milky Way galaxy. On a galactic scale, globular clusters are indeed not all that big. In Messier 68's case, its constituent stars span a volume of space with a diameter of little more than a hundred light-years. The disc of the Milky Way, on the other hand, extends over some 100,000 light-years or more.

Messier 68 is located about 33,000 light-years from Earth in the constellation Hydra (the female water snake). French astronomer Charles Messier notched the object as the sixty-eighth entry in his famous catalogue in 1780.

Read more at Science Daily

Signs Changing Fast for Voyager at Solar System Edge

Two of three key signs of changes expected to occur at the boundary of interstellar space have changed faster than at any other time in the last seven years, according to new data from NASA's Voyager 1 spacecraft.

For the last seven years, Voyager 1 has been exploring the outer layer of the bubble of charged particles the sun blows around itself. In one day, on July 28, data from Voyager 1's cosmic ray instrument showed the level of high-energy cosmic rays originating from outside our solar system jumped by five percent. During the last half of that same day, the level of lower-energy particles originating from inside our solar system dropped by half. However, in three days, the levels had recovered to near their previous levels.

A third key sign is the direction of the magnetic field, and scientists are eagerly analyzing the data to see whether that has, indeed, changed direction. Scientists expect that all three of these signs will have changed when Voyager 1 has crossed into interstellar space. A preliminary analysis of the latest magnetic field data is expected to be available in the next month.

"These are thrilling times for the Voyager team as we try to understand the quickening pace of changes as Voyager 1 approaches the edge of interstellar space," said Edward Stone, the Voyager project scientist based at the California Institute of Technology, Pasadena, Calif. "We are certainly in a new region at the edge of the solar system where things are changing rapidly. But we are not yet able to say that Voyager 1 has entered interstellar space."

The levels of high-energy cosmic ray particles have been increasing for years, but more slowly than they are now. The last jump -- of five percent -- took one week in May. The levels of lower-energy particles from inside our solar system have been slowly decreasing for the last two years. Scientists expect that the lower-energy particles will drop close to zero when Voyager 1 finally crosses into interstellar space.

"The increase and the decrease are sharper than we've seen before, but that's also what we said about the May data," Stone said. "The data are changing in ways that we didn't expect, but Voyager has always surprised us with new discoveries."

Voyager 1, which launched on Sept. 5, 1977, is 11 billion miles (18 billion kilometers) from the sun. Voyager 2, which launched on Aug. 20, 1977, is close behind, at 9.3 billion miles (15 billion kilometers) from the sun.
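For a sense of scale, the article's 18-billion-kilometer figure can be converted into astronomical units and light-travel time. This is a back-of-the-envelope conversion using standard constants, not a NASA figure:

```python
# Back-of-the-envelope conversion of Voyager 1's distance (18 billion km,
# from the article) into astronomical units and light-travel time.
km = 18e9
AU_KM = 149_597_870.7     # one astronomical unit, in km
C_KM_S = 299_792.458      # speed of light, in km/s

au = km / AU_KM
light_hours = km / C_KM_S / 3600
print(f"~{au:.0f} AU, ~{light_hours:.1f} light-hours from the sun")
```

In other words, roughly 120 times the Earth-sun distance: a radio signal from Voyager 1 takes the better part of a day to reach home.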

"Our two veteran Voyager spacecraft are hale and healthy as they near the 35th anniversary of their launch," said Suzanne Dodd, Voyager project manager based at NASA's Jet Propulsion Laboratory, Pasadena. "We know they will cross into interstellar space. It's just a question of when."

Read more at Science Daily

Aug 3, 2012

Birds That Live With Varying Weather Sing More Versatile Songs

A new study of North American songbirds reveals that birds that live with fluctuating weather are more flexible singers.

Mixing it up helps birds ensure that their songs are heard no matter what the habitat, say researchers at Australian National University and the National Evolutionary Synthesis Center.

To test the idea, the researchers analyzed song recordings from more than 400 male birds spanning 44 species of North American songbirds -- a data set that included orioles, blackbirds, warblers, sparrows, cardinals, finches, chickadees and thrushes.

They used computer software to convert each sound recording -- a medley of whistles, warbles, cheeps, chirps, trills and twitters -- into a spectrogram, or sound graph. Like a musical score, the complex pattern of lines and streaks in a spectrogram enables scientists to see and visually analyze each snippet of sound.
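The recording-to-spectrogram step can be sketched with a naive short-time Fourier transform. The synthetic two-tone "song" and frame sizes below are assumptions for illustration, not the researchers' actual software, which would use FFTs and windowing:

```python
import math

def spectrogram(signal, window=64, hop=32):
    """Magnitude spectrogram via a naive per-frame DFT (illustration only)."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window]
        mags = []
        for k in range(window // 2):  # non-negative frequency bins
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / window) for n in range(window))
            im = -sum(frame[n] * math.sin(2 * math.pi * k * n / window) for n in range(window))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

# A synthetic "song": a 500 Hz whistle followed by a 2000 Hz one (8 kHz sampling).
sr = 8000
low = [math.sin(2 * math.pi * 500 * t / sr) for t in range(1024)]
high = [math.sin(2 * math.pi * 2000 * t / sr) for t in range(1024)]
spec = spectrogram(low + high)

first, last = spec[0], spec[-1]
print("peak bin moves from", first.index(max(first)), "to", last.index(max(last)))
```

The dominant frequency bin jumps between the two halves of the signal, which is exactly the kind of pitch change that shows up as a streak moving up the spectrogram.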

For each bird in their data set, they measured song characteristics such as length, highest and lowest notes, number of notes, and the spacing between them.
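Those song characteristics are straightforward to compute once notes have been extracted. A minimal sketch, assuming a hypothetical list of (start time, pitch) pairs rather than the researchers' actual data format:

```python
# Hypothetical note list: (start time in seconds, pitch in Hz) per note.
# The format and numbers are invented; the features mirror those in the study.
def song_features(notes):
    times = [t for t, _ in notes]
    pitches = [p for _, p in notes]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "length_s": times[-1] - times[0],
        "highest_hz": max(pitches),
        "lowest_hz": min(pitches),
        "note_count": len(notes),
        "mean_gap_s": sum(gaps) / len(gaps),
    }

song = [(0.0, 2200), (0.4, 3100), (0.9, 1800), (1.5, 2600)]
print(song_features(song))
```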

When they combined this data with temperature and precipitation records and other information such as habitat and latitude, they found a surprising pattern -- males that experience more dramatic seasonal swings between wet and dry sing more variable songs.

"They may sing certain notes really low, or really high, or they may adjust the loudness or tempo," said co-author Clinton Francis of the National Evolutionary Synthesis Center.

The Pyrrhuloxia, or desert cardinal, of the American southwest and northern Mexico, and Lawrence's goldfinch of California are two examples.

In addition to variation in weather across the seasons, the researchers also looked at geographic variation and found a similar pattern. Namely, species that experience more extreme differences in precipitation from one location to the next across their range sing more complex tunes. House finches and plumbeous vireos are two examples, Francis said.

Why might this be?

"Precipitation is closely related to how densely vegetated the habitat is," said co-author Iliana Medina of Australian National University. Changing vegetation means changing acoustic conditions.

"Sound transmits differently through different vegetation types," Francis explained. "Often when birds arrive at their breeding grounds in the spring, for example, there are hardly any leaves on the trees. Over the course of just a couple of weeks, the sound transmission changes drastically as the leaves come in."

"Birds that have more flexibility in their songs may be better able to cope with the different acoustic environments they experience throughout the year," Medina added.

A separate team reported similar links between environment and birdsong in mockingbirds in 2009, but this is the first study to show that the pattern holds up across dozens of species.

Read more at Science Daily

Bilingualism 'Can Increase Mental Agility'

Bilingual children outperform children who speak only one language in problem-solving skills and creative thinking, according to research led at the University of Strathclyde.

A study of primary school pupils who spoke English or Italian -- half of whom also spoke Gaelic or Sardinian -- found that the bilingual children were significantly more successful in the tasks set for them. The Gaelic-speaking children were, in turn, more successful than the Sardinian speakers.

The differences were linked to the mental alertness required to switch between languages, which could develop skills useful in other types of thinking. The further advantage for Gaelic-speaking children may have been due to the formal teaching of the language and its extensive literature.

In contrast, Sardinian is not widely taught in schools on the Italian island and has a largely oral tradition, which means there is currently no standardised form of the language.

Dr Fraser Lauchlan, an Honorary Lecturer at the University of Strathclyde’s School of Psychological Sciences & Health, led the research. It was conducted with colleagues at the University of Cagliari in Sardinia, where he is a Visiting Professor.

Dr Lauchlan said: “Bilingualism is now largely seen as being beneficial to children but there remains a view that it can be confusing, and so potentially detrimental to them.

“Our study has found that it can have demonstrable benefits, not only in language but in arithmetic, problem solving and enabling children to think creatively. We also assessed the children’s vocabulary, not so much for their knowledge of words as their understanding of them. Again, there was a marked difference in the level of detail and richness in description from the bilingual pupils.  

“We also found they had an aptitude for selective attention -- the ability to identify and focus on information which is important, while filtering out what is not -- which could come from the ‘code-switching’ of thinking in two different languages.”

In the study, a total of 121 children in Scotland and Sardinia -- 62 of them bilingual -- were set tasks in which they were asked to reproduce patterns of coloured blocks, to repeat orally a series of numbers, to give clear definitions of words and to resolve mentally a set of arithmetic problems. The tasks were all set in English or Italian and the children taking part were aged around nine.

Read more at Science Daily

What You Don't Know Can Hurt You

Is it possible for a health care system to redesign its services to better educate patients to deal with their immediate health issues and also become more savvy consumers of medicine in the long run?

The answer is yes, according to a study led by scientists at the University of California, San Francisco (UCSF) and San Francisco General Hospital and Trauma Center (SFGH) that was recently reported by the Institute of Medicine (IOM).

The team's paper describes ten attributes that health care organizations should embody to make it easier for people to navigate health information, make sense of services and better manage their own health -- assistance for which there is a profound societal need.

Some 77 million people in the United States have difficulty understanding even very basic health information, which clouds their ability to follow doctors' recommendations, and millions more simply lack the skills necessary to make clear, informed decisions about their own health care, said senior author Dean Schillinger, MD, a UCSF professor of medicine, chief of the Division of General Internal Medicine at SFGH, and director of the Health Communications Program at the UCSF Center for Vulnerable Populations at SFGH.

"Depending on how you define it, nearly half the U.S. population has poor health literacy skills," he said.

"Over the last two decades, we have focused on what patients can do to improve their health literacy," he said. "In this report, we look at the other side of the health literacy coin, and focus on what health care systems can do."

Emerging from an IOM Roundtable that brought together leaders from academia, industry, government agencies, non-profit organizations and patient and consumer interest groups, the new paper examines the programs, practices, attitudes and attributes of organizations that create environments that foster health literacy.

Why Health Literacy is So Important

The importance of enhancing health literacy has been demonstrated by numerous clinical studies over the years, said Schillinger, many of them carried out at UCSF. Health literacy is linked directly to patient wellness. People who are adept at understanding health information tend to make better choices, are better able to self-manage their chronic conditions, and have significantly better outcomes than people who do not.

Adults with low health literacy may find it especially difficult to navigate the healthcare system, and are more likely to have higher rates of serious medication errors, more emergency room visits and hospitalizations, gaps in their preventative care, increased likelihood of dying, and even poorer health outcomes for their children.

A number of health policy organizations have recognized that health literacy not only is important to individuals, but also benefits society, because helping patients help themselves is an important pathway to keeping down health care costs. Successful self-management reduces disease complications and can cut down on unnecessary emergency room visits and eliminate other wasteful spending.

Organizations that promote proper health literacy tend to do certain things very well. The ten attributes in the report include items such as:

  • Making improving health literacy a priority at every level of the organization;
     
  • Measuring health literacy and using those measurements to guide their practices;
     
  • Taking into account the particular needs of the populations they serve;
     
  • Avoiding stigmatizing people who lack health literacy;
     
  • Providing easy access to health information and assistance navigating services;

  • Distributing easy-to-understand information across print, audiovisual, and social media channels;
     
  • Taking health literacy into account when discussing medicines or in other high-risk situations by using proven educational techniques, such as the teach-back method;
     
  • Training the healthcare workforce in health communication techniques; and
     
  • Letting patients know what their insurance policies cover and what they are themselves responsible for paying.

Read more at Science Daily

Cheetah Sets New Land Speed Record, Beats Bolt by Nearly 4 Seconds

The fastest athlete this summer isn’t competing in London: She’s a cheetah at the Cincinnati Zoo.

The 11-year-old cheetah named Sarah ran 100 meters in 5.95 seconds, breaking her own record as fastest land mammal. In 2009, she ran it in 6.13 seconds.

By comparison, the world’s fastest man, Jamaican sprinter Usain Bolt, has a best time of 9.58 seconds for that distance -- more than three and a half seconds slower than Sarah.

The cheetah reached a top speed of 61 mph while chasing a furry dog toy on a course that was certified by the Road Running Technical Council of USA Track & Field.
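The arithmetic behind the comparison is simple. Note that the 61 mph figure is Sarah's peak speed; her average over the 100 meters works out lower:

```python
# Average speeds implied by the two 100 m times quoted above (the 61 mph
# figure in the article is Sarah's peak, not her average).
M_S_TO_MPH = 2.23694

def avg_mph(distance_m, seconds):
    return distance_m / seconds * M_S_TO_MPH

sarah = avg_mph(100, 5.95)
bolt = avg_mph(100, 9.58)
print(f"Sarah {sarah:.1f} mph vs Bolt {bolt:.1f} mph; margin {9.58 - 5.95:.2f} s")
```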

The run was captured on video and photographed by National Geographic, which is featuring Sarah and the Cincinnati Zoo’s other four cheetahs in its November issue. Documentation of the run was also sponsored by National Geographic’s Big Cats Initiative to preserve big cats in the wild through conservation, economic incentives and spreading public awareness.

Read more at Wired Science

Goose Bumps Never Lie

Goose bumps often turn out to be a skin orgasm of sorts, frequently resulting from an emotional climax stimulated by a "powerful other," according to new research.

A study exploring the scientific and social aspects of goose bumps finds that this common form of piloerection is associated with feelings of awe. This physical reaction also cannot be faked.

The study, published in the journal Motivation and Emotion, helps to explain how a defense mechanism protecting the body from cold also surfaces during moments of wonderment.

"We suggest that goose bumps may be the initial reaction: a blend of fear, surprise and defense, which is displaced by a positive appraisal made even more positive by the contrast from bad to good," co-author Richard Smith told Discovery News.

Goose bumps come at the intersection of our fight-or-flight response, even if the emotional jolt arises from something as seemingly harmless as a musical performance, Smith, a professor in the University of Kentucky's Department of Psychology, explained.

"[The] powerful other has the capacity to harm, but does not, assuming our submissive response," he said. "An initial 'fight' response, after a subsequent positive appraisal, precludes a 'flight' response. This positive response may be made all the stronger by the contrast."

From a physical standpoint, goose bumps occur when the muscles underneath the skin contract, making the individual's hair stand on end. This is useful for the survival of animals with fur or hair, the authors note, since goose bumps aid in the retention of body heat. That explains why we get them when exposed to sudden cold.

Goose bumps are also, however, associated with intense emotional moments.

For the study, researchers had participants keep a four-week journal, making detailed entries each time they experienced goose bumps and rating their feelings during those moments.

Awe was the second most-cited response as a cause of goose bumps, followed by reactions to cold. The intensity of goose bumps was also positively correlated with awe, but negatively correlated with envy.
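Findings like these typically rest on a Pearson correlation coefficient: positive when one quantity rises with another, negative when they oppose. A minimal sketch with made-up numbers (the study's actual data and method may differ):

```python
import math

# Pearson correlation: covariance normalized by the two standard deviations.
# The intensity/awe/envy ratings below are invented for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

intensity = [1, 2, 3, 4, 5]
awe       = [1, 3, 2, 5, 4]   # tracks intensity -> positive r
envy      = [5, 4, 2, 3, 1]   # opposes intensity -> negative r
print(round(pearson(intensity, awe), 2), round(pearson(intensity, envy), 2))
```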

The absence of goose bumps with envy is notable, since both awe and envy are emotions that can result from observing, or otherwise experiencing, a powerful other. Awe, however, should stabilize social hierarchies while envy should undermine them.

Jonathan Haidt, a professor of business ethics at the NYU Stern School of Business, along with colleague Dacher Keltner, has extensively studied awe. This feeling may have its origins in the emotional reactions to powerful, and thus potentially dangerous, leaders, explained Haidt, who told Discovery News that the latest findings about awe and goose bumps make sense.

"The fear aspect may be, in part, what connects awe with goose bumps, in addition to a general effect of there being a rush of emotion," Smith said.

Read more at Discovery News

Aug 2, 2012

Bats Incredible: The Mystery of Rabies Survivorship Deepens

Until recently, a human rabies infection was considered inescapably fatal. But a teenage girl who defied this death sentence eight years ago has had doctors and scientists debating how she survived ever since. And now a surprising report from remote Amazonia is adding to the mystery.

In our feature for Wired this month, we explore the controversy over the Milwaukee Protocol: an experimental treatment regimen for rabies that may have saved the teenage girl in 2004 and five more patients since then. But many top rabies scientists still doubt whether the treatment method, which involves inducing a medical coma, is the best way to treat rabies patients.

Central to their doubts is the question of whether some humans might well have been surviving rabies without treatment all along. No other disease kills every single human it afflicts, after all. And studies in dogs and bats have shown that those rabies carriers, which almost always die from the infection, will nevertheless occasionally survive.

Now a new study provides more ammunition for the idea that humans might survive rabies on their own.

A research team led by Amy Gilbert of the U.S. Centers for Disease Control and Prevention studied two communities in Peru where vampire bat attacks on cattle are common. Of 63 people tested, seven of them came back positive for virus-neutralizing antibodies against rabies — and only one of them had ever received rabies vaccination, which would induce the immune system to create the antibodies. That fact strongly suggests that the other six produced the antibodies after being exposed to rabies but failed to die from the illness. And indeed, most of the seropositive Peruvians reported that they had been bitten by a vampiro at least once.

As the authors note, this is one of the first studies (and definitely the strongest) to show that humans can naturally develop rabies antibodies without dying from the disease. But does this settle the debate over rabies survival? Probably not.

Since the 19th century, it’s been known that not everyone bitten by a rabid animal will come down with the deadly brain infection. In many cases of a rabid bite -- probably in most -- the virus never actually reaches the brain. It might be that virus replication occurs at the site of the bite but the immune system clears the infection. Or it might be that the immune system is exposed to defective or incomplete virus particles that can provoke antibodies against rabies but cannot replicate.

Almost certainly those seven Peruvians were bitten by rabid vampire bats, and developed an immune response against rabies as a result. But the study doesn’t investigate whether they experienced any of the neurologic symptoms of the disease, which progress from fevers and malaise to hallucinations, difficulty in swallowing, and worse. And absent a report of those symptoms, it’s impossible to say whether they ever developed the brain infection that doctors generally mean when they say a patient “has” rabies.

Read more at Wired Science

Humanlike Skin Cancer Found in Wild Fish

The first case of skin cancer in a wild marine fish population looks eerily similar to the melanoma that plagues humans, researchers report today (Aug. 1).

Coral trout living on Australia's Great Barrier Reef swim directly beneath the Antarctic ozone hole, the world's largest, which results from the depletion of the atmospheric ozone that normally protects against harmful UV rays.

"Further work needs to be carried out to establish the exact cause of the cancer, but having eliminated other likely factors such as microbial pathogens and marine pollution, UV radiation appears to be the likely cause," study researcher Michael Sweet, of Newcastle University in the United Kingdom, said in a statement.

Sweet and his colleagues examined 136 common coral trout (Plectropomus leopardus), and found 20 individuals, or 15 percent, showed dark skin lesions. The lesions ranged in size from small (covering just 5 percent of the skin) to large, covering the fish's full body, they report online in the journal PLoS ONE.

"The individuals we looked at had extensive — but only surface — melanomas," Sweet said. "This means the cancer had not spread any deeper than the skin, so apart from the surface lesions, the fish were basically healthy."

The lesions looked nearly identical to skin cancer found in humans, he said.

Once the melanoma spreads, Sweet added, fish would likely show signs of sickness, becoming less active and maybe feeding less. As such, the sick fish would be less likely to get caught. "This suggests the actual percentage affected by the cancer is likely to be higher than observed in this study," Sweet said in the statement.

Read more at Discovery News

'Fire Rainbow' Over South Florida

So-called "fire rainbows" are neither on fire nor are they rainbows, but they sure are stunning.

They are technically known as iridescent clouds, a relatively rare phenomenon caused by clouds of water droplets of nearly uniform size, according to a release by NASA. Because the droplets are so uniform, they all diffract, or bend, light in a similar manner, separating it into its different wavelengths, or colors.

That makes them similar to rainbow-colored glories, which are also formed by diffraction, and also produce an oscillating pattern of colors ranging from blue to green to red to purple and back to blue again.

Although iridescent clouds have rainbow-like colors, the way light is scattered to produce them is slightly different. Rainbows are formed by refraction and reflection. When light is refracted, it is bent as it passes through media of different densities, such as water or a prism. Reflected light bounces off a surface at an angle equal to the angle at which it struck. Diffraction, by contrast, scatters light waves into a ring-like pattern.
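The size of the diffraction effect can be sketched with the Airy first-minimum formula, sin θ = 1.22 λ/d. The 10-micron droplet diameter below is an assumed, typical cloud-droplet size, chosen for illustration:

```python
import math

# First dark ring of the Airy diffraction pattern: sin(theta) = 1.22 * wl / d.
# A 10-micron droplet is an assumed, typical cloud-droplet size.
def airy_angle_deg(wavelength_m, droplet_m):
    return math.degrees(math.asin(1.22 * wavelength_m / droplet_m))

d = 10e-6
for name, wl in [("blue", 450e-9), ("red", 650e-9)]:
    print(f"{name} light: first minimum at {airy_angle_deg(wl, d):.2f} degrees")
```

Because the deflection angle depends on wavelength, blue and red light land at different angles, which is why uniform droplets paint the cloud in bands of color.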

As with other iridescent objects, like peacock feathers, the color changes depending upon one's position relative to the sun and the object.

Iridescence usually occurs in newly formed clouds. That appears to be the case here as well. According to the Weather Channel, these are pileus clouds caused by a fast-growing thunderstorm that shoved air into the upper atmosphere through a layer of moisture. This created a fog-like cloud that looks like a glowing dome atop the thunderstorm.

Read more at Discovery News

Continental Drift, Hollywood Style

I’m psyched for any movie that dares to feature geological terminology in the title. So, you can bet I dragged my family to see "Ice Age: Continental Drift" as soon as it opened (two weeks later here in China than back home in the U.S.).

In animated films, perhaps even more so than live action, I am fully prepared for science to be sold out for a song. That’s why I was so pleased to note a few geologic details the moviemakers clearly made a point to get right.

Released ten years after the original, this fourth installment continues the adventures of three protagonist pals from the Pleistocene—mammoth Manny, sloth Sid and saber-toothed cat Diego. The trio gets separated from their herd and Manny’s family when the supercontinent Pangea splits apart into today’s familiar landmasses.

Okay, put aside for a moment your smug knowledge that Pangea, the most recent of Earth’s great supercontinents, first started breaking up 175 million years ago (a good long while before mammoths evolved) and that tectonic plates drift no more than a few centimeters a year. The key point here is that Hollywood actually played off the idea of a supercontinent! It even looked vaguely like Pangea. That’s a start.
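A quick sanity check on the timescale: at real plate speeds of a few centimeters per year, opening an Atlantic-sized ocean takes on the order of 200 million years, not a movie's few seconds. (The spreading rate and ocean width below are representative round numbers, not measurements.)

```python
# At a few centimetres per year, how long does an Atlantic-sized
# ocean (~5,000 km wide) take to open?
rate_cm_per_yr = 2.5   # assumed representative spreading rate
width_km = 5_000
years = width_km * 1e5 / rate_cm_per_yr   # 1 km = 1e5 cm
print(f"about {years / 1e6:.0f} million years")
```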

Let’s look instead at this movie’s opening and closing scenes from the viewpoint of another recurring character, a saber-toothed squirrel named Scrat (while we also overlook the fact that no such creature ever existed, at least not during the ice age.)

Scrat starts the show with his precious acorn in hand (the same acorn he has pursued relentlessly for all four movies). Delicately, he places the nut on the icy ground, only to have the impact trigger a hairline crack in the ice that, after a slight dramatic pause, branches into dozens, hundreds more. In a matter of seconds, the cracks penetrate the bedrock below and the supercontinent on which he stands is torn asunder.

Like a legitimate time-lapse animation of plate tectonics at work, the animated continents rift apart and move to their current locations at top speed. During close-ups of the cracks unzipping, I couldn’t help feeling a jolt of joy: The moviemakers depicted the cracks spreading across not only the land but the seafloor as well -- presumably to avoid promoting the all-too-common misconception that continents float across seawater the way icebergs do. Woo-hoo for myth-busting!

The movie ends with Scrat’s greed inciting yet another geological calamity, one far more esoteric than the break-up of a supercontinent. Once again he grabs a giant acorn. This one happens to be plugging a hole on a floating island (don’t ask), so his impulse results in the island sinking to the bottom of the ocean. An enormous volume of seawater somehow drains out of that same hole (you just have to see it for yourself), leaving Scrat dry and parched in a desert I took to be Death Valley.

Read more at Discovery News

Aug 1, 2012

In Fly DNA, the Footprint of a Fly Virus

In a curious evolutionary twist, several species of a commonly studied fruit fly appear to have incorporated genetic material from a virus into their genomes, according to new research by University at Buffalo biologists.

The study found that several types of fruit fly -- scientific name Drosophila -- harbored genes similar to those that code for the sigma virus, a fly virus in the same family as rabies. The authors believe the genetic information was acquired during past viral infections and passed on from fruit fly parent to offspring through many generations.

The discovery could open the door for research on why flies and other organisms selectively retain viral genes -- dubbed "fossil" genes -- through evolution, said lead author Matthew Ballinger, a PhD candidate in UB's Department of Biological Sciences.

One hypothesis is that viral genes provide an anti-viral defense, but scientists have had trouble testing this theory because viral genes found in animals are often millions of years old -- ancient enough that the genes' genetic sequence differs significantly from that of modern-day viruses.

The new study, in contrast, uncovered a viral gene that appears to be relatively young, with genetic material closely mirroring that of a modern sigma virus.

"We don't know that these genes have an anti-viral function, but it's something we'd like to test," Ballinger said. "It's tempting to think that these genes are retained and express RNA because there's some kind of advantage to the host."

He and his co-authors -- Professor Jeremy Bruenn and Associate Professor Derek Taylor in UB's Department of Biological Sciences -- reported their results online on June 26 in the journal Molecular Phylogenetics and Evolution. The research, supported in part by UB's Center for Advanced Molecular Biology and Immunology, will also appear in a forthcoming print edition of the journal.

"Our findings establish that sigma virus-like (genes) are present in Drosophila species and that these infection scars represent a rich evolutionary history between virus and host," the researchers wrote in their paper.

Another important contribution the study makes is advancing our understanding of how flies and other organisms acquire copies of virus-like genes in the first place.

The sigma virus belongs to a class of RNA viruses that lack an important enzyme, reverse transcriptase, that enables other viruses to convert their genetic material into DNA for integration into host genomes.

Given this limitation, how did sigma virus genes get into fly genomes?

The new study supplies one possible answer, suggesting that viruses may use reverse transcriptase present in host cells to facilitate incorporation of viral genes into host DNA.

In the genome of one fly, the researchers found a sigma fossil gene right in the middle of a retrotransposon, a genetic sequence that produces reverse transcriptase for the purpose of making new copies of itself to paste into the genome.

The position and context of the viral gene suggest that the retrotransposon made a copying error, pasting virus genes into the fly genome along with itself. This is the clearest evidence yet that non-retroviral RNA virus genes naturally enter host genomes through the action of enzymes already present in the cell, Ballinger said.

The study builds on prior research by Taylor and Bruenn, who previously co-authored a paper showing that bats, rodents and wallabies harbor fossil copies of genes that code for filoviruses, which cause deadly Ebola and Marburg hemorrhagic fevers in humans.

Read more at Science Daily

Tropical Climate in the Antarctic: Palm Trees Once Thrived On Today’s Icy Coasts 52 Million Years Ago

Given the predicted rise in global temperatures in the coming decades, climate scientists are particularly interested in warm periods that occurred in the geological past. Knowledge of past episodes of global warmth can be used to better understand the relationship between climate change, variations in atmospheric carbon dioxide and the reaction of Earth’s biosphere. An international team led by scientists from the Goethe University and the Biodiversity and Climate Research Centre in Frankfurt, Germany, has discovered an intense warming phase around 52 million years ago in drill cores obtained from the seafloor near Antarctica — a region that is especially important in climate research.

The study published in the journal Nature shows that tropical vegetation, including palms and relatives of today’s tropical Baobab trees, was growing on the coast of Antarctica 52 million years ago. These results highlight the extreme contrast between modern and past climatic conditions on Antarctica and the extent of global warmth during periods of elevated atmospheric carbon dioxide levels.

Around 52 million years ago, the concentration of the greenhouse gas carbon dioxide (CO2) in the atmosphere was more than twice as high as today. “If the current CO2 emissions continue unabated due to the burning of fossil fuels, CO2 concentrations in the atmosphere, as they existed in the distant past, are likely to be achieved within a few hundred years”, explains Prof. Jörg Pross, a paleoclimatologist at the Goethe University and member of the Biodiversity and Climate Research Centre (BiK-F) in Frankfurt, Germany. “By studying naturally occurring climate warming periods in the geological past, our knowledge of the mechanisms and processes in the climate system increases. This contributes enormously to improving our understanding of current human-induced global warming.”

Computer models indicate that future climate warming will be particularly pronounced in high-latitude regions, i.e., near the poles. Until now, however, it has been unclear how Antarctic terrestrial ecosystems responded in the geological past to a greenhouse climate with high atmospheric CO2 concentrations.

The scientists working with Prof. Pross analysed rock samples from drill cores on the seabed, which were obtained off the coast of Wilkes Land, Antarctica, as part of the Integrated Ocean Drilling Program (IODP). The rock samples are between 53 and 46 million years old and contain fossil pollen and spores that are known to originate from the Antarctic coastal region. The researchers were thus able to reconstruct the local vegetation on Antarctica and, accordingly, interpret the presence of tropical and subtropical rainforests covering the coastal region 52 million years ago.

In an area where the Antarctic ice sheet borders the Southern Ocean today, frost-sensitive and warmth-loving plants such as palms and the ancestors of today’s baobab trees flourished 52 million years ago. The scientists’ evaluations show that the winter temperatures on the Wilkes Land coast of Antarctica were warmer than 10 degrees Celsius at that time, despite three months of polar night. The continental interior, however, was noticeably cooler, with the climate supporting the growth of temperate rainforests characterized by southern beech and Araucaria trees of the type common in New Zealand today. Additional evidence of extremely mild temperatures was provided by analysis of organic compounds that were produced by soil bacteria populating the soils along the Antarctic coast.

Read more at Science Daily

Boy or Girl? Mother Can Control Outcome

Mothers can adjust the sex of their unborn children in response to the environment where they live, according to new research.

The study, published in the latest Proceedings of the Royal Society B, finds that mothers exert far more control than fathers do over whether or not the couple has a son or daughter. The goal is to improve the child’s survival.

“It seems likely that when there are large and predictable costs associated with producing and/or rearing either sons or daughters in a given environment, females should bias offspring sex ratios to produce the sex that will perform best in the given environment,” co-author Sarah Pryke told Discovery News.

“Altering offspring sex ratios in response to the quality of the local environment is likely to be highly advantageous to any species, as it should allow mothers to best match the phenotype of their offspring to the prevailing condition, and thus maximize their own fitness,” added Pryke, a researcher in Australian National University’s Research School of Biology.

Prior studies on birds, reptiles and mammals -- including humans -- have long suggested that this is the case, but scientists were unclear on what factors trigger the son-or-daughter outcome. Some researchers, for example, speculated that the mother's overall body condition and health affected the sex of her offspring.

To help eliminate that possibility, Pryke and colleague Lee Rollins studied a bird, the blue-faced parrot finch, whose body condition appears largely insensitive to changes in nutritional quality.

The researchers randomly assigned 56 female birds either a high-quality or a low-quality diet. The former contained 20 percent protein, with egg, wheat germ, a seed mixture and more, while the latter contained only 8 percent protein. After 12 weeks on the diets, the birds were weighed and underwent blood tests to measure various aspects of their health. Based on these tests, all of the females were in comparably good condition both before and after the 3-month study period.

Mother birds fed the lower quality diet, however, later produced far more sons than daughters.

“In this case, it is adaptive for mothers to produce more sons when conditions are poor because sons are much less vulnerable to nutritional stress than daughters,” Pryke explained. “For example, sons reared on poor quality diets grew faster, were healthier, fledged earlier and were much more likely to survive than daughters. Indeed, more than 51.5 percent of daughters reared on low quality diets died before reaching parental independence compared to only 7.3 percent of sons.”

It is unclear whether or not human moms would produce more sons or daughters when environmental conditions are poor. That will probably remain a mystery for quite a while, since, as Pryke said, “researchers can’t do experimental manipulations, like in the current study” on humans.

The sex of an individual is also at least partially determined by genes, giving dads some level of indirect control over the sex outcome of their progeny.

Read more at Discovery News

World's Northernmost Coral Reef Discovered

When you think of coral reefs, you probably picture scuba divers gliding through warm, crystal-clear waters. And for the most part, you'd be right: more than 90 percent of the world's coral reefs are located in the tropics.

Now researchers in Japan have found what is — so far — the world's northernmost coral reef. Located off the coast of Tsushima Island at 34 degrees north latitude, the newly discovered reef is far different from other coral reefs and is 217 miles (350 kilometers) north of most others in the region.

"Coral reefs have been believed to develop under warm-water settings — at least 18 degrees Celsius (64 degrees Fahrenheit) in winter. This setting is 13 degrees Celsius in winter (55 degrees Fahrenheit), which is unbelievably low," Hiroya Yamano, a researcher at Japan's National Institute for Environmental Studies, told OurAmazingPlanet. "The species, and thus seascape, is completely different from normal reefs."

Unfriendly waters

A team led by Yamano found a similar reef off the coast of Japan's Iki Island in 2001, and until now, that reef was the planet's northernmost. The newfound Tsushima Island reef is 43 miles (70 km) north of the Iki Island reef.

Yamano's team tracked down this reef after poring over old manuscripts and interviewing local residents. Their sleuthing paid off, and they eventually found the 4,300-year-old reef in one of Tsushima's murky inner bays.

"The water in this bay is basically turbid" — cloudy with lots of suspended particles — "and water temperatures are low, especially in winter," Yamano said.

Both of those qualities make life difficult for most corals. But this reef is composed of corals from the genus Favia, a massive brown coral type. Most reefs are made up of corals from the genus Acropora, which can be a variety of colors and grow in branching or platelike forms. Favia species tend to tolerate cold, turbid waters better than Acropora do, Yamano said.

So why did this reef start building itself in such an unfriendly environment? The team isn't sure, but Yamano thinks the Tsushima Warm Current, a stream of warm water flowing along the northwestern coast of Japan, probably helped transport coral larvae northward into the turbid inner bays of Iki and Tsushima islands. Yamano thinks there may be many more undiscovered reefs in similar settings throughout the region.

Read more at Discovery News

Jul 31, 2012

Concussions and Head Impacts May Accelerate Brain Aging

Concussions and even lesser head impacts may speed up the brain's natural aging process by causing signaling pathways in the brain to break down more quickly than they would in someone who has never suffered a brain injury or concussion.

Researchers from the University of Michigan School of Kinesiology and the U-M Health System looked at college students with and without a history of concussion and found changes in gait, balance and in the brain's electrical activity, specifically attention and impulse control, said Steven Broglio, assistant professor of kinesiology and director of the Neurotrauma Research Laboratory.

The declines were present in the brain injury group up to six years after injury, though the differences between the study groups were very subtle, and outwardly all of the participants looked and acted the same.

Broglio, who is also affiliated with Michigan NeuroSport, stressed that the studies lay out a hypothesis where concussions and head impacts accelerate the brain's natural aging process.

The study appears in the July issue of the journal Exercise and Sport Sciences Reviews.

"The last thing we want is for people to panic. Just because you've had a concussion does not mean your brain will age more quickly or you'll get Alzheimer's," Broglio said. "We are only proposing how being hit in the head may lead to these other conditions, but we don't know how it all goes together just yet."

Broglio stressed that other factors, such as lifestyle choices, smoking, alcohol consumption, physical exercise, family history and whether or not you "exercise" your brain also impact the brain's aging process. Concussion may only be one small factor.

To begin to understand how concussions might impact brain activity and its signaling pathways, researchers asked the participants to perform certain tasks in front of a computer, and took images of their brains. The brains of the nonconcussed group showed a greater area of electrical activation than the participants with a history of brain injury.

The signaling pathways in our brains are analogous to a five-lane highway. On a new highway, traffic runs smoothly and quickly as all lanes are in top shape. However, during normal aging, the asphalt deteriorates and lanes might become bumpy or even unusable. Traffic slows.

Similarly, our brains start with all pathways clear to transfer electrical signals rapidly. As we age, the brain's pathways break down and can't transfer the information as quickly. Concussive and other impacts to the head may result in a 'pothole' on the brain's highway, causing varying degrees of damage and speeding the pathway's natural deterioration.

"What we don't know is if you had a single concussion in high school, does that mean you will get dementia at age 50?" Broglio said. "Clinically, we don't see that. What we think is it will be a dose response.

"So, if you played soccer and sustained some head impacts and maybe one concussion, then you may have a little risk. If you went on and played in college and took more head balls and sustained two more concussions, you're probably at a little bigger risk. Then if you play professionally for a few years, and take more hits to the head, you increase the risk even more. We believe it's a cumulative effect."

In the next phase of study, researchers will look at people in their 20s, 40s and 60s who did and did not sustain concussions during high school sports. They hope to learn if there is an increasing effect of concussion as the study subjects age.

Read more at Science Daily

Allergies? Your Sneeze Is a Biological Response to the Nose's 'Blue Screen of Death'

New research suggests that sneezing is the body's natural reboot and that patients with disorders of the nose such as sinusitis can't reboot, explaining why they sneeze more often than others.

Who would have thought that our noses and Microsoft Windows' infamous blue screen of death could have something in common? But that's the case being made by a new research report appearing online in The FASEB Journal. Specifically, scientists now know exactly why we sneeze, what sneezing should accomplish, and what happens when sneezing does not work properly. Much like a temperamental computer, our noses require a "reboot" when overwhelmed, and this biological reboot is triggered by the pressure force of a sneeze. When a sneeze works properly, it resets the environment within nasal passages so "bad" particles breathed in through the nose can be trapped. The sneeze is accomplished by biochemical signals that regulate the beating of cilia (microscopic hairs) on the cells that line our nasal cavities.

"While sinusitis rarely leads to death, it has a tremendous impact on quality of life, with the majority of symptoms coming from poor clearance of mucus," said Noam A. Cohen, M.D., Ph.D., a researcher involved in the work from the Department of Otorhinolaryngology-Head and Neck Surgery at the University of Pennsylvania in Philadelphia. "By understanding the process by which patients with sinusitis do not clear mucus from their nose and sinuses, we can try to develop new strategies to compensate for their poor mucus clearance and improve their quality of life."

To make this discovery, Cohen and colleagues grew cells from the noses of mice in incubators and measured how those cells cleared mucus. They examined how the cells responded to a simulated sneeze (a puff of air) by analyzing the cells' biochemical responses. Some of the experiments were replicated in human sinus and nasal tissue removed from patients with and without sinusitis. The team found that cells from patients with sinusitis do not respond to sneezes in the same manner as cells from patients without sinusitis. The researchers speculate that sinusitis patients sneeze more frequently because their sneezes fail to reset the nasal environment properly, or are less efficient at doing so. Further understanding of why sinusitis patients have this difficulty could aid the development of more effective medications or treatments.

Read more at Science Daily

Later Stone Age Got Earlier Start in South Africa Than Thought

The Later Stone Age emerged in South Africa more than 20,000 years earlier than previously believed -- about the same time humans were migrating from Africa to the European continent, says a new international study led by the University of Colorado Boulder.

The study shows the onset of the Later Stone Age in South Africa likely began some 44,000 to 42,000 years ago, said Paola Villa, a curator at the University of Colorado Museum of Natural History and lead study author. The new dates come from precisely calibrated radiocarbon measurements of organic artifacts found at Border Cave, a site in the Lebombo Mountains on the border of South Africa and Swaziland with evidence of hominid occupation going back 200,000 years.

The Later Stone Age is synonymous to many archaeologists with the Upper Paleolithic Period, when modern humans moved from Africa into Europe roughly 45,000 years ago and spread rapidly, displacing and eventually driving Neanderthals to extinction. The timing of the technological innovations and changes of the Later Stone Age in South Africa is comparable to that of the Upper Paleolithic, said Villa.

"Our research proves that the Later Stone Age emerged in South Africa far earlier than has been believed and occurred at about the same time as the arrival of modern humans in Europe," said Villa. "But differences in technology and culture between the two areas are very strong, showing the people of the two regions chose very different paths to the evolution of technology and society."

A paper on the subject was published July 30 in the Proceedings of the National Academy of Sciences. Co-authors included Sylvain Soriano of the Centre National de la Recherche Scientifique, or CNRS, at the University of Paris; Tsenka Tsanova of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany; Ilaria Degano, Jeannette Lucejko and Maria Perla Colombini of the University of Pisa in Italy; Thomas Higham of the University of Oxford in England; Francesco d'Errico of the CNRS at the University of Bordeaux in France; Lucinda Backwell of the University of the Witwatersrand in South Africa; and Peter Beaumont of the McGregor Museum in South Africa.

A companion paper published in PNAS and led by d'Errico reports on organic materials found at Border Cave dating to the Later Stone Age, an indication that the San hunter-gatherer culture, first thought to have begun about 20,000 years ago in the region, probably emerged as early as 44,000 years ago, said Villa.

Organic artifact assemblages at Border Cave dating to the Later Stone Age included ostrich eggshell beads, thin bone arrowhead points, wooden digging sticks, a gummy substance called pitch that was used to haft, or attach, bone and stone blades to shafts and a lump of beeswax likely used for hafting. The assemblage also included worked tusks of members of the pig family, which likely were used to plane wood, and notched bones that may have been used for counting.

A wooden digging stick from Border Cave dated to about 40,000 years ago was found in association with bored but broken stones likely used to weight such sticks. The sticks and stone weights are similar to digging implements used by women of the prehistoric San hunter-gatherer culture in the region to unearth bulbs and termite larvae, a practice that continued into historic times, said Villa. "These digging sticks from Border Cave are the oldest artifacts of this kind known from South Africa or anywhere else in Africa."

The new PNAS study led by Villa also indicates big changes were occurring in hunting technology during the Later Stone Age at Border Cave, said Villa. They included a shift from spears hafted with stone points -- the main hunting weapon in the Middle Stone Age -- to the likely use of the bow and arrow, a technology that included very thin bone points that probably were tipped with poison, she said.

"The very thin bone points from the Later Stone Age at Border Cave are good evidence for bow and arrow use," said Villa. "The work by d'Errico and colleagues shows that the points are very similar in width and thickness to the bone points produced by San culture that occupied the region in prehistoric times, whose people were known to use bows and arrows with poison-tipped bone points as a way to bring down medium and large-sized herbivores."

Chemical analyses showed the poison used with such bone points was most likely ricinoleic acid, which can be derived from the seeds of castor oil plants and which has been identified as being used in South Africa at least 24,000 years ago. "Such bone points could have penetrated thick hides, but the lack of 'knock-down' power means the use of poison probably was a requirement for successful kills," said Villa.

The lump of beeswax from Border Cave also dating to about 40,000 years ago -- the oldest known beeswax used by humans ever discovered -- was wrapped in plant fibers that may have been similar to fibers used to make the strings for hunting bows, said Villa.

While stone tools continued to be manufactured in the Later Stone Age at Border Cave, stone spear points from the Middle Stone Age gave way to tiny, thin flakes known as microliths that were probably hafted on shafts, much like the bone points, with pitch made from the bark of a common type of coniferous tree found in the region.

While a 2011 study co-authored by Villa and Wil Roebroeks of Leiden University in the Netherlands showed that Neanderthals mastered the manufacture of pitch in Europe 200,000 years ago, it was not a particularly simple task since the process involved burning peeled bark in the absence of air, said Villa. The Later Stone Age inhabitants of South Africa probably dug holes into the ground and inserted bark peels, then lit them on fire and covered the holes tightly with stones. "This is the first time pitch-making is demonstrated in South Africa," said Villa.

The Upper Paleolithic Period in Europe that corresponds to the Later Stone Age in South Africa also spurred complex new technologies that helped humans survive and thrive in much different environments. Artifacts from the Upper Paleolithic included spear-throwers, bone needles with eyelets for sewing furs, bone fishing hooks, bone flutes and even ivory figurines carved from mammoth tusks.

Read more at Science Daily

Presenting the World's Slowest Shark

Greenland sharks are the slowest known sharks, according to a new study that found these sharks move through the water at only about a mile per hour.

The study, which will be published in the September issue of the Journal of Experimental Marine Biology and Ecology, also finds that the sharks are the slowest known fish when body size is factored in: Greenland sharks reach top speeds of just about 1.7 miles per hour.

"We think that the slow speed of Greenland sharks might be due to low water temperature in the Arctic Ocean," lead author Yuuki Watanabe of the National Institute of Polar Research told Discovery News.

Watanabe and his colleagues used data-logging tags to measure the swim speed and tail beat frequency of six free-swimming Greenland sharks. The measurements were then compared with previously recorded data on other sharks and fish. (Sharks are considered a type of fish.) The mean tail beat frequency for Greenland sharks topped out at 9 beat cycles per minute.

Both this tail beat frequency and the maximum recorded speed for Greenland sharks were the lowest values ever measured for any fish species, once body size is taken into account. Since tail beat frequency reflects muscle-shortening speed, which in turn slows with decreasing temperature, the researchers suspect the cold environment of Greenland sharks has turned them into slowpokes.
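The size adjustment can be made concrete with a quick back-of-the-envelope calculation. The snippet below is an illustrative sketch, not a figure from the study: the ~4.6-metre body length is an assumed value for a large adult Greenland shark, used only to show how an absolute speed converts into a relative, body-lengths-per-second speed.

```python
# Convert the Greenland shark's top speed into a size-relative swimming speed.
# The body length (~4.6 m) is an illustrative assumption, not from the study.

MPH_TO_MS = 1609.344 / 3600.0  # metres per second in one mile per hour


def relative_speed(speed_mph, body_length_m):
    """Return (speed in m/s, speed in body lengths per second)."""
    speed_ms = speed_mph * MPH_TO_MS
    return speed_ms, speed_ms / body_length_m


speed_ms, bl_per_s = relative_speed(1.7, 4.6)  # 1.7 mph top speed, per the article
print(f"{speed_ms:.2f} m/s = {bl_per_s:.2f} body lengths per second")
# prints "0.76 m/s = 0.17 body lengths per second"
```

For comparison, many active fish cruise at roughly one body length per second or more, so a value far below that is what "slowest fish for its size" means in practice.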

"Many physiological processes, including muscle shortening speed, slow down with decreasing temperature," Watanabe explained. "This is called the Q10 effect, and it's very strong. Humans do not feel this, because our body temperature is kept high even at cold temperatures. In the case of fishes, body temperature decreases with decreasing ambient temperature, and many physiological processes are depressed."
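The Q10 effect Watanabe describes has a standard textbook form: a rate R measured at temperature T1 scales to R × Q10^((T2 − T1)/10) at temperature T2. The snippet below is a minimal sketch of that formula; the Q10 value of 2 and the example temperatures are illustrative choices, not measurements from the study.

```python
def q10_scaled_rate(rate_t1, t1, t2, q10=2.0):
    """Scale a physiological rate measured at temperature t1 (Celsius)
    to temperature t2 using the Q10 temperature coefficient."""
    return rate_t1 * q10 ** ((t2 - t1) / 10.0)


# A muscle process running at a relative rate of 1.0 at 25 C, cooled to
# an Arctic-like 5 C: a 20-degree drop with Q10 = 2 quarters the rate.
cold_rate = q10_scaled_rate(1.0, t1=25.0, t2=5.0, q10=2.0)
print(cold_rate)  # prints 0.25
```

Because the temperature difference sits in the exponent, each additional 10-degree drop multiplies the slowdown rather than adding to it, which is why the effect is "very strong" in cold-bodied fish.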

The slowness of the sharks puts a different twist on the usual marine predator-prey scenario. Many aggressive shark predators zip quickly through the water, enabling them to overtake slower-moving fish and sea mammals.

When the stomach contents of Greenland sharks have been analyzed in the past, scientists have found a lot of fish, but also fresh seals that weren't just scavenged.

"How Greenland sharks hunt seals remains a mystery," Watanabe said. "We hypothesize that sharks hunt seals sleeping in the water because of two reasons. First, seals in captivity can sleep in the water -- at the surface or on the bottom."

"Second," he added, "although how seals sleep in the wild is not known, we in the field observed a seal floating at the surface. At first we thought it was a dead seal, and approached it by boat and actually touched it. Then the seal suddenly started moving and dove down."

There's a good chance, then, that the slow-moving Greenland sharks are simply able to approach a snoozing seal and enjoy a relatively easy dinner.

As for predators of their own, Greenland sharks don't have many, thanks to their very large body size. Some individuals can grow to 21 feet long and weigh 2,200 pounds. Killer whales might be able to eat them, Watanabe suspects, but Greenland sharks usually stay in deep water that killer whales do not frequent.

Read more at Discovery News

Jul 30, 2012

Archeologists Unearth Extraordinary Human Sculpture in Turkey

A beautiful and colossal human sculpture is one of the latest cultural treasures unearthed by an international team at the Tayinat Archaeological Project (TAP) excavation site in southeastern Turkey. A large semi-circular column base, ornately decorated on one side, was also discovered. Both pieces are from a monumental gate complex that provided access to the upper citadel of Kunulua, capital of the Neo-Hittite Kingdom of Patina (ca. 1000-738 BC).

"These newly discovered Tayinat sculptures are the product of a vibrant local Neo-Hittite sculptural tradition," said Professor Tim Harrison, the Tayinat Project director and professor of Near Eastern Archaeology in the University of Toronto's Department of Near and Middle Eastern Civilizations. "They provide a vivid glimpse into the innovative character and sophistication of the Iron Age cultures that emerged in the eastern Mediterranean following the collapse of the great imperial powers of the Bronze Age at the end of the second millennium BC."

The head and torso of the human figure, intact to just above its waist, stands approximately 1.5 metres in height, suggesting a total body length of 3.5 to four metres. The figure's face is bearded, with beautifully preserved inlaid eyes made of white and black stone, and its hair has been coiffed in an elaborate series of curls aligned in linear rows. Both arms are extended forward from the elbow, each with two arm bracelets decorated with lion heads. The figure's right hand holds a spear, and in its left is a shaft of wheat. A crescent-shaped pectoral adorns its chest. A lengthy Hieroglyphic Luwian inscription, carved in raised relief across its back, records the campaigns and accomplishments of Suppiluliuma, likely the same Patinean king who faced a Neo-Assyrian onslaught of Shalmaneser III as part of a Syrian-Hittite coalition in 858 BC.

The second sculpture is a large semi-circular column base, approximately one metre in height and 90 centimetres in diameter, lying on its side next to the human figure. A winged bull is carved on the front of the column and it is flanked by a sphinx on its left. The right side of the column is flat and undecorated, an indication that it originally stood against a wall.

"The two pieces appear to have been ritually buried in the paved stone surface of the central passageway through the Tayinat gate complex," said Harrison. The complex would have provided a monumental ceremonial approach to the upper citadel of the royal city. Tayinat, a large low-lying mound, is located 35 kilometres east of Antakya (ancient Antioch) along the Antakya-Aleppo road.

The presence of colossal human statues, often astride lions or sphinxes, in the citadel gateways of the Neo-Hittite royal cities of Iron Age Syro-Anatolia continued a Bronze Age Hittite tradition that accentuated their symbolic role as boundary zones, and the role of the king as the divinely appointed guardian or gate keeper of the community. By the ninth and eighth centuries BC, these elaborately decorated gateways, with their ornately carved reliefs, had come to serve as dynastic parades, legitimizing the power of the ruling elite. The gate reliefs also formed linear narratives, guiding their audiences between the human and divine realms, with the king serving as the link between the two worlds.

The Tayinat gate complex appears to have been destroyed following the Assyrian conquest of the region in 738 BC, when the area was paved over and converted into the central courtyard of an Assyrian sacred precinct. These smashed and deposited monumental sculptures also include a magnificently carved lion that was discovered last year and Hieroglyphic Luwian-inscribed stelae (stone slabs or pillars used for commemorative purposes). Together these finds hint at an earlier Neo-Hittite complex that might once have faced the gateway approach.

Read more at Science Daily

Leaning Colosseum of Rome?

"While stands the Coliseum, Rome shall stand; When falls the Coliseum, Rome shall fall; And when Rome falls--the World."

If the Venerable Bede (673-735), the Anglo-Saxon monk and the first English historian who wrote these words (later translated by the poet Lord Byron), heard the news today, he might indeed believe that the end of the world was near.

According to Rome's authorities, the symbol of the Eternal City is in need of support as its south side is 16 inches lower than the north.

The leaning Colosseum might require the kind of structural intervention that straightened the Tower of Pisa.

"The concrete foundation on which the Colosseum rests is like a 42-foot-thick oval doughnut. There could be a stress fracture inside it," Giorgio Monti, from the department of structural engineering at Rome's La Sapienza University, told the Italian daily Corriere della Sera.

The crack in the base below the 2,000-year-old arena may explain why the north and the south side of the monument are no longer horizontally aligned.

"If our doubts are confirmed, we are dealing with two structurally different monuments. At that point, it would be necessary to reunite them," Rossella Rea, the director of the Colosseum, said.

Rea has asked experts from Rome's Sapienza University and the Institute of Environmental Geology and Geoengineering (IGAG) to carry out a scientific study of the phenomenon over the next year.

Marking one of the busiest intersections in the city, the Flavian Amphitheater (the Colosseum's proper name) is continuously rocked by vibrations from heavy traffic and a nearby underground metro.

"Following the worrying news, we have begun collecting signatures to support the request for a vast pedestrian area in the streets surrounding the Colosseum and the Roman Forum," the environmental group Legambiente said in a statement.

According to Rea, the same sort of restoration work that saved the Leaning Tower of Pisa more than a decade ago might be needed.

Like the Tower of Pisa, which has been tilting since its construction in 1173, the Colosseum was built on problematic soil.

The iconic symbol of imperial Rome was built in A.D. 72 by the Flavian emperor Vespasian on the marshy bed of a drained lake. It was opened in A.D. 80 by Vespasian's son Titus with a festival that lasted 100 days and included gladiatorial combats, fights with wild beasts and naval battles for which the arena was flooded.

Read more at Discovery News

Summer Olympics: 1896 vs. 2012

With 116 years separating the first modern Olympics from this year's games, there are bound to be some noticeable changes between the two. Aside from archived photos, one of the clearest windows into the first modern Olympics, held in 1896, is an essay by G. S. Robertson titled "An Englishman at the first modern Olympics" (via http://Longform.org).

Robertson's account of the 1896 games paints a picture of an Olympics in its infancy that, while grappling with the challenges of hosting an international competition without the benefits of modern telecommunications or transportation, still manages to capture what later generations would describe as the Olympic spirit.

If you can manage to squeeze it in between commercial breaks while watching this year's Olympics, Robertson's essay is worth the full read. For those short on time, here are the starkest contrasts between this year's games and the 1896 Olympics:

Entry requirements for the 1896 Olympics: Show up.

Organized by Pierre de Coubertin, considered the father of the modern Olympic movement, the first modern games in Athens in 1896 were meant to include as many nations from as many corners of the world as possible. As Robertson notes, however, the promoters of the first Olympics had "apparently forgotten that few athletes are classical scholars, and that still fewer have either the time or the money to make so long a voyage."

This meant that a disproportionate number of participants came from the home country, Greece, from other nations in continental Europe, and from the United States. The United States and Hungary were the only countries to attempt to send an all-around team, according to Robertson. In fact, the first Olympic champion in modern history was an American, triple jumper James Connolly.

Similarly, although throngs of foreign visitors were expected to swarm Athens for the first Olympics, spectators in Athens were almost exclusively Greek, according to Robertson. Newspapers had estimated 20,000 foreign visitors, but the author puts it closer to 1,000. Still, some 80,000 spectators cheered on the athletes and their victories in 1896.

English was not an official language in the 1896 games.

English and French are the official languages of the Olympic Games. But in 1896, the official book of rules and the program of the Olympics were only printed in French. An unofficial German edition was later printed as well.

According to Robertson, an English-language edition was printed only by a private firm, with little lead time ahead of the games. This was partly because the organizing committee of the first Olympics treated the participation of English-speaking countries as an afterthought, which in turn led to a general lack of participation among British athletes.

Despite a general lack of encouragement, American teams turned up in droves fully equipped, due "to the natural enterprise of the American people and to the peculiarly perfect method in which athletics are organized in the United States."

The foundations of the Olympic Charter, the document that guides the Olympic Movement and sets forth the principles by which all participating nations are expected to abide, weren't written down until 1898. Another 10 years would pass before the Charter was published in 1908, bearing the title, "Annuaire du Comité International Olympique." Although the Charter specifies that English and French are the official languages of the movement, the French version of the document supersedes the English edition when there are any discrepancies between the two.

Female athletes did not compete in 1896.

For the first time in Olympic history, the 2012 Olympics will include a female athlete from every participating nation. Saudi Arabia, Qatar and Brunei were the only countries that had yet to field at least one woman on their national teams as of the 2008 Olympics in Beijing.

In 1896, however, women weren't allowed to participate in the games at all. Although female athletes were permitted to enter the games four years later, a mere 2 percent of athletes -- 22 out of 997 -- were women competing in five sports: tennis, sailing, croquet, golf and equestrian, according to the International Olympic Committee (IOC) (PDF).

Read more at Discovery News

August Will Be a Blue Moon Month

The month of August brings us not one, but two full moons. The first will kick off the month on Wednesday (Aug. 1), and will be followed by a second on Aug. 31.

Some almanacs and calendars assert that when two full moons occur within a calendar month, the second full moon is called a "blue moon."

The full moon that night will likely look no different than any other full moon. But the moon can change color in certain conditions.

After forest fires or volcanic eruptions, the moon can appear to take on a bluish or even lavender hue.  Soot and ash particles, deposited high in the Earth's atmosphere, can sometimes make the moon appear bluish. Smoke from widespread forest fire activity in western Canada created a blue moon across eastern North America in late September 1950. In the aftermath of the massive eruption of Mount Pinatubo in the Philippines in June 1991 there were reports of blue moons (and even blue suns) worldwide.

Origin of the term

The phrase "once in a blue moon" was first noted in 1824 and refers to occurrences that are uncommon, though not truly rare. Yet, to have two full moons in the same month is not as uncommon as one might think.  In fact, it occurs, on average, about every 2.66 years.  And in the year 1999, it occurred twice in a span of just three months.
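That 2.66-year average can be roughly checked with a back-of-the-envelope simulation: step forward from a known full moon by the mean synodic month (about 29.53 days) and count calendar months that contain two full moons. This is a sketch, not an ephemeris; the start date and the constant lunation length are approximations (real lunations vary by several hours), so the result only lands in the neighborhood of the quoted figure.

```python
from datetime import datetime, timedelta
from collections import Counter

SYNODIC_MONTH = 29.530589  # mean days between successive full moons

def blue_moon_months(start, years=100):
    """Count calendar months containing two full moons, using a
    constant-lunation approximation."""
    t = start
    end = start.replace(year=start.year + years)
    per_month = Counter()
    while t < end:
        per_month[(t.year, t.month)] += 1
        t += timedelta(days=SYNODIC_MONTH)
    return sum(1 for count in per_month.values() if count >= 2)

# Jan. 21, 2000 was (approximately) a full moon.
n = blue_moon_months(datetime(2000, 1, 21, 4, 40), years=100)
avg_years = 100 / n
print(f"{n} blue-moon months per century, one every {avg_years:.2f} years")
```

The crude model lands in the high-two-year range, consistent with the "not as uncommon as one might think" point above.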

For the longest time no one seemed to have a clue as to where the "blue moon rule" originated.  Many years ago in the pages of Natural History magazine, I speculated that the rule might have evolved out of the fact that the word "belewe" came from the Old English, meaning, "to betray."  "Perhaps," I suggested, "the second full Moon is 'belewe' because it betrays the usual perception of one full moon per month."

But as innovative as my explanation was, it turned out to be completely wrong.

More mistakes

It was not until that "double blue moon year" of 1999 that the origin of the calendrical term "blue moon" was at long last discovered.  It was during the time frame from 1932 through 1957 that the Maine Farmers' Almanac suggested that if one of the four seasons (winter, spring, summer or fall) contained four full moons instead of the usual three, that the third full moon should be called a blue moon.

But thanks to a couple of misinterpretations of this cryptic definition, first by a writer in a 1946 issue of Sky & Telescope magazine, and much later, in 1980 in a syndicated radio program, it now appears that the second full moon in a month is the one that's now popularly accepted as the definition of a blue moon.   

This time around, the moon will turn full on Aug. 31 at 9:58 a.m. Eastern Daylight Time (6:58 a.m. Pacific Daylight Time), making it a blue moon.

However, there is an exception: for those living in the Kamchatka region of the Russian Far East as well as in New Zealand, that same full moon occurs after midnight, on the calendar date of Sept. 1. So in these regions of the world, this will not be the second of two full moons in August, but the first of two full moons in September. So, if (for example) you reside in Petropavlovsk-Kamchatsky or Christchurch, you'll have to wait until September 30 to declare that the moon is "officially" blue.
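The date-line arithmetic is easy to verify: the full moon is a single instant, so convert it from UTC into each local zone and compare the calendar dates. A minimal sketch using Python's zoneinfo (9:58 a.m. EDT corresponds to 13:58 UTC; the IANA zone names are standard):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The Aug. 31, 2012 full moon: 9:58 a.m. EDT, i.e. 13:58 UTC.
FULL_MOON_UTC = datetime(2012, 8, 31, 13, 58, tzinfo=timezone.utc)

def local_date(tz_name):
    """Calendar date of the full moon in the given IANA time zone."""
    return FULL_MOON_UTC.astimezone(ZoneInfo(tz_name)).date().isoformat()

for tz in ("America/New_York", "Pacific/Auckland", "Asia/Kamchatka"):
    print(tz, local_date(tz))
```

New York sees the full moon on Aug. 31, while Auckland and Kamchatka (both UTC+12 at that date) see it in the small hours of Sept. 1, which is why the "blue moon" label shifts to September there.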

Blue Moon/New Moon

While we've assigned the name blue moon to the second full moon of the month, it seems that we have no such name for the second new moon of the month.  Nonetheless, these opposing phases seem to be connected with each other. For if two new moons occur within a specific month, then in most cases, four years later, two full moons will also occur in that very same month.

Read more at Discovery News

Jul 29, 2012

Immortality for Humans by 2045

A Russian mogul wants to achieve cybernetic immortality for humans within the next 33 years. He's pulled together a team intent on creating fully functional holographic human avatars that house our artificial brains. Now he's asking billionaires to help fund the advancements needed along the way.

The man behind the 2045 Initiative, described as a nonprofit organization, is a Russian named Dmitry Itskov. The ambitious timeline he's laid out involves creating different avatars. First a robotic copy that's controlled remotely through a brain interface. Then one in which a human brain can be transplanted at the end of life. The next could house an artificial human brain, and finally we'd have holographic avatars containing our intelligence much like the movie "Surrogates."

Gizmag's Dario Borghino wisely warned that "one must be careful not to believe that improbable technological advances automatically become more likely simply by looking further away in the future." And in the grand scheme of things, 2045 is not that far away. So just how likely is it that this project will succeed? For more insight, let's check in with Ted Williams. Oh, wait.

Recently Itskov published an open letter to the Forbes world's billionaires list telling them that they have the ability to finance the extension of their own lives up to immortality. He writes that he can prove the concept's viability to anyone who's skeptical and will coordinate their personal immortality projects for free. PopSci's Clay Dillow described Itskov in March as a 31-year-old media mogul, but I couldn't find a detailed biography for him.

Read more at Discovery News

Magnetic Field, Mantle Convection and Tectonics

On a time scale of tens to hundreds of millions of years, the geomagnetic field may be influenced by currents in the mantle. The frequent polarity reversals of Earth's magnetic field can also be connected with processes in the mantle. These are the research results presented by a group of geoscientists in the advance online edition of Nature Geoscience on July 29.

The results show how the rapid processes in the outer core, which flows at rates of up to about one millimeter per second, are coupled with the processes in the mantle, which occur more in the velocity range of centimeters per year.
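Putting those two rates on a common footing makes the scale gap concrete. A quick conversion to meters per second (assuming 1 mm/s for the outer core and roughly 3 cm/yr for the mantle; both are order-of-magnitude figures from the text, not measured values):

```python
SECONDS_PER_YEAR = 365.25 * 86400

core_speed = 1e-3                       # ~1 mm/s, in m/s
mantle_speed = 0.03 / SECONDS_PER_YEAR  # ~3 cm/yr, in m/s

ratio = core_speed / mantle_speed
print(f"core/mantle speed ratio ≈ {ratio:.1e}")
```

The outer core flows roughly a million times faster than the mantle, which is what makes the coupling between the two described below so striking.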

The international group of scientists led by A. Biggin of the University of Liverpool included members of the GFZ German Research Centre for Geosciences, the IPGP Paris, the universities of Oslo and Utrecht, and other partners.

It is known that Earth's magnetic field is produced by convection currents of an electrically conducting iron-nickel alloy in the liquid core, about 3,000 kilometers below Earth's surface. The geomagnetic field is highly variable: it changes on a multitude of spatial and temporal scales. Above the liquid outer core lies the mantle, whose rock deforms plastically due to the intense heat and high pressure. At the boundary between Earth's core and mantle, at 2,900 kilometers depth, there is an intense heat exchange, directed on the one hand from Earth's core into the mantle; on the other hand, processes within Earth's mantle in turn also affect the heat flow. The interesting question is how the much slower flow in the solid mantle influences the heat flow and its spatial distribution at the core-mantle boundary, and how this affects Earth's magnetic field, which is produced by the much faster currents in Earth's core.

Key variable: heat transfer

"The key variable is the heat flow. A cooler mantle accelerates the flow of heat from Earth's hot core, and in this way alters the likewise heat-driven convection in Earth's core," said Bernhard Steinberger of the GFZ German Research Centre for Geosciences. "Ocean floor sinking into the mantle due to tectonic processes can cool the mantle locally. At these sites, heat flow into the cooler material increases until it has warmed to the ambient temperature." That might take several hundred million years, however.

Conversely, Earth's hot core leads to the ascent of heated rock in the form of large bubbles, so-called mantle plumes, that separate from the core-mantle boundary and make their way up to Earth's surface. This is how Hawaii came into existence. This increases the local heat flux out of Earth's core and in turn modifies the generator of the geomagnetic field.

Reversals of the magnetic field

In Earth's history, polarity reversals of the geomagnetic field are nothing extraordinary. The most recent took place only 780,000 years ago, geologically speaking a very short period of time. The research team was able to determine that in the period of 200 to 80 million years before present, reversals initially happened more often, namely up to ten times in a hundred million years. "Surprisingly, these reversals stopped about 120 million years ago and were absent for nearly 40 million years," explains GFZ scientist Sachs. Scientists presume that the reason for this is a concurrent reorientation of the whole mantle and crust, with a shift in the geographic and magnetic poles of about 30°. Known as "true polar wander," this process is caused by a change in the density distribution in the mantle. If it increases the heat flux in equatorial regions, it would presumably lead to more frequent field reversals; if it decreases it, field reversals might not occur.

Read more at Science Daily