A crocodile large enough to swallow humans once lived in East Africa, according to a University of Iowa researcher. “It’s the largest known true crocodile,” says Christopher Brochu, associate professor of geoscience. “It may have exceeded 27 feet in length. By comparison, the largest recorded Nile crocodile was less than 21 feet, and most are much smaller.”
Brochu’s paper on the discovery of a new crocodile species was just published in the May 3 issue of the Journal of Vertebrate Paleontology. The new species lived between 2 and 4 million years ago in Kenya. It resembled its living cousin, the Nile crocodile, but was more massive.
He recognized the new species from fossils that he examined three years ago at the National Museums of Kenya in Nairobi. Some were found at sites known for important human fossil discoveries. “It lived alongside our ancestors, and it probably ate them,” Brochu says. He explains that although the fossils contain no evidence of human-reptile encounters, crocodiles generally eat whatever they can swallow, and humans of that time period would have stood no more than four feet tall.
“We don’t actually have fossil human remains with croc bites, but the crocs were bigger than today’s crocodiles, and we were smaller, so there probably wasn’t much biting involved,” Brochu says.
He adds that there likely would have been ample opportunity for humans to encounter crocs. That’s because early man, along with other animals, would have had to seek water at rivers and lakes where crocodiles lie in wait.
Regarding the name he gave to the new species, Brochu said there was never a doubt.
The crocodile Crocodylus thorbjarnarsoni is named after John Thorbjarnarson, famed crocodile expert and Brochu’s colleague who died of malaria while in the field several years ago.
“He was a giant in the field, so it only made sense to name a giant after him,” Brochu says. “I certainly miss him, and I needed to honor him in some way. I couldn’t not do it.”
Among the skills needed to discover a new species of crocodile is, apparently, a keen eye.
Not that the fossilized crocodile head is small—it took four men to lift it. But other experts had seen the fossil without realizing it was a new species. Brochu points out that the Nairobi collection is “beautiful” and contains many fossils that have been incompletely studied. “So many discoveries could yet be made,” he says.
In fact, this isn’t the first time Brochu has made a discovery involving fossils from eastern Africa. In 2010, he published a paper describing a man-eating horned crocodile from Tanzania, Crocodylus anthropophagus, a relative of his most recent discovery.
Read more at Science Daily
May 5, 2012
Early Spring Means More Bat Girls
There must be something in the warm breeze. A new study suggests that bats produce twice as many female pups as male ones in years when spring comes early. The earlier in the spring the births occur, the more likely the females are to survive and then reproduce a year later, as one-year-olds, compared with later-born pups, according to Robert Barclay's research published in PLoS ONE.
"The early-born females are able to reproduce as one year olds, whereas male pups can't," explains Barclay, professor in the Department of Biological Sciences.
"Thus, natural selection has favoured internal mechanisms that result in a skewed sex ratio because mothers that produce a daughter leave more offspring in the next generation than mothers who produce a son."
The length of the growing season affects the ratio of female to male offspring and the time available for female pups to reach sexual maturity, the study found. This suggests that sex ratio varies not only seasonally and among years, but likely also geographically, due to differences in season length.
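The selective logic Barclay describes can be illustrated with a toy calculation. All numbers below are hypothetical, chosen only to show the mechanism, and are not taken from the study:

```python
# Toy model of why early springs favor daughters (all numbers hypothetical).
# Early-born females can breed as one-year-olds; males cannot, so in an
# early-spring year a daughter starts producing grand-offspring sooner.

def expected_grandoffspring(sex, early_spring):
    survival = 0.7 if early_spring else 0.5   # chance pup survives its first year
    litter = 1.0                              # offspring per breeding season
    if sex == "F":
        # daughters breed in years 1 and 2 if born early, year 2 only if late
        seasons = 2 if early_spring else 1
    else:
        # sons breed from year 2 at the earliest, regardless of birth date
        seasons = 1
    return survival * litter * seasons

# In early springs a daughter out-produces a son; in late springs they tie,
# so selection favors skewing toward daughters only when spring comes early.
print(expected_grandoffspring("F", True), expected_grandoffspring("M", True))
print(expected_grandoffspring("F", False), expected_grandoffspring("M", False))
```

Under these toy numbers the daughter's advantage exists only in early-spring years, which is exactly the pattern that would reward a mechanism skewing the sex ratio toward females when births come early.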
Barclay analyzed long-term data on variation in offspring sex ratio in the big brown bat, Eptesicus fuscus, a common insect-eating North American species.
"In this species, more eggs are fertilized than eventually result in babies, so there is some mechanism by which a female embryo is preferentially kept and male embryos are resorbed early in pregnancy," says Barclay. But, he adds, the biochemistry behind the skewed sex ratio is unknown.
"Some other mammals and some birds have the ability to adjust the sex ratio of their offspring," says Barclay. "Even human-baby ratios vary -- there is a study showing that billionaires produce more sons than daughters, for example."
Read more at Science Daily
May 4, 2012
Hubble to Use Moon as Mirror to See Venus Transit
This mottled landscape showing the impact crater Tycho is among the most violent-looking places on our Moon. Astronomers didn't aim NASA's Hubble Space Telescope to study Tycho, however. The image was taken in preparation to observe the transit of Venus across the Sun's face on June 5-6.
Hubble cannot look at the Sun directly, so astronomers are planning to point the telescope at Earth's moon, using it as a mirror to capture reflected sunlight and isolate the small fraction of the light that passes through Venus's atmosphere. Imprinted on that small amount of light are the fingerprints of the planet's atmospheric makeup.
These observations will mimic a technique already being used to sample the atmospheres of giant planets outside our solar system as they pass in front of their stars. In the case of the Venus transit, astronomers already know the chemical makeup of Venus's atmosphere and that it shows no signs of life. But the transit will test whether this technique has a chance of detecting the very faint fingerprints of an Earth-like planet, perhaps even a habitable one, that similarly transits its own star. Venus is an excellent proxy because it is similar in size and mass to our planet.
The astronomers will use an arsenal of Hubble instruments -- the Advanced Camera for Surveys, the Wide Field Camera 3, and the Space Telescope Imaging Spectrograph -- to view the transit in a range of wavelengths, from ultraviolet to near-infrared light. During the transit, Hubble will snap images and perform spectroscopy, dividing the sunlight into its constituent colors, which could yield information about the makeup of Venus's atmosphere.
Hubble will observe the Moon for seven hours: before, during, and after the transit, so the astronomers can compare the data. The long observation is needed because they are looking for extremely faint spectral signatures. Only 1/100,000th of the sunlight will filter through Venus's atmosphere and be reflected off the Moon.
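The 1/100,000th figure is plausible from simple transit geometry: it is roughly the angular area of Venus's thin atmospheric ring divided by the angular area of the Sun's disk. A back-of-envelope sketch (the ~100 km effective atmosphere depth is an assumption for illustration, not a figure from the release):

```python
import math

# Rough check of the ~1/100,000 fraction of sunlight filtered through
# Venus's atmosphere during transit, using small-angle geometry.
AU = 1.496e8            # km
r_venus = 6052.0        # km, solid-body radius of Venus
h_atm = 100.0           # km, assumed effective atmospheric depth (assumption)
r_sun = 6.96e5          # km
d_venus = 0.28 * AU     # approximate Earth-Venus distance at transit
d_sun = 1.0 * AU        # Earth-Sun distance

# Angular sizes as seen from Earth
theta_v = r_venus / d_venus     # angular radius of Venus
theta_h = h_atm / d_venus       # angular thickness of the atmosphere ring
theta_s = r_sun / d_sun         # angular radius of the Sun

annulus = 2 * math.pi * theta_v * theta_h   # thin-ring area, radians^2
sun_disk = math.pi * theta_s ** 2           # solar disk area, radians^2

fraction = annulus / sun_disk
print(f"{fraction:.1e}")  # on the order of 1e-5
```

The result lands within an order of magnitude of the quoted 1/100,000th, which is as much as this kind of estimate can promise.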
This image, taken with Hubble's Advanced Camera for Surveys, reveals lunar features as small as roughly 560 feet (170 meters) across. The large "bull's-eye" near the top of the picture is the impact crater, caused by an asteroid strike about 100 million years ago. The bright trails radiating from the crater were formed by material ejected during the collision. Tycho is about 50 miles (80 kilometers) wide and is circled by a rim of material rising almost 3 miles (5 kilometers) above the crater floor. The image measures 430 miles (700 kilometers) across, which is slightly larger than New Mexico.
Because the astronomers only have one shot at observing the transit, they had to carefully plan how the study would be carried out. Part of their planning included the test observations of the Moon, made on Jan. 11, 2012, as shown in the release image.
Read more at Science Daily
2012 Doomsday Poll Brings Out the Believers
Last night, Jay Leno cracked a joke on The Tonight Show that, oddly, generated few laughs.
Commenting on TV shopping channels and the idea that you are more likely to be persuaded to buy junk late at night, Leno said, "I bought a 2013 Mayan calendar, I feel like such a moron."
The audience's reaction could be interpreted one of two ways -- they were either fatigued by the "Mayan doomsday" nonsense (one would hope), or a large part of the 22 percent of the American public who actually believe the world is going to end in their lifetime attended Leno's show. (Or the joke wasn't that funny, sorry Jay.)
According to an international poll carried out by Ipsos Global Public Affairs on behalf of Reuters News, 22 percent of Americans believe they will experience some kind of Armageddon in their lifetime. When asked specifically about the idiotic notion that an ancient Mesoamerican calendar can foretell doom, 12 percent of Americans agreed with the statement: "the Mayan calendar, which some say 'ends' in 2012, marks the end of the world."
But it's OK America, you're not alone:
One in ten (10 percent) respondents in 21 countries agree ‘the Mayan calendar, which some say ‘ends’ in 2012, marks the end of the world’ – 2 percent strongly agree, 8 percent somewhat agree. The majority of world citizens (90 percent), however, disagree with this interpretation – 73 percent strongly, 16 percent somewhat. Two in ten (20 percent) of those in China are in agreement with the statement, followed at the top of the global list by 13 percent in each of: Turkey, Russia, Mexico, South Korea and Japan. Only 4 percent in Germany and Indonesia seem to believe the prophecy, joined by 7 percent in Great Britain, South Africa and Italy. -- Ipsos
The poll was conducted among 16,262 adults in 21 countries.
Does this survey really indicate some kind of thriving "doomsday" subculture? Has Roland Emmerich's bad doomsday flick "2012" bored its way into society's psyche? Is the end of the world really nigh, with only a small part of the populace privy to a warning about this heinous future event?
Actually, apart from the "Mayan doomsday prophecy" being wrong, if I had to go out on a limb, I'd argue that the Reuters poll simply reflects a 'normal' proportion of doomsday believers versus skeptics in society. And despite what seems like a huge number of people who believe the world will end in their lifetime (or on Dec. 21, 2012), they represent a proportion similar to that of people who thought the world would end on previous (failed) doomsdays.
For example, during a CBS poll in 1999 taken ahead of the much-publicized "Y2K bug" doomsday scenario, it was reported that 18 percent of Americans believed "major problems" would occur through computer errors when the year transitioned from 1999 to 2000. A Gallup poll taken ahead of the same event reported that 20 percent of Americans believed the 1999-2000 switch would cause major problems to them personally.
Y2K was a specific event based on a real problem in world-wide computing systems. A vulnerability in computer programming was to blame and concerns grew for the consequences of large numbers of computer systems crashing at the same time. Worries for the failure of power grids, air traffic control and military systems were just some of the secondary events that, according to a few doomsayers, could have caused global catastrophe.
The Mayan doomsday, however, is vastly over-hyped. The Mayan Long Count calendar's 13th "Baktun" cycle ends on Dec. 21, 2012, and while many Central American states are looking forward to a huge party to welcome in a "new age," there is a determined group of people trying to make a fast buck out of terrifying the world with their fake doomsday theories, selling books, website advertising and blockbuster movies on the topic.
To quote Erik Velasquez, an etchings specialist at the National Autonomous University of Mexico (UNAM), the whole Mayan doomsday trend is nothing more than "a marketing fallacy."
Read more at Discovery News
Cinco de Mayo: NOT Mexico's Independence Day
With a history steeped in battles and rebuilding, Mexico has earned every right to be proud. Today marks a Mexican holiday that more and more people in the United States celebrate every year, many without knowing why: the "Batalla de Puebla" (Battle of Puebla), or "Cinco de Mayo" (Fifth of May).
While it may all seem like a huge fiesta now, the history of this holiday is marked by bloodshed and remembrance.
Contrary to popular belief, Cinco de Mayo is not the celebration of Mexico's independence day. El Grito de la Independencia (Cry of Independence) is held annually on Sept. 16 in honor of Mexico's independence from Spanish rule in 1810.
Cinco de Mayo is the celebration of freedom from a different oppressive European empire: France.
The French intervention came in the wake of the Mexican-American War of 1846-48, which left Mexico ripped to shreds and bankrupt after its crushing defeat by the Americans. By the 1850s, the country was in a state of crisis.
Newly elected President Benito Juarez issued a moratorium on July 17, 1861, to help get a handle on his country’s wrecked economy, according to UCLA’s Chicano and Latino issues resource center.
The moratorium stipulated a hold on all foreign debt payments for the next two years so that Mexico could get out of financial ruin. Payments could resume after the two-year mark, but in the meantime, Mexico was forced to default on debts abroad.
England, Spain and France -- all of which Mexico owed money to -- were furious. According to History.com, all three sent naval ships to Veracruz to demand reimbursement. British and Spanish forces eventually negotiated with Mexico and withdrew, but it was France that decided to take severe action.
Seeing an opportunity to take advantage of a fallen nation, French ruler Napoleon III had hoped to be victorious over the weakened Mexican army and carve out an independent empire for France.
According to UCLA, there is some speculation that the United States’ enactment of the Monroe Doctrine in 1823, which stated that any European attempts to re-colonize any part of the Americas would be considered an act of war, may have sparked the French invasion frenzy. At the time, the United States’ quick and immense expansion was seen as a threat to other world powers.
In 1862, French General Charles Latrille de Lorencez was ordered to march from Veracruz with 6,000 troops and 2,000 French loyalists toward Puebla de Los Angeles, just east of Mexico City, Napoleon’s ultimate goal. In response, Juarez gathered up any Mexican loyalists he could find and put together a 4,000-strong but ragtag force. Many were farmers armed with hunting rifles and machetes, according to a PBS report.
The French army had gone nearly 50 years without defeat when it clashed with the Mexican army on May 5, 1862, in Puebla. Led by Texas-born General Ignacio Zaragoza, the outnumbered and poorly supplied Mexican army defeated the French forces in what became known as the "Batalla de Puebla."
According to History.com, the French lost 500 men in a single day, while Mexican forces lost fewer than 100. The victory gave the Mexicans a huge morale boost, and the French withdrew six years later. Puebla de Los Angeles was renamed Puebla de Zaragoza in honor of the general’s great triumph.
Read more at Discovery News
Sharks Mistaken for Lake Monsters?
A shark researcher has offered a new theory about what might be behind some of the world's famous lake monsters.
Bruce Wright, a senior scientist at the Aleutian Pribilof Island Association, wrote an article for the Alaska Dispatch newspaper that proposed an interesting idea: "For years, legendary tales from Scotland and Western Alaska described large animals or monsters thought to live in Loch Ness and Lake Iliamna. But evidence has been mounting that the Loch Ness and Lake Iliamna monsters may, in fact, be sleeper sharks."
Wright suggests that the sharks, which can reach 20 feet long and weigh over four tons, might migrate through rivers and into lakes and be mistaken for monsters.
The Lake Iliamna monster (known as "Illie") is said to resemble a whale or a seal, and be between 10 and 20 feet long. There have been fewer than a half-dozen sightings of Illie since it was first seen in 1942.
The best known American lake monster is not said to be in Alaska but instead in Lake Champlain, which forms the border between Vermont and New York. “Champ,” as the creature is called, has allegedly been seen by hundreds of witnesses and is anywhere between 10 and 187 feet long, has one or more humps, and is gray, black, dark green or other colors.
The best evidence for Champ -- in fact, for any lake monster -- was a 1977 photo taken by a woman named Sandra Mansi showing what appeared to be a dark head and hump in the lake. Later investigation revealed that the object was a floating log that looked serpentine from a certain angle.
While Wright's hypothesis is interesting, it has many problems, including the fact that both Ness and Iliamna are freshwater lakes, while Pacific sleeper sharks, as their name suggests, inhabit saltwater oceans.
Some saltwater animals can adapt to brackish or fresh water (freshwater bull sharks and dolphins, for example), but there are no known freshwater sleeper sharks.
Another problem with Wright's shark-as-lake monster theory is that, despite his suggestion that "the monsters' shape and colors usually match that of sleeper sharks," in fact most descriptions of the monsters in Ness and Iliamna bear little resemblance to sleeper sharks. Many eyewitnesses suggest that the unknown aquatic monster in Loch Ness resembles a long-extinct dinosaur-like marine reptile called the plesiosaur.
As for Lake Iliamna, at least one eyewitness reported that Illie had a prominent (three-foot-high) dorsal fin, while sleeper sharks have very low-profile dorsal fins, barely a bump on the back.
Researcher Matthew Bille interviewed an Illie eyewitness for his book "Shadows of Existence: Discoveries and Speculations in Zoology" (2006, Hancock House), and believes the most likely explanation for the monster is not a sleeper shark but a white sturgeon, which can grow over 20 feet long: "the appearance of the white sturgeon -- gray to brown in color, with huge heads and long cylindrical bodies -- appears to match most Iliamna accounts."
Indeed, it would not be the first time that sturgeon have been mistaken for monsters.
Read more at Discovery News
May 3, 2012
Early North Americans Lived With Extinct Giant Beasts
A new University of Florida study that determined the age of skeletal remains provides evidence humans reached the Western Hemisphere during the last ice age and lived alongside giant extinct mammals.
The study published online May 3 in the Journal of Vertebrate Paleontology addresses the century-long debate among scientists about whether human and mammal remains found at Vero Beach in the early 1900s date to the same time period. Using rare earth element analysis to measure the concentration of naturally occurring metals absorbed during fossilization, researchers show modern humans in North America co-existed with large extinct mammals about 13,000 years ago, including mammoths, mastodons and giant ground sloths.
"The Vero site is still the only site where there was an abundance of actual human bones, not just artifacts, associated with the animals," said co-author Barbara Purdy, UF anthropology professor emeritus and archaeology curator emeritus at the Florida Museum of Natural History on the UF campus. "Scientists who disputed the age of the human remains in the early 20th century just did not want to believe that people were in the Western Hemisphere that early. And 100 years later, every single book written about the prehistory of North America includes this site and the controversy that still exists."
Following discovery of the fossils in South Florida between 1913 and 1916, some prominent scientists convinced researchers the human skeletons were from more recent burials and not as old as the animals, a question that remained unanswered because no reliable dating methods existed at the time.
"The uptake of rare earth elements is time-dependent, so an old fossil is going to have very different concentrations of rare earth elements than bones from a more recent human burial," said lead author Bruce MacFadden, Florida Museum vertebrate paleontology curator. "We found the human remains have statistically the same concentrations of rare earth elements as the fossils."
The little information known about the first humans to appear in North America is primarily based on bone fragments and artifacts, such as stone points used for hunting. Other sites in California, Montana and Texas show human presence around the same time period based on artifacts, but two nearly complete human skeletons were discovered at the Vero Beach site.
As bones begin to fossilize they absorb elements from the surrounding sediment, and analysis is effective in distinguishing different-aged fossils deposited in the same locality. Instead of radiocarbon dating, which requires the presence of collagen in bones, researchers used mass spectrometry to compare rare earth elements in the specimens because a lack of collagen in the Vero Beach specimens made radiocarbon dating impossible, Purdy said.
Researchers analyzed samples from 24 human bones and 48 animal fossils in the Florida Museum's collections and determined the specimens were all from the late Pleistocene epoch, about 13,000 years ago. While rare earth element analysis is not as precise as radiocarbon dating, Purdy said the skeletons leave no question that humans were present in the Western Hemisphere at Vero Beach by then.
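The matching logic MacFadden and Purdy describe -- comparing trace-element "fingerprints" across specimens -- can be sketched in a few lines of Python. The element names, concentration values and tolerance below are hypothetical illustrations, not data from the study:

```python
from statistics import mean, stdev

# Hypothetical rare earth element (REE) concentrations in ppm,
# as might be measured by mass spectrometry for each specimen.
human_bones = {
    "La": [28.1, 30.4, 27.6],   # lanthanum
    "Ce": [55.2, 58.9, 54.1],   # cerium
    "Nd": [24.7, 26.3, 25.0],   # neodymium
}
animal_fossils = {
    "La": [29.0, 27.8, 31.2],
    "Ce": [56.4, 53.7, 59.1],
    "Nd": [25.5, 24.1, 26.8],
}

def same_fingerprint(a, b, tolerance=2.0):
    """Crude test of whether two groups share an REE signature:
    for every element, the group means must lie within `tolerance`
    standard deviations of each other."""
    for element in a:
        gap = abs(mean(a[element]) - mean(b[element]))
        spread = max(stdev(a[element]), stdev(b[element]))
        if gap > tolerance * spread:
            return False
    return True

# Statistically similar concentrations suggest the bones mineralized
# in the same sediment over a comparable span of time.
print(same_fingerprint(human_bones, animal_fossils))  # True for these values
```

A real analysis would use many more elements and a proper multivariate statistical test; the point is only that matching concentration profiles imply the specimens fossilized under the same conditions, and so are of comparable age.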
"It is important to note that they [the authors] did not provide an absolute or chronometric date, rather the geochemistry shows that the trace elemental geochemistry is the same, thus the bones must be of the same age," said Kenneth Tankersley, an assistant professor in the University of Cincinnati anthropology and geology departments.
Native fauna during the last ice age ranged from extinct jaguars and saber-toothed cats to shrews, mice and squirrels still present in Florida. Researchers speculate humans would have been wanderers much like the animals because there was less fresh water than in later years, Purdy said.
"Humans would have been following the animals for a food supply, but that's about all we know," Purdy said. "We know what some of their tools looked like and we know they were hunting the extinct animals but we know practically nothing about their family life, such as how these ancient people raised their children and grieved for their dead."
Read more at Science Daily
Labels:
Animals,
Archeology,
History,
Human,
Science
Gene which sparked human brain leap identified
By duplicating itself two and a half million years ago, the gene could have given early human brains the power of speech and invention, leaving cousins such as chimpanzees behind.
The gene, known as SRGAP2, helps control the development of the neocortex – the part of the brain responsible for higher functions like language and conscious thought.
Having extra copies slowed down the development of the brain, allowing it to forge more connections between nerve cells and, in doing so, grow bigger and more complex, researchers said.
In a study published in the Cell journal, the scientists reported that the gene duplicated about 3.5 million years ago to create a "daughter" gene, and again a million years later creating a "granddaughter" copy.
Although humans and chimpanzees separated six million years ago, we still share 96 per cent of our genome and the gene is one of only about 30 which have copied themselves since that time.
The first duplication was relatively inactive but the second occurred at about the time when primitive Homo separated from its brother Australopithecus species and began developing more sophisticated tools and behaviours.
Evan Eichler at the University of Washington, who led the research, said the benefit of the duplication would have been instant, meaning human ancestors could have distanced themselves from rival species within a generation.
Read more at The Telegraph
'Conclusive' evidence of human sacrifice found in Mexico
Mexico's National Institute of Anthropology and History said the finding clearly corroborates accounts from later cultures about the use of sharp obsidian knives in sacrificing humans.
Other physical evidence such as cut marks on the bones of ancient human skeletons had previously offered indirect proof of the practice.
Researchers in Mexico had noticed what they believed were fossilised blood stains on stone knives as long as 20 years ago. But the institute said it took a methodical examination using a scanning electron microscope to positively identify the human tissues on 31 knives from the Cantona site in the central Mexico state of Puebla.
The collection of stone knives is from the little-known Cantona culture, which flourished just after the mysterious city-state of Teotihuacan. Cantona preceded by more than 1,000 years the region's most famous human sacrifice practitioners, the Aztecs.
The archaeologists who found the knives gave them to researcher Luisa Mainou at the anthropology institute's restoration laboratories about two years ago. With help from specialists at Mexico's National Autonomous University, they were studied under the scanning electron microscope and found to contain red blood cells, collagen, tendon and muscle fibre fragments.
While historical accounts from Aztec times, as well as drawings and paintings from earlier cultures, had long suggested that priests used knives and other instruments for non-life-threatening bloodletting rituals, the presence of the muscle and tendon traces indicates the cuts were deep and intended to sever portions of the victim's body.
"These finds confirm that the knives were used for sacrifices," Mainou said.
Susan Gillespie, associate professor of anthropology at the University of Florida who was not involved in the research project, said it was the first time to her knowledge that such tissue remains had been identified on obsidian knives.
"This is a compelling demonstration that these knives were used to cut human flesh," Gillespie said in an email.
She said other studies have found trace elements of organic remains such as food on ancient artefacts, so "with the right conditions such remains can preserve for long periods."
Gillespie said human sacrifice practices either described by the Spanish conquerors or depicted in pre-Conquest paintings include heart removal, decapitation, dismemberment, disembowelling and skinning of victims.
Interestingly, the find announced Wednesday has already begun to shed some new light on the murky sacrifice practices of pre-Hispanic cultures, which believed that human blood was a sort of vital liquid needed to keep the cosmos in balance.
For example, some knives in the test had more traces of red blood cells, while others had more skin, and others more muscle or collagen, "which suggest that each cutting tool was used for a different purpose, according to its form," Mainou said.
Gillespie said the find also suggested the intriguing possibility that the sacrificial knives were ritually deposited, unwashed, in some special site after being used.
Read more at The Telegraph
Mystery of Cincinnati Sea Beast Deepens
Is it animal, vegetable or mineral -- or something else entirely?
The collective brainpower of several dozen scientists was unable to unravel the mystery of a strange beast nearly half a billion years old, tentatively nicknamed “Godzillus.”
Ron Fine, an amateur paleontologist from Dayton, Ohio, hoped the supersmart group of scientists at a regional meeting of the Geological Society of America could help explain the baffling find he made recently: the fossil of a very large, very mysterious "monster" that lived near Cincinnati 450 million years ago.
Unfortunately, the sea beast of Cincinnati had them scratching their heads, too.
“Everybody else was just as puzzled as we are -- and personally, I think that’s pretty awesome,” Fine told FoxNews.com.
He found the fossilized specimen last summer, a roughly elliptical shape with multiple lobes totaling almost 7 feet in length. It dates from almost half a billion years ago, when a shallow sea covered Cincinnati.
And despite its size, no one had ever found a fossil of this “monster” until its discovery by the amateur paleontologist last year.
But neither Fine nor the other members of the Dry Dredgers, an association of amateur paleontologists based at the University of Cincinnati that has a long history of collaborating with professional scientists, could explain what it is.
“We all have a theory, that’s the problem! We’re considering both animal and plant,” Fine told FoxNews.com. “We know it’s a fossil, something that was alive. But it’s so different than anything else, we can’t tell if it's animal or plant.”
David L. Meyer of the University of Cincinnati geology department -- and co-author of "A Sea without Fish: Life in the Ordovician Sea of the Cincinnati Region" -- is able to whittle it down a little, though he remains just as baffled.
“In general, we’re heading toward this being some sort of microbial structure that was preserved on the sea bottom, and it preserved some unusual patterns in the rock,” he told FoxNews.com.
In other words, the fossil wasn’t some ancient turtle, shark or other beast with fins and jaws and a tail. It was a mat or membrane made up of algae or bacteria that somehow captured enough dirt and debris that it could form into an unusual fossil.
“It’s not the kind of thing that you would expect to fossilize,” Meyer admitted. But there is a precedent in modern times: Microbes in the oceans today form such mats, he said.
“We know that microbial organisms in modern seas do kind of similar things,” Meyer said. Not all fossils are the remains of creatures, he explained. Scientists often discover burrows, tracks, trails and other trace evidence of prehistoric activity, he said.
Read more at Discovery News
White Dwarfs Are Eating 'Earth-like' Planets for Dinner
There is one doomsday scenario that will, without a doubt, come true.
In 4 to 5 billion years' time, when the sun runs out of fuel, it will become a bloated red giant star. During this violent phase, it will blowtorch the Earth before shedding huge quantities of mass and disintegrating into a planetary nebula. A tiny white dwarf star will remain -- the remnant of our sun's core -- with the dust cloud of pulverized inner solar system planets raining down onto it.
Now, using data from the Hubble Space Telescope, astronomers from the University of Warwick have discovered four white dwarf stars containing dust in their atmospheres, giving us a rare glimpse into the future death of our own solar system.
Although dusty white dwarfs are a well known astronomical phenomenon -- the extreme tidal shear and dynamical instability produced by a white dwarf will pulverize planetary bodies in orbit through a demolition derby of epic proportions -- these four new examples may be what our solar system will look like in a few billion years' time.
In each case, the researchers have detected oxygen, magnesium, iron and silicon hanging in the stars' atmospheres. The presence of these elements is a telltale sign that rocky worlds used to exist in orbit. Interestingly, these four elements make up approximately 93 per cent of the Earth's mass.
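As a rough sanity check on that figure, approximate textbook mass fractions for the bulk Earth (the values below are round reference numbers, not figures from the Warwick study) do sum to about that share:

```python
# Approximate bulk-Earth mass fractions for the four detected elements.
# These are rough reference values, not numbers from the study.
bulk_earth = {"Fe": 0.32, "O": 0.30, "Si": 0.15, "Mg": 0.14}

total = sum(bulk_earth.values())
print(f"{total:.0%}")  # 91% -- broadly consistent with the ~93 per cent quoted
```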
In addition to these key elements is the detection of small quantities of carbon in proportions that closely match the proportion of carbon found inside the solar system's rocky planets. This is the first time such a proportion of carbon has been detected in the dusty debris surrounding white dwarfs.
Although the term "Earth-like" is often misrepresented in the field of exoplanetary studies, the Warwick astronomers are acutely aware of the implications of spotting these elements around distant stars. "What we are seeing today in these white dwarfs several hundred light-years away could well be a snapshot of the very distant future of the Earth," said lead researcher Boris Gänsicke.
Although we have little clue about the physical characteristics of the exoplanets before they were pulverized, all the components that make up the terrestrial planets -- Mercury, Venus, Earth, Mars and the asteroids -- are present in the white dwarfs' dust. The proportions of these elements are about as "Earth-like" as it gets.
There is one white dwarf, called PG0843+516, that stands out from the other three -- it has an overabundance of iron, nickel and sulfur in its atmosphere. These particular elements are found in the cores of rocky planets. During planetary evolution, gravity pulls these elements into the core -- a process known as "differentiation." Differentiation will occur in large rocky worlds like Earth, forming a core, mantle, crust and -- probably -- tectonic activity.
Read more at Discovery News
May 2, 2012
Are We Really Still Evolving?
As much as humans have controlled our environment, we are still subject to Charles Darwin's natural selection theory, concludes an analysis of records of about 6,000 Finnish people born between 1760 and 1849.
"We have shown advances have not challenged the fact that our species is still evolving, just like all the other species 'in the wild,'" said project leader Dr. Virpi Lummaa, of the University of Sheffield's Department of Animal and Plant Sciences, in a press release. "It is a common misunderstanding that evolution took place a long time ago, and that to understand ourselves we must look back to the hunter-gatherer days of humans."
The scientists wanted to see if demographic, cultural and technological changes of the agricultural revolution affected natural and sexual selection in our species. In order to do that, they needed lots of detailed information. For that, they turned to Finland, which has extensive church records on births, deaths, marriages and wealth status which were kept for tax purposes.
From that data set, the researchers extrapolated statistics on survival to adulthood, mate access, mating success, and fertility per mate.
In findings published in the Proceedings of the National Academy of Sciences, they concluded that natural and sexual selection continued to act strongly on humans over the last 10,000 years.
Almost half of the people in the study died before age 15, notes Science Now. If they were susceptible to disease, those genes were not passed along. Another 20 percent did not get married or have children, leading researchers to believe that some traits may have prevented individuals from reproducing and passing on their genes to the next generation.
Of particular note, the experts were surprised to find that selection affected rich and poor people similarly. Also, while sexual selection occurs in both males and females, the authors noted a higher variance in reproductive success in males.
"Characteristics increasing the mating success of men are likely to evolve faster than those increasing the mating success of women," principal investigator Dr. Alexandre Courtiol, of the Wissenschaftskolleg zu Berlin, said. "This is because mating with more partners was shown to increase reproductive success more in men than in women."
With one partner, the study showed, the average number of children for men was about five; with four partners, that jumped to 7.5. At the time, there were strict rules against divorce and extramarital affairs in Finland.
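The two averages quoted -- about five children with one partner, 7.5 with four -- imply a positive "Bateman gradient," the least-squares slope of offspring count on mate number that biologists use to measure the strength of sexual selection. A minimal sketch in Python, fitting just those two reported points:

```python
def bateman_gradient(mates, offspring):
    """Ordinary least-squares slope of offspring count on mate number."""
    n = len(mates)
    mean_x = sum(mates) / n
    mean_y = sum(offspring) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(mates, offspring))
    den = sum((x - mean_x) ** 2 for x in mates)
    return num / den

# The two averages quoted for Finnish men in the study period.
slope = bateman_gradient([1, 4], [5.0, 7.5])
print(round(slope, 3))  # 0.833 extra children per additional partner
```

A slope of roughly 0.83 extra children per additional partner for men, against a flatter relationship for women, is what produces the higher male variance in reproductive success the authors describe.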
Read more at Discovery News
"We have shown advances have not challenged the fact that our species is still evolving, just like all the other species `in the wild,´" said project leader Dr. Virpi Lummaa, of the University of Sheffield´s Department of Animal and Plant Sciences, in a press release. "It is a common misunderstanding that evolution took place a long time ago, and that to understand ourselves we must look back to the hunter-gatherer days of humans."
The scientists wanted to see if demographic, cultural and technological changes of the agricultural revolution affected natural and sexual selection in our species. In order to do that, they needed lots of detailed information. For that, they turned to Finland, which has extensive church records on births, deaths, marriages and wealth status which were kept for tax purposes.
From that data set, the researchers extrapolated statistics on survival to adulthood, mate access, mating success, and fertility per mate.
Over the last 10,000 years, they concluded in the project published in the Proceedings of the National Academy of Sciences, natural and sexual selection among humans was still going strong during that time period.
Almost half of the people in the study died before age 15, notes Science Now. If they were susceptible to disease, those genes were not passed along. Another 20 percent did not get married or have children, leading researchers to believe that some traits may have prevented individuals from reproducing and passing on their genes to the next generation.
Of particular note, the experts were surprised to find that selection affected rich and poor people similarly. Also, while sexual selection occurs in both males and females, the authors noted a higher variance in reproductive success in males.
"Characteristics increasing the mating success of men are likely to evolve faster than those increasing the mating success of women," principal investigator Dr. Alexandre Courtiol, of the Wissenschaftskolleg zu Berlin, said. "This is because mating with more partners was shown to increase reproductive success more in men than in women."
With one partner, the study showed, the average number of children for men was about five; with four partners, that jumped to 7.5. At the time, there were strict rules against divorce and extramarital affairs in Finland.
Read more at Discovery News
Iceman Lived a While After Arrow Wound
Ötzi, the 5,300-year-old "Iceman" mummy of the Alps, lived for some time after being shot in the back by an arrow, scientists said on Tuesday after using forensic technology to analyze his preserved blood.
Contrary to a leading theory, Ötzi did not expire immediately from his wounds, they reported in the Journal of the Royal Society Interface, published by Britain's academy of sciences.
Scientists led by Albert Zink of the Ludwig Maximilian University in Munich, southern Germany, used nano-scale methods to probe the oldest blood known to modern science, preserved by thousands of years of alpine chill.
Using a so-called atomic force microscope able to resolve images just a few nanometers (billionths of a meter) across, they identified corpuscles with the classic doughnut shape of healthy blood cells.
"To be absolutely sure that we were not dealing with pollen, bacteria or even a negative imprint of a blood cell, but indeed with actual blood cells, we used a second analytical method," Zink said.
They deployed Raman spectroscopy, in which light scattered from a laser beam gives chemical clues about a sample.
This showed the presence of hemoglobin and fibrin, which are key components in blood clotting, at the arrow wound on Ötzi's back.
"Because fibrin is present in fresh wounds and then degrades, the theory that Oetzi died straight after he had been injured by the arrow, as had once been mooted, and not some days after, can no longer be upheld," Zink said.
Ötzi's remains were discovered by two German hikers in September 1991 in the Ötztal Alps in South Tyrol, northern Italy, 3,210 meters (10,500 feet) above sea level.
Scientists have used high-tech, non-invasive diagnostics and genomic sequencing to penetrate his mysterious past.
These efforts have determined Ötzi died around the age of 45, was about 1.60 meters (five feet, three inches) tall and weighed 50 kilos (110 pounds).
He suffered a violent death, with an arrow severing a major blood vessel between the rib cage and the left scapula, as well as a laceration on the hand.
Read more at Discovery News
Killer Fish Looked Half Shark, Half Tuna
The appropriately named Rebellatrix, a shark-like predatory fish that partly resembled tuna, terrorized ocean dwellers 240 million years ago.
Rebellatrix, meaning the rebel coelacanth, is described in the latest issue of the Journal of Vertebrate Paleontology. Coelacanths are iconic fishes today, well known as living fossils.
Coelacanths are thought of as slow movers, so the speediness of Rebellatrix comes as a surprise. Lead author Andrew Wendruff from the University of Alberta told Discovery News that the fish measured over three feet long and had a tuna-like forked tail.
“Since the tail of a fish is used for locomotion, much can be deduced about the type of locomotion as well as its lifestyle,” Wendruff said. “Fish with forked tails are able to achieve higher speeds and sustain them over a greater period of time. The forked tail of Rebellatrix indicated that it was a fast-moving aggressive predator.”
Co-author Mark Wilson, also from the University of Alberta, continued that most fossil coelacanths have broad, flexible tails.
“Those coelacanths were slow moving and usually lay in wait for their prey,” Wilson said. “Rebellatrix was able to search actively for the fishes that it preyed upon and catch them at high speed.”
Remains of the killer fish were found on rocky slopes in the Hart Ranges of Wapiti Lake Provincial Park, British Columbia, which was once off the western coast of the supercontinent Pangaea. Rebellatrix is so unusual that it has been put in its own family. The fish represents the first major change in body shape for the coelacanth group in more than 70 million years.
After analyzing its fossils, Wendruff and his team think the unusual shape comes down to two possibilities. One is that the coelacanth fossil record is still largely undiscovered, and there are others like this fish yet to be found.
Another possibility is that Rebellatrix represents a bizarre adaptation to life following Earth’s greatest mass extinction event, which happened at the end of the Permian 250 million years ago. Coelacanths then may have evolved to fill a vacant niche unoccupied by other predatory fish.
“Rebellatrix, most importantly, shatters the commonly held notion that coelacanths were an evolutionarily stagnant group in that their body shape and lifestyle changed little since the origin of the group,” Wendruff said. “Rebellatrix is dramatically different from any coelacanth previously known, and thus had undergone significant evolutionary change in its ancestry.”
He further believes that the fish was a “dead end in the evolution of cruising predation” since, once it went extinct, no other coelacanth evolved the tuna-like forked tail and other adaptations for shark-like hunting.
Read more at Discovery News
Ancient Egyptians Tracked Eclipsing Binary Star Algol
"Aldebaran's great, okay
Algol's pretty neat,
Betelgeuse's pretty girls
Will knock you off your feet".
-- Douglas Adams, Hitchhiker's Guide to the Galaxy
Turn your telescope to the constellation of Perseus and you might note an unusual star called Algol, dubbed the "Demon Star" or the "Raging One." You wouldn't notice anything much different at first, unless you happened to be looking during a window of a few hours -- every 2.867 days -- when Algol's brightness visibly dims.
This unusual feature was first noticed back in 1667 by an astronomer named Geminiano Montanari, and confirmed in 1783 by John Goodricke, who proposed a possible mechanism and precisely measured the 2.867-day period of variability.
But a new paper by researchers at the University of Helsinki, Finland, claims that the ancient Egyptians may have recorded Algol's periodic variability 3000 years ago, based on their statistical analysis of a bit of papyrus known as the Cairo Calendar.
This isn't the first time people have hypothesized that Algol's variable nature was known prior to its discovery in the 17th century. Certainly it was a familiar object, prominent in mythology and lore. In the second century, Ptolemy referred to Algol as the "Gorgon of Perseus," and associated it with death by decapitation. (In Greek mythology, the hero Perseus slays the snake-headed Gorgon, Medusa, by chopping off her head.)
Other cultures also associated the star with violence and bad fortune. It's no coincidence that H.P. Lovecraft marked the onset of his final battle in the 1919 short story, "Beyond the Wall of Sleep," with the appearance of a nova near Algol.
But the Helsinki researchers go beyond mythology and conjecture and provide a solid statistical analysis, based on historical documentation.
Goodricke proposed that Algol's periodic variability was due to an eclipsing factor: namely, an orbiting dark body occasionally passed in front of the star, dimming its brightness temporarily.
Alternatively, he suggested that Algol itself had a darker side that turned toward the Earth every 2.867 days.
His hypothesis wouldn't be confirmed until 1881, when Edward Charles Pickering discovered that Algol is actually a binary star system: there were two stars circling together, Algol A and Algol B.
Even more intriguing: it was an "eclipsing binary," i.e., one in which the dimmer star in the system occasionally passes in front of its brighter sibling, dimming the latter according to predictable periods. Goodricke's hypothesis was correct.
Actually, astronomers now know that Algol is a triple-star system, with a third star, Algol C, located a bit further out from the main pair, with a larger orbit.
All of this is necessary background for understanding the conclusions of the Helsinki scientists. The whole point of tracking the heavens so meticulously, for the Egyptians, was to make predictions about the future, dividing the calendar into "lucky" and "unlucky" days. The Cairo Calendar, while badly damaged, nonetheless contains a complete list of such days over a full year, circa 1200 B.C.
How did the Egyptians decide how to rate specific days? That's a mystery. But the Finnish team took the raw data and reassembled it into a time series, then used statistical techniques to determine the cycles within it. There were two significant periodic cycles. One was 29.6 days, very close to current estimates of a lunar month (29.53059 days).
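The team's cycle hunt can be sketched with a toy periodogram. Everything here is hypothetical, not the actual Cairo Calendar data: the 365-day span, the noise level, and the way days are scored are all assumptions for illustration.

```python
import numpy as np

# Simulate a year of daily "lucky day" scores that hide a ~29.6-day
# cycle, then try to recover that cycle from the data alone.
rng = np.random.default_rng(0)
days = np.arange(365)
true_period = 29.6
scores = np.cos(2 * np.pi * days / true_period) + 0.3 * rng.standard_normal(365)

# Basic periodogram: the strongest nonzero frequency in the power
# spectrum of the mean-subtracted series gives the dominant cycle.
power = np.abs(np.fft.rfft(scores - scores.mean())) ** 2
freqs = np.fft.rfftfreq(365, d=1.0)   # in cycles per day
peak = np.argmax(power[1:]) + 1       # skip the zero-frequency bin
recovered = 1.0 / freqs[peak]
print(round(recovered, 1))            # near 29.6, within one year's frequency resolution
```

With only one year of data the frequency bins are coarse, so the recovered period lands on the nearest bin rather than exactly 29.6; longer records sharpen the estimate, which is one reason a full-year calendar is a usable data set at all.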
Read more at Discovery News
May 1, 2012
'Netanyahu' Seal From 8th Century BC Found
A 2,700-year-old seal bearing a name similar to that of Israel's Prime Minister Benjamin Netanyahu has been unearthed near the Temple Mount in Jerusalem, the Israel Antiquities Authority (IAA) said.
Found within the remains of a building dating to the First Temple period –- between the end of the 8th century BCE and 586 BCE -- the seal is made of a semi-precious stone.
According to the ancient Hebrew inscription, it belonged to Matanyahu, who was the son of a man whose name started with the letters "Ho": "Lematanyahu Ben Ho…" (meaning: "Belonging to Matanyahu Ben Ho…"). Unfortunately, the rest of the inscription is erased.
"The name Matanyahu, like the name Netanyahu, means giving to God. These names are mentioned several times in the Bible. They are typical of the names in the Kingdom of Judah in the latter part of the First Temple period," Eli Shukron, the IAA excavation director, said.
Less than one inch in diameter, the personal seal was set in a ring and used for signing letters.
"To find a seal from the First Temple period at the foot of the Temple Mount walls is rare and very exciting. This is a tangible greeting of sorts from a man named Matanyahu who lived here more than 2,700 years ago," Shukron said.
From Discovery News
Signals of Natural Selection Found in Recent Human Evolution
In a world where we’ve tamed our environment and largely protected ourselves from the vagaries of nature, we may think we’re immune to the forces of natural selection. But a new study finds that the process that drives evolution was still shaping us as recently as the 19th century.
The finding comes from an analysis of the birth, death, and marital records of 5,923 people born between 1760 and 1849 in four farming or fishing villages in Finland. Researchers led by evolutionary biologist Alexandre Courtiol of the Institute for Advanced Study Berlin picked this time period because agriculture was well established by then and there were strict rules against divorce and extramarital affairs. The team looked at four aspects of life that affect survival and reproduction, key signposts of natural selection: Who lived beyond age 15, who got married and who didn’t, how many marriages each person had (second marriages were possible only if a spouse died), and how many children were born in each marriage. “All these steps can influence the number of offspring you have,” says Courtiol.
Natural selection was alive and well in all of the villages the researchers surveyed. Almost half of the people died before age 15, for example, suggesting that they had traits disfavored by natural selection, such as susceptibility to disease. As a result, they contributed none of their genes to the next generation. Of those who made it through childhood, 20 percent did not get married and had no children, again suggesting that some traits prevented individuals from obtaining mates and passing on their genes to the next generation.
The numbers were about the same for landed and landless individuals, indicating that wealth did not buffer the environment enough to prevent natural selection from culling or favoring individuals. “Although there is agriculture and transmission of wealth, there is still as much room for evolution to proceed as in other animals,” says Courtiol, whose team reports its findings online today in the Proceedings of the National Academy of Sciences.
The Finns were also subject to sexual selection, in that men who were able to attract new mates had more offspring. With one partner, the average was about five children; with four partners, that jumped to 7.5, Courtiol notes. Men benefited more than women in terms of begetting more children, most likely because they tended to remarry young women with good child-bearing potential. Thus sexual selection was more important in men than in women.
From the records they had, the researchers could not tell which traits were being selected for, but the variation in the number of offspring—from zero to 17—indicates there was a large opportunity for selection to occur. That variation is the grist for evolution.
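That link between offspring variance and the scope for selection is often quantified as Crow's "opportunity for selection": the variance in offspring number divided by the squared mean. A minimal sketch with made-up counts spanning the zero-to-17 range the study reports (the study's individual-level data are not given here):

```python
import statistics

# Hypothetical offspring counts for a small group, for illustration only.
offspring = [0, 0, 1, 2, 3, 4, 5, 5, 6, 8, 12, 17]

# Crow's index I = Var(offspring) / mean(offspring)^2. The larger the
# variance in reproductive success, the more room selection has to act.
mean = statistics.fmean(offspring)
variance = statistics.pvariance(offspring)
crow_index = variance / mean ** 2
print(round(crow_index, 2))
```

An index near zero would mean everyone left about the same number of descendants, leaving selection little to work with; the wide zero-to-17 spread in the Finnish records implies a substantially larger value.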
The importance of sexual selection is well accepted in birds and fish, “but this is the first time that sexual selection has been so well documented in humans,” says Stephen Stearns, an evolutionary biologist at Yale University. As for showing natural selection, “they are providing additional, confirmational evidence.”
“Without a doubt, natural selection occurs in modern humans,” agrees Jacob Moorad, an evolutionary biologist at Duke University in Durham, North Carolina, who was not involved in the study. He thinks this work will inspire other researchers with large databases on humans to look at how selection operates in populations.
Read more at Wired Science
Dinosaurs on Road to Extinction Before Asteroid Strike
Some dinosaur populations were already dying out during the last 12 million years of the Cretaceous, long before an asteroid smashed into Earth, a new study claims.
The asteroid that hit 65.5 million years ago may have been just one factor among many that led to the demise of the world's non-flying dinosaurs, says the research.
"I think our study highlights the fact that we still have a long way to go until we fully understand the extinction of the dinosaurs," lead author Stephen Brusatte, a Columbia University graduate student affiliated with the American Museum of Natural History's Division of Paleontology, told Discovery News. The study was published in the latest issue of Nature Communications.
"There are a couple of things we know for sure," he added. "We know a large asteroid or comet hit the planet about 65.5 million years ago, right when the dinosaurs completely disappeared from the fossil record."
"We also know there was massive volcanism and major sea level changes at this time. We now also know that at least some groups of dinosaurs were undergoing long-term declines in biodiversity during the final 12 million years of the Cretaceous, at least in North America."
The study presents the first look at dinosaur extinction based on morphological disparity, meaning the variability of body structure within particular groups of dinosaurs. Generally, the more variability within a group, the healthier the population.
Earlier research was almost always based on estimates of change in the number of dinosaur species over time, but such estimates can be affected by uneven sampling of the fossil record.
Some geological formations, for example, tend to preserve dinosaur remains better than others.
Brusatte and his team calculated differences in body size for seven major dinosaur groups using databases that include wide-ranging characteristics about the intricate skeletal structure of nearly 150 different species.
They discovered that large-bodied, bulk-feeding plant-eaters were dying out long before the natural disasters of 65.5 million years ago. These animals included hadrosaurs and ceratopsids.
On the other hand, small plant-eaters (ankylosaurs and pachycephalosaurs), carnivorous dinosaurs (tyrannosaurs and coelurosaurs) and huge plant-eaters without advanced chewing abilities (sauropods) remained fairly stable over the same period of time.
While the die-off of the larger species remains a mystery, Brusatte said, "Something was going on with large herbivores in the late Cretaceous, at least in North America. Maybe it was the fact that the local environments were in flux due to drastic sea level changes and mountain building at the time."
He explained that plant-eaters may have felt the effects of a changing land area first since they sat at the bottom of the food chain.
"Given a few more million years we would have seen declines in other dinosaur groups higher up in the food chain," he said.
Paul Upchurch, a University College London paleobiologist, doesn't buy it and stands by the idea that a big asteroid wiped out the dinosaurs.
"First, only some dinosaur groups show reduced disparity in the final 12 million years, while other groups continue to do well. So this study could actually be taken as evidence in favor of a sudden extinction," Upchurch added. "We need a mechanism that explains why the smaller dinosaurs and large sauropods died out suddenly at the end of the Cretaceous."
Read more at Discovery News
The asteroid that hit 65.5 million years ago may have been just one factor among many that led to the demise of the world's non-flying dinosaurs, says the research.
"I think our study highlights the fact that we still have a long way to go until we fully understand the extinction of the dinosaurs," lead author Stephen Brusatte, a Columbia University graduate student affiliated with the American Museum of Natural History's Division of Paleontology, told Discovery News. The study was published in the latest issue of Nature Communications.
"There are a couple of things we know for sure," he added. "We know a large asteroid or comet hit the planet about 65.5 million years ago, right when the dinosaurs completely disappeared from the fossil record."
"We also know there was massive volcanism and major sea level changes at this time. We now also know that at least some groups of dinosaurs were undergoing long-term declines in biodiversity during the final 12 million years of the Cretaceous, at least in North America."
The study presents the first look at dinosaur extinction based on morphological disparity, meaning the variability of body structure within particular groups of dinosaurs. The more the variability in a species, generally, the healthier the population was.
Earlier research was based almost always on estimates of change in the number of dinosaur species over time, but that can be affected by uneven sampling within the fossil record.
Some geological formations, for example, tend to preserve dinosaur remains better than others.
Brusatte and his team calculated differences in body size for seven major dinosaur groups using databases that include wide-ranging characteristics about the intricate skeletal structure of nearly 150 different species.
They discovered that large-bodied, bulk-feeding plant-eaters were dying out long before the natural disasters of 65.5 million years ago. These animals included hadrosaurs and ceratopsids.
On the other hand, small plant-eaters (ankylosaurs and pachycephalosaurs), carnivorous dinosaurs (tyrannosaurs and coelurosaurs) and huge plant-eaters without advanced chewing abilities (sauropods) remained fairly stable over the same period of time.
While the die-off of the larger species remains a mystery, Brusatte said, "Something was going on with large herbivores in the late Cretaceous, at least in North America. Maybe it was the fact that the local environments were in flux due to drastic sea level changes and mountain building at the time."
He explained that plant-eaters may have felt the effects of a changing land area first since they sat at the bottom of the food chain.
"Given a few more million years we would have seen declines in other dinosaur groups higher up in the food chain," he said.
Paul Upchurch, a University College London paleobiologist, doesn't buy it and stands by the idea that a big asteroid wiped out the dinosaurs.
Biggest Full Moon of 2012 Occurs This Week
Skywatchers take note: The biggest full moon of the year is due to arrive this weekend.
The moon will officially become full Saturday (May 5) at 11:35 p.m. EDT. And because this month's full moon coincides with the moon's perigee — its closest approach to Earth — it will also be the year's biggest.
The moon will swing to within 221,802 miles (356,955 kilometers) of our planet, offering skywatchers a spectacular view of an extra-big, extra-bright moon, nicknamed a supermoon.
Not only does the moon's perigee coincide with the full moon this month, but this perigee will be the nearest to Earth of any this year, according to meteorologist Joe Rao, SPACE.com's skywatching columnist. Because the moon's orbit is not perfectly circular, the distance of its close approach varies by about 3 percent.
This month's full moon is due to be about 16 percent brighter than average. In contrast, later this year on Nov. 28, the full moon will coincide with apogee, the moon's farthest approach, offering a particularly small and dim full moon.
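As a rough sanity check on that 16 percent figure: apparent brightness falls off with the square of distance, so comparing the quoted perigee distance with the moon's mean distance (about 384,400 kilometers, a figure assumed here rather than given in the article) reproduces the number. A minimal sketch:

```python
# Brightness scales as 1/d^2; compare perigee with the mean Earth-moon
# distance. The mean distance is an assumed textbook figure.
MEAN_DISTANCE_KM = 384_400
PERIGEE_KM = 356_955  # quoted above for the May 5 perigee

brightness_ratio = (MEAN_DISTANCE_KM / PERIGEE_KM) ** 2
print(f"{(brightness_ratio - 1) * 100:.0f}% brighter than an average full moon")
# -> 16% brighter than an average full moon
```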
Though the unusual appearance of this month's full moon may surprise some, there is no reason for alarm, scientists say. The slight difference in distance isn't enough to cause earthquakes or extreme tidal effects.
However, normal tides around the world will run especially high and low. At perigee, the moon will exert about 42 percent more tidal force than it will during its next apogee two weeks later, Rao said.
The last supermoon occurred in March 2011.
Read more at Discovery News
Thieving Stars Snatch Orphan Planets
Our galaxy is thought to be teeming with billions of "nomad" planets. These worlds are interstellar orphans, with no stellar parent to call home. Some were likely gravitationally flung from their parent star at an early age, while others may have evolved on their lonesome, clumping from small clouds of interstellar gas and dust.
If there are so many orphaned worlds drifting alone, how often might they be snatched by the gravitational tug of a star that happens to be drifting in the same direction?
Surprisingly, say astronomers from the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Peking University in China, it may happen more often than astrophysicists ever dreamed.
"Stars trade planets just like baseball teams trade players," said CfA's Hagai Perets in an April 17 press release.
In a Feb. 24 Discovery News article, space correspondent Irene Klotz detailed research based on the microlensing of nomad planets and the estimate that there may be as many as 100,000 of these interstellar orphans per star in our galaxy.
Perets' research is based on the assumption that there is only one nomad planet per star in a simulated young star cluster.
From this low estimate, Perets and collaborator Thijs Kouwenhoven (Peking University) found that 3-6 percent of the stars in their simulated cluster captured a nomad planet at some point in time. Naturally, the bigger the star, the greater its gravity, and therefore the bigger the chance a nomad planet may be snatched.
Although it may sound as if Perets and Kouwenhoven are being overly conservative with their estimate of nomad planets (especially in light of the 100,000 planets per star estimate), they deliberately focused on a young cluster of stars, which, by their nature, would be closely packed. Also, there would be fewer nomad worlds in these early years. As clusters of stars grow older, they spread out, greatly reducing the density of stars. It's for this reason that most of the "planet snatching" would happen in the very early history of star cluster evolution -- despite there being fewer nomad worlds.
During the cluster's formative years, gravitational interactions between the stars and the inevitable dynamical chaos within young star systems would cause planets to be slingshotted from their stable orbits. Then, as they travel through interstellar space, having been evicted from their orbital homes, neighboring stars traveling in the same direction may pull the nomads into new orbits.
Read more at Discovery News
Apr 30, 2012
Yellowstone 'Super-Eruption' Less Super, More Frequent Than Thought
The Yellowstone "super-volcano" is a little less super -- but more active -- than previously thought.
Researchers at Washington State University and the Scottish Universities Environmental Research Centre say the biggest Yellowstone eruption, which created the 2 million year old Huckleberry Ridge deposit, was actually two different eruptions at least 6,000 years apart.
Their results paint a new picture of a more active volcano than previously thought and can help recalibrate the likelihood of another big eruption in the future. Before the researchers split the one eruption into two, it was the fourth largest known to science.
"The Yellowstone volcano's previous behavior is the best guide of what it will do in the future," says Ben Ellis, co-author and post-doctoral researcher at Washington State University's School of the Environment. "This research suggests explosive volcanism from Yellowstone is more frequent than previously thought."
The new ages for each Huckleberry Ridge eruption reduce the volume of the first event to 2,200 cubic kilometers, roughly 12 percent less than previously thought. A second eruption of 290 cubic kilometers took place more than 6,000 years later.
That first eruption still deserves to be called "super," as it is one of the largest known to have occurred on Earth and darkened the skies with ash from southern California to the Mississippi River. By comparison, the 1980 eruption of Mount St. Helens produced 1 cubic kilometer of ash. The larger blast of Oregon's Mount Mazama 6,850 years ago produced 116 cubic kilometers of ash.
The study, funded by the National Science Foundation and published in the June issue of the journal Quaternary Geochronology, used high-precision argon isotope dating to make the new calculations. The radioactive decay of potassium-40 to argon-40 serves as a "rock clock" for dating samples, with a precision of 0.2 percent. Darren Mark, co-author and a post-doctoral research fellow at SUERC, recently helped fine-tune the technique, improving it by 1.2 percent -- a small-sounding difference that can become huge across geologic time.
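To give a sense of what that 0.2 percent precision buys, here is a hedged sketch of the standard potassium-argon age equation; the decay constants and the sample ratio below are illustrative textbook values, not figures from the study:

```python
import math

# Standard 40K decay constants (per year); illustrative textbook values.
LAMBDA_TOTAL = 5.543e-10   # total decay of 40K
LAMBDA_EC = 0.581e-10      # branch that produces 40Ar

def k_ar_age(ar40_k40_ratio):
    """K-Ar age in years from the radiogenic 40Ar/40K ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_k40_ratio)

# A ratio of ~1.16e-4 corresponds to a roughly 2-million-year-old sample,
# and 0.2 percent precision on that age is about +/- 4,000 years --
# tight enough in principle to separate eruptions 6,000 years apart.
age = k_ar_age(1.1626e-4)
uncertainty = age * 0.002
```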
"Improved precision for greater temporal resolution is not just about adding another decimal place to a number," says Mark. "It's far more exciting. It's like getting a sharper lens on a camera. It allows us to see the world more clearly."
Read more at Science Daily
Highly Religious People Are Less Motivated by Compassion Than Are Non-Believers
"Love thy neighbor" is preached from many a pulpit. But new research from the University of California, Berkeley, suggests that the highly religious are less motivated by compassion when helping a stranger than are atheists, agnostics and less religious people.
In three experiments, social scientists found that compassion consistently drove less religious people to be more generous. For highly religious people, however, compassion was largely unrelated to how generous they were, according to the findings, which are published in the most recent online issue of the journal Social Psychological and Personality Science.
The results challenge a widespread assumption that acts of generosity and charity are largely driven by feelings of empathy and compassion, researchers said. In the study, the link between compassion and generosity was found to be stronger for those who identified as being non-religious or less religious.
"Overall, we find that for less religious people, the strength of their emotional connection to another person is critical to whether they will help that person or not," said UC Berkeley social psychologist Robb Willer, a co-author of the study. "The more religious, on the other hand, may ground their generosity less in emotion, and more in other factors such as doctrine, a communal identity, or reputational concerns."
Compassion is defined in the study as an emotion felt when people see the suffering of others, one that motivates them to help, often at a personal risk or cost.
While the study examined the link between religion, compassion and generosity, it did not directly examine the reasons why highly religious people are less compelled by compassion to help others. However, the researchers hypothesize that deeply religious people may be guided more strongly by a sense of moral obligation than their less religious counterparts.
"We hypothesized that religion would change how compassion impacts generous behavior," said study lead author Laura Saslow, who conducted the research as a doctoral student at UC Berkeley.
Saslow, who is now a postdoctoral scholar at UC San Francisco, said she was inspired to examine this question after an altruistic, nonreligious friend lamented that he had only donated to earthquake recovery efforts in Haiti after watching an emotionally stirring video of a woman being saved from the rubble, not because of a logical understanding that help was needed.
"I was interested to find that this experience -- an atheist being strongly influenced by his emotions to show generosity to strangers -- was replicated in three large, systematic studies," Saslow said.
In the first experiment, researchers analyzed data from a 2004 national survey of more than 1,300 American adults. Those who agreed with such statements as "When I see someone being taken advantage of, I feel kind of protective towards them" were also more inclined to show generosity in random acts of kindness, such as loaning out belongings and offering a seat on a crowded bus or train, researchers found.
When they looked into how much compassion motivated participants to be charitable in such ways as giving money or food to a homeless person, non-believers and those who rated low in religiosity came out ahead: "These findings indicate that although compassion is associated with pro-sociality among both less religious and more religious individuals, this relationship is particularly robust for less religious individuals," the study found.
In the second experiment, 101 American adults watched one of two brief videos, a neutral video or a heartrending one, which showed portraits of children afflicted by poverty. Next, they were each given 10 "lab dollars" and directed to give any amount of that money to a stranger. The least religious participants appeared to be motivated by the emotionally charged video to give more of their money to a stranger.
"The compassion-inducing video had a big effect on their generosity," Willer said. "But it did not significantly change the generosity of more religious participants."
In the final experiment, more than 200 college students were asked to report how compassionate they felt at that moment. They then played "economic trust games" in which they were given money to share -- or not -- with a stranger. In one round, they were told that another person playing the game had given a portion of their money to them, and that they were free to reward them by giving back some of the money, which had since doubled in amount.
Those who scored low on the religiosity scale, and high on momentary compassion, were more inclined to share their winnings with strangers than other participants in the study.
"Overall, this research suggests that although less religious people tend to be less trusted in the U.S., when feeling compassionate, they may actually be more inclined to help their fellow citizens than more religious people," Willer said.
Read more at Science Daily
Why Bigger Animals Aren't Always Faster
New research in the journal Physiological and Biochemical Zoology shows why bigger isn't always better when it comes to sprinting speed.
"Typically, bigger animals tend to run faster than smaller animals, because they have longer legs," said Christofer J. Clemente of Harvard University, who led the research. "But this only works up to a point. The fastest land animal is neither the biggest nor the smallest, but something in between. Think about the size of an elephant, a mouse and a cheetah."
Clemente and his team studied monitor lizards to show that the same principle applies within species as well as across them, and to identify why. Because adult monitor lizards vary substantially in size, they are ideal subjects for testing how size affects speed. The researchers timed and photographed monitors ranging from two to 12 pounds as they sprinted across a 45-foot track.
The researchers found that the midsize lizards were fastest -- and they discovered why.
Using high-speed cameras and markers placed at key spots on the lizards' bodies, the researchers created computer models comparing characteristics of the lizards' running strides.
"We then looked at how the mechanics of the stride changed with body size, and we found that the changes in the stride were consistent with the changes in speed," Clemente said. "Above a certain size, lizards were changing the way they ran, perhaps due to a decreased ability of the bones and muscles to support a larger body mass."
Testing this phenomenon within a single species helps clear up questions about why the biggest animals aren't the fastest. Large animals tend to be closely related evolutionarily. So it's hard to tell whether slower speeds are due to biomechanical issues stemming from size, or from any number of other factors stemming from a shared evolutionary history.
Looking at individuals within a species rather than making cross-species comparisons helps to eliminate this phylogenetic bias. The results bolster the hypothesis that large size creates biomechanical constraints.
Read more at Science Daily
Dirty Books Reveal Medieval Reading Habits
Dirty pages of centuries-old books have revealed the fears, desires and humanity of medieval Europeans, suggesting that they were as self-interested and as afraid of illness as people are today.
Kathryn Rudy, lecturer in the School of Art History at the University of St Andrews, analyzed a number of 15th and early 16th century European prayer books to reconstruct the reading habits of people who lived in medieval times.
The work turned out to be a kind of forensic analysis of what interested people of the time. Rudy soon realized that the darkness of thumbed pages correlated with the intensity of their use and handling: the dirtiest pages were most likely also the most read, while relatively clean pages were probably neglected.
Using a densitometer, a machine that measures the darkness of a reflecting surface, Rudy was able to interpret how a reader handled a book -- which sections were the most popular and which were ignored.
"Although it is often difficult to study the habits, private rituals and emotional states of people, this new technique can let us into the minds of people from the past," Rudy said.
The densitometer spiked at a manuscript dedicated to St. Sebastian, who was often prayed to for protection against the plague.
According to Rudy, the result shows that the reader was terrified of the plague and repeated the prayer in a bid to ward off the disease.
Similarly, pages which contained prayers for personal salvation were much more soiled and worn than those asking for other people's redemption.
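Rudy's ranking of sections by page darkness can be sketched in a few lines; the section names and readings below are invented for illustration, not her data:

```python
# Hypothetical densitometer readings per section (higher = darker page,
# i.e. more handling). All numbers are made up for illustration.
readings = {
    "St. Sebastian (plague protection)": [0.62, 0.71, 0.68],
    "Personal salvation": [0.55, 0.58, 0.52],
    "Prayers for others' redemption": [0.21, 0.19, 0.24],
}

def mean(values):
    return sum(values) / len(values)

# Rank sections from most-handled to least-handled.
ranked = sorted(readings, key=lambda s: mean(readings[s]), reverse=True)
for section in ranked:
    print(f"{section}: mean darkness {mean(readings[section]):.2f}")
```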
"Religion was inseparable from physical health, time management, and interpersonal relationships in medieval times. In the century before printing, people ordered tens of thousands of prayer books -- sometimes quite beautifully illuminated ones -- even though they might cost as much as a house," Rudy said.
Treasured and read several times a day at key prayer times, those religious books appear to have produced some "side effects" -- they put the reader to sleep.
Read more at Discovery News
Apr 29, 2012
Hubble Images Searchlight Beams from a Preplanetary Nebula
The NASA/ESA Hubble Space Telescope has been at the cutting edge of research into what happens to stars like our sun at the ends of their lives. One stage that stars pass through as they run out of nuclear fuel is called the preplanetary or protoplanetary nebula stage. A new Hubble image of the Egg Nebula shows one of the best views to date of this brief but dramatic phase in a star's life.
The preplanetary nebula phase is a short period in the cycle of stellar evolution, and has nothing to do with planets. Over a few thousand years, the hot remains of the aging star in the center of the nebula heat it up and excite the gas, making it glow as it becomes a planetary nebula. The short lifespan of preplanetary nebulae means there are relatively few of them in existence at any one time. Moreover, they are very dim, requiring powerful telescopes to be seen. This combination of rarity and faintness means they were only discovered comparatively recently. The Egg Nebula, the first to be discovered, was first spotted less than 40 years ago, and many aspects of this class of object remain shrouded in mystery.
At the center of this image, and hidden in a thick cloud of dust, is the nebula's central star. While we can't see the star directly, four searchlight beams of light coming from it shine out through the nebula. It is thought that ring-shaped holes in the thick cocoon of dust, carved by jets coming from the star, let the beams of light emerge through the otherwise opaque cloud. The precise mechanism by which stellar jets produce these holes is not known for certain, but one possible explanation is that a binary star system, rather than a single star, exists at the center of the nebula.
The onion-like layered structure of the more diffuse cloud surrounding the central cocoon is caused by periodic bursts of material being ejected from the dying star. The bursts typically occur every few hundred years.
Read more at Science Daily
DNA Fingerprinting Enters 21st Century
As any crime show buff can tell you, DNA evidence identifies a victim's remains, fingers the guilty, and sets the innocent free. But in reality, the processing of forensic DNA evidence takes much longer than a 60-minute primetime slot.
To create a victim's or perpetrator's DNA profile, the U.S. Federal Bureau of Investigation (FBI) scans a DNA sample for at least 13 short tandem repeats (STRs). STRs are stretches of a repeated two- to six-nucleotide-long sequence, such as CTGCTGCTG, which are scattered around the genome. Because the number of repeats in STRs can mutate quickly, each person's set of these genetic markers differs from every other person's, making STRs ideal for creating a unique DNA fingerprint.
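The repeat-counting idea is easy to illustrate. A minimal sketch (the function name and sequences are hypothetical, not the FBI's actual procedure): a profile is essentially the number of tandem copies of a motif at each locus.

```python
def count_tandem_repeats(sequence, motif):
    """Count back-to-back copies of `motif` at the start of `sequence`."""
    count = 0
    while sequence.startswith(motif * (count + 1)):
        count += 1
    return count

# A locus reading CTGCTGCTG... carries three tandem copies of CTG;
# another person might carry four or five copies at the same locus.
print(count_tandem_repeats("CTGCTGCTGAAT", "CTG"))  # 3
```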
The FBI first introduced their STR identification system in 1998, when STRs were the darling of the genetics community. However, other identifying genomic markers were soon discovered and gained in popularity. Around the same time, high throughput sequencing allowed researchers to process vast amounts of DNA, but using methods that were ineffectual in repeated DNA, including STRs. STRs were mostly forgotten by geneticists, and innovations to study them stalled.
Now Whitehead Institute researchers have pulled STR identification into the 21st century by creating lobSTR, a three-step system that accurately and simultaneously profiles more than 100,000 STRs from a human genome sequence in one day -- a feat that previous systems could never complete. The lobSTR algorithm is described in the May issue of Genome Research.
"lobSTR found that in one human genome, 55% of the STRs are polymorphic, they showed some difference, which is very surprising," says Whitehead Fellow Yaniv Erlich. "Usually DNA's polymorphism rate is very low because most DNA is identical between two people. With this tool, we provide access to tens of thousands of quickly changing markers that you couldn't get before, and those can be used in medical genetics, population genetics, and forensics."
To create a DNA fingerprint, lobSTR first scans an entire genome to identify all STRs and what nucleotide pattern is repeated within those stretches of DNA. Then, lobSTR notes the non-repeating sequences flanking either end of the STRs. These sequences anchor each STR's location within the genome and determine the number of repeats at the STRs. Finally, lobSTR removes any "noise" to produce an accurate description of the STRs' configuration.
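lobSTR's real pipeline is far more sophisticated, but the scan-anchor-count idea described above can be sketched with a regular expression. All names and sequences below are hypothetical stand-ins, not lobSTR's actual code:

```python
import re

def repeats_at_locus(read, motif, left_flank, right_flank):
    """Anchor an STR by its unique flanking sequences, then count
    how many tandem copies of `motif` lie between the flanks."""
    pattern = (re.escape(left_flank)
               + "((?:" + re.escape(motif) + ")+)"
               + re.escape(right_flank))
    match = re.search(pattern, read)
    if match is None:
        return None  # read does not span this locus
    return len(match.group(1)) // len(motif)

read = "GGAT" + "CA" * 7 + "TTCG"   # a made-up read with 7 CA repeats
print(repeats_at_locus(read, "CA", "GGAT", "TTCG"))  # 7
```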
According to Melissa Gymrek, who is the first author of the Genome Research paper, lobSTR's ability to accurately and efficiently describe thousands of STRs in one genome has opened up many new research opportunities.
Read more at Science Daily
Newest Wildlife Tracking Tool: Leeches?
Leeches may be a wildlife biologist's new best friend when in the field.
A team of Danish and British researchers has found that DNA (deoxyribonucleic acid, the genetic code for life) in the blood consumed by leeches can be used to confirm the presence of mammals, particularly shy, hard-to-find or rare ones.
This provides a cheap, simple alternative to traditional tracking tools, from sensor-equipped cameras to hair and feces collection. And, unlike the elusive mammals researchers are often seeking, the leeches come to humans in the hopes of a good meal.
First, researchers tested 20 medicinal leeches that had been fed goat blood at the Copenhagen Zoo. The leeches contained traces of goat DNA for more than four months after their blood meal, they found. That genetic signature would allow researchers to identify the mammals a leech had come into contact with.
Next, they tested the technique in the field, by catching leeches in a Vietnamese rain forest. Of 25 leeches tested, 21 contained traces of DNA from local mammals, including an Annamite striped rabbit, which has not been seen in the area since its discovery in 1996.
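The identification step amounts to matching DNA fragments recovered from a leech's blood meal against a reference library of known species sequences. A toy sketch (all sequences are invented; real studies use curated barcode databases):

```python
# Toy barcode lookup: match a recovered fragment against reference
# species sequences. All sequences here are made-up stand-ins.
reference_barcodes = {
    "Annamite striped rabbit": "ATCGGATTACCA",
    "Goat": "GGCATTCGATCA",
}

def identify_host(fragment):
    """Return every reference species whose barcode contains the fragment."""
    return [species for species, barcode in reference_barcodes.items()
            if fragment in barcode]

print(identify_host("GGATTA"))  # ['Annamite striped rabbit']
```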
Read more at Discovery News
Older People Hold Stronger Belief in God
Across the world, people have varying levels of belief (and disbelief) in God, with some nations being more devout than others. But new research reveals one constant across parts of the globe: As people age, their belief in God seems to increase.
The new study is based on data collected as part of the General Social Survey by researchers at the National Opinion Research Center (NORC) at the University of Chicago.
The researchers looked at data from 30 countries where surveys, taken at two or more time points between 1991 and 2008, asked residents about their belief in God. Participants answered three main "belief" questions, including their level of belief (from strong to atheistic), their changing beliefs over their lifetime and their attitude toward the notion that God is concerned with their personal lives.
Age seemed to be a big factor in belief. Belief in God was highest among older adults, with 43 percent of those 68 and older saying they are certain that God exists, compared with 23 percent of those 27 and younger, averaged across the countries surveyed.
"Looking at differences among age groups, the largest increases in belief in God most often occur among those 58 years of age and older," NORC researcher Tom W. Smith said in a statement, referring to the change in belief between the 58 to 67 age group and those 68 and older. "This suggests that belief in God is especially likely to increase among the oldest groups, perhaps in response to the increasing anticipation of mortality."
Atheism ranged from 52 percent in the former East Germany to less than 1 percent in the Philippines; a largely opposite pattern was found for strong belief in God, with 84 percent of Philippine residents indicating such and 8 percent of those in East Germany saying they were certain God existed. The lowest "strong belief" in God came from Japan, where the level was 4 percent.
Overall, predominantly Catholic societies, like the Philippines, showed the strongest belief in God. The United States stood out for its high belief in God among developed countries. At the other end of the religion spectrum, atheism seemed to have the strongest hold in northwest European countries, such as those of Scandinavia, and in the former Eastern Bloc, excluding Poland.
Support for the concept that God is concerned with people in a personal way ranged from 8 percent in the former East Germany to 82 percent in the Philippines. About 68 percent of individuals in the United States held that personal view of God.
Over the study period, just five of the countries showed consistent growth in their belief in God: West Germany, Israel, Japan, Russia and Slovenia. Meanwhile, 16 countries showed a consistent decline in belief, including Australia, Austria, East Germany, Great Britain, Ireland, the Netherlands, New Zealand, Northern Ireland, Norway and Poland. Some countries showed a mixed pattern, with some measures moving toward belief and others away.
Read more at Discovery News