Jan 18, 2014
A team of researchers at the University of Melbourne led by Piers Howe presented test subjects with pairs of color photographs of the same person’s face. In some cases the two photographs were identical.
In others there were minor but significant differences (for example in one photo the person might be wearing glasses, or have a different hairstyle). Each photograph was seen for one and a half seconds, with a one-second break between the images. The subjects were then asked to determine whether or not a change had occurred—and if it had, to correctly identify the change from a list of possible options.
The researchers conclude, “In this study we have provided direct behavioural evidence that observers can regularly detect when a change has occurred without necessarily being able to identify what has changed…. We found that this ability to detect unidentified changes is not unique to images containing faces.” Though the general phenomenon (known as change blindness) has been known for decades, according to Dr. Howe this is the first scientific study to demonstrate that people can reliably sense changes that they cannot visually identify.
Illusions of ESP
Humans pick up subconscious visual cues from their environment and assimilate them into their knowledge without realizing it. Say, for example, in a test a subject momentarily meets a person, exchanges a few words, and is afterward asked to give as much information as possible about the person they met for less than a minute. Without saying a word, we absorb enormous amounts of information about people: how a man dresses gives clues about his lifestyle and economic class; how a woman speaks can provide key information about her education, upbringing, and even nationality; their general physique gives clues about health, level of fitness, and even career (a typical construction worker’s body will look different from a ballet dancer’s, or a football player’s).
None of this information is completely accurate, of course; these inferences are what psychologists call heuristics, or general rules of thumb that are likely to be correct based upon common sense, logic, and probability.
In a nutshell, you gather information, or notice that something has changed, but you don’t know how or why you know it. Since you’re not aware of noticing the change, the information seems to come from outside yourself, perhaps in the form of intuition or even psychic information.
Dr. Howe notes that from the perspective of a person who “knows without knowing” (that is, knows something but doesn’t know how or why she knows it), “the experience was similar to that of a sixth sense, in that they could sense information that they believed that they could not see. We were able to show how this process worked and debunk the claim that this was due to a quasi-magical ability such as the sixth sense. The point is that people can sometimes get the strong impression that they can sense changes that they cannot see. What we showed was that while this sensing ability is indeed real, it has nothing to do with a sixth sense, and can be explained in terms of known visual processes.”
Read more at Discovery News
The ring, which surrounds the star HD 142527 about 456 light-years away in the southern constellation Lupus, is asymmetric in that it has a portion that is noticeably denser than the rest. This northern region, which shines brightly for ALMA in submillimeter-wave radio emissions, is located incredibly far from the star itself — about 22 billion kilometers, or five times the distance between the sun and Neptune.
“We are very surprised at the brightness of the northern side,” said Misato Fukagawa, the leader of the team and an assistant professor at Osaka University. “The brightest part in submillimeter wave is located far from the central star, and the distance is comparable to five times the distance between the Sun and the Neptune. I have never seen such a bright knot in such a distant position.”
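That distance comparison is easy to sanity-check. A quick back-of-the-envelope sketch (using standard values for the astronomical unit and Neptune's mean orbital distance, which the article does not give) confirms that 22 billion kilometers is roughly five Sun-Neptune distances:

```python
# Rough check of the distance comparison (assumed reference values).
AU_KM = 1.496e8       # 1 astronomical unit in km
NEPTUNE_AU = 30.1     # Neptune's mean distance from the Sun, in AU

sun_neptune_km = NEPTUNE_AU * AU_KM   # ~4.5 billion km
knot_distance_km = 22e9               # bright knot's distance from HD 142527

ratio = knot_distance_km / sun_neptune_km
print(round(sun_neptune_km / 1e9, 1))  # ~4.5 (billion km)
print(round(ratio, 1))                 # ~4.9, i.e. roughly five Sun-Neptune distances
```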
A “bright knot” indicates the densest clustering of material in that portion of the ring — just the right kind of scenario to spark the formation of protoplanets, based on current models.
“When a sufficient amount of material is accumulated, planets or comets can be formed here,” Fukagawa added.
This is also the first firm evidence of planetary formation observed so far from the central star in a protoplanetary disk.
Based on measurements of the dense knot’s submillimeter emission strength and temperature, Fukagawa’s team concludes that either rocky planets or giant Jupiter-sized worlds are actively forming around HD 142527.
While gaps in the protoplanetary disk around this star have previously been observed, indicating the likelihood of planetary formation, this is the first time direct observations have been made within the internal part of the dust ring itself.
Watch an animation of the HD 142527 system below:
These findings put one more card in the deck of what astronomers know about how planets form around other stars — and thus how the planets in our own solar system probably formed as well.
Read more at Discovery News
Jan 17, 2014
What's more, the extensive DNA analysis -- published in the latest PLoS Genetics -- found that dogs are more closely related to each other than to wolves, regardless of their geographic origin. The genetic overlap seen today between dogs and wolves is therefore likely due to interbreeding after dog domestication.
"The common ancestor of dogs and wolves was a large, wolf-like animal that lived between 9,000 and 34,000 years ago," Robert Wayne, co-senior author of the study, told Discovery News. "Based on DNA evidence, it lived in Europe."
For the study, Wayne, a professor in UCLA's Department of Ecology and Evolutionary Biology, and his colleagues generated genome sequences from three gray wolves: one each from China, Croatia and Israel, representing three regions where dogs are believed to have originated.
The researchers also produced genomes for two dog breeds: a basenji, a breed that originates in central Africa, and a dingo from Australia. Both locations have been historically isolated from modern wolf populations. The scientists, co-led by John Novembre, additionally sequenced the genome of a golden jackal to serve as an example representing earlier divergence.
Instead of all three dogs being closely related to one of the wolf lineages, or each dog being related to its closest geographic counterpart, the DNA points to the dogs having descended from an unknown wolf-like ancestor.
Wayne explained that many animals went extinct during the late Pleistocene (from about 20,000 to 12,000 years ago), a period marked by a global ice age. Coincidentally -- or maybe not -- modern humans also became more prevalent in Europe at this time. It could be that humans led to the extinction of some animals at that time, but the jury is still out on the issue.
Dogs clearly were not in that group. Wayne now believes that dog and human interactions went through three primary stages:
1- Hunter gatherers, possibly even Neanderthals, interacted with dogs, probably benefiting from their presence. For example, dogs might have kept other, more dangerous, carnivores out of the way. They could have also helped with hunting.
2- With the emergence of agriculture, dogs lived near humans and adapted to an agricultural diet. Prior studies have found that dogs in such regions possess higher numbers of amylase genes that help to digest starch. Wolves have these genes too, the scientists found, but usually not in such high amounts.
3- In more recent history, humans have selectively bred dogs, which has dramatically changed the appearance, behavior and other attributes of dogs.
Throughout this overall period of time, interbreeding with wolves occurred, and still happens, further complicating the genetic relationship between wolves and dogs.
Read more at Discovery News
Excavated in the heart of London more than 25 years ago and dated to between 120 and 160 A.D., the skulls are believed to have belonged to defeated gladiators or victims of Roman soldiers’ practice of “headhunting,” in which heads of enemies were displayed as trophies.
“At least one of the skulls shows evidence of being chewed at by dogs, so it was still fleshed when it was lying in the open,” said Rebecca Redfern, from the Center for Human Bioarchaeology at the Museum of London.
The skulls and bones appear to have belonged to about 40 young men. They were excavated in 1988 from a Walbrook stream site within the Roman city walls and deposited at the nearby Museum of London.
It wasn’t an unusual finding. Skulls and human remains have been recovered from the Walbrook Valley for over 175 years. They were often interpreted as bones washed out of Roman cemeteries, or victims of the Boudican rebellion — decapitated and thrown into the river when the Iceni tribe, who led a revolt against the Roman Empire in 60-61 A.D., torched Roman settlements and towns.
But improved forensic techniques revealed that the skulls had a different fate.
According to Redfern and colleague Heather Bonney, from the Earth Sciences Department of the Natural History Museum, the remains were deposited over a 40-year period in 11 pits or dumps.
“Therefore, it was not just one event,” Redfern said.
The majority of the skulls had numerous blows to the head, which were probably the cause of death.
“Many also had healed injuries, suggesting that violence was a common feature of their life,” Redfern and Bonney wrote in the Journal of Archaeological Science.
“As there is no evidence for warfare or civil unrest in London at the time, the two most likely scenarios to explain this evidence are that they represent deaths in the arena — executed criminals or defeated gladiators — or trophy heads displayed at the fort,” Redfern told Discovery News.
The remains belonged to men mostly between the ages of 25 and 35 and consisted of a number of bones and 39 skulls. They feature direct, blunt force blows to the face, mouth and sides of the head, possibly the result of arena combats.
“These are the first human bones that could be the remains of gladiators from Britain,” Redfern said.
Other skulls, one in particular, show evidence of decapitation marks. Decapitation was a way of finishing off gladiators, but also criminals executed in Londinium’s amphitheater, which at the time was close to the Walbrook pits.
The evidence for decapitation, the large number of skulls and the unusual injuries observed on a male individual all would support the hypothesis that some of these remains derive from trophy heads.
“Nor does the evidence exclude the possibility that the fleshed/decomposing material was displayed, without mounting or suspension, at the fort or forum, with their eventual disposal in the nearby ritual space of the Walbrook Valley,” the researchers wrote.
Read more at Discovery News
News of the errant rock was announced by NASA Mars Exploration Rover lead scientist Steve Squyres of Cornell University at a special NASA Jet Propulsion Laboratory “10 years of roving Mars” event at the California Institute of Technology (Caltech), Pasadena, Calif., on Thursday night. The science star-studded public event was held in celebration of the decade since twin rovers Spirit and Opportunity landed on the Red Planet in January 2004.
While chronicling the scientific discoveries made by both rovers over the years, Squyres discussed the recent finding of suspected gypsum near the rim of Endeavour Crater — a region of Meridiani Planum that Opportunity has been studying since 2011 — and the discovery of clays that likely formed in a pH-neutral wet environment in Mars’ past. While these discoveries have been nothing short of groundbreaking, Squyres shared the Mars rover team’s excitement for that one strange rock, exclaiming: “Mars keeps throwing new stuff at us!”
“It’s about the size of a jelly doughnut,” Squyres told Discovery News. “It was a total surprise, we were like ‘wait a second, that wasn’t there before, it can’t be right. Oh my god! It wasn’t there before!’ We were absolutely startled.”
But the rover didn’t roll over that area, so where did Pinnacle Island come from?
Only two options have so far been identified as the rock’s source: 1) the rover “flipped” the object as it maneuvered, or 2) it landed there, right in front of the rover, after a nearby meteorite impact event. The impact ejecta theory, however, is the less likely of the two.
“So my best guess for this rock … is that it’s something that was nearby,” said Squyres. “I must stress that I’m guessing now, but I think it happened when the rover did a turn in place a meter or two from where this rock now lies.”
Opportunity’s front right steering actuator has stopped working, so Squyres identified that as the possible culprit behind the whole mystery.
Each wheel on the rover has its own actuator. Should an actuator jam or otherwise fail, the robot’s mobility can suffer. In the case of this wheel, it can no longer turn left or right. “So if you do a turn in place on bedrock,” continued Squyres, “as you turn that wheel across the rock, it’s gonna kinda ‘chatter.’” This jittery motion across the bedrock may have propelled the rock out of place, “tiddlywinking” the object from its location and flipping it a few feet away from the rover.
Never missing a scientific opportunity, Opportunity scientists hope to study the bright rock. “It obligingly turned upside down, so we’re seeing a side that hasn’t seen the Martian atmosphere in billions of years and there it is for us to investigate. It’s just a stroke of luck,” he said.
“You think of Mars as being a very static place and I don’t think there’s a smoking hole nearby so it’s not a bit of crater ejecta, I think it’s something that we did … we flung it.”
Although this is the leading theory behind the case of the random rock, Squyres pointed out that the investigation is still under way and it will be a few days before his team can definitively say where Pinnacle Island came from.
Read more at Discovery News
The biologist, now at Baylor College of Medicine in Houston, hoped to resolve a major debate that had rocked biology in different incarnations for more than 100 years. Were organisms capable of altering themselves to meet the needs of their environment, as Jean Baptiste Lamarck had proposed in the early 1800s? Or did mutations occur randomly, creating a mixture of harmful, harmless or beneficial outcomes, which in turn fueled the trial-and-error process of natural selection, as Charles Darwin proposed in “On the Origin of Species”?
Although Darwin’s ideas have clearly triumphed in modern biology, hints of a more Lamarckian style of inheritance have continued to surface. Rosenberg’s experiments were inspired by a controversial study, published in the late 1980s, that suggested that bacteria could somehow direct their evolution, “choosing which mutations will occur,” the authors wrote — a modern molecular biologist’s version of Lamarckian theory.
Rosenberg’s results, published in 1997, disputed those findings, as others had before, but with a twist. Rather than targeting specific traits, as Lamarck’s theory would have predicted, the mutations struck random genes, with some good outcomes and some bad. However, the process wasn’t completely random. Rosenberg’s findings suggested that bacteria were capable of increasing their mutation rates, which might in turn produce strains capable of surviving new conditions.
|Biologist Susan Rosenberg of Baylor College of Medicine in Houston studies how bacteria mutate when under stress.|
Rosenberg expected the biology community to be relieved. Darwin, after all, had prevailed. But some scientists questioned the findings. Indeed, the research triggered debates that played out in the pages of scientific journals for several years. Accurately measuring mutation rates can be tricky, and given that most mutations are harmful to the cell, boosting their frequency seemed like a risky evolutionary move.
Over the past decade, however, labs around the world have found similar patterns in bacteria, human cancer cells and plants. And Rosenberg and others have pinpointed the molecular mechanisms underlying the stress-induced mutations, which vary from organism to organism.
Scientists are now beginning to explore how these mechanisms can be targeted for medical treatments, such as new cancer therapies and long-lasting antibiotics. The research provides insight into how both cancer cells and pathogenic bacteria evolve resistance to treatment, a stubborn and deadly problem that has plagued physicians and drug developers.
|Bacterial colonies mutate more frequently when put under stress, as shown by the visible mutation in these blue-green colonies.|
Most scientists now accept that stress boosts mutation rates in some organisms, although questions remain regarding how much the phenomenon contributes to their evolution. “What’s controversial now is whether cells evolved to do this to create mutations,” said Patricia Foster, a biologist at Indiana University in Bloomington.
In 1943, Max Delbrück and Salvador Luria, two of the founding fathers of molecular biology, performed a landmark experiment designed to examine the nature of mutation. They showed that mutations in bacteria arise spontaneously, rather than in response to a specific environmental pressure. The work, which ultimately won them a Nobel Prize, was all the more impressive given that scientists did not yet know the structure of DNA.
We now know that mutations arise in a variety of ways, typically when a cell is copying or repairing its DNA. Every so often, the molecular machinery that makes DNA inserts the wrong building block, or the copying machinery jumps elsewhere in the genome and copies the wrong piece. Those changes can have no effect, or they can alter the structure of the protein that the DNA produces, changing its function for better or, more often, for worse.
|Under stressful conditions, such as when food is scarce, E. coli bacteria employ an enzyme that tends to make mistakes when copying DNA.|
The debate surfaced again in 1988, when the biologist John Cairns and collaborators at Harvard University made a provocative proposal in the journal Nature: that bacteria could somehow choose which genes to mutate. The evidence? Bacteria incapable of digesting a sugar called lactose evolved that ability when given no other alternative food. “The paper was hugely controversial,” recalled Foster, a friend of Cairns’ who collaborated with him on follow-up studies. “Letters flew back and forth.”
The idea that cells can regulate their mutation rates is not as outlandish as it might seem. Certain immune system cells, for example, mutate much more frequently than others, enabling them to produce varieties of antibodies that can subdue novel invaders. But these cells are confined to the immune system and do not pass along their mutations to the next generation.
It was Cairns’ finding that inspired Rosenberg to undertake her experiments. She suspected that his proposal was wrong, but not entirely. “People fought about it for five years in the front pages of major journals,” she said. “It was clear to me that it was a hugely important question.”
Subsequent research from both Rosenberg and Foster showed that mutations were scattered across the E. coli genome, rather than directed to specific genes, as Cairns had proposed. (Cairns abandoned his hypothesis after follow-up experiments with Foster.) They also found that stress, including lack of food, was a crucial factor in boosting the mutation rate.
“It was a surprise for people,” Rosenberg said. “Cells actually decide to turn up their mutation rate when they are poorly adapted to the environment. That’s a different kind of picture from constant random mutation that is blind to the environment the cell is in.”
|A network of more than 90 proteins is required to trigger stress-induced mutations in E. coli bacteria.|
Some scientists are still skeptical, if not about the phenomenon itself, then about how significant it is for an organism’s survival and evolution. At the heart of the debate is a paradox. Most random mutations will be harmful to the organism, knocking out vital proteins, for example. Therefore, more frequent mutations would be likely to generate a less-fit population. “People have still been doubting the phenomenon because they believe that it would be maladaptive,” said Foster. “Increasing the mutation rate would increase deleterious mutations as well as advantageous ones.” Some scientists think that evolution would not select such a mechanism, she said.
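The tradeoff Foster describes can be illustrated with a toy simulation (the population size, mutation rates, and fraction of beneficial mutations below are illustrative assumptions, not measured values): raising the mutation rate tenfold yields roughly ten times as many beneficial mutants, but also roughly ten times as many deleterious ones.

```python
import random

random.seed(42)  # reproducible toy run

def mutants(pop_size, mut_rate, p_beneficial=0.001):
    """Count (deleterious, beneficial) mutants in one generation.
    Assumes the vast majority of mutations are harmful."""
    deleterious = beneficial = 0
    for _ in range(pop_size):
        if random.random() < mut_rate:        # did this cell mutate?
            if random.random() < p_beneficial:
                beneficial += 1
            else:
                deleterious += 1
    return deleterious, beneficial

low = mutants(1_000_000, 1e-3)   # baseline mutation rate
high = mutants(1_000_000, 1e-2)  # ~10x "stress-induced" rate

print(low, high)  # the stressed population gains extra winners but
                  # pays for them with many more harmful mutations
```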
Read more at Wired Science
|Steller’s sea cow grew to an astonishing 33 feet long and 24,000 pounds while its head stayed comically small.|
He and a handful of other men were shipwrecked. They’d run aground on a small frozen island between Russia and what is now Alaska and had little food. Their captain, the famed Vitus Bering, was dead of scurvy. Steller, a brilliant man who did not suffer fools gladly, was woefully unpopular among the crew, who happened to be fools. Furious dispute had erupted when, no joke, Steller insisted they eat their vegetables to stave off scurvy.
But Steller needed their help to handle the colossal creature, a cousin of the manatee that reached a mind-boggling 33 feet long and 24,000 pounds. For perspective, 24,000 pounds is equal to 20 manatees, or four killer whales, or one school bus, including all the kids and their books and Lunchables and whatnot.
The crew, however, was more concerned with constructing a new boat from the wreckage, as Steller recounted in his posthumously published work The Beasts of the Sea. But he succeeded in using tobacco to bribe men to help him pull the guts out of the critter, including a stomach that measured 6 by 5 feet. The hired hands, though, “in their ignorance and dislike for the work,” jerked mercilessly at the organs and tore them to shreds, not leaving a single one intact.
Between the foxes and fools and freezing rain and lack of reference books, it’s a miracle he could compile so astonishingly thorough a description of the beast that would take his name: Steller’s sea cow (Hydrodamalis gigas). It was a new sirenian, an order of marine mammals including manatees and dugongs so named because they flash and scream when threatened (no they don’t — they’re named after the sirens of Greek mythology, who, like these animals, frequented shorelines).
|The Vitus Bering expedition shipwrecks near birds that the artist spent literally 20 seconds painting.|
While no one is exactly sure why the sea cow’s distribution had shrunk so dramatically in the millennia before its introduction to science, we know why those final 2,000 perished. Their niche, according to paleontologist Daryl Domning, had been compromised.
In the mid-18th century, the Russian market for sea otter pelts went wild in what is known as the Fur Rush, a blood orgy that nearly wiped the creature from the planet. “What that did was remove a predator of sea urchins,” said Domning, “and so when they knocked back the sea otter populations, then the sea urchins would have proliferated. We’ve observed this happening in the modern era. And with more sea urchins, they would feed on the kelp, which was the sea cow’s food supply.”
|A Steller’s sea cow grazes as a sea otter applauds its efforts. I jest. The otter is just holding a sea urchin … I think. This is from a U.S. government document. They don’t exactly splurge on artwork over there.|
Beyond the loss of kelp, European otter hunters wouldn’t hesitate to slaughter sea cows for their plentiful meat. And the northern Pacific wasn’t even an ideal habitat for sea cows to begin with, on account of these creatures preferring temperate zones. Indeed, during cruel arctic winters, Steller said, they became so skinny that “all the ribs” showed.
It’s quite the feat to fuel a body the size of a school bus — on kelp no less, which isn’t exactly calorie-rich. So the sea cow would eat incessantly, and “because they are so greedy they keep their heads always under water, without regard to life and safety,” Steller wrote.
Thus sea cows gently trudged the shallows in imposing yet harmless herds, hoovering up kelp until full, at which point they would roll over on their backs and sleep, floating around like bloated, overcooked sausages. Their rather aloof demeanor, though, in addition to their seeming disregard for human contact, made them easy targets.
“Hence a man in a boat, or swimming naked,” wrote Steller, for whatever reason finding the need to point out the nudity, “can move among them without danger and select at ease the one of the herd he desires to strike — and accomplish it all while they are feeding.”
Having spent millions of years largely beyond reach of predators because of its size, the sea cow suddenly found itself outmatched by man, in this particular case the starving crew. Steller’s detailed accounts of their often wildly inhumane hunts are unsettling, and don’t require elaboration here. Those interested may read for themselves. Brutality notwithstanding, it’s a truly fascinating report.
These hunts were massacres, though Steller wrote that the young were far harder to pursue than adults, since they were able to move about much more “vigorously.” This is contrary to the general order of things in nature: Lions, for example, target young prey for their weakness and sluggishness, not just to be jerks.
One might wonder why such a massive, ungainly body would evolve in the first place. “That is most obviously an adaptation to cold weather,” said Domning. “They simply have a better surface-to-volume ratio, less surface area per unit of volume, so they’re better insulated against the cold.”
|Georg Steller atop a female sea cow on July 12, 1742, with two crewmen who sooo don’t want to be here right now.|
“And also they got rid of their finger bones, basically,” added Domning. “They had very short, stubby flippers, which would among other things have cut down the rate of heat loss.” All brilliant adaptations in an amazing beast that humankind knew far too briefly.
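Domning's surface-to-volume argument follows from simple geometry: for a sphere, the ratio of surface area to volume is 3/r, so it shrinks as the body grows. The radii below are made-up stand-ins for a manatee-sized and a sea-cow-sized animal:

```python
import math

def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere, in 1/m (equals 3 / r)."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

small = surface_to_volume(0.5)  # rough manatee-scale body
large = surface_to_volume(1.5)  # rough sea-cow-scale body

print(small, large)  # 6.0 vs 2.0: the bigger body has a third the
                     # relative surface area through which to lose heat
```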
As for Georg Steller, he eventually got off that island. After nine months of misery, the crew finally finished building the makeshift boat and made it back to mainland Russia. All told, on that strange journey Steller had recorded four marine mammals new to science: Steller’s sea lion, the sea otter, the fur seal, and the glorious sea cow.
Read more at Wired Science
Jan 16, 2014
Planets orbiting stars outside the Solar System are now known to be very common. These exoplanets have been found orbiting stars of widely varied ages and chemical compositions and are scattered across the sky. But, up to now, very few planets have been found inside star clusters. This is particularly odd as it is known that most stars are born in such clusters. Astronomers have wondered if there might be something different about planet formation in star clusters to explain this strange paucity.
Anna Brucalassi (Max Planck Institute for Extraterrestrial Physics, Garching, Germany), lead author of the new study, and her team wanted to find out more. "In the Messier 67 star cluster the stars are all about the same age and composition as the Sun. This makes it a perfect laboratory to study how many planets form in such a crowded environment, and whether they form mostly around more massive or less massive stars."
The team used the HARPS planet-finding instrument on ESO's 3.6-metre telescope at the La Silla Observatory. These results were supplemented with observations from several other observatories around the world. They carefully monitored 88 selected stars in Messier 67 over a period of six years to look for the tiny telltale motions of the stars towards and away from Earth that reveal the presence of orbiting planets.
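Those telltale motions are radial-velocity wobbles. As a rough illustration (a sketch assuming the standard circular-orbit formula, an edge-on orbit, and round numbers matching one of the reported planets: about a third of Jupiter's mass on a seven-day orbit around a Sun-like star), the induced stellar wobble is only a few tens of meters per second:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
DAY = 86400.0      # s

def rv_semi_amplitude(m_planet_kg, m_star_kg, period_s, incl_deg=90.0):
    """Stellar radial-velocity wobble K (m/s) for a circular orbit:
    K = (2 pi G / P)^(1/3) * m_p sin(i) / (M_star + m_p)^(2/3)."""
    sin_i = math.sin(math.radians(incl_deg))
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * sin_i
            / (m_star_kg + m_planet_kg) ** (2 / 3))

# ~1/3 Jupiter mass, ~7-day orbit, Sun-like star
k = rv_semi_amplitude(M_JUP / 3, M_SUN, 7 * DAY)
print(round(k))  # ~35 m/s -- small, but within HARPS's reach
```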
This cluster lies about 2500 light-years away in the constellation of Cancer (The Crab) and contains about 500 stars. Many of the cluster stars are fainter than those normally targeted for exoplanet searches and trying to detect the weak signal from possible planets pushed HARPS to the limit.
Three planets were discovered, two orbiting stars similar to the Sun and one orbiting a more massive and evolved red giant star. The first two planets both have about one third the mass of Jupiter and orbit their host stars in seven and five days respectively. The third planet takes 122 days to orbit its host and is more massive than Jupiter.
The first of these planets proved to be orbiting a remarkable star -- it is one of the most similar solar twins identified so far and is almost identical to the Sun. It is the first solar twin in a cluster that has been found to have a planet.
Two of the three planets are "hot Jupiters" -- planets comparable to Jupiter in size, but much closer to their parent stars and hence much hotter. All three are closer to their host stars than the habitable zone where liquid water could exist.
Read more at Science Daily
Be-type stars are quite common across the Universe. In our Galaxy alone more than 80 of them are known in binary systems together with neutron stars. 'Their distinctive property is their strong centrifugal force: they rotate very fast, close to their break-up speed. It's like they were cosmic spinning tops,' says Jorge Casares of the Instituto de Astrofísica de Canarias (IAC) and La Laguna University (ULL). Casares is the lead author and an expert in stellar-mass black holes (he presented the first solid proof of their existence back in 1992).
The newly discovered black hole orbits the Be star known as MWC 656, located in the constellation Lacerta (the Lizard) -- 8,500 light years from Earth. The Be star rotates so fast that its surface speed exceeds 1 million kilometres per hour. 'We started studying this star back in 2010, when space telescopes detected transient gamma-ray emission coming from its direction,' explains Marc Ribó, of the Institut de Ciències del Cosmos of Barcelona University (ICC/IEEC-UB). 'No more gamma-ray emission has subsequently been detected, but we found that the star was part of a binary system,' he adds.
A detailed analysis of its spectrum allowed scientists to infer the characteristics of its companion. 'It turned out to be an object with a mass between 3.8 and 6.9 solar masses. An object like that, invisible to telescopes and with such large mass, can only be a black hole, because no neutron star with more than three solar masses can exist,' states Ignasi Ribas, of CSIC in the Instituto de Ciencias del Espacio (IEEC-CSIC).
The black hole orbits the (more massive) Be star and is fed by matter ejected from the latter. 'The high rotation speed of the Be star causes matter to be ejected into an equatorial disc. This matter is attracted by the black hole and falls on to it, forming another disc -- called an "accretion disc." By studying the emission from the accretion disc we could analyse the motion of the black hole and measure its mass,' comments Ignacio Negueruela, a lecturer at the University of Alicante (UA).
Scientists believe this object to be a nearby member of a hidden population of Be stars paired with black holes. 'We think these systems are much more common than previously thought, but they're difficult to detect because their black holes are fed from gas ejected by the Be stars without producing much radiation, in a "silent" way, so to speak. However, we hope to detect other similar binary systems in the Milky Way and other nearby galaxies by using bigger telescopes, such as the Gran Telescopio Canarias,' concludes Casares.
Also participating in the study with Jorge Casares, Ignacio Negueruela, Marc Ribó and Ignasi Ribas are Josep M. Paredes, of the Institut de Ciències del Cosmos of Barcelona University (ICC/IEEC-UB), and Artemio Herrero and Sergio Simón, both from the IAC and ULL.
Black holes, an ongoing challenge
The detection of black holes has been a challenge since their existence was first surmised by John Michell and Pierre Laplace in the 18th century. Because they are invisible -- their enormous gravitational force prevents light from escaping -- telescopes cannot detect them directly. However, black holes can occasionally trigger high-energy radiation from the environment surrounding them and can thus be traced by X-ray satellites. This is the case with active black holes, fed by matter transferred from a nearby star. If violent X-ray emission is detected from a place where nothing but a normal star is seen, a black hole might be hiding there.
Using this method, researchers have discovered 55 potential black holes over the last 50 years. Seventeen of them have what astronomers call a 'dynamic confirmation': the feeding star has been localised, allowing for the mass of its invisible companion to be measured. If it is above three solar masses, then it is considered to be a black hole.
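That "dynamic confirmation" rests on the binary mass function, which sets a strict lower limit on the unseen companion's mass using only the feeding star's orbital period and velocity swing. A minimal sketch; the period and velocity values below are illustrative assumptions, not the measured MWC 656 parameters:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def mass_function(period_days, k_kms):
    """Binary mass function f(M) = P * K**3 / (2 * pi * G), in solar masses.

    Because f(M) = M2**3 * sin(i)**3 / (M1 + M2)**2, the result is a
    strict lower limit on the mass M2 of the invisible companion.
    """
    P = period_days * 86400.0  # orbital period in seconds
    K = k_kms * 1000.0         # radial-velocity semi-amplitude in m/s
    return P * K**3 / (2 * math.pi * G) / M_SUN

# Illustrative only: a 60-day orbit with a 100 km/s velocity swing
# already demands a companion of more than ~6 solar masses.
f_min = mass_function(60.0, 100.0)
```

If that lower limit lands above the roughly three-solar-mass ceiling for neutron stars, the hidden object is classed as a black hole.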
Read more at Science Daily
The skeleton of Woseribre Senebkay, who appears to be one of the earliest kings of a forgotten Abydos Dynasty (1650–1600 B.C.) was found by a University of Pennsylvania expedition working with Egypt's Supreme Council of Antiquities. It rested in a four-chambered tomb amidst the fragmented debris of his coffin, funerary mask and canopic chest. Such chests were used to contain the organs of an individual.
Senebkay's tomb dates to about 1650 B.C., during Egypt's Second Intermediate Period, when central authority collapsed, giving rise to several small kingdoms. It was found close to a larger royal sarcophagus chamber, recently identified as belonging to king Sobekhotep (probably Sobekhotep I, ca. 1780 BC) of the 13th Dynasty.
According to the archaeologists, the kings of the Abydos Dynasty placed their burials near the tombs of earlier Middle Kingdom pharaohs, including Senwosret III of the 12th Dynasty (about 1880–1840 B.C.) and Sobekhotep I.
In fact, there is evidence for about 16 royal tombs belonging to the dynasty, whose existence was first hypothesized by Egyptologist Kim Ryholt in 1997.
"It's exciting to find not just the tomb of one previously unknown pharaoh, but the necropolis of an entire forgotten dynasty," said Josef Wegner, Egyptian Section Associate Curator of the Penn Museum, who led the University of Pennsylvania team.
Badly plundered by ancient tomb robbers, the tomb of Senebkay is modest in scale. It features a limestone burial chamber painted with images of the goddesses Nut, Nephthys, Selket, and Isis flanking Senebkay's canopic shrine.
Other texts in the tomb identify the pharaoh as the "king of Upper and Lower Egypt, Woseribre, the son of Re, Senebkay."
Senebkay's name may have appeared in a broken section of the Turin King List, a papyrus dating to the reign of Ramesses II (about 1200 B.C.), which is believed to contain the most extensive list of kings compiled by the Egyptians.
"Two kings with the throne name 'Woser...re' are recorded at the head of a group of more than a dozen kings, most of whose names are entirely lost," the Penn Museum said in a statement.
According to the archaeologists, the badly decayed remains of Senebkay's canopic chest provide important insights into the economic situation of the Abydos Kingdom, which lay in the southern part of Middle Egypt between the larger kingdoms of Thebes (Dynasties 16–17) and the Hyksos (Dynasty 15) in northern Egypt.
Read more at Discovery News
The trough’s valley is filled with ice up to 3,000 meters (9,842 feet) deep as it stretches for 300 kilometers (186 miles) between the ice-encrusted Ellsworth Mountains and the surrounding highlands. At its widest, the Ellsworth Trough spans 25 kilometers (15.5 miles). The canyon and valley system runs roughly northwest to southeast and ends by plunging into the sea.
For comparison, the Grand Canyon attains a depth of approximately 1,737 meters (about 1 mile) and extends 433 kilometers (277 miles) through the southwestern United States.
The ice-filled canyon gives geologists clues about how ice first overwhelmed Antarctica. The frozen coating of western Antarctica may have spread from the Ellsworth Mountains and surrounding highlands. The ice sheet covering the sea may have formed when the growing glaciers from the highlands reached the sea, similarly to the modern-day Antarctic Peninsula. Now, these highlands may serve as anchors for the glaciers and ice sheets as the planet continues to warm.
Scientists mapped the hidden Antarctic troughs using radar to peer beneath the ice, along with satellite images. The Geological Society of America Bulletin published their results.
“To me, this just goes to demonstrate how little we still know about the surface of our own planet,” lead author Neil Ross of Newcastle University told Forbes.
Read more at Discovery News
Jan 15, 2014
However, that bamboo buffet may have disappeared as the Tibetan Plateau rose and ushered in a cooler climate. Without bamboo, the apes may have turned to sugary fruits that rotted their teeth, reported New Scientist.
Near the end of the apes’ time on Earth, the animals’ now-fossilized teeth bore deep erosion and potential signs of decay. This may mean they ate increased amounts of acidic, sugary fruit as the bamboo dwindled, the lead author of a recent study in Quaternary International, Yingqi Zhang of the Chinese Academy of Sciences, told New Scientist.
Zhang based his dental diagnosis of Gigantopithecus’ demise on 17 teeth recently excavated from Hejiang Cave in China. The teeth were found along with fossils from rhinos, pandas, tapirs, hyenas, colobine monkeys, tigers and other animals.
The mixture of other animals suggests the giant ape may have lived in or near dense forests (monkeys and pandas) and mixed woodlands (rhinos and tapirs). The giant ape also co-existed with Homo erectus, an ancestor of humans, according to an earlier study published in Proceedings of the National Academy of Sciences.
In its forested habitat, Gigantopithecus ate a mixture of tough, fibrous grasses, likely bamboo, and fruits and seeds from plants in the fig family. Russell Ciochon, a biological anthropologist at the University of Iowa, discovered the extinct ape’s diet by examining residues left on fossilized teeth. These ancient leftovers, known as opal phytoliths, were microscopic silica structures that formed in the plants. Their distinctive shapes indicated which plants created the phytoliths. Proceedings of the National Academy of Sciences published Ciochon’s results.
Read more at Discovery News
This ancestor, the first placental mammal, lived between 88.3 and 91.6 million years ago, according to the study, published in the latest issue of Biology Letters. Placental mammals today include humans and all other mammals except those that lay eggs or have pouches (marsupials).
The study counters prior research, based solely on fossil evidence, which theorized that this “mother of all placental mammals” arose after the dinosaurs died out. The researchers instead believe that it preceded the non-avian dinosaur die-off and that we wouldn’t even be here if the dinosaurs were still around.
“When dinosaurs died out, many ecological niches became vacant, and placental mammals took over,” lead author Mario dos Reis told Discovery News. “The placental ancestor diversified and evolved into the modern mammals we see today, such as rodents, deer, whales, horses, bats, carnivores, monkeys and ultimately humans.”
“If dinosaurs had not died out, then placental mammals may not have had the opportunity to diversify the way they did, and our own species would not have evolved!” added dos Reis, a research associate in the Department of Genetics, Evolution and Environment at University College London.
He and colleagues Philip Donoghue and Ziheng Yang analyzed 36 complete mammal genomes together with information from the mammal fossil record. The results determined placental mammals originated in the Cretaceous.
Dos Reis explained that the DNA of organisms accumulates changes, called mutations, at a roughly constant rate over time. This is referred to as the “molecular clock.” For example, certain DNA in humans and other apes mutates at a pace of about 1 percent every 10 million years.
The molecular clock is not perfect, however, and it runs a bit fast in some species and a little slow in others.
Dos Reis and his team therefore “estimated the number of mutations that accumulated in each mammal lineage, corrected for the flaky clock, and together with ages from known fossils estimated the age of the placental ancestor,” he said.
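As a back-of-the-envelope sketch of that clock arithmetic (a strict clock with the ~1 percent per 10 million years rate quoted above; the team's actual analysis used a relaxed Bayesian clock corrected against fossils):

```python
def divergence_time_myr(percent_divergence, rate_percent_per_myr=0.1):
    """Strict molecular clock: age of the split between two lineages.

    Mutations accumulate along both branches since the split, so the
    divergence time is divergence / (2 * rate).  The default rate,
    0.1 percent per million years per lineage, matches the ~1 percent
    per 10 million years figure quoted for apes.
    """
    return percent_divergence / (2.0 * rate_percent_per_myr)

# e.g. two lineages differing by 1.2 percent split roughly
# 6 million years ago under this simple model
```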
Based on earlier research, it’s thought that this animal was small, nocturnal and pretty scrappy. It either lived far away from the asteroid impact site that caused the extinction of non-avian dinosaurs, or was somehow saved because of its size, habitat and/or lifestyle.
About 70 percent of all species died out during the mass extinction event 66 million years ago, with even some mammals, birds and plants going extinct then.
“To understand why the big lumbering behemoths went extinct and the gracile birds and mammals did not, we need to further explore the fossil record based on predictions shaped by our molecular analysis which, for instance, suggests the age intervals in which we should find evidence of specific mammal groups,” Donoghue told Discovery News.
Michael Benton, a professor in the School of Earth Sciences at the University of Bristol, said he believes that the DNA/molecular clock approach of estimating an animal group's age, used by dos Reis and colleagues, "applies standard, accepted, conservative approaches that take account of missing data in the fossil record."
Read more at Discovery News
This Nordic "grog" predates the Vikings. It was found buried in tombs alongside warriors and priestesses, and is now available at liquor stores across the United States, thanks to a reconstruction effort by Patrick McGovern, a biomolecular archaeologist at the University of Pennsylvania Museum of Archaeology and Anthropology and Delaware-based Dogfish Head Craft Brewery.
"You'd think, with all these different ingredients, it sort of makes your stomach churn," McGovern, the study's lead author, told LiveScience. "But actually, if you put it in the right amounts and balance out the ingredients, it really does taste very good."
McGovern began the journey toward uncovering the ingredients of ancient Nordic alcohol decades ago, when he began combing through museums in Denmark and Sweden, looking for pottery shards that held traces of old beverages. But in the mid-1990s, the technology to analyze these chemical remnants just wasn't available, he said.
More recently, McGovern and his co-authors re-examined the remnants with modern tools. They analyzed samples from four sites, two of which were grave sites in Sweden and Denmark. The oldest of these sites dated back to 1500 B.C. — more than 3,500 years ago. The oldest sample came from a large jar buried with a male warrior in Denmark. The other three came from strainer cups, used to serve wine, found in Denmark and Sweden. One of the strainer cups came from a tomb where four women were buried. One of the women, who died at around age 30, clutched the strainer in her hand.
Beer brewing goes back at least 10,000 years, and ancient humans were endlessly creative in their recipes for intoxicants. Studies of pollen content in northern European drinking vessels suggested the ancient residents drank honey-based mead and other alcoholic brews. But the exact ingredients were not well understood. Ancient texts written by Greeks and Romans proved that southern Europeans were among the first wine snobs — these authors dismissed Northern beverages as "barley rotted in water."
In fact, Nordic grog was a complex brew, McGovern and his colleagues found. The ingredients included honey, cranberries and lingonberries (acidic red berries that grow in Scandinavia). Wheat, rye and barley — and, occasionally, imported grape wine from southern Europe — formed a base for the drink. Herbs and spices — such as bog myrtle, yarrow, juniper and birch resin — added flavor and perhaps medicinal qualities.
The oldest sample, which was buried with a male warrior, was an anomaly. The jug found in that grave contained only traces of honey, suggesting that the occupant went to his grave with a jar of unadulterated mead. Because the warrior had well-crafted weapons in his tomb, he was likely of high status. Pure mead was probably a drink for the elite, because honey was expensive and scarce, the researchers reported online Dec. 23 in the Danish Journal of Archaeology.
The grog was likely a high-class beverage, McGovern said. In the 1920s, archaeologists uncovered a remarkably well-preserved burial of a young blond woman in Denmark. Dubbed "Egtved Girl" (pronounced "eckt-VED"), the corpse was buried wearing a wool string skirt with a bucket of grog at her feet. The young woman's clothes and ornaments suggest that she was a priestess who likely danced in religious ceremonies, McGovern said.
In other graves, wine-serving kits imported from southern Europe are also associated with women, McGovern said.
"That gives the impression that the women were the ones who would make the beverages in antiquity, and they were the ones that would serve it to the warriors," he said.
The imported wine strainer cups and traces of grape wine, which was only produced in southern Europe, suggest a robust trade network in this period, McGovern said. Northerners likely shipped Baltic amber southward in return for the wine and drinking utensils.
With McGovern's help, Dogfish Head recreated the Nordic grog in October 2013, using wheat, berries, honey and herbs. The only difference was that Dogfish Head's brew contains a small amount of hops, the bittering agent used in most modern beers. Hops weren't used in beers in Europe until the 1500s.
Dogfish Head's grog is called Kvasir, a name that hints at its roots. In Nordic legend, Kvasir was a wise man created by gods spitting into a jar. Two dwarfs later murdered Kvasir and mixed his blood with honey, creating a beverage that was said to confer wisdom and poetry onto the drinker.
Read more at Discovery News
If they are right the lake last existed before the start of the most recent ice age, or glacial period, and extended 100 miles farther south than proposed by previous researchers. It also held twice as much water as previously thought and rivaled the size of today's Great Lakes of North America.
The White Nile connects the lower Nile to its source at Lake Victoria to the south. For years scientists have been curious about what appear to be ancient lake shores high up on the hills around the current White Nile Valley. Previous research was limited to rough estimates of the size of the lake based on maps that lacked accurate elevation information. There were also no clues to how long ago the lake existed.
The new study, published in the January issue of the journal Geology, solves all that by using highly accurate data from the Shuttle Radar Topography Mission to map out the lake level all around the valley.
To date the shoreline, as well as the river channels that helped the nearby Blue Nile back up and fill the lake, the scientists employed a rare but naturally occurring soil element -- beryllium-10 -- which is created at a known rate over the millennia by cosmic rays raining down into Earth's atmosphere. That pegged the date of the lake to at least 109,000 years ago.
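The dating step can be sketched with the standard cosmogenic exposure-age relation. This is a simplified version that ignores erosion and burial, and the production rate and concentration used below are illustrative assumptions, not values from the Geology paper:

```python
import math

BE10_HALF_LIFE_YR = 1.387e6                # beryllium-10 half-life, years
DECAY = math.log(2) / BE10_HALF_LIFE_YR    # decay constant, 1/yr

def exposure_age_yr(atoms_per_gram, production_rate=4.0):
    """Cosmogenic Be-10 exposure age, assuming no erosion or burial.

    Concentration grows as N(t) = (P / lam) * (1 - exp(-lam * t)),
    so t = -ln(1 - N * lam / P) / lam.  production_rate is in atoms
    per gram of quartz per year; 4.0 is a typical sea-level value,
    assumed here purely for illustration.
    """
    return -math.log(1.0 - atoms_per_gram * DECAY / production_rate) / DECAY

# A measured concentration of ~4.2e5 atoms/g would correspond to an
# exposure age on the order of 110,000 years under these assumptions.
age = exposure_age_yr(4.24e5)
```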
"What I think really clinches it is that the Blue Nile was further south than it is now," said the University of Exeter's Timothy Barrows, the lead author on the paper.
"It's in the same ballpark as the Great Lakes (of North America)," said Nile lake researcher Ted Maxwell of the Smithsonian Institution, who was not involved in the latest study. "It would be like the Hoover Dam on the Colorado River, only naturally."
To compare, the White Nile mega lake would have been more than half the size of Lake Superior, or, put another way, larger than the combined areas of Lake Erie and Lake Ontario.
Read more at Discovery News
The discovery was made by astronomers using the European Southern Observatory’s HARPS exoplanet-hunting instrument attached to the 3.6-meter telescope at the La Silla Observatory in Chile and was confirmed by other collaborating observatories. The astronomers’ attention was focused on the Messier 67 open star cluster, which is located approximately 2,600 light-years away in the constellation Cancer.
It is believed that all stars, including our sun, originated within some kind of stellar cluster. A cluster is the result of a brood of stars emerging from the same stellar nursery and remaining gravitationally bound throughout their stellar evolution.
However, there is a mysterious lack of exoplanetary discoveries inside star clusters, leading astrophysicists to hypothesize that perhaps the planet-forming rules inside clusters are somehow different from those around stars that have gone on to disassociate themselves from their clusters. This mystery is what inspired the focus on this particular star cluster.
“In the Messier 67 star cluster the stars are all about the same age and composition as the sun,” said Anna Brucalassi of the Max Planck Institute for Extraterrestrial Physics, Garching, Germany, in an ESO press release. “This makes it a perfect laboratory to study how many planets form in such a crowded environment, and whether they form mostly around more massive or less massive stars.”
The cluster is composed of around 500 stars, of which HARPS monitored 88 for slight “wobbles” over six years. These wobbles betray the gravitational presence of orbiting exoplanets — as the alien worlds swing around their host stars, they are massive enough to exert a gravitational tug, shifting the star slightly off-center, allowing HARPS to detect a slight Doppler shifting of starlight received from that star. This exoplanet-hunting technique is known as the radial velocity method.
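The size of the wobble HARPS must detect follows from Kepler's third law. A minimal sketch of the radial velocity signal for a circular, edge-on orbit; the planet and star values below are illustrative, not the published measurements:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def rv_semi_amplitude(m_planet_mjup, period_days, m_star_msun=1.0):
    """Radial-velocity semi-amplitude K of the host star, in m/s.

    Circular orbit, sin(i) = 1, and m_planet << m_star:
        K = (2 * pi * G / P)**(1/3) * m_p / m_star**(2/3)
    """
    P = period_days * 86400.0
    m_p = m_planet_mjup * M_JUP
    m_s = m_star_msun * M_SUN
    return (2.0 * math.pi * G / P) ** (1.0 / 3.0) * m_p / m_s ** (2.0 / 3.0)

# A planet of one-third Jupiter's mass on a 5-day orbit around a
# sun-like star tugs it at roughly 40 m/s -- a tiny shift, but well
# within HARPS's sensitivity.
k = rv_semi_amplitude(1.0 / 3.0, 5.0)
```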
But at a distance of over 2,500 light-years, detecting the slight wobble in the faint starlight was a formidable challenge.
Two of the exoplanets are approximately one-third the mass of Jupiter and orbit their sun-like stars in five and seven days. These compact orbits ensure that the worlds aren’t remotely “Earth-like”; they are “hot-Jupiters”, hellish worlds that are baked by their host stars. The third world to be discovered is more massive than Jupiter and orbits a red giant star, taking 122 days to complete one orbit.
It is one of the two hot-Jupiters that orbits a star that appears to be the same size, age and composition as our sun.
“These new results show that planets in open star clusters are about as common as they are around isolated stars — but they are not easy to detect,” said Luca Pasquini of the ESO, Garching, Germany and co-author of the research. “The new results are in contrast to earlier work that failed to find cluster planets, but agrees with some other more recent observations. We are continuing to observe this cluster to find how stars with and without planets differ in mass and chemical makeup.”
Read more at Discovery News
Jan 14, 2014
The researchers found that the electrostatic properties of the glue that coats spider webs cause them to reach out to grab all charged particles, from pollen and pollutants to flying insects. They also showed that the glue spirals can distort Earth's electric field within a few millimetres of the web, which may enable insects to spot the webs with their antennae 'e-sensors'.
The study, published in Naturwissenschaften, shows how a quirk of physics causes webs to move towards all airborne objects, regardless of whether they are positively or negatively charged. This explains how webs are able to collect small airborne particles so efficiently and why they spring towards insects.
According to the researchers, common garden spider webs around the world could be used for environmental monitoring as they actively filter airborne pollutants with an efficiency comparable to expensive industrial sensors.
'The elegant physics of these webs make them perfect active filters of airborne pollutants including aerosols and pesticides,' said Professor Fritz Vollrath of Oxford University's Department of Zoology, who led the study. 'Electrical attraction drags these particles to the webs, so you could harvest and test webs to monitor pollution levels -- for example, to check for pesticides that might be harming bee populations.
'Even more fascinating, you would be able to detect some airborne chemicals just by looking at the shape of the webs! Many spiders recycle their webs by eating them, and would include any particles and chemicals that are electrically drawn to the web. We already know that spiders spin different webs when on different drugs, for example creating beautiful webs on LSD and terrible webs on caffeine. As a result, the web shapes alone can tell us if any airborne chemicals affect the animal's behaviour.'
Working with Dr Donald Edmonds from Oxford University's Department of Physics, Professor Vollrath showed that webs like that of the garden cross spider also cause local distortions in Earth's electric field since they behave like conducting discs. Many insects are able to detect small electrical disturbances, including bees that can sense the electric fields of different flowers and other bees.
'Pretty much all flying insects should be capable of sensing electrical disturbances,' said Professor Vollrath. 'Their antennae act as 'e-sensors' when the tips are connected to the body by insulating materials, meaning the charge at the tip will be different from the rest of the insect. As insects approach charged objects, the tips of their antennae will move by a small amount, which they may be able to feel. Bees already use e-sensors to sense flowers and other bees, so it now remains to be seen whether they might also use them to avoid webs and thus becoming dinner.'
Electrical disturbances caused by spider webs are extremely short-ranged, so it is not yet clear whether insects would be able to sense them before the web snaps out to grab them. Either way, it is clear that electrostatic charges play an important role in the insect world.
Read more at Science Daily
In the summer of 2013 archaeologists were excavating an ancient building at Sardis that was constructed after the earthquake. Underneath the floor, they found two curious containers that each held small bronze tools, an eggshell and a coin, resting just atop the remains of an earlier elite building that was destroyed during the disaster.
The objects in the odd assemblages were important in ancient rituals to keep evil forces at bay, and the archaeologists who found them believe they could be rare examples of how the earthquake affected ancient people on a personal level.
One of the eggshells found under the floor at Sardis was amazingly still intact when excavators lifted the lid on the container last summer.
"That was really fantastic," Elizabeth Raubolt of the University of Missouri, Columbia, told LiveScience. "You can almost see where they chiseled a perfect circle and then let the contents aspirate."
Raubolt has worked on the excavations at Sardis (which are led by Nick Cahill of the University of Wisconsin-Madison) as a Roman pottery specialist for the past four field seasons. When she presented her findings this month at the Archaeological Institute of America's annual meeting in Chicago, she noted that several superstitions in the ancient world involved eggs.
The Roman historian Pliny wrote about how people would immediately break or pierce the shells of eggs with a spoon after eating them to ward off evil spells. Eggshells were also put inside "demon traps" buried in modern-day Iraq and Iran to lure and disarm malevolent forces, Raubolt explained. And sometimes, whole eggs were buried at someone's gate to put a curse on that person.
"You can imagine how nice it smelled after a while," Raubolt said.
With those precedents in mind, Raubolt thinks the eggshells at Sardis served as a way to protect the people in this building from evil forces, including future earthquakes, and maybe even curses cast by others.
Nearly identical ritual deposits dating back to the early Imperial era were found around the Artemis Temple in Sardis during the early 20th-century excavations, Raubolt noted. And locals seem to have buried strange things under their floors long before the earthquake.
In one grisly example, archaeologists in the 1960s found 30 pots and jars dating back to the Lydian period, some 500 years earlier, each containing an iron knife and a puppy skeleton with butchering marks. It's not clear if those "puppy burials" are linked to the later egg entombments of the Roman era, but they at least attest to the long tradition of ritual practice in the region, Raubolt said.
It's been difficult for researchers to find direct evidence of the A.D. 17 earthquake. Archaeologists have found some large fills of earthquake debris that had been dumped to relevel the ground. They can see evidence of reconstruction on the Temple of Artemis. From literary sources and accounts of public rebuilding efforts, they know that Imperial aid flowed into Sardis from Rome. To thank the emperors, the people of Sardis even renamed themselves the "Kaisareis Sardianoi" or "Sardians of the Caesars." But how the "average Joes of antiquity" reacted to the quake has been largely unknown, Raubolt told LiveScience.
Read more at Discovery News
Elves and fairies are closely related in folklore, and though elves specifically seem to have sprung from early Norse mythology, by the 1800s fairies and elves were widely considered to be simply different names for the same magical creatures. Polls find that over half of Iceland's population believes in elves, or at least doesn't rule out the possibility of their existence.
But why do so many Icelanders believe? The passed-down tales are just part of the picture. Iceland's conception of the natural world takes on a mystical tone; pair that with environmentalism and the desire to preserve that mystical world, and magical creatures almost make sense.
In the book "Icelandic Folk and Fairy Tales" (Iceland Review Library, 1987), folklorists May and Hallberg Hallmundsson explain how the Icelandic conception of nature is intimately tied to its folklore of elves and fairies.
"Icelanders are generally very attached to their country, perhaps more so than most other peoples ... It is a love for the land itself in its physical presence, for its soil, mountains, streams, valleys, and even its fire-spewing volcanoes and frozen wastes of ice," the authors write. "To the Icelanders, the land was never just an accumulation of inanimate matter — a pile of stones here, a patch of earth there — but a living entity by itself. Each feature of the landscape had a character all its own, revered or feared as the case may be, and such an attitude was not a far cry from believing that it was actually alive."
That life spirit said to inhabit the hills and streams of this island nation has come to be personified as elves and other magical beings. While it's easy to mock such folk beliefs as backward or antiquated, most cultures profess a belief in supernatural or magical beings, including demons, angels, ghosts and genies (djinn). These elves, like the fairies of early British lore, have many human qualities and may exact revenge if mistreated or disturbed. Elves and fairies are believed to live in their own separate, hidden world and generally ignore humans, but must be treated with respect; to do otherwise invites anything from mischievous pranks to child abduction by elves.
This wouldn't be the first ecological protest to involve diminutive magical beings.
Folklorist Andy Letcher, in his "Folklore" journal article "The Scouring of the Shire: Fairies, Trolls, and Pixies in Eco-Protest Culture" (October 2001), describes ecological protests involving fairies that are very similar to the current controversy in Iceland. "Fairies have inspired a counter-cultural movement. The 1990s in Britain were marked by large and dramatic public protests against a government-sponsored programme of road building, and ... opencast quarrying," Letcher writes.
"A distinctive protest culture flourished in response to this, combining the politics of direct action and an anarcho-travelling lifestyle, with a definite neo-pagan sensibility. This culture adopted an important fairy mythology which placed protesters within an almost fairytalelike struggle between the benevolent forces of nature and a tyrannical and destructive humanity."
Letcher notes, "In this animistic view, the natural world ... is threatened by human encroachment. Protesters see themselves as aided by, or aiding, these nature spirits. Here, the forces of nature, which include fairies, are regarded as benign, as opposed to humanity, which is seen as malign, corrupt, and divorced from nature."
The evoking of fairies and elves in the struggle to preserve natural areas not only captures the public's romantic imagination but also taps into deep pre-existing social and cultural concerns about environmentalism. The theme of threatening new changes and the idea that modern ways disrupt the natural order of things are universal, and appear explicitly in many classic literary works. Perhaps the most famous is J.R.R. Tolkien's "Lord of the Rings" saga, in which the idyllic Hobbit homeland, the Shire, is threatened by dirty, polluting industrialization at the hands of the evil wizard Saruman. The triumph of peace and nature over threatening change is a key theme in Tolkien's books, and conveys a powerful message of environmentalism.
Read more at Discovery News
But why does the cold of winter smell different from the heat of summer?
One reason is that odor molecules move much more slowly as the air temperature drops, said Pamela Dalton, an olfactory scientist at the Monell Chemical Senses Center in Philadelphia. That means that there are simply fewer smells to smell on a cold, crisp day than there are on a hot and humid one.
It's the same reason why hot soup smells stronger than cold soup and why the garbage truck leaves behind the strongest odors on steamy summer days.
What's more, our noses don't work quite as well when the ambient air is cold, Dalton said. In experiments that require biopsies of olfactory receptors that lie deep inside the nose, researchers at Monell have discovered that the receptors "bury themselves a little more deeply in the nose in winter," she said, possibly as a protective response against cold, dry air.
"We're not as sensitive to odors in winter," she added. "And odors aren't as available to be smelled."
Cold air also stimulates the irritant-sensitive trigeminal nerve, said Alan Hirsch, a neurologist and psychiatrist in Chicago. The trigeminal nerve is what makes you cry when you chop an onion and delivers a hit of spiciness when you inhale a whiff of strong mint.
When odors stimulate both the trigeminal nerve and the olfactory nerve, the experience of smell becomes more intense.
There is a strong psychological component to our sense of smell, Hirsch added, and what we expect to smell has a big influence on what we actually smell.
In "The Invalid's Story" by Mark Twain, for example, a man is stuck on a train next to what he thinks is a rotting corpse but is actually a box of stinky cheese. Overwhelmed by the smell, he spends too long seeking fresh air on the freezing platform and develops a fever that ends up killing him.
"What you think a smell will be impacts whether you like it and what you perceive it to be," Hirsch said. "So, if you go outside in the winter and you are used to smelling snow or chestnuts in the fire or whatever you happen to smell outside, that's what you will interpret smells to be."
Of course, the smells that are available to be smelled differ as the seasons change. Summer brings flowers and dirt and barbeque smoke. In the most wintery of places, there isn’t much outside on cold days except snow, blustery wind and cars warming up.
Read more at Discovery News
Jan 13, 2014
The beastie, Tiktaalik roseae, represents the best-known transitional species between fish and land-dwelling animals, according to researchers. It lived 375 million years ago.
“Tiktaalik was a combination of primitive and advanced features,” co-author Edward Daeschler, Associate Curator of Vertebrate Zoology at the Academy of Natural Sciences of Drexel University, said in a press release.
While classified as a fish, Tiktaalik looked like a cross between a fish and a crocodile. It could grow to 9 feet in length, and likely spent its days hunting in shallow freshwater environments. It had gills, scales and fins, but also had features associated with terrestrial animals. These included a mobile neck, a robust ribcage and primitive lungs.
Of most interest to the researchers, its large forefins had shoulders, elbows and partial wrists, which allowed it to support itself on the ground.
The presence of these limb-like features challenges the theory that mobile, weight-bearing appendages developed only after species transitioned to life on land.
“Previous theories, based on the best available data, propose that a shift occurred from ‘front-wheel drive’ locomotion in fish to more of a ‘four-wheel drive’ in tetrapods (four-footed animals),” said co-author Neil Shubin, who is a professor of Anatomy at the University of Chicago. “But it looks like this shift actually began to happen in fish, not in limbed animals.”
Even some modern fish can walk, such as the African lungfish. You can see a bit of that, and learn more about this unusual fish, in this video.
Lungfish often look like they are slithering, more than walking, but if you see them in an aquarium, their little limbs are evident, and they do walk around.
Read more at Discovery News
Some archeologists interpreted the painting as a Google Earth-style layout of Çatalhöyük, a Stone Age settlement in modern-day Turkey, with Mount Hasan (Hasan Daği in Turkish) in the background. In the 3-meter-long wall painting, the twin-peaked mountain seems to be erupting. But no evidence for an eruption of Mount Hasan had been found from the right time period.
However, geologists recently found a layer of volcanic pumice on the summit of Mount Hasan that may have come from the eruption depicted in the painting. The chemical signature of the pumice suggested the volcano erupted in 6960 BC (± 640 years), the same time when thousands of humans lived in Çatalhöyük. PLOS ONE published the results of the analysis led by Earth scientists at UCLA.
The 9,000-year-old layer of volcanic rock provides support for anthropologists and geographers who consider the Çatalhöyük wall painting to be the oldest known map. In 1967, archeologist James Mellaart first published his interpretation of the drawing as a map.
However, in 2006 a paper in Anatolian Studies suggested that the map may have been a geometric design, while the volcano may have been a leopard skin.
“I can’t say with 100 percent certainty,” Keith Clarke, a cartographer not involved in the PLOS ONE study, told NPR, “but I would believe that the evidence is now in … favor of it actually being a map.”
Read more at Discovery News
[SBW2007] 1 (or SBW1) is located 20,000 light-years from Earth and features an enigmatic double-ringed planetary nebula. The rings are gases that have been blasted from the outermost layers of the blue supergiant star in the nebula’s core. The star, estimated at 20 times the mass of the sun before it became unstable, is going through its final death throes before a supernova is initiated. But don’t worry: the supernova would occur at a safe distance from us, although it will put on an exciting light show.
Massive stars like SBW1 live fast and die young. Blue supergiants quickly burn through the supply of hydrogen in their cores, where fusion takes place. In only a few million years they run out of hydrogen and are forced to fuse heavier and heavier elements, until they begin to bloat and powerful stellar winds shed their outer layers to form a nebula. At that point, the stage is set for one of the biggest explosions known to occur in the Cosmos.
But how do we know SBW1 is about to blow?
This new Hubble observation isn’t without precedent. In 1987, another star with a strikingly similar nebula detonated as a supernova — the famous SN 1987A. From the shape and size of that star’s nebulous rings, astronomers knew that the gases were likely stripped from the star 20,000 years earlier. Using the knowledge they accumulated about SN 1987A, astronomers believe that SBW1 is also likely to go supernova as its rings are also approximately 20,000 years old. The nebula rings are analogous to a fuse on a bomb — it’s giving us an approximate idea about how long the star’s self destruct timer has been set.
Read more at Discovery News
In research published in the journal Nature Geoscience on Sunday, Clément Narteau of the Institut de Physique du Globe de Paris and his team describe a landscape-scale experiment. In 2008, Narteau’s team bulldozed 160,000 square meters (16 hectares) of sand dunes in the Tengger Desert of Inner Mongolia. Then, over the next three and a half years, the researchers watched how the desert winds re-formed the dunes, revealing information about the prevailing winds.
As noted by New Scientist, the researchers tracked two seasonal prevailing winds that both contributed to the dunes’ shape and orientation. The dunes’ orientation became a “compromise” between the two prevailing winds’ differing directions, strengths and durations.
Although this research may sound very terrestrial, there are applications that go far beyond our planet.
“Our landscape-scale experiment suggests that the alignment of aeolian dunes can be used to determine wind forcing patterns on the Earth and other planetary bodies,” writes Narteau.
For example, Mars and the Saturnian moon Titan are known to possess vast dune fields sculpted by persistent wind-driven (aeolian) processes. Already, planetary scientists use Mars’ beautiful barchan dunes to glean information about the prevailing wind direction on the red planet’s surface.
Read more at Discovery News