Aug 26, 2017

New dinosaur discovery suggests new species roosted together like modern birds

Photo and sketch of the confiscated specimen showing three different juveniles of the same species of dinosaur preserved in roosting posture, immediately next to each other.
The Gobi Desert of Mongolia has been known for decades for its amazing array of dinosaurs, immaculately preserved in incredible detail and in associations that offer exceedingly rare glimpses of behavior in the fossil record. New remains from this region suggest an entirely unknown behavior for bird-like dinosaurs about 70 million years ago: at least some dinosaurs likely roosted together to sleep, quite possibly as a family, much as many modern birds do today. Gregory Funston, a Ph.D. candidate at the University of Alberta, will present the team's research findings at the annual meeting of the Society of Vertebrate Paleontology, held this year in Calgary, Alberta (Canada), on Friday, Aug. 25.

This new evidence for dinosaur roosting stems from a confiscated fossil block, illegally exported from Mongolia, that preserves the remains of three juvenile dinosaurs known as oviraptorids (part of the bird line of dinosaur evolution). The three dinosaurs belong to the same species, were roughly the same age, and were preserved in a sleeping posture so close to each other that they would have been touching in life. Known as "communal roosting," this behavior is seen in many birds today, including chickens and pigeons. The specimen luckily made its way into the hands of researchers, currently led by Gregory Funston of the University of Alberta along with his advisor Dr. Philip Currie (also of the University of Alberta) and the Institute of Paleontology and Geology of Mongolia (based in Ulaanbaatar). Regarding the finding, Funston said, "It's a fantastic specimen. It's rare to find a skeleton preserved in life position, so having two complete individuals and parts of a third is really incredible."

The three juvenile oviraptorids had several features indicating that they belong to an entirely new species. Other fossils found in Mongolia also seem to belong to this new species, and they further flesh out the life history of these animals. The notable head crest is present even at a young age, but the animals' tails became proportionally shorter as they aged, and some of their bones fused over their lifetimes. Their head crests and tails have been argued to represent sexual display features used in mating, somewhat like those of modern peacocks or turkeys. Funston added, "The origins of communal roosting in birds are still debated, so this specimen will provide valuable information on roosting habits in bird-line theropods."

From Science Daily

Fossils reveal how bizarre mammal beat extinction

Solenodon, a bizarre venomous mammal from the Caribbean, survived extinction due to its generalist ecology.
Animals that live on islands are among those most at risk of extinction. A remarkable 80 percent of extinctions since AD 1500 have occurred on islands, whose inhabitants face dangers from climate change, sea-level rise, invasive species, and human interactions. However, new research presented this week at the Society of Vertebrate Paleontology conference in Alberta, Canada, suggests that one island mammal may hold the key to survival -- a flexible diet.

Alexis Mychajliw, a PhD student at Stanford University mentored by Dr. Elizabeth Hadly, is using the Caribbean islands as a living experiment to understand factors driving the extinction of mammals. Since before the time of Columbus, humans have been altering the environments of the Caribbean. Mychajliw wanted to know how the native species responded, which survived and why. She explains "The Caribbean is now full of introduced mammals -- cows, pigs, rats, dogs -- that look nothing like the original fauna. Are they doing similar things ecologically, or have we created an entirely different dynamic? And where do the remaining native mammals fit in this new ecosystem?"

As part of an international team, Mychajliw gathered data on fossil mammal species and the ancient human populations that coexisted with them in the Caribbean over the past 7,000 years. She correlated changes in the size and diversity of Caribbean mammals over time with human population growth and climate change. She discovered that extinctions on the Caribbean islands were associated with the arrival of both indigenous and European human populations, which wiped out many of the smallest and largest species. "By piecing together the radiocarbon record, our analysis makes it clear that people had a hand in Caribbean extinctions -- both before and after Columbus' arrival," says collaborator Dr. Siobhan Cooke. However, one mammal has persisted throughout: the enigmatic Solenodon.

These bizarre mammals resemble giant venomous shrews. "Looking at solenodons today, you might think of them as a strange relictual creature that, like many ancient lineages, can't keep up with modern threats. But the recent fossil record tells a different story: the solenodon is a survivor, persisting when nearly every other mammal around it went extinct," Mychajliw explains. To understand the hardiness of Solenodon documented in the fossil record, Mychajliw turned to an unusual source of data -- their feces. Using a technique called 'metabarcoding', she could genetically analyze the feces of modern Solenodon and piece together variations in their diet. Based on these data, she classified Solenodon as a 'flexible generalist' -- an animal capable of eating virtually anything. Mychajliw thinks it is this flexibility that allowed Solenodon to survive the human and climatic changes that drove other species extinct.

Read more at Science Daily

Aug 24, 2017

Farming, cheese, chewing changed human skull shape

UC Davis anthropologist David Katz measured specific points on hundreds of human skull bones (top) to create a wire frame model of the skull and jaw (bottom). Blue dashes indicate changes in skull shape from foragers to dairy farmers.
The advent of farming, especially dairy products, had a small but significant effect on the shape of human skulls, according to a recently published study from anthropologists at UC Davis.

Humans who live by hunting and foraging wild foods have to put more effort into chewing than people who live by farming and eat a softer diet. Although previous studies have linked skull shape to agriculture and softer foods, it has proved difficult to determine the extent and consistency of these changes at a global scale.

Graduate student David Katz, with Professor Tim Weaver and statistician Mark Grote, used a worldwide collection of 559 crania and 534 lower jaws (skull bones) from more than two dozen pre-industrial populations to model the influence of diet on the shape, form, and size of the human skull during the transition to agriculture.

They found modest changes in skull morphology for groups that consumed cereals, dairy, or both cereals and dairy.

"The main differences between forager and farmer skulls are where we would expect to find them, and change in ways we might expect them to, if chewing demands decreased in farming groups," said Katz, who is now a postdoctoral researcher at the University of Calgary, Alberta.

The largest changes in skull morphology were observed in groups consuming dairy products, suggesting that the effect of agriculture on skull morphology was greatest in populations consuming the softest food (cheese!).

"At least in early farmers, milk did not make for bigger, stronger skull bones," Katz said.

However, differences due to diet tended to be small compared to other factors, such as the difference between males and females or between individuals with the same diet from different populations, Katz said.

From Science Daily

Here's Why Whiskey Tastes Better With a Little Water

Whiskey connoisseurs have long thought that mixing a few drops of water with the drink could enhance its flavor.

Now, a new study reveals a scientific explanation for why that may be true.

This finding could also help improve the effectiveness of medicines that include alcohol in their ingredients, such as cough syrups, said study lead author Björn Karlsson, a computational chemist at Linnaeus University in Sweden.

It could "have consequences for how we administer and design liquid drug formulations," he told Live Science.

Strong spirit

Whiskey, also spelled "whisky," stems from the Gaelic word "uisge," meaning "water." Whiskey is a powerful alcoholic spirit distilled from fermented grains, typically barley or rye. (In the United States, bourbon whiskey contains at least 51 percent corn.) It is often aged in wooden casks, and the grains may be smoked over peat prior to fermentation to impart a smoky flavor.

Many historical figures have waxed poetic over whiskey. For instance, playwright George Bernard Shaw opined that "whisky is liquid sunshine," while author Mark Twain thought "too much of anything is bad, but too much good whiskey is barely enough."

Before whiskey is bottled, water is often added to it to dilute it to about 40 percent alcohol by volume, in the belief that doing so significantly changes its flavor, Karlsson said. Whiskey enthusiasts also often add a few drops of water to whiskey before drinking it to enhance its taste. But how dilution might achieve this effect was not clear until now.
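The dilution step itself is simple conservation of ethanol: the amount of pure alcohol is fixed, so starting strength times starting volume equals target strength times final volume. A minimal sketch (the bottle volume and cask strength below are illustrative assumptions, not figures from the study):

```python
def water_to_add(volume_ml, abv_start, abv_target):
    """Water needed to dilute a spirit from abv_start to abv_target.

    Conservation of ethanol: abv_start * volume_ml = abv_target * final_volume,
    so final_volume = volume_ml * abv_start / abv_target.
    """
    if not 0 < abv_target <= abv_start <= 1:
        raise ValueError("target strength must be positive and not above start strength")
    final_volume = volume_ml * abv_start / abv_target
    return final_volume - volume_ml

# Example: bringing 700 ml of cask-strength whiskey (60% ABV) down to
# a 40% ABV bottling strength requires 350 ml of water.
print(water_to_add(700, 0.60, 0.40))  # -> 350.0
```

In reality, ethanol-water mixing is slightly non-ideal (the total volume contracts a little on mixing), so this is a first-order estimate rather than a distiller's exact recipe.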

Flavor on top


To help solve this mystery, Karlsson and his colleague Ran Friedman, also at Linnaeus University, carried out computer simulations of water and alcohol. The simulations also included organic compounds associated with the flavor of whiskey. Many of these molecules are so-called amphipathic molecules, which have both water-repelling and water-attracting regions.

The researchers focused on a small amphipathic compound known as guaiacol. This molecule is linked with the smoky taste that develops when malted barley is smoked on peat fires, and is far more common in Scottish whiskies than in American or Irish ones, the researchers said.

When the computer models diluted whiskey to just 45 percent alcohol, guaiacol was more likely to be present at the surface of the whiskey than in the bulk of the liquid. This would help the guaiacol better contribute to both the smell and taste of the spirit at this interface between the fluid and the air, the researchers report online Aug. 17 in the journal Scientific Reports.

In contrast, at concentrations of alcohol above 59 percent, guaiacol was driven away from the surface of the whiskey. The researchers said they expect similar results with other flavor molecules found in whiskey, such as vanillin, found in vanilla extract, and limonene, found in lemon and orange oils.

Read more at Seeker

NASA Photo Shows Saturn’s Icy Moon Tethys Orbiting Above the Planet’s Rings

Tethys above Saturn's rings
Saturn's icy moon Tethys hovers above the planet's iconic rings in a breathtaking photo by NASA's Cassini spacecraft.

Though NASA released the image Monday (Aug. 21), Cassini actually captured it on May 13, 2017. At the time, the probe was about 750,000 miles (1.2 million kilometers) from Saturn and 930,000 miles (1.5 million km) from Tethys, agency officials said.

The night side of Tethys is lit up by "Saturnshine" — sunlight reflected off its parent planet — in the image. But this Saturnshine isn't quite as powerful as the photo makes it seem.

"Tethys was brightened by a factor of two in this image to increase its visibility," NASA officials wrote in an image description. "A sliver of the moon's sunlit northern hemisphere is seen at top. A bright wedge of Saturn's sunlit side is seen at lower left."

At 660 miles (1,062 km) across, Tethys is the fifth-largest moon of Saturn. (The only bigger ones are Titan, Rhea, Iapetus, and Dione.) Tethys has some pretty dramatic features that aren't visible in this photo — a deep canyon that snakes across three-fourths of its surface, for example, and a crater called Odysseus that's 250 miles (400 km) wide.

Cassini has been capturing stunning images like this one since arriving in orbit around Saturn in July 2004. But the probe's work is nearly done: Cassini is in the "Grand Finale" phase of its mission, which will culminate with an intentional death dive into Saturn's thick atmosphere on Sept. 15.

This maneuver is designed to ensure that Cassini doesn't contaminate Titan or fellow Saturn satellite Enceladus with microbes from Earth. (Astrobiologists think Titan and Enceladus may be capable of supporting life.)

The $3.2 billion Cassini-Huygens mission is a collaboration involving NASA, the European Space Agency and the Italian Space Agency. Huygens was a piggyback lander that traveled with the Cassini mothership and touched down on Titan in January 2005.

From Seeker

Ancient Babylonian Tablet Identified as the World’s Oldest Trigonometry Table

The 3,700-year-old Babylonian tablet Plimpton 322 at the Rare Book and Manuscript Library at Columbia University in New York
Plimpton 322, a 3,700-year-old Babylonian clay tablet, was shrouded in mystery after it surfaced in the early 1900s. Scavengers likely found the tablet in what is now Iraq and sold it to an antiques dealer, who then passed it on to an American antiquities enthusiast named Edgar Banks.

Banks, who was also a diplomat and roving archaeologist, served for a time as the American consul in Baghdad. His global exploits are thought to have inspired the fictional character Indiana Jones, the protagonist of the famous film franchise.

Banks is credited with first recognizing the importance of Plimpton 322, which he sold to New York publisher George Arthur Plimpton. Plimpton housed the artifact in his private collection. Upon Plimpton’s death in 1936, it was donated to Columbia University, where it has remained ever since.

Researchers have puzzled over the tablet's meaning for decades, but it was never fully deciphered -- until now.

Daniel Mansfield of the University of New South Wales School of Mathematics and Statistics and his colleague Norman Wildberger have just reported the conclusions of their two-year analysis of the artifact. In a paper published in the journal Historia Mathematica, they conclude that the inscriptions on the tablet form the world’s oldest trigonometric table.

The American antiquarian Edgar Banks inspired the fictional character Indiana Jones.
Previously it was thought that the Greek astronomer Hipparchus of Nicaea (c. 190–120 BC) invented trigonometry, the study of triangles. It now appears the Babylonians beat him to this achievement by more than 1,000 years.

What’s more, the trigonometry is “of an ancient and unusual kind that predates the modern notion of the angle,” Mansfield said.

He first read about Plimpton 322 by chance when preparing material for a first-year mathematics class. He and his colleague decided to examine the tablet more closely after realizing that it had parallels with a form of trigonometry explained in Wildberger’s book, “Divine Proportions: Rational Trigonometry to Universal Geometry.” Rational trigonometry often relies upon simple fractions to solve problems.

Mansfield said Wildberger’s approach was "essential" to unlocking its secrets because it allowed the team to begin thinking about a kind of trigonometry that might predate the invention of angles.

“I recall the moment that we thought there might be applications of this way of thinking,” he continued. “This is a new way of thinking about fundamental concepts that have been unchanged for hundreds of years, and it felt like staring into the abyss.”

Daniel Mansfield with the Plimpton 322 tablet at the Rare Book and Manuscript Library at Columbia University in New York
The researchers determined that the 15 rows of inscriptions on the tablet describe a sequence of 15 right-angle triangles that steadily decrease in inclination. The left-hand edge of the tablet is broken, but the new analysis reveals that there were originally 6 columns and 38 rows.

Mansfield and Wildberger further demonstrate how the ancient scribes used a base-60 numerical arithmetic, similar to modern clocks, rather than the base-10 number system favored today.

Mansfield suspects their choice of a base-60 system, as opposed to base 10 or any other base, was rooted in their society's system of taxation.

“The base-60 system allows for easy calculation of fractions, so it is easier to take, say, one-sixth of the grain or one-third of the harvest,” he explained.
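The arithmetic point behind that quote is that a fraction has a finite (terminating) expansion in a given base exactly when, after reducing the fraction, every prime factor of its denominator divides the base. Since 60 = 2² · 3 · 5, thirds and sixths terminate in base 60, while 1/3 repeats forever in base 10. A small illustrative check:

```python
from math import gcd

def terminates(num, den, base):
    """True if num/den has a finite expansion in the given base.

    This holds exactly when, after reducing the fraction, every prime
    factor of the denominator also divides the base; we strip out the
    shared factors and see whether anything is left over.
    """
    den //= gcd(num, den)            # reduce the fraction
    g = gcd(den, base)
    while g > 1:                     # remove prime factors shared with the base
        den //= g
        g = gcd(den, base)
    return den == 1

print(terminates(1, 3, 60))   # -> True  (1/3 is exactly 20/60)
print(terminates(1, 6, 60))   # -> True  (1/6 is exactly 10/60)
print(terminates(1, 3, 10))   # -> False (0.333... repeats in base 10)
```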

The tablet also suggests that the ancient Babylonians had knowledge of what we refer to today as the Pythagorean triple. This is a set consisting of three positive whole numbers -- a, b, and c -- such that a² + b² = c². The integers 3, 4, and 5 collectively form a well-known example of a Pythagorean triple, but the values on Plimpton 322 are often considerably larger. For example, the first row references the triple 119, 120, and 169.
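The first-row values are easy to verify, and triples like these can be generated systematically from a pair of integers m > n via Euclid's formula: a = m² − n², b = 2mn, c = m² + n². The sketch below is purely illustrative; the paper's authors argue the Babylonians worked with their own sexagesimal methods, not this Greek-era formula.

```python
def euclid_triple(m, n):
    """Generate a Pythagorean triple from integers m > n > 0."""
    return m*m - n*n, 2*m*n, m*m + n*n

# The well-known example from the text:
a, b, c = 3, 4, 5
assert a**2 + b**2 == c**2

# The first row of Plimpton 322:
a, b, c = 119, 120, 169
assert a**2 + b**2 == c**2

# The same triple falls out of Euclid's formula with m = 12, n = 5:
print(euclid_triple(12, 5))  # -> (119, 120, 169)
```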

A close-up of the inscriptions on Plimpton 322
The ancients appear to have been fascinated by such order, not to mention triangular shapes. Pyramids were built, not only by the ancient Egyptians, but also by the Aztecs and other early cultures. It is even possible that the tablet was used to calculate how to construct many different types of structures, from palaces and temples to canals.

Babylon was a key kingdom within Mesopotamia, an ancient cultural region corresponding to modern-day Iraq, as well as parts of Iran, Syria, and Turkey.

“The ancient Mesopotamians built largely out of clay and mud, so their structures didn’t last that long,” Wildberger said, adding that there were a few exceptions, “the most notable being the Ziggurat of Ur, which is still around — it was restored by later kings and also by Saddam Hussein — and dates from before the old Babylonian period.”

Read more at Seeker

This Is How Horses Turned Their Toes Into Hooves

Brianna McHorse is a Harvard doctoral student who has always been interested in the animal mentioned in her family’s surname.

“I’m fascinated by the incredible athletic performance of modern horses, and as a paleontologist, I'm interested in how skeletal shape influences the way an animal moves in life, and how that changes through time,” McHorse, who has ridden horses for many years and works in the labs of Stephanie Pierce and Andrew Biewener at Harvard’s Department of Organismic and Evolutionary Biology, told Seeker.

“Combining those [interests] with the incredibly rich fossil record of horses, plus their status as a classic evolutionary story, offers a fruitful area of study that hasn't received very much quantitative attention yet,” she added.

That area of study has now received attention, as McHorse and her colleagues have just conducted the most extensive investigation to date on how movement and load-bearing stresses acted on early horse ancestors as they evolved to become the world’s only living monodactyl, or single-toed animal.

The findings, published in the journal Proceedings of the Royal Society B, suggest that, at least for horses, having just one sturdy toe per limb — hooves are technically toes — can be better than having many toes, as humans and numerous other animals do.

Artist’s reconstruction of Hyracotherium at the Carnegie Museum of Natural History.
McHorse notes that the early ancestors of horses had four toes on each front limb and three toes on each back limb. At this point in horse history, roughly 55 million years ago, animals such as those in the genus Hyracotherium were about the size of a small dog and lived in forests that covered much of North America.

It has long been known that changing climatic conditions allowed grasslands to expand. Selective pressures resulting from the new open terrains then drove increases in the body mass of horse ancestors and caused them to lose all but one toe per limb. Many questions have remained, however, such as what the underlying mechanical consequences were of standing on just a single toe.

To help answer these questions, McHorse, Pierce, and Biewener performed micro-CT scans of 12 fossil species in the horse family tree. The scientists then used an engineered “beam bending” analysis to calculate how much stress each species’ lower leg bones were experiencing during regular movement and high-speed running. The stress data were then compared to the fracture stress of bone.

The researchers now believe that, as horses evolved, they soon lost their fourth toes on their front limbs, leaving them with three toes on each limb. Each was “not quite a hoof, but not quite a claw either,” McHorse said, adding that the toes were “more like what a living hyrax has — sort of thick, modified nails.”

Hyracotherium probably had some sort of pad under each foot, and would have had a less upright foot posture than living horses, which essentially stand on tip-toe all of the time.

“As body mass increased, and side toes shrunk, the middle digit compensated by changing its internal geometry, allowing ever-bigger horse species to eventually stand and move on one toe,” Pierce explained.

“The bone within the load-bearing digit of later horses was distributed farther away from the center of its cross-section, allowing it to better resist bending,” she continued. “The total amount of bone also increased, allowing it to better resist compression as well as bending, which are of critical importance for animals with large body sizes.”
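The mechanical idea Pierce describes can be sketched with textbook beam-bending formulas: peak fiber stress is σ = M·c/I, where I is the second moment of area of the cross-section, so distributing the same amount of bone farther from the center (a wider, more hollow shaft) raises I and lowers the stress for a given bending moment. The numbers below are invented for illustration, not measurements from the study:

```python
import math

def annulus_I(r_outer, r_inner=0.0):
    """Second moment of area of a (possibly hollow) circular section."""
    return math.pi * (r_outer**4 - r_inner**4) / 4

def peak_bending_stress(moment, r_outer, r_inner=0.0):
    """Maximum fiber stress under bending: sigma = M * c / I,
    where c is the distance of the outermost material from the center."""
    return moment * r_outer / annulus_I(r_outer, r_inner)

M = 100.0                           # bending moment in N*m (hypothetical)
ro, ri = 0.015, 0.010               # hollow "bone" radii in meters (hypothetical)
r_solid = math.sqrt(ro**2 - ri**2)  # solid section with the SAME bone area

hollow = peak_bending_stress(M, ro, ri)
solid = peak_bending_stress(M, r_solid)
print(hollow < solid)  # -> True: same material, lower peak stress in bending
```

The comparison holds the cross-sectional area (amount of bone) fixed, which is the sense in which "distributing bone farther from the center" buys bending resistance for free.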

Illustration showing the evolution of digital reduction in the forelimb of horses.
Horses first evolved into monodactyls at least five million years ago, and there were trends toward significant digit reduction in the lineage before then. As a result, humans didn’t influence these anatomical changes in horses, according to the researchers.

“I believe cave art depicting horses has been dated to around 30,000 years ago, but that would have been long before actual domestication,” McHorse said.

Ancestors of other modern animals, such as deer, were subject to similar climate and environmental changes as the early horses. Today’s deer and other hoofed animals, however, have two toes per limb instead of just one. Their anatomy and weight-bearing needs placed the axis of symmetry per limb right between their two digits, as opposed to the arrangement in horses, where the axis of symmetry now runs right down the middle of the single digit.

“That means to maintain symmetry, you wind up with two toes,” McHorse said, explaining why cows, pigs, sheep, antelopes, deer and other animals have this number of digits.

Read more at Seeker

Aug 23, 2017

You and some 'cavemen' get a genetic checkup

Charted data clearly illustrate a progressive improvement over the millennia in the genetic foundations of health, in nearly all diseases examined. Smaller shapes indicate better overall foundations. The dotted round line labeled 50% indicates average modern human disease allele occurrence.
Had an arrow in his back not felled the legendary Iceman some 5,300 years ago, he would have likely dropped dead from a heart attack. Written in the DNA of his remains was a propensity for cardiovascular disease.

Heart problems were much more common in the genes of our ancient ancestors than in ours today, according to a new study by the Georgia Institute of Technology, which computationally compared genetic disease factors in modern humans with those of people through the millennia.

Overall, the news from the study is good. Evolution appears, through the ages, to have weeded out genetic influences that promote disease, while promulgating influences that protect from disease.

Evolutionary double-take

But for us modern folks, there's also a hint of bad news. That generally healthy trend might have reversed in the last 500 to 1,000 years, meaning that, with the exception of cardiovascular ailments, disease risks found in our genes may be on the rise. For mental health, our genetic underpinnings looked especially worse than those of our ancient forebears.

Though the long-term positive trend looks very clear in the data, it's too early to tell if the initial impression of a shorter-term reversal will hold. Further research in this brand-new field could dismiss it.

"That could well happen," said principal investigator Joe Lachance, an assistant professor in Georgia Tech's School of Biological Sciences. "But it was still perplexing to see a good many of our ancestors' genomes looking considerably healthier than ours do. That wasn't really expected."

Lachance, former postdoctoral assistant Ali Berens, and undergraduate student Taylor Cooper published their results in the journal Human Biology. They hope that by better understanding our evolutionary history, researchers will someday be able to project future human populations' genomic health forward, as well as perhaps their medical needs.

Dismal distant past

Despite what may be a striking, recent negative trend, through the millennia genetic risks to health clearly appear to have diminished, according to the study's main finding. "That was to be expected because larger populations are better able to purge disease-causing genetic variants," Lachance said.

The researchers scoured DNA records covering thousands of years of human remains along with those of our distant evolutionary cousins, such as Neanderthals, for genetic locations, or "loci," associated with common diseases. "We looked at heart disease, digestive problems, dental health, muscle disorders, psychiatric issues, and some other traits," Cooper said.

After determining that they could computationally compare 3,180 disease loci common to ancients and modern humans, the researchers checked for genetic variants, or "alleles," associated with the likelihood of those diseases, or associated with the protection from them. Nine millennia ago and before that, the genetic underpinnings of the diseases looked dismal.
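In outline, such a comparison counts, at each shared locus, how many copies of the disease-associated allele a genome carries. A toy sketch with invented loci and genotypes (the real study compared 3,180 loci and also tracked protective alleles):

```python
# Risk allele at each locus; the loci and genotypes here are invented
# purely for illustration of the counting idea.
RISK_ALLELE = {"locus_a": "A", "locus_b": "T", "locus_c": "G"}

def risk_allele_count(genotypes):
    """Total risk alleles carried across the shared loci (0-2 per locus)."""
    return sum(genotypes[locus].count(allele)
               for locus, allele in RISK_ALLELE.items())

ancient = {"locus_a": "AA", "locus_b": "TC", "locus_c": "CC"}  # 2 + 1 + 0
modern  = {"locus_a": "AC", "locus_b": "CC", "locus_c": "CC"}  # 1 + 0 + 0

print(risk_allele_count(ancient), risk_allele_count(modern))  # -> 3 1
```

Averaging such counts over many individuals per era, and weighting by each allele's disease association, gives the kind of era-by-era genetic-risk comparison the study describes.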

"Humans way back then, and Neanderthals and Denisovans -- they're our distant evolutionary cousins -- they appear to have had a lot more alleles that promoted disease than we do," Lachance said. "The genetic risks for cardiovascular disease were particularly troubling in the past."

Crumbling health genetics?


As millennia marched on, overall genetic health foundations got much better, the study's results showed. The frequency of alleles that promote disease dropped while protective alleles rose at a steady clip.

Then again, there's that nagging initial impression in the study's data that, for a few centuries now, things may have gone off track. "Our genetic risk was on a downward trend, but in the last 500 or 1,000 years, our lifestyles and environments changed," Lachance said.

This is speculation, but perhaps better food, shelter, clothing, and medicine have made humans less susceptible to disease alleles, so having them in our DNA is no longer as likely to kill us before we reproduce and pass them on.

A grain of data salt

Also, the improvement over millennia in genetic health underpinnings, seen in the analysis of select genes from 147 ancient individuals, stands out so clearly that the researchers have had to wonder whether the reversal in recent centuries -- which seems so inconsistent with that long-term trend -- might be a coincidence of the initial data set. The scientists would like to analyze more data sets to feel more confident about the apparent reversal.

"We'd like to see more studies done on samples taken from humans who lived from 400 years ago to now," Cooper said.

They would also like to do more research on how the genetic health of ancient individuals compares with that of modern humans. "We may be overestimating the genetic health of previous hominins (humans and evolutionary cousins including Neanderthals)," Lachance said, "and we may need to shift estimates of hereditary disease risks for them over, which would mean they all had a lot worse health than we currently think."

Until then, the researchers are taking the apparent slump in the genetic bedrock of health in recent centuries with a grain of salt. But that does not change the main observation.

"The trend shows clear long-term reduction over millennia in ancient genetic health risks," said Berens, a former postdoctoral assistant. Viewed in graphs, the improvement is eye-popping.

More psychiatric disorders

If the initial finding on the reversal does eventually hold up, it will mean that people who lived in the window of time from 2,000 to 6,000 years ago appear to have had, on the whole, DNA less prone to promoting disease than we do today, particularly for mental health. We moderns racked up much worse genetic likelihoods for depression, bipolar disorder, and schizophrenia.

"We did look genetically better on average for cardiovascular and dental health," Lachance said. "But at every time interval we examined, ancient individuals looked healthier for psychiatric disorders, and we looked worse."

Add to that a higher potential for migraine headaches.

The Iceman cometh

Drilling down in the data leads to individual genetic health profiles of famous ancients like the Altai Neanderthal, the Denisovan, and "Ötzi" the Iceman. Ötzi, like us, was Homo sapiens.

Along with his dicey heart, the Iceman probably contended with lactose intolerance and allergies; these propensities, too, were written in his DNA. But so was a likelihood of strapping muscles and enviable levelheadedness, making him a potentially formidable hunter or warrior.

With his bow, recovered near his cadaver on a high mountain pass, Ötzi could have easily slain prey or foe at 100 paces. But the bow was unfinished and unstrung one fateful day around 3,300 B.C., leaving the Iceman with little defense against the enemy archer who punctured an artery near his left shoulder blade.

The Iceman probably bled to death within minutes. Eventually, snow entombed him, and he lay frozen in the ice until a summer glacier melt in 1991 re-exposed him to view. Two German hikers came upon his mummified corpse that September on a ridge above Austria's Ötztal valley, which gave the popular press fodder to nickname him "Ötzi."

DNA tatters

The near-ideal condition of his remains, including genetic material, has proven a treasure trove for scientific study. But Ötzi is an extraordinary exception.

Usually, flesh-bare, dry bones or fragments are all that is left of ancient hominins or even just people who died a century ago. "Ancient DNA samples may not contain complete genomic information, and that can limit comparison possibilities, so we have to rely on mathematical models to account for the gaps," Berens said.

Collecting and analyzing more DNA samples from ancient individuals will require vigorous effort by researchers across disciplines. But the added data will give scientists a better idea of where the genetic underpinnings of human health came from, and where they're headed for our great-grandchildren.

Read more at Science Daily

First X-rays detected from mystery supernovas

Scientists have detected the first X-rays from what appears to be a type Ia supernova, located inside the spiral-shaped galaxy ESO 336-G009, about 260 million light-years from Earth.
Exploding stars lit the way for our understanding of the universe, but researchers are still in the dark about many of their features.

A team of scientists, including scholars from the University of Chicago, appears to have found the first X-rays coming from type Ia supernovas. Their findings are published online Aug. 23 in the Monthly Notices of the Royal Astronomical Society.

Astronomers are fond of type Ia supernovas, created when a white dwarf star in a two-star system undergoes a thermonuclear explosion, because they explode with a consistent peak brightness. This allows scientists to calculate how far away they are from Earth, and thus to map distances in the universe. But a few years ago, scientists began to find type Ia supernovas with a strange optical signature suggesting that they were cloaked in very dense circumstellar material.

Such dense material is normally only seen from a different type of supernova called type II, and is created when massive stars start to lose mass. The ejected mass collects around the star; then, when the star collapses, the explosion sends a shockwave hurtling at supersonic speeds into this dense material, producing a shower of X-rays. Thus we regularly see X-rays from type II supernovas, but they have never been seen from type Ia supernovas.

When the UChicago-led team studied the supernova 2012ca, recorded by the Chandra X-ray Observatory, however, they detected X-ray photons coming from the scene.

"Although other type Ia's with circumstellar material were thought to have similarly high densities based on their optical spectra, we have never before detected them with X-rays," said study co-author Vikram Dwarkadas, research associate professor in the Department of Astronomy and Astrophysics.

The amounts of X-rays they found were small -- they counted 33 photons in the first observation a year and a half after the supernova exploded, and ten in another about 200 days later -- but present.

"This certainly appears to be a Ia supernova with substantial circumstellar material, and it looks as though it's very dense," he said. "What we saw suggests a density about a million times higher than what we thought was the maximum around Ia's."

It's thought that white dwarfs don't lose mass before they explode. The usual explanation for the circumstellar material is that it would have come from a companion star in the system, but the amount of mass suggested by this measurement was very large, Dwarkadas said -- far larger than one could expect from most companion stars. "Even the most massive stars do not have such high mass-loss rates on a regular basis," he said. "This once again raises the question of how exactly these strange supernovas form."

"If it's truly a Ia, that's a very interesting development because we have no idea why it would have so much circumstellar material around it," he said.

Read more at Science Daily

Black holes: Scientists 'excited' by observations suggesting formation scenarios

The black hole named Cygnus X-1 formed when a large star caved in. This black hole pulls matter from the blue star beside it.
Physicists have described how observations of gravitational waves limit the possible explanations for the formation of black holes outside of our galaxy; either they are spinning more slowly than black holes in our own galaxy or they spin rapidly but are 'tumbled around' with spins randomly oriented to their orbit.

The paper, published in Nature, is based on data from the landmark observations of gravitational waves made by the LIGO detector in 2015 and again in 2017.

In our own galaxy we have been able to electromagnetically observe black holes orbited by stars and map their behaviour -- notably their rapid spinning.

Gravitational waves carry information about the dramatic origins of black holes that cannot otherwise be obtained. Physicists concluded that the first detected gravitational waves, in September 2015, were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. Collisions of two black holes had been predicted, but never observed.

As such, gravitational waves present the best and only way to get a deep look at the population of stellar-mass binary black holes beyond our galaxy. This paper states that the black holes seen via gravitational waves are different to those previously seen in our galaxy in one of two possible ways.

The first possibility is that the black holes are spinning slowly. If that is the case it suggests that something different is happening to the stars that form these black holes than those observed in our galaxy.

The second possibility is that the black holes are spinning rapidly, much like those in our galaxy, but have been 'tumbled' during formation and are therefore no longer aligned with their orbit. If this is the case, it would mean that the black holes are living in a dense environment -- most likely within star clusters. That would make for a considerably more dynamic formation.

There is, however, also the chance that both possibilities are true -- that there are instances of black holes spinning slowly in the field and instances of black holes spinning rapidly in a dense environment.

Dr Will Farr, from the School of Physics and Astronomy at the University of Birmingham, explained, "By presenting these two explanations for the observed behaviour, and ruling out other scenarios, we are providing those who study and try to explain the formation of black holes a target to hit. In our field, knowing the question to ask is almost as important as getting the answer itself."

Professor Ilya Mandel, also from the University of Birmingham, added "We will know which explanation is right within the next few years. This is something that has only been made possible by the LIGO detections of gravitational waves in the last couple of years. This field is in its infancy; I'm confident that in the near future we will look back on these first few detections and rudimentary models with nostalgia and a much better understanding of how these exotic binary systems form."

Read more at Science Daily

Extinct dwarf dolphin once lived along the South Carolina coast

A rendering of the toothless dwarf dolphin, according to the researcher's findings.
Continuing to uncover fossil evidence along the coast of South Carolina, researchers, led by a faculty member at College of Charleston, have discovered a species of extinct dolphin. The toothless dolphin, which lived about 28-30 million years ago, provides new evidence of the evolution of feeding behavior in whales (which includes dolphins).

The species, named Inermorostrum xenops, lived during the same period as Coronodon havensteini, a species of ancient whale announced recently by investigators at New York Institute of Technology College of Osteopathic Medicine and College of Charleston in Current Biology.

The skull of Inermorostrum was discovered by a diver in the Wando River in Charleston, just miles from the location where Coronodon's remains were found, and presents the first clear evidence of suction feeding in echolocating sea mammals. The researchers estimate that the dolphin grew to be only four feet long, smaller than its closest relatives, and significantly smaller than today's bottlenose dolphins, which measure seven to twelve feet in length.

The study has been released in the journal Proceedings of the Royal Society B.

According to College of Charleston adjunct geology professor Robert W. Boessenecker, Ph.D., the dwarf dolphin had a short snout and entirely lacked teeth. The genus name, Inermorostrum, means "defenseless snout," referring to its toothless condition. Boessenecker, the lead author of the study, believes that the suction-feeding dolphin fed primarily on fish, squid, and other soft-bodied invertebrates from the seafloor, similar to the feeding behavior of a walrus. Furthermore, a series of deep channels and holes for arteries on the snout indicate the presence of extensive soft tissues, likely enlarged lips, and also perhaps even whiskers.

"We studied the evolution of snout length in cetaceans, and found that during the Oligocene (25-35 million years ago) and early Miocene epochs (20-25 million years ago), the echolocating whales rapidly evolved extremely short snouts and extremely long snouts, representing an adaptive radiation in feeding behavior and specializations," says Boessenecker. "We also found that short snouts and long snouts have both evolved numerous times on different parts of the evolutionary tree -- and that modern dolphins like the bottlenose dolphin, which have a snout twice as long as it is wide, represent the optimum length as it permits both fish catching and suction feeding."

Research team member, Jonathan Geisler, Ph.D., chair of the Anatomy Department and associate professor at NYITCOM, says the discovery is an important step in understanding why the South Carolina Coast provides unique insights into cetacean evolution.

"Coronodon, a filter feeder whale, and Inermorostrum, a suction feeding dolphin, may well have fed on the same prey. Their feeding behaviors not only help us understand their vastly different body sizes, but also shed light on the ecology of habitats that led to Charleston's present-day fossil riches," says Geisler.

Dr. Danielle Fraser, a paleontologist at the Canadian Museum of Nature and also part of the research team, notes that the identification of Inermorostrum opens up new questions about the evolution of early whales. "The discovery of a suction feeding whale this early in their evolution is forcing us to revise what we know about how quickly new forms appeared, and what may have been driving early whale evolution," she explains. "Increased ocean productivity may have been one important factor," she says.

Read more at Science Daily

Aug 22, 2017

Climate Change Is Causing Fish to Shrink

Measuring the size of cod caught in the North Sea. New research helps to explain why climate change is causing fish to shrink in size.
Fishermen over the past several years have noted that fish appear to be shrinking. That observation was validated in 2014 by research that found commercially important fish stocks in the North Sea, such as sole, herring, and haddock, have decreased in maximum body size over a 40-year period. Scientists suspected that climate change was the culprit, but were unsure how warming waters could lead to fish shrinkage across entire species.

New research published in the journal Global Change Biology describes the mechanism that is likely causing fish to shrink. Lead author Daniel Pauly, a principal investigator with the Sea Around Us project at the University of British Columbia, said the findings apply to animals with gills, such as fish, sharks, squid, and lobsters.

Pauly's co-author William Cheung, director of science for the Nippon Foundation Nereus Program at the university, explained that these species and many others are ectotherms, meaning that their body temperature depends on environmental temperature.

“As the oceans warm up,” Cheung said, “their bodies will do so as well. Higher temperature within the scope that the fish can tolerate generally increases the rate of biochemical reactions in the fish’s body and thus increases their body metabolic rate.”

Metabolic rate refers to an animal’s oxygen consumption, which also naturally increases as fish grow into adulthood because their body mass becomes larger.

Graphic showing how climate change can cause the bodies of certain marine species to shrink in size
One might wonder why fish and other marine ectotherms aren’t just taking in ever more oxygen to coincide with this natural growth due to maturation and the rise of ocean temperatures. They don’t because at a certain point they cannot keep up.

The researchers point out that the surface area of an animal’s gills — where oxygen is obtained — does not grow at the same pace as the rest of its body.

“This is because gills, in order to work, must function as a two-dimensional surface (width by height) and thus cannot grow as fast as the three-dimensional volume (width by height by depth) they have to supply with oxygen,” Pauly said.

He and Cheung liken how a fish gill works to a car radiator. Both are made up of numerous thin layers that allow for the transfer of heat, which permits cooling. But both can only work in two dimensions because air or water passes through only once.

“There is not much that fish can do to solve this problem,” Pauly said. “They can have bigger gills — just as sports cars have bigger radiators — but ultimately, the weight always catches up, and the ratio of gill surface to body weight becomes too low.”
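The scaling argument above can be sketched numerically. Assuming simple isometric growth (a hypothetical fish, not the authors' model), gill surface area grows roughly with the square of body length while body mass, and hence oxygen demand, grows with the cube, so the supply-to-demand ratio falls as the fish gets bigger:

```python
# Illustrative 2-D vs 3-D scaling sketch: under isometric growth,
# gill surface area scales as length^2 (oxygen supply) while body
# mass scales as length^3 (oxygen demand). Their ratio therefore
# falls as 1/length -- doubling body length halves the ratio.

def supply_demand_ratio(length):
    """Gill area (~L^2) divided by body mass (~L^3); falls as 1/L."""
    gill_area = length ** 2
    body_mass = length ** 3
    return gill_area / body_mass

for length in [10, 20, 40, 80]:  # arbitrary body lengths, in cm
    print(f"L = {length:3d} cm  gill_area/mass = {supply_demand_ratio(length):.3f}")
```

Each doubling of length halves the ratio, which is the geometric core of the Gill-Oxygen Limitation Theory: warmer water raises oxygen demand, but the 2-D gill surface cannot keep pace with the 3-D body it must supply, so maximum body size shrinks.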

The researchers believe this set of principles, which they have named the Gill-Oxygen Limitation Theory, helps to explain why so many populations of marine species are shrinking. They and others predict that the reductions will be in the range of 20–30 percent if ocean temperatures continue to climb due to climate change.

At the higher end of that range is one of the world’s most important commercial fish: tuna.

“Tunas are active, mobile, and fast-swimming animals that need a lot of oxygen to maintain their lifestyle,” Cheung said. “In fact, they have to keep swimming non-stop in order to get more water through their gills to obtain sufficient oxygen. Thus, when temperature increases, they are particularly susceptible to not having sufficient oxygen to support their body growth.”

He added that for a 2 degree Celsius (3.6 degree Fahrenheit) increase in water temperature, which is approximately what is expected to occur in oceans around the world by the mid-21st century, tunas such as the Atlantic bluefin tuna will potentially decrease in body size by 30 percent.

Sharks, many of which are already threatened with extinction, are also predicted to decrease in size, especially larger species.

In the case of tuna, haddock, cod, and other fish consumed by humans, shrinkage is predicted to decrease potential fisheries production. Since marine ecosystems are structured in part by the body size of organisms — basically larger fish eat smaller fish — the projected changes to body sizes will likely affect predator and prey interactions, as well as ecosystem structure and functions, Cheung said.

He said the most effective way to prevent these problems from occurring is to mitigate carbon dioxide emissions.

Read more at Seeker

The Asteroid That Killed the Dinosaurs Caused Catastrophic Climate Change

The last days of dinosaurs during the Cretaceous Period, caused by a giant asteroid impact at Chicxulub off the coast of Mexico.
Most schoolchildren learn the dinosaurs died out when an asteroid hit the earth.

Fortunately, schoolchildren are spared the gory details.

Scientists now believe that the asteroid that slammed into Mexico’s Yucatan Peninsula 66 million years ago was 6 miles wide — almost big enough to cover San Francisco — and caused cataclysmic destruction on a scale comparable to global thermonuclear war. In the aftermath, about three-quarters of all species on Earth died out.

The impact, which left the crater known as Chicxulub, sent huge tsunamis surging across the seas and triggered massive earthquakes and volcanic activity. But that was just the beginning.

A massive amount of vaporized rock was propelled high above Earth, where it condensed into tiny particles and fell back down to the surface. Heated by friction, the descending cloud of rock dust reached temperatures hot enough to spark fires, literally broiling the Earth’s surface along with plant and animal life.

In other words, dinosaurs living far away from the Yucatan Peninsula were flambéed from above by the cloud of scorching rock particles falling on them. A thin layer of molten particles is still observable to geologists today.

New research, however, now suggests that following the asteroid strike global temperatures plunged in a way analogous to the planetary cooling thought to follow a global nuclear war, commonly known as nuclear winter.

The wildfires sparked by the broiling rock dust sent so much soot back up into the sky that the sun was blotted out almost completely for most of the next two years, shutting down photosynthesis and dramatically cooling the planet in one of the most significant known episodes of climate change in Earth's history.

Global temperatures plummeted by 50 degrees Fahrenheit on land and 20 degrees over the oceans.

On land, temperatures returned to normal in about seven years, though they were still a couple of degrees Celsius below normal 15 years later, according to Charles Bardeen of the National Center for Atmospheric Research, who led a study of Earth’s climate following the impact. The results were published in the journal Proceedings of the National Academy of Sciences.

Bardeen and his colleagues from NASA and the University of Colorado Boulder used advanced computer models to simulate the climate’s reaction during the years after the asteroid landed.

Rampant, sudden wildfires caused by the meteor impact cooled global temperatures in a way that is thought to be likely following widespread detonations of nuclear weapons, Bardeen said.

“Fires created by an asteroid impact and fires created by a nuclear war can put large amounts of soot high up above where the rain happens, so they can exist for a longer period of time and have these global consequences,” Bardeen said. “As long as that soot gets injected above where the rain would happen, it can stay in the atmosphere for a long time.”

Yet, perhaps surprisingly, the dinosaur-killing asteroid brought climate consequences that likely far exceeded those of a modern day limited nuclear war.

"A war between India and Pakistan, where perhaps 100 Hiroshima-sized nuclear weapons are used, would probably put enough soot into the atmosphere to have a 1-2°C impact on the atmosphere," Bardeen said.

Read more at Seeker

Bacteria Covered in Semiconductor Crystals Outcompete Plants at Photosynthesis

An artist's rendering of a bioreactor (left) loaded with bacteria decorated with light-absorbing nanocrystals of cadmium sulfide (right) to convert light, water, and carbon dioxide into useful chemicals.
In nature, plants use photosynthesis to convert sunlight into chemicals and energy. But this process is actually not very efficient because chlorophyll, the pigment that makes photosynthesis possible, responds to only a narrow range of wavelengths of light.

Now scientists have devised a way to take the light-harvesting efficiency of a semiconductor that can absorb more wavelengths of sunlight and combine it with the catalytic power of a bacterium to more efficiently convert light, water, and carbon dioxide (CO2) into chemicals, which can in turn be used to develop fuels, polymers, and pharmaceuticals.

The combination is described as a form of “cyborg bacteria.”

“The best path forward is to take the best of both worlds,” chemist Kelsey K. Sakimoto told Seeker. He began researching this organism in Peidong Yang’s lab at the University of California, Berkeley, and is now a post-doctoral fellow at Harvard University.

“With a higher efficiency light harvester you can begin to outcompete natural photosynthesis,” Sakimoto said.

By using these microbes to convert CO2, researchers can produce very complex products in a simple way that chemists can't even begin to approach with conventional chemistry.

In their experiments, Sakimoto and his colleagues achieved a solar-to-chemical conversion efficiency of around 80 percent. Conventional solar panels convert sunlight at an efficiency around 20 percent. Natural photosynthesis has a conversion efficiency that, theoretically, maxes out at 12 percent, according to Sakimoto.

The research team will present its findings today at the 254th National Meeting & Exposition of the American Chemical Society (ACS).

The team’s work focuses on the bacterium Moorella thermoacetica, which can be found in soil and at the bottom of stagnant ponds. As part of its normal respiration, it naturally uses hydrogen to turn CO2 into acetic acid, a versatile chemical that can form the basis of fuels and plastics that are otherwise derived from petrochemicals.

Moorella thermoacetica has another attribute that makes it ideal for this purpose: When it encounters the toxic metal cadmium in the environment, it turns it into tiny particles of cadmium sulfide on its outer cell wall so that it doesn’t get inside, where it can be harmful.

“It's basically a stress response that they have on the back burner if they encounter these conditions,” said Sakimoto. “So we tap this stress response and induce it by adding some cadmium.”

Cadmium sulfide nanocrystals function like tiny solar panels. In fact, some of the earliest solar panels were made using cadmium sulfide as a light-absorbing semiconductor.

The bacterium uses the cadmium sulfide like a piece of equipment that absorbs sunlight and transfers that energy to produce a molecule that the bacteria can eat. In doing so, CO2 is turned into acetic acid.

An advantage of using this bacterium to produce acetic acid is that chemists can make large amounts of it relatively simply.

“One of the reasons we opted for a free-living system in which the semiconductors are directly attached to the bacteria is that you can make giant vats of this stuff and [the bacteria] will kind of chug along happily like micro algae,” Sakimoto remarked.

In these quantities, it’s easier and cheaper to expose the bacteria to cadmium rather than hydrogen, which would require an expensive, energy-intensive system called an electrolysis reactor.

“The nanoparticles accomplish all of that in a single step,” said Sakimoto. “Having the microbes make all the equipment itself simplifies the process even further.”

In vats under the sun, the bacteria would self-replicate without producing waste.

The acetate produced would then be harnessed to make other products. Previous researchers have shown that genetically engineered E. coli bacteria can turn acetic acid into fuels, such as butanol, as well as into biopolymers to make plastics and into various compounds that are useful in a variety of pharmaceuticals.

Sakimoto believes it’s worthwhile to do some “bioprospecting” for other examples of bacteria that could be tapped to create chemicals. The fact that Moorella thermoacetica produced cadmium sulfide nanoparticles as a stress response was documented as a curious observation in a paper, he said — it was essentially a side note.

“The first, easiest step is to go back and look through the old biological and microbiological literature with the lens of a chemist or a material chemist to see what has been already discovered that other fields of science haven't really found a use,” he said.

Read more at Seeker

X-Ray Reveals Ancient Roman Portrait Covered in Mt. Vesuvius Ash

An iron element map (right) made with new X-ray technology reveals the underlying craftsmanship hidden beneath a damaged portrait of a Roman woman (left).
For centuries, the ancient Roman resort town of Herculaneum was buried under 66 feet of volcanic material. The city on the Italian coast, along with nearby Pompeii, was destroyed during an eruption of Mount Vesuvius in 79 AD. Excavations in the mid-20th century uncovered much of Herculaneum, including its large “House of the Mosaic Atrium,” but an ancient painting in the house went almost unnoticed, until now.

A newly developed portable macro X-ray fluorescence instrument, ELIO by XGLab SRL, revealed the Roman woman in the portrait, which has been subjected to volcanic heat, ash, grime, salt, and humidity over the years. As if that weren’t rough treatment enough, its exposure since it was excavated 70 years ago has caused much of it to deteriorate.

The portable X-ray instrument was brought directly to the site at Herculaneum, where the noninvasive analysis of the mid-1st century AD painting occurred.

“As far as we know, this is the first study of an ancient Roman wall painting — or any other historical wall painting — in situ, in its original setting,” Eleonora Del Federico, a professor of chemistry at the Pratt Institute who studies artists’ materials and conservation, told Seeker. “The technique is fairly new, and has been used for studies at museums on Rembrandts, Picassos and Van Goghs, among others.”

The ELIO device scanning the Roman portrait within the House of the Mosaic Atrium at Herculaneum.
While ELIO works best at just over a half an inch away from an artwork’s surface, the instrument never actually touches a painting. It is not difficult to operate, but ELIO is not cheap and the data analysis and interpretation of the results it provides require specialized training.

Del Federico, who conducted this latest research in conjunction with the Herculaneum Conservation Project, will present her findings today at the 254th National Meeting & Exposition of the American Chemical Society in Washington, DC.

Her analysis revealed that an artist created a sketch of the young woman with an iron-based pigment, and then put highlighting around the woman’s eyes in the sketch with a lead pigment. High levels of potassium in the woman’s cheeks in the artwork suggest that a green earth pigment was used as an underpainting to help create a flesh-toned color.

“We were very surprised at the complexity and sophistication of the painting technique, the use of color, mixture of pigments and layering,” Del Federico said.

Images of the early Roman painting showing the various elements revealed by the macro X-ray fluorescence instrument.
She explained that the painting method actually involved two primary techniques. The first, known as fresco, involved applying pigments on a wet surface of lime mortar, which consists of a mixture of calcium hydroxide and sand and/or pumice stone. The second, called secco, involved applying pigments with organic binders to the surface once the lime mortar had set.

Together, these methods were used to contour the portrait and to give realistic-looking volume to the subject’s face, cheeks and nose.

ELIO allowed Del Federico to create an “iron map” highlighting the woman’s primary features.

“When we look at the map of iron atoms, it reveals a beautiful young woman caught in a moment of deep thought,” she said. “You can always feel her presence, talk to her and feel her humanity, at least for me. She was gone by the years of exposure to the elements, and now she is back to life.”

The portrait is not signed, so the artist, for now, remains a mystery. Intriguingly, a similar portrait known as “Saffo” is on exhibit at the National Archaeological Museum of Naples. It is possible that both portraits were created by an artist who was given the honorary title “pictor imaginarius.” Many painters worked in early Roman towns, but the “pictor imaginarius” was among the most skilled, and would be brought in to tackle prominent wall areas requiring greater expertise.

A sketch of the woman in the early Roman painting created with an iron-based pigment.
As for who the woman was in the Herculaneum portrait, she remains a mystery for now, too. Del Federico said that she is possibly wearing a tiara. The House of the Mosaic Atrium “was indeed a well-to-do household, but not necessarily aristocratic.”

Roger Ling, a professor of classical art and archaeology at the University of Manchester, believes that a portrait such as this functioned as “a symbol of aristocratic luxury by which householders liked to surround themselves.”

Del Federico added, “A good quality wall painting was a sign of status.”

Read more at Seeker