Oct 7, 2017

Decapitated Toad Carcasses Found Inside 4,000-Year-Old Jerusalem Burial

In one of the rock-cut tombs, archaeologists made a rare discovery: a jar full of bones from nine headless toads. The toads had been decapitated before they were buried with the dead, possibly as a way to prepare the animals to be "eaten."
Finding a tomb that's been sealed for thousands of years is always a treat for archaeologists — especially when that tomb contains a jar of headless toads.

That's what archaeologists discovered inside a 4,000-year-old burial in Jerusalem, the Israel Antiquities Authority (IAA) announced last week.

The excavators think the jar might have been a funeral offering to feed the dead in the afterlife.

In 2014, archaeologists were excavating at a Bronze Age cemetery of more than 60 rock-cut tombs in Jerusalem's Manaḥat neighborhood. They discovered a sealed tomb, and after they rolled back the stone that was covering its opening, they found one poorly preserved human skeleton. The person had been buried lying on their back among intact ceramic bowls and jars. Based on the style of the pottery, the researchers think the tomb likely dates to the early part of the Middle Bronze Age (about 4,000 years ago).

One of the jars held a heap of small bones from nine toads that had all been decapitated.

"It is impossible to determine what role the toads played, but they are clearly part of the funerary rituals," Shua Kisilevitz, one of the excavation directors with the IAA, told Live Science.

Kisilevitz added that during this period toads were a symbol of regeneration for people in Egypt (the neighbors and sometimes overlords of the ancient Canaanites who lived in the Levant). But it's also possible that the toads had a more practical function: At the time, the dead were often buried with offerings that would serve them in their passage to the afterlife.

"Food offerings are a staple of burial customs during this period, and there is a possibility that the toads were indeed placed in the jar as such," Kisilevitz said.

The fact that they were decapitated is another clue: One way to prepare toads for eating is to remove the head and the edges of the limbs so that the sometimes-toxic skin can be removed, Kisilevitz added.

While rare, the jar of toads isn't entirely unprecedented. Kisilevitz said she knows of a Late Bronze Age tomb at Wadi Ara in the north of Israel that also included a vessel with decapitated toads.

Read more at Seeker

500 Years of Volcano Deaths Could Help Save the 800M People at Risk Today

In this long exposure image, Mt. Sakurajima is seen erupting on October 1, 2017 in Tarumizu, Kagoshima, Japan.
Volcanoes pose many threats to human life, both during eruptions and while dormant. More than 800 million people — one-tenth of the global population — live within 62 miles of an active volcano, making it a matter of public safety to understand the risks.

Last week, a volcanic eruption forced the evacuation of the entire island of Ambae, which is part of the Pacific nation of Vanuatu. The island’s 11,000 residents were relocated to other islands, with authorities warning them of exposure to gas, ash fall, and acid rain.

Meanwhile, more than 140,000 people have evacuated Bali as scientists have declared a high alert for the eruption of Mount Agung, the highest volcano on the island. The country has instituted a seven-mile hazard zone around the volcano and is stocking up on supplies. More than 1,000 people were killed when Agung erupted in 1963.

Against this backdrop, a new study from the University of Bristol, published in the Journal of Applied Volcanology, examines deaths caused by volcanoes with the aim of increasing public knowledge of volcanic hazards.

Sarah Brown, a professor of earth sciences at Bristol, and her team analyzed 500 years of data on volcanic fatalities. From 1500 to 2017, they found, more than 278,000 people died from volcano-related hazards, an average of about 540 deaths per year.
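The reported average is easy to verify from the totals quoted above (a back-of-envelope check, not a calculation from the paper itself):

```python
# Sanity-check the reported average volcanic fatality rate.
total_deaths = 278_000        # reported minimum death toll, 1500-2017
years = 2017 - 1500           # 517 years of records

avg_per_year = total_deaths / years
print(round(avg_per_year))    # ~538, consistent with the quoted ~540 deaths/yr
```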

The researchers collected information on the distance each person was from the volcano when they died, using scientific reports, media coverage, and volcano activity bulletins.

Nearly half of all volcanic deaths occurred within 7 miles of the eruption, but fatalities also occurred as far away as 106 miles. For those who were within 3 miles of a volcano, the most common cause of death was volcanic bombs or ballistics.

Between 3 and 10 miles, an avalanche of hot rock, ash, and gas known as a pyroclastic flow was the most common cause of death, while volcanic mudslides, tsunamis, and ash fall were the main dangers at greater distances.
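The distance bands described above can be captured in a small lookup function (an illustrative sketch using the article's approximate band edges, not code from the study):

```python
def dominant_hazard(distance_miles: float) -> str:
    """Most common cause of volcanic fatalities by distance from the vent,
    per the bands described in the article (band edges are approximate)."""
    if distance_miles < 3:
        return "ballistics (volcanic bombs)"
    elif distance_miles <= 10:
        return "pyroclastic flows"
    else:
        return "mudslides, tsunamis, and ash fall"

print(dominant_hazard(1))    # ballistics (volcanic bombs)
print(dominant_hazard(7))    # pyroclastic flows
print(dominant_hazard(50))   # mudslides, tsunamis, and ash fall
```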

Brown and her colleagues also examined the personal information of the victims. Most commonly, they were residents who lived very near to the volcano. But 561 tourist deaths were recorded, as well as 67 scientists (mostly volcanologists), 57 emergency responders, and 30 media personnel. In many cases, the victims were in known exclusion zones.

Read more at Seeker

Oct 6, 2017

Carbon feedback from forest soils to accelerate global warming

Heated and control plots in a long-term soil warming study at Harvard Forest, Petersham, Mass. Jerry Melillo of the Marine Biological Laboratory, Woods Hole, Mass., and colleagues began the study in 1991.
After 26 years, the world's longest-running experiment to discover how warming temperatures affect forest soils has revealed a surprising, cyclical response: Soil warming stimulates periods of abundant carbon release from the soil to the atmosphere alternating with periods of no detectable loss in soil carbon stores. Overall, the results indicate that in a warming world, a self-reinforcing and perhaps uncontrollable carbon feedback will occur between forest soils and the climate system, adding to the build-up of atmospheric carbon dioxide caused by burning fossil fuels and accelerating global warming. The study, led by Jerry Melillo, Distinguished Scientist at the Marine Biological Laboratory (MBL), appears in the October 6 issue of Science.

Melillo and colleagues began this pioneering experiment in 1991 in a deciduous forest stand at the Harvard Forest in Massachusetts. They buried electrical cables in a set of plots and heated the soil 5°C above the ambient temperature of control plots. Over the course of the 26-year experiment (which is still ongoing), the warmed plots lost 17 percent of the carbon that had been stored in organic matter in the top 60 centimeters of soil.

"To put this in context," Melillo says, "each year, mostly from fossil fuel burning, we are releasing about 10 billion metric tons of carbon into the atmosphere. That's what's causing the increase in atmospheric carbon dioxide concentration and global warming. The world's soils contain about 3,500 billion metric tons of carbon. If a significant amount of that soil carbon is added to the atmosphere, due to microbial activity in warmer soils, that will accelerate the global warming process. And once this self-reinforcing feedback begins, there is no easy way to turn it off. There is no switch to flip."

Over the course of the experiment, Melillo's team observed fluctuations in the rate of soil carbon emission from the heated plots, indicating cycles in the capacity of soil microbes to degrade organic matter and release carbon. Phase I (1991-2000) was a period of substantial soil carbon loss that was rapid at first, then slowed to near zero. In Phase II (2001-2007), there was no difference in carbon emissions between the warmed and the control plots. During that time, the soil microbial community in the warmed plots was undergoing reorganization that led to changes in the community's structure and function. In Phase III (2008-2013), carbon release from heated plots again exceeded that from control plots. This coincided with a continued shift in the soil microbial community. Microbes that can degrade more recalcitrant soil organic matter, such as lignin, became more dominant, as shown by genomic and extracellular enzyme analyses. In Phase IV (2014-present), carbon emissions from the heated plots have again dropped, suggesting that another reorganization of the soil microbial community could be underway. If the cyclical pattern continues, Phase IV will eventually transition to another phase of higher carbon loss from the heated plots.

"This work emphasizes the value of long-term ecological studies that are the hallmark of research at the MBL's Ecosystems Center," says David Mark Welch, MBL's Director of Research. "These large field studies, combined with modeling and an increasingly sophisticated understanding of the role of microbial communities in ecosystem dynamics, provide new insight to the challenges posed by climate change."

Read more at Science Daily

Mars study yields clues to possible cradle of life

The Eridania basin of southern Mars is believed to have held a sea about 3.7 billion years ago, with seafloor deposits likely resulting from underwater hydrothermal activity.
The discovery of evidence for ancient sea-floor hydrothermal deposits on Mars identifies an area on the planet that may offer clues about the origin of life on Earth.

A recent international report examines observations by NASA's Mars Reconnaissance Orbiter (MRO) of massive deposits in a basin on southern Mars. The authors interpret the data as evidence that these deposits were formed by heated water from a volcanically active part of the planet's crust entering the bottom of a large sea long ago.

"Even if we never find evidence that there's been life on Mars, this site can tell us about the type of environment where life may have begun on Earth," said Paul Niles of NASA's Johnson Space Center, Houston. "Volcanic activity combined with standing water provided conditions that were likely similar to conditions that existed on Earth at about the same time -- when early life was evolving here."

Mars today has neither standing water nor volcanic activity. Researchers estimate an age of about 3.7 billion years for the Martian deposits attributed to seafloor hydrothermal activity. Undersea hydrothermal conditions on Earth at about that same time are a strong candidate for where and when life on Earth began. Earth still has such conditions, where many forms of life thrive on chemical energy extracted from rocks, without sunlight. But due to Earth's active crust, our planet holds little direct geological evidence preserved from the time when life began. The possibility of undersea hydrothermal activity inside icy moons such as Europa at Jupiter and Enceladus at Saturn feeds interest in them as destinations in the quest to find extraterrestrial life.

Observations by MRO's Compact Reconnaissance Spectrometer for Mars (CRISM) provided the data for identifying minerals in massive deposits within Mars' Eridania basin, which lies in a region with some of the Red Planet's most ancient exposed crust.

"This site gives us a compelling story for a deep, long-lived sea and a deep-sea hydrothermal environment," Niles said. "It is evocative of the deep-sea hydrothermal environments on Earth, similar to environments where life might be found on other worlds -- life that doesn't need a nice atmosphere or temperate surface, but just rocks, heat and water."

Niles co-authored the recent report in the journal Nature Communications with lead author Joseph Michalski, who began the analysis while at the Natural History Museum, London, and co-authors at the Planetary Science Institute in Tucson, Arizona, and the Natural History Museum.

The researchers estimate the ancient Eridania sea held about 50,000 cubic miles (210,000 cubic kilometers) of water. That is as much as all other lakes and seas on ancient Mars combined and about nine times more than the combined volume of all of North America's Great Lakes. The mix of minerals identified from the spectrometer data, including serpentine, talc and carbonate, and the shape and texture of the thick bedrock layers, led to identifying possible seafloor hydrothermal deposits. The area has lava flows that post-date the disappearance of the sea. The researchers cite these as evidence that this is an area of Mars' crust with a volcanic susceptibility that also could have produced effects earlier, when the sea was present.
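The unit conversion behind the volume estimate checks out, and the "nine times the Great Lakes" comparison implies a combined Great Lakes volume of roughly 23,000 cubic kilometers (a sketch; the only inputs are the article's own figures):

```python
# Verify the cubic-mile to cubic-kilometer conversion for the Eridania sea.
MI3_TO_KM3 = 1.609344 ** 3           # cubic kilometers in one cubic mile

eridania_mi3 = 50_000
eridania_km3 = eridania_mi3 * MI3_TO_KM3
print(round(eridania_km3))           # ~208,409, matching the quoted ~210,000 km^3

# Combined Great Lakes volume implied by the "nine times" comparison:
print(round(eridania_km3 / 9))       # ~23,000 km^3
```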

The new work adds to the diversity of types of wet environments for which evidence exists on Mars, including rivers, lakes, deltas, seas, hot springs, groundwater, and volcanic eruptions beneath ice.

"Ancient, deep-water hydrothermal deposits in Eridania basin represent a new category of astrobiological target on Mars," the report states. It also says, "Eridania seafloor deposits are not only of interest for Mars exploration, they represent a window into early Earth." That is because the earliest evidence of life on Earth comes from seafloor deposits of similar origin and age, but the geological record of those early-Earth environments is poorly preserved.

Read more at Science Daily

Old Faithful's geological heart revealed

This is the model of Old Faithful's hydrogeological system suggested by the study's results.
Old Faithful is Yellowstone National Park's most famous landmark. Millions of visitors come to the park every year to see the geyser erupt every 44-125 minutes. But despite Old Faithful's fame, relatively little was known about the geologic anatomy of the structure and the fluid pathways that fuel the geyser below the surface. Until now.

University of Utah scientists have mapped the near-surface geology around Old Faithful, revealing the reservoir of heated water that feeds the geyser's surface vent and how the ground shaking behaves in between eruptions. The map was made possible by a dense network of portable seismographs and by new seismic analysis techniques. The results are published in Geophysical Research Letters. Doctoral student Sin-Mei Wu is the first author.

For Robert Smith, a long-time Yellowstone researcher and distinguished research professor of geology and geophysics, the study is the culmination of more than a decade of planning and comes as he celebrates his 60th year working in America's first national park.

"Here's the iconic geyser of Yellowstone," Smith says. "It's known around the world, but the complete geologic plumbing of Yellowstone's Upper Geyser Basin has not been mapped nor have we studied how the timing of eruptions is related to precursor ground tremors before eruptions."

Small seismometers

Old Faithful is an iconic example of a hydrothermal feature, and particularly of the features in Yellowstone National Park, which is underlain by two active magma reservoirs at depths of 5 to 40 km that provide heat to the overlying near-surface groundwater. In some places within Yellowstone, the hot water manifests itself in pools and springs. In others, it takes the form of explosive geysers.

Dozens of structures surround Old Faithful, including hotels, a gift shop and a visitor's center. Some of these buildings, the Park Service has found, are built over thermal features that result in excessive heat beneath the built environment. As part of their plan to manage the Old Faithful area, the Park Service asked University of Utah scientists to conduct a geologic survey of the area around the geyser.

For years, study co-authors Jamie Farrell and Fan-Chi Lin, along with Smith, have worked to characterize the magma reservoirs deep beneath Yellowstone. Although geologists can use seismic data from large earthquakes to see features deep in the earth, the shallow subsurface geology of the park has remained a mystery, because mapping it out would require capturing everyday miniature ground movement and seismic energy on a much smaller scale. "We try to use continuous ground shaking produced by humans, cars, wind, water and Yellowstone's hydrothermal boilings and convert it into our signal," Lin says. "We can extract a useful signal from the ambient background ground vibration."

To date, the University of Utah has placed 30 permanent seismometers around the park to record ground shaking and monitor for earthquakes and volcanic events. The cost of these seismometers, however, can easily exceed $10,000. Small seismometers, developed by Fairfield Nodal for the oil and gas industry, reduce the cost to less than $2,000 per unit. They're small white canisters about six inches high and are totally autonomous and self-contained. "You just take it out and stick it in the ground," Smith says.

In 2015, with the new instruments, the Utah team deployed 133 seismometers in the Old Faithful and Geyser Hill areas for a two-week campaign.

The sensors picked up bursts of intense seismic tremors around Old Faithful, about 60 minutes long, separated by about 30 minutes of quiet. When Farrell presents these patterns, he often asks audiences at what point they think the eruption of Old Faithful takes place. Surprisingly, it's not at the peak of shaking. It's at the end, just before everything goes quiet again.

After an eruption, the geyser's reservoir fills again with hot water, Farrell explains. "As that cavity fills up, you have a lot of hot pressurized bubbles," he says. "When they come up, they cool off really rapidly and they collapse and implode." The energy released by those implosions causes the tremors leading up to an eruption.

One scientist's noise is another scientist's signal

Typically, researchers create a seismic signal by swinging a hammer onto a metal plate on the ground. Lin and Wu developed the computational tools that would help find useful signals among the seismic noise without disturbing the sensitive environment in the Upper Geyser Basin. Wu says she was able to use the hydrothermal features themselves as a seismic source, to study how seismic energy propagates by correlating signals recorded at the sensor close to a persistent source to other sensors. "It's amazing that you can use the hydrothermal source to observe the structure here," she says.

When analyzing data from the seismic sensors, the researchers noticed that tremor signals from Old Faithful were not reaching the western boardwalk. Seismic waves extracted from another hydrothermal feature to the north also slowed down and scattered significantly in nearly the same area, suggesting that somewhere west of Old Faithful lay an underground feature affecting the seismic waves in an anomalous way. With a dense network of seismometers, the team could determine the shape, size, and location of the feature, which they believe is Old Faithful's hydrothermal reservoir.

Wu estimates that the reservoir, a network of cracks and fractures through which water flows, has a diameter of around 200 meters, a little larger than the University of Utah's Rice-Eccles Stadium, and can hold approximately 300,000 cubic meters of water, or more than 79 million gallons. By comparison, each eruption of Old Faithful releases around 30 cubic meters of water, or nearly 8,000 gallons. "Although it's a rough estimation, we were surprised that it was so large," Wu says.
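The gallon figures quoted above follow directly from the metric estimates (a quick check using the standard conversion of about 264.17 US gallons per cubic meter):

```python
# Convert the reservoir and eruption volume estimates to US gallons.
M3_TO_GAL = 264.172              # US gallons per cubic meter

reservoir_m3 = 300_000
eruption_m3 = 30

print(round(reservoir_m3 * M3_TO_GAL / 1e6, 1))  # ~79.3 million gallons
print(round(eruption_m3 * M3_TO_GAL))            # ~7,925 gallons ("nearly 8,000")
```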

Read more at Science Daily

12,000 years ago, Florida hurricanes heated up despite chilly seas

Figure 3 from the paper: simulated changes in climatic controls on hurricane activity between the Younger Dryas (12.0-12.5 ka) and early Holocene (10.2-10.8 ka), showing storm-season surface temperature, genesis potential index, and maximum potential intensity near the Dry Tortugas and Barbados.
Category 5 hurricanes may have slammed Florida repeatedly during the chilly Younger Dryas, 12,000 years ago. The cause? The hurricane-suppressing effects of cooler sea surfaces were outweighed by side effects of slowed ocean circulation. That's the finding of USGS researcher Michael Toomey and colleagues in their Geology article published online today.

As the last ice age waned, undersea landslide deposits called turbidites captured the fury of Florida's stormy days. Previously, Toomey linked turbidites in the Bahamas with modern hurricanes. For this study, the group examined turbidites in cores spanning the shift from the Younger Dryas into the warmer early Holocene, collected offshore the Dry Tortugas, Florida. The turbidites, complete with smashed up shells and jumbled sediments, reveal that in Younger Dryas days Florida was surprisingly hurricane-prone, at a time when cooler sea surface temperatures may have put the brakes on such intense storms elsewhere in the Atlantic.

To explore why, Toomey and colleagues analyzed computer models that simulated ocean and atmospheric conditions near Florida during that period. In modern times, the Atlantic Meridional Overturning Circulation (AMOC) brings cool water south and warm water north. But during the Younger Dryas the AMOC is thought to have weakened considerably, slowing circulation and reshaping environmental conditions across much of the Northern Hemisphere.

Modeling results indicated that lower sea surface temperatures in the tropical Atlantic, near Barbados, for example, corresponded with a drop in storm potential intensity. Near Florida, sea surfaces cooled as well. However, the change there was not as dramatic as further south or to the north. The relative warmth of waters offshore the southeastern U.S. compared to the regional Atlantic, explains Toomey, seems to have set the stage for intense hurricanes near Florida. "The modeling work suggests other factors, such as wind shear and humidity at mid-latitudes, outweighed changes in sea surface temperature at our core site," he says. Models and geologic records both show that by the early Holocene, as the AMOC regained strength, Florida's hurricanes subsided.

Read more at Science Daily

Oct 5, 2017

Violent helium reaction on white dwarf surface triggers supernova explosion

Upper panels show the first two-days observations of a peculiar type Ia supernova, MUSSES1604D, with Subaru/Hyper Suprime-Cam (left and middle) and follow-up observations with the Gemini-North telescope about one month after the first observation (right). Lower panels show the schematic light curves of MUSSES1604D (green circles denote the stages that the supernova is staying during observations).
An international team of researchers has found evidence of a supernova explosion that was first triggered by a helium detonation, reports a new study in Nature this week.

A Type Ia supernova is a type of white dwarf star explosion that occurs in a binary star system where two stars are circling one another. Because these supernovae shine 5 billion times brighter than the Sun, they are used in astronomy as a reference point when calculating the distances of objects in space. However, no one has been able to find solid evidence of what triggers these explosions. Moreover, they occur only about once every 100 years in any given galaxy, making them difficult to spot.

"Studying Type Ia supernovae is important because they are a valuable tool researchers use to measure the expansion of the universe. A more precise understanding of their history and behavior will help all researchers obtain more accurate results," said author and University of Tokyo School of Science Professor Mamoru Doi.

A team of researchers, including Senior Scientist Ken'ichi Nomoto, Professor Naoki Yasuda, and Project Assistant Professor Nao Suzuki from the Kavli Institute for the Physics and Mathematics of the Universe, and led by University of Tokyo School of Science PhD candidate Ji-an Jiang and Professor Doi, together with Associate Professor Keichi Maeda at Kyoto University and Dr. Masaomi Tanaka at the National Astronomical Observatory of Japan, hypothesized that a Type Ia supernova could be the result of a white dwarf star consuming helium from a companion star. The extra helium coating the star would trigger a violent burning reaction, which in turn would cause the star to explode from within as a supernova.

To maximize the chances of finding a new or recent Type Ia supernova, the team used the Hyper Suprime-Cam camera on the Subaru Telescope, which can capture a large area of sky at once.

"Among 100 supernovae we discovered in a single night, we identified a Type Ia supernova that had exploded only within a day before our observation. Surprising, this supernova showed a bright flash in the first day, which we thought must be related to the nature of the explosion. By comparing the observational data with what we calculated on how burning helium would affect brightness and color over time, we found both theory and observation were in good agreement. This is the first time anyone has found solid evidence supporting a theory," said Maeda.

However, Nomoto says this does not mean they can explain everything about supernovae.

Read more at Science Daily

Milky Way's 'most-mysterious star' continues to confound

An inconspicuous star—KIC 8462852—harbors a great mystery. This ASAS-SN image is just a small portion of the sky, but shows how Tabby's star is one of millions.
In 2015, a star called KIC 8462852 caused quite a stir in and beyond the astronomy community due to a series of rapid, unexplained dimming events seen while it was being monitored by NASA's Kepler Space Telescope. And the star has continued to foil scientists' efforts to understand it ever since.

The latest findings from Carnegie's Josh Simon and Benjamin Shappee and collaborators take a longer look at the star, going back to 2006 -- before its strange behavior was detected by Kepler. Astronomers had thought that the star was only getting fainter with time, but the new study shows that it also brightened significantly in 2007 and 2014. These unexpected episodes complicate or rule out nearly all the proposed ideas to explain the star's observed strangeness.

Speculation to account for KIC 8462852's dips in brightness has ranged from it having swallowed a nearby planet to an unusually large group of comets orbiting the star to an alien megastructure.

In general, a star can appear to dim because a solid object like a planet, or a cloud of dust and gas, passes between it and the observer, eclipsing it and effectively dimming its brightness for a time. But even before this evidence of two periods of increased brightness in the star's past, the erratic dimming episodes seen in KIC 8462852 were unlike anything astronomers had previously observed.

Last year, Simon and Ben Montet (then at Caltech, now at University of Chicago), who is also a co-author on this current study, found that from 2009 to 2012, KIC 8462852 dimmed by almost 1 percent. Its brightness then dropped by an extraordinary 2 percent over just six months, remaining at about that level for the final six months of Kepler observations.

But the research team wanted to look at KIC 8462852 over a longer period of time. So, they went back and examined about 11 years of observing data from the All Sky Automated Survey (ASAS) and about two years of more-recent data from the high-precision All-Sky Automated Survey for Supernovae (ASAS-SN).

They found that the star has continued to dim since 2015 and is now 1.5 percent fainter than it was in February of that year. What's more, they showed that in addition to the dimming the star has experienced from 2009 to 2013 and 2015 to now, it underwent two periods of brightening.

"Up until this work, we had thought that the star's changes in brightness were only occurring in one direction -- dimming," Simon explained. "The realization that the star sometimes gets brighter in addition to periods of dimming is incompatible with most hypotheses to explain its weird behavior."

"An important next step will be to determine how the color of the star changes with time, especially during its brief dips in brightness," added Shappee. "That information would help narrow down the possible explanations for why this star is doing such strange things."

For example, if the dimming was caused by dust obscuring the star from us, then it would appear to get redder as it dimmed. But if large objects were blocking the star's light, then no color change would be seen.

Read more at Science Daily

Prehistoric humans are likely to have formed mating networks to avoid inbreeding

Detail of one of the burials from Sunghir, in Russia. The new study sequenced the genomes of individuals from the site and discovered that they were, at most, second cousins, indicating that they had developed sexual partnerships beyond their immediate social and family group.
Early humans seem to have recognised the dangers of inbreeding at least 34,000 years ago, and developed surprisingly sophisticated social and mating networks to avoid it, new research has found.

The study, reported in the journal Science, examined genetic information from the remains of anatomically modern humans who lived during the Upper Palaeolithic, a period when modern humans from Africa first colonised western Eurasia. The results suggest that people deliberately sought partners beyond their immediate family, and that they were probably connected to a wider network of groups from within which mates were chosen, in order to avoid becoming inbred.

This suggests that our distant ancestors are likely to have been aware of the dangers of inbreeding, and purposely avoided it at a surprisingly early stage in prehistory.

The symbolism, complexity and time invested in the objects and jewellery found buried with the remains also suggest that the groups may have developed rules, ceremonies and rituals to accompany the exchange of mates, practices that perhaps foreshadowed modern marriage ceremonies and may have been similar to those still practised by hunter-gatherer communities in parts of the world today.

The study's authors also hint that the early development of more complex mating systems may at least partly explain why anatomically modern humans proved successful while other species, such as Neanderthals, did not. However, more ancient genomic information from both early humans and Neanderthals is needed to test this idea.

The research was carried out by an international team of academics, led by the University of Cambridge, UK, and the University of Copenhagen, Denmark. They sequenced the genomes of four individuals from Sunghir, a famous Upper Palaeolithic site in Russia, which is believed to have been inhabited about 34,000 years ago.

The human fossils buried at Sunghir represent a rare and highly valuable source of information because, very unusually for finds from this period, the people buried there appear to have lived at the same time and were buried together. To the researchers' surprise, however, these individuals were not closely related in genetic terms; at the very most, they were second cousins. This is true even of the two children who were buried head-to-head in the same grave.

Professor Eske Willerslev, who holds posts both as a Fellow at St John's College, Cambridge, and at the University of Copenhagen, was the senior author on the study. "What this means is that even people in the Upper Palaeolithic, who were living in tiny groups, understood the importance of avoiding inbreeding," he said. "The data that we have suggest that it was being purposely avoided."

"This means that they must have developed a system for this purpose. If small hunter-gatherer bands were mixing at random, we would see much greater evidence of inbreeding than we have here."

Early humans and other hominins such as Neanderthals appear to have lived in small family units. The small population size made inbreeding likely, but among anatomically modern humans it eventually ceased to be commonplace; when this happened, however, is unclear.

"Small family bands are likely to have interconnected with larger networks, facilitating the exchange of people between groups in order to maintain diversity," Professor Martin Sikora, from the Centre for GeoGenetics at the University of Copenhagen, said.

Sunghir contains the burials of one adult male and two younger individuals, accompanied by the symbolically-modified incomplete remains of another adult, as well as a spectacular array of grave goods. The researchers were able to sequence the complete genomes of the four individuals, all of whom were probably living on the site at the same time. These data were compared with information from a large number of both modern and ancient human genomes.

They found that the four individuals studied were genetically no closer than second cousins, while an adult femur filled with red ochre found in the children's grave would have belonged to an individual no more closely related than a great-great-grandfather of the boys. "This goes against what many would have predicted," Willerslev said. "I think many researchers had assumed that the people of Sunghir were very closely related, especially the two youngsters from the same grave."
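For context, the expected fraction of genome shared by descent falls by a factor of four with each "cousin step" away from full siblings, which is why "no closer than second cousins" implies very little shared ancestry. A quick sketch of those textbook expectations (illustrative only, not the paper's method):

```python
# Expected coefficient of relationship r for collateral kin.
# Full siblings share r = 1/2 of their genome on average; each further
# cousin step (first cousin -> second cousin) divides r by 4, because
# it adds one generation on each side of the shared ancestor.

RELATIONSHIPS = {
    "full siblings": 0.5,
    "first cousins": 0.5 / 4,    # 1/8  = 12.5%
    "second cousins": 0.5 / 16,  # 1/32 ~ 3.1%
}

for name, r in RELATIONSHIPS.items():
    print(f"{name}: expected shared ancestry {r:.3%}")
```

So even the two children buried in the same grave shared, at most, around three percent of their genomes by descent.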

The people at Sunghir may have been part of a network similar to that of modern-day hunter-gatherers, such as Aboriginal Australians and some historical Native American societies. Like their Upper Palaeolithic ancestors, these people live in fairly small groups of around 25 people, but they are also connected, less directly, to a larger community of perhaps 200 people, within which there are rules governing with whom individuals can form partnerships.

"Most non-human primate societies are organised around single-sex kin, where one of the sexes remains resident and the other migrates to another group, minimising inbreeding," says Professor Marta Mirazón Lahr, from the Leverhulme Centre for Human Evolutionary Studies at the University of Cambridge. "At some point, early human societies changed their mating system into one in which a large number of the individuals that form small hunter-gatherer units are non-kin. The results from Sunghir show that Upper Palaeolithic human groups could use sophisticated cultural systems to sustain very small group sizes by embedding them in a wide social network of other groups."

By comparison, genomic sequencing of a Neanderthal individual from the Altai Mountains who lived around 50,000 years ago indicates that inbreeding was not avoided. This leads the researchers to speculate that an early, systematic approach to preventing inbreeding may have helped anatomically modern humans to thrive, compared with other hominins.

This should be treated with caution, however: "We don't know why the Altai Neanderthal groups were inbred," Sikora said. "Maybe they were isolated and that was the only option; or maybe they really did fail to develop an available network of connections. We will need more genomic data of diverse Neanderthal populations to be sure."

Willerslev also highlights a possible link with the unusual sophistication of the ornaments and cultural objects found at Sunghir. Group-specific cultural expressions may have been used to establish distinctions between bands of early humans, providing a means of identifying who to mate with and who to avoid as partners.

Read more at Science Daily

The Rise of Necrofauna and the Ethical Dilemma of De-Extinction

A Columbian mammoth skeleton (Mammuthus columbi) with tusks, found in the La Brea Tar Pits and now on display at the Page Museum, Los Angeles
In the not-too-distant future, at least four animal species and one tree species now classified as extinct are expected to be brought back, at least in part, through biotechnology. The rise of the necrofauna, as biologist Britt Wray calls it, brings tremendous hope, but also concerns that humanity has never before faced.

Although the existence of necrofauna is still largely hypothetical, one animal has already undergone the de-extinction process: the Pyrenean ibex. It is hard to call that a success story just yet, however, as the cloned calf born in 2003 lived for only a few minutes before dying of a lung defect.

“It’s the only (de-extinction) case so far for an animal,” Wray, a researcher at the University of Copenhagen’s Center for Synthetic Biology and author of The Rise of the Necrofauna, told Seeker. “However, depending on how far you want to stretch the definition of ‘de-extinction,’ the genetic rescue of the American chestnut tree, which is currently underway, may count.”

According to the organization American Forests, the American chestnut was once the predominant tree species in eastern US forests. Billions of the trees towered over the landscape before a blight began to kill them off around 1904. The blight’s source, a pathogenic fungus called Cryphonectria parasitica, was introduced into the US from Japanese nursery stock that same year. Within one to two decades, nearly all of the American chestnut trees had died.

In 1989, a breeding program created by the American Chestnut Foundation started to produce hybrid trees that are indistinguishable from the original American chestnut trees, but include a small amount of genetic material from the Chinese chestnut tree. The process involves “backcrossing,” whereby trees are crossed and then bred via successive generations as closely back to the original species as possible.
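The arithmetic behind backcrossing is straightforward: the initial hybrid carries, on average, half of each parent species' genome, and every subsequent cross back to a pure American chestnut halves the remaining Chinese fraction. A minimal sketch of that expectation (the function name and loop are illustrative, not taken from the ACF program, and real offspring vary around these averages and are selected for blight resistance at each step):

```python
# Expected fraction of Chinese chestnut genome after successive backcrosses.
# The F1 hybrid is 1/2 donor (Chinese); each backcross to a pure American
# chestnut parent halves the donor fraction on average.

def donor_fraction(n_backcrosses: int) -> float:
    """Expected donor-genome fraction after n backcrosses to the recurrent parent."""
    return 0.5 ** (n_backcrosses + 1)

for n in range(4):
    print(f"after {n} backcrosses: {donor_fraction(n):.2%} Chinese chestnut")
```

After three backcrosses the expected donor fraction is down to 1/16 (6.25 percent), which is why such trees look and behave almost entirely like American chestnuts while, breeders hope, retaining the resistance genes.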

“It’s probably not the best tree we can achieve, but it’s good enough to start planting,” said Kim Steiner, director of Penn State University’s arboretum, and a science advisor to the ACF.

Paleontologist Jack Horner with a bird skeleton during a presentation at the Museo di Storia Naturale in Milan, Italy, in 2012
Another example of going backward in terms of evolution is paleontologist Jack Horner’s dino-chicken project, otherwise known as Chickenosaurus. As described in his and James Gorman’s 2009 book How to Build a Dinosaur: Extinction Doesn’t Have to Be Forever, Horner and his colleagues have been reverse-engineering characteristics in chickens.

“Birds are dinosaurs, so technically we’re making a dinosaur out of a dinosaur,” Horner, who served as the technical adviser for all of the Jurassic Park films, explained at the time of the book’s release. He added that he and his team hope to “awaken the dinosaur within.”

The process is a slow and gradual one, however. Advances so far include turning the beaks of chicken embryos back into dinosaur-like snouts through reverse genetic engineering, and recreating dinosaur-like leg and foot anatomy in chicken embryos.

Horner believes that the work could lead to medical discoveries that may benefit humans, since learning what prompts and stops the growth of anatomical features could provide insights into serious human birth defects.

The dino-chicken project, even if fully successful, would not necessarily bring a particular species back from extinction. Paleontologists have not yet pinpointed what specific non-avian dinosaurs gave rise to modern birds.

One of the animal species that Wray believes could be brought back from extinction over the next decade is the woolly mammoth. Among those with that goal is George Church, a Harvard geneticist, and the Revive & Restore project.

“Church predicts he might be able to create an engineered elephant embryo that could give rise to an ‘unextinct’ mammoth in a couple of years,” Wray said, “but that’s just the embryo. It would take many more years after that to create a herd of animals successfully from those kinds of embryos.”

A de-extinction project is also underway in Europe to bring back the giant wild cattle known as aurochs, which went extinct in 1627. Rewilding Europe and the Taurus Foundation have established multiple breeding sites for the Taurus program, which involves crossbreeding Iberian and Podolian breeds of cattle in order to produce animals with aurochs-like traits.

“The biggest misconception about de-extinction is that it is possible to bring extinct species back to life in identical form; it isn’t,” said Wray.

Aurochs depicted in a cave painting from Lascaux, France
Wray explained that there are always important differences between an extinct animal and the ones scientists might be able to create in its image. With cloning, for example, the nucleus from a cell of the extinct animal is transferred into the egg cell of a living relative to produce the new animal clone. But DNA is also stored in organelles called mitochondria, which sit outside the cell’s nucleus in the jelly-like cytoplasm.

“These don’t get transferred, and so the host egg cell provides the mitochondria for the clone that gets produced,” Wray said. “This could create only a small genetic difference, but either way, it’s not a genetically identical copy of the original.”

The surrogate mothers that are used to bring lab-created fetuses to term could introduce hormonal, microbiotic, or other differences. Gene editing is also not an exact science, in terms of de-extinction goals, Wray said.

“It’s only select genetic changes from the genomes of the extinct animal that scientists will edit into the genomes of their closest living relatives in order to give them specific traits that are deemed important,” she said. “They’re not making 100 percent of the changes, and so what they’re really creating is a hybrid between the extinct animal and its living relative.”

The other primary de-extinction technique, backbreeding, uses artificial selection techniques along the lines of the work to restore the American chestnut tree. “So the de-extinction process here is only ‘skin deep,’” Wray said, referring to how the new species may look like its predecessor, but is “not the real identical thing.”

Britt Wray
Undaunted by the present de-extinction limitations, Michael Archer and his team from the University of New South Wales and The Lazarus Project have been working to create a proxy for the gastric-brooding frog. The frog’s genus Rheobatrachus consisted of two species, both of which were classified as extinct in the mid-1980s.

The genus made headlines even before the die-out, as the frogs were the only ones known to have females that incubated their pre-juvenile young in their stomachs. When disturbed, the mothers would sometimes regurgitate their young in a single dramatic act of propulsive vomiting.

The upchucking did not lead to their extinction, though. The human introduction of a pathogenic fungus into the frog’s native range of eastern Australia did. Archer and his team have made progress in turning that seeming finality around.

“They’ve been able to clone DNA from the extinct frog in great-barred frog embryos, but those embryos haven’t yet developed successfully into tadpoles,” Wray said.

She added that Archer and his colleagues only have one chance a year to try to produce the embryos, due to the reproductive cycle of the great-barred host frogs. They plan to try again in 2018.

The gastric-brooding frog, now extinct, Rheobatrachus silus
Another animal in line for de-extinction is the passenger pigeon. Once endemic to North America, the pigeons were hunted extensively by both Native Americans and Europeans. Habitat loss also contributed to the ultimate demise of the birds, with the last wild one confirmed to have been shot in 1901. When a beloved passenger pigeon named “Martha” at the Cincinnati Zoo died in 1914, the entire species was declared extinct.

Ben Novak of Revive & Restore has been sequencing the passenger pigeon’s genome to study important aspects of the species’ ecological niche vital to its restoration. He and his colleagues estimate that engineered passenger pigeon proxies might be born around 2022.

Such research would appear to help fix problems that humans largely created, since our species contributed to the extinction of passenger pigeons and many of the other animals, including woolly mammoths.

Wray believes biotechnologies applied to de-extinction research should be utilized to help save threatened species, such as corals, bats, bees, and northern white rhinos. She believes that working to save these animals, before they actually go extinct, is where “the most ethical and beneficial application of the technology lies.”

She also thinks that de-extinction work could energize young students by showing them how science may be able to regenerate aspects of biodiversity.

“Young people mainly hear about the loss of precious ecosystems and the fact that they may never get to see a polar bear in the wild,” Wray said. “Perhaps de-extinction and its related technologies could make younger generations excited, rather than jaded, piquing their curiosity for what may be possible in their lifetimes. It’s about creating a hopeful narrative.”

She and others have serious concerns about de-extinction attempts, however.

Animal welfare is a major one. Wray said there is a lot of “animal failure involved in cloning processes, and some of the animals that do make it may end up in captivity for most, if not all, of their lives.”

The quest for necrofauna may also divert resources and attention from conservation programs for currently endangered species, but zoologist Philip Seddon of the University of Otago thinks that is a faulty argument.

“People have argued against de-extinction as pulling funding away from extant species conservation, but the counter to this is that those interested in funding high-tech approaches such as de-extinction are not likely to be interested in funding ongoing species conservation work, i.e. for every resurrected mammoth we would not have to fire a wildlife ranger,” he told Seeker.

“But, and this is a key issue, once resurrected species move out of the labs and into reserves they become the responsibility of cash-strapped conservation agencies, and unless more money enters the system, something has to drop off,” Seddon added.

It remains to be seen if these “reserves” could become the Jurassic Parks of the future, generating ample funds from members of the public hoping to see resurrected animals. Researchers may even patent their creations.

“Due to the high level of genetic modification that will be required to make many of the animals, they will not be seen as a product of nature in the eyes of the law, and therefore should be patentable,” Wray explained.

Read more at Seeker

Neanderthal DNA Influences the Looks and Behavior of Modern Humans

Modern human skull (left) and Neanderthal skull (right) at the Cleveland Museum of Natural History, 2008.
Neanderthals are the closest evolutionary relatives identified to date of all people living today. They are so close to us that some people — those of European and Asian heritage — retain a fair amount of DNA from these big-brained, big-headed hominids who once dominated much of the world.

Now new research finds that Neanderthals are even more with us than previously suspected. A paper published in the journal Science finds that individuals whose primary heritage lies outside of Africa possess 10–20 percent more Neanderthal DNA than was reported earlier, with probable influences on their appearance, behavior, health, and even habits, such as smoking.

East Asians were found to carry somewhat more of this DNA, 2.3–2.6 percent, than people now living in Western Eurasia, 1.8–2.4 percent.

“There are two hypotheses that can explain this difference,” lead author Kay Prüfer, of the Max Planck Institute for Evolutionary Anthropology, told Seeker. “Either Europeans possess ancestry from a modern human group with little Neanderthal admixture, or East Asians had additional admixture with Neanderthals.”

For now, the answer remains a mystery.

Less mysterious is fossil evidence showing that a related group of Neanderthals lived in Vindija Cave, Croatia, around 52,000 years ago, just 12,000 years before Neanderthals as a distinct type of human are thought to have died out in Europe. Prüfer and her team sequenced the genome of one of the Vindija females. The achievement marks only the second time that a Neanderthal genome has been sequenced in detail.

Prüfer and her team compared the newly generated sequence to that of the earlier detailed one for what is known as the Altai Neanderthal, whose remains were found in Denisova Cave in the Altai Mountains of southern Siberia. The researchers also compared these two sequences to those of still other known Neanderthals, such as the Mezmaiskaya Neanderthal of southern Russia.

“What we see is that the admixing Neanderthals were more closely related to Vindija and Mezmaiskaya compared to the Altai individual, and that they have a last common ancestor with Vindija and Mezmaiskaya sometime between 80,000 and 150,000 years before the present,” Prüfer said.

The landscape around Mezmaiskaya Cave in southern Russia.
People of European and Asian heritage today therefore retain DNA from a population of still-unknown Neanderthals who are ancestral to the identified ones from Croatia and southern Russia.

The genomes of both the Vindija and Altai Neanderthals provide evidence that Neanderthals lived in small and isolated populations of no more than about 3,000 individuals per region. Climate and available resources likely contributed to keeping their numbers low.

The Altai genome suggested that the individual’s parents were half-siblings, which led scientists to suspect that extreme inbreeding may have been ubiquitous among Neanderthals. While the Vindija female shared a maternal ancestor with two of the three other Neanderthals found in the Croatian cave, she did not show evidence of significant inbreeding.

Prüfer and her team additionally identified a wealth of new gene variants in the Neanderthal genome that are influential in people today of European and Asian heritage. The variants are related to blood levels of LDL cholesterol and vitamin D, as well as eating disorders, body fat accumulation, rheumatoid arthritis, schizophrenia, and the response to anti-psychotic drugs.

In most cases, the researchers know only of the association with the identified gene variants, and not what the genes might specifically do. But it is established, Prüfer noted, that the “LDL cholesterol variant from Neanderthals is associated with a lower level, that is, in the direction of reduced chance of heart disease.”

Neanderthal man on display at the Neanderthal Museum in Mettmann, Germany.
A second new paper concerning Neanderthals, published this week in the American Journal of Human Genetics, finds that their genetic influence in living populations also extends to skin tone, hair color, sleep patterns, mood, and a person’s smoking status.

“What was somewhat surprising is that we observe multiple different Neanderthal alleles contributing to skin and hair tones,” Janet Kelso, a computational biologist who co-authored the study with her colleague Michael Dannemann, told Seeker.

“Some Neanderthal alleles are associated with lighter tones and others with darker skin tones, and some with lighter and others with darker hair colors,” she added. “This may indicate that Neanderthals themselves were variable in these traits.”

The skin tones, she added, ranged from very fair to dark olive.

The origins of red hair still remain a mystery, she indicated, as a clear link to Neanderthal ancestry could not be established.

“If variants contributing to red hair were present in Neanderthals, they were probably not common,” Kelso said.

She and Dannemann, both from the Max Planck Institute for Evolutionary Anthropology, made these determinations and more after looking for known Neanderthal genetic variants in the UK Biobank, a database of information on 112,000 UK individuals that includes genetic data, along with details related to their physical appearance, diet, sun exposure, behavior and disease.

As with the other study, led by Prüfer, Kelso and Dannemann could only identify associations between Neanderthal genetic variants and the traits of people today; they could not determine what these variants actually did in Neanderthals, or precisely how they function now in their distant living relatives.

Homo naledi facial reconstruction.
Of her own research, Kelso said, “We cannot infer from this study that Neanderthals smoked, or suffered from other addictions or mood and sleep disorders.”

“What we instead learn,” she continued, “is that Neanderthal DNA that is present in people today has detectable effects on behavior. This suggests that Neanderthal DNA influences the brain in ways that affect these behaviors.”

“It’s very important to point out that we can’t start blaming Neanderthals for these traits,” she quickly added. “Behavioral traits such as sleep, mood and addiction behavior are complex, and there are many different parts of our genomes that contribute to them.”

Read more at Seeker

Oct 4, 2017

Ancient humans left Africa to escape drying climate

The Lamont-Doherty Core Repository contains a unique and important collection of scientific samples from the deep sea. Sediment cores from every major ocean and sea are archived here.
Humans migrated out of Africa as the climate shifted from wet to very dry about 60,000 years ago, according to research led by a University of Arizona geoscientist.

Genetic research indicates people migrated from Africa into Eurasia between 70,000 and 55,000 years ago. Previous researchers suggested the climate must have been wetter than it is now for people to migrate to Eurasia by crossing the Horn of Africa and the Middle East.

"There's always been a question about whether climate change had any influence on when our species left Africa," said Jessica Tierney, UA associate professor of geosciences. "Our data suggest that when most of our species left Africa, it was dry and not wet in northeast Africa."

Tierney and her colleagues found that around 70,000 years ago, climate in the Horn of Africa shifted from a wet phase called "Green Sahara" to even drier than the region is now. The region also became colder.

The researchers traced the Horn of Africa's climate 200,000 years into the past by analyzing a core of ocean sediment taken in the western end of the Gulf of Aden. Tierney said before this research there was no record of the climate of northeast Africa back to the time of human migration out of Africa.

"Our data say the migration comes after a big environmental change. Perhaps people left because the environment was deteriorating," she said. "There was a big shift to dry and that could have been a motivating force for migration."

"It's interesting to think about how our ancestors interacted with climate," she said.

The team's paper, "A climatic context for the out-of-Africa migration," is published online in Geology this week. Tierney's co-authors are Peter deMenocal of the Lamont-Doherty Earth Observatory in Palisades, New York, and Paul Zander of the UA.

The National Science Foundation and the David and Lucile Packard Foundation funded the research.

Tierney and her colleagues had successfully revealed the Horn of Africa's climate back to 40,000 years ago by studying cores of marine sediment. The team hoped to use the same means to reconstruct the region's climate back to the time 55,000 to 70,000 years ago when our ancestors left Africa.

The first challenge was finding a core from that region with sediments that old. The researchers enlisted the help of the curators of the Lamont-Doherty Core Repository, which has sediment cores from every major ocean and sea. The curators found a core collected off the Horn of Africa in 1965 from the R/V Robert D. Conrad that might be suitable.

Co-author deMenocal studied and dated the layers of the 1965 core and found it had sediments going back as far as 200,000 years.

At the UA, Tierney and Paul Zander teased out temperature and rainfall records from the organic matter preserved in the sediment layers. The scientists took samples from the core about every four inches (10 cm), a spacing that represents roughly 1,600 years.
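The quoted spacing implies a roughly constant sedimentation rate, which allows a back-of-envelope conversion between depth in the core and age. This is only a sketch of the arithmetic; real age models tie depths to independently dated layers rather than assuming a constant rate:

```python
# Depth-to-age conversion implied by the article's numbers: samples every
# ~10 cm represent steps of ~1,600 years, i.e. a mean sedimentation rate
# of 10 cm per 1,600 yr. Assumes a constant rate, which real cores violate.

SAMPLE_SPACING_CM = 10.0
YEARS_PER_SAMPLE = 1600.0
RATE_CM_PER_YEAR = SAMPLE_SPACING_CM / YEARS_PER_SAMPLE  # 0.00625 cm/yr

def depth_to_age(depth_cm: float) -> float:
    """Approximate age (years before present) at a given core depth."""
    return depth_cm / RATE_CM_PER_YEAR

# At this rate, a 200,000-year record spans about 12.5 m of sediment:
print(round(depth_to_age(1250)))  # 200000
```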

To construct a long-term temperature record for the Horn of Africa, the researchers analyzed the sediment layers for chemicals called alkenones made by a particular kind of marine algae. The algae change the composition of the alkenones depending on the water temperature. The ratio of the different alkenones indicates the sea surface temperature when the algae were alive and also reflects regional temperatures, Tierney said.
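The alkenone thermometer described above is conventionally expressed through the UK′37 unsaturation index, which compares di- and tri-unsaturated C37 alkenones; warmer water shifts the ratio toward the di-unsaturated form. A minimal sketch, assuming the widely used Müller et al. (1998) core-top calibration (UK′37 = 0.033·SST + 0.044); the abundance values below are hypothetical, and other published calibrations differ slightly:

```python
# Sketch of the standard alkenone paleothermometer (UK'37 index).

def uk37_index(c37_2: float, c37_3: float) -> float:
    """UK'37 = C37:2 / (C37:2 + C37:3), from measured alkenone abundances."""
    return c37_2 / (c37_2 + c37_3)

def sst_from_uk37(uk37: float) -> float:
    """Invert UK'37 = 0.033*SST + 0.044 to estimate sea surface temp (deg C)."""
    return (uk37 - 0.044) / 0.033

index = uk37_index(c37_2=0.8, c37_3=0.2)  # hypothetical abundances
print(round(sst_from_uk37(index), 1))     # 22.9 (deg C)
```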

To figure out the region's ancient rainfall patterns from the sediment core, the researchers analyzed the ancient leaf wax that had blown into the ocean from terrestrial plants. Because plants alter the chemical composition of the wax on their leaves depending on how dry or wet the climate is, the leaf wax from the sediment core's layers provides a record of past fluctuations in rainfall.

The analyses showed that the time people migrated out of Africa coincided with a big shift to a much drier and colder climate, Tierney said.

The team's findings are corroborated by research from other investigators who reconstructed past regional climate by using data gathered from a cave formation in Israel and a sediment core from the eastern Mediterranean. Those findings suggest that it was dry everywhere in northeast Africa, she said.

Read more at Science Daily

Sputnik Launch 60 Years Ago Was Slow to Resonate With Americans

Soviet technician working on Sputnik 1, 1957
A Smithsonian Institution curator will argue at an event this week that it is "hyperbolic" to claim that the American public immediately panicked over Soviet technological superiority when the Sputnik satellite launched 60 years ago on Oct. 4.

Sputnik was the first space satellite and viewed as the beginning of the space race, when the Soviets and the United States used Earth orbit as an arena to test out space equipment and astronaut capabilities. The race culminated with the US putting astronauts on the moon beginning in 1969. While the Soviets targeted the moon as well, after several rocket failures they chose instead to focus on constructing space stations.

Since the disintegration of the Soviet Union in 1991, relations between the United States and Russia have been very different, with the nations collaborating extensively on the International Space Station. In late September, the two nations signed a cooperation agreement to work on Deep Space Gateway, a proposed space station near the moon that NASA plans for the 2020s, according to Russian space agency Roscosmos. The station could serve as a launching point for Mars missions in subsequent decades.

Despite the collaboration, the early years of the US-Soviet space race are often cited in the media. Accounts generally say that Sputnik set off a terror in the American public that resonated through the education system and immediately drummed up support for the space race. But according to Michael Neufeld, a senior curator in the Smithsonian National Air and Space Museum's space history department, the period immediately after Sputnik showed little of this effect.

Neufeld will speak Oct. 4 at an event in Washington, DC called "The Space Race and the Origins of the Space Age," which will also feature representatives from Blue Origin, Raytheon Integrated Defense Systems, Fordham University, and the Washington Post.

"All public opinion polls showed that concern was not very high at the beginning," Neufeld told Seeker. "There was a lot of admiration for the Soviet achievement."

The public was more concerned about the crisis over Little Rock Central High School in Arkansas, when controversy erupted over nine black students enrolling in a formerly all-white school, Neufeld said. Segregation was still active in many parts of the American South. Under President Dwight D. Eisenhower's command, the Arkansas National Guard was mobilized to support integration.

More US concern arose in early November 1957 when the 1,000-pound Sputnik II carried the dog Laika into space, Neufeld said, which appeared to legitimize the Soviet claim that they could launch intercontinental ballistic missiles. On Dec. 6, 1957, the US responded with a rocket launch attempt of its own, which failed spectacularly on national television when the rocket carrying the Vanguard 1A satellite exploded shortly after launch.

"This failure exacerbated both the tide of political and media criticism and the anger over the US coming in second," Neufeld said. "It also increased public opinion against the Eisenhower administration and concern over the nuclear arms race, rockets, and space."

Following the failure of Vanguard 1A, the United States swiftly responded by putting another rocket type up for launch: the Redstone missile, which successfully launched the Explorer 1 satellite on Jan. 31, 1958. The group that developed the missile was led by the famous Nazi-turned-US rocketeer Wernher von Braun, who later was the chief architect of the Saturn V rocket that flew humans to the moon in the late 1960s.

The space race, according to Neufeld, actually began in 1955 when both the United States and the Soviet Union declared they were going to launch satellites for International Geophysical Year (1957-1958). At that time, the US considered using either the Vanguard or Redstone missiles. It was a government committee that chose the Vanguard due to the superior scientific capability advertised for its satellite. Eisenhower, Neufeld said, had little to do with the decision.

But as the effects of the Sputnik launch reverberated through the Eisenhower administration, it produced a significant reorganization of the US government. Eisenhower upgraded the president's science committee to make it report directly to the president. The predecessor agency to the military testing organization DARPA, called ARPA (the Advanced Research Project Agency), was created in February 1958 to try to pull together the competing space programs among different US military branches, Neufeld added.

But by the summer of 1959, the Army — due in part to its successful Redstone program — dominated most military space activity, with the exception of the Corona reconnaissance satellite program run by the Air Force and the Central Intelligence Agency.

NASA was created in 1958 to give the United States a civilian space agency in addition to its military space activities. This made the US the first nation in the world to place space activity under civilian control, Neufeld said.

A common narrative suggests the Sputnik launch led to the moon race of the 1960s, but Neufeld said the decision to go to the moon was unique to John F. Kennedy, Eisenhower's successor as president. Kennedy was embarrassed by the failed US military invasion of Cuba in April 1961 and was looking for a way to quickly upstage the Soviets. He happened to choose space as the arena of competition. "If [Richard] Nixon had won the election, and it was a close election, he might have made a different decision," Neufeld said.

The US and the Soviet Union had a "moment of détente," Neufeld said, in the 1970s that led to collaboration on the Apollo-Soyuz mission of July 1975. Soviet cosmonauts and American astronauts met up in orbit and conducted joint scientific experiments and press conferences. Soviet-US relations cooled again, he added, when the Soviet Union invaded Afghanistan in 1979, and the tension continued through both of Ronald Reagan’s presidential terms in the 1980s.

The situation changed when the Soviet Union collapsed in 1991, leading to US fears that Russian engineers would go to Iran, North Korea, or Pakistan and work on their military programs, Neufeld said. The Russian economy was falling apart, so the US hosted a series of paid shuttle flights to the Soviet-built space station, Mir. Russia also became an early and key partner in building the International Space Station, whose first pieces reached orbit in 1998.

"The International Space Station created this mutual dependency of the United States and Russia on each other in space, because they needed us to prop up their space station and human space program," Neufeld said.

Today, the United States depends on the Russians to transport astronauts to the orbiting complex. The US space shuttle program was retired in 2011, leaving only the Russians capable of launching astronauts to the ISS aboard its Soyuz rocket. US commercial crew vehicles will start replacing Soyuz flights in the next year or so.

"The US needs Russian rockets to get there, and the Russians need us to support the ISS — it's 75 percent funded by the US," Neufeld said. "Regardless of the political context creating crisis between [Russian President Vladimir] Putin and the US administration, so far the ISS has continued undisturbed by all of this because both countries need each other to run that program."

Read more at Seeker

The Future of Electric Vehicles May Lie Below Ancient Supervolcanoes

Wizard Island in Crater Lake, a caldera lake in Crater Lake National Park, Oregon
Lithium-ion batteries are the fuel source of the future, already powering nearly every electronic gadget in your house and soon most cars on the road. Volvo recently announced that all of its new models will be either hybrid or fully electric starting in 2019, and Chinese automaker BYD expects to have a completely electrified fleet in the next decade.

While global supplies of lithium are currently high, the expected surge in demand from electric carmakers could create a lithium shortage by as early as 2030.

More than 75 percent of the world’s lithium is mined in Chile and Australia, but geologists from Stanford University have discovered that vast stores of the valuable metal may be available at hundreds of sites across North America in the remains of ancient supervolcanoes.

In a paper published in Nature Communications, the researchers explained how these massive volcanoes — 10,000 times more powerful than most active volcanoes today — created the perfect thermodynamic conditions to produce thick deposits of lithium-rich clays.

“Lithium is the new oil,” lead author Thomas Benson, a recent Ph.D. graduate of Stanford’s School of Earth, Energy, and Environmental Sciences, told Seeker.

As the price of lithium goes up, it will become increasingly risky to allow just a handful of countries and companies to control the global lithium supply. By tapping lithium deposits in the calderas of extinct supervolcanoes, many more countries — including the US, Canada, and Mexico — can help diversify lithium production.

Supervolcano isn’t a true scientific term, Benson explained, but generally refers to a volcanic eruption that produces at least 1,000 cubic kilometers of material. For comparison, the eruption of Mount St. Helens in 1980 produced only 4.2 cubic kilometers of material.

The best-known supervolcano site in America is probably Crater Lake in Oregon. At nearly 2,000 feet deep, it's the deepest lake in the nation, filling a six-mile-wide caldera left by the colossal eruption and collapse of Mount Mazama 7,700 years ago.

But when geologists go looking for lithium, places like Crater Lake don’t cut it. What they need is a caldera lakebed that’s long been drained of its water.

That's why the McDermitt volcanic field in Nevada is just the type of place where America's lithium boom might begin. Located 60 miles northwest of Winnemucca, Nevada, the ancient caldera was formed by a supervolcanic eruption more than 16 million years ago and looks today like nothing more than a rocky wasteland. But just 30 meters below the surface lies a seam of sedimentary clays with lithium concentrations of greater than 4,000 parts per million.

“The magma that originally erupted from the supervolcano had about 1,400 ppm lithium in it,” said Benson, describing a red-hot pyroclastic flow of pumice, ash, crystals, and rock that spread for 50 miles in all directions. “That’s not crazy levels of enrichment, but since there was this big hole in the ground and you had 200,000 to 300,000 years, that lithium was progressively leached from the nearby rock by rainwater and deposited in the caldera lake sediments.”

Not all ancient caldera sediments are loaded with lithium. Benson and his colleagues analyzed tiny samples of crystallized magma called melt inclusions taken from supervolcano sites around the world. It turns out that the most lithium-rich magma is formed when a lot of continental crust is melted into the mix. Magma that's mostly from the deeper mantle, however, doesn't produce a lot of lithium, and neither does magma melted from oceanic crust.

That’s why ancient magma samples from the Pantelleria caldera off the coast of Sicily showed only 100 ppm lithium, while the Hideaway Park supervolcano site in Colorado, sitting atop a thick layer of continental crust, registered at 5,990 ppm lithium on average. Even when the Crater Lake caldera does dry out hundreds of thousands of years from now, its near-coastal location would likely make it a lousy source of lithium.

The McDermitt volcanic site in Nevada is believed to be the largest lithium deposit in America, with an estimated two megatons of the prized but volatile metal. That's a good cache, considering that the total amount of extractable lithium on the planet is estimated at around 14 megatons.

Read more at Seeker

2017 Nobel Prize in Chemistry Awarded to Pioneers in Cryo-Electron Microscopy

A screen displays portraits of the winners of the 2017 Nobel Prize in Chemistry on October 4, 2017, at the Royal Swedish Academy of Sciences in Stockholm, Sweden: (L-R) Jacques Dubochet from Switzerland, Joachim Frank from the US, and Richard Henderson from Britain.
A revolutionary technique dubbed cryo-electron microscopy, which has shed light on the Zika virus and an Alzheimer's enzyme, earned scientists Jacques Dubochet, Joachim Frank, and Richard Henderson the Nobel Chemistry Prize on Wednesday.

Thanks to the international team's "cool method," which uses electron beams to examine the tiniest structures of cells, "researchers can now freeze biomolecules mid-movement and visualize processes they have never previously seen," the Nobel chemistry committee said.

This has been "decisive for both the basic understanding of life's chemistry and for the development of pharmaceuticals," it added.

The ultra-sensitive imaging method allows molecules to be flash-frozen and studied in their natural form, without the need for dyes.

It has laid bare never-before-seen details of the tiny protein machines that run all cells.

"When researchers began to suspect that the Zika virus was causing the epidemic of brain-damaged newborns in Brazil, they turned to cryo-EM (electron microscopy) to visualize the virus," the committee said.

Frank, a 77-year-old, German-born biochemistry professor at Columbia University in New York, was woken from his sleep when the committee announced the prize in Stockholm, six hours ahead of New York.

"There are so many other discoveries every day, I was in a way speechless," he said. "It's wonderful news."

In the first half of the 20th century, biomolecules – proteins, DNA, and RNA – were terra incognita on the map of biochemistry.

Because the powerful electron beam destroys biological material, electron microscopes were long thought to be useful only to study dead matter.

Tea party to celebrate

But 72-year-old Henderson, from the MRC Laboratory of Molecular Biology in Cambridge, used an electron microscope in 1990 to generate a three-dimensional image of a protein at atomic resolution, a groundbreaking result that proved the technology's potential.

Frank made it widely usable between 1975 and 1986, developing a method to transform the electron microscope's fuzzy two-dimensional images into sharp, 3D composites.

Dubochet, today an honorary professor of biophysics at the University of Lausanne, added water.

Now 75, he discovered in the 1980s how to cool water so quickly that it solidifies in liquid form around a biological sample, allowing the molecules to retain their natural shape even in a vacuum.

The electron microscope's every nut and bolt has been optimized since these discoveries.

The required atomic resolution was reached in 2013, and researchers "can now routinely produce three-dimensional structures of biomolecules," according to the Nobel committee.

The trio will share the prize money of nine million Swedish kronor (around $1.1 million or 943,100 euros).

"Normally what I'd do if I was in Cambridge, we will have a party around tea-time in the lab, but I expect we'll have it tomorrow instead," said Henderson.

'Beautiful pictures'

The prize announcement was praised by the scientific community and observers around the world.

"By solving more and more structures at the atomic level we can answer biological questions, such as how drugs get into cells, that were simply unanswerable a few years ago," Jim Smith, science director at the London-based biomedical research charity Wellcome, said in a statement.

Daniel Davis, immunology professor at the University of Manchester, said details of crucial molecules and proteins that make the human immune system function can now be seen like never before.

"It has been used in visualizing the way in which antibodies can work to stop viruses being dangerous, leading to new ideas for medicines — as just one example," he said.

Read more at Science Daily

Oct 3, 2017

Prehistoric squid was last meal of newborn ichthyosaur 200 million years ago

The specimen, which has a total length of around 70 cm, is on display at the Lapworth Museum of Geology, University of Birmingham.
Scientists from the UK have identified the smallest and youngest specimen of Ichthyosaurus communis on record and found an additional surprise preserved in its stomach.

The ichthyosaur fossil has a total length of just around 70 cm and had the remains of a prehistoric squid in its stomach. Ichthyosaurus communis was the first species of ichthyosaur, a group of sea-going reptiles, to be properly recognised by science, in 1821.

The University of Manchester palaeontologist and ichthyosaur expert, Dean Lomax, said: "It is amazing to think we know what a creature that is nearly 200 million years old ate for its last meal. We found many tiny hook-like structures preserved between the ribs. These are from the arms of prehistoric squid. So, we know this animal's last meal before it died was squid.

"This is interesting because a study by other researchers on a different type of ichthyosaur, called Stenopterygius, which is from a geologically younger age, found that the small -- and therefore young -- examples of that species fed exclusively on fish. This shows a difference in prey-preference in newborn ichthyosaurs."

Many early ichthyosaur specimens were found by the Victorian palaeontologist Mary Anning along the coast at Lyme Regis, Dorset. Ichthyosaurus is one of the most common Early Jurassic fossil reptiles in the UK.

The new specimen is from the collections of the Lapworth Museum of Geology, University of Birmingham. Palaeontologist Nigel Larkin, a research associate at the University of Cambridge, cleaned and studied the specimen in 2016 and recognised that it was important. The cleaning provided Dean with the opportunity to examine the fossil in detail.

Dean, who recently described the largest Ichthyosaurus on record, identified this specimen as a newborn Ichthyosaurus communis, based on the arrangement of bones in the skull. He added: "There are several small Ichthyosaurus specimens known, but most are incomplete or poorly preserved. This specimen is practically complete and is exceptional. It is the first newborn Ichthyosaurus communis to be found, which is surprising considering that the species was first described almost 200 years ago."

Unfortunately, no record of the specimen's location and age exists. However, with permission, Nigel removed some of the rock from around the skeleton. He passed this on to Ian Boomer (University of Birmingham) and Philip Copestake (Merlin Energy Resources Ltd) so that they could analyse the rock for microscopic fossils. Based on the types of microfossil preserved, they were able to identify that this ichthyosaur was around 199-196 million years old, from the Early Jurassic.

Nigel added, "Many historic ichthyosaur specimens in museums lack any geographic or geological details and are therefore undated. This process of looking for microfossils in their host rock might be the key to unlocking the mystery of many specimens. Thus, this will provide researchers with lots of new information that otherwise is lost. Of course, this requires some extensive research, but it is worth the effort."

Read more at Science Daily

Music Was Just Encoded on DNA and Retrieved for the First Time

Even the highest quality archival medium is no match for DNA.

To demonstrate this, researchers stored historic audio recordings on these molecules for the first time and then retrieved them with 100 percent accuracy. The experiment showed that DNA not only offers a place to save a dense package of information in a tiny space, but, because it can last for hundreds of years, it also reduces the risk that stored data will go out of date or degrade in the way that cassette tapes, compact discs, and even computer hard drives can.

“DNA is intrinsically and exquisitely a stable molecule,” Emily Leproust, CEO of the biotech firm Twist Bioscience, which works on DNA synthesis, told Seeker. Her company collaborated with Microsoft, the University of Washington, and the Montreux Jazz Digital Project on the DNA data feat.

The two performances they stored and retrieved, “Smoke on the Water” by Deep Purple and “Tutu” by Miles Davis, are the first DNA-saved files to be added to UNESCO’s Memory of the World Archive, a collection of audio and visual pieces of cultural significance. Both were performed at the Montreux Jazz Festival, an annual event in Switzerland.

Last week, the retrieved versions of each song were played for a different audience, this one at the ArtTech Forum in Lausanne, Switzerland, which promotes innovations at the intersection of science and culture. In digital form, the songs take up about 140 MB of hard drive space. In DNA form, they’re mere specks, much smaller than a grain of sand.

Leproust told Seeker that if all of the music from the Montreux Jazz Digital Project — six petabytes of digital data (the equivalent of six million gigabytes) — were saved to DNA, it would fit on a grain of rice.

Storing and retrieving files to and from DNA starts with the digital file. The researchers converted the binary code, the 1s and 0s of computer language, into the genetic code that makes up DNA, the A, C, T, and G nucleotide bases. For example, 00 could be turned into A, 10 could be turned into C, 01 could be turned into G, and 11 could be turned into T.

They then made synthetic segments of DNA by combining the As, Cs, Ts, and Gs in the sequences that represented the binary code. The short segments each contain about 12 bytes of data as well as a sequence number, which is also made of bases, to indicate the location of the specific data within the overall DNA file.

When this work was complete, they used conventional DNA sequencing technology to make sure that the genetic bases were in the correct order. Lastly, they decoded the As, Cs, Ts, and Gs and turned them back into digital 1s and 0s so that the data could be played like a contemporary music file.
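The round trip described above — bytes to bases and back — can be sketched in a few lines of Python. Note that the specific pairing of bit pairs to bases (00→A, 10→C, 01→G, 11→T) is the illustrative example from the text; the encode and decode functions below are a minimal sketch, not the actual scheme used by the Twist Bioscience and Microsoft team, which also adds sequence numbers and error correction.

```python
# Map each 2-bit pair to a nucleotide base, per the example mapping above.
BIT_TO_BASE = {"00": "A", "10": "C", "01": "G", "11": "T"}
BASE_TO_BIT = {base: bits for bits, base in BIT_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert raw bytes into a DNA base string, 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIT_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Convert a DNA base string back into the original bytes."""
    bits = "".join(BASE_TO_BIT[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

payload = b"Smoke on the Water"
strand = encode(payload)
assert decode(strand) == payload  # lossless round trip
print(strand[:16])                # first 16 bases of the encoded bytes
```

Each byte becomes exactly four bases, which is why the storage is so dense: the information lives in the chemical sequence itself rather than in magnetized or etched regions of a physical medium.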

“In principle, it doesn’t really matter what the file is,” Leproust noted. “A movie or video or PDF file — that’s the beauty of DNA. It’s universal.”

Read more at Seeker

Gene Therapy Shows Early Success in Treating Common Cause of Blindness

It might be possible to reverse a common cause of blindness, retinitis pigmentosa, with the use of gene therapy, according to a University of Oxford study.

The researchers found they could increase light sensitivity in the eyes of mice that have an inherited form of retinal degeneration, using a light-sensitive retinal protein called human melanopsin. They administered the melanopsin with a viral vector, a method for transferring therapeutic genes to modify cells or tissue. After one year, they observed that the mice that received the gene therapy were more aware of their surroundings than the untreated mice.

“Treated mice showed a number of visual responses including the ability to detect their environment based on visual information alone, whereas control mice were completely blind by this time point,” Samantha de Silva, lead study author and clinical research associate in medical sciences at Oxford, told Seeker. “We wouldn’t expect the mice to have the same level of vision as a completely normal mouse, but this would equate to a completely blind person being able to recognize their visual environment after treatment. It would be hugely beneficial in terms of navigation and quality of life.”

The findings were published in the journal Proceedings of the National Academy of Sciences.

The method is unlikely to become a cure-all for every form of blindness. For example, specific circuitry must remain intact in the retina for the treatment to work. But it is a big step toward treating retinal degeneration and toward developing future treatments for blindness.

“[R]etinal degenerations such as retinitis pigmentosa would be the ideal conditions to treat, and this would have significant impact since they are now the leading cause of blindness in the working age population,” De Silva said. “With future developments, we hope to use this approach to target further conditions.”

The research team is confident the treatment will work in humans because the viral vector that delivered the melanopsin in the mice is based on a design that has already been effective in clinical trials. But the results will vary depending on a person’s ability to interpret the new visual signals their brain receives.

Read more at Seeker