Dec 2, 2017

Skin pigmentation is far more genetically complex than previously thought

These South African individuals from one household exemplify the substantial skin pigmentation variability in the Khomani and Nama populations. Picture taken with consent for publication.
Many studies have suggested that the genetics of skin pigmentation are simple. A small number of known genes, it is thought, account for nearly 50 percent of pigment variation. However, these studies rely on datasets consisting almost entirely of information from northern Eurasian populations -- those that reside mostly in higher latitude regions.

Reporting in the November 30 issue of Cell, researchers from the Broad Institute of MIT and Harvard, Stanford University, and Stony Brook University show that while skin pigmentation is nearly 100 percent heritable, it is hardly a straightforward, Mendelian trait. By working closely with the KhoeSan, a group of populations indigenous to southern Africa, the researchers have found that the genetics of skin pigmentation become progressively more complex as populations reside closer to the equator, with an increasing number of genes -- known and unknown -- involved, each making a smaller overall contribution.

"Africa has the greatest amount of phenotypic variability in skin color, and yet it's been underrepresented in large scale endeavors," said Alicia Martin, a postdoctoral scientist in the lab of Broad Institute member Mark Daly. "There are some genes that are known to contribute to skin pigmentation, but by and large there are many more new genes that have not been discovered."

"We need to spend more time focusing on these understudied populations in order to gain deeper genetic insights," said Brenna Henn, assistant professor in the Department of Ecology and Evolution at Stony Brook University who, along with Martin, is a co-corresponding author.

The paper is the culmination of seven years of research spanning several institutions, starting with a collaboration between Stellenbosch University in South Africa and Carlos Bustamante's lab at Stanford University, where Martin and Henn trained. Martin, Henn, and their colleagues spent a great deal of time with the KhoeSan, interviewing individuals, taking anthropometric measurements (height, age, gender), and using a reflectometer to quantitatively measure skin color. In total, they accumulated data for approximately 400 individuals.

The researchers genotyped each sample -- looking at hundreds of thousands of sites across the genome to identify genetic markers linked with pigmentation measures -- and sequenced particular areas of interest. They took this information and compared it to a dataset that comprised nearly 5,000 individuals representing globally diverse populations throughout Africa, Asia, and Europe.

What they found offers a counter-narrative to the common view on pigmentation.

The prevailing theory is that "directional selection" pushes pigmentation in a single direction, from dark to light in high latitudes and from light to dark in lower latitudes. But Martin and Henn's data showed that the trajectory is more complex. Directional selection, as a guiding principle, seems to hold in far northern latitudes. But as populations move closer to the equator, a dynamic called "stabilizing selection" takes effect. Here, an increasing number of genes begins to influence variability. Only about 10 percent of this variation can be attributed to genes known to affect pigmentation.

In addition, the researchers found some unexpected insights into particular genes associated with pigmentation. A derived mutation in one gene, SLC24A5, is thought to have arisen in Europe roughly 10,000 to 20,000 years ago. However, in the KhoeSan populations it appears in a much higher frequency than recent European admixture alone would suggest, indicating that it has either been positively selected in this population, actually arose in this population, or entered the population through gene flow thousands of years ago. "We're still teasing this apart," said Martin.

They also found that variants near two genes, SMARCA2 and VLDLR, which had not previously been associated with pigmentation in humans, seem to play a role among the KhoeSan. Several different variants near these genes are uniquely associated with pigmentation in this population, and variants in these genes have been associated with pigmentation in animals.

"Southern African KhoeSan ancestry appears to neither lighten nor darken skin," said Martin. "Rather, it just increases variation. In fact, the KhoeSan are approximately fifty percent lighter than equatorial Africans. Ultimately, in northern latitudes pigmentation is more homogeneous, while in lower latitudes, it's more diverse -- both genetically and phenotypically."

"The full picture of the genetic architecture of skin pigmentation will not be complete unless we can represent diverse populations worldwide," said Henn.

Read more at Science Daily

Voyager 1 fires up thrusters after 37 years

The Voyager team is able to use a set of four backup thrusters, dormant since 1980. They are located on the back side of the spacecraft in this orientation.
If you tried to start a car that's been sitting in a garage for decades, you might not expect the engine to respond. But a set of thrusters aboard the Voyager 1 spacecraft successfully fired up Wednesday after 37 years without use.

Voyager 1, NASA's farthest and fastest spacecraft, is the only human-made object in interstellar space, the environment between the stars. The spacecraft, which has been flying for 40 years, relies on small devices called thrusters to orient itself so it can communicate with Earth. These thrusters fire in tiny pulses, or "puffs," lasting mere milliseconds, to subtly rotate the spacecraft so that its antenna points at our planet. Now, the Voyager team is able to use a set of four backup thrusters, dormant since 1980.

"With these thrusters that are still functional after 37 years without use, we will be able to extend the life of the Voyager 1 spacecraft by two to three years," said Suzanne Dodd, project manager for Voyager at NASA's Jet Propulsion Laboratory, Pasadena, California.

Since 2014, engineers have noticed that the thrusters Voyager 1 has been using to orient the spacecraft, called "attitude control thrusters," have been degrading. Over time, the thrusters require more puffs to give off the same amount of energy. At 13 billion miles from Earth, there's no mechanic shop nearby to get a tune-up.

The Voyager team assembled a group of propulsion experts at NASA's Jet Propulsion Laboratory, Pasadena, California, to study the problem. Chris Jones, Robert Shotwell, Carl Guernsey and Todd Barber analyzed options and predicted how the spacecraft would respond in different scenarios. They agreed on an unusual solution: Try giving the job of orientation to a set of thrusters that had been asleep for 37 years.

"The Voyager flight team dug up decades-old data and examined the software that was coded in an outdated assembler language, to make sure we could safely test the thrusters," said Jones, chief engineer at JPL.

In the early days of the mission, Voyager 1 flew by Jupiter, Saturn, and important moons of each. To accurately fly by and point the spacecraft's instruments at a smorgasbord of targets, engineers used "trajectory correction maneuver," or TCM, thrusters that are identical in size and functionality to the attitude control thrusters, and are located on the back side of the spacecraft. But because Voyager 1's last planetary encounter was Saturn, the Voyager team hadn't needed to use the TCM thrusters since November 8, 1980. Back then, the TCM thrusters were used in a more continuous firing mode; they had never been used in the brief bursts necessary to orient the spacecraft.

All of Voyager's thrusters were developed by Aerojet Rocketdyne. The same kind of thruster, called the MR-103, flew on other NASA spacecraft as well, such as Cassini and Dawn.

On Tuesday, Nov. 28, 2017, Voyager engineers fired up the four TCM thrusters for the first time in 37 years and tested their ability to orient the spacecraft using 10-millisecond pulses. The team waited eagerly as the test results traveled through space, taking 19 hours and 35 minutes to reach an antenna in Goldstone, California, that is part of NASA's Deep Space Network.
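As a sanity check, the quoted one-way signal travel time pins down the spacecraft's distance. A quick back-of-the-envelope calculation, using only figures from the article and the speed of light, recovers the roughly 13 billion miles mentioned earlier:

```python
# Cross-check: the 19 h 35 min one-way signal time implies Voyager 1's
# distance, since radio signals travel at the speed of light.
C_KM_S = 299_792.458             # speed of light, km/s (exact)
travel_s = (19 * 60 + 35) * 60   # 19 h 35 min in seconds

distance_km = C_KM_S * travel_s
distance_miles = distance_km / 1.609344

print(f"{distance_miles / 1e9:.1f} billion miles")  # prints "13.1 billion miles"
```

This agrees with the article's "13 billion miles from Earth" figure.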

Lo and behold, on Wednesday, Nov. 29, they learned the TCM thrusters worked perfectly -- and just as well as the attitude control thrusters.

"The Voyager team got more excited each time with each milestone in the thruster test. The mood was one of relief, joy and incredulity after witnessing these well-rested thrusters pick up the baton as if no time had passed at all," said Barber, a JPL propulsion engineer.

The plan going forward is to switch to the TCM thrusters in January. To make the change, Voyager has to turn on one heater per thruster, which requires power -- a limited resource for the aging mission. When there is no longer enough power to operate the heaters, the team will switch back to the attitude control thrusters.

The thruster test went so well, the team will likely do a similar test on the TCM thrusters for Voyager 2, the twin spacecraft of Voyager 1. The attitude control thrusters currently used for Voyager 2 are not yet as degraded as Voyager 1's, however.

Voyager 2 is also on course to enter interstellar space, likely within the next few years.

Read more at Science Daily

Dec 1, 2017

Fighting myocardial infarction with nanoparticle tandems

Via a cannula introduced into the infarction area, the cells loaded with magnetic nanoparticles are injected into the damaged heart muscle tissue of the mouse.
How can damaged cardiac tissue best be treated with replacement muscle cells after a heart attack? A research team led by the University of Bonn is now presenting an innovative method in mice: Muscle replacement cells, which are to take over the function of the damaged tissue, are loaded with magnetic nanoparticles. These nanoparticle-loaded cells are then injected into the damaged heart muscle and held in place by a magnet, causing the cells to engraft better onto the existing tissue. Using the animal model, the scientists show that this leads to a significant improvement in heart function. The journal Biomaterials presents the results online in advance of print publication.

In a heart attack, clots usually lead to persistent circulatory problems in parts of the heart muscle, which then cause heart muscle cells to die. Attempts have been made for some time to revitalize the damaged heart tissue with replacement cells. "However, most of the cells are pushed out of the puncture channel during the injection due to the pumping action of the beating heart," explains Prof. Dr. Wilhelm Röll from the Department of Cardiac Surgery at University Hospital Bonn. As a result, only a few of the replacement cells remain in the heart muscle, which limits the repair.

With an interdisciplinary team, Prof. Röll tested an innovative approach to ensure that the injected replacement cells remain in the desired location and engraft onto the heart tissue. The experiments were performed on mice that had previously suffered a heart attack. To better track the replacement cells, EGFP-expressing cardiac muscle cells obtained from fetal mouse hearts or derived from mouse stem cells were used. These fluorescent muscle cells were loaded with tiny magnetic nanoparticles and injected through a fine cannula into the damaged heart tissue of the mice.

In the magnetic field, the nanoparticle-loaded replacement cells remain in place

In some of the rodents treated this way, a magnet placed at a distance of a few millimeters from the surface of the heart ensured that a large part of the nanoparticle-loaded replacement cells remained at the desired location. "Without a magnet, about a quarter of the added cells remained in the heart tissue; with a magnet, about 60 percent of them stayed in place," reports Dr. Annika Ottersbach, who was a PhD student in Prof. Röll's team during the project. Ten minutes under the influence of the magnetic field were already sufficient to keep a significant proportion of nanoparticle-loaded muscle cells at the target site. Even days after the procedure, the injected cells remained in place and gradually attached themselves to the existing tissue.

"This is surprising, especially since the infarct tissue is relatively undersupplied due to poor perfusion," says Prof. Röll. Under the influence of the magnet, the replacement muscle cells did not die as frequently, engrafted better and multiplied more. The researchers investigated the reasons for the improved growth: It was found that these implanted heart muscle cells were packed more densely and could survive better thanks to the more intensive cell-cell interaction. Moreover, the gene activity of many survival functions, such as for cellular respiration, was higher than without a magnet in these replacement cells.

The researchers also demonstrated that cardiac function significantly improved in mice treated with nanoparticle-loaded muscle cells in combination with a magnet. "After two weeks, seven times as many replacement muscle cells survived, and after two months, four times as many compared to conventional implantation techniques," reports Prof. Röll. Given that mice live for a maximum of about two years, this is a surprisingly lasting effect.

Read more at Science Daily

New early gravity signals to quantify the magnitude of strong earthquakes

Seismograph
After an earthquake, the field of gravity is disturbed almost instantaneously, and this disturbance could be recorded before the seismic waves that seismologists usually analyze. In a study published in Science on December 1, 2017, a team of researchers from CNRS, IPGP, the Université Paris Diderot and Caltech has managed to observe these weak gravity-related signals and to understand where they come from. Because they are sensitive to the magnitude of earthquakes, these signals may play an important role in the early identification of a major earthquake.

This work came out of the interaction between seismologists who wanted to better understand earthquakes and physicists who were developing fine gravity measurements with a view to detecting gravitational waves. Earthquakes abruptly change the equilibrium of forces on Earth and emit seismic waves whose consequences may be devastating. But these same waves also disturb Earth's field of gravity, which emits a different signal. This is particularly interesting for fast quantification of earthquakes because the gravity signal moves at the speed of light, unlike seismic waves, which propagate at speeds between 3 and 10 km/s. A seismometer at a station located 1000 km from the epicenter may therefore potentially detect this signal more than two minutes before the seismic waves arrive.
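The size of that head start is simple to estimate. The sketch below uses only the station distance and the seismic wave speeds quoted in the article; the light-travel time of the gravity signal itself is negligible at these distances:

```python
# Head start of the light-speed gravity signal over seismic waves for a
# station 1000 km from the epicenter (distance and speeds from the article).
station_km = 1000.0
c_km_s = 299_792.458  # speed of light, km/s

for v in (3.0, 10.0):  # range of seismic wave speeds, km/s
    head_start = station_km / v - station_km / c_km_s
    print(f"{v:>4} km/s waves: gravity signal arrives {head_start:.0f} s earlier")
```

Even for the fastest waves considered, the gravity signal arrives well over a minute ahead; for slower waves, the advantage exceeds five minutes.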

The work presented here, which follows a 2016 study that demonstrated this signal for the first time, greatly advances our understanding of it. First, the scientists observed these signals in data from about ten seismometers located between 500 and 3000 km from the epicenter of the 2011 Japanese earthquake (magnitude 9.1). From their observations, the researchers then demonstrated that these signals were due to two effects. The first is the gravity change that occurs at the location of the seismometer, which changes the equilibrium position of the instrument's mass. The second effect, which is indirect, is due to the gravity change everywhere on Earth, which disturbs the equilibrium of forces and produces new seismic waves that reach the seismometer.

Taking account of these two effects, the researchers have shown that this gravity-related signal is very sensitive to the earthquake's magnitude, which makes it a good candidate for rapidly quantifying the magnitude of strong earthquakes. The future challenge is to manage to exploit this signal for magnitudes below about 8 to 8.5, because below this threshold, the signal is too weak relative to the seismic noise emitted naturally by Earth, and dissociating it from this noise is complicated. So several technologies, including some inspired from instruments developed to detect gravitational waves, are being envisaged to take a new step forward in detection of these precious signals.

From Science Daily

Hundreds of fossilized eggs shed light on pterosaur development

Hundreds of pterosaur bones lying on the surface, demonstrating the richness of these sites. This material relates to a paper that appeared in the 1 December 2017 issue of Science, published by AAAS. The paper, by X. Wang of the Chinese Academy of Sciences in Beijing, China, and colleagues, was titled "Egg accumulation with 3D embryos provides insight into the life history of a pterosaur."
An invaluable collection of more than 200 eggs is providing new insights into the development and nesting habits of pterosaurs.

To date, only a small handful of pterosaur eggs with a well-preserved 3-D structure and embryo inside have been found and analyzed -- three eggs from Argentina and five from China. This sparse sample size was dramatically increased with the discovery of 215 eggs of the pterosaur species Hamipterus tianshanensis at a Lower Cretaceous site in China.

Xiaolin Wang et al. used computed tomography scanning to peer inside the eggs, 16 of which contain embryonic remains of varying intactness. The most complete embryo contains a partial wing and cranial bones, including a complete lower jaw. The samples of thigh bones that remain intact are well-developed, suggesting that the species benefited from functional hind legs shortly after hatching.

However, the structure supporting the pectoral muscle appears to be underdeveloped during the embryonic stage, suggesting that newborns were likely not able to fly. Therefore, the authors propose that newborns likely needed some parental care. Based on growth marks, the team estimates one of the individuals to be at least 2 years old and still growing at the time of its death, supporting the growing body of evidence that pterosaurs had long incubation periods.

Lastly, the fact that a single collection of embryos exhibits a range of developmental stages hints that pterosaurs participated in colonial nesting behavior, the authors say. Denis Deeming discusses these findings in a related Perspective.

From Science Daily

Blowing in the stellar wind: Scientists reduce the chances of life on exoplanets in so-called habitable zones

Image of starlight on exoplanet.
Is there life beyond Earth in the cosmos? Astronomers looking for signs have found that our Milky Way galaxy teems with exoplanets, some with conditions that could be right for extraterrestrial life. Such worlds orbit stars in so-called "habitable zones," regions where planets could hold liquid water that is necessary for life as we know it.

However, the question of habitability is highly complex. Researchers led by space physicist Chuanfei Dong of the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University have recently raised doubts about water on -- and thus potential habitability of -- frequently cited exoplanets that orbit red dwarfs, the most common stars in the Milky Way.

Impact of stellar wind

In two papers in The Astrophysical Journal Letters, the scientists develop models showing that the stellar wind -- the constant outpouring of charged particles that sweep out into space -- could severely deplete the atmosphere of such planets over hundreds of millions of years, rendering them unable to host surface-based life as we know it.

"Traditional definitions and climate models of the habitable zone consider only the surface temperature," Dong said. "But the stellar wind can significantly contribute to the long-term erosion and atmospheric loss of many exoplanets, so the climate models tell only part of the story."

To broaden the picture, the first paper looks at the timescale of atmospheric retention on Proxima Centauri b (PCb), which orbits the nearest star to our solar system, some 4 light years away. The second paper questions how long oceans could survive on "water worlds" -- planets thought to have seas that could be hundreds of miles deep.

Two-fold effect

The research simulates the photochemical impact of starlight and the electromagnetic erosion by the stellar wind on the atmospheres of these exoplanets. The effects are two-fold: The photons in starlight ionize the atoms and molecules in the atmosphere into charged particles, allowing pressure and electromagnetic forces from the stellar wind to sweep them into space. This process could cause severe atmospheric losses that would prevent water evaporating from an exoplanet from raining back onto it, leaving the planet's surface to dry up.

On Proxima Centauri b, the model indicates that high stellar wind pressure would cause the atmosphere to escape and prevent atmosphere from lasting long enough to give rise to surface-based life as we know it. "The evolution of life takes billions of years," Dong noted. "Our results indicate that PCb and similar exoplanets are generally not capable of supporting an atmosphere over sufficiently long timescales when the stellar wind pressure is high."

"It is only if the pressure is sufficiently low," he said, "and if the exoplanet has a reasonably strong magnetic shield like that of the Earth's magnetosphere, that the exoplanet can retain an atmosphere and has the potential for habitability."

Evolution of habitable zone

Complicating matters is the fact that the habitable zone circling red stars could evolve over time. So high stellar wind pressure early on could increase the rate of atmospheric escape. Thus, the atmosphere could have eroded too soon, even if the exoplanet was protected by a strong magnetic field like the magnetosphere surrounding Earth, Dong said. "In addition, such close-in planets could also be tidally locked like our moon, with one side always exposed to the star. The resultant weak global magnetic field and the constant bombardment of stellar wind would serve to intensify losses of atmosphere on the star-facing side."

Turning to water worlds, the researchers explored three different stellar wind conditions:

  • Winds that strike the Earth's magnetosphere today.
  • Ancient stellar winds flowing from young, Sun-like stars that were just a toddler-like 0.6 billion years old compared with the 4.6 billion year age of the Sun.
  • The impact on exoplanets of a massive stellar storm like the Carrington event, which knocked out telegraph service and produced auroras around the world in 1859.

The simulations illustrated that ancient stellar wind could cause the rate of atmospheric escape to be far greater than losses produced by the current solar wind that reaches the magnetosphere of Earth. Moreover, the rate of loss for Carrington-type events, which are thought to occur frequently in young Sun-like stars, was found to be greater still.

"Our analysis suggests that such space weather events may prove to be a key driver of atmospheric losses for exoplanets orbiting an active young Sun-like star," the authors write.

High probability of dried-up oceans

Given the increased activity of red dwarf stars and the close-in location of planets in their habitable zones, these results indicate a high probability of dried-up surfaces on planets orbiting red stars, worlds that might once have held oceans capable of giving rise to life. The findings could also modify the famed Drake equation, which estimates the number of communicating civilizations in the Milky Way, by lowering the estimate for the average number of planets per star that can support life.
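Because the Drake equation is a simple product of factors, lowering the habitable-planets-per-star term scales the final estimate linearly. The sketch below makes that explicit; all parameter values are placeholders chosen purely for illustration, not figures from the study:

```python
# Illustrative Drake-equation sketch. Halving or quartering n_e
# (habitable planets per star) scales the estimate N by the same factor.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* . fp . ne . fl . fi . fc . L, the expected number of
    detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder values (star formation rate, fractions, civilization lifetime)
base    = drake(R_star=1.0, f_p=1.0, n_e=0.20, f_l=0.1, f_i=0.01, f_c=0.1, L=1e4)
reduced = drake(R_star=1.0, f_p=1.0, n_e=0.05, f_l=0.1, f_i=0.01, f_c=0.1, L=1e4)

print(f"N with n_e=0.20: {base:.2f}")
print(f"N with n_e=0.05: {reduced:.2f}")  # exactly one quarter of the above
```

The linearity means any revision to the habitability of red dwarf planets feeds straight through to the civilization count.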

Authors of the PCb paper note that predicting the habitability of planets located light years from Earth is of course filled with uncertainties. Future missions like the James Webb Space Telescope, which NASA will launch in 2019 to peer into the early history of the universe, will therefore "be essential for getting more information on stellar winds and exoplanet atmospheres," the authors say, "thereby paving the way for more accurate estimations of stellar-wind induced atmospheric losses."

Scientists spot potentially habitable worlds with regularity. Recently, a newly discovered Earth-sized planet orbiting Ross 128, a red dwarf star some 11 light years from Earth that is smaller and cooler than the sun, was cited as a water candidate. Scientists noted that the star appears to be quiescent and well-behaved, not throwing off flares and eruptions that could undo conditions favorable to life.

Read more at Science Daily

Humble sponges are our deepest ancestors: Dispute in evolutionary biology solved

Sea sponge.
New research led by the University of Bristol has resolved one of evolutionary biology's most heated debates, revealing that it is the morphologically simple sponges, rather than the anatomically complex comb jellies, that represent the oldest lineage of living animals.

Recent genomic analyses have "flip-flopped" between whether sponges or comb jellies are our deepest ancestors, leading experts to suggest available data might not have the power to resolve this specific problem.

However, new research led by the University of Bristol has identified the cause of this "flip-flop" effect, and in doing so, has revealed sponges are the most ancient lineage.

Professor Davide Pisani of Bristol's Schools of Biological and Earth Sciences led the study, published today in Current Biology, with colleagues from the California Institute of Technology (Caltech -- USA), Ludwig-Maximilians-Universität (LMU), Munich (Germany), and other institutes around the world, which analysed all key genomic datasets released between 2015 and 2017.

Commenting on the breakthrough research, Professor Pisani said:

"The fact is, hypotheses about whether sponges or comb jellies came first suggest entirely different evolutionary histories for key animal organ systems like the nervous and the digestive systems. Therefore, knowing the correct branching order at the root of the animal tree is fundamental to understanding our own evolution, and the origin of key features of the animal anatomy."

In the new study, Professor Pisani and colleagues used cutting edge statistical techniques (Posterior Predictive Analyses) to test whether the evolutionary models routinely used in phylogenetics can adequately describe the genomic datasets used to study early animal evolution. They found that, for the same dataset, models that can better describe the data favour sponges at the root of the animal tree, while models that drastically fail to describe the data favour the comb jellies.
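The study's posterior predictive analyses operate on phylogenetic sequence data, but the underlying logic can be illustrated with a toy example: draw replicated datasets from a fitted model's posterior predictive distribution and check whether they reproduce a summary statistic of the observed data. The model, data, and dispersion statistic below are illustrative stand-ins, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed "data": counts that are overdispersed relative to a Poisson
data = rng.negative_binomial(n=2, p=0.2, size=200)

# Fit a Poisson model with a conjugate Gamma(a, b) prior on the rate;
# the posterior is Gamma(a + sum(data), b + len(data)).
a, b = 1.0, 1.0
post_a, post_b = a + data.sum(), b + len(data)

# Draw replicated datasets from the posterior predictive distribution
reps = 1000
T_rep = np.empty(reps)
for i in range(reps):
    lam = rng.gamma(post_a, 1.0 / post_b)      # posterior draw of the rate
    y_rep = rng.poisson(lam, size=len(data))   # replicated dataset
    T_rep[i] = y_rep.var() / y_rep.mean()      # dispersion statistic

T_obs = data.var() / data.mean()
p_value = (T_rep >= T_obs).mean()
print(f"posterior predictive p-value: {p_value:.3f}")
# A p-value near zero means replicated data never look as dispersed as
# the real data: the model "fails to describe the data".
```

In the same spirit, the study asks whether each evolutionary model can regenerate key properties of the genomic datasets, and gives weight to the trees favored by the models that can.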

Dr Feuda from Caltech continued: "Our results offer a simple explanation to the 'flip-flop effect' cogently discussed by Professor David Hillis in a recent interview in Nature."

Dr Dohrmann from LMU added: "Our results rationalise this effect and illustrate how you can draw robust conclusions from flip-flopping datasets."

Professor Gert Wörheide of LMU said: "Indeed, a flip-flopping dataset is a dataset that supports different evolutionary histories or phylogenetic trees, when analysed using different evolutionary models.

Discriminating between alternative hypotheses in the face of a flip-flopping dataset requires clarifying how good the models are that support alternative phylogenetic trees. Posterior Predictive Analyses allow us to do exactly that. We found that models which describe the data poorly invariably identify the comb jellies at the root of the tree. Models that better describe the data invariably find the sponges in that position."

Read more at Science Daily

Nov 30, 2017

Copy of 'Jesus' secret revelations to his brother' discovered by biblical scholars

A piece of the Coptic translation of the First Apocalypse of James from the Nag Hammadi Codex V.
The first-known original Greek copy of a heretical Christian writing describing Jesus' secret teachings to his brother James has been discovered at Oxford University by biblical scholars at The University of Texas at Austin.

To date, only a small number of texts from the Nag Hammadi library -- a collection of 13 Coptic Gnostic books discovered in 1945 in Upper Egypt -- have been found in Greek, their original language of composition. But earlier this year, UT Austin religious studies scholars Geoffrey Smith and Brent Landau added to the list with their discovery of several fifth- or sixth-century Greek fragments of the First Apocalypse of James, which was thought to have been preserved only in its Coptic translations until now.

"To say that we were excited once we realized what we'd found is an understatement," said Smith, an assistant professor of religious studies. "We never suspected that Greek fragments of the First Apocalypse of James survived from antiquity. But there they were, right in front of us."

The ancient narrative describes the secret teachings of Jesus to his brother James, in which Jesus reveals information about the heavenly realm and future events, including James' inevitable death.

"The text supplements the biblical account of Jesus' life and ministry by allowing us access to conversations that purportedly took place between Jesus and his brother, James -- secret teachings that allowed James to be a good teacher after Jesus' death," Smith said.

Such apocryphal writings, Smith said, would have fallen outside the canonical boundaries set by Athanasius, Bishop of Alexandria, in his "Easter letter of 367" that defined the 27-book New Testament: "No one may add to them, and nothing may be taken away from them."

With its neat, uniform handwriting and words separated into syllables, the original manuscript was probably a teacher's model used to help students learn to read and write, Smith and Landau said.

"The scribe has divided most of the text into syllables by using mid-dots. Such divisions are very uncommon in ancient manuscripts, but they do show up frequently in manuscripts that were used in educational contexts," said Landau, a lecturer in the UT Austin Department of Religious Studies.

The teacher who produced this manuscript must have "had a particular affinity for the text," Landau said. It does not appear to be a brief excerpt from the text, as was common in school exercises, but rather a complete copy of this forbidden ancient writing.

Read more at Science Daily

Gravitational waves could shed light on the origin of black holes

Black hole artist's concept.
A new study published in Physical Review Letters outlines how scientists could use gravitational wave experiments to test the existence of primordial black holes, gravity wells formed just moments after the Big Bang that some scientists have posited could be an explanation for dark matter.

"We know very well that black holes can be formed by the collapse of large stars, or as we have seen recently, the merger of two neutron stars," said Savvas Koushiappas, an associate professor of physics at Brown University and coauthor of the study with Avi Loeb from Harvard University. "But it's been hypothesized that there could be black holes that formed in the very early universe before stars existed at all. That's what we're addressing with this work."

The idea is that shortly after the Big Bang, quantum mechanical fluctuations led to the density distribution of matter that we observe today in the expanding universe. It's been suggested that some of those density fluctuations might have been large enough to result in black holes peppered throughout the universe. These so-called primordial black holes were first proposed in the early 1970s by Stephen Hawking and collaborators but have never been detected -- it's still not clear if they exist at all.

The ability to detect gravitational waves, as demonstrated recently by the Laser Interferometer Gravitational-Wave Observatory (LIGO), has the potential to shed new light on the issue. Such experiments detect ripples in the fabric of spacetime associated with giant astronomical events like the collision of two black holes. LIGO has already detected several black hole mergers, and future experiments will be able to detect events that happened much further back in time.

"The idea is very simple," Koushiappas said. "With future gravitational wave experiments, we'll be able to look back to a time before the formation of the first stars. So if we see black hole merger events before stars existed, then we'll know that those black holes are not of stellar origin."

Cosmologists measure how far back in time an event occurred using redshift -- the stretching of the wavelength of light associated with the expansion of the universe. Events further back in time are associated with larger redshifts. For this study, Koushiappas and Loeb calculated the redshift at which black hole mergers should no longer be detected assuming only stellar origin.

They show that at a redshift of 40, which equates to about 65 million years after the Big Bang, merger events should be detected at a rate of no more than one per year, assuming stellar origin. At redshifts greater than 40, events should disappear altogether.
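That figure can be reproduced with a standard back-of-envelope calculation. The sketch below uses the matter-dominated approximation for cosmic age at high redshift, with assumed Planck-like values for the Hubble constant and matter density; the exact cosmological parameters used by Koushiappas and Loeb may differ.

```python
import math

# Assumed Planck-like cosmological parameters (illustrative, not the paper's exact values)
H0 = 67.7          # Hubble constant, km/s/Mpc
OMEGA_M = 0.31     # matter density parameter

# Convert H0 to units of 1/year: 1 Mpc = 3.086e19 km, 1 year = 3.156e7 s
H0_PER_YR = H0 / 3.086e19 * 3.156e7

def age_at_redshift(z):
    """Cosmic age (in years) at redshift z in the matter-dominated
    approximation, t(z) = (2/3) H0^-1 Omega_m^-1/2 (1+z)^-3/2,
    which is accurate at the high redshifts relevant here."""
    return (2.0 / 3.0) / (H0_PER_YR * math.sqrt(OMEGA_M)) * (1 + z) ** -1.5

print(f"Age at z = 40: {age_at_redshift(40) / 1e6:.0f} million years")
```

With these assumptions, a redshift of 40 comes out to roughly 66 million years after the Big Bang, in line with the figure quoted above; at larger redshifts the age shrinks as (1+z)^-3/2, which is why merger events beyond that point would be so hard to explain with stellar-origin black holes.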

"That's really the drop-dead point," Koushiappas said. "In reality, we expect merger events to stop well before that point, but a redshift of 40 or so is the absolute hardest bound or cutoff point."

A redshift of 40 should be within reach of several proposed gravitational wave experiments. And if they detect merger events beyond that, it means one of two things, Koushiappas and Loeb say: Either primordial black holes exist, or the early universe evolved in a way that's very different from the standard cosmological model. Either would be very important discoveries, the researchers say.

For example, primordial black holes fall into a category of entities known as MACHOs, or Massive Compact Halo Objects. Some scientists have proposed that dark matter -- the unseen stuff that is thought to comprise most of the mass of the universe -- may be made of MACHOs in the form of primordial black holes. A detection of primordial black holes would bolster that idea, while a non-detection would cast doubt upon it.

The only other possible explanation for black hole mergers at redshifts greater than 40 is that the universe is "non-Gaussian." In the standard cosmological model, matter fluctuations in the early universe are described by a Gaussian probability distribution. A merger detection could mean matter fluctuations deviate from a Gaussian distribution.

"Evidence for non-Gaussianity would require new physics to explain the origin of these fluctuations, which would be a big deal," Loeb said.

Read more at Science Daily

New software can verify someone's identity by their DNA in minutes

Researcher Sophie Zaaijer uses the MinION, a portable DNA sequencer, to get a quick genetic readout of a sample of cells.
In the science-fiction movie Gattaca, visitors only clear security if a blood test and readout of their genetic profile matches the sample on file. Now, cheap DNA sequencers and custom software could make real-time DNA-authentication a reality.

Researchers at Columbia University and the New York Genome Center have developed a method to quickly and accurately identify people and cell lines from their DNA. The technology could have multiple applications, from identifying victims in a mass disaster to analyzing crime scenes. But its most immediate use could be to flag mislabeled or contaminated cell lines in cancer experiments, a major reason that studies are later invalidated. The discovery is described in the latest issue of eLife.

"Our method opens up new ways to use off-the-shelf technology to benefit society," said the study's senior author Yaniv Erlich, a computer science professor at Columbia Engineering, an adjunct core member at NYGC, and a member of Columbia's Data Science Institute. "We're especially excited about the potential to improve cell-authentication in cancer research and potentially speed up the discovery of new treatments."

The software is designed to run on the MinION, an instrument the size of a credit card that pulls in strands of DNA through its microscopic pores and reads out sequences of nucleotides, or the DNA letters A, T, C, G. The device has made it possible for researchers to study bacteria and viruses in the field, but its high error-rate and large sequencing gaps have, until now, limited its use on human cells with their billions of nucleotides.

In an innovative two-step process, the researchers outline a new way to use the $1,000 MinION and the abundance of human genetic data now online to validate the identity of people and cells by their DNA with near-perfect accuracy. First, they use the MinION to sequence random strings of DNA, from which they select individual variants, which are nucleotides that vary from person to person and make them unique. Then, they use a Bayesian algorithm to randomly compare this mix of variants with corresponding variants in other genetic profiles on file. With each cross-check, the algorithm updates the likelihood of finding a match, rapidly narrowing the search.
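The update step can be illustrated with a toy sketch. Everything below is hypothetical: the profiles, the flat 15 percent per-variant error rate, and the simple likelihood model all stand in for the published algorithm, which handles sequencing errors and database structure far more carefully.

```python
# Toy sketch of sequential Bayesian identity matching (illustrative only;
# the authors' actual error model and database handling differ).
ERROR_RATE = 0.15  # assumed per-variant read error for a noisy sequencer

def match_probability(observed, profiles):
    """Cross-check observed (site, allele) variants against each candidate
    profile, updating a posterior over candidates after every variant."""
    posterior = {name: 1.0 / len(profiles) for name in profiles}
    for site, allele in observed:
        for name, profile in profiles.items():
            agrees = profile.get(site) == allele
            likelihood = (1 - ERROR_RATE) if agrees else ERROR_RATE
            posterior[name] *= likelihood
        total = sum(posterior.values())            # renormalize so the
        posterior = {n: p / total for n, p in posterior.items()}  # posteriors sum to 1
    return posterior

# Demo with two hypothetical 100-variant profiles and error-free reads
profiles = {
    "alice": {i: i % 2 for i in range(100)},
    "bob":   {i: (i + 1) % 2 for i in range(100)},
}
reads = [(i, i % 2) for i in range(60)]  # 60 variants consistent with "alice"
posterior = match_probability(reads, profiles)
```

In this toy run the posterior for the matching profile climbs toward 1 after a few dozen variants, mirroring how each cross-check rapidly narrows the search in the study.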

Tests show the method can validate an individual's identity after cross-checking between 60 and 300 variants, the researchers report. Within minutes, it verified the identity of the study's lead author, Sophie Zaaijer, a former member of NYGC and now a postdoctoral researcher at Cornell Tech.

To do this, the MinION matched the readout of Zaaijer's genome, gleaned from a sample of cheek cells, with a reference profile stored among 31,000 other genomes on the public database, DNA.land. Erlich's identity was verified the same way, with initial sequencing done by Columbia students in the Ubiquitous Genomics class he and Zaaijer taught in 2015.

They call their re-identification technique 'MinION sketching,' which Zaaijer compares to the brain's ability to make out a bird from a few telling features in an abstract Picasso line-drawing. The MinION's genetic 'sketch' of a cell sample is compared to a growing database of sketches -- similarly incomplete genetic profiles produced by at-home DNA test kits like 23andMe and donated to science by consumers.

"Using our method, one needs only a few DNA reads to infer a match to an individual in the database," says Zaaijer.

The most promising use for 'MinION sketching' may be as a cheap cell-authentication tool in experimental research, say scientists familiar with its capabilities. In the study, the researchers quickly matched a strain of leukemic cells sequenced by the MinION against a reference file in the Cancer Cell Line Encyclopedia database. When they deliberately contaminated the cells with other cultures, the method correctly rejected a match once contamination levels climbed above 25 percent.

The use of misidentified or contaminated cell lines in medical research is blamed for as much as a third of the estimated $28 billion spent each year on studies that can't be replicated, according to one recent study. In a 2014 essay in Science, the director of the National Institute of General Medical Sciences, Jon Lorsch, called for new policies and technologies to address the problem.

Lacking the expensive machinery needed to validate cell lines on their own, most researchers either skip validation or ship their cultures to specialized labs, which can delay important findings and treatments. If an easier alternative were available, most researchers would use it, says Neville Sanjana, a core faculty member at NYGC and assistant professor at NYU's Department of Biology who works on skin and lung cancer cell lines and was not involved in the study.

Read more at Science Daily

Prehistoric women had stronger arms than today's elite rowing crews

Cambridge University Women's Boat Club openweight crew rowing during the 2017 Boat Race on the river Thames in London. The Cambridge women's crew beat Oxford in the race. The members of this crew were among those analysed in the study.
A new study comparing the bones of Central European women who lived during the first 6,000 years of farming with those of modern athletes has shown that the average prehistoric agricultural woman had stronger upper arms than living female rowing champions.

Researchers from the University of Cambridge's Department of Archaeology say this physical prowess was likely obtained through tilling soil and harvesting crops by hand, as well as the grinding of grain for as much as five hours a day to make flour.

Until now, bioarchaeological investigations of past behaviour have interpreted women's bones solely through direct comparison to those of men. However, male bones respond to strain in a more visibly dramatic way than female bones.

The Cambridge scientists say this has resulted in the systematic underestimation of the nature and scale of the physical demands borne by women in prehistory.

"This is the first study to actually compare prehistoric female bones to those of living women," said Dr Alison Macintosh, lead author of the study published today in the journal Science Advances.

"By interpreting women's bones in a female-specific context we can start to see how intensive, variable and laborious their behaviours were, hinting at a hidden history of women's work over thousands of years."

The study, part of the European Research Council-funded ADaPt (Adaption, Dispersals and Phenotype) Project, used a small CT scanner in Cambridge's PAVE laboratory to analyse the arm (humerus) and leg (tibia) bones of living women who engage in a range of physical activity: from runners, rowers and footballers to those with more sedentary lifestyles.

The bone strengths of modern women were compared to those of women from the early Neolithic agricultural era through to farming communities of the Middle Ages.

"It can be easy to forget that bone is a living tissue, one that responds to the rigours we put our bodies through. Physical impact and muscle activity both put strain on bone, called loading. The bone reacts by changing in shape, curvature, thickness and density over time to accommodate repeated strain," said Macintosh.

"By analysing the bone characteristics of living people whose regular physical exertion is known, and comparing them to the characteristics of ancient bones, we can start to interpret the kinds of labour our ancestors were performing in prehistory."

Over three weeks during trial season, Macintosh scanned the limb bones of the Open- and Lightweight squads of the Cambridge University Women's Boat Club, who ended up winning this year's Boat Race and breaking the course record. These women, most in their early twenties, were training twice a day and rowing an average of 120km a week at the time.

The Neolithic women analysed in the study (from 7400-7000 years ago) had similar leg bone strength to modern rowers, but their arm bones were 11-16% stronger for their size than the rowers, and almost 30% stronger than typical Cambridge students.

The loading of the upper limbs was even more dominant in the study's Bronze Age women (from 4300-3500 years ago), who had 9-13% stronger arm bones than the rowers but 12% weaker leg bones.

A possible explanation for this fierce arm strength is the grinding of grain. "We can't say specifically what behaviours were causing the bone loading we found. However, a major activity in early agriculture was converting grain into flour, and this was likely performed by women," said Macintosh.

"For millennia, grain would have been ground by hand between two large stones called a saddle quern. In the few remaining societies that still use saddle querns, women grind grain for up to five hours a day.

"The repetitive arm action of grinding these stones together for hours may have loaded women's arm bones in a similar way to the laborious back-and-forth motion of rowing."

However, Macintosh suspects that women's labour was unlikely to have been limited to this one behaviour.

"Prior to the invention of the plough, subsistence farming involved manually planting, tilling and harvesting all crops," said Macintosh. "Women were also likely to have been fetching food and water for domestic livestock, processing milk and meat, and converting hides and wool into textiles.

"The variation in bone loading found in prehistoric women suggests that a wide range of behaviours were occurring during early agriculture. In fact, we believe it may be the wide variety of women's work that in part makes it so difficult to identify signatures of any one specific behaviour from their bones."

Read more at Science Daily

Nov 29, 2017

Traces of life on nearest exoplanets may be hidden in equatorial trap

This is an artist's impression of TRAPPIST 1d (right) and its host star TRAPPIST 1 (left). The new research shows how planets like this could hide traces of life from astronomers' observations.
New simulations show that the search for life on other planets may well be more difficult than previously assumed, according to research published today in the journal Monthly Notices of the Royal Astronomical Society. The study indicates that unusual air flow patterns could hide atmospheric components from telescopic observations, with direct consequences for formulating the optimal strategy for searching for (oxygen-producing) life such as bacteria or plants on exoplanets.

Current hopes of detecting life on planets outside of our own Solar System rest on examining the planet's atmosphere to identify chemical compounds that may be produced by living beings. Ozone -- a form of oxygen -- is one such molecule, and is seen as one of the possible tracers that may allow us to detect life on another planet from afar.

In Earth's atmosphere, this compound forms the ozone layer that protects us from the Sun's harmful UV radiation. On an alien planet, ozone could be one piece in the puzzle that indicates the presence of oxygen-producing bacteria or plants.

But now researchers, led by Ludmila Carone of the Max Planck Institute for Astronomy in Germany, have found that these tracers might be better hidden than we previously thought. Carone and her team considered some of the nearest exoplanets that have the potential to be Earth-like: Proxima b, which is orbiting the star nearest to the Sun (Proxima Centauri), and the most promising of the TRAPPIST-1 family of planets, TRAPPIST-1d.

These are examples of planets that orbit their host star in 25 days or fewer, and as a side effect have one side permanently facing their star, and the other side permanently facing away. Modelling the flow of air within the atmospheres of these planets, Carone and her colleagues found that this unusual day-night divide can have a marked effect on the distribution of ozone across the atmosphere: at least for these planets, the major air flow may lead from the poles to the equator, systematically trapping the ozone in the equatorial region.

Carone says: "Absence of traces of ozone in future observations does not have to mean there is no oxygen at all. It might be found in different places than on Earth, or it might be very well hidden."

Such unexpected atmospheric structures may also have consequences for habitability, given that most of the planet would not be protected against ultraviolet (UV) radiation. "In principle, an exoplanet with an ozone layer that covers only the equatorial region may still be habitable," Carone explains. "Proxima b and TRAPPIST-1d orbit red dwarfs, reddish stars that emit very little harmful UV light to begin with. On the other hand, these stars can be very temperamental, and prone to violent outbursts of harmful radiation including UV."

Read more at Science Daily

Wound healing or regeneration -- the environment decides?

This is a microscopic view of half of a comb jelly larva.
For humans, the loss of limbs is almost always an irreversible catastrophe. Many animals, however, are not only able to heal wounds but even to replace whole body parts. Biologists have now been able to prove for the first time that comb jellyfish can switch between two completely different self-healing processes depending on the environmental conditions.

It may be a bit macabre, but surely most people at some point in their childhood have watched, fascinated, as an earthworm cut in two apparently lived on, unimpressed by the severe wound. For humans, the loss of limbs is a severe problem that can only be treated -- if at all -- by complex surgery. Among animals, however, there are numerous examples of amazing self-healing mechanisms, especially among invertebrates. How these regeneration mechanisms function genetically and biochemically is one of the most exciting research questions in developmental biology, but also in medicine.

A team of biologists from the GEOMAR Helmholtz Centre for Ocean Research Kiel, the Norwegian University of Science and Technology (NTNU) and the University of Florida has now been able to demonstrate with the comb jellyfish Mnemiopsis leidyi that, at least in this species, the mechanism of regeneration can change depending on the environmental conditions. The study has been published in the Nature Publishing Group journal Scientific Reports. "Jellyfish are perfect candidates for this kind of research because they hold a key position at the phylogenetic base of the metazoan tree," says first author Katharina Bading, a former Master's student at GEOMAR and now a PhD student at NTNU in Norway.

Serious injuries to comb jellyfish and their larvae can have various causes: mechanical stress in rough seas, for example, or attacks by predators. Depending on the season and the area they live in, the jellyfish have to regenerate in an environment with either ample or scarce nutrients. "Whether and how the jellyfish react to these differences was our question," says Dr. Jamileh Javidpour from GEOMAR, corresponding author of the study.

Comb-jellyfish larvae that lived in a nutrient-rich environment were able to completely restore their bodies. Larvae that had to cope with fewer nutrients also survived and were able to heal their injuries, but could not fully regenerate their bodies. "Apparently, comb jellyfish larvae are able to activate two fundamentally different regeneration processes, depending on the external circumstances," explains Dr. Javidpour. "If the circumstances do not allow a complete recovery, a simpler process can at least secure their survival."

For the researchers from Kiel, the discovery is interesting because they investigate the pathways and success of invasive species. Mnemiopsis leidyi was most likely introduced to the Black Sea and the Baltic Sea through ballast water of ships from North America. "In the pumping operations, the jellyfish are mechanically heavily stressed. A flexible self-regenerating process can be an advantage. However, this aspect has hardly been considered so far," Katharina Bading points out.

Read more at Science Daily

Jena Experiment: Loss of species destroys ecosystems

Due to its breadth, the Jena experiment proves for the first time that a loss of biodiversity has negative consequences for many individual components and processes in ecosystems.
How serious is the loss of species globally? Are material cycles in an ecosystem with few species changed? In order to find this out, the "Jena Experiment" was established in 2002, one of the largest biodiversity experiments worldwide. Professor Wolfgang Weisser from the Technical University of Munich (TUM) reports on two unexpected findings of the long-term study: Biodiversity influences almost half the processes in the ecosystem, and intensive grassland management does not result in higher yields than high biodiversity.

An ecosystem provides humans with natural "services," such as the fertility of the soil, the quality of the groundwater, the production of food, and pollination by insects, which is essential for many fruits. Hence, intact ecosystems are crucial for the survival of all living things. What functional significance therefore does the extinction of species have? Can the global loss of species ultimately lead to the poorer "functioning" of ecosystems?

Professor Weisser from the Chair for Terrestrial Ecology at the TUM has summarized the findings of the long-term project "Jena Experiment," which the Friedrich Schiller University Jena has managed since its inception, in a 70-page article in the journal Basic and Applied Ecology. He was the speaker of the interdisciplinary research consortium until 2015.

"One unique aspect of the Jena Experiment is the fact that we performed our experiments and analyses over 15 years," explains Prof. Weisser. "Because the influence of biodiversity is only visible after a delay, we were only able to observe certain effects from 2006 or 2007 onwards -- i.e. four or five years after the beginning of the project." If a habitat is destroyed due to human intervention, a species usually does not go extinct immediately, but instead some time later. According to these findings, this extinction then has a delayed effect on the material cycles.

Correspondingly, the effects of biodiversity became more pronounced over time in the Jena Experiment: In species-rich communities, positive effects such as carbon storage in the soil, microbial respiration, and the development of soil fauna emerged only gradually. The negative effects of monoculture likewise became visible only later on. "This means that the negative effects of current species extinctions will only become fully perceptible in a few years," warns Weisser.

Farmers are not more successful than nature

Interdisciplinary working groups from Germany, Austria, Switzerland, and the Netherlands took 80,000 measurements. In more than 500 test plots, they planted varying numbers of plant species, from monocultures to mixtures of 60 species. In addition to plants, all other organisms occurring in the ecosystem were also examined -- in and above the ground. Soil scientists also investigated the material cycles for carbon, nitrogen, and nitrate, as well as the water cycle, over the entire 15-year period.

By doing so, the researchers could show how the diversity of species affected the capacity of the ground to absorb, store, or release water. "No other experiment to date has examined the nutrient cycle with such rigor," says Prof. Wolfgang W. Wilcke from the Institute of Geoecology at KIT in Karlsruhe. The Jena Experiment demonstrated for the first time the extent to which, for example, the nitrogen cycle of a given piece of land depends on a wide range of factors, such as species diversity, microbial organisms, the water cycle, and plant interactions.

Among other things, the findings led to the following conclusions:

  • High-diversity meadows had a higher productivity than low-diversity meadows over the entire period of the Jena Experiment. Increased cultivation intensity via additional fertilization and more frequent mowing achieved the same effect: When a farmer promotes certain species and fertilizes, he is on average not any more successful than mother nature.
  • The energy of the biomass (bioenergy content) from high-diversity meadows was significantly higher than that from low-diversity meadows, but at the same time similar to that of many of today's highly subsidized species, such as miscanthus.

Better ecosystem services through biodiversity

  • High-diversity areas achieved better carbon storage.
  • The number of insects and other species was significantly higher.
  • Reciprocal interactions between species such as pollination took place more frequently.
  • Higher-diversity meadows transported surface water into the soil better.
  • High-diversity ecosystems were more stable in the case of disruptions such as droughts or floods than low-diversity ecosystems.

Due to its breadth, the Jena Experiment proves for the very first time that a loss of biodiversity results in negative consequences for many individual components and processes in ecosystems. Hence, the worldwide loss of species not only means that part of the evolutionary legacy of the Earth is being irrecoverably lost and that humans are failing in their duty of care towards other creatures; it will also have direct, unpleasant consequences for mankind. Among other things, the loss of species affects material cycles -- which in turn have a direct influence on the water supply, the source of all life.

Read more at Science Daily

This Is the Deepest View Into the Universe That Has Ever Been Seen

This color image shows the Hubble Ultra Deep Field region, a tiny but much-studied region in the constellation of Fornax, as observed with the MUSE instrument on ESO's Very Large Telescope. But this picture only gives a very partial view of the riches of the MUSE data, which also provide a spectrum for each pixel in the picture. This data set has allowed astronomers not only to measure distances for far more of these galaxies than before -- a total of 1,600 -- but also to find out much more about each of them. Surprisingly, 72 new galaxies were found that had eluded deep imaging with the NASA/ESA Hubble Space Telescope.
The Hubble Ultra Deep Field might be one of the most famous images in astronomy. The deepest ever high-resolution observations by the venerable Hubble Space Telescope were published in 2004, and in a seemingly empty patch of sky, Hubble revealed nearly 10,000 galaxies, with some of them more than 13 billion light-years from Earth.

Now, astronomers with the Very Large Telescope in Chile have taken a new look at the Hubble Ultra Deep Field (HUDF) region, staring even deeper than Hubble to reveal 72 new galaxies, as well as measuring the distances and properties of 1,600 very faint galaxies.

The team of over 50 astronomers says that the new observations using the MUSE (Multi Unit Spectroscopic Explorer) instrument have provided new insights on star formation in the early universe by allowing them to study the motions and other properties of early galaxies. The new studies also show how ground-based observations can fully contribute to our understanding of the cosmos.

“MUSE has the unique ability to extract information about some of the earliest galaxies in the Universe — even in a part of the sky that is already very well studied,” said Jarle Brinchmann in a statement.

Brinchmann, an astronomer at the University of Leiden in the Netherlands and the Institute of Astrophysics and Space Sciences at CAUP in Porto, Portugal, is the lead author of one of 10 papers describing results from this survey.

“We learn things about these galaxies that is only possible with spectroscopy," he said, “such as chemical content and internal motions — not galaxy by galaxy but all at once for all the galaxies!”

This new data set contains the deepest spectroscopic observations ever made. Spectroscopy analyzes an object's light and allows astronomers to infer the physical properties of that object, such as temperature, mass, luminosity, composition, and even velocity.

MUSE’s spectroscopic information was measured for 1,600 galaxies, which is 10 times more than what has been “painstakingly obtained in this field over the last decade by ground-based telescopes,” as the team put it in one of their papers.

“MUSE can do something that Hubble can’t,” said the principal investigator of the team, Roland Bacon, from the Lyon Astrophysical Research Center in France, in a statement. “It splits up the light from every point in the image into its component colors to create a spectrum. This allows us to measure the distance, colors and other properties of all the galaxies we can see — including some that are invisible to Hubble itself.”

This compound image shows the Hubble Ultra Deep Field region and highlights in blue the glowing haloes of gas around many distant galaxies discovered using the MUSE instrument on ESO's Very Large Telescope in Chile. The discovery of so many huge haloes, which radiate ultraviolet Lyman-alpha radiation, around many distant galaxies is one of the many results coming out of this very deep spectroscopic survey.
The HUDF region is in the constellation Fornax and is only a tenth of the size of the full moon. The astronomers specifically chose this field of view for their study, Brinchmann told Seeker.

“This is the best-studied spot on the sky,” he remarked, “thus we knew there was a lot of supporting data that would help us interpret our very deep observations.”

Brinchmann said being able to use the Hubble data along with the new data the team acquired was key in understanding their observations.

Hubble data “was very important for much of the analysis as it helped us disentangle objects that were blurred together by the Earth’s atmosphere,” he said via email. “In fact, in many ways it was a very mutually beneficial process — the value of the HST data are significantly enhanced by the information provided by MUSE, and without the HST data the MUSE results would be much less easy to interpret.”

The original HUDF image required 800 exposures taken over the course of 400 Hubble orbits around Earth, with a total exposure time of 11.3 days between Sept. 24, 2003 and Jan. 16, 2004. The new observations by MUSE were taken over the course of two years with a total of 137 hours of telescope time.

MUSE was able to detect galaxies 100 times fainter than in previous surveys, seeing galaxies of various ages, sizes, shapes, and colors. The smallest, reddest galaxies may be among the most distant known, existing when the universe was just 800 million years old. The larger, brighter, well-defined spirals and elliptical galaxies were active about a billion years ago. The new observations provide additional information about galaxy formation and evolution across time.

Additionally, the team found luminous hydrogen halos around galaxies in the early universe, which provides new information about how material flows in and out of early galaxies.

The 72 newly found galaxies are ones that shine only in Lyman-alpha light, a form of ultraviolet light that usually indicates extremely distant objects. This is perplexing because galaxies that shine in only one form of light have not been seen before. These galaxies are ripe for further study.

Brinchmann and his colleagues weren’t expecting to find new galaxies in their observations.

“We were surprised,” he said. “Finding new galaxies is in itself not so exciting — we find loads everywhere we look if no-one has looked there before. But this was the best-studied part of the sky, with the deepest images that have ever been obtained, and it was a real surprise that we could find new galaxies that were not visible in these ultra-deep images from Hubble. We had not quite expected to go deeper than Hubble, forced as we were to peer through the Earth’s atmosphere.”

Read more at Seeker

Nov 28, 2017

Archaeologist says fire, not corn, key to prehistoric survival in arid Southwest

UC professor Alan Sullivan's research is challenging the assumption that prehistoric people subsisted on corn in America's Southwest. Instead, he said evidence suggests they used fire to cultivate wild foods.
Conventional wisdom holds that prehistoric villagers planted corn, and lots of it, to survive the dry and hostile conditions of the American Southwest.

But University of Cincinnati archaeology professor Alan Sullivan is challenging that long-standing idea, arguing instead that people routinely burned the understory of forests to grow wild crops 1,000 years ago.

"There has been this orthodoxy about the importance of corn," said Sullivan, director of graduate studies in UC's Department of Anthropology in the McMicken College of Arts and Sciences. "It's been widely considered that prehistoric peoples of Arizona between A.D. 900 to 1200 were dependent on it.

"But if corn is lurking out there in the Grand Canyon, it's hiding successfully because we've looked all over and haven't found it."

Sullivan has published a dozen papers outlining the scarce evidence of corn agriculture at more than 2,000 sites where he and his students have found pottery sherds and other artifacts of prehistoric human settlement. He summarized his findings in a presentation last month at Boston University.

Sullivan has spent more than two decades leading archaeological field research to Grand Canyon National Park and the region's Upper Basin, home to the 1.6-million-acre Kaibab National Forest.

When you think of the Grand Canyon, you might picture rocky cliffs and desert vistas. But the Upper Basin, where Sullivan and his students work, is home to mature forests of juniper and pinyon trees stretching as far as you can see, he said.

"When you look down into the Grand Canyon, you don't see any forest. But on either rim there are deep, dense forests," he said.

On these high-elevation plateaus, Sullivan and his students have unearthed ceramic jugs adorned with corrugated patterns and other evidence of prehistoric life. Sullivan is particularly interested in the cultural and social practices of growing, sharing and eating food, also called a foodway.

"What would constitute evidence of a corn-based foodway?" he asked. "And if experts agree it should look like this but we don't find evidence of it, that would seem to be a problem for that model."

Like a detective, Sullivan has pieced together clues firsthand and from scientific analysis to make a persuasive argument that people used fire to promote the growth of edible leaves, seeds and nuts of plants such as amaranth and chenopodium, wild relatives of quinoa. These plants are called "ruderals," which are the first to grow in a forest disturbed by fire or clear-cutting.

"It's definitely a paradigm-threatening opinion," Sullivan said. "It's not based on wild speculation. It's evidence-based theorizing. It has taken us about 30 years to get to the point where we can confidently conclude this."

Lab analysis identified ancient pollen preserved in dirt inside clay pots that had been used 1,000 years ago, long before Sullivan and his students found them.

"They've identified 6,000 or 7,000 pollen grains and only six [grains] were corn. Everything else is dominated by these ruderals," Sullivan said.
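To put those counts in perspective, a quick back-of-the-envelope calculation (using the midpoint of the quoted 6,000-7,000 figure) shows just how small the corn signal is:

```python
# Rough share of corn pollen among the identified grains. The figures come
# from the article; 6,500 is simply the midpoint of the 6,000-7,000 range.
total_grains = 6500
corn_grains = 6
corn_share = corn_grains / total_grains
print(f"corn pollen: {corn_share:.2%} of grains identified")
```

Less than one grain in a thousand, consistent with Sullivan's argument that corn was at most a marginal part of the local foodway.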

The corn itself looked nothing like the hearty ears of sweet corn people enjoy at barbecues today. The ears were puny, about one-third the size of a typical cob, with tiny, hard kernels, Sullivan said.

So if prehistoric people were not growing corn, what were they eating? Sullivan found clues around his excavation sites that people set fires big enough to burn away the understory of grasses and weeds but small enough not to harm the pinyon and juniper trees, important sources of calorie-rich nuts and berries.

Evidence for this theory was found in ancient trees. Raging wildfires leave burn scars in growth rings of surviving trees. In the absence of frequent small fires, forests would accumulate vast amounts of underbrush and fallen timber to create conditions ripe for an inferno sparked by a lightning strike. But examinations of ancient juniper and ponderosa pine trees found no burn scars, suggesting big fires are a relatively new phenomenon in Arizona.

"To me that confirms there weren't massive fires back then," Sullivan said.

Sullivan also studied the geologic layers at these sites. Like a time capsule, the stratigraphic analysis captured the periods before and after people lived there. He found higher concentrations of wild edible plants in the period when people lived there. And when people abandoned the sites, the area they left behind saw fewer of these plants.

But it was only this year that Sullivan found contemporary evidence supporting his theory that prehistoric people generated a spring bounty by setting fires. Sullivan returned to the Grand Canyon last spring to examine forest destroyed by a massive 2016 fire. Touched off by a lightning strike, the blaze called the Scott Fire laid waste to 2,660 acres of pines, junipers and sagebrush.

Despite the intensity of the forest fire, Sullivan found edible plants growing thick everywhere underfoot just months later.

"This burned area was covered in ruderals. Just covered," he said. "That to us was confirmation of our theory. Our argument is there's this dormant seed bed that is activated by any kind of fire."

Archaeologists with the National Park Service have found evidence that corn grew below the rim of the Grand Canyon, said Ellen Brennan, cultural resource program manager for the national park.

"It does appear that the ancient people of the Grand Canyon never pursued corn agriculture to the extent that other ancestral Puebloan peoples did in other parts of the Southwest," Brennan said. "In the Grand Canyon, it appears that there continued to be persistent use of native plants as a primary food source rather than corn."

The National Park Service has not examined whether prehistoric people used fire to improve growing conditions for native plants. But given what is known about cultures at the time, it is likely they did, Brennan said.

The first assumptions about what daily life was like in the Southwest 1,000 years ago came from more recent observations of Native Americans such as the Hopi, said Neil Weintraub, archaeologist for Kaibab National Forest. He worked alongside Sullivan at some of the sites in the Upper Basin.

"Corn is still a big part of the Hopi culture. A lot of dances they do are about water and the fertility of corn," he said. "The Hopi are seen as descendant groups of the ancestral Puebloans."

While native peoples elsewhere in the Southwest no doubt relied on corn, Weintraub said, Sullivan's work has convinced him that residents of the Upper Basin relied on wild food -- and used fire to cultivate it.

"It's a fascinating idea because we really see that these people were highly mobile. On the margins where it's very dry we think they were taking advantage of different parts of the landscape at different times of the year," Weintraub said.

"It's been well documented that Native Americans burned the forest in other parts of the country. I see no reason why they wouldn't have been doing the same thing 1,000 years ago," he said.

The area around the Grand Canyon is especially dry, going many weeks without rain. Still, life persists. Weintraub said the forest generates a surprising bounty of food if you know where to look. Some years, the pinyon trees produce a bumper crop of tasty, nutritious nuts.

"In a good year, we didn't need to bring lunch in the field when we were out at our archaeological surveys. We'd be cracking pinyons all day," Weintraub said.

Weintraub recently studied the forest burned in last year's big Scott Fire. The exposed ground was thick with new undergrowth, particularly a wild relative of quinoa called goosefoot, he said.

"Goosefoot has a minty smell to it, especially in the fall. We actually started chewing on it. It was pretty pleasant," Weintraub said. "It's a high-nutrient food. I'd be curious to know more about how native peoples processed it for food."

UC's Sullivan said this prehistoric land management can teach us lessons today, especially when it comes to preventing devastating fires.

"Foresters call it 'the wicked problem.' All of our forests are anthropogenic [human-made] because of fire suppression and fire exclusion," Sullivan said.

"These forests are unnatural. They're alien to the planet. They have not had any major fires in them in decades," he said. "The fuel loads have built up to the point where you get a little ignition source and the fire is catastrophic in ways that they rarely were in the past."

The National Park Service often lets fires burn in natural areas when they do not threaten people or property. But increasingly people are building homes and businesses adjacent to or within forests, and forest managers are reluctant to conduct controlled burns so close to populated areas, Sullivan said.

Eventually so much dry wood builds up that a dropped cigarette or unattended campfire can lead to devastating fires such as the 2016 blaze that killed 14 people and destroyed 11,000 acres in the Great Smoky Mountains or the fires in California this year that killed 40 people and caused an estimated $1 billion in property damage.

"It's a chronic problem. How do you fix it?" he asked. "The U.S. Forest Service has experimented with different methods: prescribed burning, which creates a lot of irritating smoke, or thinning the forest, which creates a disposal problem."

Fire also seems to increase the diversity of forest species. Sullivan said vegetation surveys find less biodiversity in forests today than he found in his archeological samples.

"That is one measure of how devastating our management of fire has been to these forests," he said. "These fire-responsive plants have basically disappeared from the landscape. Species diversity in some cases has collapsed."

Today, federal land managers conduct controlled burns when practical to address this problem, even in national parks such as the Grand Canyon.

"The fire management program for Grand Canyon National Park seeks to reintroduce fire as a natural agent of the environment," the park's Brennan said. "That is to reduce ground fuels through prescribed fire, mechanical thinning, and wildland fire."

Scientists also are studying how to adjust forest management techniques in the face of climate change, she said.

"Program managers are working to understand how climate change affects forest management and how to restore forests to the point where fire can follow a more natural return interval given a particular forest type," she said.

Climate change is expected to make wildfires more frequent and severe with rising temperatures and lower humidity. Meanwhile, public lands are under increasing pressure from private interests such as tourism and mining, putting more people at potential risk from fire, Sullivan said.

Read more at Science Daily

Infant stars found surprisingly near galaxy's supermassive black hole

Infant stars, like those recently identified near the supermassive black hole at the center of our galaxy, are surrounded by a swirling disk of dust and gas. In this artist's conception of an infant solar system, the young star pulls material from its surroundings into a rotating disk (right) and generates outflowing jets of material (left).
At the center of our galaxy, in the immediate vicinity of its supermassive black hole, is a region wracked by powerful tidal forces and bathed in intense ultraviolet light and X-ray radiation. These harsh conditions, astronomers surmise, do not favor star formation, especially low-mass stars like our sun. Surprisingly, new observations from the Atacama Large Millimeter/submillimeter Array (ALMA) suggest otherwise.

ALMA has revealed the telltale signs of eleven low-mass stars forming perilously close -- within three light-years -- to the Milky Way's supermassive black hole, known to astronomers as Sagittarius A* (Sgr A*). At this distance, tidal forces driven by the supermassive black hole should be energetic enough to rip apart clouds of dust and gas before they can form stars.
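The tidal argument can be made semi-quantitative with a Roche-style estimate: a gas cloud at distance r from a black hole of mass M resists being torn apart only if its mean density exceeds roughly M/r³ (an order-of-magnitude criterion; numerical prefactors of a few are deliberately dropped). A hedged sketch using the figures quoted in the article:

```python
# Order-of-magnitude Roche-style density threshold for a gas cloud to
# survive tides near Sgr A*. Prefactors of order unity are omitted.
M_SUN_G = 1.989e33   # solar mass in grams
PC_CM = 3.086e18     # one parsec in centimeters
M_H_G = 1.67e-24     # mass of a hydrogen atom in grams

def tidal_density_cm3(m_bh_msun, r_pc):
    """Approximate minimum number density (H atoms per cm^3) to resist tides."""
    m = m_bh_msun * M_SUN_G
    r = r_pc * PC_CM
    return m / r**3 / M_H_G

# 4 million solar masses (Sgr A*) at a distance of 1 parsec
n_crit = tidal_density_cm3(4e6, 1.0)
print(f"critical density ~ {n_crit:.0e} H atoms per cm^3")
```

The answer comes out near 10⁸ hydrogen atoms per cubic centimeter, orders of magnitude denser than typical star-forming clouds, which is why finding protostars this close to Sgr A* is so surprising.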

The presence of these newly discovered protostars (the formative stage between a dense cloud of gas and a young, shining star) suggests that the conditions necessary to birth low-mass stars may exist even in one of the most turbulent regions of our galaxy and possibly in similar locales throughout the universe.

The results are published in the Astrophysical Journal Letters.

"Despite all odds, we see the best evidence yet that low-mass stars are forming startlingly close to the supermassive black hole at the center of the Milky Way," said Farhad Yusef-Zadeh, an astronomer at Northwestern University in Evanston, Illinois, and lead author on the paper. "This is a genuinely surprising result and one that demonstrates just how robust star formation can be, even in the most unlikely of places."

The ALMA data also suggest that these protostars are about 6,000 years old. "This is important because it is the earliest phase of star formation we have found in this highly hostile environment," Yusef-Zadeh said.

The team of researchers identified these protostars by seeing the classic "double lobes" of material that bracket each of them. These cosmic hourglass-like shapes signal the early stages of star formation. Molecules, like carbon monoxide (CO), in these lobes glow brightly in millimeter-wavelength light, which ALMA can observe with remarkable precision and sensitivity.

Protostars form from interstellar clouds of dust and gas. Dense pockets of material in these clouds collapse under their own gravity and grow by accumulating more and more star-forming gas from their parent clouds. A portion of this infalling material, however, never makes it onto the surface of the star. Instead, it is ejected as a pair of high-velocity jets from the protostar's north and south poles. Extremely turbulent environments can disrupt the normal accretion of material onto a protostar, while intense radiation -- from massive nearby stars and supermassive black holes -- can blast away the parent cloud, thwarting the formation of all but the most massive of stars.

The Milky Way's galactic center, with its 4 million solar mass black hole, is located approximately 26,000 light-years from Earth in the direction of the constellation Sagittarius. Vast stores of interstellar dust obscure this region, hiding it from optical telescopes. Radio waves, including the millimeter and submillimeter light that ALMA sees, are able to penetrate this dust, giving radio astronomers a clearer picture of the dynamics and content of this hostile environment.

Prior ALMA observations of the region surrounding Sgr A* by Yusef-Zadeh and his team revealed multiple massive infant stars that are estimated to be about 6 million years old. These objects, known as proplyds, are common features in more placid star-forming regions, like the Orion Nebula. Though the galactic center is a challenging environment for star formation, it is possible for particularly dense cores of hydrogen gas to cross the necessary threshold and forge new stars.

The new ALMA observations, however, revealed something even more remarkable, signs that eleven low-mass protostars are forming within 1 parsec -- a scant 3 light-years -- of the galaxy's central black hole. Yusef-Zadeh and his team used ALMA to confirm that the masses and momentum transfer rates -- the ability of the protostar jets to plow through surrounding interstellar material -- are consistent with young protostars found throughout the disk of our galaxy.

"This discovery provides evidence that star formation is taking place within clouds surprisingly close to Sagittarius A*," said Al Wootten with the National Radio Astronomy Observatory in Charlottesville, Virginia, and co-author on the paper. "Though these conditions are far from ideal, we can envision several pathways for these stars to emerge."

For this to occur, outside forces would have to compress the gas clouds near the center of our galaxy to overcome the violent nature of the region and allow gravity to take over and form stars. The astronomers speculate that high-velocity gas clouds could aid in star formation as they force their way through the interstellar medium. It is also possible that jets from the black hole itself could be plowing into the surrounding gas clouds, compressing material and triggering this burst of star formation.

"The next step is to take a closer look to confirm that these newly formed stars are orbited by disks of dusty gas," concluded Mark Wardle, an astronomer at Macquarie University in Sydney, Australia, and co-investigator on the team. "If so, it's likely that planets will eventually form from this material, as is the case for young stars in the galactic disk."

Read more at Science Daily

Mexico's Yucatan Peninsula reveals a cryptic methane-fueled ecosystem in flooded caves

Caves within a karst subterranean estuary are filled with separated fresh (green), brackish (gray) and saline (blue) waters. Within the subterranean estuary, methane (CH4) and other forms of dissolved organic carbon (DOC) created during the decomposition of soil from the overlying tropical forest sustain a complex cave-adapted ecosystem.
In the underground rivers and flooded caves of Mexico's Yucatan Peninsula, where Mayan lore described a fantastical underworld, scientists have found a cryptic world in its own right. Here, methane and the bacteria that feed off it form the lynchpin of an ecosystem that is similar to what has been found in deep ocean cold seeps and some lakes, according to recent research by Texas A&M University at Galveston, the U.S. Geological Survey and a team of collaborators from Mexico, The Netherlands, Switzerland and other U.S. institutions.

The research, conducted by scientists who are trained in cave diving in addition to their other expertise, is the most detailed ecological study ever conducted of a coastal cave ecosystem that is permanently underwater. In fact, the scientists had to adapt techniques previously used by deep-sea submergence vehicles in order to study the environment.

"The opportunity to work with an international team of experts has been a remarkable experience for me," said David Brankovits, who is the paper's lead author and conducted the research during his Ph.D. studies at TAMUG. "Finding that methane and other forms of mostly invisible dissolved organic matter are the foundation of the food web in these caves explains why cave-adapted animals are able to thrive in the water column in a habitat without visible evidence of food."

The study was conducted in the Ox Bel Ha cave network of the northeastern Yucatan, which is described as a subterranean estuary because the flooded cave passages contain distinct water layers consisting of freshwater fed by rainfall and salt water from the coastal ocean. This subterranean estuary complex covers an area approximately the size of Galveston Bay, the seventh largest surface estuary in the United States.

The freshwater portion of the caves and the sinkholes, which are used to access the caves and are referred to locally as cenotes, are important sources of freshwater for communities throughout the Yucatan. Methane in the caves forms naturally beneath the jungle floor and migrates downward, deeper into the water and caves. Normally, all of the methane formed in soils migrates upward, towards the atmosphere.

This sets the stage for the bacteria and other microbes that form the basis for the cave ecosystem. The microbes eat both the methane in the water and other dissolved organic material that the freshwater brought with it from the surface. The microbes then fuel a food web that is dominated by crustaceans, including a cave-adapted shrimp species that obtains about 21 percent of its nutrition from methane.
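Diet fractions like that 21 percent are typically derived from stable-carbon-isotope mixing models: biogenic methane carries a much more negative δ¹³C signature than detrital plant matter, so an animal's tissue value falls between the two end-members in proportion to its carbon sources. The sketch below shows the two-end-member calculation; the δ¹³C values are illustrative placeholders, not measurements from this study:

```python
# Two-end-member stable-isotope mixing sketch. The delta-13C values used
# below are hypothetical illustrations, not data from the study.
def methane_fraction(d13c_tissue, d13c_methane, d13c_detritus):
    """Fraction of tissue carbon attributable to the methane end-member."""
    return (d13c_tissue - d13c_detritus) / (d13c_methane - d13c_detritus)

# Illustrative values: biogenic methane ~ -60 permil, forest detritus ~ -28 permil
f = methane_fraction(d13c_tissue=-34.7, d13c_methane=-60.0, d13c_detritus=-28.0)
print(f"methane-derived carbon: {f:.0%}")
```

With these assumed end-members, a tissue value of -34.7 permil implies that roughly a fifth of the animal's carbon traces back to methane.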

"The processes we are investigating in these stratified groundwater systems are analogous to what is happening in the global ocean, especially in oxygen minimum zones where deoxygenation is a growing concern," says John Pohlman, a coauthor of the study and a USGS biogeochemist whose work from the early 90s motivated the research. "Although accessing these systems requires specialized training and strict adherence to cave diving safety protocols, relative to the complexity of an oceanographic expedition, the field programs we organize are simple and economical."

One surprising finding was how important the dissolved organic material like methane was to the caves' food web. Prior studies had assumed that the majority of organic material that feeds the microbes of caves came from vegetation and other detritus in the tropical forest that washed into the caves from the cenotes.

However, deep within the caves, where the study was conducted, there is very little of that surface debris, so the microbes depend on methane and the other dissolved organics percolating downward through the ceiling of the caves.

Read more at Science Daily

A horse is a horse, of course, of course -- except when it isn't

Two skulls of the new genus Haringtonhippus from Nevada (upper) and Texas (lower).
An international team of researchers has discovered a previously unrecognized genus of extinct horses that roamed North America during the last ice age.

The new findings, published November 28 in the journal eLife, are based on an analysis of ancient DNA from fossils of the enigmatic "New World stilt-legged horse" excavated from sites such as Natural Trap Cave in Wyoming, Gypsum Cave in Nevada, and the Klondike goldfields of Canada's Yukon Territory.

Prior to this study, these thin-limbed, lightly built horses were thought to be related to the Asiatic wild ass or onager, or simply a separate species within the genus Equus, which includes living horses, asses, and zebras. The new results, however, reveal that these horses were not closely related to any living population of horses.

Now named Haringtonhippus francisci, this extinct species of North American horse appears to have diverged from the main trunk of the family tree leading to Equus some 4 to 6 million years ago.

"The horse family, thanks to its rich and deep fossil record, has been a model system for understanding and teaching evolution. Now ancient DNA has rewritten the evolutionary history of this iconic group," said first author Peter Heintzman, who led the study as a postdoctoral researcher at UC Santa Cruz.

"The evolutionary distance between the extinct stilt-legged horses and all living horses took us by surprise, but it presented us with an exciting opportunity to name a new genus of horse," said senior author Beth Shapiro, professor of ecology and evolutionary biology at UC Santa Cruz.

The team named the new horse after Richard Harington, emeritus curator of Quaternary Paleontology at the Canadian Museum of Nature in Ottawa. Harington, who was not involved in the study, spent his career studying the ice age fossils of Canada's North and first described the stilt-legged horses in the early 1970s.

"I had been curious for many years concerning the identity of two horse metatarsal bones I collected, one from Klondike, Yukon, and the other from Lost Chicken Creek, Alaska. They looked like those of modern Asiatic kiangs, but thanks to the research of my esteemed colleagues they are now known to belong to a new genus," said Harington. "I am delighted to have this new genus named after me."

The new findings show that Haringtonhippus francisci was a widespread and successful species throughout much of North America, living alongside populations of Equus but not interbreeding with them. In Canada's North, Haringtonhippus survived until roughly 17,000 years ago, more than 19,000 years later than previously known from this region.

At the end of the last ice age, both horse groups became extinct in North America, along with other large animals like woolly mammoths and saber-toothed cats. Although Equus survived in Eurasia after the last ice age, eventually leading to domestic horses, the stilt-legged Haringtonhippus was an evolutionary dead end.

"We are very pleased to name this new horse genus after our friend and colleague Dick Harington. There is no other scientist who has had greater impact in the field of ice age paleontology in Canada than Dick," said coauthor Grant Zazula, a Government of Yukon paleontologist. "Our research on fossils such as these horses would not be possible without Dick's life-long dedication to working closely with the Klondike gold miners and local First Nations communities in Canada's North."

Read more at Science Daily