Apr 28, 2018

Why a robot can't yet outjump a flea

The award for the fastest punch goes to mantis shrimp, which use their hammer-like appendages to smash open snail shells for food.
When it comes to things that are ultrafast and lightweight, robots can't hold a candle to the fastest-jumping insects and other small-but-powerful creatures.

New research could help explain why nature still beats robots, and describes how machines might take the lead.

Take the smashing mantis shrimp, a small crustacean not much bigger than a thumb. Its hammer-like mouthparts can repeatedly deliver 69-mile-per-hour wallops, each more than 100 times faster than the blink of an eye, to break open hard snail shells.

Or the unassuming trap-jaw ant: In a zero-to-60 matchup, even the fastest dragster would have little chance against its snapping mandibles, which reach speeds of more than 140 miles per hour in less than a millisecond to nab their prey.

One of the fastest accelerations known on Earth is the hydra's sting. These soft-bodied aquatic creatures defend themselves with help from capsules along their tentacles that act like pressurized balloons. When triggered, they fire a barrage of microscopic poison spears that briefly accelerate 100 times faster than a bullet.

In a study to appear April 27 in the journal Science, researchers describe a new mathematical model that could help explain how these and other tiny organisms generate their powerful strikes, chomps, jumps and punches. The model could also suggest ways to design small, nature-inspired robots that come closer to their biological counterparts in terms of power or speed.

The secret to these organisms' explosive movements isn't powerful muscles, but rather spring-loaded parts they can cock and release like an archer's bow, said Sheila Patek, associate professor of biology at Duke University.

Tough yet flexible tendons, cuticles and other elastic structures stretch and release like slingshots, powering their jumps and snaps.

A short-legged insect called the froghopper, for example, has a bow-like structure called the pleural arch that acts like a spring. Latch-like protrusions on its legs control the spring's release, allowing the insect to leap more than 100 times its body length despite its short legs. A person with that much relative power could jump nearly the length of two football fields.

However, it's not clear how these mechanisms work together to enhance power, said Mark Ilton, a postdoctoral fellow at the University of Massachusetts Amherst.

While traditional mathematical models of performance account for the inherent physical tradeoffs of muscle -- which can contract forcefully or quickly, but not both -- they fail to factor in the analogous tradeoffs of springs and latch-like mechanisms. In other words, no component can be faster, stronger, and more powerful all at once.
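
The muscle tradeoff described above is often captured by the classic Hill force-velocity relation: force is greatest at zero shortening speed, and power (force times velocity) peaks somewhere in between. The sketch below is purely illustrative; the function names and parameter values are invented for demonstration and are not taken from the study's model.

```python
# Illustrative sketch of the muscle force-velocity tradeoff using the
# classic Hill relation: (F + a)(v + b) = (F0 + a) * b.
# Parameter values are arbitrary, chosen only to show the curve's shape.

def hill_force(v, F0=1.0, a=0.25, b=0.25):
    """Force a muscle can produce at shortening velocity v (Hill model)."""
    return (F0 + a) * b / (v + b) - a

def power(v, **kw):
    """Mechanical power output P = F * v."""
    return hill_force(v, **kw) * v

# Scan velocities: force is highest at v = 0, but power there is zero;
# power peaks at an intermediate velocity and falls back to zero at v_max.
velocities = [i / 100 for i in range(0, 101)]
best_v = max(velocities, key=power)
print(f"Peak power occurs near v = {best_v:.2f}, not at max force (v = 0)")
```

Because power vanishes at both extremes, a muscle alone cannot deliver its work both forcefully and quickly, which is part of why springs and latches matter so much at small scales.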

"Until now these other components have been mostly black-boxed," Patek said.

The researchers developed a mathematical model of fast motion at small scales that incorporates constraints on springs and latches.

"Part of our goal was to try to develop a model that is equally generalizable to biological or engineered systems," said Manny Azizi, an assistant professor of ecology and evolutionary biology at the University of California, Irvine who studies jumping frogs.

First, they compiled data on body size, top speed and peak acceleration for 104 species of elite plant and animal athletes. They compared these data to similar measurements for miniature robots inspired by ultrafast movements such as unfurling chameleon tongues, snapping Venus flytraps and hopping insects.

By incorporating the performance tradeoffs of biological and synthetic springs and latches, the researchers hope to better understand how variables such as spring mass, stiffness, material composition and latch geometry work together with muscles or motors to influence power.

The model allows researchers to input a set of spring, latch and muscle or motor parameters and get back details about an individual's theoretical maximum speed, acceleration, and other aspects of performance at a given weight.
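
As a rough illustration of that input-to-output idea, consider the simplest possible case: an ideal massless spring that transfers all of its stored elastic energy to a projectile. The published model is far richer, accounting for spring mass, latch geometry and material limits, and the numbers below are hypothetical rather than measured values.

```python
import math

# Toy sketch of a latch-mediated spring calculation: energy loaded into
# an ideal spring is released all at once to launch a mass. This is only
# an upper bound; real springs and latches dissipate some of this energy.

def max_launch_speed(stiffness, compression, projectile_mass):
    """Upper-bound launch speed from ideal spring energy E = 1/2 k x^2."""
    energy = 0.5 * stiffness * compression**2       # joules
    return math.sqrt(2 * energy / projectile_mass)  # m/s

# Hypothetical flea-scale inputs (not measured values):
v = max_launch_speed(stiffness=40.0, compression=5e-4, projectile_mass=5e-7)
print(f"ideal launch speed: {v:.2f} m/s")
```

Real latch friction, spring inertia and material limits push performance below this ideal bound, which is one reason the matching of components matters so much.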

The model has major implications for engineers. It suggests that robots can't yet outjump a flea in part because such quick, repeatable movements require components to be exquisitely fine-tuned to each other.

But the model gives researchers a tool to design small, fast-moving robots with more precisely matched components that work better together to enhance performance, said Sarah Bergbreiter, an associate professor of mechanical engineering at the University of Maryland who makes jumping robots the size of an ant.

"If you have a particular size robot that you want to design, for example, it would allow you to better explore what kind of spring you want, what kind of motor you want, what kind of latch you need to get the best performance at that size scale, and understand the consequences of those design choices," Bergbreiter said.

Read more at Science Daily

Mercury's thin, dense crust

Though Mercury may look drab to the human eye, different minerals appear in a rainbow of colors in this image from NASA's MESSENGER spacecraft.
Mercury is small, fast and close to the sun, making the rocky world challenging to visit. Only one probe has ever orbited the planet and collected enough data to tell scientists about the chemistry and landscape of Mercury's surface. Learning about what is beneath the surface, however, requires careful estimation.

After the probe's mission ended in 2015, planetary scientists estimated Mercury's crust was roughly 22 miles thick. One University of Arizona scientist disagrees.

Using the most recent mathematical formulas, Lunar and Planetary Laboratory associate staff scientist Michael Sori estimates that the Mercurial crust is just 16 miles thick and is denser than aluminum. His study, "A Thin, Dense Crust for Mercury," will be published May 1 in Earth and Planetary Science Letters and is currently available online.

Sori determined the density of Mercury's crust using data collected by the Mercury Surface, Space Environment and Geochemistry Ranging (MESSENGER) spacecraft. He created his estimate using a formula developed by Isamu Matsuyama, a professor in the Lunar and Planetary Laboratory, and University of California Berkeley scientist Douglas Hemingway.

Sori's estimate supports the theory that Mercury's crust formed largely through volcanic activity. Understanding how the crust was formed may allow scientists to understand the formation of the entire oddly structured planet.

"Of the terrestrial planets, Mercury has the biggest core relative to its size," Sori said.

Mercury's core is believed to occupy 60 percent of the planet's entire volume. For comparison, Earth's core takes up roughly 15 percent of its volume. Why is Mercury's core so large?

"Maybe it formed closer to a normal planet and maybe a lot of the crust and mantle got stripped away by giant impacts," Sori said. "Another idea is that maybe, when you're forming so close to the sun, the solar winds blow away a lot of the rock and you get a large core size very early on. There's not an answer that everyone agrees to yet."

Sori's work may help point scientists in the right direction. Already, it has solved a problem regarding the rocks in Mercury's crust.

Mercury's Mysterious Rocks

When the planets and Earth's moon formed, their crusts were born from their mantles, the layer between a planet's core and crust that oozes and flows over the course of millions of years. The volume of a planet's crust reflects the percentage of its mantle that was turned into rock.

Before Sori's study, estimates of the thickness of Mercury's crust led scientists to believe 11 percent of the planet's original mantle had been turned into rocks in the crust. For the Earth's moon -- the celestial body closest in size to Mercury -- the number is lower, near 7 percent.

"The two bodies formed their crusts in very different ways, so it wasn't necessarily alarming that they didn't have the exact same percentage of rocks in their crust," Sori said.

The moon's crust formed when less dense minerals floated to the surface of an ocean of liquid rock that became the body's mantle. At the top of the magma ocean, the moon's buoyant minerals cooled and hardened into a "flotation crust." Mercury's crust, by contrast, was built as eons of volcanic eruptions coated its surface, creating a "magmatic crust."

Explaining why Mercury created more rocks than the moon did was a scientific mystery no one had solved. Now, the case can be closed, as Sori's study places the percentage of rocks in Mercury's crust at 7 percent. Mercury is no better than the moon at making rocks.

Sori solved the mystery by estimating the crust's depth and density, which meant he had to find out what kind of isostasy supported Mercury's crust.

Determining Density and Depth

The most natural shape for a planetary body to take is a smooth sphere, where all points on the surface are an equal distance from the planet's core. Isostasy describes how mountains, valleys and hills are supported and kept from flattening into smooth plains.

There are two main types of isostasy: Pratt and Airy. Both focus on balancing the masses of equally sized slices of the planet. If the mass in one slice is much greater than the mass in an adjacent slice, the planet's mantle will ooze, shifting the crust on top of it until the masses of every slice are equal.

Pratt isostasy states that a planet's crust varies in density. A slice of the planet that contains a mountain has the same mass as a slice that contains flat land, because the crust that makes the mountain is less dense than the crust that makes flat land. In all points of the planet, the bottom of the crust floats evenly on the mantle.

Until Sori completed his study, no scientist had explained why Pratt isostasy would or wouldn't support Mercury's landscape. To test it, Sori needed to relate the planet's density to its topography. Scientists had already constructed a topographic map of Mercury using data from MESSENGER, but a map of density didn't exist. So Sori made his own using MESSENGER's data about the elements found on Mercury's surface.

"We know what minerals usually form rocks, and we know what elements each of these minerals contain. We can intelligently divide all the chemical abundances into a list of minerals," Sori said of the process he used to determine the location and abundance of minerals on the surface. "We know the densities of each of these minerals. We add them all up, and we get a map of density."
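
The final step Sori describes, adding up the densities of the inferred minerals, amounts to an abundance-weighted average for each pixel of the map. A minimal sketch of that idea, using illustrative handbook densities rather than the study's actual mineral inventory:

```python
# Sketch of the density-map idea: given a plausible mineral mix for one
# map pixel, average the known mineral densities weighted by abundance.
# Mineral names and densities are illustrative textbook values, not the
# study's actual inputs.

MINERAL_DENSITY_KG_M3 = {
    "plagioclase": 2690,
    "olivine": 3320,
    "pyroxene": 3250,
    "sulfides": 4600,
}

def surface_density(mineral_fractions):
    """Weighted-average density for one map pixel.

    mineral_fractions: dict mapping mineral name -> abundance fraction
    (fractions should sum to 1).
    """
    total = sum(mineral_fractions.values())
    return sum(MINERAL_DENSITY_KG_M3[m] * f
               for m, f in mineral_fractions.items()) / total

# A hypothetical pixel dominated by feldspar and pyroxene:
pixel = {"plagioclase": 0.55, "pyroxene": 0.30, "olivine": 0.10, "sulfides": 0.05}
print(f"{surface_density(pixel):.0f} kg/m^3")
```

Repeating this for every pixel of elemental-abundance data yields a density map that can then be compared against topography.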

Sori then compared his density map with the topographic map. If Pratt isostasy could explain Mercury's landscape, Sori expected to find high-density minerals in craters and low-density minerals in mountains; however, he found no such relationship. On Mercury, minerals of high and low density are found in mountains and craters alike.

With Pratt isostasy disproven, Sori considered Airy isostasy, which has been used to make estimates of Mercury's crustal thickness. Airy isostasy states that the depth of a planet's crust varies depending on the topography.

"If you see a mountain on the surface, it can be supported by a root beneath it," Sori said, likening it to an iceberg floating on water.

The tip of an iceberg is supported by a mass of ice that protrudes deep underwater. The iceberg contains the same mass as the water it displaces. Similarly, a mountain and its root will contain the same mass as the mantle material being displaced. In craters, the crust is thin, and the mantle is closer to the surface. A wedge of the planet containing a mountain would have the same mass as a wedge containing a crater.
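
In its simple flat-geometry form, the Airy balance behind the iceberg analogy equates the extra mass of the mountain with the mantle mass displaced by its root, giving a root depth of r = h * rho_crust / (rho_mantle - rho_crust). A minimal sketch with illustrative densities (not the study's fitted values):

```python
# Flat-geometry Airy isostasy: a mountain of height h is supported by a
# crustal root of depth r such that the column's extra crust mass equals
# the displaced mantle mass:
#     rho_crust * h = (rho_mantle - rho_crust) * r
# Densities below are illustrative, not values from the study.

def airy_root_depth(height_km, rho_crust, rho_mantle):
    """Root depth (km) needed to support a mountain of the given height."""
    return height_km * rho_crust / (rho_mantle - rho_crust)

# A hypothetical 2 km Mercurian mountain, crust ~2950 and mantle ~3300 kg/m^3:
r = airy_root_depth(2.0, rho_crust=2950, rho_mantle=3300)
print(f"root depth ~{r:.1f} km")
```

This is the two-dimensional balance that, as Sori notes, does not exactly work out once spherical geometry is taken into account.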

"These arguments work in two dimensions, but when you account for spherical geometry, the formula doesn't exactly work out," Sori said.

The formula recently developed by Matsuyama and Hemingway, though, does work for spherical bodies like planets. Instead of balancing the masses of the crust and mantle, the formula balances the pressure the crust exerts on the mantle, providing a more accurate estimate of crustal thickness.

Sori used his estimates of the crust's density and Hemingway and Matsuyama's formula to find the crust's thickness. Sori is confident his estimate of Mercury's crustal thickness in its northern hemisphere will not be disproven, even if new data about Mercury is collected. He does not share this confidence about Mercury's crustal density.

MESSENGER collected much more data on the northern hemisphere than the southern, and Sori predicts the average density of the planet's surface will change when density data is collected over the entire planet. He already sees the need for a follow-up study in the future.

Read more at Science Daily

Apr 27, 2018

Genetic roadmap to building an entire organism from a single cell

A zebrafish egg cell divides.
Whether a worm, a human or a blue whale, all multicellular life begins as a single-celled egg.

From this solitary cell emerges the galaxy of others needed to build an organism, with each new cell developing in the right place at the right time to carry out a precise function in coordination with its neighbors.

This feat is one of the most remarkable in the natural world, and despite decades of study, a complete understanding of the process has eluded biologists.

Now, in three landmark studies published online April 26 in Science, Harvard Medical School and Harvard University researchers report how they have systematically profiled every cell in developing zebrafish and frog embryos to establish a roadmap revealing how one cell builds an entire organism.

Using single-cell sequencing technology, the research teams traced the fates of individual cells over the first 24 hours of the life of an embryo. Their analyses reveal the comprehensive landscape of which genes are switched on or off, and when, as embryonic cells transition into new cell states and types.

Together, the findings represent a catalog of genetic "recipes" for generating different cell types in two important model species and provide an unprecedented resource for the study of developmental biology and disease.

"With single-cell sequencing, we can, in a day's work, recapitulate decades of painstaking research on the decisions cells make at the earliest stages of life," said Allon Klein, HMS assistant professor of systems biology and co-corresponding author of two of the three Science studies.

Biomedically, these baseline resources for how organisms develop are as important as having baseline resources for their genomes, the researchers said.

"With the approaches that we've developed, we're charting what we think the future of developmental biology will be as it transforms into a quantitative, 'big-data'-driven science," Klein said.

In addition to shedding new light on the early stages of life, the work could open the door to a new understanding of a host of diseases, said Alexander Schier, the Leo Erikson Life Sciences Professor of Molecular and Cellular Biology at Harvard, and a corresponding author of the third study.

"We foresee that any complex biological process in which cells change gene expression over time can be reconstructed using this approach," Schier said. "Not just the development of embryos but also the development of cancer or brain degeneration."

One at a time


Every cell in a developing embryo carries within it a copy of the organism's complete genome. Like construction workers using only the relevant portion of a blueprint when laying a building's foundation, cells must express the necessary genes at the appropriate time for the embryo to develop correctly.

In their studies, Klein collaborated with co-authors Marc Kirschner, the HMS John Franklin Enders University Professor of Systems Biology; Sean Megason, HMS associate professor of systems biology; and colleagues to analyze this process in zebrafish and western clawed frog (Xenopus tropicalis) embryos, two of the most well-studied model species in biology.

The researchers leveraged the power of InDrops, a single-cell sequencing technology developed at HMS by Klein, Kirschner and colleagues, to capture gene expression data from each cell of the embryo, one cell at a time. The teams collectively profiled more than 200,000 cells at multiple time points over 24 hours for both species.

To map the lineage of essentially every cell as an embryo develops, along with the precise sequence of gene expression events that mark new cell states and types, the teams developed new experimental and computational techniques, including TracerSeq, a method that introduces artificial DNA barcodes to track lineage relationships between cells.

"Understanding how an organism is made requires knowing which genes are turned on or off as cells make fate decisions, not just the static sequence of a genome," Megason said. "This is the first technological approach that has allowed us to systematically and quantitatively address this question."

In the study co-led by Schier, the research team used Drop-Seq -- a single-cell sequencing technology developed by researchers at HMS and the Broad Institute of MIT and Harvard -- to study zebrafish embryos over 12 hours at high time resolution. Teaming with Aviv Regev, core member at the Broad, Schier and colleagues reconstructed cell trajectories through a computational method they named URD, after the Norse mythological figure who decides all fates.

Schier and colleagues profiled more than 38,000 cells and developed a cellular "family tree" that revealed how gene expression in 25 cell types changed as the cells specialized. By combining those data with spatial inference, the team was also able to reconstruct the spatial origins of the various cell types in the early zebrafish embryo.

Recipe for success

In both species, the teams' findings mirrored much of what was previously known about the progression of embryonic development, a result that underscored the power of the new approaches. But the analyses were unprecedented in revealing in comprehensive detail the cascades of events that take cells from early progenitor or "generalist" states to more specialized states with narrowly defined functions.

The teams identified otherwise difficult-to-detect details such as rare cell types and subtypes and linked new and highly specific gene expression patterns to different cell lineages. In several cases, they found cell types emerging far earlier than was previously thought.

For scientists striving to answer questions about human disease, these data could be powerfully illuminating. In regenerative medicine, for example, researchers have for decades aimed to manipulate stem cells toward specific fates with the goal of replacing defective cells, tissues or organs with functional ones. Newly gleaned details about the sequence of gene expression changes that precipitate the emergence of specific cell types can propel these efforts further.

"With these datasets, if someone wants to make a specific cell type, they now have the recipe for the steps that those cells took as they formed in the embryo," Klein said. "We've in some sense established a gold standard reference for how complex differentiation processes actually progress in embryos, and set an example for how to systematically reconstruct these types of processes."

When combined with one of the core concepts in biological inquiry -- the idea of disrupting a system to study what happens -- single-cell sequencing can yield insights difficult to attain before, Klein said.

As a proof of principle, Klein, Megason and colleagues used the CRISPR/Cas9 gene editing system to create zebrafish with a mutant form of chordin, a gene involved in determining the back-to-front orientation of a developing embryo. Schier and colleagues took a similar approach by profiling zebrafish with a mutation in a different patterning gene known as one-eyed pinhead.

When analyzed with single-cell sequencing, the teams confirmed previously known descriptions of chordin and one-eyed pinhead mutants, and could describe in detail or even predict the effects of these mutations on developing cells and nascent tissues across the whole embryo.

Unexpectedly, the groups independently found that, at the single-cell level, gene expression was the same in mutant and wild-type embryos, despite the loss of an essential signaling pathway. The proportions of different cell types, however, changed.

"This work only became possible through recent technologies that let us analyze gene expression in thousands of individual cells," Schier said. "Now the scale is much larger, so that we can reconstruct the trajectory of almost all cells and all genes during embryogenesis. It is almost like going from seeing a few stars to seeing the entire universe."

Rethinking definitions

The research teams also demonstrated how these data can be mined to answer long-standing fundamental questions in biology.

When Klein, Kirschner, Megason and colleagues compared cell-state landscapes between zebrafish and frog embryos, they observed mostly similarities. But their analyses revealed numerous surprises as well. One such observation was that genes marking cell states in one species were often poor gene markers for the same cell state in the other species. In several instances, they found that the DNA sequence of a gene -- and the structure of the protein it encodes -- could be nearly identical between species but have very different expression patterns.

"This really shocked us, because it goes against all the intuition we had about development and biology," Klein said. "It was a really uncomfortable observation. It directly challenges our idea of what it means to be a certain 'cell type.'"

The reason that these differences were not spotted before, the researchers hypothesize, is that computational analyses "pay attention" to data in a way fundamentally different from how humans do.

"I think this reflects some level of confirmation bias. When scientists find something conserved between species, they celebrate it as a marker," Megason said. "But often, all the other nonconserved features are ignored. Quantitative data helps us move past some of these biases."

In another striking finding, the teams observed that the process of cell differentiation into distinct cell types -- which is commonly thought to occur in a tree-like structure where different cell types branch off from a common ancestor cell -- can form "loops" as well as branches.

For example, the neural crest -- a group of cells that give rise to diverse tissue types including smooth muscle, certain neurons and craniofacial bone -- initially emerges from neural and skin precursors, but is well-known to generate cells that appear almost identical to bone and cartilage precursors.

The new results suggest that similar loops might occur in other situations. That cells in the same state can have very different developmental histories suggests that our hierarchical view of development as a "tree" is far too simplified, Klein said.

All three teams also identified certain cell populations that existed in a kind of intermediate "decision making" state. Schier and colleagues found that, at certain key developmental branch points, cells appeared to go down one developmental trajectory but then changed their fate to another trajectory.

Klein, Megason, Kirschner and colleagues made a related observation that, early in development, some cells activated two distinct developmental programs. Though those intermediate cells would eventually adopt a single identity, these discoveries add to the picture of how cells develop their eventual fate and hint that there may be factors beyond genes involved in directing cell fate.

"With multilineage cells, we have to start wondering if their final fate is being determined by some selective force or interaction with the environment, rather than just genetic programs," Kirschner said.

Future foundation


The newly generated data sets and the new tools and technologies developed as part of these studies lay the foundation for a wide spectrum of future exploration, according to the authors.

Developmental biologists can gather more and higher quality data on many species, follow embryos further in time and perform any number of perturbation experiments, all of which can help improve our understanding of the fundamental rules of biology and disease.

These resources can also serve as a focal point for collaboration and interaction since most labs do not have the depth of expertise needed to exploit all the data and information generated, the authors noted.

"I think these studies are creating a real sense of community, with researchers raising questions and interacting with each other in a way that harkens back to earlier times in the study of embryology," Kirschner said.

The three studies, Schier said, are an example of how the scientific community can work on complementary questions to answer important questions in biology.

"Instead of competing, our groups were in regular contact over the past two years and coordinated the publication of our studies," he said. "And it is great how complementary the three papers are -- each highlights different ways such complex data sets can be generated, analyzed and interpreted."

The next conceptual leap, the teams suggest, will be to better understand how cell-fate decisions are made.

"Right now, we have a roadmap, but it doesn't tell us what the signs are," Megason said. "What we need to do is figure out the signals that direct cells down certain roads, and what the internal mechanisms are that allow cells to make those decisions."

Read more at Science Daily

Dinosaurs' tooth wear sheds light on their predatory lives

This figure shows microwear patterns on the teeth of three theropods.
Predatory, bird-like theropod dinosaurs from the Upper Cretaceous (100.5-66 million years ago) of Spain and Canada all relied on a puncture-and-pull bite strategy to kill and consume their prey. But close examination of patterns of wear and modeling of their serrated, blade-like teeth reported in Current Biology on April 26 also suggest that these dinosaurs weren't necessarily in direct competition for their next meal. Some of them apparently preyed on larger, struggling prey, while others stuck to softer or smaller fare.

"All these dinosaurs were living at the same time and place, so it is important to know if they were competing for food resources or if they were aiming for different prey," says Angelica Torices of Universidad de La Rioja, Spain. "Through this work we [can] begin to understand the interactions between these predatory dinosaurs in the ecosystem a bit better.

"We find that, in general, predatory coelurosaurian dinosaurs bite in the same way through a puncture-and-pull system, but troodontids and dromaeosaurids may have preferred different prey," she adds, noting that troodontids apparently favored prey requiring lower bite forces than the prey of dromaeosaurids. Coelurosaurians are a group of theropod dinosaurs more closely related to birds than to other theropods, such as the allosaurs.

Torices has always had an interest in the teeth of carnivorous dinosaurs. At first, her goal was to match tooth remains to the dinosaur species they had come from. Over time, she grew curious about how various dinosaur species used their teeth, how that related to specific tooth shapes and sizes, and what she might learn about dinosaurs' lives based on that.

Torices first examined the microwear, or patterns of small scratches on the teeth, to see whether she could establish any pattern in the way various dinosaurs were eating. She, along with colleagues including Ryan Wilkinson from the University of Alberta, Canada, also used a modeling approach called finite elements analysis, commonly used to solve problems in engineering and mathematical physics, to explore how the dinosaurs' teeth most likely behaved at different cutting angles.

Both approaches led to the same general conclusion, she says. All of the dinosaurs studied employed a puncture-and-pull feeding movement, in which parallel scratches form while they bite down into prey, followed by oblique scratches as the head is pulled backwards with the jaws closed, the researchers report. However, they found, the different tooth shapes performed differently under a variety of simulated biting angles.

The evidence suggests that Dromaeosaurus and Saurornitholestes were well adapted for handling struggling prey or for processing bone as part of their diet. By comparison, Troodon teeth were more likely to fail at awkward bite angles. The findings suggest that troodontids may have preferred softer prey such as invertebrates, smaller prey that required a less powerful bite or could be swallowed whole, or immobile prey such as carrion.

Read more at Science Daily

Brain Reconstructions Suggest Reasons for the Decline of Neanderthals

Skulls are displayed as part of the Neanderthal exhibition at the Musee de l'Homme in Paris on March 26, 2018.
Since the brain is made of soft tissue, it starts to decompose just minutes after death due to autolysis, or self-digestion, which usually begins in the brain and liver. Putrefaction, a molecular process that turns tissue into gases, liquids, and salts, follows.

There are rare cases of ancient brains, such as the 2,600-year-old Heslington Brain, being found "pickled" in certain wet, anoxic environments, but prehistoric brains in the archaeological record are very few.

Scientists therefore lack intact Neanderthal and early Homo sapiens brains to study. But an innovative team has just reconstructed such brains using a technique called computational neuroanatomy. The 3D models they produced, reported in the journal Scientific Reports, are the first of their kind.

"Our attempt to actually reconstruct the brain inside of the fossil crania is completely new to the field," co-author Naomichi Ogihara of Keio University's Department of Mechanical Engineering told Seeker.

Ogihara, co-senior authors Norihiro Sadato and Takeru Akazawa, and their colleagues used virtual casts of four Neanderthal and four early Homo sapiens skull fossils to reconstruct the size of their brains. The Neanderthals lived in what are now Israel, France, and Gibraltar. The early Homo sapiens came from Israel and the Czech Republic.

The authors then used MRI data from the brains of 1,185 living humans to model the average human brain. They also considered non-human primate brains and the skull of a Cro-Magnon individual who lived 32,000 years ago.

The resulting computer model was then deformed to match the shape of the Neanderthal and early Homo sapiens skull casts. This allowed the researchers to predict what the brains of these humans might have looked like, and how individual brain regions could have differed between the two groups.

It should be noted that many researchers believe Neanderthals were members of our species. Ogihara told Seeker there is ample evidence "showing that Neanderthals and Homo sapiens interbred. We believe so, too."

As a result, the majority of people alive today retain Neanderthal DNA. These include people whose heritage is North African, as well as people with Eurasian ancestry.

It is even possible that the early modern humans included in the study were related to Neanderthals.

“We certainly cannot deny the possibility that the specimens we used already interbred with Neanderthals,” Ogihara told Seeker.

But, he added, “there is no obvious reason to think that way, so we basically assumed that the individuals used in the present study had not interbred with Neanderthals.”

The computer models confirmed prior findings that Neanderthal brains were larger than those of anatomically modern humans. The researchers, however, do not believe that bigger is always better when it comes to brains.

The international team concluded that Neanderthals and early modern humans possessed significantly different brain morphologies; notably, early modern humans had a larger cerebellum. Since this part of the brain is associated with language comprehension and production, working memory, and cognitive flexibility, the researchers believe that early modern humans were superior to Neanderthals in these abilities.

"We are not saying that Neanderthals were incapable of processing languages," Ogihara said. "We think they could communicate verbally, but their social ability using languages was probably limited because of the brain structural differences."

It is possible that Neanderthals relied more on visual information. They are thought to have been the world's first artists. The earliest known cave art, reported this year, consists of 65,000-year-old paintings found in three Neanderthal caves in Spain.

Ogihara and his colleagues determined that Neanderthals had a larger occipital lobe than did early modern humans.

"The occipital lobe is the visual processing center," Ogihara explained. "Neanderthals possibly required the larger occipital lobe to compensate for low light levels in Europe."

Because of this, Neanderthals may have been unable to evolve the cerebellum expansion seen in early modern humans.

Brain comparison studies come with inherent challenges, as even brains within a particular species today are not the same. The brains of male humans, for example, tend to be slightly larger than those of females, but the majority of scientists believe that brain size does not necessarily correlate with intelligence.

A bigger Neanderthal brain does not appear to have been advantageous when early modern humans began to dominate their former territories. While Neanderthals — via their DNA — were absorbed into modern Homo sapiens to a certain extent, their extinction is widely believed to have begun around 40,000 years ago. This period coincides with greater numbers of early modern humans migrating into Eurasia.

Ogihara said that his team's research cannot definitively conclude what led to the disappearance of Neanderthals. But, he said, the study shows that innate morphological differences in brain structure existed between Neanderthals and early Homo sapiens, which may have led to differences in cognitive and social abilities.

"Although the difference could be subtle,” he said, “such a subtle difference may become significant in terms of natural selection."

The jury is still out on Neanderthal brain power. Joao Zilhao of the Catalan Institute for Research and Advanced Studies was a member of the team that reported the Neanderthal cave art.

Zilhao told Seeker that both Neanderthals and early Homo sapiens must have possessed the cognitive hardware required for advanced symbolic behavior, such as cave art and body ornamentation.

“The fact that we find the capability in both Neandertals and early modern humans implies that said capability existed in the common ancestor around 500,000 years ago,” he said. “Ergo, I would think it entirely logical to consider that the null hypothesis is the co-evolution of brain, language, and symbolic thinking, and that the fundamentals of human cognition as we know it were in place ever since we see people with big brains in the fossil record — i.e., since at least 1.5 million years ago."

Read more at Seeker

Can We Stop a ‘Mass Extinction’ of Human Languages?

Tribal people of Papua prepare a feast during Bakar Batu party on February 23, 2015 in Wamena in Papua, Indonesia.
There are more than 7,000 languages on Earth, yet half of the world’s 7.6 billion people speak just 24 of them and 95 percent speak just 400 of them. That leaves five percent of the global population spread across 6,600 different languages, hundreds of them now spoken by fewer than ten people.
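The skew in these figures is easier to feel as averages. The following back-of-envelope sketch uses only the article's numbers; the averages it prints are illustrative, not real per-language speaker counts.

```python
# Back-of-envelope arithmetic using the article's figures (illustrative only).
world_pop = 7.6e9
total_langs = 7_000

# Half the world speaks just 24 languages...
big24_avg = (0.5 * world_pop) / 24      # average speakers per top-24 language

# ...while the remaining 5 percent is spread across ~6,600 languages.
long_tail_pop = 0.05 * world_pop
long_tail_avg = long_tail_pop / 6_600   # average speakers per long-tail language

print(f"Top-24 average:    {big24_avg:,.0f} speakers")
print(f"Long-tail average: {long_tail_avg:,.0f} speakers")
print(f"Ratio: ~{big24_avg / long_tail_avg:,.0f}x")
```

On the article's figures, an average top-24 language has roughly 2,750 times as many speakers as an average long-tail language.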

The rate of language loss has reached such a breakneck pace that some scholars predict we’ll lose 90 percent of the world’s languages in the next century, akin to a linguistic mass extinction event.

It’s not an accident that linguists have borrowed terms from biology to classify languages as vulnerable, endangered, or extinct. As many ethnobiologists and conservationists have come to understand, nature and culture are both products of evolution, and many of the same forces that threaten biological diversity also endanger linguistic diversity.

Jonathan Loh is an honorary research fellow at the University of Kent and the co-author, with Dave Harmon, of a 2014 report for the World Wildlife Fund called “Biocultural Diversity: Threatened Species, Threatened Languages.” Loh and Harmon define biocultural diversity as the sum of evolutionary processes that have produced distinctive species of plants and animals, as well as distinctive cultures and languages. Thanks to shifts in human activity, all are under threat.

The concept that languages evolve in similar ways to biological species isn’t new, Loh told Seeker.

"The formation of different languages and of distinct species, and the proofs that both have been developed through a gradual process, are curiously parallel," Charles Darwin wrote in The Descent of Man. "Dominant languages and dialects spread widely, and lead to the gradual extinction of other tongues. A language, like a species, when once extinct, never, as Sir C. Lyell remarks, reappears."

The reason Darwin was so knowledgeable about the evolution of language, Loh told Seeker, was that a century before Darwin and others were arguing that all species evolved from common ancestors, linguists like William Jones were doing the same thing with language. Jones, an 18th-century British judge in India, spoke more than a dozen languages and took an interest in Ancient Sanskrit, which he discovered had striking similarities to Greek and Latin.

“Which completely blew his mind, because he could think of no reason why there should be,” said Loh.

The answer, Jones decided, was that they must have branched off from some even more ancient tongue, which he called Proto-Indo-European. Jones and others created the first “family tree” of all the languages that diverged and re-converged from that original language — last spoken an estimated 9,000 years ago — including seemingly unrelated languages like Russian, Hindi, Spanish, Swedish, and English.

With the discovery of DNA, biologists began to understand how life on Earth, which began as single-celled organisms 3.9 billion years ago, evolved into the stunning diversity of species on the planet today. Around 540 million years ago, for example, favorable climate and atmospheric conditions led to the Cambrian Explosion, when scientists believe the genetic components came together to jump-start the evolution of multicellular life.

Some time after Homo sapiens were on the scene 200,000 years ago, explained Loh, there was a second explosion — a cultural explosion. And the trigger was the development of language. We don’t know exactly when and where human language first appeared, but language, like DNA, was the vehicle by which information could be passed from one generation to another.

This is where biological evolution and cultural evolution show their fascinating similarities. In natural selection, the gene is the basic currency. If a gene inherited from two parents offers a competitive advantage, it’s more likely to be passed on to the next generation. Biological diversity is powered by constant genetic mutations, which, if advantageous, can branch off into new species.

The evolution of culture, according to influential thinkers like Richard Dawkins and Daniel Dennett, has its own currency: the meme. A meme is a unit of cultural knowledge — like a song, story, recipe, art, or style of dress — that can be passed along primarily through the use of language. Memes, like genes, mutate as they pass from one brain to the next; if these mutated memes gain traction, they may evolve into new cultures and languages.

Interestingly, the regions of the world with the greatest biological diversity are also the ones with the most languages. In general, language diversity follows Rapoport’s rule, which states that species density is highest at the equator and thins out as you move north and south toward the poles. And there are also distinct “hotspots” of biocultural diversity across the Amazon Basin, Central Africa, and the Indonesia/Malaysia region, home to the undisputed champion of linguistic diversity: New Guinea.

Of the 7,000 languages in the world, 1,000 of them are spoken exclusively in New Guinea. With a population of less than 12 million, that means that 14 percent of the world’s languages are spoken by 0.14 percent of the global population.

One of the theories explaining why linguistic diversity blossoms in the tropics is that lots of rivers and mountains divide the landscape, isolating small pockets of people. As Darwin found on the Galapagos Islands, geographic isolation allows for distinct traits to evolve out of the same species. That may help explain the language diversity in New Guinea, said Loh, which is carved up by rivers and mountains, and where tribes are not only isolated, but often hostile to outsiders. 

The chief difference between biological and linguistic evolution is the speed of change.

“Biological evolution takes place over millions of years. Languages and cultures evolve incredibly fast by comparison,” said Loh. “If you go back in English to Chaucer, who died only 600 years ago, it’s really hard to read and understand his English. Within 25 generations, that ability to understand has gone, because the language has changed so much.”

Because the rate of change is so fast, languages can also go extinct much faster than biological species. An estimated six percent of global human languages have gone extinct since 1970, for example, while only one percent each of mammal, bird, and amphibian species have disappeared in that same time span.
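The two figures above imply very different paces of loss. A quick sketch, using the article's numbers and assuming a constant annual loss rate in each case (a simplification), shows how much the predicted rate would have to accelerate over the observed one:

```python
# Compare the article's two rates: ~6% of languages lost since 1970 (~48 years)
# versus a predicted 90% lost over the next century. Assumes a constant annual
# loss rate in each period, which is a simplification for illustration.
observed_loss, observed_years = 0.06, 2018 - 1970
predicted_loss, predicted_years = 0.90, 100

# A constant annual rate r satisfies (1 - r) ** years == 1 - total_loss.
r_observed = 1 - (1 - observed_loss) ** (1 / observed_years)
r_predicted = 1 - (1 - predicted_loss) ** (1 / predicted_years)

print(f"Observed since 1970: ~{100 * r_observed:.2f}% of languages lost per year")
print(f"Predicted next century: ~{100 * r_predicted:.2f}% per year")
```

The predicted scenario (roughly 2.3 percent per year) is more than an order of magnitude faster than the rate observed since 1970 (roughly 0.13 percent per year).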

The main driver of language endangerment and extinction is a process called language shift, when speakers switch from a native, typically indigenous tongue to the dominant national language. John Sullivant, a language data curator with the Archive of the Indigenous Languages of Latin America, told Seeker that language shift happens for a variety of reasons, but is largely driven by the level of contact with the national culture and the marginalization of indigenous communities.

“In Mexico,” Sullivant said, “nearly every indigenous group is in very close contact with Spanish. And that large amount of contact and the ability to move outside of the community — or having to move outside of the community for various reasons — makes the transmission of the language from parents to children that much more precarious.”

It’s clear that economic forces threaten both biological diversity and linguistic diversity. Members of economically marginalized indigenous communities often migrate to bigger cities or even other countries to support their families, shifting to the dominant language for work. Similarly, the globalization of manufacturing increases the plundering of natural resources, which drives habitat loss, one of the main ways that endangered species go extinct.

What’s doubly troubling is that when a language dies out, so does a wealth of knowledge about native plants and animals, exactly the type of information that conservationists need to protect critical species. Some conservation biologists estimate that indigenous communities, which cluster in regions with the greatest natural biodiversity, are the stewards of 99 percent of the world’s genetic diversity.

Richard Stepp is an ethnobiologist at the University of Florida who has conducted fieldwork among indigenous Mayan communities in Mexico, Belize, and Guatemala.

“In some of these cultures, the single largest category of nouns are plant names. They may have thousands of plant names,” Stepp told Seeker. “So the biodiversity is intimately linked to the language.”

Languages, like species, deserve to be preserved for their own sake, but there are also more utilitarian reasons to want to preserve the knowledge encoded in indigenous languages. For example, only a fraction of the world’s plants have been exhaustively studied for their medicinal properties, but it’s very likely that indigenous cultures have cumulatively tested just about everything.

“For a lot of these cultures, their primary healthcare is what they find growing around their house,” said Stepp. “In order to know what’s on the shelf of that living pharmacy, you have to have the language.”

Loh believes that more linguists working with indigenous communities need to receive basic training in biology and botany so that they can capture the depth of the scientific knowledge encoded in endangered languages before they disappear. Stepp said that he brings along specialists for that very reason.

“I’ve seen instances where 5-year-old kids know more plants than adult Westerners. They can easily name 150 plant species,” Stepp said. “This knowledge is gained at a really early age, and not only allows them to survive, but to live a very rich life through the knowledge of food plants.”

Read more at Seeker

Apr 26, 2018

Magma ocean may be responsible for the moon's early magnetic field

The bottom-most layer of the moon's mantle melts to form a metal-rich "basal magma ocean" that sits on top of the moon's metal core. Convection in this layer may have driven a dynamo, creating a magnetic field which would have been recorded at the surface by the cooling lunar crust, including the samples brought back by Apollo astronauts.
Around four billion years ago, the Moon had a magnetic field that was about as strong as Earth's magnetic field is today. How the Moon, with a much smaller core than Earth's, could have had such a strong magnetic field has been an unsolved problem in the history of the Moon's evolution.

Scientist Aaron Scheinberg of Princeton, with Krista Soderlund from the University of Texas Institute for Geophysics and Linda Elkins-Tanton of Arizona State University, set out to determine what may have powered this early lunar magnetic field. Their results, along with a new model for how this may have happened, were recently published in Earth and Planetary Science Letters.

A new model

Earth's magnetic field protects our planet by deflecting most of the solar wind, whose charged particles would otherwise strip away the ozone layer that protects the Earth from harmful ultraviolet radiation.

While Earth's magnetic field is generated by the motions of its convecting liquid metal outer core, known as the dynamo, the Moon's core is too small to have produced a magnetic field of that magnitude.

So, the research team proposed a new model for how the magnetic field could have reached Earth-like levels. In this scenario, the dynamo is powered not by the Moon's small metal core, but by a heavy layer of molten (liquid) rock that sits on top of it.

In this proposed model, the bottom-most layer of the Moon's mantle melts to form a metal-rich "basal magma ocean" that sits on top of the Moon's metal core. Convection in this layer then drives the dynamo, creating a magnetic field.

"The idea of a basal magma ocean dynamo had been proposed for the early Earth's magnetic field, and we realized that this mechanism may also be important for the Moon," says co-author Soderlund.

Soderlund further explains that a partially molten layer is thought to still exist at the base of the lunar mantle today. "A strong magnetic field is easier to achieve at the Moon's surface if the dynamo operated in the mantle rather than in the core," she says, "because magnetic field strength decreases rapidly the farther away it is from the dynamo region."
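Soderlund's point about distance can be illustrated with a simple dipole scaling, in which field strength falls off as the cube of distance from the dynamo region. The radii below are rough assumed values for illustration, not numbers from the paper.

```python
# Illustrative dipole scaling, B(r) ~ (r_top / r) ** 3, showing why a dynamo in
# a basal magma ocean yields a stronger surface field than one in the small
# core. The core and magma-ocean radii are assumptions, not values from the paper.
R_MOON = 1737.0   # lunar radius, km
R_CORE = 350.0    # assumed radius of the metal core, km
R_BMO = 550.0     # assumed radius of the top of the basal magma ocean, km

def surface_factor(r_top_km):
    """Fraction of the dynamo-top field strength remaining at the surface."""
    return (r_top_km / R_MOON) ** 3

core_factor = surface_factor(R_CORE)
bmo_factor = surface_factor(R_BMO)
print(f"Core dynamo retains {core_factor:.4f} of its strength at the surface")
print(f"Magma-ocean dynamo retains {bmo_factor:.4f}"
      f" -- about {bmo_factor / core_factor:.1f}x stronger")
```

With these assumed radii, moving the dynamo from the core to the base of the mantle boosts the surface field by roughly a factor of four, before any difference in the dynamos themselves.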

In the team's simulations of the Moon's core dynamo, the researchers kept finding that the lower layer of the Moon's mantle was overheating and melting. Initially, they tried to focus on cases without melting, which were easier to model, but eventually realized that the melting process was the key to their new model.

"Once we started thinking of that melting as a feature, instead of a bug," says Scheinberg, "the pieces started fitting together and we wondered if the melting that we saw in the models could produce a metal-rich magma ocean to power the strong early field."

A later weak magnetic field

Further along in the Moon's evolution (around 3.56 billion years ago), there is evidence that the strong magnetic field around the Moon gave way to a weak one that persisted until relatively recently. The team's new model may help explain this phenomenon as well.

"Our model provides an elegant potential solution," says Scheinberg. "As the Moon cooled, the magma ocean would have solidified, while the core dynamo would have continued to create the later weak field."

"We're excited by this result because it explains fundamental observations about the Moon -- its early, strong magnetic field and its subsequent weakening and then disappearance -- using first-order processes already supported by other observations," adds co-author Elkins-Tanton.

Beyond providing a new model to build from, this research may also provide a better understanding of planetary magnetic field generation elsewhere in our solar system and beyond.

Read more at Science Daily

Molecular evolution: How the building blocks of life may form in space

Star forming region (Pillars of Creation) in the Eagle Nebula. Low-energy electrons, created in matter by space radiation (e.g., galactic cosmic rays, GCR, etc.), can induce formation of glycine (2HN-CH2-COOH) in astrophysical molecular ices; here, icy grains of interstellar dust (or ices on planetary satellites) are simulated by ammonia, methane and carbon dioxide condensed at 20 K on Pt in UHV, and irradiated by 0-70 eV LEEs.
In a laboratory experiment that mimics astrophysical conditions, with cryogenic temperatures in an ultrahigh vacuum, scientists used an electron gun to irradiate thin sheets of ice covered in basic molecules of methane, ammonia and carbon dioxide. These simple molecules are ingredients for the building blocks of life. The experiment tested how the combination of electrons and basic matter leads to more complex biomolecule forms -- and perhaps eventually to life forms.

"You just need the right combination of ingredients," author Michael Huels said. "These molecules can combine, they can chemically react, under the right conditions, to form larger molecules which then give rise to the bigger biomolecules we see in cells like components of proteins, RNA or DNA, or phospholipids."

The right conditions, in space, include ionizing radiation. In space, molecules are exposed to UV rays and high-energy radiation including X-rays, gamma rays, stellar and solar wind particles and cosmic rays. They are also exposed to low-energy electrons, or LEEs, produced as a secondary product of the collision between radiation and matter. The authors examined LEEs for a more nuanced understanding of how complex molecules might form.

In their paper, published in the Journal of Chemical Physics by AIP Publishing, the authors exposed multilayer ice composed of carbon dioxide, methane and ammonia to LEEs and then used a type of mass spectrometry called temperature programmed desorption (TPD) to characterize the molecules created by the LEEs.

In 2017, using a similar method, these researchers were able to create ethanol, a nonessential molecule, from only two ingredients: methane and oxygen. But these are simple molecules, not nearly as complex as the larger molecules that are the stuff of life. This new experiment has yielded a molecule that is more complex, and is essential for terrestrial life: glycine.

Glycine is an amino acid, made of hydrogen, carbon, nitrogen and oxygen. Showing that LEEs can convert simple molecules into more complex forms illustrates how life's building blocks could have formed in space and then arrived on Earth from material delivered via comet or meteorite impact.

In their experiment, one molecule of glycine was formed for every 260 electrons of exposure. Seeking to know how realistic this rate of formation would be in space, not just in the laboratory, the researchers extrapolated to estimate the probability that a carbon dioxide molecule would encounter both a methane molecule and an ammonia molecule, and how much radiation the three might encounter together.
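The reported yield supports a quick energy estimate. The 70 eV figure below is the upper end of the electron energy range quoted for the LEEs, and the electron-volt conversion is a standard physical constant; the result is a rough upper bound, not a number from the paper.

```python
# Rough yield arithmetic from the reported figures: one glycine molecule per
# ~260 electrons, with electron energies of 0-70 eV (70 eV is an upper bound).
EV_TO_J = 1.602e-19          # joules per electron-volt (standard constant)

electrons_per_glycine = 260
yield_per_electron = 1 / electrons_per_glycine

# Upper bound on the energy invested per glycine molecule formed.
max_energy_eV = electrons_per_glycine * 70
max_energy_J = max_energy_eV * EV_TO_J

print(f"Yield: ~{yield_per_electron:.4f} glycine molecules per electron")
print(f"Energy cost: at most ~{max_energy_eV:,} eV (~{max_energy_J:.1e} J) per molecule")
```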

Read more at Science Daily

Archaeologists on ancient horse find in Nile River Valley

The Tombos horse was discovered in 2011. The ancient horse is dated to the Third Intermediate Period, 1050-728 B.C.E., and it was found more than 5 feet underground in a tomb. The horse, with some chestnut-colored fur remaining, had been buried in a funeral position with a burial shroud. The discovery provides a window into human-animal relationships more than 3,000 years ago.
An ancient horse burial at Tombos along the Nile River Valley shows that a member of the horse family thousands of years ago was more important to the culture than previously thought, which provides a window into human-animal relationships more than 3,000 years ago.

The research findings are published in Antiquity. The Tombos horse was discovered in 2011, and members of the Purdue team -- professor Michele Buzon and alumna Sarah Schrader -- played a part in the excavation and analysis. The horse is dated to the Third Intermediate Period, 1050-728 B.C.E., and it was found more than 5 feet underground in a tomb. The horse, with some chestnut-colored fur remaining, had been buried in a funeral position with a burial shroud.

"It was clear that the horse was an intentional burial, which was super fascinating," said Buzon, a professor of anthropology. "Remnants of fabric on the hooves indicate the presence of a burial shroud. Changes on the bones and iron pieces of a bridle suggest that the horse may have pulled a chariot. We hadn't found anything like this in our previous excavations at Tombos. Animal remains are very rare at the site."

Buzon, a bioarchaeologist, has worked with Stuart Tyson Smith, anthropology professor at the University of California, Santa Barbara, for 18 years at this site in modern-day Sudan, and both are principal investigators on the project. Buzon uses health and cultural evidence from more than 3,000-year-old burial sites to understand the lives of Nubians and Egyptians during the New Kingdom Empire. This is when Egyptians colonized the area, in about 1500 B.C.E., to gain access to trade routes on the Nile River. Over the years, hundreds of artifacts, including pottery, tools, carvings, and dishes, were unearthed at this burial site for about 200 individuals.

"Finding the horse was unexpected," Schrader said. "Initially, we weren't sure if it was modern or not. But as we slowly uncovered the remains, we began to find artifacts associated with the horse, such as the scarab, the shroud and the iron cheekpiece. At that point, we realized how significant this find was. Of course, we became even more excited when the carbon-14 dates were assessed and confirmed how old the horse was."

Schrader, who graduated from Purdue in 2013 with a doctoral degree in anthropology, is an assistant professor of human osteoarchaeology at Leiden University in The Netherlands. Schrader is lead author on this article, and she helped frame this find within the context of Nubian history.

Once the archaeologists discovered the horse, Sandra Olsen, curator-in-charge at the Biodiversity Institute and Natural History Museum at the University of Kansas and a well-known ancient horse expert, was invited to Purdue to analyze the horse skeleton. Buzon coordinated the analysis between the team, and she established the chronology of the horse via radiocarbon dating.

Read more at Science Daily

Projectile cannon experiments show how asteroids can deliver water

Special delivery. Experiments using a high-powered projectile cannon suggest that asteroids can deliver surprising amounts of water when they smash into planetary bodies.
Experiments using a high-powered projectile cannon show how impacts by water-rich asteroids can deliver surprising amounts of water to planetary bodies. The research, by scientists from Brown University, could shed light on how water got to the early Earth and help account for some trace water detections on the Moon and elsewhere.

"The origin and transportation of water and volatiles is one of the big questions in planetary science," said Terik Daly, a postdoctoral researcher at Johns Hopkins University who led the research while completing his Ph.D. at Brown. "These experiments reveal a mechanism by which asteroids could deliver water to moons, planets and other asteroids. It's a process that started while the solar system was forming and continues to operate today."

The research is published in Science Advances.

The source of Earth's water remains something of a mystery. It was long thought that the planets of the inner solar system formed bone dry and that water was delivered later by icy comet impacts. While that idea remains a possibility, isotopic measurements have shown that Earth's water is similar to water bound up in carbonaceous asteroids. That suggests asteroids could also have been a source for Earth's water, but how such delivery might have worked isn't well understood.

"Impact models tell us that impactors should completely devolatilize at many of the impact speeds common in the solar system, meaning all the water they contain just boils off in the heat of the impact," said Pete Schultz, co-author of the paper and a professor in Brown's Department of Earth, Environmental and Planetary Sciences. "But nature has a tendency to be more interesting than our models, which is why we need to do experiments."

For the study, Daly and Schultz used marble-sized projectiles with a composition similar to carbonaceous chondrites, meteorites derived from ancient, water-rich asteroids. Using the Vertical Gun Range at the NASA Ames Research Center, they blasted the projectiles at a bone-dry target material made of pumice powder at speeds of around 5 kilometers per second (more than 11,000 miles per hour). The researchers then analyzed the post-impact debris with an armada of analytical tools, looking for signs of any water trapped within it.
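The mismatch Schultz describes between models and experiments is easy to see in a back-of-envelope comparison: at the experiment's 5-kilometer-per-second impact speed, the kinetic energy per kilogram far exceeds what it takes to boil water. The latent-heat value below is a textbook constant, not a number from the paper.

```python
# Why models expect impactors to devolatilize: the specific kinetic energy at a
# 5 km/s impact dwarfs the energy needed to vaporize water (rough estimate).
v = 5_000.0               # impact speed, m/s
ke_per_kg = 0.5 * v ** 2  # specific kinetic energy, J/kg

L_VAP_WATER = 2.26e6      # latent heat of vaporization of water, J/kg (textbook value)

print(f"Kinetic energy: {ke_per_kg:.2e} J/kg")
print(f"Enough to vaporize water roughly {ke_per_kg / L_VAP_WATER:.1f} times over")
```

That the experiments still trap up to 30 percent of the impactor's water, despite this energy surplus, is what makes the melt-and-breccia capture mechanism notable.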

They found that at impact speeds and angles common throughout the solar system, as much as 30 percent of the water indigenous in the impactor was trapped in post-impact debris. Most of that water was trapped in impact melt, rock that's melted by the heat of the impact and then re-solidifies as it cools, and in impact breccias, rocks made of a mish-mash of impact debris welded together by the heat of the impact.

The research gives some clues about the mechanism through which the water was retained. As parts of the impactor are destroyed by the heat of the collision, a vapor plume forms that includes water that was inside the impactor.

"The impact melt and breccias are forming inside that plume," Schultz said. "What we're suggesting is that the water vapor gets ingested into the melts and breccias as they form. So even though the impactor loses its water, some of it is recaptured as the melt rapidly quenches."

The findings could have significant implications for understanding the presence of water on Earth. Carbonaceous asteroids are thought to be some of the earliest objects in the solar system -- the primordial boulders from which the planets were built. As these water-rich asteroids bashed into the still-forming Earth, it's possible that a process similar to what Daly and Schultz found enabled water to be incorporated in the planet's formation process, they say. Such a process could also help explain the presence of water within the Moon's mantle, as research has suggested that lunar water has an asteroid origin as well.

The work could also explain later water activity in the solar system. Water found on the Moon's surface in the rays of the crater Tycho could have been derived from the Tycho impactor, Schultz says. Asteroid-derived water might also account for ice deposits detected in the polar regions of Mercury.

Read more at Science Daily

Apr 25, 2018

To see the first-born stars of the universe

The galaxy cluster Abell 2744 lies at a distance of about 3.5 billion light-years and contains more than 400 member galaxies. The combined gravity of all the galaxies makes the cluster act as a lens to magnify the light from stars beyond including, the team hopes, the first stars to form in the universe.
About 200 to 400 million years after the Big Bang created the universe, the first stars began to appear. Ordinarily stars lying at such a great distance in space and time would be out of reach even for NASA's new James Webb Space Telescope, due for launch in 2020.

However, astronomers at Arizona State University are leading a team of scientists who propose that with good timing and some luck, the Webb Space Telescope will be able to capture light from the first stars to be born in the universe.

"Looking for the first stars has long been a goal of astronomy," said Rogier Windhorst, Regents' Professor of astrophysics in ASU's School of Earth and Space Exploration. "They will tell us about the actual properties of the very early universe, things we've only modeled on our computers until now."

Windhorst's collaborator, Frank Timmes, professor of astrophysics at the School of Earth and Space Exploration, adds, "We want to answer questions about the early universe such as, were binary stars common or were most stars single? How many heavy chemical elements were produced, cooked up by the very first stars, and how did those first stars actually form?"

Duho Kim, a School of Earth and Space Exploration graduate student of Windhorst's, worked on modeling star populations and dust in galaxies.

The other collaborators on the paper are J. Stuart B. Wyithe (University of Melbourne, Australia), Mehmet Alpaslan (New York University), Stephen K. Andrews (University of Western Australia), Daniel Coe (Space Telescope Science Institute), Jose M. Diego (Instituto de Fisica de Cantabria, Spain), Mark Dijkstra (University of Oslo), and Simon P. Driver and Patrick L. Kelly (both University of California, Berkeley).

The team's paper, published in the Astrophysical Journal Supplement, describes how the challenging observations can be done.

Gravity's magnifying lens

The first essential step in the task relies on the infrared sensitivity of the Webb Telescope. While the first stars were large, hot and radiated far-ultraviolet light, they lie so far away that the expansion of the universe has shifted their radiation peak from the ultraviolet to much longer infrared wavelengths. Thus their starlight drops into the Webb Telescope's infrared detectors like a baseball landing in a fielder's mitt.
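The stretch described above follows the standard redshift relation, in which the observed wavelength is the emitted wavelength times (1 + z). A minimal sketch, assuming representative formation redshifts of z ~ 10-20 for the first stars (illustrative values, not ones from the paper):

```python
# Cosmological redshift stretches the first stars' far-UV light into the
# infrared: lambda_observed = (1 + z) * lambda_emitted. The redshifts below
# are assumed, representative values for the first stars.
LYMAN_ALPHA_NM = 121.6   # rest-frame hydrogen Lyman-alpha wavelength, nm

def observed_wavelength_nm(rest_nm, z):
    """Wavelength after cosmological redshift by a factor of (1 + z)."""
    return rest_nm * (1 + z)

for z in (10, 15, 20):
    obs = observed_wavelength_nm(LYMAN_ALPHA_NM, z)
    print(f"z = {z}: {LYMAN_ALPHA_NM} nm -> {obs / 1000:.2f} micrometers")
```

At these redshifts the light lands between roughly 1.3 and 2.6 micrometers, squarely in the infrared range the Webb Telescope's detectors are built to catch.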

The second essential step is to use the combined gravity of an intervening cluster of galaxies as a lens to focus and magnify the light of the first-generation stars. Typical gravitational lensing can magnify light 10 to 20 times, but that's not enough to make a first-generation star visible to the Webb Telescope. For Webb, the candidate star's light needs boosting by a factor of 10,000 or more.
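The gap between typical lensing and what a first star needs can be put in astronomers' usual units: a magnification mu corresponds to a brightness boost of 2.5 * log10(mu) magnitudes (the standard conversion).

```python
# Convert lensing magnification to astronomical magnitudes:
# delta_m = 2.5 * log10(mu). Typical cluster lensing gives 10-20x,
# while a first star needs ~10,000x.
import math

def magnification_to_mag(mu):
    """Brightness boost in magnitudes for a lensing magnification mu."""
    return 2.5 * math.log10(mu)

for mu in (10, 20, 10_000):
    print(f"mu = {mu:>6}: {magnification_to_mag(mu):.1f} magnitudes brighter")
```

Typical lensing buys about 2.5 to 3.3 magnitudes; a caustic transit must deliver a full 10 magnitudes.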

To gain that much magnification calls for "caustic transits," special alignments where a star's light is greatly magnified for a few weeks as the galaxy cluster drifts across the sky between Earth and the star.

Caustic transits occur because a cluster of galaxies acting as a lens doesn't produce a single image like a reading magnifier. The effect is more like looking through a lumpy sheet of glass, with null zones and hot spots. A caustic is where magnification is greatest, and because the galaxies in the lensing cluster are spread out within it, they produce multiple magnifying caustics that trace a pattern in space like a spider web.

Playing the odds

How likely is such an alignment? Small but not zero, say the astronomers, and they note that the spider web of caustics helps by casting a wide net. Moreover, each caustic is asymmetrical, producing a sharp rise to full magnification if a star approaches from one side, but a much slower rise if it approaches from the other.

"Depending on which side of the caustic it approaches from, a first star would brighten over hours -- or several months," Windhorst explained. "Then after reaching a peak brightness for several weeks, it would fade out again, either slowly or quickly, as it moves away from the caustic line."

A key attribute of the first stars is that they formed out of the early universe's mix of hydrogen and helium, with no heavier chemical elements such as carbon, oxygen, iron, or gold. Blazingly hot and brilliantly blue-white, the first stars display a simple, textbook spectrum that serves as a fingerprint, as calculated by the ASU team using the open-source software package Modules for Experiments in Stellar Astrophysics (MESA).

Another object potentially visible through the same magnifying effect is an accretion disk around one of the first black holes to form after the Big Bang. Black holes would be the final evolutionary outcome of the most massive first stars. And if any such stars were in a two-star (binary) system, the more massive star, after collapsing to a black hole, would steal gas from its companion to form a flat disk feeding into the black hole.

An accretion disk transiting a caustic would display a different spectrum from a first star, with enhanced brightness at shorter wavelengths coming from the hot, innermost part of the disk compared with its cooler outer zones. The rise and decay in brightness would also take longer, though this effect would likely be harder to detect.

Accretion disks are expected to be more numerous because solitary first stars, being massive and hot, race through their lives in just a few million years before exploding as supernovas. However, theory suggests that an accretion disk in a black hole system could shine at least ten times longer than a solitary first star. All else being equal, this would increase the odds of detecting accretion disks.

It's educated guesswork at this stage, but the team calculates that an observing program which targets several galaxy clusters a couple of times a year for the lifetime of the Webb Telescope could find a lensed first star or black hole accretion disk. The researchers have selected some target clusters, including the Hubble Frontier Fields clusters and the cluster known as "El Gordo."

"We just have to get lucky and observe these clusters long enough," Windhorst said. "The astronomical community would need to continue to monitor these clusters during Webb's lifetime."

On beyond Webb

Which raises a point. While the Webb Space Telescope will be a technical marvel, it will not have a long operational lifetime like the Hubble Space Telescope. Launched in 1990, the Hubble Telescope is in low Earth orbit and has been serviced by astronauts five times.

The Webb Space Telescope, however, will be placed at a gravitationally stable point in interplanetary space, 1.5 million kilometers (930,000 miles) from Earth. It has been designed to operate for 5 to 10 years, which might with care stretch to about 15 years. But there's no provision for servicing by astronauts.

Accordingly, Windhorst notes that ASU has joined the Giant Magellan Telescope Organization. This is a consortium of universities and research institutions that will build its namesake telescope on a high and dry mountaintop at Las Campanas Observatory in Chile. The site is ideal for infrared observing.

Upon completion in 2026, the GMT will have a light-collecting surface 24.5 meters (80 feet) in diameter, built from seven individual mirrors. (The Webb Space Telescope's main mirror has 18 sections and a total diameter of 6.5 meters, or 21 feet.) The GMT mirrors are expected to achieve a resolving power 10 times greater than that of the Hubble Space Telescope in the infrared region of the spectrum.
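The resolving-power comparison follows from the diffraction limit, θ ≈ 1.22 λ/D: resolution improves in direct proportion to the aperture diameter D. A rough check (the 2-micron observing wavelength is an assumed value for illustration):

```python
ARCSEC_PER_RAD = 206_265  # arcseconds in one radian

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

lam = 2e-6  # 2 microns, an assumed near-infrared observing wavelength
hubble = diffraction_limit_arcsec(lam, 2.4)   # Hubble's 2.4 m mirror
gmt = diffraction_limit_arcsec(lam, 24.5)     # GMT's 24.5 m combined aperture
print(round(hubble / gmt, 1))  # → 10.2, i.e. roughly 10x sharper
```

Since the wavelength cancels in the ratio, the factor of roughly 10 is just the ratio of the two apertures, 24.5 m to 2.4 m.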

There will be a period during which the Webb Telescope and the Giant Magellan Telescope will both be in operation.

"We're planning to make observations of first-generation stars and other objects with the two instruments," Windhorst said. "This will let us cross-calibrate the results from both."

The overlap between the two telescopes is important in another way, he said.

Read more at Science Daily

First Footprint Evidence of Human Hunting Discovered

Human footprint inside a sloth track. This composite track is part of a trackway in which the human appears to have stalked the sloth
Fossilized tracks — footprints created thousands of years ago — provide some of the best evidence for past behaviors. Sometimes they show animal predators hunting, but until now none had shown humans hunting.

Prehistoric footprints of both humans and giant ground sloths have just been discovered at White Sands National Monument in New Mexico. The trackways are interpreted as the first known footprint evidence of people hunting. The footprints, described in the journal Science Advances, suggest that people 11,000 years ago stalked a giant ground sloth, a strong, sharp-clawed animal that could grow to about 9 feet long.

"Sloth anatomy is not built for speed, but strength," co-author Sally Reynolds of Bournemouth University's Institute for Studies in Landscapes and Human Evolution told Seeker.

"The sloth would have raised itself up to full height and attempted to keep the attackers at bay with its long forearms and large sharp claws," she added. "The hunters would have needed to wait patiently to get the right opportunity to strike the killing blow in a vulnerable part of the sloth anatomy, such as the heart, underbelly, neck, or eyes. The hunters would have been at significant physical risk to themselves while the animal was defending itself."

General view of Alkali Flat at White Sands National Monument (New Mexico) showing a series of excavated footprints in the foreground
Researchers are still dating the tracks and haven't ruled out that the prints could be much older than 11,000 years. If that estimate holds, however, the landscape at the time included "a lake bed with patches of seasonal water," senior author Matthew Bennett, also from Bournemouth University, told Seeker.

He explained that peat and sediment were probably reactivated by water, promoting creation of the now-preserved tracks.

He said footprints from multiple people of various ages were discovered along the edge of the lake bed, which is now a playa. A small number of the prehistoric individuals went out into the drying lake bed. The tracks they left behind show that they were barefoot, with feet that today would fit a men's US size 8.5 shoe (UK size 8).

The humans appear to have been following, and sometimes even stepping into, prints left behind by two or three giant ground sloths. In the absence of human tracks, these animals tend to travel in a straight or curvilinear fashion. Here, however, the lumbering beasts, distant relatives of anteaters and armadillos, made sharp changes in their direction of travel, and those changes correspond to the approaching humans.

In one track, a line of human toe impressions suggests that the person approached a giant ground sloth on tiptoe while at least one other person was behind the animal.

These and the other footprints, according to the authors, suggest that a group of people gathered along the edge of the drying lake bed, possibly to keep the sloths out on the flat mud where they could more easily be attacked.

"One hunter stalks the sloth, harassing it so that it turns toward the stalker," Bennett said. "It rises on its hind legs and swings its forelegs around, putting its claws down to steady itself as it swings."

"While the sloth is distracted," he continued, "another hunter approached and tried to land the killing blow. If successful, the sloth would then have been killed or followed as it bled to death. Having the rest of the group as distant observers would mean that others would be on hand to deal with the carcass, if required."

Composite cast showing a range of footprints from the White Sands National Monument field site
The researchers can tell that neither the hunters nor the sloths were running, perhaps due to the challenges of navigating the muddy substrate.

If the humans were, as suspected, members of the Clovis culture from the American Southwest, they likely would have been hunting the giant sloths with long spears in hand. The researchers are not sure what specific species of giant sloth met its doom at the site, now called Alkali Flat, but Nothrotheriops and Paramylodon both lived in what is now New Mexico during the late Pleistocene.

Notably, all four species of giant ground sloths went extinct shortly afterward — approximately 10,000 years ago.

"This helps us to clarify the debate about how human hunting may have impacted these megafaunal extinctions," Reynolds said.

Read more at Seeker

If the Rotten Egg Smell Doesn’t Kill You, the Negative 200°C Temperature of Uranus Will

View of Uranus from NASA's Voyager 2 probe
There's a lot of really smelly stuff wafting around Uranus.

The clouds in Uranus' upper atmosphere are composed largely of hydrogen sulfide, the molecule that makes rotten eggs so stinky, a new study suggests.

"If an unfortunate human were ever to descend through Uranus' clouds, they would be met with very unpleasant and odiferous conditions," study lead author Patrick Irwin of Oxford University in England said in a statement.

But that wayward pioneer would have bigger problems, he added: "Suffocation and exposure in the negative 200 degrees Celsius (minus 328 degrees Fahrenheit) atmosphere, made of mostly hydrogen, helium, and methane, would take its toll long before the smell."
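The two temperature figures in that quote are the same value in different units, per the standard conversion F = C × 9/5 + 32:

```python
def c_to_f(celsius):
    """Convert a temperature in degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(c_to_f(-200))  # → -328.0, matching the figure quoted above
```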

Researchers have long wondered about the composition of the clouds high up in Uranus's sky — specifically, whether they're dominated by ammonia ice, as at Jupiter and Saturn, or by hydrogen sulfide ice. The answer has proved elusive because it's tough to make observations with the required detail on distant Uranus. (Not only are Jupiter and Saturn closer to Earth, they have also hosted dedicated orbiter missions. Uranus has been visited just once — a brief flyby by NASA's Voyager 2 probe in January 1986.)

Irwin and his colleagues studied Uranus's air using the Near-Infrared Integral Field Spectrometer (NIFS), an instrument on the 26-foot (8-meter) Gemini North telescope in Hawaii. NIFS scrutinized sunlight reflected from the atmosphere just above Uranus' cloud tops — and spotted the signature of hydrogen sulfide.

"Only a tiny amount remains above the clouds as a saturated vapor," study co-author Leigh Fletcher, from the University of Leicester in England, said in the same statement. "And this is why it is so challenging to capture the signatures of ammonia and hydrogen sulfide above cloud decks of Uranus. The superior capabilities of Gemini finally gave us that lucky break."

Neptune's clouds are likely similar to those of Uranus, the researchers said. The big difference between the clouds of these two "ice giants" and those of Jupiter and Saturn probably traces to the worlds' formation environments: Uranus and Neptune coalesced much farther from the sun than the two gas giants did.

Read more at Seeker

Experiments Confirm the Interiors of Uranus and Neptune Are Made of Superionic Ice

Uranus, on the left, Neptune, on the right
A unique form of water ice that is both solid and liquid at the same time might be found inside Uranus and Neptune, according to a recent set of experiments that mimicked the conditions inside the icy giants. 

The results, published in the journal Nature Physics, confirmed a 30-year-old theory that a form of water ice called superionic ice likely exists under certain planetary conditions, where liquids endure extreme heat and pressure. This includes the ice giants in our own solar system, as well as similar exoplanets discovered in other planetary systems throughout our galaxy. Superionic ice, however, is not found naturally on Earth.

“We wanted to see if we could confirm the prediction for superionic water ice and measure its properties in the laboratory,” lead author Marius Millot, a researcher at Lawrence Livermore National Laboratory, said in an email to Seeker. “It is such an unusual state of matter, we wanted to see if we could create it with shock waves.”

There are perhaps 17 — or more — types of water ice, although some remain theoretical. On Earth’s surface, only one kind of ice occurs naturally — the ice in your drink, or in the Antarctic ice sheets — called ice Ih (pronounced “ice one h”). As water freezes and turns from a liquid into a solid, its molecules crystallize into a hexagonal shape.

But depending on the pressure and temperature, water molecules can line up in different arrangements, creating different types of ice. Even hot water can turn solid when compressed under enough pressure. Ice of this type, called ice VII (pronounced “ice seven”), is known to exist deep within Earth, and it was recently found inside diamonds. Ice VII has also been created in laboratories.

Superionic ice is thought to form at extreme temperatures and pressures, where oxygen atoms are locked into a crystal structure, but the hydrogen ions move around, making the ice simultaneously solid and liquid, somewhat similar to lava. Over the years, various research groups have explored the properties of water under high pressure using computer simulations of the structure of water.

“These simulations showed that when water is compressed to millions of [Earth] atmospheres and heated to thousands of degrees it forms a crystal of oxygen ions with hydrogen ions moving rapidly through the crystal in a fluid-like manner,” co-author Sebastien Hamel, also from LLNL, said in an email. “However, such simulations have been approximations and so we wanted to verify those predictions by reaching those pressure and temperature conditions for a sample of water in the lab and measuring whether or not it solidified and whether or not the hydrogen ions were fluid-like.”

Millot, Hamel, and their colleagues first created ice VII in their laboratory by putting a small, sub-millimeter-sized water sample inside a diamond anvil cell (DAC), a high-pressure device made up of two opposing diamonds, which places water under extreme pressure. They then hand-carried the sample to a laser facility at the University of Rochester.

“What was novel about our experiment was to combine the compressed ice with a laser-generated shock wave to compress and heat up the water sample to reach the conditions of pressure and temperature that we wanted,” Hamel said.

The water was pre-compressed in the DAC to about 30,000 atmospheres, and the shock wave briefly increased the pressure to 2,000,000 atmospheres while heating the sample to about 4,000 kelvins.
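High-pressure physics papers usually quote pressures in gigapascals rather than atmospheres; the conversion (1 atm = 101,325 Pa) puts the figures above at roughly 3 GPa and 200 GPa:

```python
ATM_IN_PA = 101_325  # one standard atmosphere, in pascals

def atm_to_gpa(atm):
    """Convert a pressure in atmospheres to gigapascals."""
    return atm * ATM_IN_PA / 1e9

print(round(atm_to_gpa(30_000), 1))     # → 3.0 GPa pre-compression in the DAC
print(round(atm_to_gpa(2_000_000), 0))  # → 203.0 GPa at the shock peak
```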

Over a year, the researchers conducted multiple tests and were able to confirm that the extreme pressure and temperatures created superionic ice. They measured the optical reflectivity and absorption levels, showing the samples were opaque, suggesting that the ions were moving.

“Physicists often measure the optical properties to understand the electronic structure,” Millot explained. “Superionic water ice is a semiconductor, and because there are not enough ‘free electrons’ able to carry electrical current, it is not shiny like a metal. Instead, it absorbs visible light and looks black, opaque if there is a thick enough layer.”

These results were consistent with the computer-simulated predictions and Hamel said the researchers are now working on developing a general capability for performing this type of experiment for various other materials.

Interestingly, the team brought the DAC carrying the ice sample inside a carry-on case on a commercial flight from LLNL in California to the laser facility in New York. Asked if that method of transport was nerve-wracking, Millot and Hamel said “not at all.”

“We often hand-carry our targets for laser experiments, so there is always a chance that one cell will break during the trip, but they are usually okay,” Millot said, adding that they usually bring multiple samples. “The final countdown for the laser shots is more stressful, because each cell is destroyed once we have fired the laser. So if the diagnostic did not record, the whole time preparing the target and setting up the laser shot is lost!”

But the researchers said understanding superionic ice could also solve a mystery about the odd, lopsided magnetic fields of Uranus and Neptune detected by the Voyager 2 mission in the 1980s. Planetary magnetic fields are produced by the movement of electrically conducting internal fluids at high pressures, and any unusual magnetic fields are thought to be related to the consistency of the fluids that generate them.

“Given how we think planets like Neptune and Uranus form, a large fraction of their mass is water,” Hamel said. “Under the pressures and temperatures achieved in the interior of those giant planets, water will be a fluid for the outer part of the planet and a super-ionic solid for the deeper layers of the planet.”

Read more at Seeker

Apr 24, 2018

Galaxies grow bigger and puffier as they age

A new international study involving The Australian National University (ANU) and The University of Sydney has found that galaxies grow bigger and puffier as they age.

Co-researcher Professor Matthew Colless from ANU said that stars in a young galaxy moved in an orderly way around the galaxy's disk, much like cars around a racetrack.

"All galaxies look like squashed spheres, but as they grow older they become puffier with stars going around in all directions," said Professor Colless, who is the Director of the ANU Research School of Astronomy and Astrophysics and a Chief Investigator at the ARC Centre of Excellence in All-Sky Astrophysics in 3D (ASTRO 3D).

"Our Milky Way is more than 13 billion years old, so it is not young anymore, but the galaxy still has both a central bulge of old stars and spiral arms of young stars."

To work out a galaxy's shape, the research team measured the movement of stars with an instrument called SAMI on the Anglo-Australian Telescope at the ANU Siding Spring Observatory.

They studied 843 galaxies of all kinds and with a hundred-fold range in mass.

The study, which is published in Nature Astronomy, was funded by ASTRO 3D at ANU and the ARC Centre of Excellence for All Sky Astrophysics (CAASTRO) at The University of Sydney.

Lead author Dr Jesse van de Sande, from The University of Sydney and ASTRO 3D, said that it was not obvious that galaxy shape and age had to be linked, so the connection was surprising and could point to a deep underlying relationship.

"As a galaxy ages, internal changes take place and the galaxy may collide with others," Dr van de Sande said.

"These events disorder the stars' movements."

Co-author Dr Nicholas Scott, from the University of Sydney and ASTRO 3D, said scientists measured a galaxy's age through colour.

"Young, blue stars grow old and turn red," he said.

"When we plotted how ordered the galaxies were against how squashed they were, the relationship with age leapt out. Galaxies that have the same squashed spherical shape have stars of the same age as well."

Dr van de Sande said scientists had known for a long time that shape and age were linked in very extreme galaxies -- that is, very flat ones and very round ones.

"This is the first time we've shown shape and age are related for all kinds of galaxies, not just the extremes -- all shapes, all ages, all masses," he said.

University of Sydney co-author Dr Julia Bryant, lead scientist for the SAMI instrument, said the team was still searching for the simple, powerful relationships like shape and age that underlie a lot of the complexity scientists see in galaxies.

"To see those relationships, you need detailed information on large numbers of galaxies," she said.

Read more at Science Daily

Did last ice age affect breastfeeding in Native Americans?

Photograph of human upper incisors with significant "shoveling," an anatomical variation influenced by the EDAR V370A allele, which is also associated with increased mammary duct branching.
The critical role that breastfeeding plays in infant survival may have led, during the last ice age, to a common genetic mutation in East Asians and Native Americans that also, surprisingly, affects the shape of their teeth.

The genetic mutation, which probably arose 20,000 years ago, increases the branching density of mammary ducts in the breasts, potentially providing more fat and vitamin D to infants living in the far north where the scarcity of ultraviolet radiation makes it difficult to produce vitamin D in the skin.

If the spread of this genetic mutation is, in fact, due to selection for increased mammary ductal branching, the adaptation would be the first evidence of selection on the human maternal-infant bond.

"This highlights the importance of the mother-infant relationship and how essential it has been for human survival," said Leslea Hlusko, an associate professor of integrative biology at the University of California, Berkeley.

As for the teeth, it just so happens that the gene controlling mammary duct growth also affects the shape of human incisors. Consequently, as the genetic mutation was selected for in an ancestral population living in the far north during the last ice age, shovel-shaped incisors became more frequent too. Shoveled incisors are common among Native Americans and northeastern Asian populations but rare in everyone else.

Hlusko and her colleagues outline the many threads of evidence supporting the idea in an article published this week in the journal Proceedings of the National Academy of Sciences.

The finding could also have implications for understanding the origins of dense breast tissue and its role in breast cancer.

For the study, Hlusko and her colleagues assessed the occurrence of shovel-shaped incisors in archeological populations in order to estimate the time and place of evolutionary selection for the trait. They found that nearly 100 percent of Native Americans prior to European colonization had shoveled incisors, as do approximately 40 percent of East Asians today.

The team then used the genetic effects shared with dental variation to discern the evolutionary history of mammary glands, since both structures develop along a common pathway.

"People have long thought that this shoveling pattern is so strong that there must have been evolutionary selection favoring the trait, but why would there be such strong selection on the shape of your incisors?" Hlusko said. "When you have shared genetic effects across the body, selection for one trait will result in everything else going along for the ride."

The vitamin D connection

Getting enough vitamin D, which is essential for a robust immune system and proper fat regulation as well as for calcium absorption, is a big problem in northern latitudes because the sun is low on the horizon all year long and, above the Arctic Circle, doesn't shine at all for part of the year. While humans at lower latitudes can get nearly all the vitamin D they need through exposure of the skin to ultraviolet light, the scarce UV at high latitudes forced northern peoples like the Siberians and Inuit to get their vitamin D from animal fat, hunting large herbivores and sea mammals.

But babies must get their vitamin D from mother's milk, and Hlusko posits that the increased mammary duct branching may have been a way of delivering more vitamin D and the fat that goes with it.

Hlusko, who specializes in the evolution of teeth among animals, in particular primates and early humans, discovered these connections after being asked to participate in a scientific session on the dispersal of modern humans throughout the Americas at the February 2017 American Association for the Advancement of Science meeting. In preparing her talk on what teeth can tell us about the peopling of the New World, she pulled together the genetics of dental variation with the archaeological evidence to re-frame our understanding of selection on incisor shape.

Incisors are called "shovel-shaped" when the tongue side of the incisors -- the cutting teeth in the front of the mouth, four on top, four on the bottom -- has ridges along the sides and biting edge. The trait is distinctive of Native Americans and populations in East Asia -- Korea, Japan and northern China -- with an increasing incidence as you travel farther north. Unpersuaded by a previously proposed idea that shoveled incisors were selected for their use in softening animal hides, she looked for explanations unrelated to teeth.

The genetic mutation responsible for shoveling -- which occurs in at least one of the two copies, or alleles, of a gene called EDAR, which codes for a protein called the ectodysplasin A receptor -- is also involved in determining the density of sweat glands in the skin, the thickness of hair shafts and ductal branching in mammary glands. Previous genetic analysis of living humans concluded that the mutation arose in northern China due to selection for more sweat glands or sebaceous glands during the last ice age.

"Neither of those is a satisfying explanation," Hlusko said. "There are some really hot parts in the world, and if sweating was so sensitive to selective pressures, I can think of some places where we would have more likely seen selection on that genetic variation instead of in northern China during the Last Glacial Maximum."

The Beringian standstill

Clues came from a 2007 paper and later a 2015 study by Hlusko's coauthor Dennis O'Rourke, in which scientists deduced from the DNA of Native Americans that they split off from other Asian groups more than 25,000 years ago, even though they arrived in North America only 15,000 years ago. Their conclusion was that Native American ancestors settled for some 10,000 years in an area between Asia and North America before finally moving into the New World. This so-called Beringian standstill coincided with the height of the Last Glacial Maximum, between 18,000 and 28,000 years ago.

According to the Beringian standstill hypothesis, as the climate became drier and cooler at the onset of the Last Glacial Maximum, people who had been living in Siberia moved into Beringia. Gigantic ice sheets to the east blocked migration into North America. They couldn't migrate southwest because of a large expanse of treeless, inhospitable tundra. The area where they found refuge was a biologically productive region, thanks to altered ocean currents associated with the last ice age, and a landmass enlarged by lower sea levels. Genetic studies of animals and plants from the region suggest there was an isolated refugium in Beringia during that time, where species with locally adaptive traits arose. Such isolation is ripe for selection on genetic variants that make it easier for plants, animals and humans to survive.

"If you take these data from the teeth to interpret the evolutionary history of this EDAR allele, you frame-shift the selective episode to the Beringian standstill population, and that gives you the environmental context," Hlusko said. "At that high latitude, these people would have been vitamin D deficient. We know they had a diet that was attempting to compensate for it from the archaeological record, and because there is evidence of selection in this population for specific alleles of the genes that influence fatty acid synthesis. But even more specifically, these genes modulate the fatty acid composition of breast milk. It looks like this mutation of the EDAR gene was also selected for in that ancestral population, and EDAR's effects on mammary glands is the most likely target of the selection."

The EDAR gene influences the development of many structures derived from the ectoderm in the fetus, including tooth shape, sweat glands, sebaceous glands, mammary glands and hair. As a consequence, selection on one trait leads to coordinated evolution of the others. The late evolutionary biologist and author Stephen Jay Gould referred to such byproducts of evolution as spandrels.

Read more at Science Daily