May 13, 2017

In brain evolution, size matters, most of the time

An African sedge warbler, a species with a simple song that is also adapted to eating insects.
Which came first, overall bigger brains or larger brain regions that control specialized behaviors? Neuroscientists have debated this question for decades, but a new Cornell University study settles it.

The study reports that though vertebrate brains differ in size, composition and abilities, evolution of overall brain size accounts for most of these differences, with larger brains leading to greater capabilities.

The study of 58 species of songbirds also found that once a species evolved a larger brain, brain regions that control the beak and mouth, and the area for song, developed additional complex neural networks.

The paper, co-authored by Jordan Moore Ph.D., currently a postdoctoral fellow at Columbia University, and Timothy DeVoogd, a Cornell professor of psychology, was published May 10 in the Proceedings of the Royal Society B.

The findings suggest that this principle may also help explain human evolution; we may have first evolved larger brains, which then allowed for adaptations that enhanced brain regions that control specific abilities, such as language.

"Most neuroscientists believe there is nothing special about the way that our brains have evolved, that what we need to do is understand the principles that underlie brain evolution in general, which is what this study involves," said DeVoogd. "The way you build a bigger brain is not just making everything bigger but rather slowing down or lengthening late pieces of development."

In this way, bigger brains end up with a more developed cortex, the last region to mature in animals and humans, which plays key roles in memory, attention, perception, awareness, thought, language and consciousness.

The study is the first to compare -- and resolve -- two competing theories of brain evolution. One theory holds that natural selection drove progressive changes in particular areas of the brain, which then led to larger overall brains in species that needed them to survive.

The other theory contends that some species acquired a bigger brain in general, and its larger basic parts could then be recruited for specific complex behaviors.

To test these theories, Moore and DeVoogd measured the sizes of overall brains and 30 discrete areas that control behaviors in 58 songbirds spanning 20 families.

"One of the advantages of looking in the brains of birds is that it's relatively easy to get samples from lots of different species, and there's a lot of data on what the different species do. And specific areas devoted to these functions can be easily seen in the brain," DeVoogd said.

Most of the variation in brain regions was accounted for by differences in the brain's overall size. But in two specific systems there was a significant amount of variation beyond what could be explained by brain size. Areas that controlled song were much larger in species that produce more varied and complex songs. Also, brain areas controlling the face and mouth were especially large in species with short, fat beaks that eat seeds, and they were small in species with long, thin beaks that eat insects.

"If you've ever watched a bird deal with a sunflower seed, it pushes the seed around with its tongue and grasps it with different points in its beak. And then it is able to break it open and get the inside out," DeVoogd explained.

Read more at Science Daily

Saying goodbye to glaciers

Twila Moon is pictured during field work to study ice-ocean interaction at the LeConte Glacier, Alaska.
Glaciers around the world are disappearing before our eyes, and the implications for people are wide-ranging and troubling, Twila Moon, a glacier expert at the University of Colorado Boulder, concludes in a Perspectives piece in the journal Science today.

The melting of glacial ice contributes to sea-level rise, which threatens to "displace millions of people within the lifetime of many of today's children," Moon writes. Glaciers also serve up fresh water to communities around the world, are integral to the planet's weather and climate systems, and they are "unique landscapes for contemplation or exploration."

And they're shrinking, fast, writes Moon, who returned to the National Snow and Ice Data Center this month after two years away. Her analysis, "Saying goodbye to glaciers," is published in the May 12 issue of Science.

Moon admits she was pretty giddy when an editor at Science, familiar with her research and extensive publication record, reached out to her to write a perspective piece on the state of the world's glaciers. "There was some serious jumping up and down," Moon says. "I thought, 'I've made it!' Their invitation was an exciting recognition of my hard work and expertise."

But the topic, itself, is far from a happy one. Moon describes the many ways researchers study glacier dynamics, from in-place measurements on the ice to satellite-based monitoring campaigns to models. And she describes sobering trends: The projection that Switzerland will lose more than half of its small glaciers in the next 25 years; the substantial retreat of glaciers from the Antarctic, Patagonia, the Himalayas, Greenland and the Arctic; the disappearance of iconic glaciers in Glacier National Park, Montana, or their reduction to chunks of ice that no longer move (by definition, a glacier must be massive enough to move).

In her piece, Moon calls for continued diligence by the scientific community, where ice research is already becoming a priority.

Moon says she got hooked on glaciers as an undergraduate in geological and environmental sciences at Stanford University, when she spent a semester abroad in Nepal. "For the first time I saw a big valley glacier, flowing through the Himalaya," she said, "and I thought it was about the coolest thing ever. After studying geology, the movement and sound of the ice, right now, made it feel almost alive."

That experience kicked off a research career that has taken Moon to Greenland, Alaska, Norway, and to conferences around the world. She began her work "merely" as a geologist and glaciologist, interested in ice itself, Moon said. Only later did the influence of climate change come to play in her work.

"I think I'm about as young as you can get for being a person who started in glaciology at a time when climate change was not a primary part of the conversation," says Moon, who is 35.

She is consistently sought out by journalists hoping to understand Earth's ice, and she's sought out in the scientific community as well, recognized as someone who likes to collaborate across disciplinary boundaries. She recently worked with a biologist in Washington, for example, on a paper about how narwhals use glacial fronts in summertime -- the tusked marine mammals appear to be attracted to glaciers with thick ice fronts and freshwater melt that's low in silt, though it's not yet clear why.

After a couple of post-doctoral research years, at the National Snow and Ice Data Center and then the University of Oregon, Moon and her husband headed to Bristol, England, where she took a faculty position at the University of Bristol's School of Geographical Sciences. When it became clear that her husband's work wouldn't transfer, the two determined to head back to the Rocky Mountains.

Read more at Science Daily

May 12, 2017

Different places warm at different paces

Temperature change per decade in the air (left) and the surface of the ocean (right) between 2005 and 2100, as projected by the Norwegian Earth System model under a strong emission scenario (RCP8.5). Notice the rapid ocean warming at around 40S, 40N, and in the Arctic, in contrast to the surface atmospheric warming mainly in the Arctic.
One of the robust features of global warming under increasing greenhouse gas concentrations is that different places warm at different paces. It turns out that the fast warming in each region has its own cause. Ocean heat transport links subpolar and Arctic warming, but that is not the whole story.

Lead author Aleksi Nummelin tells the story of how he, Camille Li and Paul Hezel worked to solve this climate riddle.

About a year ago, Camille, Paul and I realized that something was missing in our understanding of the ocean's role in Arctic warming, and it was time to find out. Here is our story:


Observations had already shown that Earth's surface warms fastest in the polar regions and slowest in the tropics. On the other hand, observations and model results had revealed that the ocean warms fastest over midlatitudes and the Arctic (Levitus et al. 2012, Wu et al. 2012, Armour et al. 2016).

Indeed, the pattern of ocean warming is not quite the same as the pattern of atmospheric warming. What is causing the ocean warming to follow a different pattern than the atmospheric warming, and how are the two linked?

We set out to answer these questions by exploring climate model simulations over the ongoing century. We started out by constructing a heat budget for the global ocean in the Norwegian Earth System model (NorESM), a fully coupled ocean-atmosphere model used in Norway. The model results clearly agreed with the observational data; the ocean was warming fastest in the midlatitudes and in the Arctic.
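The heat-budget idea can be sketched in miniature: a net energy gain at the surface (or convergence of heat transported by currents) warms a column of water at a rate set by the water's heat capacity and the depth over which the heat is spread. This is a deliberately minimal sketch, not the NorESM diagnostic itself; the flux and layer depth below are assumed for illustration.

```python
# Toy ocean-column heat budget: a net energy gain (surface flux plus
# transport convergence, in W/m^2) warms a water column of depth h.
# All input values are illustrative, not NorESM output.

RHO_SW = 1025.0   # seawater density, kg/m^3
CP_SW = 3990.0    # seawater specific heat, J/(kg K)
SECONDS_PER_DECADE = 10 * 365.25 * 24 * 3600

def warming_rate_per_decade(q_net_wm2, depth_m):
    """Warming rate (K/decade) of a column gaining q_net_wm2 over depth_m."""
    dT_dt = q_net_wm2 / (RHO_SW * CP_SW * depth_m)  # K/s
    return dT_dt * SECONDS_PER_DECADE

# A 1 W/m^2 imbalance spread over the top 500 m gives ~0.15 K/decade,
# the order of magnitude shown in the figure above.
rate = warming_rate_per_decade(1.0, 500.0)
print(f"{rate:.2f} K/decade")
```

The same imbalance concentrated in a shallower layer warms it proportionally faster, which is why where the heat ends up matters as much as how much arrives.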

So why such a pattern? The next step was to look into the exchange of energy through the ocean surface in NorESM and in other climate models.

It turns out that the fast warming in each region had its own cause.

In the northern midlatitudes, the ocean warms quickly because more sunlight reaches the surface ocean. This happens because the subtropical high pressure region to the south expands northward and pushes away the cloudy skies.

This northward expansion of the subtropical high pressure region is really another somewhat complex story, but a short version goes roughly like this: Normally air rises in the tropics and descends in the subtropics, but global warming generally forces air upward (warm air is light) and acts against the descending motion in the subtropics. Since the air has to descend somewhere, the descending branch of the circulation simply moves further north. NorESM simulations also revealed that, as a response to the circulation changes in the atmosphere, the whole subtropical ocean gyre moves slightly northward as well.

The Arctic Ocean is warming because it receives more heat from the subpolar oceans, which are warming because they are losing less heat to the warming atmosphere above. This was one of the important results of our study as we noted that the Arctic ocean warming was linked to the subpolar atmospheric warming, and not to the mid-latitude ocean warming.

In order to understand the subpolar-to-Arctic linkage in detail, it helps to first consider the annual mean picture of ocean-atmosphere interaction at these northern latitudes. In the mean, the relatively warm ocean loses heat to the relatively cold atmosphere above. However, under global warming, the atmosphere warms much faster than the ocean surface, and as a result the ocean heat loss weakens. In a way the atmosphere does not need as much heat from the ocean as it did before, and the ocean is happy to keep the heat. The extra heat does not remain in the subpolar oceans, but is delivered to the Arctic by ocean circulation, not unlike the way the rapidly warming midlatitude Southern Ocean has been gaining its extra heat.

Indeed, last year Armour et al. showed that an atmospheric warming signal in the subpolar Southern Ocean moves to the rapidly warming southern midlatitudes with the equatorward ocean circulation. Our analysis of the climate model results shows that locally in the rapidly warming southern midlatitudes, the atmosphere even acts against the ocean warming as the surface ocean loses an increasing amount of heat to the atmosphere.

With these results at hand we got excited. There was an oceanic linkage from the subpolar region to the Arctic that could potentially also link the atmospheric warming in these two regions.

As noted in the very beginning, one of the robust features of global warming is the relatively slow warming in the tropics and relatively fast warming in the Arctic. This phenomenon is known as Arctic amplification, and it is caused by several rather well understood mechanisms, although their relative importance is still debated.

What is less well understood, however, is why models disagree on the amount of Arctic amplification that occurs under global warming. The most recent climate model estimates span a range of 2 to 5 times more warming in the Arctic than in the tropics.
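In its simplest form, Arctic amplification is just the ratio of Arctic to tropical warming. The trend values in this sketch are invented for illustration, not taken from any particular model, but the resulting factor lands inside the 2-to-5 range the models span.

```python
def arctic_amplification(arctic_trend, tropical_trend):
    """Amplification factor: Arctic warming relative to tropical warming.

    Both trends must be in the same units (e.g. K/decade); values used
    here are illustrative only."""
    return arctic_trend / tropical_trend

# e.g. 0.8 K/decade in the Arctic vs 0.2 K/decade in the tropics
factor = arctic_amplification(0.8, 0.2)
print(factor)  # 4.0, within the 2-5x range models project
```

The model spread in this single number is what the oceanic subpolar-to-Arctic linkage described below helps to explain.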

Since one of the causes of Arctic amplification is the reduction in sea ice cover, we hypothesized that the mechanism connecting the subpolar region and the Arctic could be the following: As the ocean loses less heat to the atmosphere in the subpolar region, it remains relatively warm as the waters flow to the Arctic. The warmer waters flow northward and consequently the ocean also freezes up much further north, pushing the sea ice edge northward.

Because the atmosphere receives more heat from open water than from ice covered ocean, the atmospheric warming is amplified in the areas of sea ice loss. With this possibility in mind we went back to the climate model results and noticed that a reduced subpolar heat loss was indicative of an increase in the ocean heat loss in the Arctic, leading to strong Arctic amplification.

The ocean connected the Arctic to the subpolar region! After this realization, the pieces of the story started to fall into place.

As the atmosphere warms over the subpolar region more heat is left for the ocean to carry to the Arctic. Part of the extra heat carried by the ocean is released back to the atmosphere within the Arctic, explaining some of the differences in Arctic amplification between the models.

Read more at Science Daily

Jurassic drop in ocean oxygen lasted a million years

Pacific Ocean.
Dramatic drops in oceanic oxygen, which cause mass extinctions of sea life, come to a natural end -- but it takes about a million years.

The depletion of oxygen in the oceans is known as "anoxia," and scientists from the University of Exeter have been studying how periods of anoxia end.

They found that the drop in oxygen causes more organic carbon to be buried in sediment on the ocean floor, eventually leading to rising oxygen in the atmosphere which ultimately re-oxygenates the ocean.

Scientists believe the modern ocean is "on the edge of anoxia" -- and the Exeter researchers say it is "critical" to limit carbon emissions to prevent this.

"Once you get into a major event like anoxia, it takes a long time for the Earth's system to rebalance," said lead researcher Sarah Baker, a geographer at the University of Exeter.

"This shows the vital importance of limiting disruption to the carbon cycle to regulate the Earth system and keep it within habitable bounds."

The researchers, who also include Professor Stephen Hesselbo from the Camborne School of Mines, studied the Toarcian Oceanic Anoxic Event, which took place 183 million years ago and was characterized by a major disturbance to the global carbon cycle, depleted oxygen in Earth's oceans and mass extinction of marine life.

Numerical models predicted that increased burial of organic carbon -- due to less decomposition and more plant and marine productivity in the warmer, carbon-rich environment -- should drive a rise in atmospheric oxygen, causing the end of an anoxic event after one million years.

To test the theory, the scientists examined fossil charcoal samples to see evidence of wildfires -- as such fires would be more common in oxygen-rich times.

They found a period of increased wildfire activity started one million years after the onset of the anoxic event, and lasted for about 800,000 years.

"We argue that this major increase in fire activity was primarily driven by increased atmospheric oxygen," said Baker.

"Our study provides the first fossil-based evidence that such a change in atmospheric oxygen levels could occur in a period of one million years."

The increase in fire activity may have also helped end ocean anoxia by burning and reducing the amount of plants on land.

This is because plants can help to erode rocks on the land that contain nutrients needed for marine life -- therefore with fewer plants, fewer nutrients are available to be carried to the sea and used to support marine life in the oceans.

Less marine life -- that would use oxygen to breathe -- would mean less oxygen being used in the oceans, and could therefore help the oceans to build up a higher oxygen content, ending anoxia.
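The feedback chain above can be caricatured as a one-box oxygen budget (a deliberately crude sketch with made-up rate constants, not the numerical models the researchers used): oxygen is supplied at some rate and consumed in proportion to marine biological activity, so reducing consumption raises the steady-state oxygen level.

```python
def steady_state_oxygen(supply, consumption_rate):
    """Steady state of the toy budget dO2/dt = supply - consumption_rate * O2.

    Setting the derivative to zero gives O2 = supply / consumption_rate.
    Units and rate constants are arbitrary; this is a caricature of the
    feedback, not a model of the Toarcian event."""
    return supply / consumption_rate

baseline = steady_state_oxygen(supply=1.0, consumption_rate=0.5)
# Fewer land plants -> fewer nutrients -> less marine life -> less O2 use:
reduced = steady_state_oxygen(supply=1.0, consumption_rate=0.25)
print(baseline, reduced)  # halving consumption doubles steady-state O2
```

Even this crude balance shows the direction of the effect: anything that suppresses oxygen consumption lets the ocean's oxygen content drift back up, ending anoxia.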

It may therefore be essential to maintain the natural functioning of wildfire activity to help regulate the Earth system in the long-term, the researchers say.

Read more at Science Daily

Ancient Mars impacts created tornado-like winds that scoured surface

An infrared image reveals strange bright streaks extending from Santa Fe crater on Mars. Researchers suggest the streaks were caused by tornado-force winds created by the impact that formed the crater.
In looking at NASA images of Mars a few years ago, Brown University geologist Peter Schultz noticed sets of strange bright streaks emanating from a few large impact craters on the planet's surface. The streaks are odd in that they extend much farther from the craters than normal ejecta patterns, and they are only visible in thermal infrared images taken during the Martian night.

Using geological observation, laboratory impact experiments and computer modeling, Schultz and Brown graduate student Stephanie Quintana have offered a new explanation for how those streaks were formed. They show that tornado-like wind vortices -- generated by crater-forming impacts and swirling at 500 miles per hour or more -- scoured the surface and blasted away dust and small rocks to expose the blockier surfaces beneath.

"This would be like an F8 tornado sweeping across the surface," Schultz said. "These are winds on Mars that will never be seen again unless another impact."

The research is published online in the journal Icarus.

Schultz says he first saw the streaks during one of his "tours of Mars." In his downtime between projects, he pulls up random images from NASA's orbital spacecraft just to see if he might spot something interesting. In this case, he was looking at infrared images taken during the Martian nighttime by the THEMIS instrument, which flies aboard the Mars Odyssey orbiter.

The infrared images capture contrasts in heat retention on the surface. Brighter regions at night indicate surfaces that retain more heat from the previous day than surrounding surfaces, just as grassy fields cool off at night while buildings in the city remain warmer.

"You couldn't see these things at all in visible wavelength images, but in the nighttime infrared they're very bright," Schultz said. "Brightness in the infrared indicates blocky surfaces, which retain more heat than surfaces covered by powder and debris. That tells us that something came along and scoured those surfaces bare."

And Schultz had an idea what that something might be. He has been studying impacts and impact processes for years using NASA's Vertical Gun Range, a high-powered cannon that can fire projectiles at speeds up to 15,000 miles per hour.

"We had been seeing some things in experiments we thought might cause these streaks," he said.

When an asteroid or other body strikes a planet at high speed, tons of material from both the impactor and the target surface are instantly vaporized. Schultz's experiments showed that vapor plumes travel outward from an impact point, just above the impact surface, at incredible speeds. Scaling laboratory impacts to the size of those on Mars, a vapor plume's speed would be supersonic. And it would interact with the Martian atmosphere to generate powerful winds.
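For a feel of how fast "supersonic" is in the thin, cold Martian atmosphere, the local sound speed can be estimated from the ideal-gas relation c = sqrt(gamma * R * T / M). The temperature and gas properties below are typical textbook values assumed for illustration, not figures from the study.

```python
import math

# Estimate the speed of sound in Mars' CO2 atmosphere and compare it
# with the ~500 mph vortex winds described in the study.
GAMMA_CO2 = 1.3   # heat capacity ratio of CO2 (approximate)
R_GAS = 8.314     # universal gas constant, J/(mol K)
M_CO2 = 0.044     # molar mass of CO2, kg/mol
T_MARS = 210.0    # typical Martian surface temperature, K (assumed)

def sound_speed(gamma, molar_mass, temperature):
    """Ideal-gas speed of sound, m/s."""
    return math.sqrt(gamma * R_GAS * temperature / molar_mass)

c_mars = sound_speed(GAMMA_CO2, M_CO2, T_MARS)  # ~227 m/s
wind = 500 * 1609.344 / 3600.0                  # 500 mph -> ~224 m/s
print(f"sound speed ~{c_mars:.0f} m/s, wind ~{wind:.0f} m/s, Mach {wind / c_mars:.2f}")
```

Under these assumptions even the 500-mph vortex winds are close to the local speed of sound, so the faster-moving vapor plume itself easily exceeds it.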

The plume and its associated winds on their own didn't cause the strange streaks, however. The plumes generally travel just above the surface, which prevents the kind of deep scouring seen in the streaked areas. But Schultz and Quintana showed that when the plume strikes a raised surface feature, it disturbs the flow and causes powerful tornadic vortices to form and drop to the surface. And those vortices, the researchers say, are responsible for scouring the narrow streaks.

Schultz and Quintana showed that the streaks are nearly always seen in conjunction with raised surface features. Very often, for example, they are associated with the raised ridges of smaller impact craters that were already in place when the larger impact occurred. As the plume raced outward from the larger impact, it encountered the small crater rim, leaving bright twin streaks on the downwind side.

"Where these vortices encounter the surface, they sweep away the small particles that sit loose on the surface, exposing the bigger blocky material underneath, and that's what gives us these streaks," Schultz said.

Schultz says the streaks could prove useful in establishing rates of erosion and dust deposition in areas where the streaks are found.

"We know these formed at the same time as these large craters, and we can date the age of the craters," Schultz said. "So now we have a template for looking at erosion."

But with more research, the streaks could eventually reveal much more than that. From a preliminary survey of the planet, the researchers say the streaks appear to form around craters in the ballpark of 20 kilometers across. But they don't appear in all such craters. Why they form in some places and not others could provide information about the Martian surface at the time of the impact.

Read more at Science Daily

Mars May Have Been Born in the Asteroid Belt

Mars and Earth have very different histories. A simple example: Earth is mostly covered with water, while Mars lost its water in the distant past. But scientists have also known that elements on Mars have different isotopic compositions than their counterparts on Earth (isotopes are variants of an element with different atomic masses), particularly for chromium, titanium, and oxygen.

A new paper published in the journal Earth and Planetary Science Letters argues these compositional differences arose because Mars formed in a different part of the solar system than where it is now located. Instead of being between the sun and the asteroid belt, the paper argues Mars formed within the asteroid belt before migrating somewhat closer to the sun to where it is now. The migration occurred, the paper says, due to Mars gravitationally interacting with planetesimals – small bodies such as asteroids – within the belt.

"Since Mars is more massive than the planetesimals, it tends to lose energy when it scatters these planetesimals because it passes them to Jupiter, which then ejects them from the solar system," Ramon Brasser, lead author and associate professor at the Tokyo Institute of Technology's Earth-Life Science Institute, wrote in an e-mail.

The prevailing theory of the solar system's formation suggests that the sun and its planets formed after a cloud of gas and dust was gravitationally compressed, perhaps from a passing star. Over time, small gas and dust particles stuck together, forming the sun and the modern-day planets.

There is still some debate about whether planets migrated during this process. Previously, scientists theorized that the rocky planets of Mercury, Venus, Earth and Mars collected less gas than the giant planets of Jupiter, Saturn, Uranus and Neptune because the rocky planets were closer to the sun. It was thought that the sun's radiation blew most of the gas into the outer solar system. However, scientists have spotted several Jupiter-sized exoplanets very close to their host stars, which could imply a different formation process that included migration.

In this case, the team tested their hypothesis about Mars' formation by running simplified computer scenarios of the formation of the terrestrial, or rocky, planets, and also looked at samples from Earth, Mars, the moon, and Vesta, which is an asteroid. "We looked for Mars analogs that accreted material in a portion of the disc which Earth did not, and we concluded that the only way to do this is to form Mars far from the Sun, in the inner asteroid belt," Brasser wrote.

Based on previous work from the University of Chicago's Nicolas Dauphas and colleagues, the simulations used by Brasser's team started with sub-Mars planetary embryos. The simulations suggest that Mars grew quickly and then, within 5-10 million years of the solar system's formation, lost access to most of the material, such as gas and dust, that it needed to keep growing. It settled into its current orbit about 120 million years after the solar system's birth, at which point its liquid surface hardened into a crust.

Mars likely had liquid water running on its surface in the ancient past, but over time its atmosphere thinned and made it impossible for water to exist in liquid form on the surface. Brasser said, however, that the origins of Mars would not influence that process. While Mars was formed in a colder environment (since the asteroid belt is further from the sun), it was only there for a few million years before migrating to its current location.

Read more at Discovery News

May 11, 2017

Baleen whales' ancestors were toothy suction feeders

This photo shows members of the excavation team digging around the skeleton of Mystacodon selenensis at the Media Luna locality in the Pisco Basin, Peru.
Modern whales' ancestors probably hunted and chased down prey, but somehow, those fish-eating hunters evolved into filter-feeding leviathans. An analysis of a 36.4-million-year-old whale fossil suggests that before baleen whales lost their teeth, they were suction feeders that most likely dove down and sucked prey into their large mouths. The study published on May 11 in Current Biology also shows that whales most likely lost the hind limbs that stuck out from their bodies more recently than previously estimated.

The specimen, which researchers unearthed in the Pisco Basin in southern Peru, is the oldest known member of the mysticete group, which includes the blue whale, the humpback whale, and the right whale. At 3.75-4 meters long, this late Eocene animal was smaller than any of its living relatives, but the most important difference was in the skull. Modern mysticetes have keratin fibers -- called baleen -- in place of teeth that allow them to trap and feed on tiny marine animals such as shrimp. However, the newly described whale has teeth, so the paleontologists dubbed it Mystacodon, meaning "toothed mysticete."

"This find by our Peruvian colleague Mario Urbina fills a major gap in the history of the group, and it provides clues about the ecology of early mysticetes," says paleontologist and study co-author Olivier Lambert of the Royal Belgian Institute of Natural Sciences. "For example, this early mysticete retains teeth, and from what we observed of its skull, we think that it displays an early specialization for suction feeding and maybe for bottom feeding."

Mystacodon's teeth exhibit a pattern of wear that differs from more archaic whales, the basilosaurids. Many basilosaurids were probably active hunters, similar to modern orcas, with mouths that were suited for biting and attacking, but Mystacodon has a mouth more suited for sucking in smaller animals, leading the researchers to conclude that Mystacodon most likely represents an intermediate step between raptorial and filter feeding and between the ancient basilosaurids and modern mysticetes.

"For a long time, Creationists took the evolution of whales as a favorite target to say that, 'Well, you say that whales come from a terrestrial ancestor, but you can't prove it. You can't show the intermediary steps in this evolution,'" says Lambert. "And that was true, maybe thirty years ago. But now, with more teams working on the subject, we have a far more convincing scenario."

Mystacodon bolsters that argument by displaying features of both basilosaurids and mysticetes. "It perfectly matches what we would have expected as an intermediary step between ancestral basilosaurids and more derived mysticetes," says Lambert. "This nicely demonstrates the predictive power of the theory of evolution."

Lambert and his colleagues think that Mystacodon may have started suction feeding in response to ecological changes. In illustrated reconstructions, Mystacodon is depicted diving down to the sea floor in a shallow cove, but based on this initial analysis, the researchers aren't sure to what extent Mystacodon was adapted to bottom feeding. "We will look inside the bone to see if we can find some changes that may be correlated with this specialized behavior," says Lambert. "Among marine mammals, when a slow-swimming animal is living close to the sea floor, generally the bone is much more compact, and this is something we want to test with these early mysticetes."

The fossil's pelvis offered another surprise: Mystacodon had fully articulated, tiny vestigial hind limbs that would have stuck out away from the whale's body. Previously, paleontologists had thought that whales lost the hip articulation during the basilosaurid phase of their evolution, before baleen whales and modern toothed whales diverged. Though Mystacodon's hind limbs were already tiny and well down the path toward being vestigial and useless, their articulation with the pelvis suggests that mysticetes and modern toothed whales may have lost this feature independently.

Read more at Science Daily

Millennia-Old Mummies Are Under Threat From Yemen's Civil War

Yemen's war has claimed thousands of lives and pushed millions to the brink of famine. Now the conflict threatens to erase a unique part of the country's ancient history.

A collection of millennia-old mummies at Sanaa University Museum in the Yemeni capital could face destruction as a result of the fighting.

With electricity intermittent at best and the country's ports under blockade, experts are fighting to save the 12 mummies in the face of heat, humidity, and a lack of preservative chemicals.

Some of the remains, from pagan kingdoms that ruled the region around 400 BC, still have teeth and strands of hair.

"These mummies are tangible evidence of a nation's history," said Abdulrahman Jarallah, head of the archaeology department at Sanaa University, but "even our mummies are affected by the war."

"Mummies need a suitable, controlled environment and regular care, including sanitization every six months," he told AFP. "Some of them have begun to decay as we cannot secure electricity and the proper preservative chemicals, and we're struggling to control the stench."

"We're concerned both for the conservation of the mummies and for the health of those handling them," Jarallah said.

The mummies are among a host of priceless ancient remains threatened by conflicts across the region.

From Syria's Palmyra to Libya's Leptis Magna, millennia-old historical remains face looting and destruction in various parts of the Middle East.

The Islamic State group systematically demolished pre-Islamic monuments in Syria and Iraq after seizing swathes of both countries in 2014, looting and selling smaller pieces on the black market to fund its rule.

Swiss authorities last year seized cultural relics looted from Yemen, Syria, and Libya that had been stored in Geneva's free ports — highly secured warehouses where valuables can be stashed tax-free with few questions asked.

Supplies, Experts Needed

Old Sanaa, inscribed on UNESCO's World Heritage List since 1986, faces other dangers.

Perched 2,300 meters (7,500 feet) up in Yemen's western mountains, it has been continuously inhabited for over 2,500 years and is home to some of the earliest Islamic architecture.

With more than 100 mosques and 6,000 houses built before the 11th century, the old city is famed for its multi-storied homes of red basalt rock, with arched windows decorated with white latticework.

But months after a Saudi-led coalition intervened against Iran-backed Houthi rebels in March 2015, UNESCO added the ancient city to its List of World Heritage in Danger.

In June that year, a bombing in the old city killed five people and destroyed a section including several houses and an Ottoman fort.

Witnesses blamed an air strike by the Saudi-led coalition on the rebel-held capital.

No party has claimed responsibility for the strike.

The coalition has also imposed an air and naval blockade on Houthi-controlled Red Sea ports that are crucial entry points for food and aid.

The UN estimates 60 percent of Yemen's population is at risk of famine.

Yemeni archaeologists have appealed to both local authorities and international organizations to help preserve Yemen's mummies by easing the flow of supplies and personnel.

"We can already see the mummies suffering the effects of a long period of not having been properly maintained," Sanaa University Museum restoration specialist Fahmi al-Ariqi told AFP.

"We need supplies and experts in this sort of maintenance to work with us to save the 12 mummies here at the university, as well as another dozen at the National Museum in Sanaa."

Read more at Discovery News

The Trail of Antibiotic-Resistant Superbugs Leads Back to the First Land Animals

Early life as it is believed to have looked 335 million years ago, well before the age of the dinosaurs. Ancestors of hospital pathogens are now believed to have lived in the guts of these ancient land animals.
One of today’s biggest biohazards may have its roots in the days before the dinosaurs, when animals first started crawling onto land. 

Enterococcus bacteria are hard-to-kill bugs that flourish in the digestive systems of nearly all land animals. They’re also able to survive long periods without food or water and withstand many of the disinfectants used in hospitals — traits they started to acquire shortly after emerging more than 425 million years ago, researchers at Harvard and the Massachusetts Institute of Technology reported today.

Knowing how they got that tough can help scientists figure out new ways to protect people from the harmful infections they produce, said Ashley Earl, a microbiologist and head of the bacterial genomics group at the Broad Institute, a joint venture of Harvard and MIT.

The fact that enterococcus species are so widespread “suggests an early origin,” Earl said. The researchers then used what geneticists call a molecular clock, a technique used to estimate when an evolutionary change occurred.

“Using genomic data, we can make best guesses for the timing at which different species likely went their separate ways from their last common ancestor and became something new,” she said. And when she and her partners compared those estimates to the fossil record, which shows when other species emerged, “We start to see this really interesting pattern that kind of fits beautifully for the expansion and the reductions of terrestrial animal life.”

“It fits really nicely with the dating we get from the genomic information,” she said. “That lands the timing for the very first emergence of enterococcus right around the time that terrestrialization first began.”
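The molecular-clock estimate Earl describes boils down to simple arithmetic: genetic distance divided by an assumed substitution rate. A minimal sketch, with invented numbers for illustration (the study's actual calibration used genomic data cross-checked against the fossil record):

```python
# Illustrative sketch of molecular-clock arithmetic. The rate and distance
# below are hypothetical values chosen for demonstration only.

def divergence_time(genetic_distance, subs_per_site_per_myr):
    """Estimate millions of years since two lineages split.

    genetic_distance: substitutions per site separating the two lineages
    subs_per_site_per_myr: assumed clock rate per lineage, per million years
    """
    # Differences accumulate along both branches since the split,
    # hence the factor of 2.
    return genetic_distance / (2 * subs_per_site_per_myr)

# Hypothetical example: 0.85 substitutions per site at a rate of
# 0.001 per site per million years per lineage.
print(divergence_time(0.85, 0.001))  # 425.0 (million years)
```

Real analyses are far more involved (rates vary across lineages and genes), which is why the researchers cross-checked their estimates against fossils.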

The findings were published in the scientific journal Cell. The research team included scientists from Massachusetts Eye and Ear and the Harvard-wide Program on Antibiotic Resistance.

The rise of drug-resistant bacteria has become a leading worry for doctors and scientists, raising fears that old killers like pneumonia or tuberculosis could make a comeback. Modern enterococci are a common source of hospital illnesses; they can take root in the blood, urinary tract, and the lining of the heart and fight off several common antibiotics.

Bacteria excreted from the guts of marine life find themselves in “comparatively hospitable” water, while those excreted from land animals “would experience comparative isolation, starvation, desiccation, and possibly extinction.” The ones that could survive those threats passed their traits on to their descendants, producing the hardy microbes that concern doctors today.

Over the course of the study, the scientists compiled what Earl called a “parts list” of genes that might reveal the bacteria’s vulnerabilities.

“We want to take those strengths and turn them into their weaknesses,” she said. She compared the bacteria to a tank — a dangerous, durable piece of military hardware.

Read more at Discovery News

Watery Atmosphere Discovered on Neptune-Like Exoplanet

The atmosphere of the distant “warm Neptune” HAT-P-26b, illustrated here, is unexpectedly primitive, composed primarily of hydrogen and helium. By combining observations from NASA’s Hubble and Spitzer space telescopes, researchers determined that, unlike Neptune and Uranus, the exoplanet has relatively low metallicity, an indication of how rich the planet is in all elements heavier than hydrogen and helium.
While the number of known exoplanets continues to soar, uncovering specific details about these distant worlds has been more challenging. The delicate task of finding out the specifics of an exoplanet’s atmosphere can provide data about the planet’s potential habitability, as well as clues to how the exoplanet formed.

A new study of a Neptune-size world has revealed a surprisingly “primitive atmosphere” composed almost entirely of hydrogen and helium, with water vapor and possible clouds. Scientists say this finding could provide an important breakthrough in understanding how planets form, and how atmospheres can vary between exoplanets with different masses.

“Astronomers have just begun to investigate the atmospheres of these distant Neptune-mass planets, and almost right away we found an example that goes against the trend in our solar system,” said Hannah Wakeford from NASA’s Goddard Space Flight Center. “This kind of unexpected result is why I really love exploring the atmospheres of alien planets.”

Wakeford, along with David Sing from the University of Exeter and a team of other researchers, looked at the exoplanet HAT-P-26b, which was discovered in 2010 and is located about 430 light years from Earth. It is a “warm” world that orbits relatively close to its star, circling it every 4.23 days.

Using observations from the Hubble Space Telescope and the Spitzer Space Telescope, the researchers found strong indications of water as well as a possible cloud layer with a unique makeup in the atmosphere of HAT-P-26b.

“What we have detected is a strong water absorption feature with some evidence of a cloud layer deep in the atmosphere at the base altitude of the measurements we have made,” Wakeford told Seeker. “This cloud would not be made of water due to the high temperature of the atmosphere but would instead be more exotic and composed most likely of Na2S (di-sodium sulfide). At the altitudes we probed with these observations and the strong water vapor absorption signature, this is a relatively cloudless portion of the atmosphere.”

Measuring the abundance of atmospheric water provides information about the proportion of elements in the atmosphere that are heavier than hydrogen and helium, a value astronomers refer to as the metallicity. Surprisingly, it was lower than expected, only about 4.8 times that of the Sun — less than that of Neptune and Uranus, for example, and closer to the value for Jupiter.

“HAT-P-26b is the first to buck the trend seen with our solar system and other exoplanets, which suggest that lower mass planets have higher metallicities,” Wakeford said via email. “As the first to buck this trend, HAT-P-26b will be an important marker in our understanding of planetary formation scenarios.”

When a distant planet transits, or passes in front of its host star, starlight gets filtered through the planet’s atmosphere. Using a spectrometer, astronomers can measure the light to see what elements are present in the planet’s atmosphere.
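The geometry behind this technique can be sketched in a few lines: a transit dims the star by roughly the ratio of the projected disk areas. The radii below are generic illustrative values (a Neptune-size planet crossing a Sun-size star), not the measured properties of HAT-P-26b:

```python
# Minimal sketch of transit-depth geometry. Radii are illustrative.

def transit_depth(planet_radius, star_radius):
    # Fraction of starlight blocked: ratio of projected disk areas.
    return (planet_radius / star_radius) ** 2

R_SUN = 696_000.0      # km
R_NEPTUNE = 24_622.0   # km

# A Neptune-size planet crossing a Sun-size star blocks roughly 0.13%
# of the star's light.
depth = transit_depth(R_NEPTUNE, R_SUN)
print(f"{depth:.2%}")

# At wavelengths where water vapor absorbs, the atmosphere is opaque at
# higher altitude, so the planet's effective radius -- and the transit
# depth -- increase slightly. That wavelength-dependent change in depth
# is the absorption spectrum astronomers measure.
slightly_larger = transit_depth(R_NEPTUNE * 1.02, R_SUN)
print(f"{slightly_larger - depth:.6f}")
```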

“This is the strongest water absorption feature that has been measured for an exoplanet of this size,” Wakeford said. “Additionally, what is most important is that we have been able to use this detection to approximate the overall metallicity of the atmosphere based on the water abundance, and compared to similar-sized planets in our solar system we find that HAT-P-26b has a much lower metallicity.”

This suggests that it formed differently — perhaps closer to the star or late in the lifetime of the disk around the star — compared to Neptune.

In our own solar system, the metallicities of Jupiter (about five times that of the Sun) and Saturn (about 10 times) suggest that these “gas giants” are made almost entirely of hydrogen and helium. Neptune and Uranus, however, are richer in the heavier elements, with metallicities of about 100 times that of the Sun.
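Arranging the figures quoted in this article side by side makes the broken trend easy to see; these are just the article's approximate numbers collected for comparison:

```python
# Metallicities relative to the Sun, as quoted in the article (approximate).
metallicity_x_solar = {
    "Jupiter": 5.0,
    "Saturn": 10.0,
    "Neptune/Uranus": 100.0,
    "HAT-P-26b": 4.8,  # the new measurement: Jupiter-like, not Neptune-like
}

# In the solar system the less massive giants are MORE metal-rich;
# the Neptune-mass HAT-P-26b breaks that pattern.
for body, m in sorted(metallicity_x_solar.items(), key=lambda kv: kv[1]):
    print(f"{body:>14}: {m:g}x solar")
```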

Read more at Discovery News

May 10, 2017

African lions face same threats that drove sabre-toothed cats extinct

An African lion photographed in Kenya.
The seven big cats that went extinct towards the end of the last Ice Age, including several sabre-toothed cats, are those which lost the greatest proportion of their prey, according to an international team of scientists who believe the African Lion and Sunda clouded leopard are next on the list.

A new study led by scientists from the universities of Sussex, Oxford's Wildlife Conservation Research Unit (WildCRU), Aarhus and Goteborg has assessed whether Ice Age extinction trends could be applied to populations of big cat species now, by using a new global database, FelidDIET.

The team researched the cause of extinction of seven large cats from the Ice Age: four different types of sabre-toothed cats, the cave and American lions and the American cheetah. They discovered that if these animals were alive today, on average only 25 percent of their preferred prey species would still remain across their former natural ranges -- the majority have gone extinct, partly due to human pressure. The team believe this devastating loss of prey species was a major contributing factor to the extinction of these big cats.

The team have also used the database to work out whether a similar decline in the availability of prey species now could lead to the demise of some of the world's most well-known big cat species. They have discovered that if all the currently threatened and declining prey species within big cat natural ranges were to go extinct, only 39 percent of the African lion's prey and 37 percent of the Sunda clouded leopard's would remain.

Worryingly, the researchers believe that if this prey loss trend continues it poses 'a high risk of extinction' to these two big cat species in particular. They also report that declining prey diversity within the geographical ranges of the tiger, leopard and cheetah puts them at risk too.

Dr Chris Sandom, from the University of Sussex, said: "This joint study clearly shows that if primary big cat prey continues to decline at such a rate then big cats, including lion, Sunda clouded leopard, tiger and cheetah are at high risk of extinction.

"Where prey species have, or are likely to become extinct, this poses a serious risk to the big cat species which feed on them and we now know this is the continuation of an unhappy trend which began during the last Ice Age.

"We need to buck this Ice Age trend once and for all and to reinforce the urgent need for governments to protect both big cat species and their prey."

Professor David Macdonald, Director of the University of Oxford's WildCRU, remarked: "The fairy-tale consequences of Old Mother Hubbard's cupboard being bare are all too vividly real for modern big cats. Our study of the consequences of prey loss -- 'defaunation' in the jargon -- is about, in everyday language, 'what if' or perhaps better 'if only': without the extinctions of the Pleistocene, in which the fingerprints of humanity are all too incriminating, there would have been between one and five more felid species in most places today."

He added: "The Churchillian aphorism that those who fail to learn from history are doomed to repeat it was painfully in mind when we saw how many of the prey of lions in East Africa and of clouded leopards in Indo-Malaya look set to go down the same drain down which their counterparts in other regions have already been flushed."

Read more at Science Daily

Fossil Find Could Rewrite the Record of Life on Earth

A microscopic image of geyserite textures from the ancient Dresser Formation shows surface hot spring deposits from 3.5 billion years ago.
Ancient hot spring deposits found in Australia’s desert could help unlock the mystery of how life evolved on Earth, and provide clues for scientists searching for extraterrestrial existence, according to a new study.

A team of scientists scouring the remote Pilbara region of Western Australia have uncovered evidence of fossilized microbial deposits that extend the geological record of life in hot springs.

The discovery could have important implications for our understanding about the origins of life, the researchers say, because it suggests it may have evolved on land, rather than deep in the ocean.

“Prior to this work, the oldest evidence of life on land was 2.7 billion years old. That wasn’t in hot springs, it was in South Africa in rich soils. Basically, that was an argument to say: ‘We don’t see life on land very early on, it must have adapted later,’” said lead author Tara Djokic of the University of New South Wales. “Now, we are seeing life was already on land 3.5 billion years ago.”

The findings were published today in the journal Nature Communications.

The Dresser Formation in Western Australia.
Currently, there are two competing hypotheses for the origin of life. Either it evolved in deep sea hydrothermal vents or, as Charles Darwin theorized in his “warm little pond” scenario, it was sparked by the mixing of chemicals on land.

Scientists published details last year of what is thought to be the oldest evidence of life on Earth, which was found in Greenland. The 3.7 billion-year-old fossil stromatolites were located on the sea floor.

Djokic said the discovery of fossils in the Dresser Formation in the Pilbara Craton provides a new geological perspective that supports Darwin’s theory.

“I don’t think the Dresser Formation is anywhere close to the origin of life, but it does lend weight to that environment being available, which was previously not known,” she said.

Among the fossils found in the harsh, dry environment were layered rock structures created by communities of ancient microbes, called stromatolites, and preserved gas bubbles.

Evidence of geyserite, which helped preserve the fossils, provided the “smoking gun” that the deposits had a terrestrial origin, Djokic said. Geyserite is a silica-rich mineral deposit that is only found in a land-based, hot spring environment.

The findings show there was life on land up to 580 million years earlier than previously thought.

Read more at Discovery News

Comet 67P Found to Be Producing Its Own Oxygen in Deep Space

A single frame Rosetta navigation camera image of Comet 67P/Churyumov-Gerasimenko.
In 2015, scientists announced the detection of molecular oxygen at Comet 67P/Churyumov-Gerasimenko, which was studied by the Rosetta spacecraft. It was the “biggest surprise of the mission,” they said — a discovery that could change our understanding of how the solar system formed.

While molecular oxygen is common on Earth, it is rarely seen elsewhere in the universe. In fact, astronomers have detected molecular oxygen outside the solar system only twice, and never before on a comet.

The initial explanation for the oxygen found in the faint envelope of gas that surrounds the comet was that the oxygen was frozen inside the comet since the beginning of our solar system some 4.6 billion years ago. It was believed that the oxygen had thawed as the comet made its way closer to the Sun.

But researchers are rethinking that theory, thanks to a chemical engineer from Caltech who usually works on developing microprocessors.  

Konstantinos P. Giapis was intrigued by the Rosetta finding because it appeared to him that the chemical reactions occurring on Comet 67P’s surface were very similar to experiments he had been performing in his lab for the past 20 years. Giapis studies chemical reactions involving high-speed charged atoms, or ions, colliding with semiconductor surfaces in order to develop faster computer chips and larger digital memories for computers and phones.

“I started to take an interest in space and was looking for places where ions would be accelerated against surfaces,” Giapis said in a statement. “After looking at measurements made on Rosetta's comet, particularly regarding the energies of the water molecules hitting the comet, it all clicked. What I've been studying for years is happening right here on this comet.”

In a new paper, Giapis and his co-author and Caltech colleague Yunxi Yao propose that the molecular oxygen at Comet 67P is not ancient, but is being produced right now by interactions within the comet’s nebulous aura, or coma, between water molecules coursing off the comet and particles streaming from the sun.

“We have shown experimentally that it is possible to form molecular oxygen dynamically on the surface of materials similar to those found on the comet,” said Yao.

Here’s how it works: Water vapor molecules stream off the comet as it is heated by the sun. The water molecules become ionized, or charged, by ultraviolet light from the sun, and then the solar wind blows the ionized water molecules back toward the comet. When the water molecules hit the comet's surface, which contains oxygen bound in materials such as rust and sand, the molecules pick up another oxygen atom from the surface and O2 is formed.

“This abiotic production mechanism is consistent with reported trends in the 67P coma,” the researchers write in their paper, “and raises awareness of the role of energetic negative ions,” not only in comets but other planetary bodies as well.

This oxygen-producing mechanism could be happening in a wide range of situations.

“Understanding the origin of molecular oxygen in space is important for the evolution of the Universe and the origin of life on Earth,” the researchers wrote.

The finding also muddies the waters for the search for life: detecting oxygen in the atmosphere of an exoplanet might not necessarily point to biology, since this abiotic process means oxygen can be produced in space without the need for life. The researchers say the finding might influence how scientists search for signs of life on exoplanets in the future.

Read more at Discovery News

The Higgs Boson Particle Isn’t So Godlike After All

Computer simulation of particle traces from a Large Hadron Collider collision in which a Higgs boson is produced.
Paul Sutter is an astrophysicist at The Ohio State University and the chief scientist at COSI science center. Sutter is also host of Ask a Spaceman, RealSpace and COSI Science Now. He contributed this article to Expert Voices: Op-Ed & Insights.

Let's be perfectly honest. The Higgs boson and its role in the universe are not the easiest things to explain. It doesn't help that the Higgs has the horrible nickname of "the God Particle" and is often described as being "responsible for mass in the universe" or something like that.

The Higgs boson is indeed an important part of modern physics, but elevating it to the status of a deity seems a bit of a stretch, and the whole "making mass" thing isn't even this particle's most important job.

Oh, and physicists don't really care about the particle.

Don't worry — I'll explain in a bit.

What's in a Name?

First, I want to talk about the particle's name. It has a perfectly acceptable name: the Higgs boson. Physicist Leon Lederman coined the nickname "God Particle" in the early '90s and used it as the title of his book on the subject. I'm sure he thought it was just a cute name (especially since Lederman claims his publisher rejected his original idea of the "Goddamn Particle"), but the media went crazy with the name, and now it's hard to disentangle the real physics of the Higgs from the hype.

The particle's real name, the Higgs boson, is actually quite informative. Indeed, it references two people, further highlighting the particle's importance: Peter Higgs, who with a bunch of colleagues first proposed the particle back in the 1960s, and Satyendra Nath Bose, who was a pioneering figure in the early days of particle physics.

"Boson" is the term for one of the two kinds of particles in the universe, with the other called a "fermion" (after Enrico Fermi). Very, very loosely, you can think of fermions as the building blocks of the everyday world. Think electrons, quarks, protons, neutrinos and all their friends. Meanwhile, the bosons are the forces between them: photons, gluons and so on.

So right there, the name gives you a hint: Because this particle is called a "boson," it must have something to do with forces.

Field of Dreams

But modern particle physics isn't really about the particles themselves, and that goes for the Higgs boson, too. No, in the contemporary view of the rules of the universe, the primary physical object is the field, an entity that permeates all of space and time. This field can take different values at different points in space-time, and each value corresponds to the average number of particles observers see in that patch. In this view (and indeed, in reality), particles can be created and destroyed at will, simply by adding or removing energy from the field.

In other words, you can slap a field and make some particles. A single particle is just the minimum possible amount of energy that a field can support. Every kind of particle that scientists know of, from the electron to a photon, is associated with its own space-time-filling vibrating field.

I'm spending a couple of paragraphs making this distinction clear because the hunt for the Higgs boson isn't about the particle itself. Machines like the Large Hadron Collider are trying to study the Higgs field, but the only way to do so is to make some Higgs particles (i.e., some slaps in the field) and see how they work.

Speaking of work: It's the Higgs field, not the Higgs particle, that's doing interesting things in the universe.

A Broken Universe

The "interesting thing" that the Higgs boson does in the universe relates to a fundamental question of modern physics. Physicists observe four forces of nature: electromagnetic, strong nuclear, weak nuclear and gravity. The photon carries the electromagnetic force, while the W+, W- and Z bosons carry the weak nuclear force and a set of gluons carries the strong nuclear force. And gravity is carried by … well, perhaps that's a subject for another day.

These four forces of nature are, as you may have noticed, radically different from one another. It's nothing at all like the families of fermions: In that realm, a simple change of charge or different measure of mass will get you a new kind of particle. In the boson world, the electromagnetic force is completely different from the weak nuclear force in terms of mass, range, and interactions, and their respective force carriers aren't even on speaking terms, let alone related to each other.

But why? Really, why? Why are the forces of nature so dang different?

One clue to this perplexing mystery is that, at high enough energy densities — like, say, in the business end of a particle collider — there are only three forces of nature. You read that right: three, not four! There's strong nuclear, gravity, and a strange hybrid of electromagnetic and weak nuclear called, appropriately enough, the electroweak force.

This force appears only at high energies, and a quartet of massless particles carry it. Mathematically, these particles and their associated force are in a highly symmetric state. But at low (read: normal, everyday) energies, that unified symmetric force breaks apart to become the awkwardly-split-but-still-have-to-live-together electromagnetic force (carried by the still-massless photon) and weak nuclear force (carried by a much heavier trio of particles).

And the cause of the split is the good ol' Higgs (which you may have guessed, because that's the focus of this article).

Making the Split

At that high-energy, symmetric state, not only are there four massless carriers of the electroweak force, but there are also four Higgs fields. The reason there are precisely four isn't that there's a rigged matching game; the same deep symmetries that lead to the electroweak unification also provide the mathematical machinery for constructing four Higgs fields. In other words, if you're going to propose the existence of a Higgs field at high energies, you don't get any choice but to construct four – it's baked into the fundamental symmetries of our universe.

I haven't seen anyone name these four high-energy Higgs fields "the higglets" yet, so I'll just go ahead and make that a thing.

At high temperatures, the four carriers of the electroweak force do their thing (carry the electroweak force) and the four higglets do their thing (not much of anything). But at low temperatures, the higglets get disrupted. Three of them "glue" (for lack of a better term) to three of the electroweak carriers. These hybrid creatures become massive, and physicists know them as the W+, W- and Z bosons — and, voilà, the weak force is born.

But the fourth higglet gets "stuck" (again, for lack of a better term) in an asymmetrical state that prevents it from matching up with the remaining electroweak carrier. That carrier then gets to remain massless — and, aha, you get the photon, carrying the now-familiar electromagnetic force.

Read more at Discovery News

May 9, 2017

Surprise! When a brown dwarf is actually a planetary mass object

An artist's conception of SIMP J013656.5+093347, or SIMP0136 for short, which the research team determined is a planet-like member of a 200-million-year-old group of stars called Carina-Near.
Sometimes a brown dwarf is actually a planet -- or planet-like anyway. A team led by Carnegie's Jonathan Gagné, and including researchers from the Institute for Research on Exoplanets (iREx) at Université de Montréal, the American Museum of Natural History, and University of California San Diego, discovered that what astronomers had previously thought was one of the closest brown dwarfs to our own Sun is in fact a planetary mass object.

Their results are published in The Astrophysical Journal Letters.

Smaller than stars, but bigger than giant planets, brown dwarfs are too small to sustain the hydrogen fusion process that fuels stars and allows them to remain hot and bright for a long time. So after formation, brown dwarfs slowly cool down and contract over time. The contraction usually ends after a few hundred million years, although the cooling is continuous.

"This means that the temperatures of brown dwarfs can range from as hot as stars to as cool as planets, depending on how old they are," said the AMNH's Jackie Faherty, a co-author on this discovery.

The team determined that a well-studied object known as SIMP J013656.5+093347, or SIMP0136 for short, is a planet-like member of a 200-million-year-old group of stars called Carina-Near.

Groups of similarly aged stars moving together through space are considered prime regions to search for free-floating planet-like objects, because they provide the only means of age-dating these cold and isolated worlds. Knowing the age, as well as the temperature, of a free-floating object like this is necessary to determine its mass.

Gagné and the research team were able to demonstrate that at about 13 times the mass of Jupiter, SIMP0136 is right at the boundary that separates brown dwarf-like properties, primarily the short-lived burning of deuterium in the object's core, from planet-like properties.
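The boundary the team describes can be summarized as a simple classification by mass. A hedged sketch: the ~13-Jupiter-mass deuterium-burning limit comes from the article, while the ~80-Jupiter-mass hydrogen-burning limit is the commonly cited threshold separating brown dwarfs from true stars (an assumption added here for context):

```python
# Illustrative mass-based classification. Thresholds are approximate;
# the 13 M_Jup figure is from the article, the 80 M_Jup figure is the
# commonly cited hydrogen-fusion limit (added here as an assumption).

DEUTERIUM_LIMIT_MJUP = 13.0  # below this: no deuterium burning, planet-like
HYDROGEN_LIMIT_MJUP = 80.0   # above this: sustained hydrogen fusion, a star

def classify(mass_mjup):
    if mass_mjup < DEUTERIUM_LIMIT_MJUP:
        return "planetary-mass object"
    if mass_mjup < HYDROGEN_LIMIT_MJUP:
        return "brown dwarf"
    return "star"

# SIMP0136, at about 13 Jupiter masses, sits right on the first boundary,
# which is why its age and temperature were needed to settle the call.
print(classify(12.9))  # planetary-mass object
print(classify(40.0))  # brown dwarf
```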

Free-floating planetary mass objects are valuable because they are very similar to gas giant exoplanets that orbit stars, like our own Solar System's Jupiter or Saturn, but it is comparatively much easier to study their atmospheres. Observing the atmospheres of exoplanets found within distant star systems is challenging, because the dim light emitted by those orbiting exoplanets is overwhelmed by the brightness of their host stars, which blinds the instruments that astronomers use to characterize an exoplanet's atmosphere.

"The implication that the well-known SIMP0136 is actually more planet-like than we previously thought will help us to better understand the atmospheres of giant planets and how they evolve," Gagné said.

They may be easier to study in great detail, but these free-floating worlds are still extremely hard to discover unless scientists spend a lot of time observing them at the telescope, because they can be located anywhere in the sky and they are very hard to tell apart from brown dwarfs or very small stars. For this reason, researchers have confirmed only a handful of free-floating planet-like objects so far.

Étienne Artigau, co-author and leader of the original SIMP0136 discovery, added: "This newest addition to the very select club of free-floating planet-like objects is particularly remarkable, because we had already detected fast-evolving weather patterns on the surface of SIMP0136, back when we thought it was a brown dwarf."

Read more at Science Daily

'Humanlike' ways of thinking evolved 1.8 million years ago

A volunteer creates an ancient Acheulean hand axe wearing a cap designed to measure brain activity.
By using highly advanced brain imaging technology to observe modern humans crafting ancient tools, an Indiana University neuroarchaeologist has found evidence that human-like ways of thinking may have emerged as early as 1.8 million years ago.

The results, reported May 8 in the journal Nature Human Behavior, place the appearance of human-like cognition at the emergence of Homo erectus, an early apelike species of human first found in Africa whose evolution predates Neanderthals by nearly 600,000 years.

"This is a significant result because it's commonly thought our most modern forms of cognition only appeared very recently in terms of human evolutionary history," said Shelby S. Putt, a postdoctoral researcher with The Stone Age Institute at Indiana University, who is first author on the study. "But these results suggest the transition from apelike to humanlike ways of thinking and behaving arose surprisingly early."

The study's conclusions are based upon brain activity in modern individuals taught to create two types of ancient tools: simple Oldowan-era "flake tools" -- little more than broken rocks with a jagged edge -- and more complicated Acheulian-era hand axes, which resemble a large arrowhead. Both are formed by smashing rocks together using a process known as "flintknapping."

Oldowan tools, which first appeared about 2.6 million years ago, are among the earliest used by humanity's ancestors. Acheulian-era tool use dates from 1.8 million to 100,000 years ago.

Putt said that neuroarchaeologists look to modern humans to understand how pre-human species evolved cognition, since the act of thinking -- unlike fossilized bones or ancient artifacts -- leaves no physical trace in the archaeological record.

The methods used to conduct studies on modern humans crafting ancient tools were limited until recently by brain imaging technology. Previous studies depended on placing people within the confines of a functional magnetic resonance imaging machine -- essentially a narrow metal tube -- to observe their brain activity while they watched videos of people crafting tools.

Putt's study, by contrast, employed the more advanced technique of functional near-infrared spectroscopy -- which uses a device resembling a lightweight cap, wired with numerous optical sensors, to shine near-infrared light onto the scalp -- to observe brain activity in people as they learned to craft both types of tools with their hands.

In the study, 15 volunteers were taught to craft both types of tools through verbal instruction via videotape. An additional 16 volunteers were shown the same videos without sound to learn toolmaking through nonverbal observation. These experiments were conducted in the lab of John P. Spencer at the University of Iowa, where Putt earned her Ph.D. before joining IU. Spencer is now a faculty member at the University of East Anglia.

The resulting brain scans revealed that visual attention and motor control were required to create the simpler Oldowan tools. A much larger portion of the brain was engaged in the creation of the more complex Acheulian tools, including regions of the brain associated with the integration of visual, auditory and sensorimotor information; the guidance of visual working memory; and higher-order action planning.

Read more at Science Daily

‘Baby Louie’ Solves Dinosaur Development Mysteries

A gigantic cassowary-like dinosaur, named Beibeilong sinensis, incubating the eggs in its nest
During the late 1980s and early 1990s, farmers from Henan, China, excavated and collected thousands of Cretaceous Period dinosaur eggs, many of which were sold overseas in rock and gem shows, stores, and markets. One shipment, imported in 1993 by Colorado-based The Stone Company, included an impressive clutch of big dinosaur eggs. Even more surprising than the multiple eggs was the unveiling of a small dinosaur skeleton, nicknamed Baby Louie in recognition of Louis Psihoyos, who photographed the striking remains.

In 2001, the Children’s Museum of Indianapolis acquired the specimen and put it on public exhibit for 12 years, until Baby Louie and the eggs were repatriated to China in December 2013. Baby Louie’s new home is at the Henan Geological Museum in its province of origin, and the tiny dinosaur embryo has a new identity, too.

Paleontologists have just determined that Baby Louie represents a new species of gigantic oviraptorosaur, a dinosaur that would have resembled an oversized modern cassowary. Given the scientific name Beibeilong sinensis (“Baby Dragon”), the new species and associated remains are described in the journal Nature Communications.

“Baby Louie may have been an omnivore, eating both meat and plants,” said co-author Darla Zelenitsky, a professor at the University of Calgary. “It would have had a very strong and robust, but toothless, jaw.”

Curled embryo of Beibeilong sinensis on top of eggs (eggshell is dark grey in color).
She added that Baby Louie’s “bones are relatively well formed, so it was probably in the latter stages of incubation, closer to hatching.”

Baby Louie and the rest of its unhatched siblings were deposited by their mother around 90 million years ago into an enormous nest bigger than a monster truck tire, the researchers believe.

The entire nest would have contained two dozen or more eggs positioned at the periphery of a giant ring configuration close to 10 feet in diameter. The eggs are about 18 inches long and weighed around 11 pounds after being laid, making them some of the largest dinosaur eggs ever discovered.

“The giant (parent) dinosaur likely sat in the middle of the nest, perhaps protecting, covering its eggs with its feathered arms and body,” Zelenitsky said. “It would have been a sight to behold with a three-ton animal like this sitting on its nest of eggs.”

She continued that a flood event likely disrupted this peaceful dinosaur family scene, with water and sediment covering the nest and killing the eggs. The fate of Baby Louie’s parents right after the flood remains unknown.

Had the embryos hatched, they probably would have weighed close to 9 pounds each and would have been fairly self-sufficient, the scientists suspect. As adults, the dinosaurs likely measured over 26 feet long.

Read more at Discovery News

This Human Relative May Have Lived Alongside Our Species in Africa

A reconstruction of Homo naledi.
When the discovery of Homo naledi was announced two years ago, the news prompted both amazement and incredulity. H. naledi was described as a small-bodied hominid with a brain one third the size of that of Homo sapiens. Its remains were found within the Dinaledi Chamber of the Rising Star Cave system, which is part of the Cradle of Humankind World Heritage Site northwest of Johannesburg. Some scientists believed the researchers — who published their finds in the nascent journal eLife and worked under the glare of television cameras — played fast and loose with the truth.

Now the leader of that earlier research, paleontologist Lee Berger of the University of Witwatersrand (Wits University), and his colleagues have announced via three papers in the same journal more startling finds concerning H. naledi.

They report the discovery of a second chamber within Rising Star with abundant H. naledi fossils, including one of the most complete skeletons of an early human ever found, as well as the remains of at least one child and another adult. They further mention that dating of the site and original H. naledi remains shows these individuals were alive sometime between 236,000 and 335,000 years ago.

Map of the Rising Star Cave System.
Berger said the earliest fossil remains of modern humans are those from the Omo Kibish region of Ethiopia and are nearly 200,000 years old. While no Homo sapiens fossils are known from subequatorial Africa as early as this, Berger and his team now believe it is possible that some populations of H. naledi came into direct contact with modern humans or their ancestors.

“We can no longer assume that we know which species made which tools, or even assume that it was modern humans that were the innovators of some of these critical technological and behavioral breakthroughs in the archaeological record of Africa,” Berger said in a statement. “If there is one other species out there that shared the world with ‘modern humans’ in Africa, it is very likely there are others. We just need to find them.”

The researchers say Rising Star Cave was dated using a combination of optically stimulated luminescence of sediments with uranium-thorium dating and paleomagnetic analyses of flowstones to establish how the cave sediments relate to the geological timescale in the Dinaledi Chamber. Uranium series and electron spin resonance dating were used to determine the estimated age of H. naledi teeth.

Geologist Hannah Hilbert-Wolf studying difficult-to-reach flowstones in a small side passage in the Dinaledi Chamber.
The second and more recently discovered room in the cave was named the Lesedi Chamber. Lesedi means “light” in the Setswana language. The researchers additionally named the near-complete H. naledi skeleton found in the Lesedi Chamber: Neo. Analysis of Neo and the other remains reveals that H. naledi had features that are shared with some of the earliest known fossil members of our genus, such as Homo rudolfensis and Homo habilis, species that lived two million years ago.

The scientists believe the approximately 5-foot-tall hominid also shared features with modern humans, such as its humanlike hands, wrists, feet, and lower limbs. H. naledi’s anatomy suggests to the researchers that it was both an effective walker and climber.

“Neo” skull of Homo naledi, frontal view.
“Lucy,” the 3.2-million-year-old skeleton of the hominid Australopithecus afarensis (left) and “Neo,” a skeleton of Homo naledi (right) that was dated as being roughly 250,000 years old.
The Lesedi Chamber is about 109 yards from the Dinaledi Chamber, where at least 15 individuals of various ages were found. Both chambers are difficult to access.

“I have never been inside either of the chambers, and never will be,” co-author John Hawks of the University of Wisconsin-Madison and Wits University said in a statement. “In fact, I watched Lee Berger being stuck for almost an hour, trying to get out of the narrow underground squeeze of the Lesedi Chamber.”

Berger eventually had to be extricated using ropes tied to his wrists.

The remoteness of the chambers and the distance between them suggest to the researchers that H. naledi was caching its dead, and likely was controlling fire to see within the deep, dark cave. No tools directly associated with this species of human have been found yet, though.

Chris Stringer, a merit researcher at the Natural History Museum in London, is a leading expert on early human origins. He expressed amazement over the conclusion that H. naledi lived around 300,000 years ago.

“This is astonishingly young for a species that still displays primitive characteristics found in fossils about 2 million years old, such as the small brain size, curved fingers, and form of the shoulder, trunk, and hip joint,” Stringer said. “Yet the wrist, hands, legs and feet look more like those of Neanderthals and modern humans, and the teeth are relatively small and simple, and set in lightly built jawbones.”

He believes that H. naledi could be a “relic species, retaining many primitive traits from a much earlier time.” Homo floresiensis, the so-called "Hobbit" human that survived until relatively recently, came to mind. The diminutive Hobbits are thought to have lived when several other species of humans were in Europe and Asia. H. floresiensis lived on the island of Flores, however, so isolation at that location could help to explain how it remained a distinct species of human.

H. naledi does not appear to have been isolated, so Stringer posed the compelling question: “How did a comparably strange and small-brained species linger on in southern Africa, seemingly alongside more ‘advanced’ humans?”

Homo naledi was very different from archaic humans that lived around the same time. Kabwe skull from Zambia, an archaic human (left), and “Neo” skull of Homo naledi (right).
He also questioned the theory that H. naledi cached its dead, and has not ruled out that accidental or natural processes resulted in the placement of the remains in the two remote cave chambers.

Nevertheless, Stringer said that the discovery and dating of H. naledi “remind us that about 95 percent of the area of Africa is still essentially unexplored for its fossil human record, and its history even within the last 500,000 years may well be as complex as that of Eurasia with its 5 known kinds of humans — Homo erectus, heidelbergensis, neanderthalensis, Denisovans, and floresiensis.”

Read more at Discovery News

May 8, 2017

Gap growing between longest and shortest lifespans in the US

Life expectancy in the US over time, 1980-2014.
Babies born today in 13 US counties have shorter expected lifespans than their parents did when they were born decades ago, according to a new study. For example, life expectancy at birth in Owsley County, Kentucky, was 72.4 in 1980, dropping to 70.2 in 2014.

The gap between counties with the highest and lowest life expectancies is larger now than it was back in 1980 -- more than a 20-year difference in 2014 -- highlighting massive and growing inequality in the health of Americans.

Oglala Lakota County, South Dakota -- a county that includes the Pine Ridge Native American reservation -- had the lowest life expectancy in the country in 2014 at 66.8 years, comparable to countries like Sudan (67.2), India (66.9), and Iraq (67.7).

Clusters of counties with low life expectancies were also identified in Kentucky, West Virginia, Alabama, and several states along the Mississippi River. Several counties in these states and others saw decreases in life expectancy since 1980, while much of the country experienced increases.

However, a cluster of counties in Colorado had the highest life expectancies in the US, with Summit County topping the list at 86.8 years, followed by Pitkin County (86.5) and Eagle County (85.9). By comparison, at the country level Andorra had the highest life expectancy in the world that same year at 84.8, followed by Iceland at 83.3.
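The county figures quoted above make the size of the gap straightforward to check; a minimal sketch, using only the numbers given in this article:

```python
# County life-expectancy figures quoted in the article (years, 2014).
county_life_expectancy = {
    "Summit County, CO": 86.8,
    "Pitkin County, CO": 86.5,
    "Eagle County, CO": 85.9,
    "Oglala Lakota County, SD": 66.8,
}

# Difference between the best- and worst-performing counties listed.
gap = max(county_life_expectancy.values()) - min(county_life_expectancy.values())
print(f"Highest-to-lowest county gap: {gap:.1f} years")  # → 20.0 years
```

A 20-year spread across the counties quoted here is consistent with the study's "more than a 20-year difference" once the full set of 3,000-plus counties is included.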

"These findings demonstrate an urgent imperative, that policy changes at all levels are gravely needed to reduce inequality in the health of Americans," said Dr. Ali Mokdad, an author on the study who leads US county health research at the Institute for Health Metrics and Evaluation (IHME) at the University of Washington in Seattle. "Federal, state, and local health departments need to invest in programs that work and engage their communities in disease prevention and health promotion."

In the study published today (May 8, 2017) in JAMA Internal Medicine, the authors calculated life expectancy by county from 1980 to 2014. They also examined the risk of dying among five age groups, as well as the extent to which risk factors, socioeconomics and race, and health care contribute to inequality.

"Looking at life expectancy on a national level masks the massive differences that exist at the local level, especially in a country as diverse as the United States," explained lead author Laura Dwyer-Lindgren, a researcher at IHME. "Risk factors like obesity, lack of exercise, high blood pressure, and smoking explain a large portion of the variation in lifespans, but so do socioeconomic factors like race, education, and income."

All counties saw a drop in the risk of dying before age 5, and the gap between the counties with the highest and lowest levels of under-5 mortality narrowed since 1980. This is likely a result of health programs and services focused on infants and children.

On the other hand, 11.5% of counties saw an increased risk of death in adults between 25 and 45. In addition, inequality in the probability of dying has risen for people between 45 and 85 since 1980.

The authors also looked at the extent to which several factors contributed to the inequality in life expectancy. Risk factors -- obesity, lack of exercise, smoking, hypertension, and diabetes -- explained 74% of the variation in longevity. Socioeconomic factors, a combination of poverty, income, education, unemployment, and race, were independently related to 60% of the inequality, and access to and quality of health care explained 27%.

This new research suggests that policies and programs that promote healthy behaviors could be the most effective in reducing health inequalities, but that socioeconomic status and other related factors should not be ignored.

"The inequality in health in the United States -- a country that spends more on health care than any other -- is unacceptable," said Dr. Christopher Murray, director of IHME.

"Every American, regardless of where they live or their background, deserves to live a long and healthy life. If we allow trends to continue as they are, the gap will only widen between counties."

In 2014, the United States spent $9,237 per person on health care. Australia, a country with a higher life expectancy in 2014 (82.3), spent $4,032 per person. Japan, where life expectancy was 83.1 -- one of the highest in the world -- spent just $3,816 per person.
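As a rough check on those spending figures (only the per-capita amounts quoted in this article are used; national US life expectancy is not given here, so only spending ratios are computed):

```python
# Per-capita health spending in 2014 (USD), as quoted in the article.
spending = {"United States": 9237, "Australia": 4032, "Japan": 3816}

us = spending["United States"]
for country in ("Australia", "Japan"):
    # Ratio of US spending to the comparison country's spending.
    print(f"US spends {us / spending[country]:.1f}x per person vs {country}")
# → US spends 2.3x per person vs Australia
# → US spends 2.4x per person vs Japan
```

Despite spending well over twice as much per person, the US trailed both countries in life expectancy in 2014.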

Read more at Science Daily

Changes in Early Stone Age tool production have 'musical' ties

Stone arrow head.
New research suggests that advances in the production of Early Stone Age tools had less to do with the evolution of language and more to do with the brain networks involved in modern piano playing.

Around 1.75 million years ago there was a revolutionary innovation in stone tool technology, when early humans moved from making simple Oldowan flake and pebble tools to producing two-sided, shaped tools, such as Acheulian hand axes and cleavers. This advance is thought to reflect an evolutionary change in intelligence and language abilities.

Understanding the link between brain evolution and cognition is a challenge, however, because it is impossible to observe the brain activity of extinct humans. An innovative approach to this challenge is to bring together modern neuroscience methods and material artefacts from the archaeological record.

To understand the brain changes that might have co-evolved with the advance in tool use, researchers in the field of neuroarcheology -- from the University of East Anglia's (UEA) School of Psychology, The Stone Age Institute at Indiana University, and the Department of Anthropology at the University of Iowa -- have been examining the brain activity of modern humans as they learn to make Oldowan and Acheulian stone tools.

To test whether learning with language impacts which brain networks are involved in stone toolmaking, 15 of the 31 participants learned to knap stone via verbal instruction by watching videos of a skilled knapper's hands during individual training sessions. The other 16 participants learned via nonverbal instruction using the same videos, but with the sound turned off.

The researchers found that the co-ordination of visual attention and motor control networks was sufficient to remove simple flakes for Oldowan tools. But the production of Acheulian tools required the integration of visual working memory, auditory and sensorimotor information, and complex action-planning -- the same brain areas that are activated in modern piano playing. These findings, published in the journal Nature Human Behaviour, are a major step forward in understanding the evolution of human intelligence.

Lead author Dr Shelby Putt, from the Stone Age Institute, said: "This work offers novel insights into prehistoric cognition using a cutting-edge neuroimaging technique that allows people to engage in complex actions while we are measuring localized brain activity.

"The study reveals key brain networks that might underlie the shift towards more human-like intelligence around 1.75 million years ago. We think this marked a turning point in the evolution of the human brain, leading to the evolution of a new species of human."

The researchers also reported that brain networks specialised for language in modern humans were only activated during Acheulian tool production when participants learned to make tools in the verbal instruction condition. Since language was likely not available 1.75 million years ago, this suggests that Acheulian tool production did not rely heavily on the evolution of language centres in the brain.

Co-author Prof John Spencer from UEA said: "Our findings do not neatly overlap with prior claims that language and stone tool production co-evolved. There is more support for the idea that working memory and auditory-visual integration networks laid the foundation for advances in stone tool-making.

"It is fascinating that these same brain networks today allow modern humans to perform such behaviours as skilfully playing a musical instrument."

Previous studies have attempted to simulate early tool making, for example, by showing participants images of tool production and then looking at brain activity.

Conducted at the University of Iowa, this is the first neuroimaging study to use a cutting-edge technique -- functional near-infrared spectroscopy (fNIRS) -- to enable researchers to track real-time changes in brain activity as participants made these two types of stone tools.

Summing up the study, co-author Prof Robert Franciscus from the University of Iowa said: "When and how humans became the exceptionally intelligent and language-using species that we are today is still a great mystery. We discovered that the appearance of a type of more complexly shaped stone tool kit in the archaeological record marked an important cognitive shift when our ancestors started to think and act more like humans rather than apes."

Read more at Science Daily