Apr 1, 2017
The dead ones tend to have smaller brains, scientists who performed 3,521 avian autopsies said Wednesday.
What might be called the "bird brain rule" applies to different species, depending on the ratio of grey matter to body mass, they reported in the journal Royal Society Open Science.
Crows, for example, have big brains relative to their size, and have shown a remarkable knack for navigating oncoming traffic.
While picking at road kill on Florida highways, earlier research showed, the scavengers learned to ignore cars and trucks whizzing by them within inches, but would fly away just in time when threatened by a vehicle in their lane.
"I don't know if we can say they are 'smarter,' but they do exhibit cognitive behaviour that makes them likely to survive," lead author Anders Pape Moller, an evolutionary biologist at the French National Center for Scientific Research, told AFP.
Pigeons, by contrast, appear to be less adept at avoiding collisions, a deficiency visible in compressed form on most big-city streets.
Not coincidentally, they have teeny-weeny brains.
More surprisingly, the relation between brain size and traffic accidents also holds within the same species, the study found.
"Common European blackbirds, house sparrows and robins all show this difference in individuals that were hit by cars, and those that were not," Moller said by phone.
Undamaged brains from birds killed in accidents - weighed to within a hundredth of a gram - were consistently smaller, he said.
In total, the researchers examined 251 different species.
The finding raises the question of whether certain species, and certain populations within species, have evolved over the space of dozens of generations such that birds with car-dodging abilities have become more common.
Moller is skeptical.
Recent research has shown that evolutionary changes in many animals - fish and insects, in particular - are occurring in response to climate change and human encroachment on habitat far more quickly than scientists imagined possible.
But given that fast-moving cars have been on the roads for less than a century, it is unlikely that birds have adapted at the species level that fast.
"The other problem is that total mortality due to traffic is not big enough to cause evolutionary change," Moller said.
It would take a very high death toll, in other words, to exert a strong influence on the course of evolution.
Having said that, a lot of birds get killed by moving vehicles.
The State of the Birds annual report estimated in 2014 that 200 million birds perish on the road every year in the United States alone.
Estimates for Europe are lower, but worldwide the grim tally is certainly in the hundreds of millions.
Read more at Discovery News
The event, which has seldom been reported in archaeology, is known as postmortem fetal extrusion. It results from a build-up of gas pressure within the decomposing body.
"In this case, we have a partial expulsion of a 38- to 40-week-old fetus, which was found to be complete and to lie within the birth canal," Deneb Cesana, at the University of Genova, told Seeker.
The remains of the woman and her unborn baby were originally uncovered in 2006, interred with two other young individuals that scientists say were 12 and 3 years old. Only recently has the discovery been fully investigated.
The research was led by Cesana and her colleagues Ole Jørgen Benedictow, a plague historian at the University of Oslo, and Raffaella Bianucci, a bioanthropologist at the University of Warwick in England. Their work appears in the journal Anthropological Science.
The gravesite was found in the cemetery of the "ospitale" (hostel) of San Nicolao di Pietra Colice, located some 45 miles from Genova.
The hostel, which also housed a church, was situated in the Northern Apennines at about 2,600 feet above sea level, and was used as a resting place by travelers and pilgrims heading to Rome and trekking along the two major transit routes of the Liguria region.
"The woman was found lying slightly on her side, while on her left there were two young individuals of unknown sex," said archaeologist Fabrizio Benente, of the University of Genova.
Benente, who was not involved in the anthropological study, directed the excavation campaign with a team of the International Institute of Ligurian Studies and the University of Genova.
"This was the only multiple burial found at the cemetery," he said. "The others were all single graves."
He added that the corpses had been buried simultaneously and directly into the soil, and dated the burial to the second half of the 14th century.
The timing corresponded to the arrival of the Black Death in Genoa in 1348. The researchers hypothesized that the woman and the two children likely died of the bubonic plague.
Bianucci's analysis confirmed that three of the four individuals - the woman, the fetus, and the 12-year-old child - tested positive for the F1 antigen of Yersinia pestis, the bacterium that causes the plague.
"This is the first evidence of Y. pestis infection in 14th-century Liguria," Bianucci said.
"Our finding supports the notion that the contagion, which had originally started from Genoa's port area, progressively spread and disseminated through the main communication routes," she added.
Anthropological investigations carried out and funded by the Archaeological Museum of Sestri Levante and the Archaeological Superintendency of Liguria showed that the woman, who was about 5 feet 11 inches tall, was between 30 and 39 years old when she died.
It emerged that she had several ailments during her life. Her teeth revealed localized periodontitis and linear enamel hypoplasia - a band-like dental defect that denotes childhood physiological stress - while her bones showed evidence of other diseases.
Read more at Discovery News
Mar 31, 2017
But scientists, writing in a new research paper, theorize a Mars-sized exomoon of a gas giant planet and ask whether or not liquid water could be found on its surface.
In our own solar system, the closest analog is Jupiter's Ganymede, the biggest moon in the solar system and about five-sixths the size of Mars.
NASA confirmed in 2015 the presence of a liquid ocean on Ganymede after performing Hubble Space Telescope observations of the moon's auroras, which appear to rock back and forth less than expected with Jupiter's magnetic field. The space agency said the attenuation is likely due to a salty ocean under Ganymede's surface.
As for this theoretical Mars-sized exomoon, the picture is murky. The scientists considered energy sources such as stellar radiation (which changes as a function of distance to the star), the stellar reflected light from a Jupiter-sized planet on the moon, the planet's own thermal emission on the moon, and the tidal heating inside the moon that is generated due to the changing gravitational pull of the planet. (This tidal heating would be most pronounced if the moon had an eccentric orbit, like the volcanic moon Io has around Jupiter.)
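Of the energy sources listed above, the stellar irradiation term is the simplest to sketch from first principles. The snippet below is only an illustrative back-of-the-envelope calculation of that one ingredient (the constants and the inverse-square form are standard physics), not the paper's full model, which also folds in reflected light, planetary thermal emission, and tidal heating.

```python
import math

L_SUN = 3.828e26     # solar luminosity, W
AU = 1.495978707e11  # astronomical unit, m

def stellar_flux(luminosity_w, distance_au):
    """Inverse-square stellar irradiation (W/m^2) at a given orbital distance."""
    return luminosity_w / (4.0 * math.pi * (distance_au * AU) ** 2)

# At 1 AU this recovers the solar constant (~1361 W/m^2); at the 3 AU
# threshold discussed in the study, the flux is nine times weaker
# (~151 W/m^2), which is why the other heat sources become so important
# for an exomoon's energy budget that far out.
for d in (1.0, 3.0):
    print(f"{d:.0f} AU: {stellar_flux(L_SUN, d):.0f} W/m^2")
```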
It's known that tidal heating rates decrease if a moon is molten inside, because lava creates an inherent negative feedback mechanism in which the heating effectively switches off and the moon's interior cools down. This is called the "tidal thermostat effect," said Rene Heller, a co-author of the paper and an astrophysicist at the Max Planck Institute for Solar System Research in Germany, in an email.
"We investigate, for the first time, the interplay of all the possible exomoon heat sources as a function of various distances from the host star," he added. "Actually, we even consider two possible types of host stars: a sun-like star, and a red dwarf star (an M dwarf)."
For a sun-like star, the authors found that any moon around a gas giant beyond three astronomical units, or three Earth-sun distances, would have a high enough energy flux to stop the tidal thermostat effect from happening. But if the moon is volatile enough, it could have global volcanism - just like what we see on Io.
Heller described this situation as "dangerous" for organisms.
"They might have lots of liquid surface water, but their surfaces could at the same time be blotched with devastating volcanoes," he wrote. "Nevertheless, we illustrate that they could be habitable given the right amount of tidal heating, and we show at which distances to their planets these moons would need to be."
M-dwarfs are a common target for exoplanet searches because they are smaller and dimmer, making it easier to see planets passing across their surfaces, or the effect of planets tugging on the star itself. But for exomoons, it's even less clear how habitable they would be in such a system. "Moons cannot be stable in the very inner regions of the stellar habitable zone," Heller said.
The best examples for tidally heated bodies in our own solar system are all moons: Jupiter's Io and Europa, as well as Saturn's Enceladus. While Europa and Enceladus are strongly suspected to have oceans underneath an icy surface, Heller pointed out his research is more focused on habitability on the surface of the moon. A better analog, he said, might be Saturn's moon Titan - but with a much warmer surface. Titan has a thick orange atmosphere, as well as liquid hydrocarbon lakes.
"Due to observational selection effects, which will prefer big moons around low-mass planets, I think the first exomoon will be unlike anything we know from the solar system," Heller said.
Read more at Discovery News
Using the Atacama Large Millimeter/submillimeter Array (ALMA), a large radio observatory in Chile, researchers have taken "baby pictures" of Milky Way-like galaxies when their star formation was just beginning to accelerate. At that time, the universe was nearly two billion years old. Since light moves at a finite speed, looking deep into the universe also means looking back in time, and these young galaxies are about 12 billion light-years away from Earth. The cosmos itself is about 13.8 billion years old.
Looking at two of the ancient galaxies in infrared wavelengths, the researchers saw that very early in the galaxies' development, they had what look like extended discs of hydrogen gas that far outpaced the smaller, star-forming regions within. These galaxies also already had rotating discs of gas and dust, and were forming stars at a relatively rapid pace: up to 100 solar masses (the mass of Earth's sun) per year.
Officially designated ALMA J081740.86+135138.2 and ALMA J120110.26+211756.2, the galaxies were observed using light from two quasars in the background. Quasars are supermassive black holes surrounded by bright accretion discs, and are themselves thought to be the centers of particularly active galaxies.
The glowing carbon also offered another clue to the galaxies' structure, because its position was offset from the hydrogen gases that the astronomers initially saw, as revealed by the quasars' shine. That means that the galaxies' gases extend far from the dense carbon regions, suggesting each galaxy has a large halo of hydrogen encircling it, the researchers said.
Looking at the foreground objects that a shining background quasar could reveal, "we had expected we would see faint emissions right on top of the quasar, and instead we saw bright galaxies at large separations from the quasar," J. Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and co-author on the new study, said in a statement.
The data also showed that the young galaxies have already begun rotating, which is a hallmark of spiral galaxies like the Milky Way, the study said.
The effort to find such early stage galaxies began in 2003, when Prochaska first worked on the idea of using the spectra of quasars, the wavelengths of light they emit, to reveal those of galaxies in the foreground. Such arrangements are called damped Lyman-alpha systems, because the hydrogen gas blocks certain wavelengths of light from the quasar, revealing the gas's presence and extent.
Read more at Discovery News
In addition to revealing greater detail of what the faces of tyrannosaurs looked like, the new dinosaur — called Horner’s Frightful Lizard (Daspletosaurus horneri) — helps to solve many other animal-related mysteries, such as why tyrannosaurs tended to have small arms, why today’s birds lack lips, and why alligators and crocodiles possess such sensitive snouts.
The findings are published in the journal Scientific Reports.
“We hypothesize that tyrannosaurs, including D. horneri and T. rex, had a tactile sensory system that was comparable, if not identical, to what is seen in living crocodylians,” lead author Thomas Carr, a vertebrate paleontologist at Carthage College, told Seeker, referring to an order of species that includes alligators, crocodiles, caimans, and the gharial.
“Work done by biologists on crocodylians has found that their faces were extremely sensitive to touch, rivaling that of human fingertips,” he continued. “The sensitivity comes from a combination of multiple branches of the trigeminal nerve that innervate the skin at specialized structures of thin, dome-like coverings of skin where neurons are in high concentration.”
Carr and his team made this determination after studying the new dinosaur’s well-preserved remains, which were unearthed at a site called the Two Medicine Formation in Montana. The remains include the skull and skeleton of a juvenile, the skull and skeleton of an adult, as well as other fossils from the species, which lived 75.1 to 74.4 million years ago.
The researchers conducted work that co-author Jayc Sedlmayr of the Louisiana State University Health Sciences Center New Orleans described as “arm deep in blood and guts,” dissecting birds, alligators, and crocodiles to see how facial nerves and arteries leave traces on bones. The scientists then used this information to flesh out the face of Horner’s Frightful Lizard that included small horns above the eyes.
The dinosaur’s super-sensitive snout likely would have been important “for prey manipulation, feeding, and for reproduction, such as building a nest, manipulating eggs, and safely moving hatchlings,” co-author David Varricchio, a professor at Montana State University, told Seeker.
He added that the snout could therefore function like an arm, given that “the small arms of tyrannosaurs were likely not effective at any of these tasks.”
The tender snout also appears likely to have served a sensual purpose when it came to mating.
“In courtship, tyrannosaurids might have rubbed their sensitive faces together as a vital part of pre-copulatory play,” says the report.
Horner’s Frightful Lizard used its snout skills to help devour prey that included horned, crested duckbill and dome-headed dinosaurs, in addition to much smaller carnivorous dinos, the researchers believe.
Carr said that “killing would have been the work of the jaws,” with their sensitive margins likely giving the hunter “important information regarding how hard to bite, whether or not the prey was still living, and the locations of soft and hard parts.”
The dinosaur’s similarities with today’s crocodiles and alligators are evidence of a genetic inheritance from a shared ancestor.
“Crocodylians are not dinosaurs, and dinosaurs are not crocodylians," Carr clarified, "but they do share a common ancestor.”
Birds, on the other hand, are considered to be living dinosaurs. Birds probably inherited their lack of lips from their equally lipless dino ancestors, according to the scientists.
The researchers further believe that Horner’s Frightful Lizard evolved via a rare form of speciation known as anagenesis, where one species gradually morphs into a new one over time.
Co-author Jason Moore of the University of New Mexico explained that the most common form of speciation, known as cladogenesis, "most frequently occurs by a small population of one species being separated from the remainder of the members of that species and being subject to a different set of environmental conditions, and hence evolving different sets of traits."
Read more at Discovery News
The study, published in Quaternary Science Reviews, examined about 500 years of Maya history, from 363 to 888 AD.
This is the so-called Classic period in which the Mesoamerican civilization boomed, with its people constructing extensive cities and massive pyramids, as well as developing one of the earliest writing systems in the Americas.
Indeed, the Maya began a tradition of recording historical events on stone monuments.
“The inscriptions that have been translated provide often remarkably detailed accounts of myths and political events, including conflicts between city-states,” said the report, which was authored by Mark Collard, Canada research chair at Simon Fraser University in British Columbia and professor of archaeology at the University of Aberdeen in Scotland, along with Christopher Carleton and David Campbell, both of Simon Fraser University.
The researchers cataloged inscriptions on monuments related to violent struggles and compiled temperature and rainfall records for the regions inhabited during the Classic period: the lowlands of the Yucatán Peninsula, which includes parts of southern Mexico, Guatemala, and Belize.
A total of 144 unique conflicts emerged from inscriptions on monuments from more than 30 major Maya centers. The research team then compared conflict records to palaeoclimate data, and the correspondence was impressive.
“The change in conflict levels between 350 and 900 AD was considerable,” they wrote. “The number of conflicts increased from 0 to 3 every 25 years in the first two centuries to 24 conflicts every 25 years near the end of the period.”
“There's been quite a bit of discussion about the impact of climate change on the Classic Maya, but this discussion has focused on drought,” Collard told Seeker. “Our study suggests that we've been looking in the wrong place and that the impact of temperature needs to be looked at more closely.”
Experts think that there are two potential mechanisms by which increases in temperature can lead to greater conflict.
One is psychological — when temperatures rise, tempers shorten. Several studies suggest it is possible that increased average summer temperatures made the Classic Maya more bellicose.
The other mechanism, which Collard and his colleagues find more likely and compelling, is economic, and involves the staple crop for the Classic Maya: maize.
Throughout the Classic period, average temperature fluctuated between 82.4 degrees Fahrenheit (28 degrees Celsius) and 84.2°F (29°C). During periods when the temperature was around 82.4°F (28°C) or less, maize yields were reasonably stable, with little or no food shortage and little conflict.
But as temperature continued to rise and the region experienced days at or above 86°F (30°C), crop shortfalls occurred frequently. Large-scale deforestation throughout the Classic period caused by urban expansion worsened the effect, increasing regional temperatures by reducing soil moisture availability.
The result was food shortage, which led to spiking levels of conflict.
“We had originally thought that it all came down to starvation, but after talking with Maya specialists, we decided that wasn't convincing,” Collard said.
He explained that maize would have been difficult to transport, in which case the idea of attacking neighbors for food did not seem very likely.
“Instead, it's probably better to consider the increase in warfare in a way that we often think about warfare today — namely as a tool for the elite to maintain support,” Collard said.
With declining maize yields, a ruler could not have relied on opulent festivals or fed large labor forces needed to build impressive monuments. Consequently, going to war more often would have been an effective tactic to maintain status, prestige, and power.
“I think of it as being similar to the way that some modern political leaders seem to use conflict with neighbors to distract from problems within their country,” Collard said.
Eventually, the growth in conflict became explosive.
The researchers believe the findings have implications for the debate about contemporary climate change. Concern is growing that the effects of climate change could increase violence within and between human societies.
The Intergovernmental Panel on Climate Change has cautioned that climate change will exacerbate conflict at a range of scales, from inter-personal violence to civil war, while the US Department of Defense has classified climate change as a threat multiplier, suggesting that it could lead to political and social unrest and increased terrorism.
“Our study shows that small year-to-year changes in climate can result in large, negative effects over the long term,” Collard said. “This is a problem for us, humans, because most of us are oriented towards the short term.”
“We run the risk of ignoring changes that will affect our children and grandchildren, because we can't perceive those changes,” he added.
Read more at Discovery News
Mar 30, 2017
An illustration of the MAVEN spacecraft.
"We've determined that most of the gas ever present in the Mars atmosphere has been lost to space," said Bruce Jakosky, principal investigator for MAVEN and a professor at the Laboratory for Atmospheric and Space Physics (LASP). "The team made this determination from the latest result, which reveals that about 65 percent of the argon that was ever in the atmosphere has been lost to space."
Jakosky is lead author of a paper on this research to be published in Science on Friday. Marek Slipski, a LASP graduate student, co-authored the study.
MAVEN team members had previously announced measurements showing that atmospheric gas was being lost to space and had described the processes by which the atmosphere was being stripped away. The present analysis uses measurements of today's atmosphere to give the first estimate of how much gas has been removed over time.
Liquid water, essential for life, is not stable on Mars' surface today because the atmosphere is too cold and thin to support it. However, evidence such as features resembling dry riverbeds and minerals that only form in the presence of liquid water indicates the ancient Martian climate was much different -- warm enough for water to flow on the surface for extended periods.
There are many ways a planet can lose some of its atmosphere. For example, chemical reactions can lock gas away in surface rocks or an atmosphere can be eroded by radiation and wind from the planet's parent star. The new result reveals that solar wind and radiation were responsible for most of the atmospheric loss on Mars and that the depletion was enough to transform the Martian climate. The solar wind is a thin stream of electrically conducting gas constantly blowing from the surface of the sun.
Young stars have far more intense ultraviolet radiation and winds, so atmospheric loss by these processes was likely much greater early in Mars' history, and these processes may have been the dominant ones controlling the planet's climate and habitability, according to the team. It's possible that microbial life could have existed at the surface early in Mars' history. As the planet cooled off and dried up, any life could have been driven underground or forced into occasional or rare surface oases.
Jakosky and his team got the result by measuring the atmospheric abundance of two different isotopes of argon gas. Isotopes are atoms of the same element with different masses. Because the lighter of the two isotopes escapes to space more readily, it will leave the gas remaining behind enriched in the heavier isotope. The team used this enrichment together with how it varied with altitude in the atmosphere to estimate what fraction of the atmospheric gas has been lost to space.
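The logic of that isotope argument can be sketched with a simple Rayleigh-distillation model, in which preferential escape of the lighter isotope progressively enriches the remaining gas in the heavier one. The numbers below are illustrative assumptions for demonstration, not MAVEN's actual retrieval, which also uses how the ratio varies with altitude.

```python
def fraction_remaining(r_now, r_initial, alpha):
    """Rayleigh fractionation for a reservoir losing its lighter isotope
    preferentially.

    r = N_light / N_heavy; alpha (< 1) is the escape efficiency of the
    heavy isotope relative to the light one.  Distillation gives
    r_now / r_initial = f ** (1 - alpha), where f is the fraction of the
    light isotope still present, so we invert for f.
    """
    return (r_now / r_initial) ** (1.0 / (1.0 - alpha))

# Illustrative inputs (assumptions, not the paper's values): a roughly
# solar initial 36Ar/38Ar ratio of 5.3, a present-day Martian ratio
# near 4.2, and an assumed net fractionation factor of 0.8.
f = fraction_remaining(4.2, 5.3, 0.8)
print(f"fraction of argon remaining ~ {f:.2f}, lost ~ {1 - f:.2f}")
```

With these toy inputs the model already lands in the same regime as the published figure, losing roughly two-thirds of the argon; the point is that a modest shift in the isotope ratio implies a large cumulative loss.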
As a "noble gas," argon does not react chemically with anything, so it cannot be sequestered in rocks; the only process that removes it to space is a physical one called "sputtering" by the solar wind, in which ions picked up by the solar wind slam into Mars at high speeds and physically knock atmospheric gas into space. Because argon can be removed only by sputtering, once the team determined how much argon had been lost this way, they could use the efficiency of sputtering to determine the sputtering loss of other atoms and molecules, including carbon dioxide (CO2).
CO2 is of interest because it is the major constituent of Mars' atmosphere and because it's an efficient greenhouse gas that can retain heat and warm the planet.
"We determined that the majority of the planet's CO2 also has been lost to space by sputtering," said Jakosky. "There are other processes that can remove CO2, so this gives the minimum amount of CO2 that's been lost to space."
The team made its estimate using data on the Martian upper atmosphere from MAVEN's Neutral Gas and Ion Mass Spectrometer (NGIMS) instrument supported by measurements from the Martian surface made by NASA's Sample Analysis at Mars (SAM) instrument on board the Curiosity rover.
"The combined measurements enable a better determination of how much Martian argon has been lost to space over billions of years," said Paul Mahaffy of NASA's Goddard Space Flight Center in Greenbelt, Maryland. Mahaffy, a co-author of the paper, is principal investigator on the SAM instrument and lead on the NGIMS instrument, both of which were developed at NASA Goddard.
Read more at Science Daily
Image caption: left, notched raven bone from the Zaskalnaya VI Neanderthal site, Crimea; center, experimental notching of a bird bone; right, sequences of experimentally made notches compared with those from Zaskalnaya VI.
Majkic and colleagues conducted a mixed-methods study to assess whether the two extra notches on the ZSK raven bone were made by Neanderthals with the intention of making the final series of notches appear to be evenly spaced. First, researchers conducted a multi-phase experiment where recruited volunteers were asked to create evenly spaced notches in domestic turkey bones, which are similar in size to the ZSK raven bone. Morphometric analyses reveal that the equal spacing of the experimental notches was comparable to the spacing of notches in the ZSK raven bone, even when adjusted for errors in human perception. Archeological specimens featuring aligned notches from different sites were also analyzed and compared with the ZSK raven bone specimen.
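One simple way to quantify "evenly spaced," in the spirit of the study's morphometric comparison, is the coefficient of variation (CV) of the gaps between consecutive notches: the lower the CV, the more regular the series. The positions below are hypothetical numbers for illustration, not the study's measurements.

```python
from statistics import mean, stdev

def spacing_evenness(positions):
    # Given notch positions along a bone (mm), compute the coefficient
    # of variation of the gaps between consecutive notches.
    # CV = 0 means perfectly even spacing; larger CV means irregular.
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return stdev(gaps) / mean(gaps)

# Hypothetical measurements for illustration (not the study's data):
even = [0.0, 3.1, 6.0, 9.1, 12.0, 15.1, 18.0]    # near-regular series
uneven = [0.0, 1.5, 6.0, 7.2, 12.0, 13.0, 18.0]  # same span, irregular
print(spacing_evenness(even), spacing_evenness(uneven))
```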
Researchers concluded that the two extra notches on the ZSK raven bone may have been made by Neanderthals intentionally to create a visually consistent, and perhaps symbolic, pattern.
A series of recent discoveries of altered bird bones across Neanderthal sites has led many researchers to argue that the objects were used as personal ornaments rather than being byproducts of butchery. But this study is the first to provide direct evidence supporting a symbolic interpretation of intentional modifications on a bird bone.
From Science Daily
The X-ray source was originally discovered in October 2014 by Bin Luo, a Penn State postdoctoral researcher; Niel Brandt, the Verne M. Willaman Professor of Astronomy and Astrophysics and professor of physics at Penn State; and Franz Bauer, an associate professor of astrophysics at the Pontifical Catholic University of Chile in Santiago, Chile. Luo has since moved from Brandt's group to become a professor of astronomy and space science at Nanjing University in China, and Bauer had been a postdoctoral researcher in Brandt's group from 2000 to 2003. The data were gathered using the Advanced CCD Imaging Spectrometer on Chandra, an instrument conceived and designed by a team led by Penn State Evan Pugh Professor Emeritus of Astronomy and Astrophysics Gordon Garmire.
"This flaring source was a wonderful surprise bonus that we accidentally discovered in our efforts to explore the poorly understood realm of the ultra-faint X-ray universe," said Brandt. "We definitely `lucked out' with this find and now have an exciting new transient phenomenon to explore in future years."
Located in a region of the sky known as the Chandra Deep Field-South (CDF-S), the X-ray source has remarkable properties. Prior to October 2014, this source was not detected in X-rays, but then it erupted and became at least a factor of 1,000 brighter in a few hours. After about a day, the source had faded completely below the sensitivity of Chandra.
Thousands of hours of legacy data from the Hubble and Spitzer Space Telescopes helped determine that the event came from a faint, small galaxy about 10.7 billion light years from Earth. For a few minutes, the X-ray source produced a thousand times more energy than all the stars in this galaxy.
"Ever since discovering this source, we've been struggling to understand its origin," said Bauer. "It's like we have a jigsaw puzzle but we don't have all of the pieces."
Two of the three main possibilities to explain the X-ray source invoke gamma-ray burst (GRB) events. GRBs are jetted explosions triggered either by the collapse of a massive star or by the merger of a neutron star with another neutron star or a black hole. If the jet is pointing towards the Earth, a burst of gamma-rays is detected. As the jet expands, it loses energy and produces weaker, more isotropic radiation at X-ray and other wavelengths.
Possible explanations for the CDF-S X-ray source, according to the researchers, are a GRB that is not pointed toward Earth, or a GRB that lies beyond the small galaxy. A third possibility is that a medium-sized black hole shredded a white dwarf star.
"None of these ideas fits the data perfectly," said co-author Ezequiel Treister, also of the Pontifical Catholic University, "but then again, we've rarely if ever seen any of the proposed possibilities in actual data, so we don't understand them well at all."
The mysterious X-ray source was not seen at any other time during the two and a half months of exposure that Chandra has accumulated on the CDF-S region over the past 17 years. Moreover, no similar events have yet been found in Chandra observations of other parts of the sky.
This X-ray source in the CDF-S has different properties from the as yet unexplained variable X-ray sources discovered in the elliptical galaxies NGC 5128 and NGC 4636 by Jimmy Irwin of the University of Alabama and collaborators. In particular, the CDF-S source is likely associated with the complete destruction of a neutron star or white dwarf, and is roughly 100,000 times more luminous in X-rays. It is also located in a much smaller and younger host galaxy, and is only detected during a single, several-hour burst.
"We may have observed a completely new type of cataclysmic event," said co-author Kevin Schawinski, of ETH Zurich in Switzerland. "Whatever it is, a lot more observations are needed to work out what we're seeing."
Additional highly targeted searches through the Chandra archive and those of ESA's XMM-Newton and NASA's Swift satellite may uncover more examples of this type of variable object that have until now gone unnoticed. Future X-ray observations by Chandra and other X-ray observatories such as the planned Chinese Einstein Probe also may reveal the same phenomenon from other objects.
Read more at Science Daily
This image shows the skull of the venomous species Meiacanthus grammistes.
When the researchers did a proteomic analysis of extracted fang blenny venom, they found three venom components -- a neuropeptide that occurs in cone snail venom, a lipase similar to one from scorpions, and an opioid peptide. And, surprisingly, when they injected the blenny venom into lab mice, the mice didn't show any signs of pain.
"For the fang blenny venom to be painless in mice was quite a surprise," says study co-author Bryan Fry of University of Queensland. "Fish with venomous dorsal spines produce immediate and blinding pain. The most pain I've ever been in other than the time I broke my back was from a stingray envenomation. 'Sting'ray sounds so benign. They don't sting. They are pure hell."
Fang blenny venom, however, seems to have a very different effect on its victims. Since the researchers used rodents for the pain test, they can't entirely rule out the possibility of blenny venom causing pain in fish, but it seems plausible that the neuropeptide and opioid components may cause a sudden drop in blood pressure, most likely leaving the blenny's attacker disorientated and unable to give chase. "By slowing down potential predators, the fang blennies have a chance to escape," says Fry. "While the feeling of pain is not produced, opioids can produce sensations of extremely unpleasant nausea and dizziness [in mammals]."
Extracting the tiny fish's venom for chemical tests was no easy feat. When blenny fish bite an attacker, they only inject a tiny amount of venom, making it extremely difficult to collect enough for proteomic analyses. The researchers ended up using a quirky but labor-intensive method for extracting blenny venom: they would pluck the little fish out of their tanks, dangle a cotton swab in front of them so that the blenny would bite it, and then suspend the cotton swabs in a solution that drew out the venom (after putting the fish back in the tank).
Nonvenomous fang blennies and other small fish capitalize on the venom's success by mimicking venomous fang blennies' colors and patterns. "Predatory fish will not eat those fishes because they think they are venomous and going to cause them harm, but this protection provided also allows some of these mimics to get very close to unsuspecting fish to feed on them, by picking on their scales as a micropredator," says study co-author Nicholas Casewell of the Liverpool School of Tropical Medicine. "All of this mimicry, all of these interactions at the community level, ultimately are stimulated by the venom system that some of these fish have."
Another surprise from the study was the evidence suggesting that fang blenny fangs evolved before the venom. "This is pretty unusual, because often what we've found -- for example, in snakes -- is that some sort of venom secretions evolved first, before the elaborate venom delivery mechanism evolved," says Casewell. Evolution favored the tiny fish with large teeth first and later found a way to enhance them with venom.
"These unassuming little fish have a really quite advanced venom system, and that venom system has a major impact on fishes and other animals in its community," says Casewell.
Read more at Science Daily
|Melt ponds cover vast areas in the Arctic.|
Melt ponds provide more light and heat for the ice and the underlying water, but now it turns out that they may also have a more direct and potentially important influence on life in the Arctic waters.
Mats of algae and bacteria can develop in the melt ponds, providing food for marine creatures. This is the conclusion of researchers writing in the journal Polar Biology.
Own little ecosystems
- The melt ponds can form their own little ecosystem. When all the sea ice melts during the summer, algae and other organisms from melt ponds are released into the surrounding seawater. Some of this food is immediately ingested by creatures living high up in the water column. Other food sinks to the bottom and gets eaten by seabed dwellers, explains Heidi Louise Sørensen, lead author of the scientific article, who continues:
- Given that larger and larger areas of melt ponds are being formed in the Arctic, we can expect the release of more and more food for creatures in the polar sea.
Heidi Louise Sørensen studied the phenomenon in a number of melt ponds in North-Eastern Greenland as part of her PhD thesis at the University of Southern Denmark (SDU).
Bo Thamdrup and Ronnie Glud of SDU, and Erik Jeppesen and Søren Rysgaard of Aarhus University also contributed to the work.
Food for seals and sea cucumbers
In the upper part of the water column it is mainly krill and copepods that benefit from the nutrient-rich algae and bacteria from melt ponds. These creatures are eaten by various larger animals, ranging from amphipods to fish, seals and whales. Deeper down, it is seabed dwellers such as sea cucumbers and brittle stars that benefit from the algae that sink down.
For some time now, researchers have known that simple biological organisms can develop in melt ponds -- they may even support very diverse communities. But so far it has been unclear why some ponds teem with organisms while others hold virtually none.
According to the new study, 'nutrients' is the keyword. When nutrients such as phosphorus and nitrogen find their way into a melt pond, entire communities of algae and micro-organisms can flourish.
From the Siberian tundra
Nutrients can find their way into a melt pond in a variety of ways. For example, they can be washed in by waves of seawater; they can be carried by dust storms from the mainland (for example, from the Siberian tundra); or they can be washed out onto the ice along with soil from the coast when it rains.
Finally, migratory birds or other larger animals resting on the ice can leave behind sources of nutrients.
- Climate change is accompanied by more storms and more precipitation, and we must expect that more nutrients will be released from the surroundings into the melt ponds. These conditions, plus the fact that the total area of melt ponds is increasing, can contribute to increased productivity in plant and animal life in the Arctic seas, says Professor Ronnie Glud of the Department of Biology at SDU.
Warmer and more windy
There are further factors that may potentially contribute to increased productivity in the Arctic seas:
- When the sea ice disappears, light can penetrate down into the water.
- When it gets warmer on the mainland, this creates more melt water, which can flow out into the sea, carrying nutrients in its wake.
What the researchers did
Six melt ponds in Young Sound in North-Eastern Greenland were selected: two natural and four artificial basins. Phosphorus and nitrogen (nutrients also found in common garden fertilizer) were added in various combinations to four ponds, while two served as controls. Over a period of up to 13 days, Heidi Louise Sørensen measured many different parameters in the melt water, including the content of chlorophyll a, the pigment that enables algae to absorb energy from light. The chlorophyll content of the phosphorus- and nitrogen-enriched ponds was 2 to 10 times higher than in the control ponds, indicating an increased content of algae.
Read more at Science Daily
Mar 29, 2017
|Brain size in primates is predicted by diet, an analysis by a team of NYU anthropologists indicates. Above, a chimpanzee eating fruit.|
The findings, which appear in the journal Nature Ecology and Evolution, reinforce the notion that both human and non-human primate brain evolution may be driven by differences in feeding rather than in socialization.
"Are humans and other primates big-brained because of social pressures and the need to think about and track our social relationships, as some have argued?" asks James Higham, an assistant professor in NYU's Department of Anthropology and a co-author of the new analysis. "This has come to be the prevailing view, but our findings do not support it -- in fact, our research points to other factors, namely diet."
"Complex foraging strategies, social structures, and cognitive abilities are likely to have co-evolved throughout primate evolution," adds Alex DeCasien, an NYU doctoral candidate and lead author of the study. "However, if the question is: 'Which factor, diet or sociality, is more important when it comes to determining the brain size of primate species?' then our new examination suggests that factor is diet."
The social brain hypothesis sees social complexity as the primary driver of primate cognitive complexity, suggesting that social pressures ultimately led to the evolution of the large human brain. While some studies have shown positive relationships between relative brain size and group size, other studies which examined the effects of different social or mating systems have revealed highly conflicting results, raising questions about the strength of the social brain hypothesis.
In the Nature Ecology and Evolution study, the researchers, who also included Scott Williams, an assistant professor of anthropology at NYU, examined more than 140 primate species -- or more than three times as many as previous studies -- and incorporated more recent evolutionary trees, or phylogenies. They took into account food consumption across the studied species -- folivores (leaves), frugivores (fruit), frugivores/folivores, and omnivores (addition of animal protein) -- as well as several measures of sociality, such as group size, social system, and mating system.
Their results showed that brain size is predicted by diet rather than by the various measures of sociality -- after controlling for body size and phylogeny. Notably, frugivores and frugivore/folivores exhibit significantly larger brains than folivores and, to a lesser extent, omnivores show significantly larger brains than folivores.
The researchers caution that the results do not reveal an association between brain size and fruit or protein consumption on a within-species level; rather, they note, they are evidence of the cognitive demands required by different species to obtain certain foods.
Read more at Science Daily
But unlike an onion, peeling back Earth's layers to better explore planetary dynamics isn't an option, forcing scientists to make educated guesses about our planet's inner life based on surface-level observations. Clever imaging techniques devised by computational scientists, however, offer the promise of illuminating Earth's subterranean secrets.
Using advanced modeling and simulation, seismic data generated by earthquakes, and one of the world's fastest supercomputers, a team led by Jeroen Tromp of Princeton University is creating a detailed 3-D picture of Earth's interior. Currently, the team is focused on imaging the entire globe from the surface to the core-mantle boundary, a depth of 1,800 miles.
These high-fidelity simulations add context to ongoing debates related to Earth's geologic history and dynamics, bringing prominent features like tectonic plates, magma plumes, and hotspots into view. In 2016, the team released its first-generation global model. Created using data from 253 earthquakes captured by seismograms scattered around the world, the team's model is notable for its global scope and high scalability.
"This is the first global seismic model where no approximations -- other than the chosen numerical method -- were used to simulate how seismic waves travel through Earth and how they sense heterogeneities," said Ebru Bozdag, a coprincipal investigator of the project and an assistant professor of geophysics at the University of Nice Sophia Antipolis. "That's a milestone for the seismology community. For the first time, we showed people the value and feasibility of running these kinds of tools for global seismic imaging."
The project's genesis can be traced to a seismic imaging theory first proposed in the 1980s. To fill in gaps within seismic data maps, the theory posited a method called adjoint tomography, an iterative full-waveform inversion technique. This technique leverages more information than competing methods, using forward waves that travel from the quake's origin to the seismic receiver and adjoint waves, which are mathematically derived waves that travel from the receiver to the quake.
The problem with testing this theory? "You need really big computers to do this," Bozdag said, "because both forward and adjoint wave simulations are performed in 3-D numerically."
In 2012, just such a machine arrived in the form of the Titan supercomputer, a 27-petaflop Cray XK7 managed by the US Department of Energy's (DOE's) Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at DOE's Oak Ridge National Laboratory. After trying out its method on smaller machines, Tromp's team gained access to Titan in 2013 through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.
Working with OLCF staff, the team continues to push the limits of computational seismology to deeper depths.
Stitching together seismic slices
When an earthquake strikes, the release of energy creates seismic waves that often wreak havoc for life at the surface. Those same waves, however, present an opportunity for scientists to peer into the subsurface by measuring vibrations passing through Earth.
As seismic waves travel, seismograms can detect variations in their speed. These changes provide clues about the composition, density, and temperature of the medium the wave is passing through. For example, waves move slower when passing through hot magma, such as mantle plumes and hotspots, than they do when passing through colder subduction zones, locations where one tectonic plate slides beneath another.
Each seismogram represents a narrow slice of the planet's interior. By stitching many seismograms together, researchers can produce a 3-D global image, capturing everything from magma plumes feeding the Ring of Fire, to Yellowstone's hotspots, to subducted plates under New Zealand.
This process, called seismic tomography, works in a manner similar to imaging techniques employed in medicine, where 2-D x-ray images taken from many perspectives are combined to create 3-D images of areas inside the body.
In the past, seismic tomography techniques have been limited in the amount of seismic data they can use. Traditional methods forced researchers to make approximations in their wave simulations and restrict observational data to major seismic phases only. Adjoint tomography based on 3-D numerical simulations employed by Tromp's team isn't constrained in this way. "We can use the entire data -- anything and everything," Bozdag said.
Running its GPU version of the SPECFEM3D_GLOBE code, Tromp's team used Titan to apply full-waveform inversion at a global scale. The team then compared these "synthetic seismograms" with observed seismic data supplied by the Incorporated Research Institutions for Seismology (IRIS), calculating the difference and feeding that information back into the model for further optimization. Each repetition of this process improves global models.
"This is what we call the adjoint tomography workflow, and at a global scale it requires a supercomputer like Titan to be executed in a reasonable timeframe," Bozdag said. "For our first-generation model, we completed 15 iterations, which is actually a small number for these kinds of problems. Despite the small number of iterations, our enhanced global model shows the power of our approach. This is just the beginning, however."
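As a rough illustration, the inner loop of an adjoint-style workflow can be sketched in a few lines. This toy replaces the team's 3-D wave solver (SPECFEM3D_GLOBE) with a simple linear operator and the adjoint simulation with the corresponding least-squares gradient; all names, dimensions, and values below are invented for illustration and are not taken from the project's actual code.

```python
import numpy as np

# Toy sketch of an iterative "forward simulate, compare, back-propagate,
# update" loop, loosely mirroring the adjoint tomography workflow described
# in the article. A linear operator G stands in for wave propagation.

rng = np.random.default_rng(0)
n_params, n_data = 20, 50
G = rng.normal(size=(n_data, n_params))        # stand-in for the wave solver
m_true = rng.normal(size=n_params)             # "true" Earth model (unknown)
d_obs = G @ m_true                             # observed seismograms

m = np.zeros(n_params)                         # starting model
step = 1.0 / np.linalg.norm(G, 2) ** 2         # stable gradient-descent step
misfits = []
for iteration in range(15):                    # 15 iterations, as in the article
    d_syn = G @ m                              # forward: synthetic seismograms
    residual = d_syn - d_obs                   # compare with observations
    misfits.append(0.5 * residual @ residual)  # least-squares misfit
    gradient = G.T @ residual                  # "adjoint": back-propagate residual
    m -= step * gradient                       # model update

print(f"misfit reduced from {misfits[0]:.3f} to {misfits[-1]:.3f}")
```

Each pass through the loop reduces the misfit between synthetic and observed data, which is the essential behavior of the real workflow, however different the scale.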
Automating to augment
For its initial global model, Tromp's team selected earthquake events that registered between 5.8 and 7 on the Richter scale -- a standard measure of earthquake magnitude. That range can be extended slightly to include more than 6,000 earthquakes in the IRIS database -- about 20 times the amount of data used in the original model.
Getting the most out of all the available data requires a robust automated workflow capable of accelerating the team's iterative process. Collaborating with OLCF staff, Tromp's team has made progress toward this goal.
For the team's first-generation model, Bozdag carried out each step of the workflow manually, taking about a month to complete one model update. Team members Matthieu Lefebvre, Wenjie Lei, and Youyi Ruan of Princeton University and the OLCF's Judy Hill developed new automated workflow processes that hold the promise of reducing that cycle to a matter of days.
"Automation will really make it more efficient, and it will also reduce human error, which is pretty easy to introduce," Bozdag said.
Additional support from OLCF staff has contributed to the efficient use and accessibility of project data. Early in the project's life, Tromp's team worked with the OLCF's Norbert Podhorszki to improve data movement and flexibility. The end result, called Adaptable Seismic Data Format (ASDF), leverages the Adaptable I/O System (ADIOS) parallel library and gives Tromp's team a superior file format to record, reproduce, and analyze data on large-scale parallel computing resources.
In addition, the OLCF's David Pugmire helped the team implement in situ visualization tools. These tools enabled team members to check their work more easily from local workstations by allowing visualizations to be produced in conjunction with simulation on Titan, eliminating the need for costly file transfers.
"Sometimes the devil is in the details, so you really need to be careful and know what you're looking at," Bozdag said. "David's visualization tools help us to investigate our models and see what is there and what is not."
With visualization, the magnitude of the team's project comes to light. The billion-year cycle of molten rock rising from the core-mantle boundary and falling from the crust -- not unlike the motion of globules in a lava lamp -- takes form, as do other geologic features of interest.
At this stage, the resolution of the team's global model is becoming advanced enough to inform continental studies, particularly in regions with dense data coverage. Making it useful at the regional level or smaller, such as the mantle activity beneath Southern California or the earthquake-prone crust of Istanbul, will require additional work.
"Most global models in seismology agree at large scales but differ from each other significantly at the smaller scales," Bozdag said. "That's why it's crucial to have a more accurate image of Earth's interior. Creating high-resolution images of the mantle will allow us to contribute to these discussions."
To improve accuracy and resolution further, Tromp's team is experimenting with model parameters under its most recent INCITE allocation. For example, the team's second-generation model will introduce anisotropic inversions, which are calculations that better capture the differing orientations and movement of rock in the mantle. This new information should give scientists a clearer picture of mantle flow, composition, and crust-mantle interactions.
Additionally, team members Dimitri Komatitsch of Aix-Marseille University in France and Daniel Peter of King Abdullah University in Saudi Arabia are leading efforts to update SPECFEM3D_GLOBE to incorporate capabilities such as the simulation of higher-frequency seismic waves. The frequency of a seismic wave, measured in hertz, is equivalent to the number of waves passing through a fixed point in one second. For instance, the current minimum frequency used in the team's simulation is about 0.05 hertz (1 wave per 20 seconds), but Bozdag said the team would also like to incorporate seismic waves of up to 1 hertz (1 wave per second). This would allow the team to model finer details in Earth's mantle and even begin mapping Earth's core.
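The frequency-period relationship described above is a simple reciprocal, as this one-liner shows:

```python
def period_seconds(frequency_hz):
    """Period (seconds per wave) of a seismic wave at a given frequency in hertz."""
    return 1.0 / frequency_hz

print(period_seconds(0.05))  # current simulation minimum: 20 seconds per wave
print(period_seconds(1.0))   # target frequency: 1 second per wave
```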
To make this leap, Tromp's team is preparing for Summit, the OLCF's next-generation supercomputer. Set to arrive in 2018, Summit will provide at least five times the computing power of Titan. As part of the OLCF's Center for Accelerated Application Readiness, Tromp's team is working with OLCF staff to take advantage of Summit's computing power upon arrival.
Read more at Science Daily
Dr. Jordan Mallon, a dinosaur specialist at the museum, argues instead that the fossil evidence for these distinctions is inconclusive and, as a result, it might be time to "rewrite the textbooks." His report, published today in the online journal Paleobiology, focusses on the biological principle of sexual dimorphism, where males and females of a species can be distinguished based on physical characteristics other than sexual organs.
"I'm not saying that dinosaurs were not dimorphic, but I am saying that there's no existing fossil evidence to suggest that they were. The jury is still out," says Mallon.
Mallon made his assessment by revisiting previous studies attributing sexual dimorphism to dinosaurs. The problem, he explains, is that some of those studies not only relied on small sample sizes, but, more importantly, they did not properly analyze the statistical data, which led to invalid conclusions.
"Essentially, if you go back and recrunch the data of those original studies using proper statistical tests such as mixture modelling, then there's no dimorphism," explains Mallon. "While others have doubted the existence of dimorphism from the dinosaur fossil record, this is the first published report to show that's the case."
Mallon reviewed data on nine species, ranging from horned dinosaurs, to stegosaurs to meat-eating dinos. Among the studies was a seminal 1976 paper assigning sexual dimorphism to about 20 specimens of a horned dinosaur called Protoceratops andrewsi. The author's analysis said males could be distinguished from females by a broader frill and larger bump on the nose. While the study used a large sample size, Mallon's retesting of the data shows there is not enough evidence to separate the specimens into two distinct groups based on the shapes of their bones.
Mallon notes that there are ways of distinguishing male dinosaurs from females, but, to date, these sorts of data are sparse and do little to inform an understanding about whether the sexes differed in their external anatomy.
"There are ways of determining the sex of individual females, for example, as some fossils have been found with eggs preserved inside them," he explains. Mallon also notes that researchers can look for medullary bone, which is a spongy bone deposited in the long bones of egg-laying females, as seen in birds today.
"What we need to do is examine dinosaur specimens that we can positively identify as females, and if you can survey a large enough population of them, you can then say this is what we expect females to look like. One can then study the remainder of the population to compare which ones look like the females that we already know, and which ones don't. Those would be the males," says Mallon.
Mallon maintains that he would not be surprised if dimorphism did exist among some dinosaurs, because the phenomenon is seen in living animals such as birds and crocodiles, which are the nearest living relatives of dinosaurs. Male crocodiles, for example, are larger than females, and the male peacock has a large colourful tail.
The challenge for paleontologists is to find fossils of a given species in a large enough number and of similar age to do a proper statistical analysis. And, as Mallon points out, the studies to date are lacking in that regard.
Read more at Science Daily
|These are crabeater seals on an ice floe in the Antarctic Peninsula area.|
The study compared the status of Antarctic biodiversity and its management with the global situation, using the Convention on Biological Diversity's (CBD) Aichi targets. The Aichi targets are part of the Strategic Plan for Biodiversity 2011-2020, adopted under the CBD, to assess progress in halting global biodiversity loss. Yet they have never been applied to Antarctica and the Southern Ocean -- areas which together account for about 10% of the planet's surface.
The study found that the difference between the status of biodiversity in the Antarctic and the rest of the world was negligible.
"The results have been truly surprising," said lead author and Head of the School of Biological Sciences at Monash, Professor Steven Chown.
"While in some areas, such as invasive species management, the Antarctic region is doing relatively well, in others, such as protected area management and regulation of bioprospecting, it is lagging behind," he said. "Overall, the biodiversity and conservation management outlook for Antarctica and the Southern Ocean is no different to that for the rest of the planet."
"Despite our findings, there are great opportunities for positive action," said Monash co-author Professor Melodie McGeoch. "The agreements under the Antarctic Treaty System lend themselves to effective action, and nations have recently reinforced their desire to protect the region's biodiversity."
This latest analysis by scientists ensures that future assessments made under the Strategic Plan for Biodiversity 2011-2020 will be truly global.
"It will also help inform global progress towards achieving the United Nation's Sustainable Development Goals," Professor McGeoch said.
From Science Daily
|A honeybee is covered in commercial pollen.|
According to the study, a honeybee can carry up to 30 percent of its body weight in pollen because of the strategic spacing of its nearly three million hairs. The hairs cover the insect's eyes and entire body in various densities that allow efficient cleaning and transport.
The research found that the gap between each eye hair is approximately the same size as a grain of dandelion pollen, which is typically collected by bees. This keeps the pollen suspended above the eye and allows the forelegs to comb through and collect the particles. The legs are much hairier and the hair is very densely packed -- five times denser than the hair on the eyes. This helps the legs collect as much pollen as possible with each swipe. Once the forelegs are sufficiently scrubbed and cleaned by the other legs and the mouth, they return to the eyes and continue the process until the eyes are free of pollen.
The Georgia Tech team tethered bees and used high speed cameras to create the first quantified study of the honeybee cleaning process. They watched as the insects were able to remove up to 15,000 particles from their bodies in three minutes.
"Without these hairs and their specialized spacing, it would be almost impossible for a honeybee to stay clean," said Guillermo Amador, who led the study while pursuing his doctoral degree at Georgia Tech in mechanical engineering.
This was evident when Amador and the team created a robotic honeybee leg to swipe pollen-covered eyes. When they covered the leg with wax, the smooth, hairless leg gathered four times less pollen.
The high-speed videos also revealed something else.
"Bees have a preprogrammed cleaning routine that doesn't vary," said Marguerite Matherne, a Ph.D. student in the George W. Woodruff School of Mechanical Engineering. "Even if they're not very dirty in the first place, bees always swipe their eyes a dozen times, six times per leg. The first swipe is the most efficient, and they never have to brush the same area of the eye twice."
The research also found that pollenkitt, the sticky, viscous fluid found on the surface of pollen grains, is essential. When the fluid was removed from pollen during experiments, bees accumulated half as much.
"If we can start learning from natural pollinators, maybe we can create artificial pollinators to take stress off of bees," said David Hu, a professor in the Woodruff School of Mechanical Engineering and School of Biological Sciences. "Our findings may also be used to create mechanical designs that help keep micro and nanostructured surfaces clean."
Read more at Science Daily
Mar 28, 2017
|A mouse from a Maasai village in southern Kenya.|
"The research provides the first evidence that, as early as 15,000 years ago, humans were living in one place long enough to impact local animal communities -- resulting in the dominant presence of house mice," said Fiona Marshall, study co-author and a professor of anthropology at Washington University in St. Louis. "It's clear that the permanent occupation of these settlements had far-reaching consequences for local ecologies, animal domestication and human societies."
Marshall, a noted expert on animal domestication, considers the research exciting because it shows that settled hunter-gatherers rather than farmers were the first people to transform environmental relations with small mammals. By providing stable access to human shelter and food, hunter-gatherers led house mice down the path to commensalism, an early phase of domestication in which a species learns how to benefit from human interaction.
The findings have broad implications for the processes that led to animal domestication.
"The findings provide clear evidence that the ways humans have shaped the natural world are tied to varying levels of human mobility," said Marshall, the James W. and Jean L. Davis Professor in Arts & Sciences. "They suggest that the roots of animal domestication go back to human sedentism thousands of years prior to what has long been considered the dawn of agriculture."
Led by Thomas Cucchi of the National Center for Scientific Research in Paris, France, and Lior Weissbrod of the University of Haifa in Israel, the study set out to explain large swings in the ratio of house mice to wild mice populations found during excavations of different prehistoric periods at an ancient Natufian hunter-gatherer site in the Jordan Valley of Israel.
Examining tiny species-related variations in the molar shapes of fossilized mice teeth dating back as far as 200,000 years, the team built a timeline showing how the populations of different mice fluctuated at the Natufian site during periods of varying human mobility.
The analysis revealed that human mobility influenced competitive relationships between two species of mice -- the house mouse (Mus musculus domesticus) and a short-tailed field mouse (M. macedonicus) -- that continue to live in and around modern settlements in Israel. These relationships are analogous to those of another pair of species called spiny mice which Weissbrod and Marshall discovered among semi-nomadic Maasai herders in southern Kenya.
Findings indicate that house mice began embedding themselves in the Jordan Valley homes of Natufian hunter-gatherers about 15,000 years ago, and that their populations rose and fell based on how often these communities picked up and moved to new locations.
When humans stayed in the same places for long runs of time, house mice out-competed their country cousins to the point of pushing most of them outside the settlement. In periods where drought, food shortages or other conditions forced hunter-gatherers to relocate more often, the populations of house mice and field mice reached a balance similar to that found among modern Maasai herders with similar mobility patterns.
The study confirms that house mice were already a fixture in the domiciles of eastern Mediterranean hunter-gatherer villages more than 3,000 years before the earliest known evidence for sedentary agriculture.
It suggests that the early hunter-gatherer settlements transformed ecological interactions and food webs, allowing house mice that benefited from human settlements to out-compete wild mice and establish themselves as the dominant population.
"The competition between commensal house mice and other wild mice continued to fluctuate as humans became more mobile in arid periods and more sedentary at other times -- indicating the sensitivity of local environments to degrees of human mobility and the complexity of human environmental relationships going back into the Pleistocene," said Weissbrod, currently a research fellow at the Zinman Institute of Archaeology at the University of Haifa.
Weissbrod's research involves analysis of microvertebrate remains from a wide range of prehistoric and historic sites in Israel and the Caucasus dealing with paleoecology and human-ecosystem interactions.
A 2010 graduate of the doctoral program in archaeological anthropology at Washington University, he began research for this study as part of a dissertation examining fluctuations in populations of mice and other small animals living around Maasai cattle herding settlements in Kenya.
Marshall helped Weissbrod to develop the ethnographic context for underlying research questions about the ecological impact of human mobility. Together they built field-based ecological frameworks for understanding changing animal human interactions through time focusing on mice and donkeys.
Working from his lab in Paris, Cucchi used a new technique called geometric morphometrics to identify the mouse fossils and reliably distinguish telltale differences in the minuscule remains of house mice and wild species. The method relies on high-resolution imaging and digital analysis to categorize species-related variations in molar outlines finer than a single millimeter.
The findings, and the techniques used to document them, are important to archaeological research in a broader sense because they lend further support to the idea that fluctuations in ancient mouse populations can be used as a proxy for tracking ancient shifts in human mobility, lifestyle and food domestication.
Read more at Science Daily
The effects are slight, but significant, showing that the higher the blood lead level in childhood, the greater the loss of IQ points and occupational status in adulthood. The study appears Wednesday in the Journal of the American Medical Association.
Study participants are part of a life-long examination of more than 1,000 people born in Dunedin, New Zealand in 1972 and 1973. During their childhood, New Zealand had some of the highest gasoline lead levels in the world.
From birth to adulthood, these people have regularly been assessed for cognitive skills such as perceptual reasoning and working memory. At age 11, blood samples were collected from 565 of them and tested for lead.
Participants who were found to carry more than 10 micrograms of lead per deciliter of blood at age 11 had IQs at age 38 that were, on average, 4.25 points lower than their less lead-exposed peers. They were also found to have lost IQ points relative to their own childhood scores.
The study found that for each 5-microgram increase in blood lead, a person lost about 1.5 IQ points.
The mean blood lead level of the children at age 11 was 10.99 micrograms per deciliter of blood, slightly higher than the historical "level of concern" for lead exposure. Today's reference value at which the CDC recommends public health intervention is half that, 5 micrograms per deciliter, a level which 94 percent of children in the study exceeded. No safe blood lead level in children has been identified.
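The reported dose-response can be put in concrete terms with a small back-of-the-envelope calculation. This is an illustrative linear sketch based only on the figures quoted above (about 1.5 IQ points lost per 5 micrograms of lead per deciliter), not the study's actual regression model:

```python
# Illustrative linear dose-response from the figures reported in the study:
# roughly 1.5 IQ points lost per 5 micrograms of lead per deciliter of blood.
# A back-of-the-envelope sketch, not the study's statistical model.

IQ_POINTS_LOST_PER_5_UG = 1.5

def estimated_iq_loss(blood_lead_ug_per_dl: float) -> float:
    """Estimate IQ points lost for a given childhood blood lead level."""
    return (blood_lead_ug_per_dl / 5.0) * IQ_POINTS_LOST_PER_5_UG

# The cohort's mean childhood level was 10.99 micrograms per deciliter:
print(round(estimated_iq_loss(10.99), 2))  # about 3.3 points
```

At the cohort's mean exposure, this simple scaling yields a loss of roughly 3.3 points, consistent with the study's finding of a 4.25-point gap for children above the 10-microgram threshold.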
"This is historical data from an era when lead levels like these were viewed as normal in children and not dangerous, so most of our study participants were never given any special treatment," said Terrie Moffitt, the senior author of the study and Duke's Nannerl O. Keohane University Professor of psychology & neuroscience and psychiatry & behavioral sciences.
"This case is different from the one in Flint, Michigan and other cities where lead in the drinking water has led public health officials to begin special interventions for those children," Moffitt said. Flint's children are receiving regular blood monitoring and expanded early childhood education, behavioral health services and special nutrition with the federal government's support. "Interventions of this sort are intended to forestall the sorts of effects we've measured in this study," she said.
What makes the New Zealand case an important natural experiment is that automobile traffic goes through all neighborhoods. Unlike exposures to leaded paint or lead pipes in older structures, which pose more of a threat to poorer families, the exposure to leaded gasoline fumes was distributed relatively evenly across all social strata.
Beginning in the 1920s, a compound called tetraethyl lead was added to gasoline for its ability to boost octane ratings and raise engine power. The lead itself didn't burn, however, and emerged from tailpipes as elemental lead and lead oxides, which settled as particulates in soils around areas where cars were common.
Soil hangs on tightly to lead particles, and soils next to busy roads have been found to have the highest lead concentrations from the leaded gasoline era. Children playing outside were prone either to breathe in lead-laden dust or to swallow small amounts of leaded soil.
In either case, lead can accumulate in the child's bloodstream. It then settles into the bones, teeth and soft tissues and accumulates in the body over time.
Leaded gasoline was phased out in the U.S. and New Zealand between the mid-1970s and the mid-1990s, but it is still used in some Asian and Middle Eastern countries.
"Regardless of where you start in life, lead is going to exert a downward pull," said Avshalom Caspi, Edward M. Arnett Professor of psychology & neuroscience and psychiatry & behavioral sciences at Duke, who is a co-author on the paper. A neurotoxin exposure that affects all parts of society relatively equally would move the entire curve of IQ and social status downward. "If everyone takes a hit from environmental pollutants, society as a whole suffers."
The study also compared changes in social standing using a measure from the New Zealand government that plots families on a 6-point scale. The childhood social status of each child's family was compared to their adult standing at age 38. Children whose blood lead levels were over 10 micrograms per deciliter attained occupations with socioeconomic status levels four-tenths of a point lower than their less-exposed peers.
"The downward social mobility we see mirrors the trend in IQ," said Aaron Reuben, a Duke psychology graduate student who is first author on the study. After various statistical controls were applied to the data, "the decline in occupational status is partially but significantly explained by the loss of IQ," he said. "If you're above the historic level of concern (for lead exposure), you're doing worse on both."
Read more at Science Daily
The feature known as Ina, as seen by NASA's Lunar Reconnaissance Orbiter, was likely formed by an eruption of fluffy "magmatic foam," new research shows.
But new research led by Brown University geologists suggests that Ina is not so young after all. The analysis, published in the journal Geology, concludes that the feature was actually formed by an eruption around 3.5 billion years ago, around the same age as the dark volcanic deposits we see on the Moon's nearside. It's the peculiar type of lava that erupted from Ina that helps hide its age, the researchers say.
"As interesting as it would be for Ina to have formed in the recent geologic past, we just don't think that's the case," said Jim Head, co-author of the paper and professor in Brown's Department of Earth, Environmental and Planetary Sciences. "The model we've developed for Ina's formation puts it firmly within the period of peak volcanic activity on the Moon several billion years ago."
Ina sits near the summit of a gently sloped mound of basaltic rock, leading many scientists to conclude that it was likely the caldera of an ancient lunar volcano. But just how ancient wasn't clear. While the flanks of the volcano look billions of years old, the Ina caldera itself looks much younger. One sign of youth is its bright appearance relative to its surroundings. The brightness suggests Ina hasn't had time to accumulate as much regolith, the layer of loose rock and dust that builds up on the surface over time.
Then there are Ina's distinctive mounds -- 80 or so smooth hills of rock, some standing as tall as 100 feet, which dominate the landscape within the caldera. The mounds appear to have far fewer impact craters on them compared to the surrounding area, another sign of relative youth. Over time, it's expected that a surface should accumulate craters of various sizes at fairly constant rates. So scientists use the number and size of craters to estimate the relative age of a surface. In 2014, a team of researchers did a careful crater-count on Ina's mounds and concluded that they must have been formed by lava that erupted to the surface within the last 50 to 100 million years.
"That was a really puzzling finding," Head said. "I think most people agree that the volcano Ina sits on was formed billions of years ago, which means there would have been a pause in volcanic activity for a billion years or more before the activity that formed Ina. We wanted to see if there might be something about geologic structure within Ina that throws off our estimation of its age."
Not so young?
The researchers looked at well-studied volcanoes on Earth that might be similar to Ina. Ina appears to be a pit crater on a shield volcano, a gently sloping mountain similar to the Kilauea volcano in Hawaii. Kilauea has a pit crater similar to Ina known as the Kilauea Iki crater, which erupted in 1959.
As lava from that eruption solidified, it created a highly porous rock layer inside the pit, with underground vesicles as large as three feet in diameter and surface void space as deep as two feet. That porous surface, Head and his colleagues say, is created by the nature of the lava erupted in the late stages of events like this one. As the subsurface lava supply starts to diminish, it erupts as "magmatic foam" -- a bubbly mixture of lava and gas. When that foam cools and solidifies, it forms the highly porous surface.
The researchers suggest that an eruption at Ina would also have produced magmatic foam. And because of the Moon's lower gravity and nearly absent atmosphere, the lunar foam would have been even fluffier than its terrestrial counterpart, so the structures within Ina are expected to be more porous still.
It's the high porosity of those surfaces that throws off date estimates for Ina, both by hiding the buildup of regolith and by throwing off crater counts.
A highly porous surface, the researchers say, would allow loose rock and dust to filter into surface void space, making it appear as though less regolith has built up. That process would be perpetuated by seismic shaking in the region, much of which is caused by ongoing meteor impacts. "It's like banging on the side of a sieve to make the flour go through," Head said. "Regolith is jostled into holes rather than sitting on the surface, which makes Ina look a lot younger."
Porosity could also skew crater counts. Laboratory experiments using a high-speed projectile cannon have shown that impacts into porous targets make much smaller craters. Because of Ina's extreme porosity, the researchers say, its craters are much smaller than they would normally be, and many craters might not be visible at all. That could drastically alter the age estimate derived from crater counts.
The researchers estimate that the porous surface would reduce by a factor of three the size of craters on Ina's mounds. In other words, an impactor that would make a 100-foot-diameter crater in lunar basalt bedrock would make a crater a little over 30 feet across in a foam deposit. Taking that scaling relationship into account, the team gets a revised age for the Ina mounds of about 3.5 billion years. That's similar to the surface age of the volcanic shield that surrounds Ina, and places the Ina activity within the timeframe of common volcanism on the Moon.
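The factor-of-three correction described above is easy to verify with simple arithmetic. This sketch applies the paper's reported scaling factor directly; the exact scaling law the team used may be more involved:

```python
# Sketch of the factor-of-three crater scaling: an impact that would leave a
# 100-foot crater in solid lunar basalt leaves only about a third of that
# diameter in highly porous magmatic foam, per the researchers' estimate.

POROSITY_SCALING = 3.0

def foam_crater_diameter(bedrock_diameter_ft: float) -> float:
    """Crater diameter in porous foam for an impactor that would produce
    bedrock_diameter_ft in solid basalt."""
    return bedrock_diameter_ft / POROSITY_SCALING

print(round(foam_crater_diameter(100.0), 1))  # about 33.3 feet
```

Because crater-count dating assumes a known relationship between impactor flux and crater size, systematically shrunken (or invisible) craters make a surface look far younger than it is, which is how the correction pushes Ina's age from tens of millions of years back to roughly 3.5 billion.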
Read more at Science Daily