Aug 4, 2018

The end-Cretaceous extinction unleashed modern shark diversity

This is PhD student Mohamad Bazzi with a fossil lamniform shark tooth.
A study that examined the shape of hundreds of fossilized shark teeth suggests that modern shark biodiversity was triggered by the end-Cretaceous mass extinction event, about 66 million years ago.

This finding is reported this week in Current Biology.

As part of a larger scientific endeavour aiming to understand the diversity of fossil sharks, a group of researchers from Uppsala University, Sweden, and the University of New England, Australia, has explored how certain groups of sharks responded to the mass extinction that killed off non-bird dinosaurs and marked the end of the Cretaceous period and the Mesozoic era.

As with several other vertebrate groups, shark diversity during the Cretaceous (142-66 million years ago) looked very different from today's. Ground sharks (Carcharhiniformes) are the most diverse shark group living today, with over 200 different species. During the Cretaceous, however, while dinosaurs dominated terrestrial environments, mackerel sharks (Lamniformes) were the dominant shark forms of the sea.

"Our study found that the shift from lamniform- to carcharhiniform-dominated assemblages may well have been the result of the end-Cretaceous mass extinction," said project leader and Uppsala doctoral student Mohamad Bazzi.

Sharks are one of the major groups that survived the Cretaceous-Palaeogene mass extinction and, today, carcharhiniforms are typified by forms such as the Tiger, Hammerhead, and Blacktip Reef sharks and lamniforms by the Great White and Mako sharks.

"Unlike other vertebrates, the cartilaginous skeletons of sharks do not easily fossilize and so our knowledge of these fishes is largely limited to the thousands of isolated teeth they shed throughout their lives," says Mr. Bazzi. "Fortunately, shark teeth can tell us a lot about their biology, including information about diet, which can shed light on the mechanisms behind their extinction and survival."

The team used "cutting-edge" analytical techniques to explore the variation of tooth shape in carcharhiniforms and lamniforms and measured diversity by calculating the range of morphological variation, also called disparity.

"Going into this study, we knew that sharks underwent important losses in species richness across the extinction." said Dr. Nicolás Campione at the University of New England, who co-devised the project. "But to our surprise, we found virtually no change in disparity across this major transition. This suggests to us that species richness and disparity may have been decoupled across this interval."

Despite this seemingly stable pattern, the study found that extinction and survival patterns were substantially more complex. Morphologically, there were differential responses to extinction between lamniform and carcharhiniform sharks, with evidence for a selective extinction of lamniforms and a subsequent proliferation of carcharhiniforms (the largest order of living sharks today) in the immediate aftermath of the extinction.

"Carcharhiniforms are the most common shark group today and it would seem that the initial steps towards this dominance started approximately 66 million years ago," said Mr. Bazzi, who remarks that further research is still needed to understand the diversity patterns of other shark groups, along with the relationship between diet and tooth morphology.

Although the mechanisms that triggered such a shift in sharks can be difficult to interpret, the team hypothesises that changes in food availability may have played an important role. The end-Cretaceous extinction brought major losses in marine reptiles and cephalopods (e.g. squids), and the post-extinction world saw the rise of bony fishes. In addition, it is likely that the loss of apex predators (such as lamniforms and marine reptiles) benefited mid-trophic sharks, a role fulfilled by many carcharhiniforms.

"By studying their teeth, we are able to get a glimpse at the lives of extinct sharks," said Dr. Campione, "and by understanding the mechanisms that have shaped their evolution in the past, perhaps we can provide some insights into how to mitigate further losses in current ecosystems."

Read more at Science Daily

Locusts help uncover the mysteries of smell

A palate-clearing sniff of coffee inspired Barani's smell research.
Understanding how a sensory input becomes an experience -- how molecules released by a blooming flower, for instance, become the internal experience of smelling a rose -- has for millennia been a central question of philosophy.

In more recent times, it has also been a question for scientists. One way to approach it is to understand the physical brain processes behind sensory experiences. Historically, scientists have proposed different ways to describe what is happening, positing that a certain set of neurons must fire; that a certain sequence of firing must occur; or that some combination of the two is required.

But according to a research team from the School of Engineering & Applied Science at Washington University in St. Louis, these descriptions do not account for the variability of the real world. Smells do not occur in a vacuum. The team wanted to find out what happened when sensory input was presented in sequences, more akin to what happens in the real world.

They turned to locusts.

In a paper slated for publication in Nature Communications, researchers found that in locusts, only a subset of neurons associated with a particular scent would fire when that scent was presented in a dynamic environment that included other scents. Although there was not a one-to-one relationship between a pattern of neurons activated and a specific smell, the researchers were able to determine how the locusts could still recognize a scent; it comes down to the locust being flexible in its interpretation.

"There is variability because of stimulus history," said Barani Raman, associate professor of biomedical engineering, "so flexibility is necessary to compensate."

For the experiments, the team of Washington University engineers, which included Raman, graduate research assistants Srinath Nizampatnam and Rishabh Chandak, and postdoctoral research fellow Debajit Saha, first had to train the locusts in much the same way Pavlov trained his dogs. A machine administered a puff of the target scent, hexanol, to hungry locusts, then rewarded them with a treat: grass. After enough rounds (usually six), the locusts would open up their palps -- small organs outside of their mouths that function in a similar way to lips or tongues in humans -- after they smelled hexanol, in anticipation of the grass.

Once the locusts were trained, the testing began. The locusts were exposed to the "target" odor, hexanol, either on its own or after the introduction of a different scent, called a "distractor."

Each time the target odor was introduced on its own, a locust's neural activity was the same. But when the locusts were exposed to a distractor smell first, different combinations of neurons fired when the locusts were subsequently exposed to the target.

This is the variability based on context. What has been previously smelled (and even unrelated brain states, such as hunger) can affect how a brain reacts to the same input. If that were the end of it, though, smells would rarely, if ever, be recognizable.

Imagine entering a coffee shop and buying a freshly baked chocolate chip cookie. As you bring it to your mouth, you inhale and smell that comforting, chocolate chip cookie smell. The next day, you head to a tea shop. Another batch of freshly baked cookies calls your name. If variability (induced by prior exposure to tea or coffee) alone determined how smells are processed, the scent of tea shop cookie, wafting into your nose after a strong Earl Grey, couldn't possibly smell the same as it did after you caught a whiff of Sumatra at the coffee shop.

But just as humans recognize the smell of a chocolate chip cookie in either setting, the locusts recognized the target -- even though their neurons were firing in a variety of different ways -- as evidenced by their palps, which opened as per their conditioning.

So there had to be more to the story than variability when it came to recognizing smells. The team wanted to know if there was a pattern, or a way to discern, via brain activity, how the locusts were smelling the target odorant despite the variability in brain activity.

As it turned out, there is a way. "The rules are very simple," Raman said. "An OR-of-ANDs logical operation was sufficient to compensate for variability and allow flexible decoding."

Think of an "ideal" chair: it has four legs, a seat, two armrests, and back support. If you only recognized a chair with all of these, and only these, attributes, you would miss out on a lot of good chairs -- those on a pedestal, those without armrests, etc. To be able to generalize, there needs to be some flexibility in what's recognized as a chair. One simple way is to allow any object that has any two or three out of the four features usually associated with chair, if present, to be recognized as a chair.

The OR-of-ANDs logical operation for recognizing a chair might be [four legs AND seat] OR [seat AND back support]. In the same way, locusts show a fixed pattern of brain activity when smelling the target odorant alone, but a flexible combination involving just some of those same neurons will fire when smelling the target after smelling, say, an apple.
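As a toy illustration only (the neuron labels and clauses below are invented for the chair example, not taken from the neural data), the rule can be written out in a few lines of Python:

def or_of_ands(active_features, and_clauses):
    # The target is recognized if ANY clause (a set of required features,
    # combined with AND) is fully contained in the active set.
    return any(clause <= active_features for clause in and_clauses)

# Hypothetical clauses for the "chair" example in the text:
chair_clauses = [{"four legs", "seat"}, {"seat", "back support"}]

print(or_of_ands({"seat", "back support"}, chair_clauses))           # True
print(or_of_ands({"four legs", "armrests"}, chair_clauses))          # False
print(or_of_ands({"four legs", "seat", "armrests"}, chair_clauses))  # True

The same logic tolerates missing features (neurons suppressed by a distractor) while still rejecting inputs that share too little with the learned pattern.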

Which subset of neurons fires depends, in large part, on what the distractor smell is; the neurons that are activated by the target alone will continue to fire, but those common to both the distractor and the target will either not be activated or will have their activity reduced.

In this way, the uniqueness of the neural response to the target odorant is enhanced. Like perfume after a whiff of coffee, if the target odorant shared few neurons with the distractor, the cross-talk between the smells was reduced and the history/context was effectively reset.

Going forward, the team plans to see if its results hold in another organism: the fruit fly. The researchers also will investigate how other sources of variability such as short-term memory might affect how smells are perceived. There is, of course, another organism of interest: humans.

The main inspiration for this research was the use of coffee beans to clear the olfactory palate, so to speak, in perfume shops.

Read more at Science Daily

Aug 3, 2018

Plants can tell the time using sugars

The plant as a clock
A new study by an international team of scientists, including the University of Bristol, has discovered that plants adjust their daily circadian rhythm to the cycle of day and night by measuring the amount of sugars in their cells.

Plants, animals, fungi and some bacteria can estimate the time of day through their circadian rhythms.

These rhythms are regulated by an internal 'circadian clock', and how these clocks operate is a topic of importance for both agriculture and medicine. For example, changes in circadian rhythms have contributed to domestication of crops.

In the study published today, in the journal Current Biology, the research team involving the Universities of Bristol, Cambridge, Campinas, Sao Paulo and Melbourne has discovered a process that adjusts the timing of the plant body clock so that it stays in tune with the environment.

They found that sugars made from photosynthesis are sensed, and this leads to the plant falling into rhythm with changes in energy provision throughout the day.

Dr Antony Dodd of the University of Bristol's School of Biological Sciences, said: "Our findings show the first mechanism in plants that shifts the circadian rhythm backwards or forwards to synchronise it with the environment.

"The plant continuously measures the amount of sugar in the cells and uses this information to make the required adjustments."

Plants need their circadian rhythms to be correctly synchronised with the timing of day and night, so their activities are matched to the time of day.

For example, circadian rhythms control the time when plants grow, when their flowers open and release scent, and allow plants to carefully use energy reserves so they do not starve in the night.

Circadian rhythms also help plants to detect changes in the seasons, which is crucial to ensure our crops mature in the correct season.

Dr Dodd added: "This means that the discovery of a mechanism that synchronizes the plant body clock with the time in the environment has identified a new process that could be exploited in future to improve crop performance."

From Science Daily

Heatwave and climate change having negative impact on our soil say experts

Drought alters soil at microbial level.
The recent heatwave and drought could be having a deeper, more negative effect on soil than we first realised, say scientists.

This could have widespread implications for plants and other vegetation which, in turn, may impact on the entire ecosystem.

That's because the organisms in soil are highly diverse and responsible not only for producing the soil we need to grow crops, but also other benefits such as cleaning water and regulating greenhouse gas emissions.

The new study, led by researchers at The University of Manchester and published today (02/08/2018) in Nature Communications, provides new insight into how a drought alters soil at microbial level. It shows that expected changes in climate will affect UK soil and that soil is not as tough as previously thought.

Due to climate change, disturbances such as drought are increasing in intensity and frequency. These extreme weather conditions change vegetation composition and soil moisture, which in turn impacts the soil's underlying organisms and microbial networks.

By studying how microbes react to severe drought, the study provides a better understanding of how underground soil networks respond to such environmental disturbances.

Lead author, Dr Franciska de Vries, from Manchester's School of Earth and Environmental Sciences, explains: "Soils harbour highly diverse microbial communities that are crucial for soil to function as it should.

"A major challenge is to understand how these complex microbial communities respond to and recover from disturbances, such as climate extremes, which are predicted to increase in frequency and intensity with climate change.

"These microbial communities within the soil play a crucial role in any ecosystem. But it wasn't known how soil networks respond to such disturbances until now."

Sequencing of soil DNA for the study was conducted at the Centre for Ecology & Hydrology (CEH). Dr Robert Griffiths, a molecular microbial ecologist at CEH, said: "This study further identifies those key organisms affected by drought, which will guide future research to predict how future soil microbial functions are affected by climate change."

The research team tested the effects of summer drought on plant communities consisting of four common grassland species. They found that drought increased the abundance of a certain fast-growing, drought-tolerant grass. With greater aboveground vegetation comes an increased rate of evapotranspiration, or cycling of water from plants to the atmosphere, lowering the overall soil moisture.

Science conducted as part of Lancaster University's Hazelrigg grassland experiment was key to the findings.

Professor Nick Ostle, from the Lancaster Environment Centre, said: "Our hot and dry summer this year is a 'wake up' to prepare for future weather stresses. We have just had the hottest ten years in UK history. This work shows that continued summer droughts will change soil biology. This matters as we plan for ensuring food security that depends on healthy soil."

Unlike past research, this study considered the multitude of direct and indirect interactions occurring between different microbial organisms in soil. Rather than focusing on select attributes of bacteria and fungi, this research takes a comprehensive approach to studying soil ecosystems.

Read more at Science Daily

Study challenges evolution of FOXP2 as human-specific language gene

First author Elizabeth Atkinson extracts DNA as part of her research on human gene FOXP2.
FOXP2, a gene implicated in affecting speech and language, is held up as a textbook example of positive selection on a human-specific trait. But in a paper published August 2 in the journal Cell, researchers challenge this finding. Their analysis of genetic data from a diverse sample of modern people and Neanderthals revealed no evidence for recent, human-specific selection of FOXP2, revising the history of how we think humans acquired language.

"A paper published in 2002 (Enard et al., Nature 418, 869-872) claimed there was a selective sweep relatively recently in human evolutionary history that could largely account for our linguistic abilities and even help explain how modern humans were able to flourish so rapidly in Africa within the last 50-100,000 years," says senior author Brenna Henn, a population geneticist at Stony Brook University and UC Davis. "I was immediately interested in dating the selective sweep and re-analyzing FOXP2 with larger and more diverse datasets, especially in more African populations."

Henn says that when the original 2002 work was done, the researchers did not have access to the modern sequencing technology that now provides data on whole genomes, so they only analyzed a small fraction of the FOXP2 gene in about 20 individuals, most of whom were of Eurasian descent. "We wanted to test whether their hypothesis stood up against a larger, more diverse dataset that more explicitly controlled for human demography," she says.

FOXP2 is highly expressed during brain development and regulates some muscle movements, aiding in language production. When the gene isn't expressed, it causes a condition called specific language impairment in which people may perform normally on cognitive tests but cannot produce spoken language. FOXP2 has also been shown to regulate language-like behaviors in mice and songbirds.

"In the past five years, several archaic hominin genomes have been sequenced, and FOXP2 was among the first genes examined because it was so important and supposedly human specific," says first author Elizabeth Atkinson of Stony Brook University and the Broad Institute of Harvard and MIT. "But this new data threw a wrench in the 2002 paper's timeline, and it turns out that the FOXP2 mutations we thought to be human specific, aren't."

Atkinson and her colleagues assembled mostly publicly available data from diverse human genomes -- both modern and archaic -- and analyzed the entire FOXP2 gene while comparing it to the surrounding genetic information to better understand the context for its evolution. Despite applying a series of different statistical tests, they were unable to replicate the earlier finding of positive selection on FOXP2.

"FOXP2 is still a textbook example taught in every evolutionary biology class despite the recent data from archaic DNA," says co-author Sohini Ramachandran, an evolutionary and computational biologist at Brown University. "So while we're not questioning the functional work of FOXP2 or its role in language production, we're finding that the story of FOXP2 is really more complex than we'd ever imagined."

The researchers hope that this paper will serve as a template for other population geneticists to conduct similar projects on human evolutionary history in the future.

Read more at Science Daily

VLA detects possible extrasolar planetary-mass magnetic powerhouse

Artist's conception of SIMP J01365663+0933473, an object with 12.7 times the mass of Jupiter, but a magnetic field 200 times more powerful than Jupiter's. This object is 20 light-years from Earth.
Astronomers using the National Science Foundation's Karl G. Jansky Very Large Array (VLA) have made the first radio-telescope detection of a planetary-mass object beyond our Solar System. The object, about a dozen times more massive than Jupiter, is a surprisingly strong magnetic powerhouse and a "rogue," traveling through space unaccompanied by any parent star.

"This object is right at the boundary between a planet and a brown dwarf, or 'failed star,' and is giving us some surprises that can potentially help us understand magnetic processes on both stars and planets," said Melodie Kao, who led this study while a graduate student at Caltech, and is now a Hubble Postdoctoral Fellow at Arizona State University.

Brown dwarfs are objects too massive to be considered planets, yet not massive enough to sustain nuclear fusion of hydrogen in their cores -- the process that powers stars. Theorists suggested in the 1960s that such objects would exist, but the first one was not discovered until 1995. They originally were thought to not emit radio waves, but in 2001 a VLA discovery of radio flaring in one revealed strong magnetic activity.

Subsequent observations showed that some brown dwarfs have strong auroras, similar to those seen in our own Solar System's giant planets. The auroras seen on Earth are caused by our planet's magnetic field interacting with the solar wind. However, solitary brown dwarfs do not have a solar wind from a nearby star to interact with. How the auroras are caused in brown dwarfs is unclear, but the scientists think one possibility is an orbiting planet or moon interacting with the brown dwarf's magnetic field, such as what happens between Jupiter and its moon Io.

The strange object in the latest study, called SIMP J01365663+0933473, has a magnetic field more than 200 times stronger than Jupiter's. The object was originally detected in 2016 as one of five brown dwarfs the scientists studied with the VLA to gain new knowledge about magnetic fields and the mechanisms by which some of the coolest such objects can produce strong radio emission. Brown dwarf masses are notoriously difficult to measure, and at the time, the object was thought to be an old and much more massive brown dwarf.

Last year, an independent team of scientists discovered that SIMP J01365663+0933473 was part of a very young group of stars. Its young age meant that it was in fact so much less massive that it could be a free-floating planet -- only 12.7 times more massive than Jupiter, with a radius 1.22 times that of Jupiter. At 200 million years old and 20 light-years from Earth, the object has a surface temperature of about 825 degrees Celsius, or more than 1,500 degrees Fahrenheit. By comparison, the Sun's surface temperature is about 5,500 degrees Celsius.

The difference between a gas giant planet and a brown dwarf remains hotly debated among astronomers, but one rule of thumb that astronomers use is the mass below which deuterium fusion ceases, known as the "deuterium-burning limit," around 13 Jupiter masses.

Simultaneously, the Caltech team that originally detected its radio emission in 2016 had observed it again in a new study at even higher radio frequencies and confirmed that its magnetic field was even stronger than first measured.

"When it was announced that SIMP J01365663+0933473 had a mass near the deuterium-burning limit, I had just finished analyzing its newest VLA data," said Kao.

The VLA observations provided both the first radio detection and the first measurement of the magnetic field of a possible planetary mass object beyond our Solar System.

Such a strong magnetic field "presents huge challenges to our understanding of the dynamo mechanism that produces the magnetic fields in brown dwarfs and exoplanets and helps drive the auroras we see," said Gregg Hallinan, of Caltech.

"This particular object is exciting because studying its magnetic dynamo mechanisms can give us new insights on how the same type of mechanisms can operate in extrasolar planets -- planets beyond our Solar System. We think these mechanisms can work not only in brown dwarfs, but also in both gas giant and terrestrial planets," Kao said.

"Detecting SIMP J01365663+0933473 with the VLA through its auroral radio emission also means that we may have a new way of detecting exoplanets, including the elusive rogue ones not orbiting a parent star," Hallinan said.

Kao and Hallinan worked with J. Sebastian Pineda who also was a graduate student at Caltech and is now at the University of Colorado Boulder, David Stevenson of Caltech, and Adam Burgasser of the University of California San Diego. They are reporting their findings in the Astrophysical Journal.

Read more at Science Daily

Aug 2, 2018

Modern Flores Island pygmies show no genetic link to extinct 'hobbits'

A modern pygmy population evolved short stature independently of the extinct 'hobbit' pygmy species that lived on the same island -- Indonesia's Flores Island -- tens of thousands of years earlier, report Princeton's Serena Tucci, Joshua Akey and an international team of researchers. In this illustration, the modern pygmy village, Rampasasa, is shown at left; in the center, a modern Rampasasa pygmy wearing the traditional head covering and clothing is juxtaposed against the face of a Homo floresiensis reconstruction; at right, pygmy elephants play in the Liang Bua cave where the H. floresiensis fossils were discovered in 2004.
Two pygmy populations on the same tropical island. One went extinct tens of thousands of years ago; the other still lives there. Are they related?

It's a simple question that took years to answer.

As no one has been able to recover DNA from the fossils of Homo floresiensis (nicknamed the "hobbit"), researchers had to create a tool for finding archaic genetic sequences in modern DNA.

The technique was developed by scientists in the lab of Joshua Akey, a professor of ecology and evolutionary biology and the Lewis-Sigler Institute for Integrative Genomics at Princeton University.

"In your genome -- and in mine -- there are genes that we inherited from Neanderthals," said Serena Tucci, a postdoctoral research associate in Akey's lab. "Some modern humans inherited genes from Denisovans [another extinct species of humans], which we can check for because we have genetic information from Denisovans.

"But if you want to look for another species, like Floresiensis, we have nothing to compare, so we had to develop another method: We 'paint' chunks of the genome based on the source. We scan the genome and look for chunks that come from different species -- Neanderthal, Denisovans, or something unknown."

She used this technique with the genomes of 32 modern pygmies living in a village near the Liang Bua cave on Flores Island in Indonesia, where H. floresiensis fossils were discovered in 2004.

"They definitely have a lot of Neanderthal," said Tucci, who was the first author on a paper published Aug. 3 in the journal Science that detailed their findings. "They have a little bit of Denisovan. We expected that, because we knew there was some migration that went from Oceania to Flores, so there was some shared ancestry of these populations."

But there were no chromosomal "chunks" of unknown origins.

"If there was any chance to know the hobbit genetically from the genomes of extant humans, this would have been it," said Richard "Ed" Green, an associate professor of biomolecular engineering at the University of California-Santa Cruz (UCSC) and a corresponding author on the paper. "But we don't see it. There is no indication of gene flow from the hobbit into people living today."

The researchers did find evolutionary changes associated with diet and short stature. Height is very heritable, and geneticists have identified many genes with variants linked to taller or shorter stature. Tucci and her colleagues analyzed the Flores pygmy genomes with respect to height-associated genes identified in Europeans, and they found a high frequency of genetic variants associated with short stature.

"It sounds like a boring result, but it's actually quite meaningful," Green said. "It means that these gene variants were present in a common ancestor of Europeans and the Flores pygmies. They became short by selection acting on this standing variation already present in the population, so there's little need for genes from an archaic hominin to explain their small stature."

The Flores pygmy genome also showed evidence of selection in genes for enzymes involved in fatty acid metabolism, called FADS enzymes (fatty acid desaturase). These genes have been associated with dietary adaptations in other fish-eating populations, including the Inuit in Greenland.

Fossil evidence indicates H. floresiensis was significantly smaller than the modern Flores pygmies, standing about 3.5 feet tall (106 centimeters, shorter than the average American kindergartener), while modern pygmies average about 15 inches taller (145 centimeters). Floresiensis also differed from H. sapiens and H. erectus in their wrists and feet, probably due to the need to climb trees to evade Komodo dragons, said Tucci.

Dramatic size changes in animals isolated on islands are a common phenomenon, often attributed to limited food resources and freedom from predators. In general, large species tend to get smaller and small species tend to get larger on islands. At the time of H. floresiensis, Flores was home to dwarf elephants, giant Komodo dragons, giant birds and giant rats, all of which left bones in the Liang Bua cave.

"Islands are very special places for evolution," Tucci said. "This process, insular dwarfism, resulted in smaller mammals, like hippopotamus and elephants, and smaller humans."

Their results show that insular dwarfism arose independently at least twice on Flores Island, she said, first in H. floresiensis and again in the modern pygmies.

Read more at Science Daily

Astronomers blown away by historic stellar blast

Color image taken with the Hubble Space Telescope's WFPC2 camera, showing the dumbbell-shaped cloud of gas and dust around the star. This nebula contains more than 10 times the mass of our Sun, ejected by Eta Carinae in the 19th century Great Eruption.
Imagine traveling to the Moon in just 20 seconds! That's how fast material from a 170-year-old stellar eruption sped away from the unstable, eruptive, and extremely massive star Eta Carinae.
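As a quick back-of-the-envelope check (the average Earth-Moon distance of roughly 384,400 km is assumed here; it is not given in the article):

moon_distance_km = 384_400
speed_km_per_s = moon_distance_km / 20   # "to the Moon in 20 seconds"
print(round(speed_km_per_s))             # ~19,000 km/s, near the upper end of the
                                         # 10,000-20,000 km/s velocities quoted later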

Astronomers conclude that this is the fastest jettisoned gas ever measured from a stellar outburst that didn't result in the complete annihilation of the star.

The blast, from the most luminous star known in our galaxy, released almost as much energy as a typical supernova explosion that would have left behind a stellar corpse. However, in this case a double-star system remained and played a critical role in the circumstances that led to the colossal blast.

Over the past seven years a team of astronomers led by Nathan Smith, of the University of Arizona, and Armin Rest, of the Space Telescope Science Institute, determined the extent of this extreme stellar blast by observing light echoes from Eta Carinae and its surroundings.

Light echoes occur when the light from a bright, short-lived event is reflected off clouds of dust, which act like distant mirrors redirecting light in our direction. Like an audio echo, the arriving signal of the reflected light has a time delay after the original event due to the finite speed of light. In the case of Eta Carinae, the bright event was a major eruption of the star that expelled a huge amount of mass back in the mid-1800s, during what is known as the "Great Eruption." The delayed signal of these light echoes allowed astronomers to decode the light from the eruption with modern astronomical telescopes and instruments, even though the original eruption was seen from Earth back in the mid-19th century, a time before modern tools like the astronomical spectrograph were invented.

"A light echo is the next best thing to time travel," Smith said. "That's why light echoes are so beautiful. They give us a chance to unravel the mysteries of a rare stellar eruption that was witnessed 170 years ago, but using our modern telescopes and cameras. We can also compare that information about the event itself with the 170-year old remnant nebula that was ejected. This was a behemoth stellar explosion from a very rare monster star, the likes of which has not happened since in our Milky Way Galaxy."

The Great Eruption temporarily promoted Eta Carinae to the second brightest star visible in our nighttime sky, vastly outshining the energy output of every other star in the Milky Way, after which the star faded from naked-eye visibility. The outburst expelled material (about 10 times the mass of our Sun) that also formed the bright glowing gas cloud known as the Homunculus. This dumbbell-shaped remnant is visible surrounding the star from within a vast star-forming region. The eruptive remnant can even be seen in small amateur telescopes from the Earth's Southern Hemisphere and equatorial regions, but is best seen in images obtained with the Hubble Space Telescope.

The team used instruments on the 8-meter Gemini South telescope, the Cerro Tololo Inter-American Observatory 4-meter Blanco telescope, and the Magellan Telescope at Las Campanas Observatory to decode the light from these light echoes and to understand the expansion speeds in the historical explosion. "Gemini spectroscopy helped pin down the unprecedented velocities we observed in this gas, which clocked in at between about 10,000 and 20,000 kilometers per second," according to Rest. The research team, Gemini Observatory, and Blanco telescope are all supported by the U.S. National Science Foundation (NSF).

"We see these really high velocities all the time in supernova explosions where the star is obliterated." Smith notes. However, in this case the star survived, and explaining that led the researchers into new territory. "Something must have dumped a lot of energy into the star in a short amount of time," said Smith. The material expelled by Eta Carinae is travelling up to 20 times faster than expected for typical winds from a massive star so, according to Smith and his collaborators, enlisting the help of two partner stars might explain the extreme outflow.

The researchers suggest that the most straightforward way to simultaneously explain a wide range of observed facts surrounding the eruption and the remnant star system seen today is with an interaction of three stars, including a dramatic event in which two of the three stars merged into one monster star. If that's the case, then the present-day binary system must have started out as a triple system, with one of those two stars being the one that swallowed its sibling.

"Understanding the dynamics and environment around the largest stars in our galaxy is one of the most difficult areas of astronomy," said Richard Green, Director of the Division of Astronomical Sciences at NSF, the major funding agency for Gemini. "Very massive stars live short lives compared to stars like our Sun, but nevertheless catching one in the act of a major evolutionary step is statistically unlikely. That's why a case like Eta Carinae is so critical, and why NSF supports this kind of research."

Chris Smith, Head of Mission at the AURA Observatory in Chile and also part of the research team, adds a historical perspective. "I'm thrilled that we can see light echoes coming from an event that John Herschel observed in the middle of the 19th century from South Africa," he said. "Now, over 150 years later we can look back in time, thanks to these light echoes, and unveil the secrets of this supernova wannabe using the modern instrumentation on Gemini to analyze the light in ways Herschel couldn't have even imagined!"

Eta Carinae is an unstable type of star known as a Luminous Blue Variable (LBV), located about 7,500 light-years from Earth in a young star-forming nebula in the southern constellation of Carina. The star is one of the intrinsically brightest in our galaxy and shines some five million times brighter than our Sun, with a mass about one hundred times greater. Stars like Eta Carinae have the greatest mass-loss rates prior to undergoing supernova explosions, but the amount of mass expelled in Eta Carinae's 19th century Great Eruption exceeds that of any other known.

Eta Carinae will probably undergo a true supernova explosion sometime within the next half-million years at most, but possibly much sooner. Some types of supernovae have been seen to experience eruptive blasts like that of Eta Carinae in only the few years or decades before their final explosion, so some astronomers speculate that Eta Carinae might blow sooner rather than later.

Read more at Science Daily

New light shed on the people who built Stonehenge

Stonehenge. The excavation crew around Aubrey Hole 7 following excavations in 2008.
Despite over a century of intense study, we still know very little about the people buried at Stonehenge or how they came to be there. Now, a new University of Oxford research collaboration, published in Scientific Reports suggests that a number of the people that were buried at the Wessex site had moved with and likely transported the bluestones used in the early stages of the monument's construction, sourced from the Preseli Mountains of west Wales.

Conducted in partnership with colleagues at UCL, the Université Libre de Bruxelles, the Vrije Universiteit Brussel, and the Muséum National d'Histoire Naturelle in Paris, France, the research combined radiocarbon dating with new developments in archaeological analysis, pioneered by lead author Christophe Snoeck during his doctoral research in the School of Archaeology at Oxford.

While there has been much speculation as to how and why Stonehenge was built, the question of 'who' built it has received far less attention. Part of the reason for this neglect is that many of the human remains were cremated, making it difficult to extract much useful information from them. Snoeck demonstrated that cremated bone faithfully retains its strontium isotope composition, opening the way to use this technique to investigate where these people had lived during the last decade or so of their lives.

With permission from Historic England and English Heritage, the team analysed skull bones from 25 individuals to better understand the lives of those buried at the iconic monument. These remains were originally excavated from a network of 56 pits in the 1920s, placed around the inner circumference and ditch of Stonehenge, known as 'Aubrey Holes'.

Analysis of small fragments of cremated human bone from an early phase of the site's history around 3000 BC, when it was mainly used as a cemetery, showed that at least 10 of the 25 people did not live near Stonehenge prior to their death. Instead, they found the highest strontium isotope ratios in the remains were consistent with living in western Britain, a region that includes west Wales -- the known source of Stonehenge's bluestones. Although strontium isotope ratios alone cannot distinguish between places with similar values, this connection suggests west Wales as the most likely origin of at least some of these people.

While the Welsh connection was known for the stones, the study shows that people were also moving between west Wales and Wessex in the Late Neolithic, and that some of their remains were buried at Stonehenge. The results emphasise the importance of inter-regional connections involving the movement of both materials and people in the construction and use of Stonehenge, providing rare insight into the large scale of contacts and exchanges in the Neolithic, as early as 5000 years ago.

Lead author Christophe Snoeck said: 'The recent discovery that some biological information survives the high temperatures reached during cremation (up to 1000 degrees Celsius) offered us the exciting possibility to finally study the origin of those buried at Stonehenge.'

John Pouncett, a lead author on the paper and Spatial Technology Officer at Oxford's School of Archaeology, said: 'The powerful combination of stable isotopes and spatial technology gives us a new insight into the communities who built Stonehenge. The cremated remains from the enigmatic Aubrey Holes and updated mapping of the biosphere suggest that people from the Preseli Mountains not only supplied the bluestones used to build the stone circle, but moved with the stones and were buried there too.'

Rick Schulting, a lead author on the research and Associate Professor in Scientific and Prehistoric Archaeology at Oxford, explained: 'To me the really remarkable thing about our study is the ability of new developments in archaeological science to extract so much new information from such small and unpromising fragments of burnt bone.

'Some of the people's remains showed strontium isotope signals consistent with west Wales, the source of the bluestones that are now being seen as marking the earliest monumental phase of the site.'

Commenting on how they came to develop the innovative technique, Prof Julia Lee-Thorp, Head of Oxford's School of Archaeology and an author on the paper, said: 'This new development has come about as the serendipitous result of Dr Snoeck's interest in the effects of intense heat on bones, and our realization that the heating effectively "sealed in" some isotopic signatures.'

Read more at Science Daily

Both long term abstinence and heavy drinking may increase dementia risk

Serving champagne
People who abstain from alcohol or consume more than 14 units a week during middle age (midlife) are at increased risk of developing dementia, finds a study in The BMJ today.

However, the underlying mechanisms are likely to be different in the two groups.

As people live longer, the number living with dementia is expected to triple by 2050. So understanding the impact of alcohol consumption on ageing outcomes is important.

Previous studies indicate that moderate drinking is associated with a reduced risk of dementia, whereas both abstinence and heavy drinking are associated with an increased risk of dementia. But the evidence is far from conclusive, and the reasons underlying these associations remain unclear.

So a team of researchers from Inserm (French National Institute of Health and Medical Research) based in France and from UCL in the UK set out to investigate the association between midlife alcohol consumption and risk of dementia into early old age. They also examined whether cardiometabolic disease (a group of conditions including stroke, coronary heart disease, and diabetes) has any effect on this association.

Their findings are based on 9,087 British civil servants aged between 35 and 55 in 1985 who were taking part in the Whitehall II Study, which is looking at the impact of social, behavioural, and biological factors on long term health.

Participants were assessed at regular intervals between 1985 and 1993 (average age 50 years) on their alcohol consumption and alcohol dependence.

Alcohol consumption trajectories between 1985 and 2004 were also used to examine the association of long term alcohol consumption and risk of dementia from midlife to early old age.

Admissions for alcohol related chronic diseases and cases of dementia from 1991, and the role of cardiometabolic disease were then identified from hospital records.

Of the 9,087 participants, 397 cases of dementia were recorded over an average follow-up period of 23 years. Average age at dementia diagnosis was 76 years.

After taking account of sociodemographic, lifestyle, and health related factors that could have affected the results, the researchers found that abstinence in midlife or drinking more than 14 units a week was associated with a higher risk of dementia compared with drinking 1-14 units of alcohol a week. Among those drinking more than 14 units a week, every 7-unit-a-week increase in consumption was associated with a 17% increase in dementia risk.
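To get a feel for that figure, suppose the 17% increase compounds multiplicatively with each additional 7 units (an illustrative assumption, not something the study states); the implied relative risks would then look roughly like this:

def relative_risk(units_per_week, baseline=14, increase=0.17, step=7):
    # Relative dementia risk vs. the 1-14 units/week reference group,
    # assuming a multiplicative 17% increase per extra 7 units.
    excess_units = max(0, units_per_week - baseline)
    return (1 + increase) ** (excess_units / step)

for units in (14, 21, 28, 35):
    print(units, "units/week ->", round(relative_risk(units), 2), "x risk")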

In the UK, 14 units of alcohol a week is now the recommended maximum limit for both men and women, but many countries still use a much higher threshold to define harmful drinking.

History of hospital admission for alcohol related chronic diseases was associated with a four times higher risk of dementia.

In abstainers, the researchers show that some of the excess dementia risk was due to a greater risk of cardiometabolic disease.

Alcohol consumption trajectories showed similar results, with long term abstainers, those reporting decreased consumption, and long term consumption of more than 14 units a week, all at a higher risk of dementia compared with long term consumption of 1-14 units a week.

Further analyses to test the strength of the associations were also broadly consistent, suggesting that the results are robust.

Taken together, these results suggest that abstention and excessive alcohol consumption are associated with an increased risk of dementia, say the researchers, although the underlying mechanisms are likely to be different in the two groups.

This is an observational study, so no firm conclusions can be drawn about cause and effect, and the researchers cannot rule out the possibility that some of the risk may be due to unmeasured (confounding) factors.

The authors say their findings "strengthen the evidence that excessive alcohol consumption is a risk factor for dementia" and "encourage use of lower thresholds of alcohol consumption in guidelines to promote cognitive health at older ages." They also say these findings "should not motivate people who do not drink to start drinking given the known detrimental effects of alcohol consumption for mortality, neuropsychiatric disorders, cirrhosis of the liver, and cancer."

This study is important since it fills gaps in knowledge, "but we should remain cautious and not change current recommendations on alcohol use based solely on epidemiological studies," says Sevil Yasar at Johns Hopkins School of Medicine, in a linked editorial.

She calls for further studies and ideally a government funded randomized clinical trial to answer pressing questions about the possible protective effects of light to moderate alcohol use on risk of dementia and the mediating role of cardiovascular disease with close monitoring of adverse outcomes.

Read more at Science Daily

Aug 1, 2018

Past experiences shape what we see more than what we are looking at now

Study subjects viewed pairs of blurred and clear images in a certain order, forcing their brains to use past experiences to recognize newly seen images, and letting researchers track the brain activity patterns underlying visual perception.
A rope coiled on a dusty trail may trigger a frightened jump by a hiker who recently stepped on a snake. Now a new study better explains how a one-time visual experience can shape perceptions afterward.

Led by neuroscientists from NYU School of Medicine and published online July 31 in eLife, the study argues that humans recognize what they are looking at by combining current sensory stimuli with comparisons to images stored in memory.

"Our findings provide important new details about how experience alters the content-specific activity in brain regions not previously linked to the representation of images by nerve cell networks," says senior study author Biyu He, PhD, assistant professor in the departments of Neurology, Radiology, and Neuroscience and Physiology.

"The work also supports the theory that what we recognize is influenced more by past experiences than by newly arriving sensory input from the eyes," says He, part of the Neuroscience Institute at NYU Langone Health.

She says this idea becomes more important as evidence mounts that hallucinations suffered by patients with post-traumatic stress disorder or schizophrenia occur when stored representations of past images overwhelm what they are looking at presently.

Glimpse of a Tiger

A key question in neuroscience is how the brain perceives, for instance, that a tiger is nearby based on a glimpse of orange amid the jungle leaves. If the brains of our ancestors matched this incomplete picture with previous danger, they would be more likely to hide, survive and have descendants. Thus, the modern brain finishes perception puzzles without all the pieces.

Most past vision research, however, has been based on experiments wherein clear images were shown to subjects in perfect lighting, says He. The current study instead analyzed visual perception as subjects looked at black-and-white images degraded until they were difficult to recognize. Nineteen subjects were shown 33 such obscured "Mooney images" -- 17 of animals and 16 humanmade objects -- in a particular order. They viewed each obscured image six times, then a corresponding clear version once to achieve recognition, and then blurred images again six times after. Following the presentation of each blurred image, subjects were asked if they could name the object shown.

As the subjects sought to recognize images, the researchers "took pictures" of their brains every two seconds using functional magnetic resonance imaging (fMRI). The technique highlights areas of increased blood flow, which is known to occur as brain cells are turned on during a specific task. The team's 7 Tesla scanner offered a more than three-fold improvement in resolution over past studies using standard 3 Tesla scanners, allowing extremely precise fMRI-based measurement of vision-related nerve circuit activity patterns.

After seeing the clear version of each image, the study subjects were more than twice as likely to recognize the obscured version as they had been before seeing the clear image. They had been "forced" to use stored representations of the clear images, called priors, to better recognize related, blurred versions, says He.

The authors then used mathematical techniques to create a 2D map that measured, not nerve cell activity in each tiny section of the brain as it perceived images, but instead how similar nerve network activity patterns were across different brain regions. Nerve cell networks that represented images more similarly landed closer to each other on the map.
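The paper's exact procedure is not described in this summary, but a common way to build such a map is to correlate activity patterns between regions, convert the correlations into distances, and embed them in two dimensions with multidimensional scaling (MDS); the Python sketch below, on made-up data, shows the general idea.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# Hypothetical data: one activity pattern per brain region (regions x voxels)
region_patterns = rng.normal(size=(12, 200))

similarity = np.corrcoef(region_patterns)   # region-by-region similarity
distance = 1 - similarity                   # more similar -> closer on the map
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)
print(coords.shape)                         # (12, 2): one 2D point per region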

This approach revealed the existence of a stable system of brain organization that processed each image in the same steps, regardless of whether it was clear or blurry, the authors say. Early, simpler brain circuits in the visual cortex that determine edge, shape, and color clustered at one end of the map, while more complex, "higher-order" circuits known to mix past and present information to plan actions clustered at the opposite end.

These higher-order circuits included two brain networks, the default-mode network (DMN) and frontoparietal network (FPN), both linked by past studies to executing complex tasks such as planning actions, but not to visual, perceptual processing. Rather than remaining stable in the face of all images, the similarity patterns in these two networks shifted as brains went from processing unrecognized blurry images to effortlessly recognizing the same images after seeing a clear version. After previously seeing a clear version (disambiguation), neural activity patterns corresponding to each blurred image in the two networks became more distinct from the others, and more like the clear version in each case.

Read more at Science Daily

Scientists discover why elusive aye-aye developed such unusual features

3D models of aye-aye and squirrel skulls.
It is one of the most unusual primates on the planet -- famed for its large eyes, big ears and thin, bony finger used for probing.

Often persecuted as a harbinger of evil, the aye-aye has fascinated scientists, in particular how and why it evolved such unusual features.

But now a new study has, for the first time, measured the extent to which the endangered aye-aye has evolved similar features to squirrels, despite being more closely related to monkeys, chimps, and humans.

When two aye-ayes were first brought back to Europe from their native Madagascar by French explorers in 1780, they were "ranked with the rodents" and believed to be "more closely allied to the genus of squirrel than any other."

By the mid-19th Century the aye-aye had been correctly identified as a primate, but its squirrel-like appearance is often cited as a striking example of "evolutionary convergence," or how unrelated species can independently evolve the same traits.

Now, using techniques developed in collaboration by researchers at the University of York, a new study has used high-resolution microCT scanning to image the skulls of the two species, mapping and modelling the level of convergence in their physical features.

The findings suggest that the demands of needing to produce a high bite force with the two front teeth -- in the squirrel for cracking nuts and in the aye-aye for biting into tree bark to feed on wood-boring beetle larvae -- have not only led to the aye-aye evolving the ever-growing incisors characteristic of rodents, but has also given it a squirrel-like skull and jaw.

The study shows how lifestyle and ecology can have such a strong influence on the way a species looks that they can almost override ancestry.

Senior author of the study, Dr Philip Cox from the Department of Archaeology at the University of York and the Hull York Medical School, said: "Examples of convergent evolution can be seen throughout nature -- for example, despite belonging to separate biological groups, dolphins and sharks have converged in body shape due to their shared need to move efficiently through the water.

"Aye-ayes and squirrels have become an iconic example of convergence because of their similar teeth, but our study has shown for the first time that the evolution of their skulls and jaws has also converged.

"Our analysis suggests that the skulls of both species have not evolved simply to house their teeth, but that the distinctive shape may be what allows them to exact a high bite force. The shape of the skull is what makes the aye-aye look so similar to squirrels in particular."

Using skeletons borrowed from the collections of natural history museums, the research team made 3D reconstructions of the skulls and mandibles of the aye-aye and squirrel, plus a variety of other primates and rodents.

They then took 3D co-ordinates from these reconstructions and put this data into statistical software.

Plotting the evolutionary trees of the two biological groups allowed the team to visualise how the evolutionary paths of the aye-aye and squirrel incline towards each other -- showing the high degree of convergence in the skull and jaw, despite the completely different ancestry of the two species.
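For readers curious what feeding 3D co-ordinates into statistical software typically involves, the sketch below (Python, with hypothetical landmark data rather than the study's) aligns each skull's landmarks to a reference with a Procrustes fit and projects the aligned shapes into a shared shape space with PCA, where converging species plot close together.

import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_landmarks = 30
reference = rng.normal(size=(n_landmarks, 3))            # hypothetical mean skull shape
skulls = [reference + rng.normal(scale=0.05, size=(n_landmarks, 3))
          for _ in range(10)]                             # hypothetical specimens

# Procrustes removes differences in position, size and rotation,
# leaving only shape; each aligned skull becomes one row of shape variables.
aligned = [procrustes(reference, skull)[1].ravel() for skull in skulls]
shape_space = PCA(n_components=2).fit_transform(np.array(aligned))
print(shape_space)   # 2D shape space: convergent skulls land near each other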

Read more at Science Daily

Heatwave deaths will rise steadily by 2080 as globe warms up

Future heatwaves are expected to become more frequent, more intense and longer-lasting.
If people cannot adapt to future climate temperatures, deaths caused by severe heatwaves will increase dramatically in tropical and subtropical regions, followed closely by Australia, Europe and the United States, a new global Monash-led study shows.

Published today in PLOS Medicine, it is the first global study to predict future heatwave-related deaths and aims to help decision makers in planning adaptation and mitigation strategies for climate change.

Researchers developed a model to estimate the number of deaths related to heatwaves in 412 communities across 20 countries for the period of 2031 to 2080.

The study projected excess mortality in relation to heatwaves in the future under different scenarios characterised by levels of greenhouse gas emissions, preparedness and adaption strategies and population density across these regions.
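The published model is far more sophisticated, but the basic logic of an excess-mortality projection can be sketched in a few lines (all numbers below are invented for illustration, not drawn from the study):

def projected_excess_deaths(heatwave_days, baseline_daily_deaths, relative_risk):
    # Excess deaths = heatwave days x expected baseline deaths x (RR - 1)
    return heatwave_days * baseline_daily_deaths * (relative_risk - 1)

# One hypothetical community under two emissions scenarios
for scenario, days in {"moderate emissions": 25, "extreme emissions": 60}.items():
    excess = projected_excess_deaths(days, baseline_daily_deaths=30,
                                     relative_risk=1.10)
    print(scenario, "->", round(excess), "excess deaths per year")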

Study lead and Monash Associate Professor Yuming Guo said the recent media reports detailing deadly heatwaves around the world highlight the importance of the heatwave study.

"Future heatwaves in particular will be more frequent, more intense and will last much longer," Associate Professor Guo said.

"If we cannot find a way to mitigate the climate change (reduce the heatwave days) and help people adapt to heatwaves, there will be a big increase of heatwave-related deaths in the future, particularly in the poor countries located around the equator."

A key finding of the study shows that under the extreme scenario, there will be a 471 per cent increase in deaths caused by heatwaves in three Australian cities (Brisbane, Sydney and Melbourne) in comparison with the period 1971-2010.

"If the Australia government cannot put effort into reducing the impacts of heatwaves, more people will die because of heatwaves in the future," Associate Professor Guo said.

The study comes as many countries around the world have been affected by severe heatwaves, leaving thousands dead and tens of thousands more suffering from heatstroke-related illnesses. The collective death toll across India, Greece, Japan and Canada continues to rise as the regions swelter through record temperatures, humidity, and wildfires. Associate Professor Antonio Gasparrini, from the London School of Hygiene & Tropical Medicine and study co-author, said since the turn of the century, it's thought heatwaves have been responsible for tens of thousands of deaths, including in regions of Europe and Russia.

"Worryingly, research shows that is it highly likely that there will be an increase in their frequency and severity under a changing climate, however, evidence about the impacts on mortality at a global scale is limited," Associate Professor Gasparrini said.

"This research, the largest epidemiological study on the projected impacts of heatwaves under global warming, suggests it could dramatically increase heatwave-related mortality, especially in highly-populated tropical and sub-tropical countries. The good news is that if we mitigate greenhouse gas emissions under scenarios that comply with the Paris Agreement, then the projected impact will be much reduced."

Read more at Science Daily

Innovative technique converts white fat to brown fat

Human white adipose tissue cultured in browning media for three weeks and stained with UCP1 (red), Lipidtox (green), and Sytox nuclear stain (blue).
Brown fat tissue in the body can burn enormous amounts of energy to generate heat, and studies in humans and animals have suggested that increasing the amount of healthy brown fat might help weight management and reduce symptoms of diabetes. However, how to safely and effectively increase brown fat has been a significant challenge for researchers.

A Columbia Engineering team led by Sam Sia, professor of biomedical engineering, has developed a simple, innovative method to directly convert white fat to brown fat outside the body and then reimplant it in a patient. The technique uses fat-grafting procedures commonly performed by plastic surgeons, in which fat is harvested from under the skin and then retransplanted into the same patient for cosmetic or reconstructive purposes. The researchers report in a Scientific Reports study (May 21) that they successfully converted harvested white fat to brown fat in the lab for potential use as a therapy.

Other methods to increase brown fat include chronic cold exposure, which is uncomfortable for most people, and pharmaceuticals that can cause side effects by targeting other organs. "Our approach to increasing brown fat is potentially safer than drugs because the only thing going into patients is their own tissue, and it's highly controllable because we can tune the amount of brown fat we inject," says Sia. "The process is also so simple that it could be potentially performed using an automated system within a doctor's office or clinic."

The team converted white fat to brown fat by culturing tissue fragments in media containing growth factors and other endogenous browning factors for one to three weeks to stimulate the "browning" process. They assessed the browning of the white fat by measuring levels of several brown fat biomarkers, including mitochondrial activity and the brown fat protein marker UCP1. In one of the study's experiments, they discovered that subcutaneous white fat in mice could be directly converted to brown fat outside the body, and that the brown fat both survived and remained stable after injection into the same mouse for a long period (two months in this experiment).

"The persistence of the converted brown fat is very important because we know that when white fat is naturally stimulated to turn to brown fat in vivo, through cold exposure for example, it can rapidly change back when the stimulation is removed," says Brian Gillette, the study's co-author and a Columbia-trained biomedical engineer now working in the department of surgery at NYU Winthrop Hospital. "Even though we could repeat the procedure several times if we needed to, since it's minimally invasive, it is critical that the brown fat survives well and remains stable so that it can function as an effective therapy."

The researchers then used their methods on human subcutaneous fat and were able to effectively convert it to brown fat. "This suggests that it might be possible one day to attempt our approach in humans as a potential therapy to help with weight loss, control of blood glucose levels, or to prevent weight gain," says Nicole Blumenfeld, a PhD student working with Sia and lead author of the paper.

The researchers note that, while the mice on a high fat diet treated with directly converted brown fat in the experiment did not show statistically significant weight loss versus a control group treated with unconverted white fat, the study demonstrates a simple and scalable tissue-grafting strategy that increases endogenous brown fat.

"This is an exciting advance toward engineered brown adipose tissue in clinical applications if it is proven to be safe and effective in humans," says Li Qiang, assistant professor in pathology and cell biology at Columbia University Medical Center who was not involved with this study. An expert in the pathophysiology of diabetes and obesity, Qiang documented the mechanism that promotes the "browning" of white adipose tissue.

Read more at Science Daily

Jul 31, 2018

160-year-old mystery about the origin of skeletons solved

A fossil heterostracan, Errivaspis waynensis, from the early Devonian (approximately 419 million years ago) of Herefordshire, UK.
Scientists at The University of Manchester and the University of Bristol have used powerful X-rays to peer inside the skeletons of some of our oldest vertebrate relatives, solving a 160-year-old mystery about the origin of our skeletons.

Living vertebrates have skeletons built from four different tissue types: bone and cartilage (the main tissues that human skeletons are made from), and dentine and enamel (the tissues from which our teeth are constructed). These tissues are unique because they become mineralised as they develop, giving the skeleton strength and rigidity.

Evidence for the early evolution of our skeletons can be found in a group of fossil fishes called heterostracans, which lived over 400 million years ago. These fishes include some of the oldest vertebrates with a mineralised skeleton that have ever been discovered. Exactly what tissue heterostracan skeletons were made from has long puzzled scientists.

Now a team of researchers from the University of Manchester, the University of Bristol and the Paul Scherrer Institute in Switzerland have taken a detailed look inside heterostracan skeletons using Synchrotron Tomography: a special type of CT scanning using very high energy X-rays produced by a particle accelerator. Using this technique, the team have identified this mystery tissue.

Lead researcher Dr Joseph Keating, from Manchester's School of Earth and Environmental Sciences, explained: "Heterostracan skeletons are made of a really strange tissue called 'aspidin'. It is crisscrossed by tiny tubes and does not closely resemble any of the tissues found in vertebrates today. For 160 years, scientists have wondered if aspidin is a transitional stage in the evolution of mineralised tissues."

The results of this study, published in Nature Ecology and Evolution, show that the tiny tubes are voids that originally housed fibre-bundles of collagen, a type of protein found in your skin and bones.

These findings enabled Dr Keating to rule out all but one hypothesis for the tissue's identity: aspidin is the earliest evidence of bone in the fossil record.

Read more at Science Daily

Aphids manipulate their food

This is an aphid infestation of the stem close to the bud of a tansy.
Aphids -- who hasn't been bothered by these little insects at one time or another? Why do they reproduce on plants so successfully? These are among the questions that Professor Dr Caroline Müller and her research team are addressing at Bielefeld University's Faculty of Biology. They have found out that aphids are able to influence the quality of their food, and that this may enable them to construct a niche on their own host plants. Müller's research team is located in the Transregio Collaborative Research Centre 'NC3', which is studying animals and their 'individual niches'. They have published their findings in the journal New Phytologist.

There are hundreds of different aphid species. They all feed on plant sap, known as phloem sap. The nutritional value of the phloem sap is determined by the sugar concentration and the concentration and composition of amino acids. Previously it was not known how the quality of plant sap changes in different plant parts after aphid infestation, how this change in quality influences the development of aphids, and how, in turn, the aphids can change the composition of the plant sap.

Müller and her team are the first to confirm that aphid infestation actually does change the composition of the plant sap depending on which aphid species is infesting which specific part of the plant. For example, infestation of the stem close to the bud with a certain aphid species changes the composition of sugar and organic acids in the sap. In contrast, infestation of the old leaves with another aphid species increases the concentration of amino acids. A further pattern also emerged: 'We were able to observe that the aphid species that developed best on the stem close to the bud and the other species that proliferated best on the old leaves each specifically increased the quality of the plant sap of the corresponding plant part,' says Ruth Jakobs, a research assistant at the Faculty of Biology. Hence, aphids construct their own niche in such a way that they are able to profit from it. 'We can assume that aphids behave in a similar way to, for example, beavers that settle in the dams they have constructed themselves,' says Müller.

The biologists gained their findings by placing aphids on different parts of common tansy plants -- the stem close to the bud, a young leaf, and an old leaf -- and determining the growth of the populations of these insects at these locations. In addition, the biologists collected the plant sap and analysed its chemical composition.

Read more at Science Daily

Plate tectonics not needed to sustain life

The artist's concept depicts Kepler-69c, a super-Earth-size planet in the habitable zone of a star like our sun, located about 2,700 light-years from Earth in the constellation Cygnus.
There may be more habitable planets in the universe than we previously thought, according to Penn State geoscientists, who suggest that plate tectonics -- long assumed to be a requirement for suitable conditions for life -- are in fact not necessary.

When searching for habitable planets or life on other planets, scientists look for biosignatures of atmospheric carbon dioxide. On Earth, atmospheric carbon dioxide increases surface heat through the greenhouse effect. Carbon also cycles to the subsurface and back to the atmosphere through natural processes.

"Volcanism releases gases into the atmosphere, and then through weathering, carbon dioxide is pulled from the atmosphere and sequestered into surface rocks and sediment," said Bradford Foley, assistant professor of geosciences. "Balancing those two processes keeps carbon dioxide at a certain level in the atmosphere, which is really important for whether the climate stays temperate and suitable for life."

Most of Earth's volcanoes are found at the border of tectonic plates, which is one reason scientists believed they were necessary for life. Subduction, in which one plate is pushed deeper into the subsurface by a colliding plate, can also aid in carbon cycling by pushing carbon into the mantle.

Planets without tectonic plates are known as stagnant lid planets. On these planets, the crust is one giant, spherical plate floating on the mantle, rather than separate pieces. Such planets are thought to be more widespread than planets with plate tectonics. In fact, Earth is the only planet with confirmed tectonic plates.

Foley and Andrew Smye, assistant professor of geosciences, created a computer model of the lifecycle of a planet. They looked at how much heat its climate could retain based on its initial heat budget, or the amount of heat and heat-producing elements present when a planet forms. Some elements produce heat when they decay. On Earth, the main heat-producing elements are uranium, thorium and potassium, which release heat as they break down.
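
The decay part of that heat budget can be sketched with standard half-lives: heating was far stronger early in a planet's life and falls off over billions of years, which is what ultimately limits volcanism and degassing on a stagnant lid planet. In the sketch below, the half-lives are standard values, but the present-day contribution shares are rough illustrative numbers, not the inputs of the Foley and Smye model.

```python
# Hedged sketch of the heat-budget idea: radiogenic heating decays with the
# half-lives of the main heat-producing isotopes, so early in a planet's life
# there is far more internal heat to drive volcanism and degassing.
# Half-lives are standard values; the present-day shares are rough,
# illustrative numbers, not the inputs of the Foley and Smye model.
HALF_LIFE_GYR = {"U238": 4.47, "U235": 0.70, "Th232": 14.0, "K40": 1.25}
PRESENT_DAY_SHARE = {"U238": 0.40, "U235": 0.02, "Th232": 0.40, "K40": 0.18}

def relative_radiogenic_heating(gyr_before_present):
    """Heat production relative to today, at a given time before present."""
    return sum(
        share * 2.0 ** (gyr_before_present / HALF_LIFE_GYR[isotope])
        for isotope, share in PRESENT_DAY_SHARE.items()
    )

for t in (4.5, 3.0, 1.5, 0.0):
    print(f"{t:.1f} Gyr ago: {relative_radiogenic_heating(t):.1f}x today's heating")
```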

After running hundreds of simulations to vary a planet's size and chemical composition, the researchers found that stagnant lid planets can sustain conditions for liquid water for billions of years. At the highest extreme, they could sustain life for up to 4 billion years, roughly Earth's life span to date.

"You still have volcanism on stagnant lid planets, but it's much shorter lived than on planets with plate tectonics because there isn't as much cycling," said Smye. "Volcanoes result in a succession of lava flows, which are buried like layers of a cake over time. Rocks and sediment heat up more the deeper they are buried."

The researchers found that at high enough heat and pressure, carbon dioxide gas can escape from rocks and make its way to the surface, a process known as degassing. On Earth, Smye said, the same process occurs with water in subduction fault zones.

The amount of degassing increases with the types and quantities of heat-producing elements present in a planet, up to a certain point, said Foley.

"There's a sweet spot range where a planet is releasing enough carbon dioxide to keep the planet from freezing over, but not so much that the weathering can't pull carbon dioxide out of the atmosphere and keep the climate temperate," he said.

According to the researchers' model, the presence and amount of heat-producing elements were far better indicators of a planet's potential to sustain life than the presence of plate tectonics.

Read more at Science Daily

Astronomers assemble 'light-fingerprints' to unveil mysteries of the cosmos

Earth with the albedo plotted over it.
Earthbound detectives rely on fingerprints to solve their cases; now astronomers can do the same, using "light-fingerprints" instead of skin grooves to uncover the mysteries of exoplanets.

Cornell University researchers have created a reference catalog using calibrated spectra and geometric albedos -- the light reflected by a surface -- of 19 of the most diverse bodies in our solar system. These include all eight planets, from rocky to gaseous; nine moons, from frozen to lava spewing; and two dwarf planets, one in the asteroid belt -- Ceres -- and one in the Kuiper belt -- Pluto.

By comparing observed spectra and albedos of exoplanets to this catalog of our own home planetary system, scientists will be able to characterize them in reference to the wide range of icy, rocky and gaseous worlds in our home system.

"A Catalog of Spectra, Albedos and Colors of Solar System Bodies for Exoplanet Comparison" was published online in the journal Astrobiology and will be featured on the print edition's cover in December.

"We use our own solar system and all we know about its incredible diversity of fascinating worlds as our Rosetta Stone," said co-author Lisa Kaltenegger, associate professor of astronomy and director of the Carl Sagan Institute. "With this catalog of light-fingerprints, we will be able to compare new observations of exoplanets to objects in our own solar system -- including the gaseous worlds of Jupiter and Saturn, the icy worlds of Europa, the volcanic world of Io and our own life-filled planet."

The catalog, freely available on the Carl Sagan Institute website, includes high- and low-resolution versions of the data, which shows astronomers the influence of spectral resolution on an object's identification. In addition, the catalog offers examples of how the colors of the 19 solar system models would change if they were orbiting stars other than our sun.
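
One way such a catalog can be put to work, sketched below with made-up numbers rather than the catalog's real files, is to resample each reference albedo spectrum onto the wavelength grid of an observed exoplanet spectrum and rank the solar-system analogues by goodness of fit.

```python
# Illustrative use of a spectra/albedo reference catalog (placeholder arrays,
# not the Carl Sagan Institute files): resample each reference spectrum onto
# the observed wavelength grid and rank solar-system analogues by chi-squared.
import numpy as np

def rank_analogues(obs_wavelength, obs_albedo, obs_err, catalog):
    """Rank catalog bodies by how well their albedo spectra fit an observation."""
    scores = {}
    for body, (wavelength, albedo) in catalog.items():
        model = np.interp(obs_wavelength, wavelength, albedo)  # resample to observed grid
        scores[body] = np.sum(((obs_albedo - model) / obs_err) ** 2)
    return sorted(scores.items(), key=lambda item: item[1])

# Placeholder catalog: wavelength (micron) vs. geometric albedo for three bodies.
grid = np.linspace(0.4, 1.0, 50)
catalog = {
    "Earth-like":   (grid, 0.30 + 0.05 * np.sin(6 * grid)),
    "Venus-like":   (grid, 0.70 + 0.02 * grid),
    "Jupiter-like": (grid, 0.50 - 0.10 * grid),
}

# A coarse, noisy "exoplanet" spectrum; the best match should be Earth-like.
obs_wl = np.linspace(0.45, 0.95, 10)
obs = 0.30 + 0.05 * np.sin(6 * obs_wl) + np.random.default_rng(1).normal(0, 0.01, 10)
print(rank_analogues(obs_wl, obs, obs_err=0.01, catalog=catalog)[0])
```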

"Planetary science broke new ground in the '70s and '80s with spectral measurements for solar system bodies. Exoplanet science will see a similar renaissance in the near future," said Jack Madden, doctoral candidate at the Carl Sagan Institute and lead author of the study. "The technology to directly collect the light from Earth-sized planets around other stars is currently in a clean room waiting to be assembled and trained on the right target. With the upcoming launch of the James Webb Space Telescope and the current construction of large ground-based telescopes such as the Giant Magellan Telescope and the Extremely Large Telescope, we are entering a new age of observational ability, so we need a reference catalog of all the planets and moons we already know, to compare these new exoplanet spectra to."

The catalog will enable scientists to prioritize time-intensive, high-resolution observations of extrasolar planets and moons. It also offers insights into what kind of worlds won't be so easy to categorize without high-resolution spectra. For example, Venus is a rocky planet, but because sunlight reflects from its dense carbon dioxide atmosphere rather than its rocky surface, the colors astronomers observe from such a planet are similar to those of an icy world. On the outer edge of the habitable zone, rocky exoplanets are likely to have dense atmospheres like Venus. Such worlds will require long observations to characterize correctly.

"Examining our solar system from the vantage point of a distant observer is an illuminating exercise," said Madden.

Read more at Science Daily

Creating a (synthetic) song from a zebra finch's muscle

Zebra finches.
Birds create songs by moving muscles in their vocal organs to vibrate air passing through their tissues. While previous research reported that each of the different muscles controls one acoustic feature, new research shows that these muscles act in concert to create sound.

An international team of researchers describes how zebra finches produce songs in the journal Chaos, from AIP Publishing. Using electromyographic (EMG) signals, researchers tracked the activity of one of the main muscles involved in creating sound, the syringealis ventralis (vS) muscle. They then used the data from this muscle to create a synthetic zebra finch song.

"The activity of this muscle provided us information on gating, when the sound starts or ends," said Juan Döppler, one of the researchers and a doctoral student at the University of Buenos Aires. "What's interesting is that when the bird is asleep, despite lacking the airflow necessary to produce sound, this muscle also activates and shows electrical activity, similar to the activity it shows while singing."

The researchers inserted pairs of bipolar electrodes into the vS muscle of five adult zebra finches. It was previously assumed that the vS muscle was primarily involved in frequency modulation, but this team found that the vS muscle was involved in making sounds or phonation too.

Using EMG data, they identified phonating intervals with a success rate above 70 percent. The team developed a set of criteria to accurately predict sound production intervals in a set of data from five different birds. The criteria performed particularly well in the short, simple syllables, when the vS is mostly silent during the creation of sound. For more complex syllables, it was clear that incorporating the activity of other syringeal muscles that impact the start and stop of sound in a zebra finch's song would improve their gating prediction.
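
A bare-bones version of EMG-based gating is sketched below, with a simulated trace and an arbitrary threshold rather than the authors' published criteria: rectify the vS signal, smooth it into an envelope, and mark stretches where the envelope rises well above its baseline as candidate sound-on intervals.

```python
# Bare-bones gating sketch (simulated data; thresholds are arbitrary, not the
# authors' published criteria): rectify the vS EMG, smooth it into an envelope,
# and mark stretches well above baseline as candidate sound-on intervals.
import numpy as np

def phonation_intervals(emg, fs, window_s=0.01, threshold=2.0):
    """Return (start, end) times in seconds where the smoothed, rectified EMG
    exceeds `threshold` times its median level."""
    win = max(1, int(window_s * fs))
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    active = envelope > threshold * np.median(envelope)
    padded = np.r_[False, active, False]
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return [(start / fs, end / fs) for start, end in zip(edges[::2], edges[1::2])]

# Hypothetical 2-second recording at 10 kHz with one burst of muscle activity.
fs = 10_000
rng = np.random.default_rng(3)
emg = rng.normal(0, 0.1, 2 * fs)
emg[5_000:8_000] += rng.normal(0, 1.0, 3_000)
print(phonation_intervals(emg, fs))   # roughly [(0.5, 0.8)]
```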

Still, the researchers were able to create a full reconstruction of the song using the activity of one muscle. The vS muscle is particularly active during sleep, so the researchers collected the electric activity of the vS muscle during sleep, and translated these patterns into song.

Read more at Science Daily

Jul 30, 2018

Carbon 'leak' may have warmed the planet for 11,000 years, encouraging human civilization

Researchers in the Sigman Lab at Princeton University extracted trace amounts of nitrogen from fossils to create a model for the activity of the Southern Ocean during the Holocene, a warm period that began about 11,000 years ago, during which agriculture and human civilization flourished. The fossils they studied included (from left): planktonic foraminifer Globigerina bulloides, a centric diatom, and deep-sea coral Desmophyllum dianthus.
The oceans are the planet's most important depository for atmospheric carbon dioxide on time scales of decades to millennia. But the process of locking away greenhouse gas is weakened by the activity of the Southern Ocean, so an increase in its activity could explain the mysterious warmth of the past 11,000 years, an international team of researchers reports.

The warmth of that period was stabilized by a gradual rise in global carbon dioxide levels, so understanding the reason for that rise is of great interest, said Daniel Sigman, the Dusenbury Professor of Geological and Geophysical Sciences at Princeton.

Scientists have proposed various hypotheses for that carbon dioxide increase, but its ultimate cause has remained unknown. Now, an international collaboration led by scientists from Princeton and the Max Planck Institute for Chemistry points to an increase in Southern Ocean upwelling. Their research appears in the current issue of the journal Nature Geoscience.

"We think we may have found the answer," said Sigman. "Increased circulation in the Southern Ocean allowed carbon dioxide to leak into the atmosphere, working to warm the planet."

Their findings about ocean changes could also have implications for predicting how global warming will affect ocean circulation and how much atmospheric carbon dioxide will rise due to fossil fuel burning.

For years, researchers have known that growth and sinking of phytoplankton pumps carbon dioxide deep into the ocean, a process often referred to as the "biological pump." The biological pump is driven mostly by the low latitude ocean but is undone closer to the poles, where carbon dioxide is vented back to the atmosphere by the rapid exposure of deep waters to the surface, Sigman said. The worst offender is the Southern Ocean, which surrounds Antarctica. "We often refer to the Southern Ocean as a leak in the biological pump," Sigman said.

Sigman and his colleagues have found that an increase in the Southern Ocean's upwelling could be responsible for stabilizing the climate of the Holocene, the period reaching more than 10,000 years before the Industrial Revolution.

Most scientists agree that the Holocene's warmth was critical to the development of human civilization. The Holocene was an "interglacial period," one of the rare intervals of warm climate that have occurred over the ice age cycles of the last million years. The retreat of the glaciers opened a more expansive landscape for humans, and the higher concentrations of carbon dioxide in the atmosphere made for more productive agriculture, which allowed people to reduce their hunting and gathering activities and build permanent settlements.

The Holocene differed from other interglacial periods in several key ways, say the researchers. For one, its climate was unusually stable, without the major cooling trend that is typical of the other interglacials. Secondly, the concentration of carbon dioxide in the atmosphere rose about 20 parts per million (ppm), from 260 ppm in the early Holocene to 280 ppm in the late Holocene, whereas carbon dioxide was typically stable or declined over other interglacial periods.

For comparison, since the beginning of industrialization until now, the carbon dioxide concentration in the atmosphere has increased from 280 to more than 400 ppm as a consequence of burning fossil fuels.

"In this context, the 20 ppm increase observed during the Holocene may seem small," said Sigman. "However, scientists think that this small but significant rise played a key role in preventing progressive cooling over the Holocene, which may have facilitated the development of complex human civilizations."

In order to study the potential causes of the Holocene carbon dioxide rise, the researchers investigated three types of fossils from several different areas of the Southern Ocean: diatoms and foraminifers, both shelled microorganisms found in the oceans, and deep-sea corals.

From the nitrogen isotope ratios of the trace organic matter trapped in the mineral walls of these fossils, the scientists were able to reconstruct the evolution of nutrient concentrations in Southern Ocean surface waters over the past 10,000 years.

"The method we used to analyze the fossils is unique and provides a new way to study past changes in ocean conditions," says Anja Studer, first author of the study, who performed the research while a graduate student working with Sigman's lab.

The fossil-bound nitrogen isotope measurements indicate that during the Holocene, increasing amounts of water, rich in nutrients and carbon dioxide, welled up from the deep ocean to the surface of the Southern Ocean. While the cause for the increased upwelling is not yet clear, the most likely process appears to be a change in the "Roaring 40s," a belt of eastward-blowing winds that encircle Antarctica.

Because of the enhanced Southern Ocean upwelling, the biological pump weakened over the Holocene, allowing more carbon dioxide to leak from the deep ocean into the atmosphere and thus possibly explaining the 20 ppm rise in atmospheric carbon dioxide.

"This process is allowing some of that deeply stored carbon dioxide to invade back to the atmosphere," said Sigman. "We're essentially punching holes in the membrane of the biological pump."

The increase in atmospheric carbon dioxide levels over the Holocene worked to counter the tendency for gradual cooling that dominated most previous interglacials. Thus, the new results suggest that the ocean may have been responsible for the "special stability" of the Holocene climate.

The same processes are at work today: The absorption of carbon by the ocean is slowing the rise in atmospheric carbon dioxide produced by fossil fuel burning, and the upwelling of the Southern Ocean is still allowing some of that carbon dioxide to vent back into the atmosphere.

Read more at Science Daily

Homo sapiens developed a new ecological niche that separated it from other hominins

Map of the potential distribution of archaic hominins, including H. erectus, H. floresiensis, H. neanderthalensis, Denisovans and archaic African hominins, in the Old World at the time of the evolution and dispersal of H. sapiens between approximately 300 and 60 thousand years ago.
A critical review of growing archaeological and palaeoenvironmental datasets relating to Middle and Late Pleistocene (300-12 thousand years ago) hominin dispersals within and beyond Africa, published today in Nature Human Behaviour, demonstrates unique environmental settings and adaptations for Homo sapiens relative to previous and coexisting hominins such as Homo neanderthalensis and Homo erectus. Our species' ability to occupy diverse and 'extreme' settings around the world stands in stark contrast to the ecological adaptations of other hominin taxa, and may explain how our species became the last surviving hominin on the planet.

The paper, by scientists from the Max Planck Institute for the Science of Human History and the University of Michigan, suggests that investigations into what it means to be human should shift from attempts to uncover the earliest material traces of 'art', 'language', or technological 'complexity' towards understanding what makes our species ecologically unique. In contrast to our ancestors and contemporary relatives, our species not only colonized a diversity of challenging environments, including deserts, tropical rainforests, high altitude settings, and the palaeoarctic, but also specialized in its adaptation to some of these extremes.

Ancestral ecologies -- the ecology of Early and Middle Pleistocene Homo

Although all hominins that make up the genus Homo are often termed 'human' in academic and public circles, this evolutionary group, which emerged in Africa around 3 million years ago, is highly diverse. Some members of the genus Homo (namely Homo erectus) had made it to Spain, Georgia, China, and Indonesia by 1 million years ago. Yet, existing information from fossil animals, ancient plants, and chemical methods all suggest that these groups followed and exploited environmental mosaics of forest and grassland. It has been argued that Homo erectus and the 'Hobbit', or Homo floresiensis, used humid, resource-scarce tropical rainforest habitats in Southeast Asia from 1 million years ago to 100,000 and 50,000 years ago, respectively. However, the authors found no reliable evidence for this.

It has also been argued that our closest hominin relatives, Homo neanderthalensis -- the Neanderthals -- were specialized for the occupation of high-latitude Eurasia between 250,000 and 40,000 years ago. The basis for this includes a face shape potentially adapted to cold temperatures and a hunting focus on large animals such as woolly mammoths. Nevertheless, a review of the evidence led the authors to again conclude that Neanderthals primarily exploited a diversity of forest and grassland habitats, and hunted a diversity of animals, from temperate northern Eurasia to the Mediterranean.

Deserts, rainforests, mountains, and the arctic

In contrast to these other members of the genus Homo, our species -- Homo sapiens -- had expanded to higher-elevation niches than its hominin predecessors and contemporaries by 80-50,000 years ago, and by at least 45,000 years ago was rapidly colonizing a range of palaeoarctic settings and tropical rainforest conditions across Asia, Melanesia, and the Americas. Furthermore, the authors argue that the continued accumulation of better-dated, higher resolution environmental datasets associated with our species' crossing the deserts of northern Africa, the Arabian Peninsula, and northwest India, as well as the high elevations of Tibet and the Andes, will further help to determine the degree to which our species demonstrated novel colonizing capacities in entering these regions.

Finding the origins of this ecological 'plasticity', or the ability to occupy a number of very different environments, currently remains difficult in Africa, particularly back towards the evolutionary origins of Homo sapiens 300-200,000 years ago. However, the authors argue that there are tantalizing hints of novel environmental contexts of human habitation and associated technological shifts across Africa just after this timeframe. They hypothesize that the drivers of these changes will become more apparent with future work, especially that which tightly integrates archaeological evidence with highly resolved local palaeoecological data. For example, lead author of the paper, Dr. Patrick Roberts, suggests: "Although a focus on finding new fossils or genetic characterization of our species and its ancestors has helped rough out the broad timing and location of hominin speciations, such efforts are largely silent on the various environmental contexts of biocultural selection."

The 'generalist specialist' -- a very sapiens niche

One of the main new claims of the authors is that the evidence for human occupation of a huge diversity of environmental settings across the majority of the Earth's continents by the Late Pleistocene hints at a new ecological niche, that of the 'generalist specialist'. As Roberts states "A traditional ecological dichotomy exists between 'generalists', who can make use of a variety of different resources and inhabit a variety of environmental conditions, and 'specialists', who have a limited diet and narrow environmental tolerance. However, Homo sapiens furnish evidence for 'specialist' populations, such as mountain rainforest foragers or palaeoarctic mammoth hunters, existing within what is traditionally defined as a 'generalist' species."

This ecological ability may have been aided by extensive cooperation between non-kin individuals among Pleistocene Homo sapiens, argues Dr. Brian Stewart, co-author of the study. "Non-kin food sharing, long-distance exchange, and ritual relationships would have allowed populations to 'reflexively' adapt to local climatic and environmental fluctuations, and outcompete and replace other hominin species." In essence, accumulating, drawing from, and passing down a large pool of cumulative cultural knowledge, in material or idea form, may have been crucial in the creation and maintenance of the generalist-specialist niche by our species in the Pleistocene.

Implications for our pursuit of ancient humanity


The authors are clear that this proposition remains hypothetical and could be disproven by evidence for the use of 'extreme' environments by other members of the genus Homo. However, testing the 'generalist specialist' niche in our species encourages research in more extreme environments that have previously been neglected as unpromising for palaeoanthropological and archaeological work, including the Gobi Desert and Amazon rainforest. The expansion of such research is particularly important in Africa, the evolutionary cradle of Homo sapiens, where more detailed archaeological and environmental records dating back to 300-200,000 years ago are becoming increasingly crucial if we are to track the ecological abilities of the earliest humans.

It is also clear that growing evidence for hominin interbreeding and a complex anatomical and behavioural origin of our species in Africa highlights that archaeologists and palaeoanthropologists should focus on looking at the environmental associations of fossils. "While we often get excited by the discovery of new fossils or genomes, perhaps we need to think about the behavioural implications of these discoveries in more detail, and pay more attention to what these new finds tell us about the passing of new ecological thresholds," says Stewart. Work focusing on how the genetics of different hominins may have led to ecological and physical benefits such as high-altitude capacities or UV tolerance remains a highly fruitful way forward in this regard.

Read more at Science Daily