Nov 3, 2018

'Robust' corals primed to resist coral bleaching

The research suggests that "robust" corals, such as brain corals, may be more resilient to bleaching than "complex" corals, such as branching staghorns. Pictured: Orpheus Island, Queensland.
Using advanced genomic techniques, a team of researchers led by Dr Hua (Emily) Ying of The Australian National University (ANU) and Prof David Miller of the ARC Centre of Excellence for Coral Reef Studies (Coral CoE) at James Cook University (JCU) has found that the group of corals classified as "robust," which includes a number of the brain corals and mushroom corals, has a key physiological advantage over "complex" corals, including common branching corals such as the staghorn coral.

In a new paper published today in the journal Genome Biology, researchers report that "robust" corals possess a unique capacity to generate an "essential" amino acid.

"Amino acids are the building blocks of life," said lead author Dr Emily Ying of ANU Research School of Biology.

"Amino acids are crucial, for example, in repairing tissue or growing new tissue. But, generating amino acids is energetically costly for animals, so they usually only generate 11 of the 20 required for life. The remaining nine amino acids are called the 'essential' amino acids because they must be supplied by the animal's diet. For corals, this includes tiny drifting animals known as 'zooplankton.'"

But this is not the only form of sustenance for corals. Through a mutually beneficial relationship with the microalgae known as Symbiodinium, corals are supplied with the energy needed to build their hard skeletons.

"Symbiodinium also supplies the coral with some of the 'essential' amino acids, making them less dependent on their diet than other animals," said senior author Prof David Miller of Coral CoE at JCU.

When global warming causes corals to bleach, however, they expel their resident Symbiodinium and are then fully dependent on their diet to meet this nutritional requirement.

"We now know that 'robust' corals can make at least one of the 'essential' amino acids without relying on Symbiodinium. This suggests that they may be more resilient, at least in the short term, to bleaching than the 'complex' corals such as the branching staghorns," explained Prof Miller.

Until now, scientists had few clues about why some corals only host a specific Symbiodinium type and others are less particular.

Read more at Science Daily

Comet tails blowing in the solar wind

The Sun's magnetic field, which is embedded in the solar wind, permeates the entire solar system. The current sheet -- where the magnetic field changes polarity -- spirals out from near the solar equator like a wavy skirt around a ballet dancer's waist.
Engineers and scientists gathered around a screen in an operations room at the Naval Research Laboratory in Washington, D.C., eager to lay their eyes on the first data from NASA's STEREO spacecraft. It was January 2007, and the twin STEREO satellites -- short for Solar and Terrestrial Relations Observatory -- which had launched just months before, were opening their instruments' eyes for the first time. First up: STEREO-B. The screen blinked, but instead of the vast starfield they expected, a pearly white, feathery smear -- like an angel's wing -- filled the frame. For a few panicky minutes, NRL astrophysicist Karl Battams worried something was wrong with the telescope. Then, he realized this bright object wasn't a defect, but an apparition, and these were the first satellite images of Comet McNaught. Later that day, STEREO-A would return similar observations.

Comet C/2006 P1 -- also known as Comet McNaught, named for astronomer Robert McNaught, who discovered it in August 2006 -- was one of the brightest comets visible from Earth in the past 50 years. Throughout January 2007, the comet fanned across the Southern Hemisphere's sky, so bright it was visible to the naked eye even during the day. McNaught belongs to a rarefied group of comets, dubbed the Great Comets and known for their exceptional brightness. Setting McNaught apart further still from its peers, however, was its highly structured tail, composed of many distinct dust bands called striae, or striations, that stretched more than 100 million miles behind the comet, longer than the distance between Earth and the Sun. One month later, in February 2007, an ESA (European Space Agency) and NASA spacecraft called Ulysses would encounter the comet's long tail.

"McNaught was a huge deal when it came because it was so ridiculously bright and beautiful in the sky," Battams said. "It had these striae -- dusty fingers that extended across a huge expanse of the sky. Structurally, it's one of the most beautiful comets we've seen for decades."

How exactly the tail broke up in this manner, scientists didn't know. It called to mind reports of another storied comet from long ago: the Great Comet of 1744, which was said to have dramatically fanned out in six tails over the horizon, a phenomenon astronomers then couldn't explain. By untangling the mystery of McNaught's tail, scientists hoped to learn something new about the nature of comets -- and solve two cosmic mysteries in one.

A key difference between studying comets in 1744 and 2007 is, of course, our ability to do so from space. In addition to STEREO's serendipitous sighting, another mission, ESA/NASA's SOHO -- the Solar and Heliospheric Observatory -- made regular observations as McNaught flew by the Sun. Researchers hoped these images might contain their answers.

Now, years later, Oliver Price, a planetary science Ph.D. student at University College London's Mullard Space Science Laboratory in the United Kingdom, has developed a new image-processing technique to mine through the wealth of data. Price's findings -- summarized in a recently published Icarus paper -- offer the first observations of striations forming, and an unexpected revelation about the Sun's effect on comet dust.

Comets are cosmic crumbs of frozen gas, rock and dust left over from the formation of our solar system 4.6 billion years ago -- and so they may contain important clues about our solar system's early history. Those clues are unlocked, as if from a time capsule, every time a comet's elliptical orbit brings it close to the Sun. Intense heat vaporizes the frozen gases and releases the dust within, which streams behind the comet, forming two distinct tails: an ion tail carried by the solar wind -- the constant flow of charged particles from the Sun -- and a dust tail.

Understanding how dust behaves in the tail -- how it fragments and clumps together -- can teach scientists a great deal about similar processes that formed dust into asteroids, moons and even planets all those billions of years ago. Appearing as one of the biggest and most structurally complex comets in recent history, McNaught was a particularly good subject for this type of study. Its brightness and high dust production made it much easier to resolve the evolution of fine structures in its dust tail.

Price began his study focusing on something the scientists couldn't explain. "My supervisor and I noticed weird goings-on in the images of these striations, a disruption in the otherwise clean lines," he said. "I set out to investigate what might have happened to create this weird effect."

The rift seemed to be located at the heliospheric current sheet, a boundary where the magnetic orientation, or polarity, of the electrified solar wind changes directions. This puzzled scientists because while they have long known a comet's ion tail is affected by the solar wind, they had never seen the solar wind impact dust tails before.

Dust in McNaught's tail -- grains roughly the size of cigarette smoke particles -- is too heavy, the scientists thought, for the solar wind to push around. On the other hand, an ion tail's minuscule, electrically charged ions and electrons easily sail along the solar wind. But it was difficult to tell exactly what was going on with McNaught's dust, and where, because at roughly 60 miles per second, the comet was rapidly traveling in and out of STEREO and SOHO's view.

"We got really good data sets with this comet, but they were from different cameras on different spacecraft, which are all in different places," Price said. "I was looking for a way to bring it all together to get a complete picture of what's happening in the tail."

His solution was a novel image-processing technique that compiles all the data from different spacecraft using a simulation of the tail, in which the location of each tiny speck of dust is determined by solar conditions and by physical characteristics like its size and age -- how long it had been since it flew off the head, or coma, of the comet. The end result is what Price dubbed a temporal map, which layers information from all the images taken at any given moment, allowing him to follow the dust's movements.
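
What a temporal map does, in essence, is re-index image brightness by when each dust grain left the comet and by how strongly sunlight pushes on it. The sketch below is not Price's code; the dynamics, speeds, and plate scale are invented placeholders. It only shows the general shape of such a re-binning: a toy model predicts where a grain with a given ejection time and size parameter should sit in each image, and the brightness found there is accumulated into a grid indexed by those two labels.

```python
# Minimal illustrative sketch of "temporal map" re-binning. Everything here
# (the 1D dynamics, outflow speed, plate scale, synthetic images) is a toy
# stand-in, not the actual analysis used in the Icarus paper.
import numpy as np

def grain_position(t_eject, beta, t_obs, r0=0.2, v0=30.0):
    """Toy heliocentric distance (AU) of a grain at observation time t_obs (days).
    Assumes a constant outflow speed scaled by beta (a stand-in for grain size)."""
    dt = (t_obs - t_eject) * 86400.0              # seconds since ejection
    au = 1.496e11                                 # metres per AU
    return r0 + v0 * (1.0 + beta) * dt / au

def temporal_map(images, obs_times, eject_times, betas, pixel_of):
    """Accumulate brightness from every image into a (beta, ejection-time) grid."""
    tmap = np.zeros((len(betas), len(eject_times)))
    for img, t_obs in zip(images, obs_times):
        for i, beta in enumerate(betas):
            for j, t_ej in enumerate(eject_times):
                if t_ej >= t_obs:
                    continue                      # grain not yet released
                px = pixel_of(grain_position(t_ej, beta, t_obs))
                if 0 <= px < img.size:
                    tmap[i, j] += img[px]
    return tmap

# Synthetic demo: three 1D "images" (brightness profiles) taken a day apart.
rng = np.random.default_rng(0)
images = [rng.random(200) for _ in range(3)]
obs_times = [10.0, 11.0, 12.0]                     # days
eject_times = np.linspace(0.0, 9.5, 40)            # days
betas = np.linspace(0.1, 2.0, 20)                  # larger beta = lighter grains
pixel_of = lambda r_au: int((r_au - 0.2) * 2.0e5)  # toy plate scale
print(temporal_map(images, obs_times, eject_times, betas, pixel_of).shape)
```

Because every image contributes to the same (ejection time, size) grid, frames from different cameras and spacecraft can be stacked into a single picture of how each parcel of dust moves over time.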

The temporal maps meant Price could watch the striations form over time. His videos, which cover the span of two weeks, are the first to track the formation and evolution of these structures, showing how dust fragments topple off the comet head and collapse into long striations.

But the researchers were most excited to find that Price's maps made it easier to explain the strange effect that drew their attention to the data in the first place. Indeed, the current sheet was the culprit behind the disruptions in the dust tail, breaking up each striation's smooth, distinct lines. For the two days it took the full length of the comet to traverse the current sheet, whenever dust encountered the changing magnetic conditions there, it was jolted out of position, as if crossing some cosmic speed bump.

"It's like the striation's feathers are ruffled when it crosses the current sheet," University College London planetary scientist Geraint Jones said. "If you picture a wing with lots of feathers, as the wing crosses the sheet, lighter ends of the feathers get bent out of shape. For us, this is strong evidence that the dust is electrically charged, and that the solar wind is affecting the motion of that dust."

Scientists have long known the solar wind affects charged dust; missions like Galileo, Cassini, and Ulysses watched it move electrically charged dust through the space near Jupiter and Saturn. But it was a surprise for them to see the solar wind affect larger dust grains like those in McNaught's tail -- about 100 times bigger than the dust seen ejected from around Jupiter and Saturn -- because they're that much heavier for the solar wind to push around.

With this study, scientists gain new insights into long-held mysteries. The work sheds light on the nature of striated comet tails from the past and provides a crucial lens for studying other comets in the future. But it also opens a new line of questioning: What role did the Sun have in our solar system's formation and early history?

Read more at Science Daily

Nov 2, 2018

Older fathers associated with increased birth risks, study reports

Infant in hands
A decade of data documenting live births in the United States links babies of older fathers with a variety of increased risks at birth, including low birth weight and seizures, according to a new study by researchers at the Stanford University School of Medicine.

The data even suggest that the age of the father can sway the health of the mother during pregnancy, specifically her risk for developing diabetes.

"We tend to look at maternal factors in evaluating associated birth risks, but this study shows that having a healthy baby is a team sport, and the father's age contributes to the baby's health, too," said Michael Eisenberg, MD, associate professor of urology.

Data from more than 40 million births showed that babies born to fathers of an "advanced paternal age," which roughly equates to older than 35, were at a higher risk for adverse birth outcomes, such as low birth weight, seizures and need for ventilation immediately after birth. Generally speaking, the older a father's age, the greater the risk. For example, men who were 45 or older were 14 percent more likely to have a child born prematurely, and men 50 or older were 28 percent more likely to have a child that required admission to the neonatal intensive care unit.

Still, these numbers aren't reason to drastically change any life plans, as the risks are still relatively low, Eisenberg said. He compared the increased risks to buying lottery tickets. "If you buy two lottery tickets instead of one, your chances of winning double, so it's increased by 100 percent," he said. "But that's a relative increase. Because your chance of winning the lottery started very small, it's still unlikely that you're going to win the lottery. This is a very extreme example, but the same concept can be applied to how you think about these birth risks."
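
To make Eisenberg's point concrete, here is a quick back-of-the-envelope calculation; the baseline rate is a hypothetical figure chosen only for illustration, not a number from the study. A relative increase of 14 percent on a small baseline risk moves the absolute risk only slightly.

```python
# Illustrative arithmetic only: the baseline risk below is made up, and the
# 14 percent figure is the relative increase reported for fathers 45 or older.
baseline_risk = 0.10                 # hypothetical 10% baseline chance of an outcome
relative_increase = 0.14             # "14 percent more likely"

absolute_risk = baseline_risk * (1 + relative_increase)
print(f"baseline: {baseline_risk:.1%} -> with +14% relative risk: {absolute_risk:.1%}")
# baseline: 10.0% -> with +14% relative risk: 11.4%, a 1.4 percentage-point change
```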

Instead, Eisenberg sees the findings as informational ammunition for people planning a family and hopes that they will serve to educate the public and health officials.

A paper describing the study will be published online Nov. 1 in The British Medical Journal. Eisenberg is the senior author. Resident physician Yash Khandwala, MD, is the lead author.

Increased risks at 35

Back in 2017, Eisenberg published a study showing that the number of older men fathering children was on the rise. Now, about 10 percent of infants are born to fathers over the age of 40, whereas four decades ago it was only 4 percent.

"We're seeing these shifts across the United States, across race strata, across education levels, geography -- everywhere you look, the same patterns are being seen," Eisenberg said. "So I do think it's becoming more relevant for us to understand the health ramifications of advanced paternal age on infant and maternal health."

Eisenberg and his colleagues used data from 40.5 million live births documented through a data-sharing program run by the Centers for Disease Control and Prevention and the National Center for Health Statistics. The researchers organized the information based on the fathers' age -- younger than 25; 25 to 34; 35 to 44; 45 to 55; and older than 55 -- and controlled for a variety of parameters that might skew the association between the father's age and birth outcomes, such as race, education level, marital status, smoking history, access to care and the mother's age.

The data suggested that once a dad hits age 35, there's a slight increase in birth risks overall -- with every year that a man ages, he accumulates on average two new mutations in the DNA of his sperm -- but birth risks for infants born to fathers of the subsequent age tier showed sharper increases.

Compared with fathers between the ages of 25 and 34 (the average age of paternity in the United States), infants born to men 45 or older were 14 percent more likely to be admitted to the NICU, 14 percent more likely to be born prematurely, 18 percent more likely to have seizures and 14 percent more likely to have a low birth weight. If a father was 50 or older, the likelihood that their infant would need ventilation upon birth increased by 10 percent, and the odds that they would need assistance from the neonatal intensive care unit increased by 28 percent.

"What was really surprising was that there seemed to be an association between advanced paternal age and the chance that the mother would develop diabetes during pregnancy," said Eisenberg. For men age 45 and older, their partners were 28 percent more likely to develop gestational diabetes, compared with fathers between 25 and 34. Eisenberg points out that possible biological mechanisms at play here are still a bit murky, but he suspects that the mother's placenta has a role.

Beyond correlation

Moving forward, Eisenberg wants to look into other population cohorts to confirm the associations between age and birth risks, as well as begin to decode some of the possible biological mechanisms.

"Scientists have looked at these kinds of trends before, but this is the most comprehensive study to look at the relationship between the father's age and birth outcomes at a population level," said Eisenberg. "Having a better understanding of the father's biological role will be obviously important for the offspring, but also potentially for the mother."

Other Stanford co-authors of the study are professor of obstetrics and gynecology Valerie Baker, MD; professor of pediatrics Gary Shaw, DrPH; professor of pediatrics David Stevenson, MD; and professor of biomedical data science Ying Lu, PhD.

Read more at Science Daily

Widely used mosquito repellent proves lethal to larval salamanders

Spotted salamanders begin life in the water. During their aquatic larval phase, they are efficient predators of mosquito larvae.
Insect repellents containing picaridin can be lethal to salamanders. So reports a new study published today in Biology Letters that investigated how exposure to two common insect repellents influenced the survival of aquatic salamander and mosquito larvae.

Insect repellents are a defense against mosquito bites and mosquito-borne diseases like dengue, chikungunya, Zika, and West Nile virus. Salamanders provide natural mosquito control. During their aquatic juvenile phase, they forage on mosquito larvae, keeping populations of these nuisance insects in check.

Emma Rosi, a freshwater ecologist at Cary Institute of Ecosystem Studies and a co-author on the paper explains, "Use of insect repellents is on the rise globally. Chemicals in repellents enter aquatic ecosystems through sewage effluent and are now common in surface waters. We set out to understand the impact of repellent pollution on both larval mosquitoes and the larval salamanders that prey on them."

The paper is the first to suggest that environmentally realistic concentrations of picaridin-containing repellents in surface waters may increase the abundance of adult mosquitoes due to a decrease in predation pressure on mosquitoes at the larval stages.

Testing the two most popular repellents


The research team tested the effects of two of the most widely used insect repellents -- DEET (Repel 100 Insect Repellent) and picaridin (Sawyer Premium Insect Repellent) -- on larval salamanders and mosquitoes. In a lab, they exposed mosquito larvae and just-hatched spotted salamander larvae to three environmentally relevant concentrations of these chemicals, as well as a control treatment.

Rosi notes, "The concentrations in our experiments are conservative; we prepared them based on unadulterated commercial formulations, not concentrations of pure active compounds."

Mosquito larvae were not impacted by any of the treatments and matured unhindered. After four days of exposure to repellent with picaridin, salamanders in all of the treatment groups began to display signs of impaired development such as tail deformities. By day 25, 45-65% of picaridin-exposed salamander larvae died.

Co-author Barbara Han, a disease ecologist at Cary Institute explains, "Our findings demonstrate that larval salamanders suffer severe mortality and developmental deformities when exposed to environmentally relevant concentrations of commercially available repellent containing the active ingredient picaridin."

Adding, "The expediency of salamander mortality was disconcerting. When studying the effects of a chemical on an amphibian, we usually look for a suite of abnormalities. We couldn't collect these data because the salamanders died so quickly."

How toxic is toxic?

LC50 tests are used to define a chemical's environmental toxicity. These standard tests, based on one life stage of a single species, measure the concentration of a chemical at which 50% of a test population dies during a four-day lab exposure.

Co-author Alexander Reisinger, an Assistant Professor at University of Florida, Gainesville says, "We observed heavy salamander mortality with picaridin, but not until after the fourth day of exposure. By the LC50 measure, picaridin would be deemed 'safe', but clearly, this is not the case. If a substance doesn't kill organisms within the first few days of exposure, it can still be toxic and have ecological impacts."
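
Reisinger's objection is essentially about the observation window. A toy calculation, with invented daily mortality figures (the study itself reports 45-65% of picaridin-exposed larvae dead by day 25), shows how a chemical can pass a four-day screen and still be lethal on ecologically relevant timescales:

```python
# Hypothetical cumulative mortality fractions by day for a picaridin-like exposure.
mortality_by_day = {4: 0.00, 10: 0.15, 18: 0.35, 25: 0.55}

def looks_safe_in_window(mortality, window_days=4, threshold=0.5):
    """True if cumulative mortality stays below the 50% threshold inside the window."""
    return all(frac < threshold for day, frac in mortality.items() if day <= window_days)

print("Passes a standard 4-day screen:", looks_safe_in_window(mortality_by_day))  # True
print("Cumulative mortality by day 25:", mortality_by_day[25])                    # 0.55
```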

Results may underestimate the problem

Lethal in a controlled setting, picaridin may cause greater mortality in a natural context, where organisms are exposed to numerous stressors. Rosi notes, "Animals don't exist in isolation. In nature, competition, predation, resource limitation, and social interactions make it difficult for an organism to tolerate the added stress of exposure to a harmful substance, even in small amounts."

Timing -- of both repellent use and amphibian reproduction -- is also key. Many amphibians breed in a single seasonal pulse, putting all their eggs in one basket, so to speak. Mosquitoes have an extended breeding season, and reproduce multiple times.

Lead author Rafael Almeida, a postdoctoral researcher at Cornell University, conducted the research as a visiting PhD student at the Cary Institute. He explains, "The amount of repellents entering waterways peaks seasonally. If amphibians are exposed during a sensitive life stage, entire cohorts could perish. The population would not have a chance to recover until the following year. Meanwhile, mosquitoes would continue to reproduce. It suggests a negative feedback loop."

Read more at Science Daily

Unique immunity genes in one widespread coral species

Clonal colonies of the coral P. damicornis being reared in the Cnidarian Immunity Lab at the UM Rosenstiel School of Marine and Atmospheric Science.
A new study led by researchers at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science found that a common coral species might have evolved unique immune strategies to cope with environmental change.

Roughly 30 percent of the cauliflower coral's (Pocillopora damicornis) genome was unique compared to several other reef-building corals, and many of the genes in this unique portion were related to immune function. This diversity of immune-related genes, the researchers say, may be important for the long-term survival of coral reefs as climate change and ocean acidification continue to alter the environment to which corals are adapted.

"This coral is traditionally thought of as a weed, and yet it may be one of the last corals to survive environmental changes such as climate change," said senior author of the study Nikki Traylor-Knowles, an assistant professor of marine biology and ecology at the UM Rosenstiel School.

To conduct the research, the scientists extracted and sequenced the genomic DNA from two healthy fragments and two bleached fragments of P. damicornis, which is one of the most abundant and widespread reef-building corals in the world. The genome was then compared to publicly available genomes for several other coral species and several other cnidarian species.

"The study shows that this is an important coral with a very complex and unique immune system, which may explain why it is able to survive in so many different locations," said the paper's lead author Ross Cunning, who conducted the research as a postdoctoral scientist at the UM Rosenstiel School and is now a researcher at Shedd Aquarium.

These results suggest that the evolution of an innate immune system has been a defining feature of the success of hard corals like P. damicornis, and may help facilitate their continued success under climate change scenarios.

The immune system of corals, like that of humans, is vital for protecting overall health and coping with changes in the surroundings. An animal with a stronger immune system is better equipped to deal with environmental change. These new findings suggest that some corals have many more, and more diverse, immunity genes than would be expected, which is the hallmark of a very robust immune system.

"This study helps us better understand how corals deal with stress," said Traylor-Knowles. "Its complex immune system indicates that it may have the tools to deal with environmental change much more easily than other corals."

Read more at Science Daily

Immigration to the United States changes a person's microbiome

This graphical abstract shows that migration from a non-western nation to the United States is associated with a loss in gut microbiome diversity and function in a manner that may predispose individuals to metabolic disease.
Researchers at the University of Minnesota and the Somali, Latino, and Hmong Partnership for Health and Wellness have new evidence that the gut microbiota of immigrants and refugees rapidly Westernize after a person's arrival in the United States. The study of communities migrating from Southeast Asia to the U.S., published November 1 in the journal Cell, could provide insight into some of the metabolic health issues, including obesity and diabetes, affecting immigrants to the country.

"We found that immigrants begin losing their native microbes almost immediately after arriving in the U.S. and then acquire alien microbes that are more common in European-American people," says senior author Dan Knights, a computer scientist and quantitative biologist at the University of Minnesota. "But the new microbes aren't enough to compensate for the loss of the native microbes, so we see a big overall loss of diversity."

It has been shown before that people in developing nations have a much greater diversity of bacteria in their gut microbiome, the population of beneficial microbes living in humans' intestines, than people living in the U.S. "But it was striking to see this loss of diversity actually happening in people who were changing countries or migrating from a developing nation to the U.S.," he says.

The research was conducted with assistance from -- and inspired by -- Minnesota's large community of refugees and immigrants from Southeast Asia, particularly the Hmong and Karen peoples, ethnic minorities that originally were from China and Burma and that today have communities in Thailand. The study used a community-based participatory research approach: members of the Hmong and Karen communities in both Minnesota and Thailand were involved in designing the study, recruiting participants, and educating their communities about the findings.

"Obesity was a concern that was coming up a lot for the Hmong and Karen communities here. In other studies, the microbiome had been related to obesity, so we wanted to know if there was potentially a relationship in immigrants and make any findings relevant and available to the communities. These are vulnerable populations, so we definitely try to make all of our methods as sensitive to that as possible and make sure that they have a stake in the research," says first author Pajau Vangay.

Knights, Vangay, and their team compared the gut microbiota of Hmong and Karen people still living in Thailand; Hmong and Karen people who had immigrated to the U.S.; the children of those immigrants; and Caucasian American controls. They also were able to follow a group of 19 Karen refugees as they relocated from Thailand to the U.S., which meant they could track how the refugees' gut microbiomes changed longitudinally in their first six to nine months in the U.S.

And the researchers did find that significant changes happened that fast: in those first six to nine months, the Western-associated gut bacteria Bacteroides began to displace the non-Western bacteria Prevotella. But this Westernization also continued over the course of the first decade in the U.S., and overall microbiome diversity decreased the longer the immigrants had been in the U.S. The participants' food logs suggested that eating a more Western diet played a role in perturbing the microbiome but couldn't explain all the changes.
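
The shift the team describes can be summarized with a simple ratio of the two dominant bacterial groups in each stool sample over time. The sketch below only illustrates that bookkeeping; the relative abundances are invented, not the study's data.

```python
# Toy longitudinal samples for one participant (abundances are made-up fractions).
samples = [
    {"months_in_us": 0, "Bacteroides": 0.10, "Prevotella": 0.55},
    {"months_in_us": 6, "Bacteroides": 0.30, "Prevotella": 0.30},
    {"months_in_us": 9, "Bacteroides": 0.45, "Prevotella": 0.15},
]

for s in samples:
    ratio = s["Bacteroides"] / s["Prevotella"]   # rising ratio = "Westernizing" gut
    print(f"{s['months_in_us']:>2} months in the U.S.: Bacteroides/Prevotella = {ratio:.2f}")
```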

The changes were even more pronounced in their children. "We don't know for sure why this is happening. It could be that this has to do with actually being born in the USA or growing up in the context of a more typical US diet. But it was clear that the loss of diversity was compounded across generations. And that's something that has been seen in animal models before, but not in humans," says Knights.

Although the research didn't establish a cause-and-effect relationship between the microbiome changes in immigrants and the immigrant obesity epidemic, it did show a correlation: greater westernization of the microbiome was associated with greater obesity.

Knights believes that this research has a lot to tell us about our health. "When you move to a new country, you pick up a new microbiome. And that's changing not just what species of microbes you have, but also what enzymes they carry, which may affect what kinds of food you can digest and how your diet interacts with your health," he says. "This might not always be a bad thing, but we do see that Westernization of the microbiome is associated with obesity in immigrants, so this could be an interesting avenue for future research into treatment of obesity, both in immigrants and potentially in the broader population."

Read more at Science Daily

Thirty years in the life of supernova 1987A

Shockwave of Supernova 1987A as it slammed into debris that ringed the original star before its demise.
Since it first appeared in the southern night sky on February 24th 1987, Supernova 1987A has been one of the most studied objects in the history of astronomy.

The supernova was the cataclysmic death of a blue supergiant star, some 168,000 light-years from Earth, in the Large Magellanic Cloud, a satellite galaxy of our own Milky Way Galaxy. It was the brightest supernova to appear in our skies since Kepler's Supernova in 1604, and the first such supernova since the invention of the telescope.

The brilliant new star was first spotted by two astronomers working at the Las Campanas Observatory in northern Chile the night of the 24th: the University of Toronto's Ian Shelton, and a telescope operator at the observatory, Oscar Duhalde.

Now, Yvette Cendes, a graduate student with the University of Toronto and the Leiden Observatory, has created a time-lapse showing the aftermath of the supernova over a 25-year period, from 1992 to 2017. The images show the shockwave expanding outward and slamming into debris that ringed the original star before its demise.

In an accompanying paper, published in the Astrophysical Journal on October 31st, Cendes and her colleagues add to the evidence that the expanding remnant is shaped not like a flat ring, such as Saturn's, but like a donut, a form known as a torus.

They also confirm that the shockwave has now picked up some one thousand kilometres per second in speed. The acceleration has occurred because the expanding torus has punched through the ring of debris.

From Science Daily

Nov 1, 2018

Gut bacteria may control movement

Fruit fly (Drosophila melanogaster) on a blade of grass.
A new study puts a fresh spin on what it means to "go with your gut." The findings, published in Nature, suggest that gut bacteria may control movement in fruit flies and identify the neurons involved in this response. The study was supported by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.

"This study provides additional evidence for a connection between the gut and the brain, and in particular outlines how gut bacteria may influence behavior, including movement," said Margaret Sutherland, Ph.D., program director at NINDS.

Researchers led by Sarkis K. Mazmanian, Ph.D., professor of microbiology at the California Institute of Technology in Pasadena, and graduate student Catherine E. Schretter, observed that germ-free flies, which did not carry bacteria, were hyperactive. For instance, they walked faster, over greater distances, and took shorter rests than flies that had normal levels of microbes. Dr. Mazmanian and his team investigated ways in which gut bacteria may affect behavior in fruit flies.

"Locomotion is important for a number of activities such as mating and searching for food. It turns out that gut bacteria may be critical for fundamental behaviors in animals," said Dr. Mazmanian.

Fruit flies carry between five and 20 different species of bacteria, and Dr. Mazmanian's team treated the germ-free animals with individual strains of those microbes. When the flies received Lactobacillus brevis, their movements slowed down to normal speed. L. brevis was one of only two species of bacteria that restored normal behavior in the germ-free flies.

Dr. Mazmanian's group also discovered that the molecule xylose isomerase (Xi), a protein that breaks down sugar and is found in L. brevis, may be critical to this process. Isolating the molecule and treating germ-free flies with it was sufficient to slow down the speedwalkers.

Additional experiments showed that Xi may regulate movement by fine-tuning levels of certain carbohydrates, such as trehalose, which is the main sugar found in flies and is similar to mammalian glucose. Flies that were given Xi had lower levels of trehalose than did untreated germ-free flies. When Xi-treated flies, which showed normal behavior, were given trehalose alone, they resumed fast movements suggesting that the sugar was able to reverse the effects of Xi.

Next, the researchers looked into the flies' nervous system to see what cells were involved in bacteria-directed movement. When Dr. Mazmanian's team turned on neurons that produce the chemical octopamine, that activation canceled out the effect of L. brevis on the germ-free flies. As a result, the flies, which had previously slowed down after receiving the bacterium or Xi, resumed their speedwalking behavior. Turning on octopamine-producing nerve cells in flies with normal levels of bacteria also caused them to move faster. However, activating neurons that produce other brain chemicals did not influence the flies' movements.

According to Dr. Mazmanian, Schretter and their colleagues, Xi may be monitoring the flies' metabolic state, including levels of nutrients, and then signaling to octopamine neurons whether they should turn on or off, resulting in changes in behavior.

Instead of octopamine, mammals produce a comparable chemical called noradrenaline, which has been shown to control movement.

"Gut bacteria may play a similar role in mammalian locomotion, and even in movement disorders such as Parkinson's disease," said Dr. Mazmanian.

Read more at Science Daily

Major corridor of Silk Road already home to high-mountain herders over 4,000 years ago

Iron Age rock art near the city of Osh shows the tall, beautiful horses that drove important Silk Road trade.
Using ancient proteins and DNA recovered from tiny pieces of animal bone, archaeologists at the Max Planck Institute for the Science of Human History (MPI-SHH) and the Institute of Archaeology and Ethnography (IAET) at the Russian Academy of Sciences-Siberia have discovered evidence that domestic animals -- cattle, sheep, and goat -- made their way into the high mountain corridors of southern Kyrgyzstan more than four millennia ago, as published in a study in PLOS ONE.

Long before the formal creation of the Silk Road -- a complex system of trade routes linking East and West Eurasia through its arid continental interior -- pastoral herders living in the mountains of Central Asia helped form new cultural and biological links across this region. However, in many of the most important channels of the Silk Road itself, including Kyrgyzstan's Alay Valley (a large mountain corridor linking northwest China with the oasis cities of Bukhara and Samarkand), very little is known about the lifeways of early people who lived there in the centuries and millennia preceding the Silk Road era.

In 2017, an international team of researchers, led by Dr. Svetlana Shnaider (IAET), Dr. Aida Abdykanova (American University of Central Asia), and Dr. William Taylor (MPI-SHH), identified a series of never-before-seen habitation sites along the mountain margins that form Kyrgyzstan's southern border with Tajikistan. Test excavations and survey at these sites produced archaeological animal bones that promised to shed light on how people used the Alay region in the past. When Taylor and colleagues analyzed the bones that had been recovered, however, they were so small and badly broken that researchers could no longer use their size and shape to identify which species they originally belonged to. "We were crushed," says Shnaider. "To get so close to understanding the early economy of one of the most important channels of the Silk Road -- and come up empty-handed -- was incredibly disheartening."

However, Taylor and his colleagues then applied a technique known as Zooarchaeology by Mass Spectrometry, or ZooMS. This method uses laser-based mass spectrometry to identify the peptide building blocks that make up collagen inside the bone itself -- peptides that differ across animal taxa, and produce unique "fingerprints" that can be used to identify otherwise unrecognizable pieces of bone. With this technique, Dr. Taylor and his colleagues discovered that people living in the Alay Valley began herding sheep, goat, and cattle by at least 4,300 years ago. Combining their work with ancient DNA research at France's University of Toulouse, they also found that in later centuries, as Silk Road trade flourished across the region, transport animals like domestic horses and Bactrian camels became increasingly significant in Alay. Their results are published in PLOS ONE.
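
The identification step in ZooMS boils down to comparing the peptide masses measured from a bone fragment against reference fingerprints for candidate taxa. The sketch below shows only that matching logic; the marker masses are placeholder numbers, not real ZooMS markers, and real analyses involve far more careful peak calling and calibration.

```python
# Toy ZooMS-style matcher: all m/z values here are invented placeholders.
REFERENCE_MARKERS = {
    "sheep":  [1180.6, 1550.8, 2883.3],
    "goat":   [1180.6, 1550.8, 3093.4],
    "cattle": [1192.6, 1580.8, 3033.4],
}

def best_taxon_match(observed_peaks, tolerance=0.5):
    """Return the taxon whose reference markers match the most observed peaks."""
    def hits(markers):
        return sum(any(abs(m - o) <= tolerance for o in observed_peaks) for m in markers)
    return max(REFERENCE_MARKERS, key=lambda taxon: hits(REFERENCE_MARKERS[taxon]))

observed = [1180.7, 1550.9, 3093.2, 2001.0]   # peaks from one bone fragment
print(best_taxon_match(observed))             # -> "goat" under these toy values
```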

Read more at Science Daily

A comprehensive 'parts list' of the brain built from its components, the cells

Stock image of rendering of human brain.
Neuroscientists at the Allen Institute have moved one step closer to understanding the complete list of cell types in the brain. In the most comprehensive study of its kind to date, published today on the cover of the journal Nature, the researchers sorted cells from the cortex, the outermost shell and the cognitive center of the brain, into 133 different "cell types" based on the genes the cells switch on and off.

The classification, building off of 15 years of work at the Allen Institute, uncovered many rare brain cell types and laid the groundwork for revealing new functions of two of those rare neuron types. The study captured cell-by-cell information from parts of the mouse cortex that are involved in vision and movement.

Scientists are very far from understanding how the mammalian brain does what it does. They don't even entirely know what it's made of -- the different types of brain cells. What neuroscientists are up against in their work is akin to trying to recreate a delicious, complex meal, not only without knowing the ingredients and recipe that went into making it, but without even having any way to describe many of those ingredients.

In the new study, the researchers came up with a way to describe those ingredients by analyzing the genes from nearly 24,000 of the mouse's 100 million brain cells, creating a list of 133 cell types. Because the study captured the activity of tens of thousands of genes from so many cells and is nearly complete for the vision and motor regions it covered, the researchers said the other regions of the cortex will likely follow similar rules of organization.
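
The underlying idea is to group cells by which genes they switch on and off. A bare-bones sketch of that kind of pipeline is shown below on synthetic data; the Allen Institute's actual analysis used far more sophisticated, iterative clustering, and every number here (cell count, gene count, number of clusters) is made up for illustration.

```python
# Toy single-cell "cell typing" pipeline: log-normalize expression, reduce
# dimensionality, then cluster. Data and parameters are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(1000, 500)).astype(float)   # 1,000 cells x 500 genes

log_counts = np.log1p(counts)                                # tame the count distribution
pcs = PCA(n_components=30).fit_transform(log_counts)         # compress to 30 components
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(pcs)

print("cells assigned to each putative type:", np.bincount(labels))
```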

"This is by far the most comprehensive, most in-depth analysis of any regions of the cortex in any species. We can now say that we understand the distribution rules for its parts list," said Hongkui Zeng, Ph.D., Executive Director of Structured Science at the Allen Institute for Brain Science, a division of the Allen Institute, and senior author on the study. "With all these data in hand, we can start to learn new principles of how the brain is organized -- and ultimately, how it works."

In an accompanying paper, also published today in Nature and led by researchers at the Janelia Research Campus of the Howard Hughes Medical Institute, the neuroscientists used the gene-based classification and additional information about neuron shape to uncover two new types of neurons involved in movement. The researchers then measured the activity of these different neurons in moving mice, and they found that one type is involved in planning movements, whereas the other type works to trigger movement itself.

"Gene expression is a very efficient way of getting at cell types, and that's really what the Allen Institute effort is at the core," said Janelia's Karel Svoboda, Ph.D., who led the motor neuron study along with Michael Economo, Ph.D., and is also a co-author on the cell types study. "The motor cortex study is the first salvo in a different type of cell type classification, where gene expression information, structural information and measurements of neural activity are brought together to make statements about the function of specific cell types in the brain."

Sifting through 24,000 cells to understand the brain

The mammalian cortex is considered the main brain region controlling cognitive function, and is far larger in humans than in most other mammals. Many researchers believe understanding the makeup of this complex but regularly ordered region of the brain will help us understand what makes mammal brains special -- or what makes our brains uniquely human. The Allen Institute researchers are also working to define the "ingredients list" for the rest of the mouse cortex, although they expect that many of the rules of organization they've identified in this study will hold true across the entire region. And knowledge gained from the mouse cortex forms the foundation for understanding the human cortex through comparative studies.

Although there are many ways of understanding what makes one cell type different from another -- its shape, how it sends electric signals, and how those signals translate into the brain's many functions -- only gene expression lends itself to studying tens of thousands of cells, one cell at a time, in a comprehensive way.

"It's only through recent advances in technology that we can measure the activity of so many genes in a single cell," said Bosiljka Tasic, Ph.D., Associate Director of Molecular Genetics at the Allen Institute for Brain Science and first author on the cell types study. "Ultimately, we are also working to study not only gene expression, but many of the cells' other properties -- including their function, which is the most elusive, the most difficult to define."

The Allen Institute-led cell type study was built off a similar, smaller study completed in 2016, which sifted through about 1,600 cells from the mouse's visual processing part of the brain. Scaling up the number of cells in their analysis by nearly 15 times and expanding to a second region of the brain cortex allowed the researchers to create a more comprehensive and refined cell type catalog.

Read more at Science Daily

Earliest recorded lead exposure in 250,000-year-old Neanderthal teeth

Lead in periodic table
Using evidence found in teeth from two Neanderthals from southeastern France, researchers from the Department of Environmental Medicine and Public Health at the Icahn School of Medicine at Mount Sinai report the earliest evidence of lead exposure in an extinct human-like species from 250,000 years ago.

This study is the first to report lead exposure in Neanderthals, and the first to use teeth to reconstruct the climate during, and the timing of, key developmental events, including nursing and weaning duration -- key determinants of population growth.

Results of the study will be published online in Science Advances, a journal published by the American Association for the Advancement of Science, at 2PM EST on October 31st.

The international research team of biological anthropologists, archaeologists, earth scientists, and environmental exposure experts measured barium, lead and oxygen in the teeth for evidence of nursing, weaning, chemical exposure, and climate variations across the growth rings in the teeth. Elemental analysis of the teeth revealed short-term exposure to lead during cooler seasons, possibly from ingestion of contaminated food or water, or inhalation from fires containing lead.

During fetal and childhood development, a new tooth layer is formed every day. As each of these 'growth rings' forms, some of the many chemicals circulating in the body are captured in each layer, which provides a chronological record of exposure. The research team used lasers to sample these layers and reconstruct the past exposures along incremental markings, similar to using growth rings on a tree to determine the tree's growth history.
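
In practice, each laser-ablation spot along the tooth can be assigned an approximate age by dividing its distance from the neonatal (birth) line by a daily growth rate, turning a spatial series of chemical measurements into a timeline. The numbers below (spot positions, growth rate, flag threshold) are invented purely to illustrate that conversion.

```python
# Toy conversion of growth-ring measurements into an exposure timeline.
distances_um = [5, 40, 80, 120, 160, 200]        # spot distance from the birth line (micrometres)
pb_ca_signal = [0.1, 0.1, 0.9, 1.1, 0.2, 0.1]    # hypothetical lead-to-calcium signal at each spot
daily_growth_um = 4.0                            # assumed enamel laid down per day

for dist, pb in zip(distances_um, pb_ca_signal):
    age_days = dist / daily_growth_um
    flag = "  <-- short-lived lead exposure" if pb > 0.5 else ""
    print(f"~day {age_days:5.1f}: Pb/Ca = {pb:.1f}{flag}")
```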

This evidence allowed the team to relate the individuals' development to ancient seasons, revealing that one Neanderthal was born in the spring, and that both Neanderthal children were more likely to be sick during colder periods. The findings are consistent with mammals' pattern of bearing offspring during periods of increased food availability. The nursing duration of 2.5 years in one individual is similar to the average age of weaning in preindustrial human populations. The researchers note they can't make broad generalizations about Neanderthals due to the small study size, but that their research methods offer a new approach to answering questions about long extinct species.

"Traditionally, people thought lead exposure occurred in populations only after industrialization, but these results show it happened prehistorically, before lead had been widely released into the environment," said one of the study's lead authors, Christine Austin, PhD, Assistant Professor in the Department of Environmental Medicine and Public Health at the Icahn School of Medicine at Mount Sinai. "Our team plans to analyze more teeth from our ancestors and investigate how lead exposures may have affected their health and how that may relate to how our bodies respond to lead today."

"Dietary patterns in our early life have far reaching consequences for our health, and by understanding how breastfeeding evolved we can help guide the current population on what is good breastfeeding practice," said Manish Arora PhD, BDS, MPH, Professor and Vice Chairman Department of Environmental Medicine and Public Health at the Icahn School of Medicine. "Our research team is working on applying these techniques in contemporary populations to study how breastfeeding alters health trajectories including those of neurodevelopment, cardiac health and other high priority health outcomes."

"This study reports a major breakthrough in the reconstruction of ancient climates, a significant factor in human evolution, as temperature and precipitation cycles influenced the landscapes and food resources our ancestors relied on," said the study's lead author Tanya Smith, PhD, Associate Professor at Griffith University.

Read more at Science Daily

Oct 31, 2018

Changes to RNA aid the process of learning and memory

RNA changes and learning: Induced memory and learning deficits in mice have been found to be reversible. When the researchers injected knockout mice with a virus carrying Ythdf1, their performance on memory and learning tasks improved dramatically.
RNA carries pieces of instructions encoded in DNA to coordinate the production of proteins that will carry out the work to be done in a cell. But the process isn't always straightforward. Chemical modifications to DNA or RNA can alter the way genes are expressed without changing the actual genetic sequences. These epigenetic or epitranscriptome changes can affect many biological processes such as immune system response, nervous system development, various human cancers and even obesity.

Most of these changes happen through methylation, a process in which chemical molecules called methyl groups are added to a DNA or RNA molecule. Proteins that add a methyl group are known as "writers," and proteins that can remove the methyl groups are "erasers." For the methylation to have a biological effect, there must be "reader" proteins that can identify the change and bind to it.

The most common modification on messenger RNA in mammals is called N6-methyladenosine (m6A). It is widespread in the nervous system. It helps coordinate several neural functions, working through reader proteins in the YTH family of proteins.

In a new study published in Nature, scientists from the University of Chicago show how Ythdf1, a member of the YTH family that specifically recognizes m6A, plays an important role in the process of learning and memory formation. Using CRISPR/Cas9 gene editing tools to knock out Ythdf1 in mice, they demonstrated how it promotes translation of m6A-modified messenger RNA (mRNA) in response to learning activities and direct nerve cell stimulus.

"This study opens the door to our future understanding of learning and memory," said Chuan He, PhD, the John T. Wilson Distinguished Service Professor of Chemistry, Biochemistry and Molecular Biology at UChicago and one of the senior authors of the study. "We saw differences in long-term memory and learning between the normal and knockout mice, demonstrating that the m6A methylation plays a critical role through Ythdf1."

In 2015, He published a study in Cell showing how Ythdf1 recognizes m6A-modified mRNAs and promotes their translation to proteins. The new study further demonstrates how this translation increases specifically in response to nervous system stimulation.

Hailing Shi, a graduate student in He's lab, led the new study, working with colleagues from Shanghai Tech University in China and the University of Pennsylvania. Mice express more Ythdf1 mRNAs in the hippocampus, part of the brain crucial to spatial learning and memory. So, the researchers conducted several experiments with both normal mice and mice without Ythdf1 to test the effects on their ability to learn from experiences.

In one test of spatial memory, called the Morris water maze, they used a water tank with a submerged platform a mouse could stand on to avoid swimming. Mice got several tries to learn where the platform was located based on visual cues in a testing room. Then the platform was removed. The normal mice did a better job remembering where the platform used to be than the knockout mice.

The researchers also tested contextual and auditory fear memory in the different groups of mice by administering electrical shocks in combination with certain sounds in specific settings. Again, the normal mice demonstrated better contextual memory than knockout mice. They showed a fear response after being placed in the same setting again without the associated sounds, but not after hearing the sounds in a different setting.

The memory and learning deficits were reversible, however. When the researchers injected knockout mice with a virus carrying Ythdf1, their performance on memory and learning tasks improved dramatically.

The researchers also tested the response of cultured mouse neurons directly in the lab. When the normal cells were stimulated, they increased new protein production, compared to much less activity in Ythdf1 knockout cells.

"It's really an exciting finding to show how the protein can respond to a neuronal stimulus which could contribute to controlled translation," Shi said.

"It's a stimulation-dependent upregulation of translation," He added. "It makes sense because you don't want to fire up your neurons constantly, only when you have a stimulation."

While the current study identifies one important function for YTHDF1, there may be many other functions involved with other biological processes.

Read more at Science Daily

Astronomers discover the giant that shaped the early days of our Milky Way

N-body simulation of the merger of a Milky Way-like galaxy (with its stars in blue) and a smaller disky galaxy resembling the Small Magellanic Cloud in mass (with its stars in red). At the beginning, the two galaxies are clearly separated, but gravity pulls them together and this leads to the full accretion of the smaller one. Distinguishing the accreted stars from the rest is not easy by the final stage, but it is possible using the motions of the stars and their chemical composition.
Some ten billion years ago, the Milky Way merged with a large galaxy. The stars from this partner, named Gaia-Enceladus, make up most of the Milky Way's halo and also shaped its thick disk, giving it its inflated form. A description of this mega-merger, discovered by an international team led by University of Groningen astronomer Amina Helmi, is now published in the scientific journal Nature.

Large galaxies like our Milky Way are the result of mergers of smaller galaxies. An outstanding question is whether a galaxy like the Milky Way is the product of many small mergers or of a few large ones. The University of Groningen's Professor of Astronomy, Amina Helmi, has spent most of her career looking for 'fossils' in our Milky Way which might offer some hints as to its evolution. She uses the chemical composition, the position and the trajectory of stars in the halo to deduce their history and thereby to identify the mergers which created the early Milky Way.

Gaia's second data release

The second data release from the Gaia satellite mission, issued last April, provided Professor Helmi with data on around 1.7 billion stars. Helmi has been involved in the development of the Gaia mission for some twenty years and was part of the data validation team on the second data release. She has now used the data to look for traces of mergers in the halo: "We expected stars from fused satellites in the halo. What we didn't expect to find was that most halo stars actually have a shared origin in one very large merger."

Thick disk

This is indeed what she found. The chemical signature of many halo stars was clearly different from the 'native' Milky Way stars. "And they are a fairly homogenous group, which indicates they share a common origin." By plotting both trajectory and chemical signature, the 'invaders' stood out clearly. Helmi: "The youngest stars from Gaia-Enceladus are actually younger than the native Milky Way stars in what is now the thick disk region. This means that the progenitor of this thick disk was already present when the fusion happened, and Gaia-Enceladus, because of its large size, shook it and puffed it up."

In a previous paper, Helmi had already described a huge 'blob' of stars sharing a common origin. Now, she shows that stars from this blob in the halo are the debris from the merging of the Milky Way with a galaxy which was slightly more massive than the Small Magellanic Cloud, some ten billion years ago. The galaxy is called Gaia-Enceladus, after the Giant Enceladus who in Greek mythology was born of Gaia (the Earth goddess) and Uranus (the Sky god).

Read more at Science Daily

Dinosaurs put all colored birds' eggs in one basket, evolutionarily speaking

An assortment of paleognath and neognath bird eggs and a fossil theropod egg (on the right).
A new study says the colors found in modern birds' eggs did not evolve independently, as previously thought, but were instead inherited from dinosaurs.

According to researchers at Yale, the American Museum of Natural History, and the University of Bonn, birds inherited their egg color from non-avian dinosaur ancestors that laid eggs in fully or partially open nests. The researchers' findings appear Oct. 31 in the online edition of the journal Nature.

"This completely changes our understanding of how egg colors evolved," said the study's lead author, Yale paleontologist Jasmina Wiemann. "For two centuries, ornithologists assumed that egg color appeared in modern birds' eggs multiple times, independently."

The egg colors of birds reflect characteristic preferences in nesting environments and brooding behaviors. Modern birds use only two pigments, red and blue, to create all of the various egg colors, spots, and speckles.

Wiemann and her colleagues analyzed 18 fossil dinosaur eggshell samples from around the world, using non-destructive laser microspectroscopy to test for the presence of the two eggshell pigments. They found them in eggshells belonging to Eumaniraptoran dinosaurs, which include small, carnivorous dinosaurs such as Velociraptor.

"We infer that egg color co-evolved with open nesting habits in dinosaurs," Wiemann said. "Once dinosaurs started to build open nests, exposure of the eggs to visually hunting predators and even nesting parasites favored the evolution of camouflaging egg colors, and individually recognizable patterns of spots and speckles."

Co-author Mark Norell, the Macaulay Curator of Paleontology at the American Museum of Natural History, noted that "Colored eggs have been considered a unique bird characteristic for over a century. Like feathers and wishbones, we now know that egg color evolved in their dinosaur predecessors long before birds appeared."

From Science Daily

A wilderness 'horror story'

Only 23 percent of the world's landmass can now be considered wilderness like this rainforest in Ecuador.
Producing the first comprehensive fine-scale map of the world's remaining marine and terrestrial wild places, conservation scientists writing in the journal Nature say that just 23 percent of the world's landmass can now be considered wilderness, with the rest -- excluding Antarctica -- lost to the direct effects of human activities.

These findings are particularly troubling because numerous recent studies reveal that Earth's remaining wilderness areas are increasingly important buffers against the effects of climate change and other human impacts. The authors note that two upcoming gatherings of key decision makers will be crucial to stopping the current rate of loss.

Said the paper's lead author James Watson of WCS and the University of Queensland: "These results are nothing short of a horror story for the planet's last wild places. The loss of wilderness must be treated in the same way we treat extinction. There is no reversing once the first cut enters. The decision is forever."

The authors define wilderness areas as places free of industrial-level activity, as measured by the marine and terrestrial human footprint; local communities can still live, hunt and fish within them.

Various analyses reveal that wilderness areas provide increasingly important refuges for species that are declining in landscapes dominated by people. In the seas, they are the last regions that still contain viable populations of top predators, such as tuna, marlins and sharks.

Wilderness areas are also places where enormous amounts of carbon are stored and sequestered, with intact ecosystems being at least twice as important as comparable degraded habitats when it comes to carbon mitigation.

The loss of wilderness is not just a biodiversity conservation and climate issue. Many wildernesses are home to millions of indigenous people who rely on them for maintaining their long bio-cultural connections to land and sea. Their loss is eroding many cultures around the world.

As bleak as these findings are, the authors say there is still a chance that Earth's remaining wilderness can be protected. Incredibly, just 20 nations hold 94 percent of the world's marine and terrestrial wilderness areas (excluding Antarctica and the High Seas), with five mega-wilderness nations (Russia, Canada, Australia, the United States and Brazil) containing 70 percent. The authors argue that these nations have an enormous role to play in securing the last of the wild.

Said John Robinson, WCS Executive Vice President for Global Conservation at WCS and a co-author of the paper: "Wilderness will only be secured globally if these nations take a leadership role. Right now, across the board, this type of leadership is missing. Already we have lost so much. We must grasp these opportunities to secure the wilderness before it disappears forever."

The authors say the time is right to change international policy frameworks to act on wilderness conservation, noting two upcoming international meetings of particular importance:

At the 14th meeting of the Conference of the Parties to the Convention on Biological Diversity (CBD), held from November 17-29, signatory governments, intergovernmental organizations such as the International Union for Conservation of Nature (IUCN), international non-governmental organizations, and the scientific community will meet to work towards a strategic plan for the protection of biodiversity after 2020. The authors of the Nature paper and their organizations urge participants at the meeting to include a mandated target for wilderness conservation. In their view, a bold yet achievable target is to define and conserve 100 percent of all remaining intact ecosystems.

Read more at Science Daily

Giant flightless birds were nocturnal and possibly blind

Giant nocturnal elephant birds are shown foraging in the ancient forests of Madagascar at night.
If you encountered an elephant bird today, it would be hard to miss. Measuring in at over 10 feet tall, the extinct avian is the largest bird known to science. However, while you looked up in awe, it's likely that the big bird would not be looking back.

According to brain reconstruction research led by The University of Texas at Austin, the part of the elephant bird brain that processed vision was tiny, a trait that indicates they were nocturnal and possibly blind. The findings were published Oct. 31 in the journal Proceedings of the Royal Society B.

A nocturnal lifestyle is a trait shared by the elephant bird's closest living relative, the kiwi -- a practically blind, chicken-size denizen of New Zealand -- and a clue that is helping scientists learn more about the elephant bird's behavior and habitat, said Christopher Torres, a Ph.D. candidate who led the research.

"Studying brain shape is a really useful way of connecting ecology -- the relationship between the bird and the environment -- and anatomy," Torres said. "Discoveries like these give us tremendous insights into the lives of these bizarre and poorly understood birds."

Julia Clarke, a professor at the UT Jackson School of Geosciences and Torres' Ph.D. adviser, co-authored the study. Torres is a student in UT's Department of Integrative Biology in the College of Natural Sciences.

Elephant birds were large, flightless and lived in what is now Madagascar until a mixture of habitat loss and potential human meddling led to their demise between 500 and 1,000 years ago.

"Humans lived alongside, and even hunted, elephant birds for thousands of years," Torres said. "But we still know practically nothing about their lives. We don't even really know exactly when or why they went extinct."

Scientists had previously assumed that elephant birds were similar to other big, flightless birds, like emus and ostriches -- both of which are active during the day and have good eyesight. But Torres and Clarke revealed that elephant birds had distinctly different lifestyles through reconstructions of their brains.

Bird skulls wrap tightly around their brains, with the turns and curves of the bone corresponding to brain structures. The researchers studied the skulls of two species of elephant birds. By using CT-imaging data of the two elephant bird skulls, the researchers were able to create digital brain reconstructions called endocasts. In addition to the elephant bird skulls, the researchers also created endocasts for close relatives of the elephant bird, both living and extinct.

In both elephant bird skulls, the optic lobe -- a bundle of brain nerves that controls eyesight -- was very small, with the structure almost absent in the larger species. The lobe had the most in common with that of a kiwi, which Torres said came as a "total shock" because of the kiwi's poor vision and nocturnal behavior.

"No one has ever suspected that elephant birds were nocturnal," Torres said. "The few studies that speculated on what their behavior was like explicitly assumed they were active during the day."

Andrew Iwaniuk, an associate professor at the University of Lethbridge and an expert on brain evolution in birds who was not involved with the research, said that he had a similar reaction to the findings.

"I was surprised that the visual system is so small in a bird this big," he said. "For a bird this large to evolve a nocturnal lifestyle is truly bizarre and speaks to an ecology unlike that of their closest relatives or any other bird species that we know of."

In addition to vision, the endocasts' rendering of the olfactory bulb -- the part of the brain that processes the sense of smell -- helped shed light on the habitats where elephant birds lived. The larger of the two species had a large olfactory bulb, a trait associated with forest dwelling. In contrast, the smaller species had a smaller olfactory bulb, possibly indicating that it lived in grasslands. The smaller species also appears to have had somewhat keener vision, which means it may have been more active at dusk than during the pitch black of night.

"Details like these not only tell us about what the lives of elephant birds were like, but also what life in general was like on Madagascar in the distant past," Clarke said. "As recently as 500 years ago, very nearly blind, giant flightless birds were crashing around the forests of Madagascar in the dark. No one ever expected that."

Read more at Science Daily

Oct 30, 2018

Scientists refine the search for dark matter

Researchers from Lund University in Sweden, among others, have developed a more effective technique in the search for clues about dark matter in the universe. They can now analyse much larger amounts of the data generated at CERN.

At the CERN research facility, a long series of experiments is underway in which protons collide in the LHC accelerator at almost the speed of light. The amount of data is constantly increasing as the accelerator's capacity improves, making the vast quantities produced ever harder to process and store. This is why the researchers continuously evaluate which data to examine more closely.

"If we are not careful, we could end up discarding data that contains clues to completely new particles of which we are not yet aware, such as particles that form dark matter", explains Caterina Doglioni, a particle physicist at Lund University and a member of the ATLAS experiment at CERN.

She is one of the researchers behind a recent study focusing on how to better utilise CERN's enormous amounts of data. Instead of recording all the information from each collision and analysing it at a later date, much of the data analysis is done in real time, so that only a much smaller summary of each event needs to be retained. This technique, which has been employed by other LHC experiments as well, allows researchers to record and store many more events that could contain traces of new particles.
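
The pay-off is easy to see with a back-of-the-envelope calculation. The short sketch below is purely illustrative -- it is not ATLAS software, and the event sizes and storage budget are invented round numbers -- but it shows why keeping only a compact per-event summary, rather than the full detector readout, lets the same storage budget hold far more events.

    # Illustrative sketch only -- invented numbers, not ATLAS code.
    # Compare how many events fit in a fixed storage budget when keeping
    # the full raw readout versus a compact real-time summary per event.

    RAW_EVENT_KB = 1000       # assumed size of one full raw event
    SUMMARY_EVENT_KB = 5      # assumed size of one reduced event summary
    STORAGE_BUDGET_GB = 100   # assumed storage budget

    budget_kb = STORAGE_BUDGET_GB * 1024 * 1024

    full_events = budget_kb // RAW_EVENT_KB
    summary_events = budget_kb // SUMMARY_EVENT_KB

    print(f"Full events stored:    {full_events:,}")
    print(f"Summary events stored: {summary_events:,}")
    print(f"Gain factor:           {summary_events // full_events}x")

With the numbers assumed here, roughly 200 times as many summarised events fit in the same space, which is the trade-off the technique exploits.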

The hope is to find signs of hitherto unknown particles that could be carriers of forces that could create a connection between visible and dark matter, according to Doglioni.

"These new particles, which we call "mediator particles" can disintegrate into extremely short-lived pairs of quarks, i.e. the very building blocks of the protons and neutrons in atoms. When quarks disintegrate, a type of particle shower is formed that we can actually detect with our instruments", says Caterina Doglioni.

The research community has long been searching for answers about the elusive dark matter that makes up a large part of our universe. Only five per cent of the universe is matter that we are currently able to perceive and measure. The remaining 95 per cent is unexplored and referred to as dark matter and dark energy.

Among other things, this assumption is based on the fact that galaxies rotate as though there were significantly more matter than that which we can see. Dark matter is reported to make up 27 per cent of the universe, while 68 per cent is dark energy - considered to be what causes the universe to constantly accelerate in its ongoing expansion. Researchers have declared October 31st "Dark Matter Day", a day with many different events dedicated to dark matter all over the world.

"We know that dark matter exists. Normally it passes through our measurement instruments, but cannot be registered, but in the case of our research we hoped to see the products of particles connected to it. ", says Caterina Doglioni.

Read more at Science Daily

Alterations to seabed raise fears for future

Maps showing areas of the seafloor which have been affected, to varying degrees, by the increasing acidification of the oceans as a result of human activities.
The ocean floor as we know it is dissolving rapidly as a result of human activity.

Normally the deep sea bottom is a chalky white. It's composed, to a large extent, of the mineral calcite (CaCO3), formed from the skeletons and shells of many planktonic organisms and corals. The seafloor plays a crucial role in controlling the degree of ocean acidification: the dissolution of calcite neutralizes the acidity of the CO2 and, in the process, prevents seawater from becoming too acidic. But these days, at least in certain hotspots such as the northern Atlantic and the Southern Ocean, the ocean's chalky bed is becoming more of a murky brown. As a result of human activities, the level of CO2 in the water is so high, and the water is so acidic, that the calcite is simply being dissolved.
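
The buffering the authors describe is standard carbonate chemistry (the equation is added here for context rather than quoted from the study): dissolving calcite consumes dissolved CO2 and converts it into bicarbonate, which limits further acidification.

    $\mathrm{CaCO_3 + CO_2 + H_2O \;\rightarrow\; Ca^{2+} + 2\,HCO_3^{-}}$

When CO2 arrives faster than this reaction can keep up, the calcite itself is eaten away, which is the browning described above.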

The McGill-led research team, which published its results this week in PNAS, believes that what it is seeing today is only a foretaste of how the ocean floor will most likely be affected in the future.

Long-lasting repercussions

"Because it takes decades or even centuries for CO2 to drop down to the bottom of the ocean, almost all the CO2 created through human activity is still at the surface. But in the future, it will invade the deep-ocean, spread above the ocean floor and cause even more calcite particles at the seafloor to dissolve," says lead author Olivier Sulpis who is working on his PhD in McGill's Dept. of Earth and Planetary Sciences. "The rate at which CO2 is currently being emitted into the atmosphere is exceptionally high in Earth's history, faster than at any period since at least the extinction of the dinosaurs. And at a much faster rate than the natural mechanisms in the ocean can deal with, so it raises worries about the levels of ocean acidification in future."

In future work, the researchers plan to look at how this deep ocean bed dissolution is likely to evolve over the coming centuries, under various potential future CO2 emission scenarios. They believe that it is critical for scientists and policy makers to develop accurate estimates of how marine ecosystems will be affected, over the long-term, by acidification caused by humans.

How the work was done

Because it is difficult and expensive to obtain measurements in the deep-sea, the researchers created a set of seafloor-like microenvironments in the laboratory, reproducing abyssal bottom currents, seawater temperature and chemistry as well as sediment compositions. These experiments helped them to understand what controls the dissolution of calcite in marine sediments and allowed them to quantify precisely its dissolution rate as a function of various environmental variables. By comparing pre-industrial and modern seafloor dissolution rates, they were able to extract the anthropogenic fraction of the total dissolution rates.
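
One plausible way to read that comparison -- offered here as an illustration, since the paper's exact definition may differ -- is as the share of today's dissolution rate that is not explained by the pre-industrial rate:

    $f_{\mathrm{anthropogenic}} = \dfrac{r_{\mathrm{modern}} - r_{\mathrm{preindustrial}}}{r_{\mathrm{modern}}}$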

The speed estimates for ocean-bottom currents came from a high-resolution ocean model developed by University of Michigan physical oceanographer Brian Arbic and a former postdoctoral fellow in his laboratory, David Trossman, who is now a research associate at the University of Texas-Austin.

"When David and I developed these simulations, applications to the dissolution of geological material at the bottom of the oceans were far from our minds. It just goes to show you that scientific research can sometimes take unexpected detours and pay unexpected dividends," said Arbic, an associate professor in the University of Michigan Department of Earth and Environmental Sciences.

Trossman adds: "Just as climate change isn't just about polar bears, ocean acidification isn't just about coral reefs. Our study shows that the effects of human activities have become evident all the way down to the seafloor in many regions, and the resulting increased acidification in these regions may impact our ability to understand Earth's climate history."

Read more at Science Daily

Interior northwest Nez Perce used tobacco long before European contact

Washington State University researchers David Gang, left, and Shannon Tushingham have found tobacco use among the Nez Perce goes back centuries, with nicotine in pipes 1,200 years old creating "the longest continuous biomolecular record of ancient tobacco smoking from a single region anywhere in the world."
Washington State University researchers have determined that the Nez Perce grew and smoked tobacco at least 1,200 years ago, long before the arrival of traders and settlers from the eastern United States. Their finding upends a long-held view that indigenous people in this area of the interior Pacific Northwest smoked only kinnikinnick or bearberry before traders brought tobacco starting around 1790.

Shannon Tushingham, a WSU assistant professor and director of its Museum of Anthropology, made the discovery after teaming up with David Gang, a professor in the Institute of Biological Chemistry, to analyze pipes and pipe fragments in the museum's collection.

"Usually in archaeology we just find little pieces of artifacts, things that you might not think much of," she said. "But the information that we can extract from them on a molecular level is phenomenal."

Indeed, writing in the Proceedings of the National Academy of Sciences, the researchers say their dating of various materials reveals "the longest continuous biomolecular record of ancient tobacco smoking from a single region anywhere in the world."

Tushingham first became interested in the subject when, while excavating plank houses in far northern California for her dissertation, she came across two soapstone pipes.

"I just thought, 'Wouldn't it be interesting to know what people were smoking?'" she said. "Then I started looking at the different plants and it wasn't just tobacco. People smoked lots of different plants. I realized it was an open question whether people had smoked tobacco in many places in North America."

Indigenous tobacco is scarce in the cool climate of the northwest. Coyote tobacco, or Nicotiana attenuata, is found mostly on sandy river bars, while the natural range of N. quadrivalvis lies south of southwestern Oregon.

Meanwhile, the more potent dried trade tobacco was easy to transport in bundles, or "twists," and Hudson's Bay Company explorers, fur traders and the Lewis and Clark expedition found an eager audience for it as they came through the region in the 1700s and 1800s.

"This occurred so rapidly and so early in the historic record that a complete understanding of in situ pre-contact smoking practices has been obscured," Tushingham and Gang write in their paper.

In the 1930s, anthropologist Alfred Kroeber oversaw a survey of more than 200 tribes and bands west of the Rocky Mountains. In one of the ensuing monographs, "Salt, Dogs, Tobacco," he reported that the smoking of non-tobacco products was "more universal," with planting confined to a "long irregular area" from the Oregon coast into south-central California. An accompanying map, however, shows three spots in the Columbia River basin where tobacco could have been mixed with kinnikinnick.

Working with Nez Perce tribal leaders, Tushingham and Gang analyzed a dozen pipes and fragments from three sites on the Snake River. Gang said he could use a solvent to get the substance from a pipe and analyze it using mass spectrometry. That left the pipes intact.

The technique extracts molecular amounts of residue on the surface and inside of the pipes, Gang said. "We don't want to destroy them. We don't want to damage them. We had one pipe that was 5,000 years old that we were really worried about that was sandstone."

Results were inconclusive, but the pipe was fine.

The researchers did detect nicotine in pipes from both after and well before Euro-American contact. None appeared to contain arbutin, a compound associated with kinnikinnick.

Because tobacco in the interior northwest needed to be planted, Tushingham said their finding offers a new view of native interactions with the landscape. Indigenous people have often been thought of as "passive consumers of the environment," yet they managed camas and even grew clams on the coast, she said.

"I think it's a very reasonable proposition that people were cultivating tobacco," Tushingham said. "This is just another sign of the sophistication of cultures in this area and how they managed plants and animals."

The researchers hope that their findings will inform native smoking-cessation programs, acknowledging the deep cultural role of tobacco while addressing health problems.

"If we know there's this eons-long use of psychoactive plants, doesn't that tell you something about human physiology, human health?" asked Tushingham. "Isn't that important information to know in terms of what we would do for treating people today, if we know more about the evolutionary history of this powerful plant and its long history of use by people?"

Read more at Science Daily

Synthetic microorganisms allow scientists to study ancient evolutionary mysteries

A genetically modified yeast containing an endosymbiotic bacterium.
Scientists at Scripps Research and their collaborators have created microorganisms that may recapitulate key features of organisms thought to have lived billions of years ago, allowing them to explore questions about how life evolved from inanimate molecules to single-celled organisms to the complex, multicellular lifeforms we see today.

By studying one of these engineered organisms -- a bacterium whose genome consists of both ribonucleic acid (RNA) and deoxyribonucleic acid (DNA) -- the scientists hope to shed light on the early evolution of genetic material, including the theorized transition from a world where most life relied solely on the genetic molecule RNA to one where DNA serves as the primary storehouse of genetic information.

Using a second engineered organism, a genetically modified yeast containing an endosymbiotic bacterium, they hope to better understand the origins of cellular power plants called mitochondria. Mitochondria provide essential energy for the cells of eukaryotes, a broad group of organisms -- including humans -- that possess complex, nucleus-containing cells.

The researchers report engineering the microbes in two papers, one published October 29, 2018 in the Proceedings of the National Academy of Sciences (PNAS) and another published August 30, 2018 in the Journal of the American Chemical Society (JACS).

"These engineered organisms will allow us to probe two key theories about major milestones in the evolution of living organisms-the transition from the RNA world to the DNA world and the transition from prokaryotes to eukaryotes with mitochondria," says Peter Schultz, PhD, senior author on the papers and president of Scripps Research. "Access to readily manipulated laboratory models enables us to seek answers to questions about early evolution that were previously intractable."

The origins of life on Earth have been a human fascination for millennia. Scientists have traced the arc of life back several billion years and concluded that the simplest forms of life emerged from Earth's primordial chemical soup and subsequently evolved over the eons into organisms of greater and greater complexity. A monumental leap came with the emergence of DNA, a molecule that stores all of the information required to replicate life and directs cellular machinery to do its bidding primarily by generating RNA, which in turn directs the synthesis of proteins, the molecular workhorses in cells.

In the 1960s, Carl Woese and Leslie Orgel, along with DNA pioneer Francis Crick, proposed that before DNA, organisms relied on RNA -- a molecule similar to but far less stable than DNA, and one that can also catalyze chemical reactions the way proteins do -- to carry genetic information. "In science class, students learn that DNA leads to RNA, which in turn leads to proteins -- that's a central dogma of biology -- but the RNA world hypothesis turns that on its head," says Angad Mehta, PhD, first author of the new papers and a postdoctoral research associate at Scripps Research. "For the RNA world hypothesis to be true, you have to somehow get from RNA to a DNA genome, yet how that might have happened is still a very big question among scientists."

One possibility is that the transition proceeded through a kind of microbial missing link, a replicating organism that stored genetic information as RNA. For the JACS study, the Scripps Research-led team created Escherichia coli bacteria that partially build their DNA with ribonucleotides, the molecular building blocks typically used to build RNA. These engineered genomes contained up to 50 percent RNA, thus simultaneously representing a new type of synthetic organism and possibly a throwback to billions of years ago.

Mehta cautions that their work so far has focused on characterizing this chimeric RNA-DNA genome and its effect on bacterial growth and replication but hasn't explicitly explored questions about the transition from the RNA world to the DNA world. But, he says, the fact that E. coli with half its genome comprised of RNA can survive and replicate is remarkable and seems to support the possibility of the existence of evolutionarily transitional organisms possessing hybrid RNA-DNA genomes. The Scripps Research team is now studying how the mixed genomes of their engineered E. coli function and plans to use the bacteria to explore a number of evolutionary questions.

For instance, one question is whether the presence of RNA leads to rapid genetic drift -- large changes in gene sequence in a population over time. Scientists theorize that massive genetic drift occurred quickly during early evolution, and the presence of RNA in the genome could help explain how genetic change occurred so quickly.

In the paper published in PNAS, the researchers report engineering another laboratory model for an evolutionary milestone thought to have occurred more than 1.5 billion years ago. They created a yeast dependent for energy on bacteria living inside it as a beneficial parasite or "endosymbiont." This composite organism will allow them to investigate the ancient origins of mitochondria -- tiny, bacteria-like organelles that produce chemical energy within the cells of all higher organisms.

Mitochondria are widely thought to have evolved from ordinary bacteria that were captured by larger, single-celled organisms. They carry out several key functions in cells. Most importantly, they serve as oxygen reactors, using O2 to make cells' basic unit of chemical energy, the molecule ATP. As crucial as mitochondria are to cells, their origins remain somewhat mysterious, although there are clear hints of descent from a more independent organism, widely assumed to have been a bacterium.

Mitochondria have a double-membrane structure like that of some bacteria and -- again, like bacteria -- contain their own DNA. Analyses of the mitochondrial genome suggest that it shares an ancient ancestor with modern Rickettsia bacteria, which can live within the cells of their hosts and cause disease. Stronger support for the theory that mitochondria originated as bacteria would come from experiments showing that independent bacteria can indeed be transformed, in an evolution-like progression, into mitochondria-like symbionts. To that end, the Scripps Research scientists engineered E. coli bacteria that could live in, depend upon, and provide key assistance to cells of Saccharomyces cerevisiae, also known as baker's yeast.

The researchers started by modifying E. coli to remove a gene needed to make thiamin, making the bacteria dependent on the yeast cells for this essential vitamin. At the same time, they added to the bacteria a gene for ADP/ATP translocase, a transporter protein, so that ATP produced within the bacterial cells would be supplied to their yeast-cell hosts -- mimicking the central function of real mitochondria. The team also modified the yeast so that its own mitochondria were deficient at supplying ATP. Thus the yeast would be dependent on the bacteria for normal, mitochondria-based ATP production.

The team found that some of the engineered bacteria, after being modified with surface proteins to protect them from being destroyed in the yeast, lived and proliferated in harmony with their hosts for more than 40 generations and appeared to be viable indefinitely. "The modified bacteria seem to accumulate new mutations within the yeast to better adapt to their new surroundings," says Schultz.

With this system established, the team will try to evolve the E. coli to become mitochondria-like organelles. For the new E. coli endosymbiont, adapting to life inside yeast could give it the opportunity to radically slim its genome. A typical E. coli bacterium, for example, has several thousand genes, whereas mitochondria have evolved a stripped-down set of just 37.

The Scripps Research team rounded out the study with further gene-subtraction experiments, and the results were promising: they found they could eliminate not just the E. coli thiamin gene but also the genes underlying the production of the metabolic molecule NAD and the amino acid serine, and still get a viable symbiosis.

"We are now well on our way to showing that we can delete the genes for making all 20 amino acids, which comprise a significant part of the E. coli genome," says Schultz. "Once we've achieved that, we'll move on to deleting genes for the syntheses of cofactors and nucleotides, and within a few years we hope to be able to get a truly minimal endosymbiotic genome."

Read more at Science Daily