A fungal infection associated with a high percentage of deaths among HIV patients and other immune-compromised patients is more diverse than previously known and was likely spread around the world by bats.
A global assessment of the fungus Histoplasma by the Translational Genomics Research Institute (TGen) found that the pathogen is actually divided among six species, and that its spread and speciation from continent to continent over the past 9 million years coincide with the global dispersal and evolution of bats.
Published in the scientific journal PLOS Neglected Tropical Diseases, TGen's study of 234 samples of Histoplasma capsulatum from around the world used the latest in genetic sequencing to characterize the differences between various species of this fungus. The study estimates a timeframe for its evolution, based on the average rate of genetic mutations.
"We need to better understand this disease so we can be better prepared for infectious outbreaks, and to see its relationship to similar fungal infections, such as Valley Fever," said Dr. Bridget Barker, TGen Assistant Professor of Pathogen Genomics and the senior author of this study. "There currently is no cure and no vaccine for this disease."
Histoplasma, which thrives in soil containing large amounts of bird or bat droppings, causes histoplasmosis, an infection most often acquired by breathing in microscopic fungal spores in the air. Histoplasmosis is also known as cave disease, spelunker's disease or Darling's disease, after the early 20th-century scientist who discovered it. Most people who are exposed don't get sick, but those who do often develop a fever, cough and fatigue. Many get better without medication, but some infections can be severe.
Histoplasma is responsible for up to 30 percent of all HIV/AIDS-related deaths in Latin America, according to the study. And the U.S. Centers for Disease Control and Prevention warns that other parts of the world are at risk where access to anti-retroviral drugs is limited.
"And with the increase of immunosuppressive therapy due to transplants and other chronic inflammatory disorders, disseminated histoplasmosis is becoming more frequent and is geographically expanding," said Dr. Marcus Teixeira, a TGen Post-Doctoral Fellow and the study's lead author. "In patients with an impaired immune system, the disease is mostly fatal without an early diagnosis and proper treatment."
The fungus thrives in moderate temperatures, relatively high humidity and low-light conditions. Besides bats, Histoplasma is also known to infect baboons, badgers, sea otters, raccoons, horses, cats and dogs.
The study, "Worldwide phylogenetic distribution and population dynamics of the genus Histoplasma," found that the disease likely began in Latin America and spread worldwide over the last 2-9 million years, between the Miocene and Pleistocene epochs.
Relatively little is known about the genetic background of the causative agents of histoplasmosis, its relationship with different hosts, and the variation of this disease in clinical presentation. Dr. Teixeira suggests that additional studies are needed to explore the genetic diversity of Histoplasma and correlate that information with medical data to provide public health professionals with answers should there be a disease outbreak or severe clinical cases.
Read more at Science Daily
Jun 2, 2016
Scientists gain supervolcano insights from Wyoming granite
University of Wyoming researchers Davin Bagdonas and Carol Frost make observations on Lankin Dome, part of the Wyoming batholith, in central Wyoming's Granite Mountains.
But what do the subterranean magma chambers look like, and where does the magma originate? Those questions can't be answered directly at modern, active volcanoes.
Instead, a new National Science Foundation (NSF)-funded study by University of Wyoming researchers suggests that scientists can go back into the past to study the solidified magma chambers where erosion has removed the overlying rock, exposing granite underpinnings. The study and its findings are outlined in a paper published in the June issue of American Mineralogist, the journal of the Mineralogical Society of America.
"Every geology student is taught that the present is the key to the past," says Carol Frost, director of the NSF's Division of Earth Sciences, on leave from UW, where she is a professor in the Department of Geology and Geophysics. "In this study, we used the record from the past to understand what is happening in modern magma chambers."
One such large granite body, the 2.62 billion-year-old Wyoming batholith, extends more than 125 miles across central Wyoming. UW master's degree student Davin Bagdonas traversed the Granite, Shirley and Laramie mountains to examine the body, finding remarkable uniformity, with similar biotite granite throughout.
"It was monotonous," says Bagdonas, who worked on the project with Frost. "Only minor variations were observed in granite near the roof and margins of the intrusion."
This homogeneity indicates that the crystallizing magma was generally well-mixed. However, more subtle isotopic variations across the batholith show that the magma formed by melting of multiple rock sources that rose through multiple conduits, and that homogenization was incomplete.
Studies of the products of supervolcanoes and their possible batholithic counterparts at depth are a vibrant, controversial area of research, says Brad Singer, professor in the Department of Geoscience at the University of Wisconsin-Madison. He says the research by Frost and her colleagues offers "a novel perspective gleaned from the ancient Wyoming batholith, suggesting that it is the frozen portion of a vast magma system that could have fed supervolcanoes like those which erupted in northern Chile-southern Bolivia during the last 10 million years.
"The possibility of such a connection, while intriguing, does raise questions. The high silica and potassium contents of the Wyoming granites differ from the bulk magma compositions erupted by these huge Andean supervolcanos. This might mean that the Wyoming batholith records the complete solidification of potentially explosive magma at depth, without the eruption of much high-silica rhyolite," Singer says. "Notwithstanding, this paper will certainly provoke a deeper look into how ancient Archean granites can be used to leverage understanding of the 'volcanic-plutonic connection' at supervolcanoes."
Read more at Science Daily
At the cradle of oxygen: Brand-new detector to reveal the interiors of stars
Oxygen is essential for life: we are immersed in it, yet none of it actually originates from our own planet. All oxygen was ultimately formed through thermonuclear reactions deep inside stars. Laboratory studies of the astrophysical processes leading to the formation of oxygen are therefore extremely important. A big step forward in these studies will be possible when work commences in 2018 at the Extreme Light Infrastructure -- Nuclear Physics (ELI-NP) facility near Bucharest, using a state-of-the-art source of intense gamma radiation. High-energy photons will be intercepted using a specially-designed particle detector acting as a target. A demonstrator version of the detector, constructed at the Faculty of Physics, University of Warsaw (FUW), has recently completed the first round of tests in Romania.
In terms of mass, the most abundant elements in the Universe are hydrogen (74%) and helium (24%). The percentage by mass of other, heavier elements is significantly lower: oxygen comprises just 0.85% and carbon 0.39% (in contrast, oxygen comprises 65% of the human body and carbon 18% by mass). In nature, conditions supporting the formation of oxygen are present only within evolutionarily advanced stars which have converted almost all their hydrogen into helium. Helium then becomes their main fuel. At this stage, three helium nuclei start combining into a carbon nucleus. By adding another helium nucleus, this in turn forms an oxygen nucleus and emits one or more gamma photons.
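The nuclear bookkeeping behind these two steps can be checked directly: three helium-4 nuclei carry exactly the twelve nucleons of carbon-12, and one more helium-4 brings the count to oxygen-16. A minimal sketch (the mass numbers are standard nuclear physics, not figures from the article):

```python
# Mass-number bookkeeping for the reactions described above: three
# helium-4 nuclei fuse into carbon-12 (the triple-alpha process), and
# capturing one more helium-4 nucleus yields oxygen-16.
HELIUM, CARBON, OXYGEN = 4, 12, 16

assert 3 * HELIUM == CARBON      # triple-alpha: 3 x 4 = 12 nucleons
assert CARBON + HELIUM == OXYGEN  # alpha capture: 12 + 4 = 16 nucleons
print("nucleon counts balance")
```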
"Oxygen can be described as the 'ash' from the thermonuclear 'combustion' of carbon. But what mechanism explains why carbon and oxygen are always formed in stars at more or less the same proportion of 6 to 10?" asks Dr. Chiara Mazzocchi (FUW). She goes on to explain: "Stars evolve in stages. During the first stage, they convert hydrogen into helium, then helium into carbon, oxygen and nitrogen, with heavier elements formed in subsequent stages. Oxygen is formed from carbon during the helium-burning phase. The thing is that, in theory, oxygen could be produced at a faster rate. If the star were to run out of helium and shift to the next stage of its evolution, the proportions between carbon and oxygen would be different."
The experiments planned for ELI-NP will not actually recreate the thermonuclear reactions converting carbon into oxygen and gamma photons. Instead, researchers are hoping to observe the reverse reaction: collisions of high-energy photons with oxygen nuclei that produce carbon and helium nuclei. Registering the products of this decay should make it possible to study the characteristics of the reaction and fine-tune existing theoretical models of thermonuclear synthesis.
"We are preparing an eTPC detector for the experiments at ELI-NP. It is an electronic-readout time-projection chamber, which is an updated version of an earlier detector built at the Faculty's Institute of Experimental Physics. The latter was successfully used by our researchers for the world's first observations of a rare nuclear process: two-proton decay," says Dr. Mikolaj Cwiok (FUW).
The main element of the eTPC detector is a chamber filled with a gas containing many oxygen nuclei (e.g. carbon dioxide). The gas acts as a target. The gamma radiation beam passes through the gas, with some of the photons colliding with oxygen nuclei to produce carbon and helium nuclei. The nuclei formed in the reaction, which are charged particles, ionise the gas. To increase their range, the gas is kept at a reduced pressure, around one-tenth of atmospheric pressure. The released electrons are guided by an electric field towards Gas Electron Multiplier (GEM) amplification structures followed by readout electrodes. The paths of the particles are registered electronically using strip electrodes. Processing the data with specialised FPGA processors makes it possible to reconstruct the particles' paths in 3D.
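The reconstruction chain described above (strip electrodes for the transverse position, drift time for the third coordinate) is the defining trick of any time-projection chamber. A minimal sketch of the idea, where the drift velocity and hit values are illustrative assumptions rather than ELI-NP parameters:

```python
# Minimal sketch of how a time-projection chamber reconstructs a 3D point:
# the readout plane gives (x, y) from the strip electrodes, and the drift
# time of the ionisation electrons gives z. The drift velocity and the hit
# values below are illustrative assumptions, not ELI-NP parameters.
DRIFT_VELOCITY_CM_PER_US = 1.0  # assumed constant drift speed

def reconstruct_point(x_strip_cm, y_strip_cm, drift_time_us):
    """Convert a strip hit plus its drift time into a 3D coordinate (cm)."""
    z_cm = DRIFT_VELOCITY_CM_PER_US * drift_time_us
    return (x_strip_cm, y_strip_cm, z_cm)

# A straight track sampled at three drift times:
track = [reconstruct_point(1.0 * t, 0.5 * t, t) for t in (1.0, 2.0, 3.0)]
print(track)  # [(1.0, 0.5, 1.0), (2.0, 1.0, 2.0), (3.0, 1.5, 3.0)]
```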
The active region of the detector will be 35 × 20 × 20 cm, and at the nominal intensity of the photon beam it should register up to 70 collisions of gamma photons with oxygen nuclei per day. Tests at ELI-NP used a demonstrator: a smaller but fully functional version of the final detector, named mini-eTPC. The device was tested with a beam of alpha particles (helium nuclei).
"We are extremely pleased with the results of the tests conducted thus far. The demonstrator worked as we expected and successfully registered the tracks of charged particles. We are certain to use it in future research as a fully operational measuring device. In 2018, ELI-NP will be equipped with a larger detector which we are currently building at our laboratories," adds Dr. Mazzocchi.
The project is carried out in collaboration with researchers from ELI-NP/IFIN-HH (Magurele, Romania) and the University of Connecticut in the US. The Warsaw team, led by Prof. Wojciech Dominik, brings together physicists and engineers from the Division of Particles and Fundamental Interactions and the Nuclear Physics Division and students from the University of Warsaw: Jan Stefan Bihalowicz, Jerzy Manczak, Katarzyna Mikszuta and Piotr Podlaski.
Extreme Light Infrastructure (ELI) is a research project valued at 850 million euros, conducted as part of the European Strategy Forum on Research Infrastructures roadmap. The ELI scientific consortium will encompass three centres in the Czech Republic, Romania and Hungary, focusing on research into the interactions between light and matter under the conditions of the most powerful photon beams, at a wide range of wavelengths and on timescales measured in attoseconds (a billionth of a billionth of a second). The Romanian ELI -- Nuclear Physics centre, in Magurele near Bucharest, conducts research using two sources of radiation: high-intensity lasers (of the order of 10^23 watts per square centimetre), and high-intensity sources of monochromatic gamma radiation. The gamma beam will be formed by scattering laser light off electrons accelerated by a linear accelerator to speeds nearing the speed of light.
Read more at Science Daily
Spinning electrons yield positrons for research
Researchers use accelerators to coax the electron into performing a wide range of tricks to enable medical tests and treatments, improve product manufacturing, and power breakthrough scientific research. Now, they're learning how to coax the same tricks out of the electron's antimatter twin -- the positron -- to open up a whole new vista of research and applications.
Using the Continuous Electron Beam Accelerator Facility (CEBAF) at the Department of Energy's Jefferson Lab, a team of researchers has, for the first time, demonstrated a new technique for producing polarized positrons. The method could enable new research in advanced materials and offers a new avenue for producing polarized positron beams for a proposed International Linear Collider and an envisioned Electron-Ion Collider.
Jefferson Lab Injector Scientist Joe Grames says the idea for the method grew out of the many advances that have been made in understanding and controlling the electron beams used for research in CEBAF.
"We have a lot of experience here at Jefferson Lab in operating a world-leading electron accelerator," Grames said. "We are constantly improving the electron beam for the experiments, pushing the limits of what we can get the electrons to do."
The CEBAF accelerator gathers up free electrons, sets the electrons to spinning like tops, packs them full of additional energy ("accelerating" the particles to up to 12 billion electron-volts), and directs them along a tightly controlled path into experimental targets. Grames and his colleagues would like to take that finesse a step further and transform CEBAF's well-controlled polarized electron beams into well-controlled beams of polarized positrons to offer researchers at Jefferson Lab an additional probe of nuclear matter. They named the endeavor the Polarized Electrons for Polarized Positrons experiment, or PEPPo.
Positrons are the anti-particles of electrons. Where the electron has a negative charge, the positron has a positive one. Producing positrons that are spinning in the same direction, like the electrons in CEBAF, is very challenging. Before PEPPo, researchers had successfully managed to coax polarized positrons into existence using very high-energy electron beams and sophisticated technologies. The PEPPo method, however, puts a new twist on things.
"From the beginning, our aim was to show that we could use the polarized electron beam we produce every day at CEBAF to create the positrons. But we wanted to do that using a low-energy and small-footprint electron beam, so that a university or company may also benefit from our proof of principle," Grames explained.
The PEPPo system was placed inside the CEBAF accelerator's injector, which is the part of the accelerator that generates electrons. The system consists mainly of small magnets for managing the particle beams, targets for transforming them, and detectors for measuring the particles.
In it, a new beam of electrons from CEBAF is directed into a slice of tungsten. The electrons rapidly decelerate as they pass through the tungsten atoms, giving off gamma rays. These gamma rays then interact with other atoms in the tungsten target to produce lower-energy pairs of positrons and electrons. Throughout the process, the polarization of the original electron beam is passed along. The researchers use a magnet to siphon the positrons away from the other particles and direct them into a detector system that measures their energy and polarization.
"We showed that there's a very efficient transfer of polarization from electrons to the positrons," said Grames.
Further, the researchers found that it is also possible to dial up the degree of polarization by selecting positrons of the right energy. While the more abundant lower-energy positrons are less polarized, the highest-energy positrons retain nearly all of the polarization of the original electron beam. In PEPPo, the electron beam was 85 percent polarized and accelerated to 8 million electron-volts (MeV).
"Nuclear physicists typically want the highest polarization possible for their experiments," he explained. "Positrons collected at half the original electron energy were about 50 percent polarized, which is still quite high. But, as we approached the maximum energy, we measured 82 percent, showing that a very large portion of the original electron polarization is transferred."
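As a quick sanity check of the numbers Grames quotes, the fraction of the beam's polarization retained by the highest-energy positrons works out to roughly 96 percent:

```python
# Ratio of the quoted polarizations (both figures are from the article).
beam_polarization = 0.85       # incoming electron beam
positron_polarization = 0.82   # measured near the maximum positron energy

transfer_fraction = positron_polarization / beam_polarization
print(f"~{transfer_fraction:.0%} of the beam polarization survives")
```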
The PEPPo experiment ran for four weeks in the spring of 2012. The result has just been published in Physical Review Letters, and it is featured as an Editors' Suggestion.
Read more at Science Daily
Jun 1, 2016
Elliptical galaxies not formed by merging
Montage of the SDP.81 Einstein Ring and the lensed galaxy.
It all starts from a problem with dust: galaxies with the highest rates of star formation are also the "dustiest," because the violent process of star formation produces gas and heavy molecules. This means that part of the electromagnetic radiation emitted by nascent stars cannot be recorded by instruments observing in the optical and ultraviolet bands, as it is absorbed by dust and gas and re-emitted in the infrared. On top of this, owing to instrument limitations, it is difficult to observe even this infrared radiation in the case of very distant, older galaxies. All this complicates things for astrophysicists investigating stellar and galaxy formation, and studies to date have mostly offered predictions based on purely theoretical models.
Claudia Mancuso, PhD student under the supervision of Andrea Lapi and Luigi Danese, SISSA professors in the astrophysics group and co-authors of the study, did the opposite: "we started from the data, available in complete form only for the closer galaxies and in incomplete form for the more distant ones, and we filled the 'gaps' by interpreting and extending the data based on a scenario we devised" comments Mancuso. The analysis also took into account the phenomenon of gravitational lensing, which allows us to observe very distant galaxies belonging to ancient cosmic epochs.
In this "direct" (i.e., model-independent) manner, the SISSA group obtained an image of the evolution of galaxies even in very ancient epochs (close, on a cosmic timescale, to the epoch of reionization). This reconstruction demonstrates that elliptical galaxies cannot have formed through the merging of other galaxies, "simply because there wasn't enough time to accumulate the large quantity of stars seen in these galaxies through these processes," comments Mancuso. "This means that the formation of elliptical galaxies occurs through internal, in situ processes of star formation."
"These findings," states Mancuso, "will constitute a necessary starting point for building the future generation of models and numerical simulations and, more importantly, they will provide an unprecedented basis for identifying primordial galaxies in the next generation surveys in the ultraviolet with the future James Webb Space Telescope (JWST), in the millimeter band with the Atacama Large Millimeter Array (ALMA), and in the radio band with the Square Kilometer Array (SKA) interferometer."
From Science Daily
Measuring the Milky Way: One massive problem, one new solution
It is a galactic challenge, to be sure, but Gwendolyn Eadie is getting closer to an accurate answer to a question that has defined her early career in astrophysics: what is the mass of the Milky Way?
The short answer, so far, is 7 × 10^11 solar masses. In terms that are easier to comprehend, that's about the mass of our Sun multiplied by 700 billion. The Sun, for the record, has a mass of two nonillion (that's 2 followed by 30 zeroes) kilograms, or 330,000 times the mass of Earth.
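These figures are easy to cross-check. A quick sketch using the article's own numbers (Earth's mass is the standard value, not stated in the article):

```python
# Back-of-envelope check of the figures quoted above (the Milky Way and
# solar masses are the article's numbers, not a new measurement).
M_SUN_KG = 2e30          # ~two nonillion kg, as stated
M_EARTH_KG = 5.97e24     # standard value for Earth's mass (assumption)

milky_way_solar_masses = 7e11   # 700 billion Suns
milky_way_kg = milky_way_solar_masses * M_SUN_KG

print(f"Milky Way ~ {milky_way_kg:.1e} kg")
print(f"Sun/Earth mass ratio ~ {M_SUN_KG / M_EARTH_KG:,.0f}")
```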
"And our galaxy isn't even the biggest galaxy," Eadie says.
Measuring the mass of our home galaxy, or any galaxy, is particularly difficult. A galaxy includes not only stars, planets, moons, gases, dust and other objects and material, but also a big helping of dark matter, a mysterious and invisible form of matter that is not yet fully understood and has not been directly detected in the lab. Astronomers and cosmologists, however, can infer the presence of dark matter through its gravitational influence on visible objects.
Eadie, a PhD candidate in physics and astronomy at McMaster University, has been studying the mass of the Milky Way and its dark matter component since she started graduate school. She uses the velocities and positions of globular star clusters that orbit the Milky Way.
The orbits of globular clusters are determined by the galaxy's gravity, which is dictated by its massive dark matter component. What's new about Eadie's research is the technique she devised for using globular cluster (GC) velocities.
The total velocity of a GC must be measured in two directions: one along our line-of-sight, and one across the plane of the sky (the proper motion). Unfortunately, researchers have not yet measured the proper motions of all the GCs around the Milky Way.
Eadie, however, has developed a way to use these velocities that are only partially known, in addition to the velocities that are fully known, to estimate the mass of the galaxy. Her method also predicts the mass contained within any distance from the center of the galaxy, with uncertainties, which makes her results easy to compare with other studies.
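To make the two velocity components concrete: the line-of-sight velocity is measured directly in km/s, while the proper motion is an angular rate that must be scaled by distance. A hedged sketch using the standard astronomical conversion factor of 4.74 (arcsec/yr at a distance in parsecs gives km/s); the input values are invented for illustration:

```python
import math

# Combining the two velocity components described above. The factor 4.74
# converts a proper motion in arcsec/yr at a distance in parsecs into a
# tangential velocity in km/s (standard astronomy relation); the sample
# numbers below are illustrative, not real cluster data.
def total_velocity(v_los_km_s, proper_motion_arcsec_yr, distance_pc):
    v_tan = 4.74 * proper_motion_arcsec_yr * distance_pc  # km/s
    return math.hypot(v_los_km_s, v_tan)

# A hypothetical cluster 10 kpc away: v_tan = 4.74 * 0.001 * 10000 = 47.4 km/s
print(total_velocity(100.0, 0.001, 10_000))
```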
Eadie and her academic supervisor William Harris, a professor of Physics and Astronomy at McMaster, have co-authored a paper on their most recent findings, which allow dark matter and visible matter to have different distributions in space. They have submitted this work to the Astrophysical Journal, and Eadie will present their results May 31 at the Canadian Astronomical Society's conference in Winnipeg.
Read more at Science Daily
Genetic switch that turned moths black also colors butterflies
Writing in the journal Nature, a team of researchers led by academics at the Universities of Cambridge and Sheffield, report that a fast-evolving gene known as "cortex" appears to play a critical role in dictating the colours and patterns on butterfly wings.
A parallel paper in the same journal by researchers from the University of Liverpool shows that this same gene also caused the peppered moth to turn black during the mid-19th century, when it evolved new camouflage in response to the industrial pollution of the time.
The finding offers clues about how genetics plays a role in making evolution a predictable process. For reasons the researchers have yet to understand in full, the cortex gene, which helps to regulate cell division in butterflies and moths, has become a major target for natural selection acting on colour and pattern on the wings.
Chris Jiggins, Professor of Evolutionary Biology and a Fellow of St John's College, University of Cambridge, said: "What's exciting is that it turns out to be the same gene in both cases. For the moths, the dark colouration developed because they were trying to hide, but the butterflies use bright colours to advertise their toxicity to predators. It raises the question that given the diversity in butterflies and moths, and the hundreds of genes involved in making a wing, why is it this one every time?"
Dr Nicola Nadeau, a NERC Research Fellow from the University of Sheffield added: "It's amazing that the same gene controls such a diversity of different colours and patterns in butterflies and a moth. Our study, together with the findings from the University of Liverpool, shows that the cortex gene is important for colour and pattern evolution in this whole group of insects."
Butterflies and moths comprise the order of insects known as Lepidoptera. Nearly all of the 160,000 types of moth and 17,000 types of butterfly have different wing patterns, which are adapted for purposes like attracting mates, giving off warnings, camouflage (also known as "crypsis"), and thermal regulation.
These wing patterns are actually made up of tiny coloured scales arranged like tiles on a roof. Although they have been studied by biologists for over a century, the molecular mechanisms which control their development are only now starting to be uncovered.
The peppered moth is one of the most famous examples of evolution by natural selection. Until the 19th Century, peppered moths were predominantly pale-coloured, and used this to camouflage themselves against lichen-covered tree trunks, which made them almost invisible to predators.
During the industrial revolution, however, the lichen on trees in some parts of the country was killed by pollution, and soot turned the trunks black. A corresponding change was seen in peppered moths, which turned black as well, helping them to remain camouflaged from birds. The process is known as industrial melanism -- melanism meaning the development of dark-coloured pigmentation.
The Liverpool-led team found that this colour change was produced by a mutation in the cortex gene, which occurred during the mid 1800s, just before the first reported sighting of black peppered moths. Fascinatingly, however, the Cambridge-Sheffield study has now shown that exactly the same gene also influences the extremely bright and colourful patterns of Heliconius -- the name given to about 40 different closely-related species of beautiful, tropical butterflies found in South America.
Heliconius colour patterns are used to send a signal to potential predators that the butterflies are toxic if eaten, and different types of Heliconius butterfly mimic one another by using their bright colours as warning signals. Unlike the dark colouring of the peppered moth, it is therefore an evolutionary development that is meant to be seen.
The researchers carried out fine-scale mapping, looking for parts of the DNA sequence that were specifically different in butterflies with different patterns, in three different Heliconius species, and in each case the cortex gene was found to be responsible for this adaptation in their patterning.
Because Heliconius species are extremely diverse, the study of what causes variations in their patterning can provide more general clues about the genetic switches that control diversification in species.
In most cases, the genes responsible for these processes are known as "transcription factors" -- meaning that they are responsible for turning other genes on and off. Intriguingly, what made cortex such an elusive switch to spot was the fact that it does not do this. Instead, it is a cell cycle regulator, which means that it controls when cells divide and thus when different coloured scales develop within a butterfly wing.
Read more at Science Daily
Neanderthals used fire in caves: French cave sheds new light on the Neanderthals
Bruniquel Cave, an extraordinary find
Bruniquel Cave was discovered in 1990 on a site overlooking the Aveyron Valley. The team of speleologists in charge of its management has kept the site in pristine condition, preserving its numerous natural formations (an underground lake, calcite rafts, translucent flowstone, concretions of all types…), intact floors containing numerous bone remains and dozens of bear hibernation hollows with impressive claw marks. But most importantly, the cave contains original structures made up of about 400 stalagmites or sections of stalagmites, gathered and arranged in more or less circular formations. These circles show signs of fire use: calcite reddened or blackened by soot and fractured by heat, as well as burnt matter including bone remnants. In 1995, a first team of speleologists and researchers used carbon-14 to date a burnt bone at 47,600 years (the oldest possible date using that technique), but no further dating was carried out at that time.
Intriguing stalagmite structures spawn a new concept: "speleofacts"
In 2013 a team of researchers, with the backing of the DRAC Midi Pyrénées regional archaeological department, launched a new program of studies and analyses. In addition to a 3D survey of the stalagmite structures and an inventory of their constituent elements, a magnetic study was used to reveal anomalies caused by heat, making it possible to map the burnt remnants found in this part of the cave. It seems most plausible that these fires were simply used as light sources.
Since no other stalagmite structure of this scale has yet been discovered, the team developed a new concept to designate these carefully arranged pieces of stalagmites: "speleofacts." An inventory of the cave's 400 speleofacts reveals a total of 112 meters of stalagmites broken into well-calibrated pieces, weighing an estimated 2.2 metric tons. The components of the structures are aligned, juxtaposed and superimposed (in two, three and even four layers), with props around the outside, apparently to hold them in place, and filler pieces. Marks left by the wrenching of stalagmites from the cave floor to make the structures have been identified nearby.
The world's first spelunkers
No remains were found in the cave floors that could help date the installation: a thick crust of calcite has coated the structures, sealing them in place and concealing the original floor. For this reason, the researchers, with the help of colleagues from the University of Xi'an (China) and the University of Minnesota (US), used a method called uranium-series dating (U-Th), based on the radioactive properties of uranium, trace amounts of which are omnipresent in the environment. When stalagmites form, uranium is incorporated into the calcite. Over time it decays into other elements, including thorium (Th). The age of a stalagmite can therefore be determined by measuring the thorium and remaining uranium in the calcite.
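A rough feel for the method can be had from a simplified, hypothetical calculation. Real U-Th dating measures the (230Th/234U) and (234U/238U) activity ratios by mass spectrometry and solves a coupled equation; the sketch below assumes a closed system, no initial thorium in the calcite, and 234U in equilibrium with 238U, so that the (230Th/238U) activity ratio simply grows toward 1 as 1 - exp(-lambda*t). The function name and parameters are illustrative, not taken from the study.

```python
import math

TH230_HALF_LIFE = 75_584.0  # years, half-life of thorium-230
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE  # decay constant, per year

def u_th_age(activity_ratio):
    """Age in years implied by a measured (230Th/238U) activity ratio,
    under the simplifying assumptions stated above."""
    if not 0.0 <= activity_ratio < 1.0:
        raise ValueError("ratio must lie in [0, 1) under this model")
    # Invert ratio = 1 - exp(-lambda * t)
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

# A ratio of 0.5 corresponds to exactly one 230Th half-life (~75,600 years);
# calcite as old as the structures (~176,500 years) would show a ratio near 0.8.
```

Dating both the end of growth of a stalagmite used in a structure and the start of the calcite regrowth sealing it brackets the construction between two such ages, which is how the researchers arrived at their estimate.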
The Neanderthals made these structures by breaking stalagmites and rearranging the pieces. After the site was abandoned, new layers of calcite, including new stalagmite growth, formed on the human-made structures. By dating the end of the growth of the stalagmites used in the structures and the beginning of the regrowth sealing those same structures, the researchers have estimated the age of the installation at 176,500 years, ± 2,000 years. Additional samples, in particular of the calcite covering a burnt bone, confirmed this surprising result.
Were the first Neanderthals explorers and builders?
The very existence of these structures, virtually unique in the annals of archaeology, was already an astonishing discovery. Until now, humans were thought to have begun making regular incursions into the underground world, beyond the reach of sunlight, only at the beginning of the Upper Paleolithic in Europe, plus some isolated cases in Southeast Asia and Australia. The proof is nearly always drawings, engravings and paintings, like those found in the caves of Chauvet (-36,000 years), Lascaux (-22,000 to -20,000 years), Altamira in Spain and Niaux (-18,000 to -15,000 years for both sites) and, more rarely, burial sites (Cussac Cave in France's Dordogne region: -28,500 years). But the Bruniquel stalagmite structures were built long before modern humans arrived in Europe (-40,000 years). Their creators must therefore have been the first Neanderthals, whom the scientific community had until now presumed not to have ventured far underground, nor to have mastered such sophisticated use of lighting and fire, let alone to have built such elaborate constructions.
New questions about the Neanderthals
We now know that, some 140 millennia before the arrival of modern man, Europe's first Neanderthals were occupying deep caves, building complex structures and maintaining fires in them. The Bruniquel structures are of particular interest due to their distance from the mouth of the cave, which is thought to be the same now as in the days of the Neanderthals. The researchers also wonder what the function of these installations, so far from daylight, could have been. Eliminating the unlikely hypothesis of shelter, given the structures' distance from the entrance, was it to find materials of now-unknown utility? Could it have been for "technical" purposes, such as water storage? Or for the observance of religious or other rites? In any case, the researchers confirm that the Neanderthals had to have an advanced social organization to build such constructions. Further studies will attempt to explain their function, which for the moment remains the biggest mystery surrounding Bruniquel Cave.
May 31, 2016
Scientists find brain area responsible for learning from immediate experience
In a study funded by the Medical Research Council and published in the journal eLife, a team from Oxford University and Imperial College looked at an area called the mediodorsal thalamus (MD), known to be involved in decision making and learning.
Senior author and Oxford researcher Dr Anna Mitchell explained: 'We already knew that the mediodorsal thalamus is involved in learning and decision making but did not fully understand the role it played. A key question in neurosciences is how the brain computes functions like planning a day's activities or making a decision to do one thing rather than another. We process information using widespread networks across the brain, so it is useful to focus on the contribution of particular areas to the overall task. In this case, we chose to look at how the mediodorsal thalamus supports optimal processing of new learning and decision making.'
The study used rhesus macaque monkeys, which were taught cognitive tasks on touchscreen computers that released food rewards for learning new information and making good choices. These tests were then repeated after surgery that induced selective lesions to the MD.
Monkeys who could not use their MD were less able to respond to changes that required them to adapt their behaviour to continue making the right choices to maximise rewards. They also struggled with their decisions when they were presented with a choice of several differently rewarded options.
Dr Mitchell said: 'Previously, some had thought that in these cases the monkeys would just keep repeating the same choice as before. We found that they could make different choices but they had a reduced ability to integrate information from recent choices that they had made combined with the result of their most recent choice to optimally guide their decisions.
Tobacco smoke makes germs more resilient
The mouth is one of the dirtiest parts of the body, home to millions of germs, and smoking makes it worse, say researchers.
University of Louisville School of Dentistry researcher David A. Scott, Ph.D., explores how cigarettes lead to colonization of bacteria in the body. Scott and his research team have identified how tobacco smoke, composed of thousands of chemical components, acts as an environmental stressor and promotes bacterial colonization and immune evasion.
Scott says since this initial finding several years ago, a recent literature review published in Tobacco Induced Diseases revealed that cigarette smoke and its components also promote biofilm formation by several other pathogens including Staphylococcus aureus, Streptococcus mutans, Klebsiella pneumoniae and Pseudomonas aeruginosa.
Biofilms are composed of numerous microbial communities often made up of complex, interacting and co-existing multispecies structures. Bacteria can form biofilms on most surfaces including teeth, heart valves and the respiratory tract.
"Once a pathogen establishes itself within a biofilm, it can be difficult to eradicate as biofilms provide a physical barrier against the host immune response, can be impermeable to antibiotics and act as a reservoir for persistent infection," Scott said. "Furthermore, biofilms allow for the transfer of genetic material among the bacterial community and this can lead to antibiotic resistance and the propagation of other virulence factors that promote infection."
One of the most prevalent biofilms is dental plaque, which can lead to gingivitis -- a gum disease found in almost half the world's population -- and to more severe oral diseases, such as chronic periodontitis. Bacterial biofilms also can form on heart valves resulting in heart-related infections, and they also can cause a host of other problems.
"We are continuing research to understand the interactions of the elaborate communities within biofilms and how they relate to disease. Many studies have investigated biofilms using single species, but more relevant multispecies models are emerging. Novel treatments for biofilm-induced diseases also are being investigated, but we have a long way to go," Scott said.
Scott elaborates on this research in a short question and answer style blog to be published May 31 on the BioMedCentral website: http://blogs.biomedcentral.com/on-health/2016/05/31/wntd-author-qa/
Attention to Scott's work comes as the World Health Organization observes World No Tobacco Day on May 31 to encourage a global 24-hour abstinence from all forms of tobacco consumption. The effort points to the annual 6 million worldwide deaths linked to the negative health effects of tobacco use.
3-D model reveals how invisible waves move materials within aquatic ecosystems
David Deepwell, a graduate student, and Professor Marek Stastna in Waterloo's Faculty of Mathematics have created a 3-D simulation that showcases how materials such as phytoplankton, contaminants, and nutrients move within aquatic ecosystems via underwater bulges called mode-2 internal waves.
The simulation can help researchers understand how internal waves can carry materials over long distances. Their model was presented in the American Institute of Physics' journal Physics of Fluids earlier this week.
In the simulation, fluids of different densities are layered like the layers of a cake, creating an environment similar to that found in large aquatic bodies such as oceans and lakes. A middle layer, known as a pycnocline, in which density changes sharply between the lighter water above and the denser water below, is created, and it is in this layer that materials tend to be caught.
"When the fluid behind the gate is mixed and then the gate is removed, the mixed fluid collapses into the stratification because it is both heavier than the top layer and lighter than the bottom one," explained Deepwell, "Adding dye to the mixed fluid while the gate is in place simulates the material we want the mode-2 waves -- the bulges in the pycnocline formed once the gate is taken away -- to transport. We can then measure the size of the wave, how much dye remains trapped within it, and how well the wave carries its captured material."
Deepwell and Stastna found that the larger the bulge within the pycnocline, the larger the amount of material carried by the mode-2 wave.
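The lock-release setup described above can be sketched in a few lines. This is an illustrative sketch only: the parameter values, names, and the smooth tanh pycnocline are common conventions in stratified-fluid modelling, not details taken from the Waterloo model, which solves the full 3-D equations of motion.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper): reference density,
# top-to-bottom density difference, pycnocline centre height and half-thickness.
RHO_0 = 1000.0   # kg/m^3
D_RHO = 20.0     # kg/m^3
Z_PYC = 0.15     # m, height of the pycnocline centre above the bottom
H_PYC = 0.02     # m, pycnocline half-thickness

def background_density(z):
    """Two-layer stratification joined by a smooth tanh pycnocline.

    z is height above the bottom; density decreases with height and
    changes most rapidly near z = Z_PYC."""
    return RHO_0 - 0.5 * D_RHO * np.tanh((z - Z_PYC) / H_PYC)

def lock_release_density(z, x, lock_x=0.1):
    """Initial condition: behind the 'gate' (x < lock_x) the fluid is fully
    mixed to the mean density; elsewhere it keeps the stratification.
    Removing the gate lets the mixed patch collapse into the pycnocline,
    generating a mode-2 wave."""
    mixed = RHO_0  # depth-averaged density of the two layers
    return np.where(x < lock_x, mixed, background_density(z))
```

The mixed fluid sits exactly between the two layer densities, which is why it intrudes along the pycnocline rather than sinking or rising when the gate is removed.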
Remains of rice and mung beans help solve a Madagascan mystery
Residues of plant remains were obtained from sediments in the archaeological layers.
Genetic research has confirmed that the inhabitants of Madagascar do indeed share close ancestry with Malaysians, Polynesians, and other speakers of what is classed the Austronesian language family. To date, archaeological research has identified human settlements in Madagascar that belong to the first millennium. There are also findings suggesting that Madagascar may have been occupied by hunter-gatherers who probably arrived from Africa by the first or second millennium. Until now, however, archaeological evidence of the Austronesian colonisation has been missing.
The team were able to identify the species of nearly 2,500 ancient plant remains obtained from their excavations at 18 ancient settlement sites in Madagascar, on neighbouring islands and on the eastern African coast. They examined residues obtained from sediments in the archaeological layers, using a system of sieves and water. They looked at whether the earliest crops grown on the sites were African crops or were crops introduced to Africa from elsewhere. They found both types, but noted a distinct pattern, with African crops primarily concentrated on the mainland and the islands closest to the mainland. In Madagascar, in contrast, early subsistence focused on Asian crops. The data suggested an introduction of these crops, both to Madagascar and the neighbouring Comoros Islands, between the 8th and 10th centuries.
Senior author Dr Nicole Boivin, from the School of Archaeology at the University of Oxford and Director of the Department of Archaeology at the Max Planck Institute for the Science of Human History, said: 'Southeast Asians clearly brought crops from their homeland and grew and subsisted on them when they reached Africa. This means that archaeologists can use crop remains as evidence to provide real material insights into the history of the island. There are a lot of things we still don't understand about Madagascar's past; it remains one of our big enigmas. But what is exciting is that we finally have a way of providing a window into the island's highly mysterious Southeast Asian settlement and distinguishing it from settlements by mainland Africans that we know also happened.'
The analyses also suggest that Southeast Asians colonised not only Madagascar but also the nearby islands of the Comoros, because again the crops that grew there were dominated by the same Asian species. By contrast, crops identified on the eastern African coast and near coastal islands like Mafia and Zanzibar were mainly African species like sorghum, pearl millet and baobab.
Commenting on the Southeast Asian influence in the Comoros, study lead author Dr Alison Crowther, from the University of Queensland, Australia, said: 'This took us by surprise. After all, people in the Comoros speak African languages and they don't look like they have Southeast Asian ancestry in the way that populations on Madagascar do. What was amazing to us was the stark contrast that emerged between the crops on the Eastern African coast and the offshore islands versus those on Madagascar, but also the Comoros.'
May 30, 2016
Arctic Ocean methane does not reach the atmosphere
Ocean floor observatories, a research ship and an airplane were deployed to an area of 250 active methane gas flares in the Arctic Ocean.
"Our results are exciting and controversial," says senior scientist Cathrine Lund Myhre from NILU -- Norwegian Institute for Air Research, who is cooperating with CAGE through MOCA project.
The results were published in Geophysical Research Letters.
The scientists performed simultaneous measurements close to the seabed, in the ocean and in the atmosphere during an extensive ship and air campaign offshore of the Svalbard Archipelago in summer 2014. As of today, three independent models employing the marine and atmospheric measurements show that the methane emissions from the seabed in the area did not significantly affect the atmosphere.
"This is an important message to bring to the debate on the state of the ocean and atmospheric system in the Arctic. It is also important to emphasize that the Arctic has in recent years experienced major changes and average temperatures well above normal values. A thorough description of the present state of the Arctic environment, possible only with adequate measurements, is essential to the detection of future changes of potentially global significance." says Lund Myhre.
Methane increase since 2006
Levels of methane in the atmosphere have risen by an average of 6 parts per billion (ppb) globally per year since 2006, and slightly more over the Arctic and Norway. Since methane is the most important greenhouse gas after CO2, it is very important to explore why levels are rising.
Vast quantities of methane gas are stored under the seabed in ice-like substances called methane hydrates. One possible explanation for the increased methane concentration in the atmosphere is that these hydrates dissolve as the oceans become warmer. Methane gas leaks from the methane hydrates under the seabed, and rises through the water. The scientists want to find out if these emissions are increasing, and just how much methane is reaching the atmosphere.
"Estimates on how much methane gas is stored beneath the seabed as hydrates vary enormously. A recent calculation suggests that we are talking about 74,000 gigatonnes, and one gigatonne is a billion tonnes," says professor Jürgen Mienert, director at CAGE.
If any of the methane stored in the Arctic hydrate reservoirs is released into the atmosphere as a result of climate change, this could have a global impact in terms of further climate warming, in addition to what human activities are already contributing.
Why is methane not released to the atmosphere?
Sea ice, the obvious obstacle to such emissions, is not found here in the summer. So what is stopping the methane? Emissions from the sea bed are after all clearly visible both on the seabed and in the water column.
"We are talking about 250 active methane seeps found at relatively shallow depths: 90 to 150 meters" says oceanographer Benedicte Ferré from CAGE.
According to her, it is the sea itself that adds obstacles to methane emissions to the atmosphere in the summer. The weather is generally calm during summer, with little wind. This leads to stratification of the water column whereby layers of different density form, much like oil over water.
This means there is little or no exchange of water masses between the surface layer and the layers below. A natural barrier occurs, acting as a ceiling that prevents the methane from reaching the surface. But this condition does not last forever: wind blowing over the ocean can mix these layers, causing this natural barrier to disappear. Thus the methane may break the surface and enter the atmosphere.
"There is still a lot we do not know about seasonal variations. The methane can also be transported by water masses, or dissolve and be eaten by bacteria in the ocean. Thus long term observations are necessary to understand the emissions throughout the year. The only way to obtain these measurements are to use observatories that remain on the seabed for a long time," says Benedicte Ferré.
CAGE set out two such observatories last year; they were retrieved in May, and the data are waiting to be analysed.
Deep, old water explains why Antarctic Ocean hasn't warmed
Observed warming over the past 50 years (in degrees Celsius per decade) shows rapid warming in the Arctic, while the Southern Ocean around Antarctica has warmed little, if at all.
The study resolves a scientific conundrum, and an inconsistent pattern of warming often seized on by climate deniers. Observations and climate models show that the unique currents around Antarctica continually pull deep, centuries-old water up to the surface -- seawater that last touched Earth's atmosphere before the machine age, and has never experienced fossil fuel-related climate change. The paper is published May 30 in Nature Geoscience.
"With rising carbon dioxide you would expect more warming at both poles, but we only see it at one of the poles, so something else must be going on," said lead author Kyle Armour, a UW assistant professor of oceanography and of atmospheric sciences. "We show that it's for really simple reasons, and ocean currents are the hero here."
Gale-force westerly winds that constantly whip around Antarctica act to push surface water north, continually drawing up water from below. The Southern Ocean's water comes from such great depths, and from sources that are so distant, that it will take centuries before the water reaching the surface has experienced modern global warming.
Other places in the oceans, like the west coast of the Americas and the equator, draw seawater up from a few hundred meters depth, but that doesn't have the same effect.
"The Southern Ocean is unique because it's bringing water up from several thousand meters [as much as 2 miles]," Armour said. "It's really deep, old water that's coming up to the surface, all around the continent. You have a lot of water coming to the surface, and that water hasn't seen the atmosphere for hundreds of years."
The water surfacing off Antarctica last saw Earth's atmosphere centuries ago in the North Atlantic, then sank and followed circuitous paths through the world's oceans before resurfacing off Antarctica, hundreds or even a thousand years later.
Delayed warming of the Antarctic Ocean is commonly seen in global climate models. But the culprit had been wrongly identified as churning, frigid seas mixing extra heat downward. The study used data from Argo observational floats and other instruments to trace the path of the missing heat.
"The old idea was that heat taken up at the surface would just mix downward, and that's the reason for the slow warming," Armour said. "But the observations show that heat is actually being carried away from Antarctica, northward along the surface."
In the Atlantic, the northward flow of the ocean's surface continues all the way to the Arctic. The study used dyes in model simulations to show that seawater that has experienced the most climate change tends to clump up around the North Pole. This is another reason why the Arctic's ocean and sea ice are bearing the brunt of global warming, while Antarctica is largely oblivious.
"The oceans are acting to enhance warming in the Arctic while damping warming around Antarctica," Armour said. "You can't directly compare warming at the poles, because it's occurring on top of very different ocean circulations."
The brain clock that keeps memories ticking
Just as members of an orchestra need a conductor to stay on tempo, neurons in the brain need well-timed waves of activity to organize memories across time. In the hippocampus -- the brain's memory center -- temporal ordering of the neural code is important for building a mental map of where you've been, where you are, and where you are going. Published on May 30 in Nature Neuroscience, research from the RIKEN Brain Science Institute in Japan has pinpointed how the neurons that represent space in mice stay in time.
As a mouse navigates its environment, the central hippocampal area called CA1 relies on rhythmic waves of neural input from nearby brain regions to produce an updated map of space. When researchers turned off the input from nearby hippocampal area CA3, the refreshed maps became jumbled. While mice could still do a simple navigation task, and signals from single neurons appeared to represent space accurately, the population level code, or 'orchestra' was out of time and contained errors. "The neural music didn't change," said senior author Thomas McHugh, "but by silencing CA3 input to CA1 in the hippocampus we got rid of the conductor."
McHugh and co-author Steven Middleton accomplished this by genetically engineering mice to express a nerve toxin in CA3 that shut down the synaptic junctions between CA3 and other brain areas. The overall neuronal activity was preserved, but with synaptic communication effectively muted, they could measure the impact of removing CA3 input on the space map in area CA1.
While mice ran up and down a track, the authors recorded multiple individual neurons as well as the summed electric current from a larger group of neurons, called local field potentials. This allowed them to monitor each theta cycle, the time period over which the hippocampus updates its neural map of space as the mouse moves.
Comparing the individual and population activity in normal and transgenic mice, they made an apparently paradoxical observation. As the transgenic mice moved in the enclosure, individual neurons continued to update their activity at a regular interval of 8 Hz, known as theta-cycle phase precession. This cyclic organization of information, however, was absent across the population of neurons. "Without input from CA3, there was no global organization of the neural signals across the theta cycle to define where the mouse came from or where it was going," said McHugh.
The discovery of the mental map of space in the hippocampus was awarded the 2014 Nobel Prize in Physiology or Medicine, but the circuitry connecting ensembles of place cells, which are also used for memory processing, and how they update in real time was not known. Without CA3 input, accurate prediction of the spatial location from the ensemble neural code is impaired. The mouse still knows where it is, but small errors in representing space from individual neurons become compounded without CA3 directing the CA1 ensemble. "If neurons don't activate in sequence, you can't organize memories across time," says McHugh. "Whether in mice or humans, you need temporal organization to get from here to there, to make decisions and reach goals." If shutdown of CA3 were possible in humans, McHugh suggests, memories would likely become useless and jumbled. Earlier work with these same mice pointed to a similar role for the CA3 neurons in organizing information during sleep, a process required for long-term memory storage.
Remains of bizarre group of extinct snail-eating Australian marsupials discovered
"Malleodectes mirabilis was a bizarre mammal, as strange in its own way as a koala or kangaroo," says study lead author UNSW Professor Mike Archer.
"Uniquely among mammals, it appears to have had an insatiable appetite for escargot--snails in the whole shell. Its most striking feature was a huge, extremely powerful, hammer-like premolar that would have been able to crack and then crush the strongest snail shells in the forest."
Research describing the new marsupials is published in the journal Scientific Reports.
Isolated teeth and partial dentitions of this unusual group, known as malleodectids, had been unearthed over the years at Riversleigh, where Professor Archer and his colleagues have excavated for almost four decades. But the profoundly different nature of the marsupials was not realised until a well-preserved portion of the skull of a juvenile was found in a 15-million-year-old Middle Miocene cave deposit at Riversleigh.
This juvenile specimen was only recently extracted from its limestone casing in an acid bath at UNSW, making it available for study with modern techniques including micro-computed tomography. The young animal still had its baby teeth and was teething when it died, with adult teeth that were about to erupt still embedded in its jaw.
"Details of the canine, premolar and molar teeth of this specimen have enabled its relationships to other Australian marsupials to be determined with reasonable confidence," says Professor Archer, of the PANGEA Research Centre in the UNSW School of Biological, Earth and Environmental Sciences.
"Although it is very different from the others, it appears to have been related to the dasyures -- marsupial carnivores such as Tasmanian Devils and the extinct Tasmanian Tigers that are unique to Australia and New Guinea."
Nothing remains of the cave at Riversleigh, known as AL90 site, except its limestone floor, which contains the bones of thousands of animals that fell into, or lived in, the ancient cave.
"The juvenile malleodectid could have been clinging to the back of its mother while she was hunting for snails in the rocks around the cave's entrance, and may have fallen in and then been unable to climb back out," says team member UNSW Professor Suzanne Hand.
"Many other animals that lived in this lush forest met a similar fate with their skeletons accumulating one on top of another for perhaps thousands of years, until the cave became filled with palaeontological treasures.
"Over millions of years the walls and ceiling of the cave were eroded away, leaving only the fossil-rich floor, which was discovered by our Riversleigh Project team members in 1990."
Subsequent quarrying of the cave floor has produced thousands of exquisite fossils including the articulated skeletons of the ram-sized, sloth-like Nimbadon -- an extinct marsupial that fell in while moving overhead in the tree tops.
The Riversleigh World Heritage fossil deposits, which span the last 24 million years of Australian history, have produced many previously unknown kinds of animals such as Thingodonta, which may have been a woodpecker-like marsupial; Fangaroo, a tusked kangaroo; Drop crocs, which are strange leopard-like crocodiles that may have been arboreal; and Dromornis -- the Demon Duck of Doom, which was one of the largest birds in the world.
The Riversleigh Project, which has been a major focus of the palaeontological team at UNSW, is about to carry out its 40th annual expedition to Riversleigh.
Once again, the team expects to discover yet more strange creatures that once populated Australia's ancient rainforests at a time when the northern regions of the continent looked more like Amazonian rainforests than the arid zone the area has become today.
Of particular interest for this year's expedition will be younger, apparently Late Miocene, rocks discovered by the team, assisted by funding from the Australian Research Council and the National Geographic Society, in a remote area now called "New Riversleigh." These will fill a key time period in the rich, long record of environmental change at Riversleigh.
Among the first tantalising discoveries from "New Riversleigh" has been yet another bizarre, hyper-carnivorous marsupial that looks like it might be a younger, far more powerful cousin of the earlier snail-eating malleodectids.
Like so many of the strange creatures continuously being discovered in Riversleigh's rocks, malleodectids went extinct long before humans arrived.
Read more at Science Daily
May 29, 2016
How the brain makes, and breaks, a habit
Working with a mouse model, an international team of researchers demonstrates what happens in the brain for habits to control behavior.
The study is published in Neuron and was led by Christina Gremel, assistant professor of psychology at the University of California San Diego, who began the work as a postdoctoral researcher at the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health. Senior authors on the study are Rui Costa, of the Champalimaud Centre for the Unknown in Lisbon, and David Lovinger of the NIAAA/NIH.
The study provides the strongest evidence to date, Gremel said, that the brain's circuits for habitual and goal-directed action compete for control -- in the orbitofrontal cortex, a decision-making area of the brain -- and that neurochemicals called endocannabinoids allow for habit to take over, by acting as a sort of brake on the goal-directed circuit.
Endocannabinoids are a class of chemicals produced naturally by humans and other animals. Receptors for endocannabinoids are found throughout the body and brain, and the endocannabinoid system is implicated in a variety of physiological processes -- including appetite, pain sensation, mood and memory. It is also the system that mediates the psychoactive effects of cannabis.
Earlier work by Gremel and Costa had shown that the orbitofrontal cortex, or OFC, is an important brain area for relaying information on goal-directed action. They found that by increasing the output of neurons in the OFC with a technique called optogenetics -- precisely turning neurons on and off with flashes of light -- they increased goal-directed actions. In contrast, when they decreased activity in the same area with a chemical approach, they disrupted goal-directed actions and the mice relied on habit instead.
"Habit takes over when the OFC is quieted," Gremel said.
In the current study, since endocannabinoids are known to reduce the activity of neurons in general, the researchers hypothesized that endocannabinoids may be quieting or reducing activity in the OFC and, with it, the ability to shift to goal-directed action. They focused particularly on neurons projecting from the OFC into the dorsomedial striatum.
They trained mice to perform the same lever-pressing action for the same food reward but in two different environments that differentially bias the development of goal-directed versus habitual actions. Like humans who don't suffer from neuropsychiatric disorders, healthy mice will readily shift between performing the same action using a goal-directed versus habitual action strategy. To stick with the earlier example of getting home, we can switch the homing autopilot off and shift to goal-directed behavior when we need to get to a new or different location.
To test their hypothesis on the role played by endocannabinoids, the researchers then deleted a particular endocannabinoid receptor, called cannabinoid type 1, or CB1, in the OFC-to-striatum pathway. Mice missing these receptors did not form habits -- showing the critical role played by the neurochemicals as well as that particular pathway.
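The competition described above can be caricatured with a toy arbitration rule. This is purely illustrative and not the study's model; the signal strengths and brake value are invented numbers. The point is only that scaling down the goal-directed (OFC) signal is enough to flip behavior to habit, and that removing the brake, loosely analogous to deleting the CB1 receptor, leaves behavior goal-directed.

```python
# Toy sketch (not the study's model): two controllers propose an action
# strategy; whichever signal is stronger wins. An "endocannabinoid brake"
# scales down the goal-directed (OFC) signal.
def chosen_strategy(goal_signal: float, habit_signal: float, brake: float) -> str:
    effective_goal = goal_signal * (1.0 - brake)  # the brake quiets OFC output
    return "goal-directed" if effective_goal > habit_signal else "habitual"

# With no brake, the stronger goal-directed signal wins.
with_brake_off = chosen_strategy(1.0, 0.6, brake=0.0)
# With a strong brake engaged, habit takes over.
with_brake_on = chosen_strategy(1.0, 0.6, brake=0.8)
```

Under this cartoon, knocking out the brake machinery means `brake` stays at zero, so the animal never shifts into habitual responding, which is consistent with the reported behavior of the CB1-deletion mice.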
"We need a balance between habitual and goal-directed actions. For everyday function, we need to be able to make routine actions quickly and efficiently, and habits serve this purpose," Gremel said. "However, we also encounter changing circumstances, and need the capacity to 'break habits' and perform a goal-directed action based on updated information. When we can't, there can be devastating consequences."
Read more at Science Daily
Why malnutrition is an immune disorder
Malnourished children are most likely to die from common infections, not starvation. New experimental evidence, reviewed May 26 in Trends in Immunology, indicates that even with a healthy diet, defects in immune system function from birth could contribute to a malnourished state throughout life. Researchers speculate that targeting immune pathways could be a new approach to reduce the poor health and mortality caused by under- and overnutrition.
"That traditional image of malnutrition that we're unfortunately so familiar with--of someone wasting away--that's just the external picture," says Review first author Claire Bourke, a postdoctoral research assistant in the Centre for Genomics and Child Health at Queen Mary University of London. "Those height and weight defects that we see are the tip of the iceberg--there are a whole range of pro-inflammatory conditions, impaired gut function, weakened responses to new infections, and a resulting high metabolic burden underlying them."
The most common form of undernutrition globally is stunting -- where children fail to achieve their full height potential. Despite looking healthy, children in developing countries who are stunted in height may also have stunted immune development, making them more vulnerable to death by common infections.
Only recently have researchers had access to technology that can accurately study immunodeficiency. Even though immune parameters in undernourished children have been looked at for decades, much of that data is outdated. How malnutrition and immune function are related is actually still poorly understood; however, there is wide acceptance that malnutrition comes with a range of immune problems. These include reduced numbers of white blood cells, skin and gut membranes that are easier for pathogens to break through, and malfunctioning lymph nodes.
What's also emerging is that the relationship between malnutrition and immune dysfunction may be a bit "chicken and egg," with both causing and being the consequence of the other. Immune dysfunction results when people consume too few calories because of lack of food or have an excess of fat and sugar in their diet. That dysfunction is recorded in the DNA through epigenetic marks, so that if malnourished people have offspring, their children inherit an altered immune system (even after multiple generations). This altered immune system may then cause malnutrition even if children have an adequate diet.
"It's been thought for a long time that the immune system is driving pathology, but new experimental tools have made it possible to separate out the effects of the immune system from those of the diet alone," says Bourke. "There are new models for environmental enteric dysfunction in mice, a growing interest in microbiota and epigenetics--all of these studies show that the more we look into the immune system, the more it has a role to play in a really wide array of physiological systems. It doesn't just fight infection; it affects metabolism, neurological function, and growth, which are things that are also impaired in malnutrition."
Bourke imagines a future where clinicians could generate individualized immune readouts that can identify young people most susceptible to infection as a result of malnutrition. This could reduce the burden of a leading cause of child mortality by helping those who are most vulnerable get treated more often and sooner with targeted interventions.
Read more at Science Daily