When two identical sounds are repeated quickly, a filter reduces the attention that the brain directs to the second sound it hears. In people with schizophrenia, this ability to reduce the brain's response to identical sounds does not function properly. But why? Neuroscientists have been investigating the mechanism behind this auditory sensory gating. Their results show that the filtering begins at the very first stage of auditory processing.
Our sound environment is extremely dense, which is why the brain has to adapt and implement filtering mechanisms that allow it to hold its attention on the most important elements and save energy. When two identical sounds are repeated quickly, one of these filters -- called auditory sensory gating -- drastically reduces the attention that the brain directs to the second sound it hears. In people with schizophrenia, this ability to reduce the brain's response to identical sounds does not function properly: the brain is constantly assailed by a multitude of auditory stimuli, which disrupts its attentional capacity. But why? Neuroscientists from the University of Geneva (UNIGE), Switzerland, have been investigating the previously unknown mechanism that lies behind this auditory sensory gating. Their results, published in the journal eNeuro, show that the filtering begins at the very start of auditory processing, i.e. in the brainstem. This finding runs counter to earlier hypotheses, which held that gating was a function of frontal cortex control, an area heavily affected in people with schizophrenia.
One of the main characteristics of schizophrenia, which affects 0.5% of the population, is a difficulty in prioritising and ranking surrounding sounds, which then assail the individual. This is why schizophrenia is diagnosed using a simple test: the P50. "The aim is to have the patient hear two identical sounds spaced 500 milliseconds apart. We then measure the brain activity in response to these two sounds using an external electroencephalogram," explains Charles Quairiaux, a researcher in the Department of Basic Neurosciences in UNIGE's Faculty of Medicine. "If brain activity decreases drastically when listening to the second sound, everything is okay. But if it's almost identical, then that's one of the best-known symptoms of schizophrenia."
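As a rough illustration of how such a gating measure can be quantified, the sketch below computes the ratio of the second response to the first from two evoked EEG traces. The peak-amplitude shortcut, variable names, and toy data are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def p50_gating_ratio(response_s1: np.ndarray, response_s2: np.ndarray) -> float:
    """Return the S2/S1 suppression ratio from two evoked-response epochs.

    Each input is an EEG epoch (microvolts) time-locked to one of the two
    identical clicks presented ~500 ms apart. The P50 amplitude is crudely
    approximated here as the peak of the rectified trace; real pipelines use
    component-specific time windows and baseline correction.
    """
    amp_s1 = np.max(np.abs(response_s1))
    amp_s2 = np.max(np.abs(response_s2))
    return amp_s2 / amp_s1  # near 1.0 = weak gating; well below 1.0 = strong gating

# Toy traces in which the second response is strongly suppressed (healthy-like gating).
rng = np.random.default_rng(0)
s1 = 5.0 * np.hanning(50) + rng.normal(0, 0.2, 50)
s2 = 1.5 * np.hanning(50) + rng.normal(0, 0.2, 50)
print(f"S2/S1 ratio: {p50_gating_ratio(s1, s2):.2f}")
```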
Although the test is widely used for diagnosis, the functioning of this filtering mechanism -- called auditory sensory gating -- is still a mystery. Most hypotheses held that this brain property is provided by frontal cortex control, located at the front of the brain. "This area of control is badly affected in people suffering from schizophrenia, and it's situated at the end of the brain's sound processing pathway," explains Dr Quairiaux.
The failure is situated at the base of sound processing
In order to test this hypothesis, the Geneva-based neuroscientists placed external electroencephalographic electrodes on mice, which were then subjected to the P50 test, varying the intervals between the two sounds from 125 milliseconds to 2 seconds. The results proved to be exactly the same as those observed in humans: there was a clear decrease in brain activity when listening to the second sound.
The scientists then placed internal electrodes in the cortical and subcortical auditory regions of the brain, from the brainstem to the frontal cortex -- the pathway for processing sounds. The mice were given the P50 test a second time and, contrary to the initial hypothesis formulated by the scientists, the researchers observed that the drop in attention given to the second sound had already occurred at the brainstem and not only at the cortical level, with a 60% decrease in brain activity. "This discovery means we're going to have to reconsider our understanding of the mechanism, because it demonstrates that the filter effect begins at the very moment when the brain perceives the sound!" points out Dr Quairiaux. And where does this leave people suffering from schizophrenia?
Read more at Science Daily
Sep 7, 2019
A swifter way towards 3D-printed organs
Twenty people die every day waiting for an organ transplant in the United States, and while more than 30,000 transplants are now performed annually, there are over 113,000 patients currently on organ waitlists. Artificially grown human organs are seen by many as the "holy grail" for resolving this organ shortage, and advances in 3D printing have led to a boom in using that technique to build living tissue constructs in the shape of human organs. However, all 3D-printed human tissues to date lack the cellular density and organ-level functions required for them to be used in organ repair and replacement.
Now, a new technique called SWIFT (sacrificial writing into functional tissue) created by researchers from Harvard's Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS), overcomes that major hurdle by 3D printing vascular channels into living matrices composed of stem-cell-derived organ building blocks (OBBs), yielding viable, organ-specific tissues with high cell density and function. The research is reported in Science Advances.
"This is an entirely new paradigm for tissue fabrication," said co-first author Mark Skylar-Scott, Ph.D., a Research Associate at the Wyss Institute. "Rather than trying to 3D-print an entire organ's worth of cells, SWIFT focuses on only printing the vessels necessary to support a living tissue construct that contains large quantities of OBBs, which may ultimately be used therapeutically to repair and replace human organs with lab-grown versions containing patients' own cells."
SWIFT involves a two-step process that begins with forming hundreds of thousands of stem-cell-derived aggregates into a dense, living matrix of OBBs that contains about 200 million cells per milliliter. Next, a vascular network through which oxygen and other nutrients can be delivered to the cells is embedded within the matrix by writing and removing a sacrificial ink. "Forming a dense matrix from these OBBs kills two birds with one stone: not only does it achieve a high cellular density akin to that of human organs, but the matrix's viscosity also enables printing of a pervasive network of perfusable channels within it to mimic the blood vessels that support human organs," said co-first author Sébastien Uzel, Ph.D., a Research Associate at the Wyss Institute and SEAS.
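To get a feel for what that density implies at organ scale, a back-of-the-envelope calculation helps; the construct volume below is a hypothetical figure, not one taken from the paper.

```python
# Back-of-the-envelope estimate (illustrative only, not from the paper).
CELLS_PER_ML = 2e8            # OBB matrix density quoted above: ~200 million cells/mL
construct_volume_ml = 100.0   # hypothetical organ-scale construct volume
total_cells = CELLS_PER_ML * construct_volume_ml
print(f"~{total_cells:.1e} cells in a {construct_volume_ml:.0f} mL construct")
# On the order of 1e10 cells -- a scale at which diffusion alone cannot supply
# oxygen to the core, which is why embedded perfusable channels are needed.
```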
The cellular aggregates used in the SWIFT method are derived from adult induced pluripotent stem cells, which are mixed with a tailored extracellular matrix (ECM) solution to make a living matrix that is compacted via centrifugation. At cold temperatures (0-4°C), the dense matrix has the consistency of mayonnaise -- soft enough to manipulate without damaging the cells, but thick enough to hold its shape -- making it the perfect medium for sacrificial 3D printing. In this technique, a thin nozzle moves through this matrix depositing a strand of gelatin "ink" that pushes cells out of the way without damaging them.
When the cold matrix is heated to 37°C, it stiffens to become more solid (like an omelet being cooked) while the gelatin ink melts and can be washed out, leaving behind a network of channels embedded within the tissue construct that can be perfused with oxygenated media to nourish the cells. The researchers were able to vary the diameter of the channels from 400 micrometers to 1 millimeter, and seamlessly connected them to form branching vascular networks within the tissues.
Organ-specific tissues that were printed with embedded vascular channels using SWIFT and perfused in this manner remained viable, while tissues grown without these channels experienced cell death in their cores within 12 hours. To see whether the tissues displayed organ-specific functions, the team printed, evacuated, and perfused a branching channel architecture into a matrix consisting of heart-derived cells and flowed media through the channels for over a week. During that time, the cardiac OBBs fused together to form a more solid cardiac tissue whose contractions became more synchronous and over 20 times stronger, mimicking key features of a human heart.
"Our SWIFT biomanufacturing method is highly effective at creating organ-specific tissues at scale from OBBs ranging from aggregates of primary cells to stem-cell-derived organoids," said corresponding author Jennifer Lewis, Sc.D., who is a Core Faculty Member at the Wyss Institute as well as the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS. "By integrating recent advances from stem-cell researchers with the bioprinting methods developed by my lab, we believe SWIFT will greatly advance the field of organ engineering around the world."
Collaborations are underway with Wyss Institute faculty members Chris Chen, M.D., Ph.D. at Boston University and Sangeeta Bhatia, M.D., Ph.D., at MIT to implant these tissues into animal models and explore their host integration, as part of the 3D Organ Engineering Initiative co-led by Lewis and Chris Chen.
Read more at Science Daily
Sep 6, 2019
Climate change could bring short-term gain, long-term pain for loggerhead turtles
An overwhelming scientific consensus affirms that for thousands of species across the globe, climate change is an immediate and existential threat.
For the loggerhead turtle, whose vast range extends from the chilly shores of Newfoundland to the blistering beaches of Australia, the story isn't so cut and dried.
New research from conservation biologists at Florida State University and their collaborators suggests that while some loggerheads will suffer from the effects of a changing climate, populations in certain nesting areas could stand to reap important short-term benefits from the shifting environmental conditions.
In an investigation of 17 loggerhead turtle nesting beaches along the coast of Brazil, scientists found that hatchling production -- the rate of successful hatching and emergence of hatchling turtles -- could receive a boost in temperate areas forecasted to warm under climate change. But those improvements could be relatively short lived.
"Even though hatchling success is projected to increase by the year 2100 in areas that currently have lower temperatures, it is likely that as climate change progresses and temperatures and precipitation levels approach negative thresholds, hatchling production at these locations will start to decrease," said study author Mariana Fuentes, an assistant professor in FSU's Department of Earth, Ocean and Atmospheric Science.
The study was published in the journal Scientific Reports.
During the incubation process, marine turtle eggs are heavily influenced by their environments. Air and sand temperatures can determine the sex of hatchlings, spikes in moisture content can drown developing embryos, and excessive solar radiation exposure can affect turtles' morphology and reduce their chances of survival.
In their study, the FSU researchers evaluated current and projected hatchling production under a variety of different environmental conditions throughout the expansive Brazilian coastline.
In the northern equatorial nesting beaches where temperatures already soar, the team found persistent and accelerating climate change will increase air temperatures and escalate precipitation beyond the thresholds for healthy incubation -- a major hit to hatchling production.
For the temperate beaches farther down the coast, climate change will bring similar increases in air temperature and precipitation. But, hundreds of miles from the equator, the effects of those changes look considerably different.
"These cooler beaches are also predicted to experience warming air temperatures; however, productivity is predicted to increase under both the extreme and conservative climate change scenarios," said former Florida State master's student Natalie Montero, who led the study.
Over the coming decades, as the climate shifts and temperatures climb, these conventionally cooler beaches will become more suitable for healthy loggerhead incubation. But if climate change continues unabated, "these beaches could also become too warm for successful production, much like the warmer beaches in our study," Montero said.
The researchers also stress that changes associated with a warming climate -- beach erosion, unchecked coastal development and environmental degradation, for example -- pose urgent threats to marine turtle nesting beaches at all latitudes, regardless of air temperature or precipitation.
And while contemporary and future shifts in climate conditions could benefit select loggerhead populations, well-documented warming trends suggest the long-term prospects of these and other ancient sea turtle species remain precarious.
Read more at Science Daily
Discovery of neuronal ensemble activities that are orchestrated to represent one memory
The brain stores memories in neuronal ensembles composed of cells termed engram cells. A unique system was established to convert neuronal population activity into light while discriminating engram from non-engram cells using fluorescent proteins. Using this system, the researchers revealed that engram sub-ensembles represent distinct pieces of information, which are then orchestrated to constitute an entire memory. In addition, some sub-ensembles preferentially reappear during post-learning sleep, and these replayed sub-ensembles are more likely to be reactivated during retrieval.
A Japanese research group supervised by Dr Noriaki Ohkawa (Lecturer) and Professor Kaoru Inokuchi of the University of Toyama established a system to investigate the characteristic activity of the cell ensembles that acquire a memory, and visualized how the memory of a novel episode is represented and consolidated in the brain.
Throughout our lives, we are exposed to many episodic events and memorize their information. This kind of memory, episodic memory, is processed in several brain regions, one of which is the hippocampus. It is established that, in the hippocampus, one specific episodic memory is stored within and retrieved from a neuronal ensemble composed of neurons, termed engram cells, that are activated during learning. Indeed, activation or inhibition of an engram cell ensemble induces or inhibits memory retrieval, respectively; thus, the engram cell ensemble represents the physiological manifestation of a specific memory trace. However, one episodic memory is composed of several components of the episode, and each component should be encoded by a specific substrate, e.g. an engram sub-ensemble. Nevertheless, it had not been clear how activity in these engram cells is assembled to represent the corresponding event, because technical limitations made it difficult to distinguish the population activity of engram cells from that of non-engram cells.
To address how one episodic memory is represented and consolidated in the engram cell ensemble, the activity of engram and non-engram cells must be visualized. Engram cells can be specifically targeted in c-fos-tTA mice because the neural activity associated with memory formation induces c-fos gene expression, which in turn induces activity-dependent tTA expression under the control of the c-fos promoter. In the absence of doxycycline, tTA can bind to the tetracycline responsive element (TRE), enabling downstream expression of the TRE-dependent transgene. When a neuron is active, Ca2+ flows into its soma. Thy1-G-CaMP7 mice express a Ca2+ indicator, G-CaMP7, in pyramidal neurons of hippocampal CA1, so neuronal activity is converted into G-CaMP7 fluorescence, a method called Ca2+ imaging. The team developed a technique that combines a head-mounted, miniature fluorescence microscope with Thy1-G-CaMP7/c-fos-tTA double transgenic mice. The hippocampal CA1 region of the double transgenic mice was injected with a lentivirus (LV) expressing a fluorescent protein, Kikume Green Red (KikGR), under the control of TRE. Using this approach, engram cells can be identified by KikGR, and the Ca2+ signals corresponding to the activity of engram and non-engram cells can be tracked during the experience of a novel episodic event.
The data indicated that the population activity of the engram cell ensemble exhibited a characteristic trait of highly repetitive activity during the novel episodic event. Next, to identify the components of one memory, the population activity was deconstructed into sub-ensemble groups. Non-negative matrix factorization (NMF) decomposes population activity into a time series of coactivated neuronal ensembles. Each sub-ensemble is composed of different, spatially intermingled cells whose activity is synchronous, even among the group of engram cells associated with a single event. These results suggest that the total information of one event is structured into engram sub-ensembles.
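For readers unfamiliar with the method, the sketch below shows how NMF is commonly applied to this kind of data using scikit-learn. The matrix sizes, number of components, and membership threshold are arbitrary illustrations, not the parameters used in the study.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy data: 100 cells x 600 time bins of nonnegative Ca2+ event counts.
rng = np.random.default_rng(1)
activity = rng.poisson(0.2, size=(100, 600)).astype(float)

# Decompose activity ~ W @ H, where W (cells x k) holds each cell's weight in
# each sub-ensemble and H (k x time) holds each sub-ensemble's activation.
k = 5  # number of sub-ensembles, chosen arbitrarily here
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(activity)
H = model.components_

# Cells loading strongly on sub-ensemble 0 (threshold is illustrative).
members = np.where(W[:, 0] > W[:, 0].mean() + 2 * W[:, 0].std())[0]
print("Candidate members of sub-ensemble 0:", members)
```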
To measure the activity of engram cells across different memory processing stages, Ca2+ transients were recorded from the novel experience through post-experience sleep to retrieval. Around 40% of the engram sub-ensembles that appeared during the novel experience were reactivated during post-experience sleep and then preferentially reappeared during retrieval sessions, whereas almost no non-engram sub-ensembles showed this feature. Thus, engram sub-ensembles that formed during a novel experience and were reactivated during sleep sessions were mostly reactivated again during the retrieval session. By contrast, most non-engram ensembles that were activated during the novel experience were not reactivated in the later sessions.
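The bookkeeping behind such percentages can be illustrated by treating each sub-ensemble as a set of member cells, as in the toy example below; the sets and the equality-based notion of "reactivated" are simplifying assumptions, not the study's actual matching criterion.

```python
# Toy session-to-session tracking of sub-ensembles (illustrative only).
experience = [frozenset({1, 4, 9}), frozenset({2, 7}), frozenset({3, 5, 8}), frozenset({0, 6})]
sleep      = [frozenset({1, 4, 9}), frozenset({3, 5, 8})]
retrieval  = [frozenset({1, 4, 9}), frozenset({3, 5, 8}), frozenset({2, 7})]

replayed = [e for e in experience if e in sleep]             # reactivated during sleep
reactivated_later = [e for e in replayed if e in retrieval]  # reappear at retrieval

print(f"Replayed during sleep: {len(replayed)}/{len(experience)}")
print(f"Replayed and reactivated at retrieval: {len(reactivated_later)}/{len(replayed)}")
```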
Read more at Science Daily
Largest-ever ancient-DNA study illuminates millennia of South and Central Asian prehistory
The largest-ever study of ancient human DNA, along with the first genome of an individual from the ancient Indus Valley Civilization, reveal in unprecedented detail the shifting ancestry of Central and South Asian populations over time.
The research, published online Sept. 5 in a pair of papers in Science and Cell, also answers longstanding questions about the origins of farming and the source of Indo-European languages in South and Central Asia.
Geneticists, archaeologists and anthropologists from North America, Europe, Central Asia and South Asia analyzed the genomes of 524 never-before-studied ancient individuals. The work increased the worldwide total of published ancient genomes by about 25 percent.
By comparing these genomes to one another and to previously sequenced genomes, and by putting the information into context alongside archaeological, linguistic and other records, the researchers filled in many of the key details about who lived in various parts of this region from the Mesolithic Era (about 12,000 years ago) to the Iron Age (until around 2,000 years ago) and how they relate to the people who live there today.
"With this many samples, we can detect subtle interactions between populations as well as outliers within populations, something that has only become possible in the last couple of years through technological advances," said David Reich, co-senior author of both papers and professor of genetics in the Blavatnik Institute at Harvard Medical School.
"These studies speak to two of the most profound cultural transformations in ancient Eurasia -- the transition from hunting and gathering to farming and the spread of Indo-European languages, which are spoken today from the British Isles to South Asia -- along with the movement of people," said Vagheesh Narasimhan, co-first author of both papers and a postdoctoral fellow in the Reich lab. "The studies are particularly significant because Central and South Asia are such understudied parts of the world."
"One of the most exciting aspects of this study is the way it integrates genetics with archaeology and linguistics," said Ron Pinhasi of the University of Vienna, co-senior author of the Science paper. "The new results emerged after combining data, methods and perspectives from diverse academic disciplines, an integrative approach that provides much more information about the past than any one of these disciplines could alone."
"In addition, the introduction of new sampling methodologies allowed us to minimize damage to skeletons while maximizing the chance of obtaining genetic data from regions where DNA preservation is often poor," Pinhasi added.
Language key
Indo-European languages -- including Hindi/Urdu, Bengali, Punjabi, Persian, Russian, English, Spanish, Gaelic and more than 400 others -- make up the largest language family on Earth.
For decades, specialists have debated how Indo-European languages made their way to distant parts of the world. Did they spread via herders from the Eurasian Steppe? Or did they travel with farmers moving west and east from Anatolia (present-day Turkey)?
A 2015 paper by Reich and colleagues indicated that Indo-European languages arrived in Europe via the steppe. The Science study now makes a similar case for South Asia by showing that present-day South Asians have little if any ancestry from farmers with Anatolian roots.
"We can rule out a large-scale spread of farmers with Anatolian roots into South Asia, the centerpiece of the 'Anatolian hypothesis' that such movement brought farming and Indo-European languages into the region," said Reich, who is also an investigator of the Howard Hughes Medical Institute and the Broad Institute. "Since no substantial movements of people occurred, this is checkmate for the Anatolian hypothesis."
One new line of evidence in favor of a steppe origin for Indo-European languages is the detection of genetic patterns that connect speakers of the Indo-Iranian and Balto-Slavic branches of Indo-European. The researchers found that present-day speakers of both branches descend from a subgroup of steppe pastoralists who moved west toward Europe almost 5,000 years ago and then spread back eastward into Central and South Asia in the following 1,500 years.
"This provides a simple explanation in terms of ancient movements of people for the otherwise puzzling shared linguistic features of these two branches of Indo-European, which today are separated by vast geographic distances," said Reich.
A second line of evidence in favor of a steppe origin is the researchers' discovery that of the 140 present-day South Asian populations analyzed in the study, a handful show a remarkable spike in ancestry from the steppe. All but one of these steppe-enriched populations are historically priestly groups, including Brahmins -- traditional custodians of texts written in the ancient Indo-European language Sanskrit.
"The finding that Brahmins often have more steppe ancestry than other groups in South Asia, controlling for other factors, provides a fascinating new argument in favor of a steppe origin for Indo-European languages in South Asia," said Reich.
"This study has filled in a large piece of the puzzle of the spread of Indo-European," said co-author Nick Patterson, research fellow in genetics at HMS and a staff scientist at the Broad Institute of MIT and Harvard. "I believe the high-level picture is now understood."
"This problem has been in the air for 200 years or more and it's now rapidly being sorted out," he added. "I'm very excited by that."
Agriculture origins
The studies inform another longstanding debate, this one about whether the change from a hunting and gathering economy to one based on farming was driven more by movements of people, the copying of ideas or local invention.
In Europe, ancient-DNA studies have shown that agriculture arrived along with an influx of people with ancestry from Anatolia.
The new study reveals a similar dynamic in Iran and Turan (southern Central Asia), where the researchers found that Anatolian-related ancestry and farming arrived around the same time.
"This confirms that the spread of agriculture entailed not only a westward route from Anatolia to Europe but also an eastward route from Anatolia into regions of Asia previously only inhabited by hunter-gatherer groups," said Pinhasi.
Then, as farming spread northward through the mountains of Inner Asia thousands of years after taking hold in Iran and Turan, "the links between ancestry and economy get more complex," said archaeologist Michael Frachetti of Washington University in St. Louis, co-senior author who led much of the skeletal sampling for the Science paper.
By around 5,000 years ago, the researchers found, southwestern Asian ancestry flowed north along with farming technology, while Siberian or steppe ancestry flowed south onto the Iranian plateau. The two-way pattern of movement took place along the mountains, a corridor that Frachetti previously showed was a "Bronze Age Silk Road" along which people exchanged crops and ideas between East and West.
In South Asia, however, the story appears quite different. Not only did the researchers find no trace of the Anatolian-related ancestry that is a hallmark of the spread of farming to the west, but the Iranian-related ancestry they detected in South Asians comes from a lineage that separated from ancient Iranian farmers and hunter-gatherers before those groups split from each other.
The researchers concluded that farming in South Asia was not due to the movement of people from the earlier farming cultures of the west; instead, local foragers adopted it.
"Prior to the arrival of steppe pastoralists bringing their Indo-European languages about 4,000 years ago, we find no evidence of large-scale movements of people into South Asia," said Reich.
First glimpse of the ancestry of the Indus Valley Civilization
Running from the Himalayas to the Arabian Sea, the Indus River Valley was the site of one of the first civilizations of the ancient world, flourishing between 4,000 and 5,000 years ago. People built towns with populations in the tens of thousands. They used standardized weights and measures and exchanged goods with places as far-flung as East Africa.
But who were they?
Before now, geneticists were unable to extract viable data from skeletons buried at Indus Valley Civilization archaeological sites because the heat and volatile climate of lowland South Asia have degraded most DNA beyond scientists' ability to analyze it.
The Cell paper changes this.
After screening more than 60 skeletal samples from the largest known town of the Indus Valley Civilization, called Rakhigarhi, the authors found one with a hint of ancient DNA. After more than 100 sequencing attempts, they generated enough data to reach meaningful conclusions.
The ancient woman's genome matched those of 11 other ancient people reported in the Science paper who lived in what is now Iran and Turkmenistan at sites known to have exchanged objects with the Indus Valley Civilization. All 12 had a distinctive mix of ancestry, including a lineage related to Southeast Asian hunter-gatherers and an Iranian-related lineage specific to South Asia. Because this mix was different from the majority of people living in Iran and Turkmenistan at that time, the authors propose that the 11 individuals reported in the Science paper were migrants, likely from the Indus Valley Civilization.
None of the 12 had evidence of ancestry from steppe pastoralists, consistent with the model that this group had not yet arrived in South Asia.
The Science paper further showed that after the decline of the Indus Valley Civilization between 4,000 and 3,500 years ago, a portion of the group to which these 12 individuals belonged mixed with people coming from the north who had steppe pastoralist ancestry, forming the Ancestral North Indians, one of the two primary ancestral populations of present-day people in India. A portion of the original group also mixed with people from peninsular India to form the other primary source population, the Ancestral South Indians.
"Mixtures of the Ancestral North Indians and Ancestral South Indians -- both of whom owe their primary ancestry to people like that of the Indus Valley Civilization individual we sequenced -- form the primary ancestry of South Asians today," said Patterson.
"The study directly ties present-day South Asians to the ancient peoples of South Asia's first civilization," added Narasimhan.
Read more at Science Daily
Ancient animal species: Fossils dating back 550 million years among first animal trails
Three Gorges, Yangtze River, China.
Shuhai Xiao, a professor of geosciences with the Virginia Tech College of Science, calls the unearthed fossils, including the bodies and trails left by an ancient animal species, the most convincing sign of ancient animal mobility, dating back about 550 million years. Named Yilingia spiciformis -- which translates to spiky Yiling bug, Yiling being the Chinese city near the discovery site -- the animal was found in multiple layers of rock by Xiao and Zhe Chen, Chuanming Zhou, and Xunlai Yuan from the Chinese Academy of Sciences' Nanjing Institute of Geology and Palaeontology.
The findings are published in the latest issue of Nature. The trails are from the same rock unit and are roughly the same age as bug-like footprints found by Xiao and his team in a series of digs from 2013 to 2018 in the Yangtze Gorges area of southern China, and date back to the Ediacaran Period, well before the age of dinosaurs or even the Pangea supercontinent. What sets this find apart is that the fossil of the animal that made the trail is preserved alongside it, removing the guesswork involved when the body has not been preserved.
"This discovery shows that segmented and mobile animals evolved by 550 million years ago," Xiao said. "Mobility made it possible for animals to make an unmistakable footprint on Earth, both literally and metaphorically. Those are the kind of features you find in a group of animals called bilaterans. This group includes us humans and most animals. Animals and particularly humans are movers and shakers on Earth. Their ability to shape the face of the planet is ultimately tied to the origin of animal motility."
The animal was a millipede-like creature a quarter-inch to an inch wide and up to 4 inches long that alternately dragged its body across the muddy ocean floor and rested along the way, leaving trails as long as 23 inches. The animal was an elongated narrow creature, with 50 or so body segments, a left and right side, a back and belly, and a head and a tail.
The origin of bilaterally symmetric animals -- known as bilaterians -- with segmented bodies and directional mobility is a monumental event in early animal evolution, and is estimated to have occurred during the Ediacaran Period, between 635 and 539 million years ago. But until this finding by Xiao and his team, there was no convincing fossil evidence to substantiate those estimates. One of the recovered specimens is particularly vital because the animal and the trail it produced just before its death are preserved together.
Remarkably, the find also marks what may be the first sign of decision making among animals -- the trails suggest an effort to move toward or away from something, perhaps under the direction of a sophisticated central nervous system, Xiao said. The mobility of animals led to environmental and ecological impacts on the Earth's surface system and ultimately led to the Cambrian substrate and agronomic revolutions, he said.
"We are the most impactful animal on Earth," added Xiao, also an affiliated member of the Global Change Center at Virginia Tech. "We make a huge footprint, not only from locomotion, but in many other and more impactful activities related to our ability to move. When and how animal locomotion evolved defines an important geological and evolutionary context of anthropogenic impact on the surface of the Earth."
Rachel Wood, a professor in the School of GeoSciences at University of Edinburgh in Scotland, who was not involved with the study, said, "This is a remarkable finding of highly significant fossils. We now have evidence that segmented animals were present and had gained an ability to move across the sea floor before the Cambrian, and more notably we can tie the actual trace-maker to the trace. Such preservation is unusual and provides considerable insight into a major step in the evolution of animals."
Read more at Science Daily
Sep 4, 2019
Similarities in human, chimpanzee, and bonobo eye color patterns revealed
Studies have suggested that the contrast between the whites of human eyes -- known as the sclerae -- and the colourful irises allows others to detect the direction of our gaze. The ability to detect gaze is important as many other human skills, such as social learning, seem to depend on this.
In contrast, as the sclerae of apes' eyes are often darker than those of human eyes, researchers have long argued that their gaze is 'cryptic', or hidden. This means that nonhuman apes would not be able to see where other members of their species are looking.
Now, researchers from the National University of Singapore (NUS), together with collaborators from the University of St Andrews and Leiden University, have discovered that ape eyes possess the same pattern of colour differences as human beings. Doctoral student Mr Juan O. Perea-García and Associate Professor Antónia Monteiro from the Department of Biological Sciences at the NUS Faculty of Science suggest that this discovery may mean apes also follow each other's gaze.
Their findings were published in Proceedings of the National Academy of Sciences (PNAS) on 3 September 2019.
Eye-opening results
The research team compared the darkness of the sclerae with that of the irises in over 150 humans, bonobos and chimpanzees. The researchers found that bonobos, like humans, have paler sclerae and darker irises. Chimpanzees were found to have the opposite pattern -- very dark sclerae and paler irises. Both of these colour patterns show the same type of contrast seen in human eyes, and could help other apes work out where an individual is looking.
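One simple way to express such a comparison is as a signed contrast between mean sclera and iris brightness, as in the sketch below; the grayscale values are made up and this is not the metric reported in the PNAS paper.

```python
import numpy as np

def relative_contrast(sclera_pixels: np.ndarray, iris_pixels: np.ndarray) -> float:
    """Signed contrast between mean sclera and iris brightness (0-255 grayscale).

    Positive values mean the sclera is paler than the iris (human/bonobo-like);
    negative values mean the iris is paler than the sclera (chimpanzee-like).
    Either sign gives a visible contrast that could reveal gaze direction.
    """
    return (sclera_pixels.mean() - iris_pixels.mean()) / 255.0

human_like = relative_contrast(np.array([210., 220., 205.]), np.array([60., 70., 65.]))
chimp_like = relative_contrast(np.array([40., 50., 45.]), np.array([120., 130., 125.]))
print(f"human/bonobo-like: {human_like:+.2f}, chimpanzee-like: {chimp_like:+.2f}")
```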
"Humans are unique in many ways, as no other animal can communicate with similar intricate language or build tools of such complexity. Gaze following is an important component of many behaviours that are thought to be characteristically human, so our findings suggest that apes might also engage in these behaviours," said Mr Perea-García.
Furthering our ancestral understanding
Before humans had language, our ancestors might have used the gaze of those around them to help communicate dangers or other useful information. They might not have been able to say, "Look over there!" However, a look in the direction of the predator might be sufficient, as long as it was possible to follow the direction of their gaze.
Apart from helping us understand how our ancestors communicated, this study suggests some interesting new research directions. These include questions about why human beings and bonobos evolved such similar eye colour patterns, despite bonobos being more closely related to chimpanzees.
Read more at Science Daily
How California wildfires can impact water availability
In recent years, wildfires in the western United States have occurred with increasing frequency and scale. Climate change scenarios in California predict prolonged periods of drought with potential for conditions even more amenable to wildfires. The Sierra Nevada Mountains provide up to 70% of the state's water resources, yet there is little known on how wildfires will impact water resources in the future.
A new study by scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) uses a numerical model of an important watershed in California to shed light on how wildfires can affect large-scale hydrological processes, such as stream flow, groundwater levels, and snowpack and snowmelt. The team found that post-wildfire conditions resulted in greater winter snowpack and subsequently greater summer runoff as well as increased groundwater storage.
The study, "Watershed Dynamics Following Wildfires: Nonlinear Feedbacks and Implications on Hydrologic Responses," was published recently in the journal, Hydrological Processes.
"We wanted to understand how changes at the land surface can propagate to other locations of the watershed," said the study's lead author, Fadji Maina, a postdoctoral fellow in Berkeley Lab's Earth & Environmental Sciences Area. "Previous studies have looked at individual processes. Our model ties it together and looks at the system holistically."
The researchers modeled the Cosumnes River watershed, which extends from the Sierra Nevadas, starting just southwest of Lake Tahoe, down to the Central Valley, ending just north of the Sacramento Delta. "It's pretty representative of many watersheds in the state," said Berkeley Lab researcher Erica Woodburn, co-author of the study. "We had previously constructed this model to understand how watersheds in the state might respond to climate change extremes. In this study, we used the model to numerically explore how post-wildfire land cover changes influenced water partitioning in the landscape over a range of spatial and temporal resolutions."
Using high-performance computing to simulate watershed dynamics over a period of one year, and assuming a 20% burn area based on historical occurrences, the researchers were able to identify the regions in the watershed that were most sensitive to wildfire conditions, as well as the hydrologic processes that are most affected.
Some of the findings were counterintuitive, the researchers said. For example, evapotranspiration, or the loss of water to the atmosphere from soil, leaves, and through plants, typically decreases after wildfire. However, some regions in the Berkeley Lab model experienced an increase due to changes in surface water runoff patterns in and near burn scars.
"After a fire there are fewer trees, which leads to an expectation of less evapotranspiration," Maina said. "But in some locations we actually saw an increase. It's because the fire can change the subsurface distribution of groundwater. So there are nonlinear and propagating impacts of changing the land cover that leads to opposite trends than what you might expect from altering the land cover."
Changing the land cover leads to a change in snowpack dynamics. "That will change how much and when the snow melts and feeds the rivers," Woodburn said. "That in turn will impact groundwater. It's a cascading effect. In the model we quantify how much it moves in space and time, which is something you can only do accurately with the type of high resolution model we've constructed."
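The cascade Woodburn describes can be caricatured with a toy annual water balance. The sketch below is not the Berkeley Lab model, which is a high-resolution, physically based simulation; it only shows how reducing vegetation on a burned fraction of a watershed shifts water from evapotranspiration into runoff and groundwater recharge. All coefficients are invented for illustration.

```python
# Toy annual water balance for a watershed, before and after a wildfire.
# NOT the Berkeley Lab integrated hydrologic model; all numbers are illustrative.

def water_balance(precip_mm, burned_fraction):
    # Vegetation intercepts and transpires part of the precipitation.
    # Burned areas have less canopy, so less evapotranspiration (ET) there.
    et_intact = 0.45 * precip_mm          # ET on unburned land
    et_burned = 0.25 * precip_mm          # reduced ET on burned land
    et = (1 - burned_fraction) * et_intact + burned_fraction * et_burned

    # Water not lost to ET is split between snowmelt-driven runoff and recharge.
    available = precip_mm - et
    runoff = 0.7 * available
    recharge = 0.3 * available
    return {"ET": et, "runoff": runoff, "groundwater recharge": recharge}

pre_fire = water_balance(precip_mm=1000, burned_fraction=0.0)
post_fire = water_balance(precip_mm=1000, burned_fraction=0.2)  # 20% burn, as in the study

for key in pre_fire:
    print(f"{key}: {pre_fire[key]:.0f} mm -> {post_fire[key]:.0f} mm")
# Less ET on the burned fraction leaves more water for runoff and recharge,
# the overall direction reported in the study; the real model also captures
# the local, nonlinear exceptions described above.
```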
She added: "The changes to stream flow and groundwater levels following a wildfire are especially important metrics for water management stakeholders, who largely rely on this natural resource but have little way of understanding how they might be impacted given wildfires in the future. The study is really illustrative of the integrative nature of hydrologic processes across the Sierra Nevada-Central Valley interface in the state."
Read more at Science Daily
New light shed on demise of two extinct New Zealand songbirds
They may not have been seen for the past 50 and 110 years respectively, but an international study into their extinction has provided answers as to how the world lost New Zealand's South Island kokako and huia.
Lead author Dr Nicolas Dussex, of the University of Otago, New Zealand, and Swedish Museum of Natural History, says the team set out to investigate if it was external (habitat loss, mammalian predators) or internal (demography, genetic effects) factors which led to their extinction.
Very little was known about the forest songbirds he describes as "iconic and somehow mysterious," which were last seen in the 1960s and 1907 respectively, but recent advances in the extraction and analysis of ancient DNA provided the opportunity scientists needed to find out more.
The study, just published in Biology Letters, produced what Dr Dussex calls "very surprising" results.
The researchers mapped the birds' complete genomes and saw a response to ice age climate change many thousands of years ago, but no signs of genetic problems common in small populations such as inbreeding. This suggests a rapid population decline possibly caused by habitat loss and new predators introduced by the Europeans.
"Because even the earliest Polynesian settlers more than 700 years ago had a significant impact on forest cover, we would have expected huia and South Island kokako populations to have survived at small population sizes for centuries and thus to have experienced an increase in inbreeding.
"However, our data did not show evidence for inbreeding and indicated that the two species still had quite a bit of genetic diversity close to the time of extinction. This means that their extinction was most likely not driven by genetic effects and inbreeding, but that further habitat loss and introduction of mammalian predators by European must have triggered a rapid extinction," he says.
Dr Dussex says this is the first study to generate high-quality genomes from historical specimens of extinct New Zealand species.
"Using complete genomes allowed us to reconstruct the birds' population history, and, more importantly, to determine whether genetic effects could have contributed to their extinction.
"While we focused here on two extinct species, understanding the role of genetic effects in the extinction process is extremely relevant to the study of declining and inbred populations, such as the kakapo, saddleback, and kiwi. This knowledge can thus contribute the conservation and recovery of endangered species potentially exposed to negative genetic effects."
Co-author Dr Michael Knapp, of Otago's Department of Anatomy, says the team hopes its work will stimulate similar research in other extinct or endangered endemic species from New Zealand.
Read more at Science Daily
New peanut allergy treatment shows effectiveness and safety
People allergic to peanuts may have a new way to protect themselves from severe allergic reactions to accidental peanut exposure. It's called sublingual immunotherapy -- or SLIT -- and it involves putting a minuscule amount of liquefied peanut protein under the tongue, where it is absorbed immediately into the bloodstream to desensitize the immune system to larger amounts of peanut protein.
Published in the Journal of Allergy and Clinical Immunology, the research led by first author Edwin Kim, MD, assistant professor of medicine at the UNC School of Medicine, shows that SLIT could offer patients a safe and effective way to protect themselves from severe allergic reactions or even anaphylaxis.
"As a parent of two children with nut allergies, I know the fear parents face and the need for better treatments," said Kim, member of the UNC Children's Research Institute. "We now have the first long-term data showing that sublingual immunotherapy is safe and tolerable, while offering a strong amount of protection."
Clinician scientists have developed three main immunotherapeutic approaches to treat nut allergies, all of which attempt to desensitize the immune system to nut proteins so that patients can avoid severe allergic reactions. According to Kim, about 100 mg of peanut protein can trigger a severe allergic reaction. That's the sort of trace amount people fear can show up in food "manufactured in a facility that processes peanuts." For reference, one peanut kernel contains about 300 mg.
"The main idea beyond immunotherapy is not for kids to be able to eat peanut butter and jelly sandwiches," Kim said. "It's to keep them safe from the small hidden exposures that could occur with packaged foods, at restaurants, and with other food exposures."
One immunotherapy method involves a patch on the skin that releases a small amount of peanut protein through the skin to desensitize the immune system. This approach has proved safe in clinical research but perhaps not as effective as researchers had hoped. It could become an FDA-approved treatment.
A second approach is called oral immunotherapy (OIT), which is currently under FDA review and a decision is expected this year. OIT requires patients to ingest a small portion of peanut protein daily, and over the course of time this can desensitize the immune system to accidental exposures. In a large phase 3 OIT clinical trial, patients initially ingested 0.5 mg of peanut and increased the amount to 300 mg over the course of many weeks and then maintained that 300 mg daily intake for the remainder of the year. This trial showed substantial effectiveness in protecting patients but some patients suffered serious side effects. A subsequent meta-analysis of OIT clinical trial data published in The Lancet in April suggested that more clinical research on OIT was needed due to the risk of serious side effects.
A third approach is SLIT. Instead of having patients ingest peanut protein, doctors place a small amount of peanut protein under patients' tongues, where it is immediately absorbed. Because the peanut protein avoids digestion, patients are given much less peanut protein -- about 0.0002 mg initially. This amount then increases over the course of months to just 2 mg.
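To give a sense of the scale of these doses, the sketch below generates a hypothetical geometric dose-escalation schedule from 0.0002 mg up to the 2 mg maintenance dose. The article does not specify the actual trial's step sizes or timing, so the schedule here is illustrative only.

```python
# Hypothetical SLIT dose-escalation schedule (illustrative only; not the trial protocol).
# Starts at 0.0002 mg of peanut protein and roughly doubles per step until reaching 2 mg.

start_mg = 0.0002
target_mg = 2.0

dose = start_mg
step = 0
while dose < target_mg:
    step += 1
    print(f"step {step:2d}: {dose:.4f} mg")
    dose = min(dose * 2, target_mg)
print(f"step {step + 1:2d}: {dose:.4f} mg (maintenance)")

# Even the 2 mg maintenance dose is far below the ~100 mg that can trigger
# a severe reaction and the ~300 mg contained in a single peanut kernel.
```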
In 2011, Kim and colleagues -- including Wesley Burks, MD, dean of the UNC School of Medicine -- conducted a small study of 18 patients to show that SLIT was safe and effective over the course of one year. Since then, Kim and colleagues followed 48 patients in the SLIT protocol of 2 mg daily for five years. In the JACI paper, the researchers showed that 67 percent of these patients were able to tolerate at least 750 mg of peanut protein without serious side effects. About 25 percent could tolerate 5000 mg.
Kim's data shows SLIT was about as effective as OIT, though the SLIT study was much smaller. And SLIT posed much less risk of serious side effects. The most common side effect was itchiness around the mouth that lasted about 15 minutes and did not need treatment. No one left the multi-year study because of side effects.
"SLIT participants tolerated between 10 and 20 times more peanut protein than it would take for someone to get sick," Kim said. "We think this provides a good cushion of protection -- maybe not quite as good as OIT -- but with an easier mechanism (sublingually) and, as far as we can tell right now, a better safety signal."
Kim's lab has finished a separate SLIT study of 4 mg daily for 55 patients over the course of four years. He hopes to publish results later this year. "With sublingual immunotherapy, we hope we can maintain our safety profile while seeing an even stronger benefit for patients," Kim said.
Read more at Science Daily
T. Rex had an air conditioner in its head, study suggests
T. rex skeleton, head and neck
In the past, scientists believed two large holes in the roof of a T. rex's skull -- called the dorsotemporal fenestrae -- were filled with muscles that assist with jaw movements.
But that assertion puzzled Casey Holliday, a professor of anatomy in the MU School of Medicine and lead researcher on the study.
"It's really weird for a muscle to come up from the jaw, make a 90-degree turn, and go along the roof of the skull," Holliday said. "Yet, we now have a lot of compelling evidence for blood vessels in this area, based on our work with alligators and other reptiles."
Using thermal imaging -- devices that translate heat into visible light -- researchers examined alligators at the St. Augustine Alligator Farm Zoological Park in Florida. They believe their evidence offers a new theory and insight into the anatomy of a T. rex's head.
"An alligator's body heat depends on its environment," said Kent Vliet, coordinator of laboratories at the University of Florida's Department of Biology. "Therefore, we noticed when it was cooler and the alligators are trying to warm up, our thermal imaging showed big hot spots in these holes in the roof of their skull, indicating a rise in temperature. Yet, later in the day when it's warmer, the holes appear dark, like they were turned off to keep cool. This is consistent with prior evidence that alligators have a cross-current circulatory system -- or an internal thermostat, so to speak."
Holliday and his team took their thermal imaging data and examined fossilized remains of dinosaurs and crocodiles to see how this hole in the skull changed over time.
"We know that, similarly to the T. rex, alligators have holes on the roof of their skulls, and they are filled with blood vessels," said Larry Witmer, professor of anatomy at Ohio University's Heritage College of Osteopathic Medicine. "Yet, for over 100 years we've been putting muscles into a similar space with dinosaurs. By using some anatomy and physiology of current animals, we can show that we can overturn those early hypotheses about the anatomy of this part of the T. rex's skull."
From Science Daily
Sep 3, 2019
New whale species discovered along the coast of Hokkaido
A new beaked whale species, Berardius minimus, which has long been postulated by local whalers in Hokkaido, Japan, has been confirmed.
In a collaboration between the National Museum of Nature and Science, Hokkaido University, Iwate University, and the United States National Museum of Natural History, a beaked whale species which has long been called Kurotsuchikujira (black Baird's beaked whale) by local Hokkaido whalers has been confirmed as the new cetacean species Berardius minimus (B. minimus).
Beaked whales prefer deep ocean waters and have a long diving capacity, making them hard to see and inadequately understood. The Stranding Network Hokkaido, a research group founded and managed by Professor Takashi F. Matsuishi of Hokkaido University, collected six stranded unidentified beaked whales along the coasts of the Okhotsk Sea.
The whales shared characteristics of B. bairdii (Baird's beaked whale) and were classified as belonging to the same genus Berardius. However, a number of distinguishable external characteristics, such as body proportions and color, led the researchers to investigate whether these beaked whales belong to a currently unclassified species.
"Just by looking at them, we could tell that they have a remarkably smaller body size, more spindle-shaped body, a shorter beak, and darker color compared to known Berardius species," explained Curator Emeritus Tadasu K. Yamada of the National Museum of Nature and Science from the research team.
In the current study, the specimens of this unknown species were studied in terms of their morphology, osteology, and molecular phylogeny. The results, published in the journal Scientific Reports, showed that the body length of physically mature individuals is distinctively smaller than B. bairdii (6.2-6.9m versus 10.0m). Detailed cranial measurements and DNA analyses further emphasized the significant difference from the other two known species in the genus Berardius. Due to it having the smallest body size in the genus, the researchers named the new species B. minimus.
"There are still many things we don't know about B. minimus," said Takashi F. Matsuishi. "We still don't know what adult females look like, and there are still many questions related to species distribution, for example. We hope to continue expanding what we know about B. minimus."
Local Hokkaido whalers also refer to some whales in the region as Karasu (crow). It is still unclear whether B. minimus (or Kurotsuchikujira) and Karasu are the same species or not, and the research team speculate that it is possible Karasu could be yet another different species.
Read more at Science Daily
Human perception of colors does not rely entirely on language
After patient RDS (identified only by his initials for privacy) suffered a stroke, he experienced a rare and unusual side effect: when he saw something red, blue, green, or any other chromatic hue, he could not name the object's color.
Using RDS as a subject, a study publishing on September 3 in the journal Cell Reports looks at how language shapes human thinking. Neuroscientists and philosophers have long wrestled with the interaction between language and thought: do names shape the way we categorize what we perceive, or do they correspond to categories that arise from perception?
To name the color red, for instance, we think of a red item as one of many in a vaguely defined spectrum that encompasses the concept "red." In this sense, we perform an act of categorization each time we call something by its name -- we group colors into discrete categories to identify mustard as a shade of yellow, for instance, or place teal in the blue family.
Senior author Paolo Bartolomeo, a neurologist at the Brain and Spine Institute in Salpêtrière Hospital in Paris, says, "We perceive colors as continuous. There is no sharp boundary between, say, red and blue. And yet conceptually we group colors into categories associated with color names.
"In our study, we had the unique opportunity to address the role of language in color categorization by testing a patient who couldn't effectively name colors after a stroke," he says.
Many scientists believe categorizing colors depends on top-down input from the language system to the visual cortex. Color names are believed to be stored in the brain's left hemisphere and to depend on language-related activity in the left side of the brain.
Conversely, these latest findings support recent neuroimaging studies suggesting that color categorization is distributed bilaterally in the human brain.
Viewing discs containing two colors from the same color category (e.g., two blue shades) or from different categories (e.g., brown and red), RDS was asked to identify same-category colors. He was also asked to name 34 color patches presented on a computer screen; eight of these patches were achromatic (white, black and grey), and 26 were chromatic.
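A bare-bones version of the two-colour categorization task can be sketched as follows. The colour categories, trials and responses below are invented and only meant to make the task structure concrete; they are not the study's stimulus set or scoring procedure.

```python
# Minimal sketch of a same-category vs different-category colour task.
# Colour categories and trials are invented; this is not the study's stimulus set.

CATEGORY = {
    "navy": "blue", "sky": "blue", "teal": "blue",
    "crimson": "red", "brick": "red",
    "mustard": "yellow", "lemon": "yellow",
}

# Each trial shows a disc with two colours; the task is to say whether they
# belong to the same colour category.
trials = [("navy", "sky"), ("crimson", "mustard"), ("teal", "navy"), ("brick", "lemon")]
responses = ["same", "different", "same", "different"]  # hypothetical answers

correct = sum(
    (CATEGORY[a] == CATEGORY[b]) == (resp == "same")
    for (a, b), resp in zip(trials, responses)
)
print(f"categorization accuracy: {correct}/{len(trials)}")
# The point of the task: grouping colours into categories can be tested
# without requiring the patient to name any of them.
```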
Before his stroke, RDS perceived and named colors normally. After the stroke, an MRI revealed a lesion in the left side of his brain. This lesion apparently severed RDS's memory of color names from his visual perception of colors and his language system. Yet RDS could still group most colors -- even colors he couldn't name -- into categories such as dark or light or as being a mixture of other colors.
"We were surprised by his ability to consistently name so-called achromatic colors such as black, white, and gray, as opposed to his impaired naming of chromatic ones such as red, blue, and green," says the first author of the study, PhD student Katarzyna Siuda-Krzywicka. This suggested that our language system may process black, white, and gray differently from chromatic colors. Such striking dissociations raise important questions about how different color-related signals are segregated and integrated in the brain, she says.
To ensure that RDS's behavior did not reflect abnormal brain organization, the researchers compared the functioning of his unaffected brain areas to that of the same brain areas in healthy subjects and developed a non-verbal color-categorization test. "Our result -- that his color categories were independent from language -- could be generalized to healthy adults," Bartolomeo says.
Read more at Science Daily
Reactor turns greenhouse gas into pure liquid fuel
Models of carbon dioxide molecules
The catalytic reactor developed by the Rice University lab of chemical and biomolecular engineer Haotian Wang uses carbon dioxide as its feedstock and, in its latest prototype, produces formic acid in high concentrations and high purity.
Formic acid produced by traditional carbon dioxide conversion devices needs costly and energy-intensive purification steps, Wang said. The direct production of pure formic acid solutions will help to promote commercial carbon dioxide conversion technologies.
The method is detailed in Nature Energy.
Wang, who joined Rice's Brown School of Engineering in January, and his group pursue technologies that turn greenhouse gases into useful products. In tests, the new electrocatalyst reached an energy conversion efficiency of about 42%. That means nearly half of the electrical energy can be stored in formic acid as liquid fuel.
"Formic acid is an energy carrier," Wang said. "It's a fuel-cell fuel that can generate electricity and emit carbon dioxide -- which you can grab and recycle again.
"It's also fundamental in the chemical engineering industry as a feedstock for other chemicals, and a storage material for hydrogen that can hold nearly 1,000 times the energy of the same volume of hydrogen gas, which is difficult to compress," he said. "That's currently a big challenge for hydrogen fuel-cell cars."
Two advances made the new device possible, said lead author and Rice postdoctoral researcher Chuan Xia. The first was his development of a robust, two-dimensional bismuth catalyst and the second a solid-state electrolyte that eliminates the need for salt as part of the reaction.
"Bismuth is a very heavy atom, compared to transition metals like copper, iron or cobalt," Wang said. "Its mobility is much lower, particularly under reaction conditions. So that stabilizes the catalyst." He noted the reactor is structured to keep water from contacting the catalyst, which also helps preserve it.
Xia can make the nanomaterials in bulk. "Currently, people produce catalysts on the milligram or gram scales," he said. "We developed a way to produce them at the kilogram scale. That will make our process easier to scale up for industry."
The polymer-based solid electrolyte is coated with sulfonic acid ligands to conduct positive charge or amino functional groups to conduct negative ions. "Usually people reduce carbon dioxide in a traditional liquid electrolyte like salty water," Wang said. "You want the electricity to be conducted, but pure water electrolyte is too resistant. You need to add salts like sodium chloride or potassium bicarbonate so that ions can move freely in water.
"But when you generate formic acid that way, it mixes with the salts," he said. "For a majority of applications you have to remove the salts from the end product, which takes a lot of energy and cost. So we employed solid electrolytes that conduct protons and can be made of insoluble polymers or inorganic compounds, eliminating the need for salts."
The rate at which water flows through the product chamber determines the concentration of the solution. Slow throughput with the current setup produces a solution that is nearly 30% formic acid by weight, while faster flows allow the concentration to be customized. The researchers expect to achieve higher concentrations from next-generation reactors that accept gas flow to bring out pure formic acid vapors.
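A simple steady-state mass balance shows why throughput controls the concentration: at a fixed formic acid production rate, slower water flow through the product chamber gives a higher weight fraction. The production rate and flow values in the sketch below are invented for illustration and are not taken from the Nature Energy paper.

```python
# Steady-state mass balance for the product stream (illustrative numbers only).
# concentration (wt%) = acid production rate / (acid production rate + water flow rate)

def formic_acid_wt_percent(acid_g_per_h, water_g_per_h):
    return 100 * acid_g_per_h / (acid_g_per_h + water_g_per_h)

acid_rate = 30.0  # g of formic acid produced per hour (hypothetical)

for water_rate in (70.0, 150.0, 300.0):  # g of water per hour through the chamber
    wt = formic_acid_wt_percent(acid_rate, water_rate)
    print(f"water flow {water_rate:5.0f} g/h -> {wt:4.1f} wt% formic acid")
# The slowest flow here gives ~30 wt%, in line with the concentration reported
# for slow throughput; faster flows dilute the product.
```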
The Rice lab worked with Brookhaven National Laboratory to view the process in progress. "X-ray absorption spectroscopy, a powerful technique available at the Inner Shell Spectroscopy (ISS) beamline at Brookhaven Lab's National Synchrotron Light Source II, enables us to probe the electronic structure of electrocatalysts in operando -- that is, during the actual chemical process," said co-author Eli Stavitski, lead beamline scientist at ISS. "In this work, we followed bismuth's oxidation states at different potentials and were able to identify the catalyst's active state during carbon dioxide reduction."
With its current reactor, the lab generated formic acid continuously for 100 hours with negligible degradation of the reactor's components, including the nanoscale catalysts. Wang suggested the reactor could be easily retooled to produce such higher-value products as acetic acid, ethanol or propanol fuels.
Read more at Science Daily
How humans have shaped dogs' brains
Puppies of various dog breeds
Over several hundred years, humans have selectively bred dogs to express specific physical and behavioral characteristics. Erin Hecht and colleagues investigated the effects of this selective pressure on brain structure by analyzing magnetic resonance imaging scans of 33 dog breeds. The research team observed wide variation in brain structure that was not simply related to body size or head shape.
The team then examined the areas of the brain with the most variation across breeds. This generated maps of six brain networks, with proposed functions varying from social bonding to movement, that were each associated with at least one behavioral characteristic. The variation in behaviors across breeds was correlated with anatomical variation in the six brain networks.
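The analysis described, relating anatomical variation in brain networks to behavioural traits across breeds, can be illustrated with a minimal correlation sketch. The per-breed values below are fabricated for the example; the real study used MRI-derived morphometry and established breed behaviour data.

```python
# Minimal sketch of correlating a brain-network measure with a behaviour score
# across breeds. All values are invented; this is not the study's data or pipeline.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-breed values: relative volume of one brain network (arbitrary
# units) and a standardized score for a related behaviour (e.g. scent work).
network_volume = [1.2, 0.8, 1.5, 1.0, 1.4, 0.7]
behaviour_score = [0.9, 0.4, 1.3, 0.8, 1.1, 0.5]

print(f"r = {pearson_r(network_volume, behaviour_score):.2f}")
# A strong positive r would suggest that breeds selected for that behaviour
# also show enlargement of the associated network, as the study reports.
```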
Studying the neuroanatomical variation in dogs offers a unique opportunity to study the evolutionary relationship between brain structure and behavior.
The article, "Significant Neuroanatomical Variation Among Domestic Dog Breeds," appears online Sept. 2, 2019 in the Journal of Neuroscience.
From Science Daily
Sep 2, 2019
What if we paid countries to protect biodiversity?
Researchers from Sweden, Germany, Brazil and the USA have developed a financial mechanism to support the protection of the world's natural heritage. In a recent study, they developed three different design options for an intergovernmental biodiversity financing mechanism. Asking what would happen if money was given to countries for providing protected areas, they simulated where the money would flow, what type of incentives this would create -- and how these incentives would align with international conservation goals.
After long negotiations, the international community has agreed to safeguard global ecosystems and improve the status of biodiversity. The global conservation goals for 2020, called the Aichi targets, are an ambitious hallmark. Yet effective implementation is largely lacking. Biodiversity is still dwindling at rates comparable only to the last planetary mass extinction. Additional effort is required to reach the Aichi targets, and even more so to halt biodiversity loss.
"Human well-being depends on ecological life support. Yet, we are constantly losing biodiversity and therefore the resilience of ecosystems. At the international level, there are political goals, but the implementation of conservation policies is a national task. There is no global financial mechanism that can help nations to reach their biodiversity targets," says lead author Nils Droste from Lund University, Sweden.
Brazil has successfully implemented Ecological Fiscal Transfer systems that compensate municipalities for hosting protected areas at a local level since the early 1990s. According to previous findings, such mechanisms help to create additional protected areas. The international research team therefore set out to scale this idea up to the global level, where not municipalities but nations are in charge of designating protected areas. They developed and compared three different design options (a simplified allocation sketch follows the list below):
An ecocentric model: where only protected area extent per country counts -- the bigger the protected area, the better;
A socio-ecological model: where protected areas and Human Development Index count, adding development justice to the previous model;
An anthropocentric model: where population density is also considered, as people benefit locally from protected areas.
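As a back-of-the-envelope illustration of how such a mechanism could allocate money, the sketch below distributes a hypothetical fund among a few invented countries under the three designs. The weighting formulas and all country figures are assumptions made for illustration; the study's actual models are more elaborate.

```python
# Hypothetical allocation of a global biodiversity fund under the three designs.
# All country data and weighting formulas are invented for illustration only.

countries = {
    # name: (protected_area_share_of_country, human_development_index, population_density_per_km2)
    "A": (0.10, 0.55, 200),
    "B": (0.25, 0.90, 80),
    "C": (0.15, 0.70, 20),
}
FUND = 1_000_000  # hypothetical total fund, in arbitrary currency units

def allocate(weight_fn):
    weights = {name: weight_fn(*vals) for name, vals in countries.items()}
    total = sum(weights.values())
    return {name: FUND * w / total for name, w in weights.items()}

designs = {
    # ecocentric: only protected-area extent counts
    "ecocentric":       lambda pa, hdi, dens: pa,
    # socio-ecological: protected area weighted towards less-developed countries
    "socio-ecological": lambda pa, hdi, dens: pa * (1 - hdi),
    # anthropocentric: also rewards protection where more people benefit locally
    "anthropocentric":  lambda pa, hdi, dens: pa * (1 - hdi) * dens,
}

for design, weight_fn in designs.items():
    shares = allocate(weight_fn)
    pretty = ", ".join(f"{name}: {amount:,.0f}" for name, amount in shares.items())
    print(f"{design:>16}: {pretty}")
```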
The socio-ecological design proved to be the most efficient. The model provided the highest marginal incentives -- that is, the most additional money for protecting an additional percent of a country's area -- for countries that are farthest from reaching the global conservation goals. The result surprised the researchers.
"While we developed the socio-ecological design with a fairness element in mind, believing that developing countries might be more easily convinced by a design that benefits them, we were surprised how well this particular design aligns with the global policy goals," says Nils Droste.
"It would most strongly incentivize additional conservation action where the global community is lacking it the most," he adds.
As the study was aimed at providing options, not prescriptions for policy makers, the study did not detail who should be paying or how large the fund should exactly be. Rather, it provides a yet unexplored option to develop a financial mechanism for biodiversity conservation akin to what the Green Climate Fund is for climate change.
Read more at Science Daily
New artifacts suggest people arrived in North America earlier than previously thought
Stone tools and other artifacts unearthed from an archeological dig at the Cooper's Ferry site in western Idaho suggest that people lived in the area 16,000 years ago, more than a thousand years earlier than scientists previously thought.
The artifacts would be considered among the earliest evidence of people in North America.
The findings, published today in Science, add weight to the hypothesis that initial human migration to the Americas followed a Pacific coastal route rather than through the opening of an inland ice-free corridor, said Loren Davis, a professor of anthropology at Oregon State University and the study's lead author.
"The Cooper's Ferry site is located along the Salmon River, which is a tributary of the larger Columbia River basin. Early peoples moving south along the Pacific coast would have encountered the Columbia River as the first place below the glaciers where they could easily walk and paddle in to North America," Davis said. "Essentially, the Columbia River corridor was the first off-ramp of a Pacific coast migration route.
"The timing and position of the Cooper's Ferry site is consistent with and most easily explained as the result of an early Pacific coastal migration."
Cooper's Ferry, located at the confluence of Rock Creek and the lower Salmon River, is known by the Nez Perce Tribe as an ancient village site named Nipéhe. Today the site is managed by the U.S. Bureau of Land Management.
Davis first began studying Cooper's Ferry as an archaeologist for the BLM in the 1990s. After joining the Oregon State faculty, he partnered with the BLM to establish a summer archaeological field school there, bringing undergraduate and graduate students from Oregon State and elsewhere for eight weeks each summer from 2009 to 2018 to help with the research.
The site includes two dig areas; the published findings are about artifacts found in area A. In the lower part of that area, researchers uncovered several hundred artifacts, including stone tools; charcoal; fire-cracked rock; and bone fragments likely from medium- to large-bodied animals, Davis said. They also found evidence of a fire hearth, a food processing station and other pits created as part of domestic activities at the site.
Over the last two summers, the team of students and researchers reached the lower layers of the site, which, as expected, contained some of the oldest artifacts uncovered, Davis said. He worked with a team of researchers at Oxford University, who were able to successfully radiocarbon date a number of the animal bone fragments.
The results showed many artifacts from the lowest layers are associated with dates in the range of 15,000 to 16,000 years old.
"Prior to getting these radiocarbon ages, the oldest things we'd found dated mostly in the 13,000-year range, and the earliest evidence of people in the Americas had been dated to just before 14,000 years old in a handful of other sites," Davis said. "When I first saw that the lower archaeological layer contained radiocarbon ages older than 14,000 years, I was stunned but skeptical and needed to see those numbers repeated over and over just to be sure they're right. So we ran more radiocarbon dates, and the lower layer consistently dated between 14,000-16,000 years old."
The dates from the oldest artifacts challenge the long-held "Clovis First" theory of early migration to the Americas, which suggested that people crossed from Siberia into North America and traveled down through an opening in the ice sheet near the present-day Dakotas. The ice-free corridor is hypothesized to have opened as early as 14,800 years ago, well after the date of the oldest artifacts found at Cooper's Ferry, Davis said.
"Now we have good evidence that people were in Idaho before that corridor opened," he said. "This evidence leads us to conclude that early peoples moved south of continental ice sheets along the Pacific coast."
Davis's team also found tooth fragments from an extinct form of horse known to have lived in North America at the end of the last glacial period. These tooth fragments, along with the radiocarbon dating, show that Cooper's Ferry is the oldest radiocarbon-dated site in North America that includes artifacts associated with the bones of extinct animals, Davis said.
The oldest artifacts uncovered at Cooper's Ferry also are very similar in form to older artifacts found in northeastern Asia, and particularly, Japan, Davis said. He is now collaborating with Japanese researchers to do further comparisons of artifacts from Japan, Russia and Cooper's Ferry. He is also awaiting carbon-dating information from artifacts from a second dig location at the Cooper's Ferry site.
Read more at Science Daily
Evidence for past high-level sea rise
An international team of scientists studying evidence preserved in speleothems in a coastal cave has shown that more than three million years ago -- a time when the Earth was two to three degrees Celsius warmer than in the pre-industrial era -- sea level was as much as 16 meters higher than it is today. The findings have significant implications for understanding and predicting the pace of present-day sea level rise amid a warming climate.
The scientists, including Professor Yemane Asmerom and Sr. Research Scientist Victor Polyak from The University of New Mexico, the University of South Florida, Universitat de les Illes Balears and Columbia University, published their findings in today's edition of the journal Nature. The analysis of deposits from Artà Cave on the island of Mallorca in the western Mediterranean Sea produced sea levels that serve as a target for future studies of ice sheet stability, ice sheet model calibrations and projections of future sea level rise, the scientists said.
Sea level rises as a result of melting ice sheets, such as those that cover Greenland and Antarctica. However, how much and how fast sea level will rise during warming is a question scientists are still working to answer. Reconstructing ice sheet and sea-level changes during past periods when the climate was naturally warmer than today provides an Earth-scale laboratory experiment for studying this question, according to USF Ph.D. student Oana Dumitru, the lead author, who did much of her dating work at UNM under the guidance of Asmerom and Polyak.
"Constraining models for sea level rise due to increased warming critically depends on actual measurements of past sea level," said Polyak. "This study provides very robust measurements of sea level heights during the Pliocene."
"We can use knowledge gained from past warm periods to tune ice sheet models that are then used to predict future ice sheet response to current global warming," said USF Department of Geosciences Professor Bogdan Onac.
The project focused on cave deposits known as phreatic overgrowths on speleothems. The deposits form in coastal caves at the interface between brackish water and cave air each time the ancient caves were flooded by rising sea levels. In Artà Cave, which is located within 100 meters of the coast, the water table is -- and was in the past -- coincident with sea level, says Professor Joan J. Fornós of Universitat de les Illes Balears.
The scientists discovered, analyzed, and interpreted six of the geologic formations found at elevations of 22.5 to 32 meters above present sea level. Careful sampling and laboratory analyses of 70 samples yielded ages ranging from 4.4 to 3.3 million years before present (BP), indicating that the cave deposits formed during the Pliocene epoch. The ages were determined using uranium-lead radiometric dating in UNM's Radiogenic Isotope Laboratory.
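As a rough illustration of the principle behind uranium-lead ages (real carbonate U-Pb work relies on isochrons and corrections for initial lead, which are not shown here), the basic age equation relates the accumulated radiogenic lead-206 to the remaining uranium-238:

import math

LAMBDA_238U = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age(pb206_u238: float) -> float:
    """Age in years from the radiogenic 206Pb/238U ratio,
    using the standard decay equation t = ln(1 + Pb/U) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

# Illustrative ratio only: about 5.4e-4 corresponds to an age of roughly 3.5 million years
print(f"{u_pb_age(5.4e-4) / 1e6:.2f} Myr")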
"This was a unique convergence between an ideally-suited natural setting worked out by the team of cave scientists and the technical developments we have achieved over the years in our lab at The University of New Mexico," said Asmerom. "Judicious investments in instrumentation and techniques result in these kinds of high-impact dividends."
"Sea level changes at Artà Cave can be caused by the melting and growing of ice sheets or by uplift or subsidence of the island itself," said Columbia University Assistant Professor Jacky Austermann, a member of the research team. She used numerical and statistical models to carefully analyze how much uplift or subsidence might have happened since the Pliocene and subtracted this from the elevation of the formations they investigated.
One key interval of particular interest during the Pliocene is the mid-Piacenzian Warm Period -- some 3.264 to 3.025 million years ago -- when temperatures were 2 to 3 degrees Celsius higher than pre-industrial levels. "The interval also marks the last time the Earth's atmospheric CO2 was as high as today, providing important clues about what the future holds in the face of current anthropogenic warming," Onac says.
This study found that during this period, global mean sea level was as high as 16.2 meters (with an uncertainty range of 5.6 to 19.2 meters) above present. This means that even if atmospheric CO2 stabilizes around current levels, the global mean sea level would still likely rise at least that high, if not higher, the scientists concluded. In fact, it is likely to rise higher because of the increase in the volume of the oceans due to rising temperature.
"Considering the present-day melt patterns, this extent of sea level rise would most likely be caused by a collapse of both Greenland and the West Antarctic ice sheets," Dumitru said.
Read more at Science Daily
How bacteria behind hospital infections block out antibiotics
Drug-resistant bacteria responsible for deadly hospital-acquired infections shut out antibiotics by closing tiny doors in their cell walls.
The new finding by researchers at Imperial College London could allow researchers to design new drugs that 'pick the locks' of these closed doors and allow antibiotics into bacterial cells. The result is published today in Nature Communications.
The bacterium Klebsiella pneumoniae causes infections in the lungs, blood and wounds of hospital patients, and those with compromised immune systems are especially vulnerable. More than 20,000 K. pneumoniae infections were recorded in UK hospitals in the past year.
Like many bacteria, K. pneumoniae is becoming increasingly resistant to antibiotics, particularly a family of drugs called Carbapenems, which are used in hospitals when other antibiotics have failed or are ineffective.
Rising resistance to Carbapenems could therefore dramatically affect our ability to cure infections. For this reason, Carbapenem-resistant K. pneumoniae is classified as a 'critical' World Health Organization Priority 1 organism.
Now, the team from Imperial has discovered one mechanism by which K. pneumoniae is able to resist Carbapenems. Antibiotics usually enter the K. pneumoniae bacteria through surface doorways known as pores. The team investigated the structure of the pores and showed that by shutting these doorways K. pneumoniae becomes resistant to multiple drugs, since antibiotics cannot enter and kill them.
First author, Dr Joshua Wong, from the Department of Life Sciences at Imperial, said: "The prevalence of antibiotic resistance is increasing, so we are becoming more and more reliant on drugs like Carbapenems that work against a wide range of bacteria.
"But now with important bacteria like K. pneumoniae gaining resistance to Carbapenems it's important we understand how they are able to achieve this. Our new study provides vital insights that could allow new strategies and drugs to be designed."
The team compared the structures of K. pneumoniae bacteria that were resistant to Carbapenems to those that weren't, and found the resistant bacteria had modified or absent versions of a protein that creates pores in the cell wall. Resistant bacteria have much smaller pores, blocking the drug from entering.
The closed doors aren't all good news for the bacteria. They also mean the bacteria can take in fewer nutrients, and tests in mice showed that they grow more slowly as a result.
However, the advantage in terms of avoiding antibiotics outweighed the negative impact of slower growth for the bacteria, allowing them to maintain a high level of infection.
The project was conducted in close collaboration with Dr Konstantinos Beis from the Department of Life Sciences, who is based at the Research Complex at Harwell in Oxfordshire.
The team was led by Professor Gad Frankel, from the Department of Life Sciences at Imperial, who said: "The modification the bacteria use to avoid antibiotics is difficult to get around. Any drugs to counteract this defence mechanism would likely also get blocked out by the closed doors.
"However, we hope that it will be possible to design drugs that can pick the lock of the door, and our data provides information to help scientists and pharmaceutical companies make these new agents a reality."
As resistant bacteria are weaker, these results suggest that the pressure posed by the extensive use of Carbapenems in hospital settings is a major driver in the spread of these superbugs. The study provides a direct scientific basis for the implementation of restrictive prescribing policies that would minimise the use of broad-spectrum agents such as Carbapenems.
Read more at Science Daily
Sep 1, 2019
Aspirin should not be recommended for healthy people over 70
Low-dose aspirin does not prolong disability-free survival of healthy people over 70, even in those at the highest risk of cardiovascular disease. The late-breaking results of the ASPREE trial are presented today at ESC Congress 2019, held together with the World Congress of Cardiology.
On behalf of the ASPREE Investigators, Professor Christopher Reid of Curtin University, Perth, Australia said: "An ever-increasing number of people reach the age of 70 without overt cardiovascular disease (CVD). This analysis suggests that improved risk prediction methods are needed to identify those who could benefit from daily low-dose aspirin."
European guidelines on the prevention of CVD do not recommend aspirin for individuals free from CVD because of the increased risk of major bleeding. This advice was subsequently supported by results in moderate-risk patients (ARRIVE), diabetic patients (ASCEND) and people over 70 (ASPREE), which demonstrated that modest reductions in CVD risk were outweighed by the increased bleeding hazard.
The primary finding from the ASPREE randomised trial was that in people aged 70 years or over with no known CVD, 100 mg of daily aspirin had no effect on the composite primary endpoint of disability-free survival (defined as remaining free of dementia, persistent physical disability and death). The primary endpoint was chosen to reflect the reasons for prescribing a preventive drug in an otherwise healthy elderly population.
This analysis examined whether the results for the primary endpoint of disability-free survival might vary by the baseline level of CVD risk. Analyses were also conducted for the secondary endpoints of all-cause mortality, major haemorrhage, and prevention of CVD (defined as fatal coronary heart disease, nonfatal myocardial infarction, fatal or nonfatal stroke, or hospitalisation for heart failure).
The investigators calculated ten-year CVD risk probabilities at baseline for the 19,114 ASPREE participants using the Framingham score (up to 75 years) and the atherosclerotic cardiovascular disease (ASCVD) pooled cohort risk equations (up to 79 years) and divided them into thirds. As there are no CVD risk scores available beyond the age ranges specified in the equations, they also classified participants according to the presence of 0 to 1, 2 to 3, or more than 3 CVD risk factors. Overall rates of disability-free survival, mortality, major bleeding and CVD were examined for each risk group and outcomes were compared for those treated with aspirin or placebo.
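The tertile split described above can be pictured with a short sketch; this is an illustration of the idea, not the ASPREE analysis code, and the Framingham and ASCVD risk equations themselves are not reproduced here.

import numpy as np

def risk_tertiles(risk_scores: np.ndarray) -> np.ndarray:
    """Assign each participant to the lower, middle or upper third (0, 1, 2)
    of the baseline ten-year CVD risk distribution."""
    cutoffs = np.quantile(risk_scores, [1/3, 2/3])
    return np.digitize(risk_scores, cutoffs)

# Made-up ten-year risk probabilities, for illustration only
scores = np.array([0.04, 0.07, 0.11, 0.15, 0.19, 0.24, 0.28, 0.33, 0.41])
print(risk_tertiles(scores))  # [0 0 0 1 1 1 2 2 2]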
For participants in the lowest third of CVD risk, by both Framingham and ASCVD scores, there was no disability-free survival or cardiovascular benefit from aspirin. This group also had the highest likelihood of bleeding.
In contrast, those in the highest third of CVD risk, by both Framingham and ASCVD scores, had significantly lower CVD event rates on aspirin with similar rates of bleeding. Hazard ratios for CVD reduction with aspirin versus placebo were 0.72 (95% confidence interval [CI] 0.54-0.95) for the group classified as high risk by the Framingham score and 0.75 (95% CI 0.58-0.97) for those defined as high risk by the ASCVD equations.
However, this reduction in CVD did not translate to a significantly improved disability-free survival. Hazard ratios for disability-free survival with aspirin versus placebo were 0.86 (95% CI 0.62-1.20) for the group designated high risk by the Framingham score and 0.89 (95% CI 0.62-1.28) for those considered high risk by the ASCVD equations.
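The distinction between the two sets of results comes down to whether each 95% confidence interval excludes a hazard ratio of 1 (no effect). A minimal check using the intervals quoted above:

def ci_excludes_no_effect(lower: float, upper: float) -> bool:
    """A hazard ratio is conventionally 'statistically significant' at the 5% level
    when its 95% confidence interval excludes 1 (no effect)."""
    return upper < 1.0 or lower > 1.0

results = {
    "CVD, Framingham high-risk": (0.54, 0.95),
    "CVD, ASCVD high-risk": (0.58, 0.97),
    "Disability-free survival, Framingham high-risk": (0.62, 1.20),
    "Disability-free survival, ASCVD high-risk": (0.62, 1.28),
}
for name, (lo, hi) in results.items():
    print(name, "-", "significant" if ci_excludes_no_effect(lo, hi) else "not significant")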
Prof Reid said: "The findings emphasise that the risk-benefit trade-off for aspirin use in healthy older men and women varies across levels of cardiovascular risk. It also indicates that the reduction in CVD events in the highest risk groups using current stratification methods does not identify individuals in whom this advantage translates into longer disability-free survival."
New ways to identify groups at increased CVD risk, beyond the use of conventional risk factors and current prediction models, will be investigated in the ASPREE longitudinal follow-up study. Genetic and biomarker information will be included from the ASPREE biobank.
Read more at Science Daily
Suggested move to plant-based diets risks worsening brain health nutrient deficiency
The momentum behind a move to plant-based and vegan diets for the good of the planet is commendable, but risks worsening an already low intake of an essential nutrient involved in brain health, warns a nutritionist in the online journal BMJ Nutrition, Prevention & Health.
To make matters worse, the UK government has failed to recommend or monitor dietary levels of this nutrient -- choline -- found predominantly in animal foods, says Dr Emma Derbyshire, of Nutritional Insight, a consultancy specialising in nutrition and biomedical science.
Choline is an essential dietary nutrient, but the amount produced by the liver is not enough to meet the requirements of the human body.
Choline is critical to brain health, particularly during fetal development. It also influences liver function, with shortfalls linked to irregularities in blood fat metabolism as well as excess free radical cellular damage, writes Dr Derbyshire.
The primary dietary sources of choline are beef, eggs, dairy products, fish and chicken, with much lower levels found in nuts, beans and cruciferous vegetables such as broccoli.
In 1998, recognising the importance of choline, the US Institute of Medicine recommended minimum daily intakes. These range from 425 mg/day for women to 550 mg/day for men, and 450 mg/day and 550 mg/day for pregnant and breastfeeding women, respectively, because of the critical role the nutrient has in fetal development.
In 2016, the European Food Safety Authority published similar daily requirements. Yet national dietary surveys in North America, Australia, and Europe show that habitual choline intake, on average, falls short of these recommendations.
"This is....concerning given that current trends appear to be towards meat reduction and plant-based diets," says Dr Derbyshire.
She commends the first report (EAT-Lancet) to compile a healthy food plan based on promoting environmental sustainability, but suggests that the restricted intakes of whole milk, eggs and animal protein it recommends could affect choline intake.
And she is at a loss to understand why choline does not feature in UK dietary guidance or national population monitoring data.
"Given the important physiological roles of choline and authorisation of certain health claims, it is questionable why choline has been overlooked for so long in the UK," she writes. "Choline is presently excluded from UK food composition databases, major dietary surveys, and dietary guidelines," she adds.
It may be time for the UK government's independent Scientific Advisory Committee on Nutrition to reverse this, she suggests, particularly given the mounting evidence on the importance of choline to human health and growing concerns about the sustainability of the planet's food production.
"More needs to be done to educate healthcare professionals and consumers about the importance of a choline-rich diet, and how to achieve this," she writes.
Read more at Science Daily