Dec 31, 2019

Happy New Year

As the tradition goes, I, Danny, want to wish everybody who reads A Magical Journey a Happy New Year as we soon walk into the Roaring '20s! As I've done a couple of times before, I hereby leave you with ABBA and their Happy New Year! See you all next year again!

Dec 30, 2019

Mosquitoes can sense toxins through their legs

Researchers at the Liverpool School of Tropical Medicine (LSTM) have identified a completely new mechanism by which mosquitoes that carry malaria are becoming resistant to insecticide.

After studying both Anopheles gambiae and Anopheles coluzzii, two major malaria vectors in West Africa, they found that a particular family of binding proteins situated in the insect's legs were highly expressed in resistant populations.

First author on a paper published today in the journal Nature, Dr Victoria Ingham, explains: "We have found a completely new insecticide resistance mechanism that we think is contributing to the lower than expected efficacy of bed nets. The protein, which is based in the legs, comes into direct contact with the insecticide as the insect lands on the net, making it an excellent potential target for future additives to nets to overcome this potent resistance mechanism."

Examining the Anopheline mosquitoes, the team demonstrated that the binding protein, SAP2, was found elevated in resistant populations and further elevated following contact with pyrethroids, the insecticide class used on all bed nets. They found that when levels of this protein were reduced, by partial silencing of the gene, susceptibility to pyrethroids was restored; conversely, when the protein was expressed at elevated levels, previously susceptible mosquitoes became resistant to pyrethroids.

The increase in insecticide resistance across mosquito populations has led to the introduction of new insecticide-treated bed nets containing the synergist piperonyl butoxide (PBO) as well as pyrethroid insecticides. The synergist targets one of the most widespread and previously most potent resistance mechanisms, caused by the cytochrome P450s. However, mosquitoes are continually evolving new resistance mechanisms, and the discovery of this latest one provides an excellent opportunity to identify additional synergists that could be used to restore susceptibility.

Professor Hilary Ranson is senior author on the paper. She said: "Long-lasting insecticide treated bed nets remain one of the key interventions in malaria control. It is vital that we understand and mitigate for resistance within mosquito populations in order to ensure that the dramatic reductions in disease rates in previous decades are not reversed. This newly discovered resistance mechanism could provide us with an important target for both the monitoring of insecticide resistance and the development of novel compounds able to block pyrethroid resistance and prevent the spread of malaria."

From Science Daily

Using deep learning to predict disease-associated mutations

In recent years, artificial intelligence (AI) -- the capability of a machine to mimic human behavior -- has become a key player in high-tech fields such as drug development. AI tools help scientists uncover the secrets hidden in big biological data using optimized computational algorithms. AI methods such as deep neural networks improve decision making in biological and chemical applications, e.g., the prediction of disease-associated proteins, the discovery of novel biomarkers and the de novo design of small-molecule drug leads. These state-of-the-art approaches help scientists develop potential drugs more efficiently and economically.

A research team led by Professor Hongzhe Sun from the Department of Chemistry at the University of Hong Kong (HKU), in collaboration with Professor Junwen Wang from Mayo Clinic, Arizona in the United States (a former HKU colleague), implemented a robust deep learning approach to predict disease-associated mutations of the metal-binding sites in a protein. This is the first deep learning approach for the prediction of disease-associated metal-relevant site mutations in metalloproteins, providing a new platform to tackle human diseases. The research findings were recently published in the journal Nature Machine Intelligence.

Metal ions play pivotal roles either structurally or functionally in the (patho)physiology of human biological systems. Metals such as zinc, iron and copper are essential for all life, and their concentrations in cells must be strictly regulated. A deficiency or an excess of these physiological metal ions can cause severe disease in humans. Mutations in the human genome are strongly associated with different diseases. If these mutations occur in the coding region of DNA, they might disrupt the metal-binding sites of proteins and consequently initiate severe diseases in humans. Understanding disease-associated mutations at the metal-binding sites of proteins will facilitate the discovery of new drugs.

The team first integrated omics data from different databases to build a comprehensive training dataset. By looking at the statistics from the collected data, the team found that different metals have different disease associations. Mutations in zinc-binding sites play a major role in breast, liver, kidney, immune system and prostate diseases. By contrast, mutations in calcium- and magnesium-binding sites are associated with muscular and immune system diseases, respectively. For iron-binding sites, mutations are more associated with metabolic diseases. Furthermore, mutations of manganese- and copper-binding sites are associated with cardiovascular diseases, with the latter being associated with nervous system disease as well.

The researchers used a novel approach to extract spatial features from the metal-binding sites using an energy-based affinity grid map. These spatial features were merged with physicochemical sequential features to train the model. The final results show that using the spatial features enhanced the performance of the prediction, with an area under the curve (AUC) of 0.90 and an accuracy of 0.82. Given the limited advanced techniques and platforms in the field of metallomics and metalloproteins, the proposed deep learning approach offers a way to integrate experimental data with bioinformatics analysis. The approach will help scientists predict DNA mutations associated with diseases such as cancer, cardiovascular diseases and genetic disorders.
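
To make that setup a bit more concrete, here is a minimal, purely illustrative sketch in Python of the general pattern described above: spatial (grid-map) features and physicochemical sequence features are merged, a small neural network is trained on them, and the model is scored with the same two metrics quoted above (AUC and accuracy). The data, feature sizes and network shape are invented stand-ins, not the authors' pipeline; real labels and features would come from their curated metal-binding-site dataset.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
n_sites = 2000
spatial_feats = rng.normal(size=(n_sites, 64))    # stand-in for energy-based affinity grid-map features
sequence_feats = rng.normal(size=(n_sites, 20))   # stand-in for physicochemical sequence features
X = np.hstack([spatial_feats, sequence_feats])    # merge the two feature blocks
y = rng.integers(0, 2, size=n_sites)              # 1 = disease-associated mutation, 0 = benign (random here)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 2))                        # near 0.5 on random data; 0.90 reported in the study
print("Accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 2))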

Read more at Science Daily

Evolution: Revelatory relationship

A new study of the ecology of an enigmatic group of novel unicellular organisms by scientists from Ludwig-Maximilians-Universitaet (LMU) in Munich supports the idea that hydrogen played an important role in the evolution of Eukaryota, the first nucleated cells.

One of the most consequential developments in the history of biological evolution occurred approximately 2 billion years ago with the appearance of the first eukaryotes -- unicellular organisms that contain a distinct nucleus. This first eukaryotic lineage would subsequently give rise to all higher organisms including plants and animals, but its origins remain obscure. Some years ago, microbiologists analyzed DNA sequences from marine sediments, which shed new light on the problem. These sediments were recovered from a hydrothermal vent at a site known as Loki's Castle (named for the Norse god Loki) on the Mid-Atlantic Ridge in the Arctic Ocean. Sequencing of the DNA molecules they contained revealed that they were derived from a previously unknown group of microorganisms.

Although the cells from which the DNA originated could not be isolated and characterized directly, the sequence data showed them to be closely related to the Archaea. The researchers therefore named the new group Lokiarchaeota.

Archaea, together with the Bacteria, are the oldest known lineages of single-celled organisms. Strikingly, the genomes of the Lokiarchaeota indicated that they might exhibit structural and biochemical features that are otherwise specific to eukaryotes. This suggests that the Lokiarchaeota might be related to the last common ancestor of eukaryotes. Indeed, phylogenomic analysis of the Lokiarchaeota DNA from Loki's Castle strongly suggested that they were derived from descendants of one of the last common ancestors of Eukaryota and Archaea. Professor William Orsi of the Department of Earth and Environmental Sciences at LMU, in cooperation with scientists at Oldenburg University and the Max Planck Institute for Marine Microbiology, has now been able to examine the activity and metabolism of the Lokiarchaeota directly. The results support the suggested relationship between Lokiarchaeota and eukaryotes, and provide hints as to the nature of the environment in which the first eukaryotes evolved. The new findings appear in the journal Nature Microbiology.

The most likely scenario for the emergence of eukaryotes is that they arose from a symbiosis in which the host was an archaeal cell and the symbiont was a bacterium. According to this theory, the bacterial symbiont subsequently gave rise to the mitochondria -- the intracellular organelles that are responsible for energy production in eukaryotic cells. One hypothesis proposes that the archaeal host was dependent on hydrogen for its metabolism, and that the precursor of the mitochondria produced it. This "hydrogen hypothesis" posits that the two partner cells presumably lived in an anoxic environment that was rich in hydrogen, and if they were separated from the hydrogen source they would have become more dependent on one another for survival, potentially leading to an endosymbiotic event. "If the Lokiarchaeota, as the descendants of this putative ur-archaeon, are also dependent on hydrogen, this would support the hydrogen hypothesis," says Orsi. "However, up to now, the ecology of these Archaea in their natural habitat was a matter of speculation."

Orsi and his team have now, for the first time, characterized the cellular metabolism of Lokiarchaeota recovered from sediment cores obtained from the seafloor in an extensive oxygen-depleted region off the coast of Namibia. They did so by analyzing the RNA present in these samples. RNA molecules are copied from the genomic DNA, and serve as blueprints for the synthesis of proteins. Their sequences therefore reflect patterns and levels of gene activity. The sequence analyses revealed that Lokiarchaeota in these samples outnumbered bacteria by 100- to 1000-fold. "That strongly indicates that these sediments are a favorable habitat for them, promoting their activity," says Orsi.

Read more at Science Daily

How cells learn to 'count'

One of the wonders of cell biology is its symmetry. Mammalian cells have one nucleus and one cell membrane, and most humans have 23 pairs of chromosomes. Trillions of mammalian cells achieve this uniformity -- but some consistently break this mold to fulfill unique functions. Now, a team of Johns Hopkins Medicine researchers has found how these outliers take shape.

In experiments with genetically engineered mice, a research team has ruled out a mechanism that scientists have long believed controls the number of hairlike structures, called cilia, protruding on the outside of each mammalian cell. They concluded that control of the cilia count might rely instead on a process more commonly seen in non-mammalian species.

The experiments, described Dec. 2 in Nature Cell Biology and led by Andrew Holland, Ph.D., associate professor of molecular biology and genetics at the Johns Hopkins University School of Medicine, may eventually help scientists learn more about human diseases related to cilia function, such as respiratory infections, infertility and hydrocephaly.

Cilia are ancient structures that first appeared on single-celled organisms as small hairlike "fingers" that act as motors to move the cell or antennae to sense the environment. Nearly all human cells have at least one cilium that senses physical or chemical cues. However, some specialized cell types in humans, such as those lining the respiratory and reproductive tracts, have hundreds of cilia on their surface that beat in waves to move fluids through the system.

"Our main question was how these multicilliated cells become so dramatically different than the rest of the cells in our body," says Holland. "Most cells make exactly one cilium per cell, but these highly specialized cells give up on this tight numerical control and make hundreds of cilia."

In an effort to answer the question, Holland and his team took a closer look at the base of cilia, the place where the organelles attach and grow from the surface of the cell. This base is a microscopic, cylinder-shaped structure called a centriole.

In single-ciliated cells, Holland says, centrioles are created before a cell divides. A cell contains two parent centrioles that each duplicate so that each new cell gets one pair of centrioles -- the oldest of these two centrioles then goes on to form the base of the cilium. However, multiciliated cells create unique structures, called deuterosomes, that act as a copy machine to enable the production of tens to hundreds of centrioles, allowing these cells to create many cilia.

"Deuterosomes are only present in multicilliated cells, and scientists have long thought they are central for determining how many centrioles and cilia are formed," says Holland.

To test this, Holland and his team developed a mouse model that lacked the gene that creates deuterosomes. Then, they analyzed the tissues that carry multiciliated cells and counted their cilia.

The researchers were surprised to find that the genetically engineered mice had the same number of cilia on cells as the mice with deuterosomes, ruling out the central role of deuterosomes in controlling the number of cilia. For example, the multiciliated cells lining the trachea all had 200-300 cilia per cell. The researchers also found that cells without deuterosomes could make new centrioles just as quickly as cells with them.

With this surprising result in hand, the researchers engineered mouse cells that lacked both deuterosomes and parent centrioles, and then counted the number of cilia formed in multiciliated cells.

"We figured that with no parent centrioles and no deuterosomes, the multicilliated cells would be unable to create the proper number of new cilia," says Holland.

Remarkably, Holland says, even the lack of parent centrioles had no effect on the final cilia number. Most cells in both normal and genetically engineered groups created between 50 and 90 cilia.

"This finding changes the dogma of what we believed to be the driving force behind centriole assembly," explains Holland. "Instead of needing a platform to grow on, centrioles can be created spontaneously."

While uncommon in mammals, the so-called de novo generation of centrioles is not new to the animal kingdom. Some species, such as the small flatworm planaria, lack parent centrioles entirely, and rely on de novo centriole generation to create the cilia they use to move.

In further experiments on genetically engineered mice, Holland found that all the spontaneously created centrioles were assembled within a region of the cell rich with fibrogranular material -- the protein components necessary to build a centriole.

He says he suspects that proteins found in that little-understood area of the cell contain the essential elements necessary to construct centrioles and ultimately control the number of cilia that are formed. Everything else, the deuterosomes and even the parent centrioles, is "not strictly necessary," he says.

"We think that the deuterosomes function to relieve pressure on the parent centrioles from the demands of making many new centrioles, freeing up parent centrioles to fulfill other functions," says Holland.

Read more at Science Daily

Dec 29, 2019

'Lost crops' could have fed as many as maize

Make some room in the garden, you storied three sisters: the winter squash, climbing beans and the vegetable we know as corn. Grown together, newly examined "lost crops" could have produced enough seed to feed as many indigenous people as traditionally grown maize, according to new research from Washington University in St. Louis.

But there are no written or oral histories to describe them. The domesticated forms of the lost crops are thought to be extinct.

Writing in the Journal of Ethnobiology, Natalie Mueller, assistant professor of archaeology in Arts & Sciences, describes how she painstakingly grew and calculated yield estimates for two annual plants that were cultivated in eastern North America for thousands of years -- and then abandoned.

Growing goosefoot (Chenopodium, sp.) and erect knotweed (Polygonum erectum) together is more productive than growing either one alone, Mueller discovered. Planted in tandem, along with the other known lost crops, they could have fed thousands.

Archaeologists found the first evidence of the lost crops in rock shelters in Kentucky and Arkansas in the 1930s. Seed caches and dried leaves were their only clues. Over the past 25 years, pioneering research by Gayle Fritz, professor emerita of archaeology at Washington University, helped to establish the fact that a previously unknown crop complex had supported local societies for millennia before maize -- a.k.a. corn -- was adopted as a staple crop.

But how, exactly, to grow them?

The lost crops include a small but diverse group of native grasses, seed plants, squashes and sunflowers -- of which only the squashes and sunflowers are still cultivated. For the rest, there is plenty of evidence that the lost crops were purposefully tended -- not just harvested from free-living stands in the wild -- but there are no instructions left.

"There are many Native American practitioners of ethnobotanical knowledge: farmers and people who know about medicinal plants, and people who know about wild foods. Their knowledge is really important," Mueller said. "But as far as we know, there aren't any people who hold knowledge about the lost crops and how they were grown.

"It's possible that there are communities or individuals who have knowledge about these plants, and it just isn't published or known by the academic community," she said. "But the way that I look at it, we can't talk to the people who grew these crops.

"So our group of people who are working with the living plants is trying to participate in the same kind of ecosystem that they participated in -- and trying to reconstruct their experience that way."

That means no greenhouse, no pesticides and no special fertilizers.

"You have not just the plants but also everything else that comes along with them, like the bugs that are pollinating them and the pests that are eating them. The diseases that affect them. The animals that they attract, and the seed dispersers," Mueller said. "There are all of these different kinds of ecological elements to the system, and we can interact with all of them."

Her new paper reported on two experiments designed to investigate germination requirements and yields for the lost crops.

Mueller discovered that a polyculture of goosefoot and erect knotweed is more productive than either grown separately as a monoculture. Grown together, the two plants have higher yields than global averages for closely related domesticated crops (think: quinoa and buckwheat), and they are within the range of those for traditionally grown maize.

"The main reason that I'm really interested in yield is because there's a debate within archeology about why these plants were abandoned," Mueller said. "We haven't had a lot of evidence about it one way or the other. But a lot of people have just kind of assumed that maize would be a lot more productive because we grow maize now, and it's known to be one of the most productive crops in the world per unit area."

Mueller wanted to quantify yield in this experiment so that she could directly compare yield for these plants to maize for the first time.

But it didn't work out perfectly. She was only able to obtain yield estimates for two of the five lost crops that she tried to grow -- but not for the plants known as maygrass, little barley and sumpweed.

Read more at Science Daily

Powder, not gas: A safer, more effective way to create a star on Earth

Scientists have found that sprinkling a type of powder into fusion plasma could aid in harnessing the ultra-hot gas within a tokamak facility to produce heat to create electricity without producing greenhouse gases or long-term radioactive waste.

A major issue with operating ring-shaped fusion facilities known as tokamaks is keeping the plasma that fuels fusion reactions free of impurities that could reduce the efficiency of the reactions. Now, scientists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have found that sprinkling a type of powder into the plasma could aid in harnessing the ultra-hot gas within a tokamak facility to produce heat to create electricity without producing greenhouse gases or long-term radioactive waste.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei -- that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

"The main goal of the experiment was to see if we could lay down a layer of boron using a powder injector," said PPPL physicist Robert Lunsford, lead author of the paper reporting the results in Nuclear Fusion. "So far, the experiment appears to have been successful."

The boron prevents an element known as tungsten from leaching out of the tokamak walls into the plasma, where it can cool the plasma particles and make fusion reactions less efficient. A layer of boron is applied to plasma-facing surfaces in a process known as "boronization." Scientists want to keep the plasma as hot as possible -- at least ten times hotter than the surface of the sun -- to maximize the fusion reactions and therefore the heat to create electricity.

Using powder to provide boronization is also far safer than using a boron gas called diborane, the method used today. "Diborane gas is explosive, so everybody has to leave the building housing the tokamak during the process," Lunsford said. "On the other hand, if you could just drop some boron powder into the plasma, that would be a lot easier to manage. While diborane gas is explosive and toxic, boron powder is inert," he added. "This new technique would be less intrusive and definitely less dangerous."

Another advantage is that while physicists must halt tokamak operations during the boron gas process, boron powder can be added to the plasma while the machine is running. This feature is important because to provide a constant source of electricity, future fusion facilities will have to run for long, uninterrupted periods of time. "This is one way to get to a steady-state fusion machine," Lunsford said. "You can add more boron without having to completely shut down the machine."

There are other reasons to use a powder dropper to coat the inner surfaces of a tokamak. For example, the researchers discovered that injecting boron powder has the same benefit as puffing nitrogen gas into the plasma -- both techniques increase the heat at the plasma edge, which increases how well the plasma stays confined within the magnetic fields.

The powder dropper technique also gives scientists an easy way to create low-density fusion plasmas, important because low density allows plasma instabilities to be suppressed by magnetic pulses, a relatively simple way to improve fusion reactions. Scientists could use powder to create low-density plasmas at any time, rather than waiting for a gaseous boronization. Being able to create a wide range of plasma conditions easily in this way would enable physicists to explore the behavior of plasma more thoroughly.

Read more at Science Daily

Dec 28, 2019

New insights into the earliest events of seed germination

Plant seeds may strike the casual observer as unspectacular -- but they have properties that are nothing short of superpowers. In a dry state they can store their energy for years and then suddenly release it for germination when environmental conditions are favourable. One striking example is the "super bloom" in the Death Valley National Park, when seeds that have endured the dry and hot desert for decades suddenly germinate at rainfall followed by a rare and spectacular desert bloom several months later. Seeds conserve a fully formed embryo, which only continues growing when conditions are right for it to do so. This may be the case only years -- or in more extreme cases even centuries -- later.

Seed germination is controlled by several plant hormones, which are researched intensely. However, not much was known about the processes that need to take place to allow the hormones to function. How is energy in the seed made available? How can energy metabolism be started early and efficiently? An international team of researchers has now been looking into these questions.

Using a new type of fluorescent biosensors, the researchers observed, in living seed cells, both energy metabolism and the so-called redox metabolism, which relies on sulphur. The researchers discovered that when the seeds came into contact with water, energy metabolism was established in a matter of minutes, and the plant cells' "power stations" -- known as mitochondria -- activated their respiration. The researchers also found out which molecular switches are activated to enable energy to be released efficiently -- with the so-called thiol-redox switches playing a central role.

"By looking into the very early processes of germination control, we can gain a better understanding of the mechanisms driving seed germination," says Prof. Markus Schwarzländer from the University of Münster (Germany), who led the study. "In future we could think about how such switches could be used in crop biotechnology." The results of the study could be of relevance in farming, when seeds need to keep their germination vigour for as long as possible on the one hand, but should also germinate in synch and with minimal losses on the other hand. The study has been published in the journal PNAS (Proceedings of the National Academy of Sciences).

Background and method

In order to be able to observe the activities taking place in the energy metabolism, the researchers visualized under the microscope adenosine triphosphate (ATP), the general currency for energy in the cell, and nicotinamide adenine dinucleotide phosphate (NADPH), a carrier of electron energy, in the mitochondria. They compared seeds from thale cress: both dry seeds and seeds "imbibed" with water.

To find out whether the redox switches are important for kick-starting germination, the researchers deactivated specific proteins using genetic methods and then compared the reaction shown by the modified seeds with that of the unmodified ones. The researchers allowed the seeds to age artificially in the laboratory, and they saw that the seeds germinated much less actively if they lacked the relevant proteins.

The researchers' next step involved so-called redox proteome analysis, i.e. they examined the relevant redox proteins in their entirety with the use of biochemical methods. For this purpose, they isolated active mitochondria and flash-froze them in order to be able to study this state directly where the process was taking place. The researchers then used mass spectrometry methods to identify several so-called cysteine-peptides which are important for resource efficiency in energy metabolism.

Read more at Science Daily

For restricted eaters, a place at the table but not the meal

Holiday celebrations often revolve around eating, but for those with food restrictions, that can produce an incongruous feeling when dining with friends and loved ones: loneliness.

People with restricted diets -- due to allergies, health issues or religious or cultural norms -- are more likely to feel lonely when they can't share in what others are eating, new Cornell University research shows.

"Despite being physically present with others, having a food restriction leaves people feeling left out because they are not able to take part in bonding over the meal," said Kaitlin Woolley, assistant professor of marketing in the Samuel Curtis Johnson Graduate School of Management and lead author of the research.

Across seven studies and controlled experiments, researchers found that food restrictions predicted loneliness among both children and adults.

The research also offers the first evidence, Woolley said, that having a food restriction causes increased loneliness. For example, in one experiment, assigning unrestricted individuals to experience a food restriction increased reported feelings of loneliness. That suggests such feelings are not driven by non-food issues or limited to picky eaters, Woolley said.

"We can strip that away and show that assigning someone to a restriction or not can have implications for their feeling of inclusion in the group meal," she said.

Further evidence came from a survey of observers of the Jewish holiday of Passover. When reminded during the holiday of the leavened foods they couldn't enjoy with others, participants' loneliness increased. Yet, within their own similarly restricted group, they felt a stronger bond.

Bonding over meals is an inherently social experience, Woolley notes. In previous research, she found that strangers felt more connected and trusting of each other when they shared the same food, and eating food from the same plate increased cooperation between strangers.

But when restricted from sharing in the meal, people suffer "food worries," Woolley said. They fret about what they can eat and how others might judge them for not fitting in.

Those worries generated a degree of loneliness comparable to that reported by unmarried or low-income adults, and stronger than that experienced by schoolchildren who were not native English speakers, according to the research. Compared with non-restricted individuals, having a restriction increased reported loneliness by 19%. People felt lonelier regardless of how severe their restriction was, or whether their restriction was imposed or voluntary.

The study concluded that food restrictions and loneliness are on the rise and "may be related epidemics," warranting further research.

To date, Woolley said, children have been the primary focus of research on the effects of food restrictions. A nationally representative survey she analyzed from the Centers for Disease Control did not track the issue among adults.

But increasingly, she said, food restrictions are being carried into adulthood, or adults are choosing restricted diets such as gluten-free, vegetarian and vegan for health or ethical reasons. Up to 30% of all participants in her research deal with restrictions, Woolley said.

Read more at Science Daily

Dec 27, 2019

300 million year old atmospheric dust

Dust plays a crucial role in the life and health of our planet. In our modern world, dust-borne nutrients traveling in great dust storms from the Saharan Desert fertilize the soil in the Amazon Rainforest and feed photosynthetic organisms like algae in the Atlantic Ocean. In turn, it is those organisms that breathe in carbon dioxide and expel oxygen.

Mehrdad Sardar Abadi, a researcher in the Mewbourne College of Earth and Energy School of Geosciences, and School Director Lynn Soreghan led a study with researchers from Florida State University, the Massachusetts Institute of Technology, Hampton University and the College of Charleston to understand the role of dust in the Earth's atmosphere in deep time -- 300 million years ago.

To do this research, the team needed to find ancient atmospheric dust, which led them to the remnants of a shallow marine ecosystem in modern-day Iran.

Similar to areas of our modern world like the Bahamas, these shallow marine ecosystems cannot survive unless they are in pristine water away from river runoff, Sardar Abadi explained. By targeting these systems, Sardar Abadi and Soreghan knew that any silicate particles they found would have been deposited through the air and not from a river.

Sardar Abadi and Soreghan identified and sampled dust trapped in carbonate rocks from two intervals of limestone now preserved in outcroppings in the mountains of northern and central Iran.

Rocks were then subjected to a series of chemical treatments to extract the ancient dust. What was left were silicate minerals like clay and quartz that entered the environment as air-borne particles -- 300-million-year-old dust.

Ancient dust in hand, Sardar Abadi could determine how much dust was in the Late Paleozoic atmosphere. The results suggested that Earth's atmosphere was much dustier during this ancient time. Working with collaborators at Florida State University, he performed geochemical tests to analyze the iron in the samples. Those tests revealed that the ancient dust also contained remarkable proportions of highly reactive iron -- a particularly rich source of this key micronutrient.

While iron is not the only micronutrient potentially carried in dust, it is estimated that this ancient dust contained twice the bioavailable iron as the modern dust that fertilizes the Amazon Rainforest.

This potent dust fertilization led to a massive surge in marine photosynthesizers. Fueled by iron-rich dust, algae and cyanobacteria took in carbon dioxide and expelled oxygen. Researchers speculate that this action, operating over millions of years, changed the planet's atmosphere.

"Higher abundances in primary producers like plants and algae could lead to higher carbon capture, helping to explain declines in atmospheric carbon dioxide around 300 million years ago," said Sardar Abadi.

"If what we are seeing from our samples was happening on a global scale, it means that the dust fertilization effect brought down atmospheric carbon dioxide and was a fairly significant part of the carbon cycle during this time in the Earth's history," said Soreghan.

One carbon sequestration method scientists have proposed is adding bioavailable iron to isolated parts of the ocean that are so remote and far from dust-containing continents, they are essentially deserts. Scientists who have attempted this on a small scale have documented resultant phytoplankton blooms.

But, Soreghan warned, no one knows the unintended consequences of doing this on a large scale. This is why Sardar Abadi and the team of researchers delved into deep time for answers.

"The Earth's geologic record is like a laboratory book. It has run an infinite number of experiments. We can open Earth's lab book, reconstruct what happened in the past and see how the Earth responded to these sometimes very extreme states," said Soreghan.

The data and syntheses help constrain and refine computer climate models. The further back into deep time a modeler goes, the more unconstrained variables there are. Providing such data makes the models more accurate.

Read more at Science Daily

Development of ultrathin durable membrane for efficient oil and water separation

Researchers led by Professor MATSUYAMA Hideto and Professor YOSHIOKA Tomohisa at Kobe University's Research Center for Membrane and Film Technology have succeeded in developing an ultrathin membrane with a fouling-resistant silica surface treatment for high performance separation of oil from water.

Furthermore, this membrane was shown to be versatile; it was able to separate water from a wide variety of different oily substances.

These results were published online in the Journal of Materials Chemistry A on October 3, 2019.

Introduction

The development of technology to separate oil from water is crucial for dealing with oil spills and water pollution generated by various industries. By 2025, it is predicted that two-thirds of the world's population won't have sufficient access to clean water. Therefore, the development of technologies to filter oily emulsions and thus increase the amount of available clean water is gaining increasing attention.

Compared with traditional purification methods including centrifugation and chemical coagulation, membrane separation has been proposed as a low-cost, energy-efficient alternative. Although this technology has been greatly developed, most membranes suffer from fouling issues whereby droplets of oil become irreversibly adsorbed onto the surface. This leads to membrane pore blocking, subsequently reducing the membrane's lifespan and efficiency.

One method of mitigating the fouling issues is to add surface treatments to the membrane. However, many experiments with this method have encountered problems such as changes in the original surface structure and the deterioration of the treated surface layer by strong acid, alkaline and salt solutions. These issues limit the practical applications of such membranes in the harsh conditions during wastewater treatment.

Research Methodology

In this study, researchers succeeded in developing a membrane consisting of a porous polyketone (PK) support with a 10-nanometer-thick silica layer applied on the top surface. This silica layer was formed on the PK fibrils using electrostatic attraction -- the negatively charged silica was attracted to the positively charged PK.

The PK membrane has a high water permeance due to its large pores and high porosity. The silicification process -- the addition of silica to the PK fibrils -- provides a strong oil-repellant coating to protect the surface-modified membrane from fouling issues.

Another advantage of this membrane is that it requires no large applied pressure to achieve high water permeation. The membrane exhibited water permeation by gravity alone -- even when a water level as low as 10 cm (a pressure of approximately 0.01 atm) was used. In addition, the developed membrane was able to reject 99.9% of oil droplets -- including those with a size of 10 nanometers. Using this membrane with an area of 1 m2, 6000 liters of wastewater can be treated in one hour under an applied pressure of 1 atm. It was also shown to be effective at separating water from various different oily emulsions.
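
As a back-of-the-envelope check on those throughput figures, the short Python sketch below converts the quoted numbers into a permeance and then estimates the gravity-driven flux. It assumes, purely as a simplification, that flux scales linearly with applied pressure; the 1 m2 area, 6000 liters per hour at 1 atm, and the roughly 0.01 atm exerted by a 10 cm water column are taken from the paragraph above, and nothing else comes from the paper.

area_m2 = 1.0               # membrane area quoted above
volume_L_per_h = 6000.0     # wastewater treated per hour at 1 atm (quoted above)
pressure_atm = 1.0

# Permeance: volume per unit area, time and driving pressure
permeance = volume_L_per_h / (area_m2 * pressure_atm)
print(f"Permeance: {permeance:.0f} L m^-2 h^-1 atm^-1")                              # 6000

# Gravity-driven case: a 10 cm water column is roughly 0.01 atm
gravity_pressure_atm = 0.01
print(f"Gravity-driven flux: {permeance * gravity_pressure_atm:.0f} L m^-2 h^-1")    # about 60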

As mentioned, the silicification provided a strong oil-repellant coating. Through the experiments carried out on the membrane to test its durability against fouling, it was discovered that oil did not become adsorbed onto the surface and that the oil droplets could be easily cleaned off. This membrane showed great tolerance against a variety of acidic, alkaline, solvent and salt solutions.

Read more at Science Daily

California's stricter vaccine exemption policy and improved vaccination rates

California's elimination, in 2016, of non-medical vaccine exemptions from school entry requirements was associated with an estimated increase in vaccination coverage at state and county levels, according to a new study published this week in PLOS Medicine by Nathan Lo of the University of California, San Francisco, and colleagues.

Vaccine hesitancy, the reluctance or refusal to receive vaccinations, is a growing public health problem in the United States and globally. The effectiveness of state policies that eliminate non-medical exemptions to childhood vaccination requirements has been unclear. In the new study, researchers used publicly available data from the US Centers for Disease Control and Prevention on coverage of measles, mumps, and rubella (MMR) vaccination and rates of both non-medical and medical exemptions in children entering kindergarten. The dataset included information on 45 states from 2011 to 2017 and county-level data from 17 states spanning 2010 through 2017.

The results of the analysis suggest that after the 2016 implementation of California's new exemption policy, MMR coverage in California increased by 3.3% over the coverage projected in the absence of the policy. Non-medical vaccination exemptions decreased by 2.4% and medical exemptions increased by 0.4%. Change in MMR vaccination coverage across California counties from 2015 to 2017 ranged from a 6% decrease to a 26% increase, with the largest increases seen in "high risk" counties with lower pre-policy vaccination coverage.

"These study results support the idea that state level governmental policies to remove non-medical exemptions can be effective strategies to increase vaccination coverage across the United States," the authors say.

From Science Daily

Intermittent fasting: Live 'fast,' live longer?

For many people, the New Year is a time to adopt new habits as a renewed commitment to personal health. Newly enthusiastic fitness buffs pack into gyms and grocery stores are filled with shoppers eager to try out new diets.

But, does scientific evidence support the claims made for these diets? In a review article published in the Dec. 26 issue of The New England Journal of Medicine, Johns Hopkins Medicine neuroscientist Mark Mattson, Ph.D., concludes that intermittent fasting does.

Mattson, who has studied the health impact of intermittent fasting for 25 years, and adopted it himself about 20 years ago, writes that "intermittent fasting could be part of a healthy lifestyle." A professor of neuroscience at the Johns Hopkins University School of Medicine, Mattson says his new article is intended to help clarify the science and clinical applications of intermittent fasting in ways that may help physicians guide patients who want to try it.

Intermittent fasting diets, he says, fall generally into two categories: daily time-restricted feeding, which narrows eating times to 6-8 hours per day, and so-called 5:2 intermittent fasting, in which people limit themselves to one moderate-sized meal two days each week.

An array of animal and some human studies have shown that alternating between times of fasting and eating supports cellular health, probably by triggering an age-old adaptation to periods of food scarcity called metabolic switching. Such a switch occurs when cells use up their stores of rapidly accessible, sugar-based fuel, and begin converting fat into energy in a slower metabolic process.

Mattson says studies have shown that this switch improves blood sugar regulation, increases resistance to stress and suppresses inflammation. Because most Americans eat three meals plus snacks each day, they do not experience the switch, or the suggested benefits.

In the article, Mattson notes that four studies in both animals and people found intermittent fasting also decreased blood pressure, blood lipid levels and resting heart rates.

Evidence is also mounting that intermittent fasting can modify risk factors associated with obesity and diabetes, says Mattson. Two studies at the University Hospital of South Manchester NHS Foundation Trust of 100 overweight women showed that those on the 5:2 intermittent fasting diet lost the same amount of weight as women who restricted calories, but did better on measures of insulin sensitivity and reduced belly fat than those in the calorie-reduction group.

More recently, Mattson says, preliminary studies suggest that intermittent fasting could benefit brain health too. A multicenter clinical trial at the University of Toronto in April found that 220 healthy, nonobese adults who maintained a calorie restricted diet for two years showed signs of improved memory in a battery of cognitive tests. While far more research needs to be done to prove any effects of intermittent fasting on learning and memory, Mattson says if that proof is found, the fasting -- or a pharmaceutical equivalent that mimics it -- may offer interventions that can stave off neurodegeneration and dementia.

"We are at a transition point where we could soon consider adding information about intermittent fasting to medical school curricula alongside standard advice about healthy diets and exercise," he says.

Mattson acknowledges that researchers do "not fully understand the specific mechanisms of metabolic switching" and that "some people are unable or unwilling to adhere" to the fasting regimens. But he argues that with guidance and some patience, most people can incorporate them into their lives. It takes some time for the body to adjust to intermittent fasting, and to get beyond initial hunger pangs and irritability that accompany it. "Patients should be advised that feeling hungry and irritable is common initially and usually passes after two weeks to a month as the body and brain become accustomed to the new habit," Mattson says.

Read more at Science Daily

Dec 23, 2019

How fish get their shape

The diverse colours, shapes and patterns of fish are captivating. Despite such diversity, a general feature that we can observe in fish such as salmon or tuna once they are served in a dish like sushi is the distinct 'V' pattern in their meat. While this pattern appears in the muscle arrangement of most fish species, how such a generic 'V' pattern arises is puzzling.

A team of researchers from the Mechanobiology Institute (MBI) at the National University of Singapore (NUS) investigated the science behind the formation of the 'V' patterns -- also known as chevron patterns -- in the swimming muscles of fish. The study focused on the myotome (a group of muscles served by a spinal nerve root) that makes up most of the fish body. These fish muscles power the fish's side-to-side swimming motion and the chevron pattern is thought to increase swimming efficiency. The research team found that these patterns do not simply arise from genetic instruction or biochemical pathways but actually require physical forces to correctly develop. The findings of the study were published in the journal Proceedings of the National Academy of Sciences of the United States of America on 26 November 2019.

Friction and stress combine to shape patterns in fish muscle

The chevron pattern is not unique to salmon and tuna; it is also present in other fish species such as the zebrafish, as well as in some amphibian species like salamanders and frogs during development. The 'V' shape first appears in the somites -- the precursor building blocks of the myotome, which forms the skeletal muscles. The somites typically form during the first few days of fish development or morphogenesis.

A team of scientists led by MBI Postdoctoral Fellow Dr Sham Tlili and Principal Investigator Assistant Professor Timothy Saunders studied chevron formation in the myotome of zebrafish embryos. Initially, each future developing myotome segment is cuboidal in shape. However, over the course of five hours, it deforms into a pointed 'V' shape. To find out how this deformation actually takes place, the team adopted a combination of different techniques -- imaging of the developing zebrafish myotome at single cell resolution; quantitative analysis of the imaging data; and fitting the quantitative data into biophysical models.

Based on findings from their experimental as well as theoretical studies, the MBI scientists identified certain physical mechanisms that they thought might be guiding chevron formation during fish development.

Firstly, the developing myotomes are physically connected to other embryonic tissues such as the neural tube, notochord, skin and ventral tissues. The strength of their connection to these different tissues varies at different time points of myotome formation, and accordingly, different amounts of friction are generated across the tissue. Effectively, the side regions of the developing myotome are under greater friction than the central region. As new segments push the myotome forward, this leads to the formation of a shallow 'U' shape in the myotome tissue.

Secondly, cells within the future myotome begin to elongate as they form muscle fibres. The research team revealed that this transformation process generates an active, non-uniform force along certain directions within the somite tissue, which results in the 'U' shape sharpening into the characteristic 'V'-shaped chevron. Lastly, orientated cell rearrangements within the future myotome help to stabilise the newly acquired chevron shape.

Deciphering the patterns guiding organ formation

Asst Prof Saunders, a theoretical physicist who applies physical principles to characterise biological processes that take place during development, said, "This work reveals how a carefully balanced interplay between cell morphology and mechanical interactions can drive the emergence of complex shapes during development. We are excited to see if the principles we have revealed are also acting in the shaping of other organs."

Read more at Science Daily

Massive gas disk raises questions about planet formation theory

Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) found a young star surrounded by an astonishing mass of gas. The star, called 49 Ceti, is 40 million years old, and conventional theories of planet formation predict that the gas should have disappeared by that age. The enigmatically large amount of gas calls for a reconsideration of our current understanding of planet formation.

Planets are formed in gaseous dusty disks called protoplanetary disks around young stars. Dust particles aggregate together to form Earth-like planets or to become the cores of more massive planets by collecting large amounts of gas from the disk to form Jupiter-like gaseous giant planets. According to current theories, as time goes by the gas in the disk is either incorporated into planets or blown away by radiation pressure from the central star. In the end, the star is surrounded by planets and a disk of dusty debris. This dusty disk, called a debris disk, implies that the planet formation process is almost finished.

Recent advances in radio telescopes have yielded a surprise in this field. Astronomers have found that several debris disks still possess some amount of gas. If the gas remains long in the debris disks, planetary seeds may have enough time and material to evolve into giant planets like Jupiter. Therefore, the gas in a debris disk affects the composition of the resultant planetary system.

"We found atomic carbon gas in the debris disk around 49 Ceti by using more than 100 hours of observations on the ASTE telescope," says Aya Higuchi, an astronomer at the National Astronomical Observatory of Japan (NAOJ). ASTE is a 10-m diameter radio telescope in Chile operated by NAOJ. "As a natural extension, we used ALMA to obtain a more detailed view, and that gave us the second surprise. The carbon gas around 49 Ceti turned out to be 10 times more abundant than our previous estimation."

Thanks to ALMA's high resolution, the team revealed the spatial distribution of carbon atoms in a debris disk for the first time. Carbon atoms are more widely distributed than carbon monoxide, the second most abundant molecule around young stars (hydrogen molecules being the most abundant). The amount of carbon atoms is so large that the team even detected faint radio waves from a rarer form of carbon, 13C. This is the first detection of 13C emission at 492 GHz in any astronomical object; that emission is usually hidden behind the emission of normal 12C.

"The amount of 13C is only 1% of 12C, therefore the detection of 13C in the debris disk was totally unexpected," says Higuchi. "It is clear evidence that 49 Ceti has a surprisingly large amount of gas."

Read more at Science Daily

Chronobiology: 'We'll be in later'

Students attending a high school in Germany can decide whether to begin the school day at the normal early time or an hour later. According to chronobiologists at Ludwig-Maximilians-Universitaet (LMU) in Munich, the measure has had a positive effect on both their sleep and their learning experience.

They fall asleep too late at night, and are rudely expelled from dreamland by the shrill tones of the alarm clock in the morning. Classes begin early and they must be prepared to show their mettle.

Adolescents are constantly sleep deprived, a phenomenon that can be observed worldwide. In addition, the problem is no longer confined to certain personality types and therefore of individual concern; it has become a public health issue. Indeed, the Centers for Disease Control and Prevention in the US have officially designated the matter as a public health concern. The consequences of chronic sleep deficit include not only a reduced ability to concentrate but also an increased accident risk on the way to and from school. Studies have also detected higher risks for depression, obesity, diabetes and other chronic metabolic diseases. In light of these findings, it is hardly surprising that calls for school classes to begin later in the morning are becoming louder.

But would such a move do any good? Would a later school start actually change the sleep of adolescents for the better, and enhance their cognitive performance in class? So far, there have been few research studies of this question in Europe. A group of chronobiologists in Munich, led by Eva Winnebeck and Till Roenneberg, studied the issue at a high school in Germany that made an exceptional change to its starting-time arrangement. This school instituted a system that allows senior students to decide day by day whether to attend the first class of the day or to come to school an hour later. This form of flexible scheduling is possible because the school has adopted what is known as the Dalton Plan (for which the institution won the German School Prize in 2013). A major component of this idea (which originated in the US) is that students are required to tackle parts of the school curriculum independently in the context of project phases. The school timetable allots 10 hours per week for these activities, half of which are scheduled for the first class at 8 o'clock in the morning. Students who choose to skip this class must work through the material in their free periods during the day or after the end of the regular school day.

Students from the three senior grades (i.e. 15- to 19-year-olds) served as the study population for LMU researchers from the Institute of Medical Psychology. For 3 weeks before and 6 weeks after the introduction of the flexible system in the school in Alsdorf, the team observed how the students reacted and adapted to the change. The participating students were asked to record their sleeping patterns daily, and around half of them were equipped with activity monitors for objective sleep monitoring. At the end of the study, the participants provided information on their sleep, their overall level of satisfaction and their ability to concentrate in class and while studying course content.

Read more at Science Daily

Moms' obesity in pregnancy is linked to lag in sons' development and IQ

A mother's obesity in pregnancy can affect her child's development years down the road, according to researchers who found lagging motor skills in preschoolers and lower IQ in middle childhood for boys whose mothers were severely overweight while pregnant. A team of epidemiologists, nutritionists and environmental health researchers at Columbia University Mailman School of Public Health and the University of Texas at Austin found that the differences are comparable to the impact of lead exposure in early childhood. The findings are published in BMC Pediatrics.

The researchers studied 368 mothers and their children, all from similar economic circumstances and neighborhoods, during pregnancy and when the children were 3 and 7 years of age. At age 3, the researchers measured the children's motor skills and found that maternal obesity during pregnancy was strongly associated with lower motor skills in boys. At age 7, they again measured the children and found that the boys whose mothers were overweight or obese in pregnancy had scores 5 or more points lower on full-scale IQ tests, compared to boys whose mothers had been at a normal weight. No effect was found in the girls.

"What's striking is, even using different age-appropriate developmental assessments, we found these associations in both early and middle childhood, meaning these effects persist over time," said Elizabeth Widen, assistant professor of nutritional sciences at UT Austin and a co-author. "These findings aren't meant to shame or scare anyone. We are just beginning to understand some of these interactions between mothers' weight and the health of their babies."

It is not altogether clear why obesity in pregnancy would affect a child later, though previous research has found links between a mother's diet and cognitive development, such as higher IQ scores in kids whose mothers have more of certain fatty acids found in fish. Dietary and behavioral differences may be driving factors, or fetal development may be affected by some of the things that tend to happen in the bodies of people with a lot of extra weight, such as inflammation, metabolic stress, hormonal disruptions and high amounts of insulin and glucose.

The researchers controlled for several factors in their analysis, including race and ethnicity, marital status, the mother's education and IQ, as well as whether the children were born prematurely or exposed to environmental toxic chemicals like air pollution. What the pregnant mothers ate or whether they breastfed were not included in the analysis.

The team also examined and accounted for the nurturing environment in a child's home, looking at how parents interacted with their children and if the child was provided with books and toys. A nurturing home environment was found to lessen the negative effects of obesity.

According to Widen and senior author Andrew Rundle, DrPH, associate professor of Epidemiology at Columbia Mailman School, while the results showed that the effect on IQ was smaller in nurturing home environments, it was still there.

This is not the first study to find that boys appear to be more vulnerable in utero. Earlier research found lower performance IQ in boys, but not girls, whose mothers were exposed to lead, and a 2019 study suggested that boys whose mothers had higher fluoride exposure during pregnancy scored lower on an IQ assessment.

Because childhood IQ is a predictor of education level, socio-economic status and professional success later in life, researchers say there is potential for impacts to last into adulthood.

Read more at Science Daily

Dec 22, 2019

Early-life exposure to dogs may lessen risk of developing schizophrenia

Child with dog.
Ever since humans domesticated the dog, the faithful, obedient and protective animal has provided its owner with companionship and emotional well-being. Now, a study from Johns Hopkins Medicine suggests that being around "man's best friend" from an early age may have a health benefit as well -- lessening the chance of developing schizophrenia as an adult.

And while Fido may help prevent that condition, the jury is still out on whether or not there's any link, positive or negative, between being raised with Fluffy the cat and later developing either schizophrenia or bipolar disorder.

"Serious psychiatric disorders have been associated with alterations in the immune system linked to environmental exposures in early life, and since household pets are often among the first things with which children have close contact, it was logical for us to explore the possibilities of a connection between the two," says Robert Yolken, M.D., chair of the Stanley Division of Pediatric Neurovirology and professor of neurovirology in pediatrics at the Johns Hopkins Children's Center, and lead author of a research paper recently posted online in the journal PLOS One.

In the study, Yolken and colleagues at Sheppard Pratt Health System in Baltimore investigated the relationship between exposure to a household pet cat or dog during the first 12 years of life and a later diagnosis of schizophrenia or bipolar disorder. For schizophrenia, the researchers were surprised to see a statistically significant decrease in the risk of a person developing the disorder if exposed to a dog early in life. Across the entire age range studied, there was no significant link between dogs and bipolar disorder, or between cats and either psychiatric disorder.

The researchers caution that more studies are needed to confirm these findings, to search for the factors behind any strongly supported links, and to more precisely define the actual risks of developing psychiatric disorders from exposing infants and children under age 13 to pet cats and dogs.

According to the American Pet Products Association's most recent National Pet Owners Survey, there are 94 million pet cats and 90 million pet dogs in the United States. Previous studies have identified early life exposures to pet cats and dogs as environmental factors that may alter the immune system through various means, including allergic responses, contact with zoonotic (animal) bacteria and viruses, changes in a home's microbiome, and pet-induced stress reduction effects on human brain chemistry.

Some investigators, Yolken notes, suspect that this "immune modulation" may alter the risk of developing psychiatric disorders to which a person is genetically or otherwise predisposed.

In their current study, Yolken and colleagues looked at a population of 1,371 men and women between the ages of 18 and 65 that consisted of 396 people with schizophrenia, 381 with bipolar disorder and 594 controls. Information documented about each person included age, gender, race/ethnicity, place of birth and highest level of parental education (as a measure of socioeconomic status). Patients with schizophrenia and bipolar disorder were recruited from inpatient, day hospital and rehabilitation programs of Sheppard Pratt Health System. Control group members were recruited from the Baltimore area and were screened to rule out any current or past psychiatric disorders.

All study participants were asked if they had a household pet cat or dog or both during their first 12 years of life. Those who reported that a pet cat or dog was in their house when they were born were considered to be exposed to that animal since birth.

The relationship between the age of first household pet exposure and psychiatric diagnosis was defined using a statistical model that produces a hazard ratio -- a measure over time of how often specific events (in this case, exposure to a household pet and development of a psychiatric disorder) happen in a study group compared to their frequency in a control group. A hazard ratio of 1 suggests no difference between groups, while a ratio greater than 1 indicates an increased likelihood of developing schizophrenia or bipolar disorder. Likewise, a ratio less than 1 shows a decreased chance.
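
To make the hazard-ratio idea concrete, here is a minimal, purely illustrative sketch in Python. It uses the lifelines library and made-up data; the column names and values are assumptions for illustration, not the study's actual model or variables.

```python
# Minimal sketch (not the authors' code): estimating a hazard ratio for
# early dog exposure with a Cox proportional hazards model.
# All data below are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Each row is one participant: age at diagnosis (or age at censoring),
# whether a diagnosis occurred, and whether a dog was in the home before age 13.
df = pd.DataFrame({
    "age":           [22, 35, 41, 28, 50, 33, 45, 19, 60, 27],
    "diagnosed":     [ 1,  0,  0,  1,  0,  1,  0,  1,  0,  1],
    "dog_before_13": [ 1,  1,  0,  0,  1,  0,  1,  0,  1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="age", event_col="diagnosed")

# exp(coefficient) is the hazard ratio: below 1 suggests lower risk with
# dog exposure, above 1 suggests higher risk, and 1 suggests no difference.
print(np.exp(cph.params_["dog_before_13"]))
```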

Analyses were conducted for four age ranges: birth to 3, 4 to 5, 6 to 8 and 9 to 12.

Surprisingly, Yolken says, the findings suggest that people who are exposed to a pet dog before their 13th birthday are significantly less likely -- as much as 24% -- to be diagnosed later with schizophrenia.

"The largest apparent protective effect was found for children who had a household pet dog at birth or were first exposed after birth but before age 3," he says.

Yolken adds that if it is assumed that the hazard ratio is an accurate reflection of relative risk, then some 840,000 cases of schizophrenia (24% of the 3.5 million people diagnosed with the disorder in the United States) might be prevented by pet dog exposure or other factors associated with pet dog exposure.

"There are several plausible explanations for this possible 'protective' effect from contact with dogs -- perhaps something in the canine microbiome that gets passed to humans and bolsters the immune system against or subdues a genetic predisposition to schizophrenia," Yolken says.

For bipolar disorder, the study results suggest there is no risk association, either positive or negative, with being around dogs as an infant or young child.

Overall, for all ages examined, early exposure to pet cats was neutral: the study could not link felines with either an increased or decreased risk of developing schizophrenia or bipolar disorder.

"However, we did find a slightly increased risk of developing both disorders for those who were first in contact with cats between the ages of 9 and 12," Yolken says. "This indicates that the time of exposure may be critical to whether or not it alters the risk."

One example of a suspected pet-borne trigger for schizophrenia is the disease toxoplasmosis, a condition in which cats are the primary hosts of a parasite transmitted to humans via the animals' feces. Pregnant women have been advised for years not to change cat litter boxes to eliminate the risk of the illness passing through the placenta to their fetuses and causing a miscarriage, stillbirth, or potentially, psychiatric disorders in a child born with the infection.

In a 2003 review paper, Yolken and colleague E. Fuller Torrey, M.D., associate director of research at the Stanley Medical Research Institute in Bethesda, Maryland, provided evidence from multiple epidemiological studies conducted since 1953 that showed there also is a statistical connection between a person exposed to the parasite that causes toxoplasmosis and an increased risk of developing schizophrenia. The researchers found that a large number of people in those studies who were diagnosed with serious psychiatric disorders, including schizophrenia, also had high levels of antibodies to the toxoplasmosis parasite.

Because of this finding and others like it, most research has focused on investigating a potential link between early exposure to cats and psychiatric disorder development. Yolken says the most recent study is among the first to consider contact with dogs as well.

"A better understanding of the mechanisms underlying the associations between pet exposure and psychiatric disorders would allow us to develop appropriate prevention and treatment strategies," Yolken says.

Read more at Science Daily

Dogs process numerical quantities in similar brain region as humans

Dog and chalkboard addition.
Dogs spontaneously process basic numerical quantities, using a distinct part of their brains that corresponds closely to number-responsive neural regions in humans, finds a study at Emory University.

Biology Letters published the results, which suggest that a common neural mechanism has been deeply conserved across mammalian evolution.

"Our work not only shows that dogs use a similar part of their brain to process numbers of objects as humans do -- it shows that they don't need to be trained to do it," says Gregory Berns, Emory professor of psychology and senior author of the study.

"Understanding neural mechanisms -- both in humans and across species -- gives us insights into both how our brains evolved over time and how they function now," says co-author Stella Lourenco, an associate professor of psychology at Emory.

Such insights, Lourenco adds, may one day lead to practical applications such as treating brain abnormalities and improving artificial intelligence systems.

Lauren Aulet, a PhD candidate in Lourenco's lab, is first author of the study.

The study used functional magnetic resonance imaging (fMRI) to scan dogs' brains as they viewed varying numbers of dots flashed on a screen. The results showed that the dogs' parietotemporal cortex responded to differences in the number of the dots. The researchers held the total area of the dots constant, demonstrating that it was the number of the dots, not the size, that generated the response.
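
As a rough illustration of that control, the sketch below (hypothetical, not the study's stimulus code) shows how dot radii can be chosen so that total dot area stays fixed while the number of dots varies, so that any difference in neural response tracks number rather than overall size.

```python
# Illustrative sketch: pick dot radii so the summed circle area is constant
# regardless of how many dots appear. The total_area value is arbitrary.
import numpy as np

def dot_radii(n_dots, total_area=5000.0):
    """Return n_dots equal radii whose circles sum to total_area (pixels^2)."""
    area_per_dot = total_area / n_dots
    return np.full(n_dots, np.sqrt(area_per_dot / np.pi))

for n in (4, 8, 16):
    r = dot_radii(n)
    print(n, "dots, radius", round(r[0], 2),
          "summed area", round(np.sum(np.pi * r**2), 1))
```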

The approximate number system supports the ability to rapidly estimate a quantity of objects in a scene, such as the number of predators approaching or the amount of food available for foraging. Evidence suggests that humans primarily draw on their parietal cortex for this ability, which is present even in infancy.

This basic sensitivity to numerical information, known as numerosity, does not rely on symbolic thought or training and appears to be widespread throughout the animal kingdom. Much of the research in non-humans, however, has involved intensive training of the subjects.

Previous research, for example, has found that particular neurons in the parietal cortex of monkeys are attuned to numerical values. Such studies had not clarified whether numerosity is a spontaneous system in non-human primates, because the subjects underwent many trials and received rewards for selecting scenes with greater numbers of dots in preparation for the experiments.

Behavioral studies in dogs that were trained in the task of discriminating between different quantities of objects have also indicated that dogs are sensitive to numerosity.

The Emory researchers wanted to delve further into the neural underpinnings of canine number perception using fMRI.

Berns is founder of the Dog Project, which is researching evolutionary questions surrounding man's best and oldest friend. The project was the first to train dogs to voluntarily enter an fMRI scanner and remain motionless during scanning, without restraint or sedation.

Lourenco primarily researches human visual perception, cognition and development.

Eleven dogs of varying breeds were involved in the current fMRI experiments. The dogs did not receive advance training in numerosity. After entering the fMRI, they passively viewed dot arrays that varied in numerical value. Eight of the 11 dogs showed greater activation in the parietotemporal cortex when the ratio between alternating dot arrays was more dissimilar than when the numerical values were constant.

"We went right to the source, observing the dogs' brains, to get a direct understanding of what their neurons were doing when the dogs viewed varying quantities of dots," Aulet says. "That allowed us to bypass the weaknesses of previous behavioral studies of dogs and some other species."

Humans and dogs are separated by 80 million years of evolution, Berns notes. "Our results provide some of the strongest evidence yet that numerosity is a shared neural mechanism that goes back at least that far," he says.

Unlike dogs and other animals, humans are able to build on basic numerosity in order to do more complex math, drawing primarily on the prefrontal cortex. "Part of the reason that we are able to do calculus and algebra is because we have this fundamental ability for numerosity that we share with other animals," Aulet says. "I'm interested in learning how we evolved that higher math ability and how these skills develop over time in individuals, starting with basic numerosity in infancy."

Read more at Science Daily

Dec 21, 2019

Scientists find iron 'snow' in Earth's core

Illustration of Earth's core.
The Earth's inner core is hot, under immense pressure and snow-capped, according to new research that could help scientists better understand forces that affect the entire planet.

The snow is made of tiny particles of iron -- much heavier than any snowflake on Earth's surface -- that fall from the molten outer core and pile on top of the inner core, creating piles up to 200 miles thick that cover the inner core.

The image may sound like an alien winter wonderland. But the scientists who led the research said it is akin to how rocks form inside volcanoes.

"The Earth's metallic core works like a magma chamber that we know better of in the crust," said Jung-Fu Lin, a professor in the Jackson School of Geosciences at The University of Texas at Austin and a co-author of the study.

The study is available online and will be published in the print edition of the journal JGR Solid Earth on December 23.

Youjun Zhang, an associate professor at Sichuan University in China, led the study. The other co-authors include Jackson School graduate student Peter Nelson; and Nick Dygert, an assistant professor at the University of Tennessee who conducted the research during a postdoctoral fellowship at the Jackson School.

The Earth's core can't be sampled, so scientists study it by recording and analyzing signals from seismic waves (a type of energy wave) as they pass through the Earth.

However, aberrations between recent seismic wave data and the values that would be expected based on the current model of the Earth's core have raised questions. The waves move more slowly than expected as they pass through the base of the outer core, and faster than expected when moving through the eastern hemisphere of the top of the inner core.

The study proposes the iron snow-capped core as an explanation for these aberrations. The scientist S.I. Braginskii proposed in the early 1960s that a slurry layer exists between the inner and outer core, but prevailing knowledge about heat and pressure conditions in the core environment quashed that theory. However, new data from experiments on core-like materials, conducted by Zhang and drawn from more recent scientific literature, show that crystallization is possible and that about 15% of the lowermost outer core could be made of iron-based crystals that eventually fall through the liquid outer core and settle on top of the solid inner core.

"It's sort of a bizarre thing to think about," Dygert said. "You have crystals within the outer core snowing down onto the inner core over a distance of several hundred kilometers."

The researchers point to the accumulated snow pack as the cause of the seismic aberrations. The slurry-like composition slows the seismic waves. The variation in snow pile size -- thinner in the eastern hemisphere and thicker in the western -- explains the change in speed.

"The inner-core boundary is not a simple and smooth surface, which may affect the thermal conduction and the convections of the core," Zhang said.

The paper compares the snowing of iron particles with a process that happens inside magma chambers closer to the Earth's surface, which involves minerals crystallizing out of the melt and glomming together. In magma chambers, the compaction of the minerals creates what's known as "cumulate rock." In the Earth's core, the compaction of the iron contributes to the growth of the inner core and shrinking of the outer core.

And given the core's influence over phenomena that affects the entire planet, from generating its magnetic field to radiating the heat that drives the movement of tectonic plates, understanding more about its composition and behavior could help in understanding how these larger processes work.

Bruce Buffet, a geosciences professor at the University of California, Berkeley, who studies planet interiors and who was not involved in the study, said that the research confronts longstanding questions about the Earth's interior and could even help reveal more about how the Earth's core came to be.

"Relating the model predictions to the anomalous observations allows us to draw inferences about the possible compositions of the liquid core and maybe connect this information to the conditions that prevailed at the time the planet was formed," he said. "The starting condition is an important factor in Earth becoming the planet we know."

Read more at Science Daily

Forgetfulness might depend on time of day

Pocket watch.
Can't remember something? Try waiting until later in the day. Researchers identified a gene in mice that seems to influence memory recall at different times of day and tracked how it causes mice to be more forgetful just before they normally wake up.

"We may have identified the first gene in mice specific to memory retrieval," said Professor Satoshi Kida from the University of Tokyo Department of Applied Biological Chemistry.

Every time you forget something, it could be because you didn't truly learn it -- like the name of the person you were just introduced to a minute ago; or it could be because you are not able to recall the information from where it is stored in your brain -- like the lyrics of your favorite song slipping your mind.

Many memory researchers study how new memories are made. The biology of forgetting is more complicated to study because of the difficulties of distinguishing between not knowing and not recalling.

"We designed a memory test that can differentiate between not learning versus knowing but not being able to remember," said Kida.

Researchers tested the memories of young adult male and female mice. In the "learning," or training, phase of the memory tests, researchers allowed mice to explore a new object for a few minutes.

Later, in the "recall" phase of the test, researchers observed how long the mice touched the object when it was reintroduced. Mice spend less time touching objects that they remember seeing previously. Researchers tested the mice's recall by reintroducing the same object at different times of day.

They did the same experiments with healthy mice and mice without BMAL1, a protein that regulates the expression of many other genes. BMAL1 normally fluctuates between low levels just before waking up and high levels before going to sleep.

Mice trained just before they normally woke up and tested just after they normally went to sleep did recognize the object.

Mice trained at the same time -- just before they normally woke up -- but tested 24 hours later did not recognize the object.

Healthy mice and mice without BMAL1 had the same pattern of results, but the mice without BMAL1 were even more forgetful just before they normally woke up. Researchers saw the same results when they tested mice on recognizing an object or recognizing another mouse.

Something about the time of day just before they normally wake up, when BMAL1 levels are normally low, causes mice to not recall something they definitely learned and know.

According to Kida, the memory research community has previously suspected that the body's internal, or circadian, clock that is responsible for regulating sleep-wake cycles also affects learning and memory formation.

"Now we have evidence that the circadian clocks are regulating memory recall," said Kida.

Researchers have traced the role of BMAL1 in memory retrieval to a specific area of the brain called the hippocampus. Additionally, researchers connected normal BMAL1 to activation of dopamine receptors and modification of other small signaling molecules in the brain.

"If we can identify ways to boost memory retrieval through this BMAL1 pathway, then we can think about applications to human diseases of memory deficit, like dementia and Alzheimer's disease," said Kida.

However, the purpose of having memory recall abilities that naturally fluctuate depending on the time of day remains a mystery.

Read more at Science Daily

Dec 20, 2019

Mowing urban lawns less intensely increases biodiversity, saves money and reduces pests

The researchers combined data across North America and Europe using a meta-analysis, a way of aggregating results from multiple studies to increase statistical strength. They found strong evidence that increased mowing intensity of urban lawns -- which included parks, roundabouts and road verges -- had negative ecological effects, particularly on invertebrate and plant diversity. Pest species, on the other hand, benefitted from intense lawn management.

"Even a modest reduction in lawn mowing frequency can bring a host of environmental benefits: increased pollinators, increased plant diversity and reduced greenhouse gas emissions. At the same time, a longer, healthier lawn makes it more resistant to pests, weeds, and drought events." said Dr Chris Watson, lead author of the study.

The issue with regular lawn mowing is that it favours grasses, which grow from the base of the plant, and low-growing species like dandelion and clover. Other species that have their growing tips or flowering stems regularly removed by mowing can't compete. Allowing plant diversity in urban lawns to increase has the knock-on effect of increasing the diversity of other organisms, such as pollinators and herbivores.

The effect of intense lawn mowing on pest species was the least studied aspect the authors examined, featuring in seven datasets across three studies in Eastern Canada. In all of these studies, however, they found that intensive lawn mowing resulted in an increase in the abundance of weeds and lawn pests.

"These findings support a lot of research done by the turfgrass industry that shows that the more disturbance a lawn gets, the higher the likelihood of pest and weed invasion." said Dr Chris Watson.

Common ragweed, which featured prominently in the studies, is one of the most allergenic plant species found in North America and Europe. Previous studies have estimated the cost of ragweed-based allergies to be CAD$155 million per year in Quebec and €133 million a year in Austria and Bavaria. Having a more rapid reproduction than other species, ragweed is able to colonise disturbances caused by intense mowing.

Chris Watson explained that "Certain lawn invaders, such as ragweed, can be decreased simply through reducing lawn mowing frequency. This will decrease the pollen load in the air and reduce the severity of hayfever symptoms, number of people affected, and medical costs."

To understand the economic costs of intensely mowed lawns, the researchers used a case study of the city of Trois-Rivières, Quebec, Canada. Using data on mowing contractor costs, they estimated a 36% reduction in public maintenance costs when mowing frequency was reduced from 15 to 10 times per year in high-use lawn areas and from three times to once a year in low-use areas.
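
As a back-of-envelope illustration of how such an estimate can be assembled from contractor costs, the sketch below uses entirely assumed per-mow costs for the high-use and low-use areas; only the mowing frequencies (15 to 10, and three to one) come from the case study described above.

```python
# Back-of-envelope sketch with hypothetical figures, not the paper's data:
# how reducing mowing frequency translates into lower contractor costs.
cost_per_mow = {"high_use": 1000.0, "low_use": 500.0}   # assumed cost per full pass
current_mows = {"high_use": 15, "low_use": 3}
reduced_mows = {"high_use": 10, "low_use": 1}

current_total = sum(cost_per_mow[z] * current_mows[z] for z in cost_per_mow)
reduced_total = sum(cost_per_mow[z] * reduced_mows[z] for z in cost_per_mow)

saving = 1 - reduced_total / current_total
print(f"Estimated saving: {saving:.0%}")   # about 36% with these assumed inputs
```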

"If citizens would like to see urban greenspace improvement, they have the ability to influence how governments go about this -- especially if it does not cost more money!" said Dr Chris Watson. "Likewise, complaints about long, messy lawns could quickly reduce the appetite of local government to trial these approaches -- so it's important to have some community information and education as well. We need to shake the outdated social stigma that comes from having a lawn a few centimetres longer than your neighbour's"

The potential for long grass to harbour ticks and rodents is a common concern. However, Dr Chris Watson said there is little evidence to support this. "The presence of ticks is more strongly related to host populations, like deer, than to the type of vegetation. With respect to small mammals, some species prefer longer grass whereas others do not. The next phase of our research aims to explore these negative perceptions in more detail."

For their meta-analysis the researchers identified studies in an urban setting that measured mowing intensity (either height or frequency) as an experimental factor. On top of the 14 studies they identified, which took place between 2004 and 2019, they also included three previously unpublished studies from their research group. A separate case study was used to estimate the economic costs of high intensity lawn management.

On the reasons for conducting a meta-analysis, Chris Watson explained that: "Often, ecological studies are done over only one or two years and can be heavily influenced by the weather conditions during the period of study. A meta-analysis looks beyond individual years or locations to provide a broad overview of a research subject."
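
For readers unfamiliar with the mechanics, the following sketch shows the basic inverse-variance weighting at the heart of a fixed-effect meta-analysis. The effect sizes and variances are invented for illustration and are not the study's data; the authors' actual model may differ.

```python
# Illustrative sketch: pooling per-study effect sizes with fixed-effect,
# inverse-variance weighting (the simplest form of meta-analysis).
import numpy as np

effects   = np.array([-0.40, -0.25, -0.55, -0.10])   # made-up per-study effects of mowing intensity
variances = np.array([ 0.04,  0.09,  0.02,  0.12])   # made-up per-study sampling variances

weights = 1.0 / variances                       # precise studies count for more
pooled  = np.sum(weights * effects) / np.sum(weights)
se      = np.sqrt(1.0 / np.sum(weights))        # standard error of the pooled effect

print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```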

The number of data sources available from previous studies ultimately limited the analysis. "In this case, all studies came from North America and Europe, so there is a big opportunity in seeing if the trends we found are confirmed elsewhere. Likewise, all the studies used to explore pest species were from Eastern Canada, so it is important to do more research in other places before applying these results generally." said Dr Chris Watson.

When looking at the economic impacts of intense lawn management the authors were only able to incorporate contractor costs which included worker's salaries, equipment operation and fuel. They were unable to include the costs of pesticides and fertiliser or factor in indirect economic benefits from improved ecosystem services like pollination.

Read more at Science Daily

Why is drinking in moderation so difficult for some people?

Compulsive drinking may be due to dysfunction in a specific brain pathway that normally helps keep drinking in check. The results are reported in the journal Biological Psychiatry.

In the United States, 14 million adults struggle with alcohol use disorder (AUD) -- formerly known as alcoholism. This disorder makes individuals unable to stop drinking even when they know the potential risks to health, jobs, and relationships.

"Difficulty saying no to alcohol, even when it could clearly lead to harm, is a defining feature of alcohol use disorders," said Andrew Holmes, PhD, senior investigator of the study and Chief of the Laboratory on Behavioral and Genomic Neuroscience at the National Institute on Alcohol Abuse and Alcoholism (NIAAA). "This study takes us a step further in understanding the brain mechanisms underlying compulsive drinking."

Many complex parts of behavior -- emotion, reward, motivation, anxiety -- are regulated by the cortex, the outer layers of the brain that are responsible for complex processes like decision-making. Unlike drugs like cocaine, alcohol has broad effects on the brain, which makes narrowing down a target for therapeutic treatment much more difficult.

"We want to understand how the brain normally regulates drinking, so we can answer questions about what happens when this regulation isn't happening as it should," said Lindsay Halladay, PhD, Assistant Professor of Psychology and Neuroscience at Santa Clara University, and lead author of the study.

To study how the brain regulates drinking, Halladay and colleagues trained mice in the lab to press a lever for an alcohol reward. Once trained, the mice were presented with a new, conflicting situation: press the same lever for alcohol and receive a light electric shock to their feet, or avoid that risk but forfeit the alcohol. After a short session, most mice quickly learn to avoid the shock and choose to give up the alcohol.

Halladay's team first used surgically-implanted electrodes to measure activity in regions of the cortex during that decision.

"We found a group of neurons in the medial prefrontal cortex that became active when mice approached the lever but aborted the lever press," said Halladay. "These neurons only responded when the mice did not press the lever, apparently deciding the risk of shock was too great, but not when mice chose alcohol over the risk of shock. This means that the neurons we identified may be responsible for putting the brakes on drinking when doing so may be dangerous."

The medial prefrontal cortex (mPFC) plays a role in many forms of decision-making and communicates with many regions of the brain, so Halladay's team explored those external connections.

The team used optogenetics, a viral engineering technique that allowed them to effectively shut down precise brain pathways by shining light in the brain. They shut down activity of cells in the mPFC that communicate with the nucleus accumbens, an area of the brain important for reward, and found that the number of risky lever presses increased.

"Shutting down this circuit restored alcohol-seeking despite the risk of shock," said Halladay. "This raises the possibility that alcohol use disorder stems from some form of dysfunction in this pathway."

Understanding compulsive drinking in some people relies on identifying the neural pathway that keeps drinking in check.

"Current treatments just aren't effective enough," said Halladay. "Nearly half of all people treated for AUD relapse within a year of seeking treatment."

Read more at Science Daily

NASA maps inner Milky Way, sees cosmic 'candy cane'

This image of the inner galaxy color codes different types of emission sources by merging microwave data (green) mapped by the Goddard-IRAM Superconducting 2-Millimeter Observer (GISMO) instrument with infrared (850 micrometers, blue) and radio observations (19.5 centimeters, red). Where star formation is in its infancy, cold dust shows blue and cyan, such as in the Sagittarius B2 molecular cloud complex. Yellow reveals more well-developed star factories, as in the Sagittarius B1 cloud. Red and orange show where high-energy electrons interact with magnetic fields, such as in the Radio Arc and Sagittarius A features. An area called the Sickle may supply the particles responsible for setting the Radio Arc aglow. Within the bright source Sagittarius A lies the Milky Way's monster black hole. The image spans a distance of 750 light-years.
A feature resembling a candy cane appears at the center of this colorful composite image of our Milky Way galaxy's central zone. But this is no cosmic confection. It spans 190 light-years and is one of a set of long, thin strands of ionized gas called filaments that emit radio waves.

This image includes newly published observations using an instrument designed and built at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Called the Goddard-IRAM Superconducting 2-Millimeter Observer (GISMO), the instrument was used in concert with a 30-meter radio telescope located on Pico Veleta, Spain, operated by the Institute for Radio Astronomy in the Millimeter Range headquartered in Grenoble, France.

"GISMO observes microwaves with a wavelength of 2 millimeters, allowing us to explore the galaxy in the transition zone between infrared light and longer radio wavelengths," said Johannes Staguhn, an astronomer at Johns Hopkins University in Baltimore who leads the GISMO team at Goddard. "Each of these portions of the spectrum is dominated by different types of emission, and GISMO shows us how they link together."

GISMO detected the most prominent radio filament in the galactic center, known as the Radio Arc, which forms the straight part of the cosmic candy cane. This is the shortest wavelength at which these curious structures have been observed. Scientists say the filaments delineate the edges of a large bubble produced by some energetic event at the galactic center, located within the bright region known as Sagittarius A about 27,000 light-years away from us. Additional red arcs in the image reveal other filaments.

"It was a real surprise to see the Radio Arc in the GISMO data," said Richard Arendt, a team member at the University of Maryland, Baltimore County and Goddard. "Its emission comes from high-speed electrons spiraling in a magnetic field, a process called synchrotron emission. Another feature GISMO sees, called the Sickle, is associated with star formation and may be the source of these high-speed electrons."

Two papers describing the composite image, one led by Arendt and one led by Staguhn, were published on Nov. 1 in the Astrophysical Journal.

The image shows the inner part of our galaxy, which hosts the largest and densest collection of giant molecular clouds in the Milky Way. These vast, cool clouds contain enough dense gas and dust to form tens of millions of stars like the Sun. The view spans a part of the sky about 1.6 degrees across -- equivalent to roughly three times the apparent size of the Moon -- or about 750 light-years wide.

To make the image, the team acquired GISMO data, shown in green, in April and November 2012. They then used archival observations from the European Space Agency's Herschel satellite to model the far-infrared glow of cold dust, which they then subtracted from the GISMO data. Next, they added, in blue, existing 850-micrometer infrared data from the SCUBA-2 instrument on the James Clerk Maxwell Telescope near the summit of Maunakea, Hawaii. Finally, they added, in red, archival longer-wavelength 19.5-centimeter radio observations from the National Science Foundation's Karl G. Jansky Very Large Array, located near Socorro, New Mexico. The higher-resolution infrared and radio data were then processed to match the lower-resolution GISMO observations.
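
The sketch below illustrates the general recipe described above (subtract a modelled component from one map, then stack three wavelength maps into colour channels) using random placeholder arrays. It is not the team's actual pipeline, and the array shapes, scalings and file name are assumptions made only for the example.

```python
# Illustrative sketch of a three-band colour composite with placeholder data.
import numpy as np
import matplotlib.pyplot as plt

shape = (256, 1024)                          # stand-in sky patch, rows x columns
gismo_2mm   = np.random.rand(*shape)         # 2 mm map (green channel)
dust_model  = 0.3 * np.random.rand(*shape)   # modelled far-infrared dust glow
scuba_850um = np.random.rand(*shape)         # 850 micrometer map (blue channel)
vla_19cm    = np.random.rand(*shape)         # 19.5 cm radio map (red channel)

# Remove the modelled dust contribution from the 2 mm map, clipping at zero.
green = np.clip(gismo_2mm - dust_model, 0, None)

def norm(img):
    """Scale a map to the 0..1 range for display."""
    return (img - img.min()) / (img.max() - img.min())

# Stack red, green and blue channels and save the composite.
rgb = np.dstack([norm(vla_19cm), norm(green), norm(scuba_850um)])
plt.imshow(rgb, origin="lower")
plt.axis("off")
plt.savefig("composite.png", dpi=150)
```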

Read more at Science Daily

ESO observations reveal black holes' breakfast at the cosmic dawn

This image shows one of the gas halos newly observed with the MUSE instrument on ESO's Very Large Telescope, superimposed on an older image of a galaxy merger obtained with ALMA. The large-scale halo of hydrogen gas is shown in blue, while the ALMA data is shown in orange.
Astronomers using ESO's Very Large Telescope have observed reservoirs of cool gas around some of the earliest galaxies in the Universe. These gas halos are the perfect food for supermassive black holes at the centre of these galaxies, which are now seen as they were over 12.5 billion years ago. This food storage might explain how these cosmic monsters grew so fast during a period in the Universe's history known as the Cosmic Dawn.

"We are now able to demonstrate, for the first time, that primordial galaxies do have enough food in their environments to sustain both the growth of supermassive black holes and vigorous star formation," says Emanuele Paolo Farina, of the Max Planck Institute for Astronomy in Heidelberg, Germany, who led the research published today in The Astrophysical Journal. "This adds a fundamental piece to the puzzle that astronomers are building to picture how cosmic structures formed more than 12 billion years ago."

Astronomers have wondered how supermassive black holes were able to grow so large so early on in the history of the Universe. "The presence of these early monsters, with masses several billion times the mass of our Sun, is a big mystery," says Farina, who is also affiliated with the Max Planck Institute for Astrophysics in Garching bei München. It means that the first black holes, which might have formed from the collapse of the first stars, must have grown very fast. But, until now, astronomers had not spotted 'black hole food' -- gas and dust -- in large enough quantities to explain this rapid growth.

To complicate matters further, previous observations with ALMA, the Atacama Large Millimeter/submillimeter Array, revealed a lot of dust and gas in these early galaxies that fuelled rapid star formation. These ALMA observations suggested that there could be little left over to feed a black hole.

To solve this mystery, Farina and his colleagues used the MUSE instrument on ESO's Very Large Telescope in the Chilean Atacama Desert to study quasars -- extremely bright objects powered by supermassive black holes which lie at the centre of massive galaxies. The study surveyed 31 quasars that are seen as they were more than 12.5 billion years ago, at a time when the Universe was still an infant, only about 870 million years old. This is one of the largest samples of quasars from this early on in the history of the Universe to be surveyed.

The astronomers found that 12 quasars were surrounded by enormous gas reservoirs: halos of cool, dense hydrogen gas extending 100,000 light years from the central black holes and with billions of times the mass of the Sun. The team, from Germany, the US, Italy and Chile, also found that these gas halos were tightly bound to the galaxies, providing the perfect food source to sustain both the growth of supermassive black holes and vigorous star formation.

The research was possible thanks to the superb sensitivity of MUSE, the Multi Unit Spectroscopic Explorer, on ESO's VLT, which Farina says was "a game changer" in the study of quasars. "In a matter of a few hours per target, we were able to delve into the surroundings of the most massive and voracious black holes present in the young Universe," he adds. While quasars are bright, the gas reservoirs around them are much harder to observe. But MUSE could detect the faint glow of the hydrogen gas in the halos, allowing astronomers to finally reveal the food stashes that power supermassive black holes in the early Universe.

Read more at Science Daily