Dec 21, 2019

Scientists find iron 'snow' in Earth's core

The Earth's inner core is hot, under immense pressure and snow-capped, according to new research that could help scientists better understand forces that affect the entire planet.

The snow is made of tiny particles of iron -- much heavier than any snowflake on Earth's surface -- that fall through the molten outer core and settle on the solid inner core, forming piles up to 200 miles thick.

The image may sound like an alien winter wonderland. But the scientists who led the research said it is akin to how rocks form inside volcanoes.

"The Earth's metallic core works like a magma chamber that we know better of in the crust," said Jung-Fu Lin, a professor in the Jackson School of Geosciences at The University of Texas at Austin and a co-author of the study.

The study is available online and will be published in the print edition of the journal JGR Solid Earth on December 23.

Youjun Zhang, an associate professor at Sichuan University in China, led the study. The other co-authors include Jackson School graduate student Peter Nelson; and Nick Dygert, an assistant professor at the University of Tennessee who conducted the research during a postdoctoral fellowship at the Jackson School.

The Earth's core can't be sampled, so scientists study it by recording and analyzing signals from seismic waves (a type of energy wave) as they pass through the Earth.

However, discrepancies between recent seismic wave data and the values expected under the current model of the Earth's core have raised questions. The waves move more slowly than expected as they pass through the base of the outer core, and faster than expected as they move through the eastern hemisphere of the top of the inner core.

The study proposes the iron snow-capped core as an explanation for these aberrations. The scientist S.I. Braginskii proposed in the early 1960s that a slurry layer exists between the inner and outer core, but prevailing knowledge about heat and pressure conditions in the core environment quashed that theory. However, new data from experiments on core-like materials conducted by Zhang, combined with results from more recent scientific literature, show that crystallization is possible and that about 15% of the lowermost outer core could be made of iron-based crystals that eventually fall through the liquid outer core and settle on top of the solid inner core.

"It's sort of a bizarre thing to think about," Dygert said. "You have crystals within the outer core snowing down onto the inner core over a distance of several hundred kilometers."

The researchers point to the accumulated snow pack as the cause of the seismic aberrations. The slurry-like composition slows the seismic waves. The variation in snow pile size -- thinner in the eastern hemisphere and thicker in the western -- explains the change in speed.

"The inner-core boundary is not a simple and smooth surface, which may affect the thermal conduction and the convections of the core," Zhang said.

The paper compares the snowing of iron particles with a process that happens inside magma chambers closer to the Earth's surface, which involves minerals crystallizing out of the melt and glomming together. In magma chambers, the compaction of the minerals creates what's known as "cumulate rock." In the Earth's core, the compaction of the iron contributes to the growth of the inner core and shrinking of the outer core.

And given the core's influence over phenomena that affect the entire planet, from generating its magnetic field to radiating the heat that drives the movement of tectonic plates, understanding more about its composition and behavior could help in understanding how these larger processes work.

Bruce Buffett, a geosciences professor at the University of California, Berkeley, who studies planet interiors and who was not involved in the study, said that the research confronts longstanding questions about the Earth's interior and could even help reveal more about how the Earth's core came to be.

"Relating the model predictions to the anomalous observations allows us to draw inferences about the possible compositions of the liquid core and maybe connect this information to the conditions that prevailed at the time the planet was formed," he said. "The starting condition is an important factor in Earth becoming the planet we know."

Read more at Science Daily

Forgetfulness might depend on time of day

Can't remember something? Try waiting until later in the day. Researchers identified a gene in mice that seems to influence memory recall at different times of day and tracked how it causes mice to be more forgetful just before they normally wake up.

"We may have identified the first gene in mice specific to memory retrieval," said Professor Satoshi Kida from the University of Tokyo Department of Applied Biological Chemistry.

Every time you forget something, it could be because you didn't truly learn it -- like the name of the person you were just introduced to a minute ago; or it could be because you are not able to recall the information from where it is stored in your brain -- like the lyrics of your favorite song slipping your mind.

Many memory researchers study how new memories are made. The biology of forgetting is more complicated to study because of the difficulties of distinguishing between not knowing and not recalling.

"We designed a memory test that can differentiate between not learning versus knowing but not being able to remember," said Kida.

Researchers tested the memories of young adult male and female mice. In the "learning," or training, phase of the memory tests, researchers allowed mice to explore a new object for a few minutes.

Later, in the "recall" phase of the test, researchers observed how long the mice touched the object when it was reintroduced. Mice spend less time touching objects that they remember seeing previously. Researchers tested the mice's recall by reintroducing the same object at different times of day.
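The recall measure described above boils down to a simple comparison: a mouse that remembers an object touches it less on re-exposure. A minimal sketch of that logic, with invented touch times and an assumed 0.8 threshold (the study's actual scoring is more involved):

```python
# Hypothetical sketch of the recall test's logic: a mouse is scored as
# "recognizing" an object if its touch time on re-exposure drops well
# below the touch time during training. The threshold and the example
# times below are invented for illustration.

def recognized(training_touch_s, recall_touch_s, threshold=0.8):
    # less touching on re-exposure than the threshold fraction of
    # training-phase touching counts as remembering the object
    return recall_touch_s < threshold * training_touch_s

remembers = recognized(training_touch_s=30.0, recall_touch_s=10.0)  # True
forgets = recognized(training_touch_s=30.0, recall_touch_s=29.0)    # False
```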

They did the same experiments with healthy mice and mice without BMAL1, a protein that regulates the expression of many other genes. BMAL1 normally fluctuates between low levels just before waking up and high levels before going to sleep.

Mice trained just before they normally woke up and tested just after they normally went to sleep did recognize the object.

Mice trained at the same time -- just before they normally woke up -- but tested 24 hours later did not recognize the object.

Healthy mice and mice without BMAL1 had the same pattern of results, but the mice without BMAL1 were even more forgetful just before they normally woke up. Researchers saw the same results when they tested mice on recognizing an object or recognizing another mouse.

Something about the time of day just before they normally wake up, when BMAL1 levels are normally low, causes mice to not recall something they definitely learned and know.

According to Kida, the memory research community has previously suspected that the body's internal, or circadian, clock that is responsible for regulating sleep-wake cycles also affects learning and memory formation.

"Now we have evidence that the circadian clocks are regulating memory recall," said Kida.

Researchers have traced the role of BMAL1 in memory retrieval to a specific area of the brain called the hippocampus. Additionally, researchers connected normal BMAL1 to activation of dopamine receptors and modification of other small signaling molecules in the brain.

"If we can identify ways to boost memory retrieval through this BMAL1 pathway, then we can think about applications to human diseases of memory deficit, like dementia and Alzheimer's disease," said Kida.

However, the purpose of having memory recall abilities that naturally fluctuate depending on the time of day remains a mystery.

Read more at Science Daily

Dec 20, 2019

Mowing urban lawns less intensely increases biodiversity, saves money and reduces pests

The researchers combined data across North America and Europe using a meta-analysis, a way of aggregating results from multiple studies to increase statistical strength. They found strong evidence that increased mowing intensity of urban lawns -- which included parks, roundabouts and road verges -- had negative ecological effects, particularly on invertebrate and plant diversity. Pest species, on the other hand, benefitted from intense lawn management.

"Even a modest reduction in lawn mowing frequency can bring a host of environmental benefits: increased pollinators, increased plant diversity and reduced greenhouse gas emissions. At the same time, a longer, healthier lawn makes it more resistant to pests, weeds, and drought events," said Dr Chris Watson, lead author of the study.

The issue with regular lawn mowing is that it favours grasses, which grow from the base of the plant, and low-growing species like dandelion and clover. Other species that have their growing tips or flowering stems regularly removed by mowing can't compete. Allowing plant diversity in urban lawns to increase has the knock-on effect of increasing the diversity of other organisms, such as pollinators and herbivores.

The effect of intense lawn mowing on pest species was the least-studied aspect of the research, featuring in just seven datasets across three studies in Eastern Canada. In all of these studies, however, intensive lawn mowing resulted in an increase in the abundance of weeds and lawn pests.

"These findings support a lot of research done by the turfgrass industry that shows that the more disturbance a lawn gets, the higher the likelihood of pest and weed invasion," said Dr Chris Watson.

Common ragweed, which featured prominently in the studies, is one of the most allergenic plant species found in North America and Europe. Previous studies have estimated the cost of ragweed-based allergies to be CAD$155 million per year in Quebec and €133 million a year in Austria and Bavaria. Because it reproduces more rapidly than other species, ragweed is able to colonise the disturbances caused by intense mowing.

Chris Watson explained that "Certain lawn invaders, such as ragweed, can be decreased simply through reducing lawn mowing frequency. This will decrease the pollen load in the air and reduce the severity of hayfever symptoms, number of people affected, and medical costs."

To understand the economic costs of intensely mowed lawns the researchers used a case study of the city of Trois-Rivières, Quebec, Canada. By using data on mowing contractor costs they estimated a 36% reduction of public maintenance costs when mowing frequency was reduced from 15 to 10 times per year in high use lawn areas and 3 times to once a year in low use areas.
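As a rough illustration of how such a budget estimate works, the calculation can be sketched as below. Only the frequency changes (15 to 10 mows per year in high-use areas, 3 to 1 in low-use areas) come from the case study; the per-mow contractor costs are invented, and with these particular assumed costs the saving happens to land near the reported 36%:

```python
# Illustrative sketch of a mowing-budget estimate like the Trois-Rivières
# case study. The per-mow costs are invented; only the mowing frequencies
# (15 -> 10 for high-use areas, 3 -> 1 for low-use areas) come from the study.

def total_budget(mows_per_year, cost_per_mow):
    # annual contractor cost summed across area types
    return sum(f * c for f, c in zip(mows_per_year, cost_per_mow))

costs = [100, 40]  # assumed per-mow cost: [high-use areas, low-use areas]

current = total_budget([15, 3], costs)  # 15*100 + 3*40 = 1620
reduced = total_budget([10, 1], costs)  # 10*100 + 1*40 = 1040
saving = 1 - reduced / current          # ~0.358, close to the reported 36%
```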

"If citizens would like to see urban greenspace improvement, they have the ability to influence how governments go about this -- especially if it does not cost more money!" said Dr Chris Watson. "Likewise, complaints about long, messy lawns could quickly reduce the appetite of local government to trial these approaches -- so it's important to have some community information and education as well. We need to shake the outdated social stigma that comes from having a lawn a few centimetres longer than your neighbour's."

The potential for long grass to harbour ticks and rodents is a common concern. However, Dr Chris Watson said there is little evidence to support this. "The presence of ticks is more strongly related to host populations, like deer, than to the type of vegetation. With respect to small mammals, some species prefer longer grass whereas others do not. The next phase of our research aims to explore these negative perceptions in more detail."

For their meta-analysis the researchers identified studies in an urban setting that measured mowing intensity (either height or frequency) as an experimental factor. On top of the 14 studies they identified, which took place between 2004 and 2019, they also included three previously unpublished studies from their research group. A separate case study was used to estimate the economic costs of high intensity lawn management.

On the reasons for conducting a meta-analysis, Chris Watson explained that: "Often, ecological studies are done over only one or two years and can be heavily influenced by the weather conditions during the period of study. A meta-analysis looks beyond individual years or locations to provide a broad overview of a research subject."
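For intuition, the core aggregation step of a meta-analysis can be sketched as inverse-variance weighting, where more precise studies count for more in the pooled result. This is a generic fixed-effect illustration, not the authors' actual model, and the effect sizes and variances below are made up:

```python
# Generic sketch of fixed-effect meta-analysis via inverse-variance
# weighting. Studies with smaller variance (more precision) get larger
# weights. The effect sizes and variances are invented for illustration.

def fixed_effect_meta(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# e.g. three studies reporting a standardized effect of mowing on diversity
effects = [0.8, 0.5, 1.1]
variances = [0.04, 0.09, 0.16]

pooled, pooled_var = fixed_effect_meta(effects, variances)
# the pooled estimate sits between the study estimates, pulled toward
# the most precise study, and has a smaller variance than any single study
```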

The number of data sources from previous studies available to the authors ultimately limited the analysis. "In this case, all studies came from North America and Europe so there is a big opportunity in seeing if the trends we found are confirmed elsewhere. Likewise, all the studies used to explore pest species were from Eastern Canada, so it is important to do more research in other places before applying these results generally," said Dr Chris Watson.

When looking at the economic impacts of intense lawn management, the authors were only able to incorporate contractor costs, which included workers' salaries, equipment operation and fuel. They were unable to include the costs of pesticides and fertiliser or factor in indirect economic benefits from improved ecosystem services like pollination.

Read more at Science Daily

Why is drinking in moderation so difficult for some people?

Compulsive drinking may be due to dysfunction in a specific brain pathway that normally helps keep drinking in check. The results are reported in the journal Biological Psychiatry.

In the United States, 14 million adults struggle with alcohol use disorder (AUD) -- formerly known as alcoholism. This disorder makes individuals unable to stop drinking even when they know the potential risks to health, jobs, and relationships.

"Difficulty saying no to alcohol, even when it could clearly lead to harm, is a defining feature of alcohol use disorders," said Andrew Holmes, PhD, senior investigator of the study and Chief of the Laboratory on Behavioral and Genomic Neuroscience at the National Institute on Alcohol Abuse and Alcoholism (NIAAA). "This study takes us a step further in understanding the brain mechanisms underlying compulsive drinking."

Many complex parts of behavior -- emotion, reward, motivation, anxiety -- are regulated by the cortex, the outer layers of the brain responsible for complex processes like decision-making. Unlike drugs such as cocaine, alcohol has broad effects on the brain, which makes narrowing down a target for therapeutic treatment much more difficult.

"We want to understand how the brain normally regulates drinking, so we can answer questions about what happens when this regulation isn't happening as it should," said Lindsay Halladay, PhD, Assistant Professor of Psychology and Neuroscience at Santa Clara University, and lead author of the study.

To study how the brain regulates drinking, Halladay and colleagues trained mice in the lab to press a lever for an alcohol reward. Once trained, the mice were presented with a new, conflicting situation: press the same lever for alcohol and receive a light electric shock to their feet, or avoid that risk but forfeit the alcohol. After a short session, most mice quickly learn to avoid the shock and choose to give up the alcohol.

Halladay's team first used surgically-implanted electrodes to measure activity in regions of the cortex during that decision.

"We found a group of neurons in the medial prefrontal cortex that became active when mice approached the lever but aborted the lever press," said Halladay. "These neurons only responded when the mice did not press the lever, apparently deciding the risk of shock was too great, but not when mice chose alcohol over the risk of shock. This means that the neurons we identified may be responsible for putting the brakes on drinking when doing so may be dangerous."

The medial prefrontal cortex (mPFC) plays a role in many forms of decision-making and communicates with many regions of the brain, so Halladay's team explored those external connections.

The team used optogenetics, a viral engineering technique that allowed them to effectively shut down precise brain pathways by shining light in the brain. They shut down activity of cells in the mPFC that communicate with the nucleus accumbens, an area of the brain important for reward, and found that the number of risky lever presses increased.

"Shutting down this circuit restored alcohol-seeking despite the risk of shock," said Halladay. "This raises the possibility that alcohol use disorder stems from some form of dysfunction in this pathway."

Understanding compulsive drinking in some people relies on identifying the neural pathway that keeps drinking in check.

"Current treatments just aren't effective enough," said Halladay. "Nearly half of all people treated for AUD relapse within a year of seeking treatment."

Read more at Science Daily

NASA maps inner Milky Way, sees cosmic 'candy cane'

This image of the inner galaxy color codes different types of emission sources by merging microwave data (green) mapped by the Goddard-IRAM Superconducting 2-Millimeter Observer (GISMO) instrument with infrared (850 micrometers, blue) and radio observations (19.5 centimeters, red). Where star formation is in its infancy, cold dust shows blue and cyan, such as in the Sagittarius B2 molecular cloud complex. Yellow reveals more well-developed star factories, as in the Sagittarius B1 cloud. Red and orange show where high-energy electrons interact with magnetic fields, such as in the Radio Arc and Sagittarius A features. An area called the Sickle may supply the particles responsible for setting the Radio Arc aglow. Within the bright source Sagittarius A lies the Milky Way's monster black hole. The image spans a distance of 750 light-years.
A feature resembling a candy cane appears at the center of this colorful composite image of our Milky Way galaxy's central zone. But this is no cosmic confection. It spans 190 light-years and is one of a set of long, thin strands of ionized gas called filaments that emit radio waves.

This image includes newly published observations using an instrument designed and built at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Called the Goddard-IRAM Superconducting 2-Millimeter Observer (GISMO), the instrument was used in concert with a 30-meter radio telescope located on Pico Veleta, Spain, operated by the Institute for Radio Astronomy in the Millimeter Range headquartered in Grenoble, France.

"GISMO observes microwaves with a wavelength of 2 millimeters, allowing us to explore the galaxy in the transition zone between infrared light and longer radio wavelengths," said Johannes Staguhn, an astronomer at Johns Hopkins University in Baltimore who leads the GISMO team at Goddard. "Each of these portions of the spectrum is dominated by different types of emission, and GISMO shows us how they link together."

GISMO detected the most prominent radio filament in the galactic center, known as the Radio Arc, which forms the straight part of the cosmic candy cane. This is the shortest wavelength at which these curious structures have been observed. Scientists say the filaments delineate the edges of a large bubble produced by some energetic event at the galactic center, located within the bright region known as Sagittarius A about 27,000 light-years away from us. Additional red arcs in the image reveal other filaments.

"It was a real surprise to see the Radio Arc in the GISMO data," said Richard Arendt, a team member at the University of Maryland, Baltimore County and Goddard. "Its emission comes from high-speed electrons spiraling in a magnetic field, a process called synchrotron emission. Another feature GISMO sees, called the Sickle, is associated with star formation and may be the source of these high-speed electrons."

Two papers describing the composite image, one led by Arendt and one led by Staguhn, were published on Nov. 1 in the Astrophysical Journal.

The image shows the inner part of our galaxy, which hosts the largest and densest collection of giant molecular clouds in the Milky Way. These vast, cool clouds contain enough dense gas and dust to form tens of millions of stars like the Sun. The view spans a part of the sky about 1.6 degrees across -- equivalent to roughly three times the apparent size of the Moon -- or about 750 light-years wide.

To make the image, the team acquired GISMO data, shown in green, in April and November 2012. They then used archival observations from the European Space Agency's Herschel satellite to model the far-infrared glow of cold dust, which they then subtracted from the GISMO data. Next, they added, in blue, existing 850-micrometer infrared data from the SCUBA-2 instrument on the James Clerk Maxwell Telescope near the summit of Maunakea, Hawaii. Finally, they added, in red, archival longer-wavelength 19.5-centimeter radio observations from the National Science Foundation's Karl G. Jansky Very Large Array, located near Socorro, New Mexico. The higher-resolution infrared and radio data were then processed to match the lower-resolution GISMO observations.
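The compositing steps above can be sketched as a toy three-channel image assembly: subtract a modeled dust glow from the microwave map, then assign each wavelength to a color channel. The arrays here are random stand-ins for real data, and the resolution-matching step the team performed is skipped:

```python
import numpy as np

# Toy sketch of the compositing described above: radio -> red,
# GISMO microwave -> green, infrared -> blue, with a smooth "dust model"
# (standing in for the Herschel-derived far-infrared glow) subtracted
# from the microwave map first. All arrays are random placeholders.

rng = np.random.default_rng(0)
shape = (64, 64)
radio = rng.random(shape)          # stand-in for the 19.5 cm VLA map
gismo = rng.random(shape)          # stand-in for the 2 mm GISMO map
infrared = rng.random(shape)       # stand-in for the 850 um SCUBA-2 map
dust_model = 0.1 * np.ones(shape)  # stand-in for the modeled dust glow

# subtract the modeled dust contribution, clamping at zero
green = np.clip(gismo - dust_model, 0, None)

def normalize(channel):
    # rescale a channel to [0, 1] for display
    return (channel - channel.min()) / (channel.max() - channel.min())

# stack the three normalized maps into an RGB image
composite = np.dstack([normalize(radio), normalize(green), normalize(infrared)])
```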

Read more at Science Daily

ESO observations reveal black holes' breakfast at the cosmic dawn

This image shows one of the gas halos newly observed with the MUSE instrument on ESO's Very Large Telescope superimposed on an older image of a galaxy merger obtained with ALMA. The large-scale halo of hydrogen gas is shown in blue, while the ALMA data is shown in orange.
Astronomers using ESO's Very Large Telescope have observed reservoirs of cool gas around some of the earliest galaxies in the Universe. These gas halos are the perfect food for supermassive black holes at the centre of these galaxies, which are now seen as they were over 12.5 billion years ago. This food storage might explain how these cosmic monsters grew so fast during a period in the Universe's history known as the Cosmic Dawn.

"We are now able to demonstrate, for the first time, that primordial galaxies do have enough food in their environments to sustain both the growth of supermassive black holes and vigorous star formation," says Emanuele Paolo Farina, of the Max Planck Institute for Astronomy in Heidelberg, Germany, who led the research published today in The Astrophysical Journal. "This adds a fundamental piece to the puzzle that astronomers are building to picture how cosmic structures formed more than 12 billion years ago."

Astronomers have wondered how supermassive black holes were able to grow so large so early on in the history of the Universe. "The presence of these early monsters, with masses several billion times the mass of our Sun, is a big mystery," says Farina, who is also affiliated with the Max Planck Institute for Astrophysics in Garching bei München. It means that the first black holes, which might have formed from the collapse of the first stars, must have grown very fast. But, until now, astronomers had not spotted 'black hole food' -- gas and dust -- in large enough quantities to explain this rapid growth.

To complicate matters further, previous observations with ALMA, the Atacama Large Millimeter/submillimeter Array, revealed a lot of dust and gas in these early galaxies that fuelled rapid star formation. These ALMA observations suggested that there could be little left over to feed a black hole.

To solve this mystery, Farina and his colleagues used the MUSE instrument on ESO's Very Large Telescope in the Chilean Atacama Desert to study quasars -- extremely bright objects powered by supermassive black holes which lie at the centre of massive galaxies. The study surveyed 31 quasars that are seen as they were more than 12.5 billion years ago, at a time when the Universe was still an infant, only about 870 million years old. This is one of the largest samples of quasars from this early on in the history of the Universe to be surveyed.

The astronomers found that 12 quasars were surrounded by enormous gas reservoirs: halos of cool, dense hydrogen gas extending 100,000 light years from the central black holes and with billions of times the mass of the Sun. The team, from Germany, the US, Italy and Chile, also found that these gas halos were tightly bound to the galaxies, providing the perfect food source to sustain both the growth of supermassive black holes and vigorous star formation.

The research was possible thanks to the superb sensitivity of MUSE, the Multi Unit Spectroscopic Explorer, on ESO's VLT, which Farina says was "a game changer" in the study of quasars. "In a matter of a few hours per target, we were able to delve into the surroundings of the most massive and voracious black holes present in the young Universe," he adds. While quasars are bright, the gas reservoirs around them are much harder to observe. But MUSE could detect the faint glow of the hydrogen gas in the halos, allowing astronomers to finally reveal the food stashes that power supermassive black holes in the early Universe.

Read more at Science Daily

Dec 19, 2019

Changes in the immune system explain why belly fat is bad for thinking

Iowa State researchers have found for the first time that less muscle and more body fat may affect how flexible our thinking gets as we become older, and changes in parts of the immune system could be responsible.

These findings could lead to new treatments that help maintain mental flexibility in aging adults with obesity, sedentary lifestyles, or muscle loss that naturally happens with aging.

The study, led by Auriel Willette, assistant professor of food science and human nutrition, and Brandon Klinedinst, a PhD student in neuroscience, looked at data from more than 4,000 middle-aged to older UK Biobank participants, both men and women. The researchers examined direct measurements of lean muscle mass, abdominal fat, and subcutaneous fat, and how they were related to changes in fluid intelligence over six years.

Willette and Klinedinst discovered people mostly in their 40s and 50s who had higher amounts of fat in their mid-section had worse fluid intelligence as they got older. Greater muscle mass, by contrast, appeared to be a protective factor. These relationships stayed the same even after taking into account chronological age, level of education, and socioeconomic status.

"Chronological age doesn't seem to be a factor in fluid intelligence decreasing over time," Willette said. "It appears to be biological age, which here is the amount of fat and muscle."

Generally, people begin to gain fat and lose lean muscle once they hit middle age, a trend that continues as they get older. To counter this, exercise routines that maintain lean muscle become increasingly important. Klinedinst said exercising, especially resistance training, is essential for middle-aged women, who naturally tend to have less muscle mass than men.

The study also looked at whether or not changes in immune system activity could explain links between fat or muscle and fluid intelligence. Previous studies have shown that people with a higher body mass index (BMI) have more immune system activity in their blood, which activates the immune system in the brain and causes problems with cognition. BMI only takes into account total body mass, so it has not been clear whether fat, muscle, or both jump-start the immune system.

In this study, in women, the entire link between more abdominal fat and worse fluid intelligence was explained by changes in two types of white blood cells: lymphocytes and eosinophils. In men, a completely different type of white blood cell, basophils, explained roughly half of the fat and fluid intelligence link. While muscle mass was protective, the immune system did not seem to play a role.

While the study found correlations between body fat and decreased fluid intelligence, it is unknown at this time if it could increase the risk of Alzheimer's disease.

"Further studies would be needed to see if people with less muscle mass and more fat mass are more likely to develop Alzheimer's disease, and what the role of the immune system is," Klinedinst said.

Starting a New Year's resolution now to work out more and eat healthier may be a good idea, not only for your overall health, but to maintain healthy brain function.

Read more at Science Daily

The majority consider themselves more environmentally friendly than others

Research from the University of Gothenburg shows that we tend to overestimate our personal environmental engagement. In a study with participants from Sweden, the United States, England, and India, most participants were convinced that they acted more environmentally friendly than the average person.

In the study, over 4,000 people responded to how much, and how often, they perform environmentally friendly activities compared to others. For example, buying eco-labelled products, saving household energy, and reducing purchases of plastic bags.

It turned out that the majority of the participants rated themselves as more environmentally friendly than others. Both in comparison to unknown people, and to their friends.

"The results point out our tendency to overestimate our own abilities, which is in line with previous studies where most people consider themselves to be more honest, more creative, and better drivers than others. This study shows that over-optimism, or the 'better-than-average' effect, also applies to environmentally friendly behaviours," says environmental psychology researcher Magnus Bergquist.

After analyzing data from different types of environmentally friendly activities, results revealed that the participants were more likely to overestimate their engagement in activities they perform often. Many seemed to draw the faulty conclusion that the activities they perform often, they also perform more often than others.

A consequence of thinking that you are more environmentally friendly than others, is that it can reduce the motivation to act environmentally friendly in the future. The study also showed that when we think we are more environmentally friendly than others, we actually tend to become somewhat less environmentally friendly.

According to Magnus Bergquist, one way of reducing the risk of over-optimism standing in our way for a real environmental commitment, may be trying to have a more realistic view of our own environmental efforts.

"If you think about it logically, the majority cannot be more environmentally friendly than others. One way to change this faulty opinion, is to inform people that others actually behave environmentally friendly, and thereby creating an environmentally friendly norm. Social norms affect us also in this area, we know this from previous studies," says Magnus Bergquist.

From Science Daily

Bilingual children are strong, creative storytellers, study shows

Bilingual children use as many words as monolingual children when telling a story, and demonstrate high levels of cognitive flexibility, according to new research by University of Alberta scientists.

"We found that the number of words that bilingual children use in their stories is highly correlated with their cognitive flexibility -- the ability to switch between thinking about different concepts," said Elena Nicoladis, lead author and professor in the Department of Psychology in the Faculty of Science. "This suggests that bilinguals are adept at using the medium of storytelling."

Vocabulary is a strong predictor of school achievement, and so is storytelling. "These results suggest that parents of bilingual children do not need to be concerned about long-term school achievement," said Nicoladis. "In a storytelling context, bilingual kids are able to use this flexibility to convey stories in creative ways."

The research examined a group of French-English bilingual children who have been taught two languages since birth, rather than learning a second language later in life. Results show that bilingual children used just as many words to tell a story in English as monolingual children. Participants also used just as many words in French as they did in English when telling a story.

Previous research has shown that bilingual children score lower than monolingual children on traditional vocabulary tests, meaning these results are changing our understanding of multiple languages and cognition in children.

"The past research is not surprising," added Nicoladis. "Learning a word is related to how much time you spend in each language. For bilingual children, time is split between languages. So, unsurprisingly, they tend to have lower vocabularies in each of their languages. However, this research shows that as a function of storytelling, bilingual children are equally strong as monolingual children."

This research used a new, highly sensitive measure for examining cognitive flexibility, examining a participant's ability to switch between games with different rules, while maintaining accuracy and reaction time. This study builds on previous research examining vocabulary in bilingual children who have learned English as a second language.

From Science Daily

Your DNA is not your destiny -- or a good predictor of your health

In most cases, your genes account for less than five per cent of your risk of developing a particular disease, according to new research by University of Alberta scientists.

In the largest meta-analysis ever conducted, scientists have examined two decades of data from studies that examine the relationships between common gene mutations, also known as single nucleotide polymorphisms (SNPs), and different diseases and conditions. And the results show that the links between most human diseases and genetics are shaky at best.

"Simply put, DNA is not your destiny, and SNPs are duds for disease prediction," said David Wishart, professor in the University of Alberta's Department of Biological Sciences and the Department of Computing Science and co-author on the study. "The vast majority of diseases, including many cancers, diabetes, and Alzheimer's disease, have a genetic contribution of 5 to 10 per cent at best."

The study also highlights some notable exceptions, including Crohn's disease, celiac disease, and macular degeneration, which have a genetic contribution of approximately 40 to 50 per cent.

"Despite these rare exceptions, it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses," explained Wishart.

Wishart and his research collaborators suggest that measuring metabolites, chemicals, proteins, or the microbiome provides a much more accurate measure of human disease risk and is also more accurate for diagnosis. The findings fly in the face of many modern gene-testing business models, which suggest that gene testing can accurately predict someone's risk for disease.

"The bottom line is that if you want to have an accurate measure of your health, your propensity for disease or what you can do about it, it's better to measure your metabolites, your microbes or your proteins -- not your genes," added Wishart. "This research also highlights the need to understand our environment and the safety or quality of our food, air, and water."

From Science Daily

Researchers determine age for last known settlement by a direct ancestor to modern humans

Homo erectus skull.
Homo erectus, one of modern humans' direct ancestors, was a wandering bunch. After the species dispersed from Africa about two million years ago, it colonized the ancient world, which included Asia and possibly Europe.

But about 400,000 years ago, Homo erectus essentially vanished. The lone exception was a spot called Ngandong, on the Indonesian island of Java. But scientists were unable to agree on a precise time period for the site -- until now.

In a new study published in the journal Nature, an international team of researchers led by the University of Iowa; Macquarie University; and the Institute of Technology Bandung, Indonesia, dates the last existence of Homo erectus at Ngandong between 108,000 and 117,000 years ago.

The researchers time-stamped the site by dating animal fossils from the same bonebed where 12 Homo erectus skull caps and two tibia had been found, and then dated the surrounding land forms -- mostly terraces below and above Ngandong -- to establish an accurate record for the primeval humans' possible last stand on Earth.

"This site is the last known appearance of Homo erectus found anywhere in the world," says Russell Ciochon, professor in the Department of Anthropology at Iowa and co-corresponding author on the study. "We can't say we dated the extinction, but we dated the last occurrence of it. We have no evidence Homo erectus lived later than that anywhere else."

The research team presents 52 new age estimates for the Ngandong evidence. They include animal fossil fragments and sediment from the rediscovered fossil bed where the original Homo erectus remains were found by Dutch surveyors in the 1930s, and a sequence of dates for the river terraces below and above the fossil site.

In addition, the researchers determined when mountains south of Ngandong first rose by dating stalagmites from caves in the Southern Mountains. This allowed them to determine when the Solo River began coursing through the Ngandong site, and the river terrace sequence was created.

"You have this incredible array of dates that are all consistent," Ciochon says. "This has to be the right range. That's why it's such a nice, tight paper. The dating is very consistent."

"The issues with the dating of Ngandong could only ever be resolved by an appreciation of the wider landscape," says Kira Westaway, associate professor at Macquarie University and a joint-lead author on the paper. "Fossils are the byproducts of complex landscape processes. We were able to nail the age of the site because we constrained the fossils within the river deposit, the river terrace, the sequence of terraces, and the volcanically active landscape."

Previous research by Ciochon and others shows Homo erectus hopscotched its way across the Indonesian archipelago, arriving on the island of Java about 1.6 million years ago. The timing was good: The area around Ngandong was mostly grassland, the same environment that cradled the species in Africa. Plants and animals were abundant. While the species continued to venture to other islands, Java, it appears, likely remained home -- or at least a way station -- to some bands of the species.

However, around 130,000 years ago, the environment at Ngandong changed, and so did Homo erectus's fortunes.

"There was a change in climate," Ciochon explains. "We know the fauna changed from open country, grassland, to a tropical rainforest (extending southward from today's Malaysia). Those were not the plants and animals that Homo erectus was used to, and the species just could not adapt."

Ciochon co-led a 12-member, international team that dug at Ngandong in 2008 and in 2010, accompanied by Yan Rizal and Yahdi Zaim, the lead researchers from the Institute of Technology, Bandung, on the excavation. Using notes from the Dutch surveyors' excavation in the 1930s, the team found the original Homo erectus bone bed at Ngandong and re-exposed it, collecting and dating 867 animal fossil fragments. Meanwhile, Westaway's team had been dating the surrounding landscapes, such as the terraces, during that time.

"It was coincidental" that the teams were working in the same place -- one group at the fossil bed, the other group dating the surrounding area, Ciochon says.

"With the data we had, we couldn't really date the Ngandong fossils," Ciochon continues. "We had dates on them, but they were minimum ages. So, we couldn't really say how old, although we knew we were in the ballpark. By working with Kira, who had a vast amount of dating data for the terraces, mountains, and other landscape features, we were able to provide precise regional chronological and geomorphic contexts for the Ngandong site."

Researchers from multiple institutions contributed to the manuscript, including those from the Institute of Technology in Bandung, Indonesia; the University of Wollongong, Australia; the University of Texas-Austin; Griffith University in Nathan, Australia; Southern Cross University in Lismore, Australia; the University of Oxford, United Kingdom; the Geological Agency in Bandung; the University of Queensland in Brisbane Australia; the University of New England in Armidale, Australia; the University of Copenhagen in Denmark; Minnesota State University-Mankato; Bluestone Heights in Cleveland, Ohio; the University of Alberta in Edmonton, Canada; Rutgers University; Indiana University; and Illinois State University.

Read more at Science Daily

Dec 18, 2019

Celebrated ancient Egyptian woman physician likely never existed

For decades, an ancient Egyptian known as Merit Ptah has been celebrated as the first female physician and a role model for women entering medicine. Yet a researcher from the University of Colorado Anschutz Medical Campus now says she never existed and is an example of how misconceptions can spread.

"Almost like a detective, I had to trace back her story, following every lead, to discover how it all began and who invented Merit Ptah," said Jakub Kwiecinski, PhD, an instructor in the Dept. of Immunology and Microbiology at the CU School of Medicine and a medical historian.

His study was published last week in the Journal of the History of Medicine and Allied Sciences.

Kwiecinski's interest in Merit Ptah ('beloved of the god Ptah') was sparked after seeing her name in so many places.

"Merit Ptah was everywhere. In online posts about women in STEM, in computer games, in popular history books, there's even a crater on Venus named after her," he said. "And yet, with all these mentions, there was no proof that she really existed. It soon became clear that there had been no ancient Egyptian woman physician called Merit Ptah."

Digging deep into the historical record, Kwiecinski discovered a case of mistaken identity that took on a life of its own, fueled by those eager for an inspirational story.

According to Kwiecinski, Merit Ptah the physician had her origins in the 1930s when Kate Campbell Hurd-Mead, a medical historian, doctor and activist, set out to write a complete history of medical women around the world. Her book was published in 1938.

She talked about the excavation of a tomb in the Valley of the Kings where there was a "picture of a woman doctor named Merit Ptah, the mother of a high priest, who is calling her 'the Chief Physician.'"

Kwiecinski said there was no record of such a person being a physician.

"Merit Ptah as a name existed in the Old Kingdom, but does not appear in any of the collated lists of ancient Egyptian healers -- not even as one of the 'legendary' or 'controversial' cases," he said. "She is also absent from the list of Old Kingdom women administrators. No Old Kingdom tombs are present in the Valley of the Kings, where the story places Merit Ptah's son, and only a handful of such tombs exist in the larger area, the Theban Necropolis."

The Old Kingdom of Egypt lasted from 2575 to 2150 BC.

But there was another woman who bears a striking resemblance to Merit Ptah. In 1929-30, an excavation in Giza uncovered the tomb of Akhethetep, an Old Kingdom courtier. Inside, a false door depicted a woman called Peseshet, presumably the tomb owner's mother, described as the 'Overseer of Healer Women.' Peseshet and Merit Ptah came from the same time period and were both mentioned in the tombs of their sons, who were high priestly officials.

This discovery was described in several books, and one of them found its way into Hurd-Mead's private library. Kwiecinski believes Hurd-Mead confused Merit Ptah with Peseshet.

"Unfortunately, Hurd-Mead in her own book accidentally mixed up the name of the ancient healer, as well as the date when she lived, and the location of the tomb," he said. "And so, from a misunderstood case of an authentic Egyptian woman healer, Peseshet, a seemingly earlier Merit Ptah, 'the first woman physician,' was born."

The Merit Ptah story spread far and wide, driven by a variety of forces. Kwiecinski said one factor was the popular perception of ancient Egypt as an almost fairytale land "outside time and space" perfectly suited for the creation of legendary stories.

The story spread through amateur historian circles, creating a kind of echo chamber not unlike how fake news stories circulate today.

"Finally, it was associated with an extremely emotional, partisan -- but also deeply personal -- issue of equal rights," he said. "Altogether this created a perfect storm that propelled the story of Merit Ptah into being told over and over again."

Yet Kwiecinski said the most striking part of the story is not the mistake but the determination of generations of women historians to recover the forgotten history of female healers, proving that science and medicine have never been exclusively male.

Read more at Science Daily

Distant Milky Way-like galaxies reveal star formation history of the universe

Look at this new radio image covered with dots, each of which is a distant galaxy! The brightest spots are galaxies that are powered by supermassive black holes and shine bright in radio light. But what makes this image special are the numerous faint dots filling the sky. These are distant galaxies like our own that have never been observed in radio light before.

To learn about the star-formation history of the universe, we need to look back in time. Galaxies throughout the universe have been forming stars for the past 13 billion years. But most stars were born between 8 and 11 billion years ago, during an era called "cosmic noon."

It has been a challenge for astronomers to study the faint light coming from this era. Optical telescopes can see very distant galaxies, but new stars are largely hidden inside dusty clouds of gas. Radio telescopes can see through the dust and observe the rare, bright starburst galaxies, but until now have not been sensitive enough to detect the signals from distant Milky Way-like galaxies that are responsible for most of the star formation in the universe.

An international team of astronomers using the South African Radio Astronomy Observatory (SARAO) MeerKAT telescope recently made the first radio observation sensitive enough to reveal these galaxies. "To make this image, we selected an area in the Southern Sky that contains no strong radio sources whose glare could blind a sensitive observation," said Tom Mauch of SARAO in Cape Town, South Africa, who led the team that published their results in The Astrophysical Journal.

The team used the 64 MeerKAT dishes to observe this area for a total of 130 hours. The resulting image shows a region of the sky that is comparable in area to five full Moons, containing tens of thousands of galaxies.

"Because radio waves travel at the speed of light, this image is a time machine that samples star formation in these distant galaxies over billions of years," explained co-author James Condon of the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia. "Because only short-lived stars that are less than 30 million years old send out radio waves, we know that the image is not contaminated by old stars. The radio light we see from each galaxy is therefore proportional to its star-forming rate at that moment in time."

The astronomers want to use this image to learn more about star formation in the entire universe. "These first results indicate that the star-formation rate around cosmic noon is even higher than was originally expected," said Allison Matthews, a graduate student at the University of Virginia and Grote Reber doctoral fellow at the NRAO. "Previous images could only detect the tip of the iceberg, the rare and luminous galaxies that produced only a small fraction of the stars in the universe. What we see now is the complete picture: these faint dots are the galaxies that formed most of the stars in the universe."

Read more at Science Daily

Strength of conviction won't help to persuade when people disagree

If you disagree with someone, it might not make any difference how certain they say they are: during disagreement, your brain's sensitivity to the strength of the other person's beliefs is reduced, finds a study led by UCL and City, University of London.

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people's opinions.

"We found that when people disagree, their brains fail to encode the quality of the other person's opinion, giving them less reason to change their mind," said the study's senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner's assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people's wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person's beliefs into one's own. Brain activity differed depending on the strength of the partner's wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner's wager and brain activity in the pMFC region.

The pMFC is known to be involved in decision-making, and helps to signal when a decision should be changed.

The researchers say that the tendency to ignore the strength of opposing beliefs may generate polarisation and facilitate the maintenance of false beliefs.

First author Dr Andreas Kappes (City, University of London) said: "Our findings could help make sense of some puzzling observations in domains including science and politics."

"For instance, over the last decade climate scientists have expressed greater confidence that climate change is human-made. Yet, the percentage of the population that believe this notion to be true has dropped over the same period of time. While there are complex, multi-layered reasons for this specific trend, such examples may be related to a bias in how the strength of others' opinions are encoded in our brains."

Read more at Science Daily

Unveiling a new map that reveals the hidden personalities of jobs

Thousands of Australian students will get their Higher School Certificates this week -- how many will choose the 'right career'?

According to new research published today in the Proceedings of the National Academy of Sciences, understanding the hidden personality dimensions of different roles could be the key to matching a person and their ideal occupation.

The findings of "Social media-predicted personality traits and values can help match people to their ideal jobs" point to the benefit of not only identifying the skills and experience in a particular industry, but also being aware of personality traits and values that characterise jobs -- and how they align with your own.

Lead researcher Associate Professor Peggy Kern of the University of Melbourne's Centre for Positive Psychology notes that "it's long been believed that different personalities align better with different jobs. For example, sales roles might better suit an extraverted individual, whereas a librarian role might better suit an introverted individual. But studies have been small-scale in nature. Never before has there been such large-scale evidence of the distinctive personality profiles that occur across occupations."

The research team looked at over 128,000 Twitter users, representing over 3,500 occupations to establish that different occupations tended to have very different personality profiles. For instance, software programmers and scientists tended to be more open to experience, whereas elite tennis players tended to be more conscientious and agreeable.

Remarkably, many similar jobs were grouped together -- based solely on the personality characteristics of users in those roles. For example, one cluster included many different technology jobs such as software programmers, web developers, and computer scientists.

The research used a variety of advanced artificial intelligence, machine learning and data analytics approaches to create a data-driven 'vocation compass' -- a recommendation system that finds the career that is a good fit with our personality.

Co-author Dr Marian-Andrei Rizoiu of the University of Technology Sydney said they were able to "successfully recommend an occupation aligned to people's personality traits with over 70 per cent accuracy."

"Even when the system was wrong it was not too far off, pointing to professions with very similar skill sets," he said. "For instance, it might suggest a poet becomes a fictional writer, not a petrochemical engineer."
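The matching idea described above can be illustrated with a small sketch. The study's actual pipeline is not reproduced here; the trait values, occupation profiles, and function names below are hypothetical, showing only the general nearest-profile approach: score a person's personality vector against the average vector of users in each occupation and recommend the closest matches.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical occupation "centroids": average Big Five trait scores
# (openness, conscientiousness, extraversion, agreeableness, neuroticism)
# for users in each role. Values are illustrative, not from the study.
occupation_profiles = {
    "software programmer": [0.9, 0.6, 0.3, 0.5, 0.4],
    "sales manager":       [0.5, 0.7, 0.9, 0.6, 0.3],
    "librarian":           [0.7, 0.8, 0.2, 0.7, 0.4],
}

def recommend(person, profiles, top_n=2):
    """Rank occupations by similarity to a person's trait vector."""
    ranked = sorted(profiles.items(),
                    key=lambda item: cosine(person, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# An open, conscientious, introverted person lands closest to "librarian",
# with a related role as runner-up -- mirroring the "not too far off" effect.
introvert = [0.8, 0.8, 0.2, 0.7, 0.4]
print(recommend(introvert, occupation_profiles))
```

Because every occupation gets a similarity score rather than a yes/no label, a wrong top recommendation still tends to be a nearby profile, which is consistent with the researchers' observation that errors pointed to professions with similar skill sets.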

With work taking up most of our waking hours, Professor Kern said many people want an occupation that "aligns with who they are as an individual."

"We leave behind digital fingerprints online as we use different platforms," said Professor Kern. "This creates the possibility for a modern approach to matching one's personality and occupation with an excellent accuracy rate."

Co-author, Professor Paul X McCarthy of the University of New South Wales in Sydney, said finding the perfect job was a lot like finding the perfect mate.

"At the moment we have an overly simplified view of careers, with a very small number of visible, high-status jobs as prizes for the hardest-working, best connected and smartest competitors.

"What if instead -- as our new vocation map shows -- the truth was closer to dating, where there are in fact a number of roles ideally suited for everyone?

"By better understanding the personality dimensions of different jobs we can find more perfect matches."

The researchers noted that while the study used publicly available data from Twitter, the underlying vocation compass map could be used to match people using information about their personality traits from social media, online surveys or other platforms.

"Our analytic approach potentially provides an alternative for identifying occupations which might interest a person, as opposed to relying upon extensive self-report assessments," said Dr Rizoiu.

Read more at Science Daily

Archaeologists find Bronze Age tombs lined with gold

Archaeologists with the University of Cincinnati have discovered two Bronze Age tombs containing a trove of engraved jewelry and artifacts that promise to unlock secrets about life in ancient Greece.

The UC archaeologists announced the discovery Tuesday in Greece.

Jack Davis and Sharon Stocker, archaeologists in UC's classics department, found the two beehive-shaped tombs in Pylos, Greece, last year while investigating the area around the grave of an individual they have called the "Griffin Warrior," a Greek man whose final resting place they discovered nearby in 2015.

Like the Griffin Warrior's tomb, the princely tombs overlooking the Mediterranean Sea also contained a wealth of cultural artifacts and delicate jewelry that could help historians fill in gaps in our knowledge of early Greek civilization.

UC's team spent more than 18 months excavating and documenting the find. The tombs were littered with flakes of gold leaf that once papered the walls.

"Like with the Griffin Warrior grave, by the end of the first week we knew we had something that was really important," said Stocker, who supervised the excavation.

"It soon became clear to us that lightning had struck again," said Davis, head of UC's classics department.

The Griffin Warrior is named for the mythological creature -- part eagle, part lion -- engraved on an ivory plaque in his tomb, which also contained armor, weaponry and gold jewelry. Among the priceless objects of art was an agate sealstone depicting mortal combat with such fine detail that Archaeology magazine hailed it as a "Bronze Age masterpiece."

Artifacts found in the princely tombs tell similar stories about life along the Mediterranean 3,500 years ago, Davis said. A gold ring depicted two bulls flanked by sheaves of grain, identified as barley by a paleobotanist who consulted on the project.

"It's an interesting scene of animal husbandry -- cattle mixed with grain production. It's the foundation of agriculture," Davis said. "As far as we know, it's the only representation of grain in the art of Crete or Minoan civilization."

Like the grave of the Griffin Warrior, the two family tombs contained artwork emblazoned with mythological creatures. An agate sealstone featured two lion-like creatures called genii standing upright on clawed feet. They carry a serving vase and an incense burner, a tribute for the altar before them featuring a sprouting sapling between horns of consecration, Stocker said.

Above the genii is a 16-pointed star. The same 16-pointed star also appears on a bronze and gold artifact in the grave, she said.

"It's rare. There aren't many 16-pointed stars in Mycenaean iconography. The fact that we have two objects with 16 points in two different media (agate and gold) is noteworthy," Stocker said.

The genius motif appears elsewhere in the East during this period, she said.

"One problem is we don't have any writing from the Minoan or Mycenaean time that talks of their religion or explains the importance of their symbols," Stocker said.

UC's team also found a gold pendant featuring the likeness of the Egyptian goddess Hathor.

"Its discovery is particularly interesting in light of the role she played in Egypt as protectress of the dead," Davis said.

The identity of the Griffin Warrior is a matter for speculation. Stocker said the combination of armor, weapons and jewelry found in his tomb strongly indicate he had military and religious authority, likely as the king known in later Mycenaean times as a wanax.

Likewise, the princely tombs paint a picture of accumulated wealth and status, she said. They contained amber from the Baltic, amethyst from Egypt, imported carnelian and lots of gold. The tombs sit on a scenic vista overlooking the Mediterranean Sea on the spot where the Palace of Nestor would later rise and fall to ruins.

"I think these are probably people who were very sophisticated for their time," she said. "They have come out of a place in history where there were few luxury items and imported goods. And all of a sudden at the time of the first tholos tombs, luxury items appear in Greece.

"You have this explosion of wealth. People are vying for power," she said. "It's the formative years that will give rise to the Classic Age of Greece."

The antiquities provide evidence that coastal Pylos was once an important destination for commerce and trade.

"If you look at a map, Pylos is a remote area now. You have to cross mountains to get here. Until recently, it hasn't even been on the tourist path," Stocker said. "But if you're coming by sea, the location makes more sense. It's on the way to Italy. What we're learning is that it's a much more central and important place on the Bronze Age trade route."

The princely tombs sit close to the palace of Nestor, a ruler mentioned in Homer's famous works "The Iliad" and "The Odyssey." The palace was discovered in 1939 by the late UC Classics professor Carl Blegen. Blegen had wanted to excavate in the 1950s in the field where Davis and Stocker found the new tombs but could not get permission from the property owner to expand his investigation. The tombs would have to wait years for another UC team to make the startling discovery hidden beneath the field's grape vines.

Excavating the site was particularly arduous. With the excavation season looming, delays in procuring the site forced researchers to postpone plans to study the site first with ground-penetrating radar. Instead, Stocker and Davis relied on their experience and intuition to focus on one disturbed area.

"There were noticeable concentrations of rocks on the surface once we got rid of the vegetation," she said.

Those turned out to be the exposed covers of deep tombs, one plunging nearly 15 feet. The tombs were protected from the elements and potential thieves by an estimated 40,000 stones the size of watermelons.

The boulders had sat undisturbed for millennia where they had fallen when the domes of the tombs collapsed. And now 3,500 years later, UC's team had to remove each stone individually.

"It was like going back to the Mycenaean Period. They had placed them by hand in the walls of the tombs and we were taking them out by hand," Stocker said. "It was a lot of work."

At every step of the excavation, the researchers used photogrammetry and digital mapping to document the location and orientation of objects in the tomb. This is especially valuable because of the great number of artifacts that were recovered, Davis said.

"We can see all levels as we excavated them and relate them one to the other in three dimensions," he said.

UC's team will continue working at Pylos for at least the next two years while they and other researchers around the world unravel mysteries contained in the artifacts.

Read more at Science Daily

Dec 17, 2019

Blue light may not be as disruptive to our sleep patterns as originally thought

Contrary to common belief, blue light may not be as disruptive to our sleep patterns as originally thought -- according to University of Manchester scientists.

According to the team, using dim, cooler, lights in the evening and bright warmer lights in the day may be more beneficial to our health.

Twilight is both dimmer and bluer than daylight, they say, and the body clock uses both of those features to determine the appropriate times to be asleep and awake.

Current technologies designed to limit our evening exposure to blue light, for example by changing the screen colour on mobile devices, may therefore send us mixed messages, they argue.

This is because the small changes in brightness they produce are accompanied by colours that more resemble day.

The research, which was carried out on mice, used specially designed lighting that allowed the team to adjust colour without changing brightness.

That showed blue colours produced weaker effects on the mouse body clock than equally bright yellow colours.

The findings, say the team, have important implications for the design of lighting and visual displays intended to ensure healthy patterns of sleep and alertness.

The study is published in Current Biology and funded by the Biotechnology and Biological Sciences Research Council.

The body clock uses a specialised light-sensitive protein in the eye, called melanopsin, to measure brightness; melanopsin is better at detecting shorter-wavelength photons.

This is why, say the team, researchers originally suggested blue light might have a stronger effect.

However, our perception of colour comes from the retinal cone cells, and the new research shows that the blue colour signals they supply reduce the impact of light on the clock.

Dr Tim Brown, from The University of Manchester, said: "We show the common view that blue light has the strongest effect on the clock is misguided; in fact, the blue colours that are associated with twilight have a weaker effect than white or yellow light of equivalent brightness.

"There is lots of interest in altering the impact of light on the clock by adjusting the brightness signals detected by melanopsin but current approaches usually do this by changing the ratio of short and long wavelength light; this provides a small difference in brightness at the expense of perceptible changes in colour."

He added: "We argue that this is not the best approach, since the changes in colour may oppose any benefits obtained from reducing the brightness signals detected by melanopsin.

"Our findings suggest that using dim, cooler lights in the evening and bright, warmer lights in the day may be more beneficial."

Read more at Science Daily

How mysterious circular DNA causes cancer in children

Cancer development is associated with the gradual accumulation of DNA defects over time. Thus, cancer is considered an age-related disease. But why do children develop cancer? An international team of researchers, led by Charité -- Universitätsmedizin Berlin and the Memorial Sloan Kettering Cancer Center in New York, now reveals that mysterious rings of DNA known as extrachromosomal circular DNA can contribute to cancer development in children. By producing the first detailed map of circular DNA, the scientists have shed unanticipated new light on long-standing questions in the field of cancer genetics. The work has been published in Nature Genetics.

Every year, nearly half a million people in Germany develop cancer. Approximately 2,100 cancer patients are children under the age of 18. The fact that the majority of cancers develop in older adults is due to the mechanisms contributing to cancer development. A range of exogenous factors, including tobacco smoke and radiation, can cause damage to cellular DNA. If this type of DNA damage is left to accumulate over many years, affected cells may lose control over cell division and growth. This results in cancer development. Children, however, are not old enough to be affected by this mechanism of cancer development. What, then, is the reason for childhood cancers? A team of researchers, led by Dr. Anton Henssen of Charité's Department of Pediatrics, Division of Oncology and Hematology and the Experimental and Clinical Research Center (ECRC), an institution jointly operated by Charité and the Max Delbrück Center for Molecular Medicine (MDC), is a large step closer to finding an answer. Working alongside a team of scientists led by Dr. Richard Koche from the Memorial Sloan Kettering Cancer Center and other international partners, the groups of researchers were able to show that rings of DNA can disrupt our cells' genetic information, which can contribute to cancer development.

Scientists have known about these ring-shaped sections of DNA for decades. Found inside our cells, they do not form part of our normal genetic information, which is stored in the form of chromosomes. It is for this reason that they are referred to as extrachromosomal circular DNA. Yet even now, scientists know relatively little about their function, mainly because they have lacked the technologies for a more detailed analysis of circular DNA. In their newly published study, the researchers combined state-of-the-art sequencing techniques with pioneering bioinformatics algorithms to perform the first-ever detailed mapping of circular DNA in neuroblastoma, a deadly childhood tumor. Based on their findings, the researchers were able to draw important conclusions regarding the development of this type of cancer.

Working with colleagues from the Barcelona Supercomputing Center, the researchers analyzed neuroblastoma tissue samples from a total of 93 children. Their analysis revealed that the prevalence and diversity of circular DNA is far greater than previously anticipated. According to the researchers' findings, each tissue sample contained on average 5,000 circular DNA copies. DNA sequencing also revealed the process by which specific DNA sections separate from a chromosome to form circular DNA before reintegrating into the chromosome at a different location. "This can potentially cause cancer if it results in the original sequence of genetic information being disrupted," explains the Emmy Noether Independent Junior Research Group's leader, Dr. Henssen, who is also a researcher at the German Cancer Consortium (DKTK) in Berlin and a Berlin Institute of Health (BIH) Clinician Scientist. Stressing the significance of the researchers' findings, Dr. Henssen says: "The detailed processes involved had not previously been elucidated in this manner and provide insight into how even young cells, like those found in children, can transform into cancer cells."

"We were also able to show that certain types of circular DNA may accelerate neuroblastoma growth," explains Dr. Koche and adds: "Testing for their presence may therefore make it easier to predict the course of the disease. Additionally, studying this process in the relatively quiet genomes of these pediatric tumors may help illuminate similar mechanisms which were previously missed in more complex adult cancers. Given the recent interest in circular DNA in a variety of normal and disease contexts, the current study may have implications for a broad range of tumor types and associated clinical outcomes."

Read more at Science Daily

Birds' seasonal migrations shift earlier as climate changes

Northern waterthrush
In what the authors believe is one of the first studies to examine climate change's impact on the timing of bird migration at a continental scale, researchers report that spring migrants are likely to pass certain stops earlier now than they would have 20 years ago. Temperature and migration timing were closely aligned, with the greatest changes in migration timing occurring in the regions warming most rapidly. Timing shifts were less apparent in fall, they add.
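
The kind of timing shift the authors describe can be pictured with a toy trend estimate. The sketch below uses invented peak-passage dates and a plain least-squares slope as a stand-in for the study's actual statistics; the station and numbers are hypothetical, for illustration only.

```python
def trend_days_per_decade(years: list[int], peak_days: list[float]) -> float:
    """Least-squares slope of peak-migration day vs. year, in days/decade.

    Negative values mean migrants pass the station earlier over time.
    """
    n = len(years)
    mean_year = sum(years) / n
    mean_day = sum(peak_days) / n
    num = sum((y - mean_year) * (d - mean_day) for y, d in zip(years, peak_days))
    den = sum((y - mean_year) ** 2 for y in years)
    return 10 * num / den  # slope per year -> per decade

# Invented peak-passage days (day of year) at one hypothetical radar station:
years = [1995, 2000, 2005, 2010, 2015]
days = [130.0, 129.2, 128.5, 127.9, 127.0]
print(round(trend_days_per_decade(years, days), 2))  # a negative slope: earlier passage
```

With these invented numbers the fitted trend is about a day and a half earlier per decade, the same direction of shift the study reports for warming regions.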

Writing in Nature Climate Change, lead researcher Kyle Horton at Colorado State University (CSU), with artificial intelligence researcher Dan Sheldon at the University of Massachusetts Amherst and senior author Andrew Farnsworth of the Cornell Lab of Ornithology, describe how they analyzed 24 years of radar data from the National Oceanic and Atmospheric Administration (NOAA) for this study of nocturnal bird migration.

Horton describes the breadth of the research, which observed nighttime migratory behaviors of hundreds of species representing billions of birds, as "critically important" to understanding shifting migration patterns. "To see changes in timing at continental scales is truly impressive, especially considering the diversity of behaviors and strategies used by the many species the radars capture," he says, adding that the observed shifts do not necessarily mean that migrants are keeping pace with climate change.

Farnsworth says the team's research answered, for the first time, key questions on birds and climate change. "Bird migration evolved largely as a response to changing climate," he points out. "It's a global phenomenon involving billions of birds annually. And it's not a surprise that birds' movements track changing climates. But how assemblages of bird populations respond in an era of such rapid and extreme changes in climate has been a black box. Capturing scales and magnitudes of migration in space and time has been impossible until recently."

Horton says that this access to the data and cloud computing greatly enhanced the team's ability to synthesize the findings. "To process all of these data, without cloud computing, it would have taken over a year of continuous computing," he notes. Instead, the team crunched the numbers in about 48 hours.

As Sheldon at UMass Amherst points out, these bird flights have been recorded for decades by the National Weather Service's network of constantly scanning weather radars, but until recently these data have been mostly out of reach for bird researchers, partly because the sheer magnitude of information, and the lack of tools to analyze it, made only limited studies possible.

For this study, Amazon Web Services provided access to the data. Also, a new tool, "MistNet," developed by Sheldon and colleagues at UMass Amherst along with others at the Cornell Lab, uses machine learning to extract bird data from the radar record and take advantage of the decades-long radar data archives. The name refers to the fine, almost invisible "mist nets" that ornithologists use to capture migratory songbirds.

As Sheldon explains, MistNet automates the processing of a massive data set that has measured bird migration over the continental U.S. for over two decades, with excellent results when compared to humans working by hand. It uses computer vision techniques to differentiate birds from rain on the images, a major hurdle that had challenged biologists for decades. "Historically, a person had to look at each radar image to determine whether it contained rain or birds," he notes. "We developed 'MistNet,' an artificial intelligence system to detect patterns in radar images and remove rain automatically."

Sheldon's group made earlier maps of where and when migration occurred over the past 24 years and animated these to illustrate, for example, the most intensive migration areas in the continental United States in a corridor roughly along and just west of the Mississippi River. MistNet also allows researchers to estimate flying velocity and traffic rates of migrating birds.

Horton at CSU says that the lack of change in fall migration patterns was a little surprising, though migration also tends to be a "little bit messier" during those months. "In the spring, we see bursts of migrants, moving at a fairly rapid pace, ultimately to reach the breeding grounds," he explained. "However, during the fall, there's not as much pressure to reach the wintering grounds, and migration tends to move at a slower, more punctuated pace."

A combination of factors makes fall migration more challenging to study, he adds. In the fall, birds are not competing for mates and the pace to reach their destination is more relaxed. There's also a wider age range of birds migrating, as the young eventually realize they need to migrate, too.

Read more at Science Daily

Ancient 'chewing gum' yields insights into people and bacteria of the past

Researchers from the University of Copenhagen have succeeded in extracting a complete human genome from a thousands-of-years-old "chewing gum." According to the researchers, it is a new, untapped source of ancient DNA.

During excavations on Lolland, archaeologists have found a 5,700-year-old type of "chewing gum" made from birch pitch. In a new study, researchers from the University of Copenhagen succeeded in extracting a complete ancient human genome from the pitch.

It is the first time that an entire ancient human genome has been extracted from anything other than human bones. The new research results have been published in the scientific journal Nature Communications.

'It is amazing to have gotten a complete ancient human genome from anything other than bone,' says Associate Professor Hannes Schroeder from the Globe Institute, University of Copenhagen, who led the research.

'What is more, we also retrieved DNA from oral microbes and several important human pathogens, which makes this a very valuable source of ancient DNA, especially for time periods where we have no human remains,' Hannes Schroeder adds.

Based on the ancient human genome, the researchers could tell that the birch pitch was chewed by a female. She was genetically more closely related to hunter-gatherers from mainland Europe than to those who lived in central Scandinavia at the time. They also found that she probably had dark skin, dark hair and blue eyes.

Sealed in mud

The birch pitch was found during archaeological excavations at Syltholm, east of Rødbyhavn in southern Denmark. The excavations are being carried out by the Museum Lolland-Falster in connection with the construction of the Fehmarn tunnel.

'Syltholm is completely unique. Almost everything is sealed in mud, which means that the preservation of organic remains is absolutely phenomenal,' says Theis Jensen, Postdoc at the Globe Institute, who worked on the study for his PhD and also participated in the excavations at Syltholm.

'It is the biggest Stone Age site in Denmark and the archaeological finds suggest that the people who occupied the site were heavily exploiting wild resources well into the Neolithic, which is the period when farming and domesticated animals were first introduced into southern Scandinavia,' Theis Jensen adds.

This is reflected in the DNA results, as the researchers also identified traces of plant and animal DNA in the pitch -- specifically hazelnuts and duck -- which may have been part of the individual's diet.

Bacterial evolution

In addition, the researchers succeeded in extracting DNA from several oral microbes in the pitch, including many commensal species and opportunistic pathogens.

'The preservation is incredibly good, and we managed to extract many different bacterial species that are characteristic of an oral microbiome. Our ancestors lived in a different environment and had a different lifestyle and diet, and it is therefore interesting to find out how this is reflected in their microbiome,' says Hannes Schroeder.

The researchers also found DNA that could be assigned to the Epstein-Barr virus, which is known to cause infectious mononucleosis, or glandular fever. According to Hannes Schroeder, ancient "chewing gums" hold great potential for researching the composition of our ancestral microbiome and the evolution of important human pathogens.

'It can help us understand how pathogens have evolved and spread over time, and what makes them particularly virulent in a given environment. At the same time, it may help predict how a pathogen will behave in the future, and how it might be contained or eradicated,' says Hannes Schroeder.

Read more at Science Daily

Dec 16, 2019

New tool uses AI to flag fake news for media fact-checkers

A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories.

The tool, developed by researchers at the University of Waterloo, uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.

"If they are, great, it's probably a real story," said Alexander Wong, a professor of systems design engineering at Waterloo. "But if most of the other material isn't supportive, it's a strong indication you're dealing with fake news."

Researchers were motivated to develop the tool by the proliferation of online posts and news stories that are fabricated to deceive or mislead readers, typically for political or economic gain.

Their system advances ongoing efforts to develop fully automated technology capable of detecting fake news by achieving 90 per cent accuracy in a key area of research known as stance detection.

Given a claim in one post or story and other posts and stories on the same subject that have been collected for comparison, the system can correctly determine if they support it or not nine out of 10 times.

That sets a new benchmark for accuracy on a large dataset created for a 2017 scientific competition called the Fake News Challenge.

While scientists around the world continue to work towards a fully automated system, the Waterloo technology could be used as a screening tool by human fact-checkers at social media and news organizations.

"It augments their capabilities and flags information that doesn't look quite right for verification," said Wong, a founding member of the Waterloo Artificial Intelligence Institute. "It isn't designed to replace people, but to help them fact-check faster and more reliably."

AI algorithms at the heart of the system were shown tens of thousands of claims paired with stories that either supported or didn't support them. Over time, the system learned to determine support or non-support itself when shown new claim-story pairs.
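
The claim-story setup described above can be illustrated with a toy stance check. This is only a sketch: the Waterloo system uses deep learning trained on labelled pairs, whereas the stand-in below scores support by crude word overlap, and every name and example here is hypothetical.

```python
def stance_score(claim: str, story: str) -> float:
    """Toy stance signal: fraction of claim words that appear in the story.

    A real stance-detection model learns this relationship from labelled
    claim-story pairs; word overlap is only a crude stand-in.
    """
    claim_words = set(claim.lower().split())
    story_words = set(story.lower().split())
    return len(claim_words & story_words) / len(claim_words)

def looks_fake(claim: str, related_stories: list[str], threshold: float = 0.5) -> bool:
    """Flag a claim when most related coverage fails to support it."""
    supporting = sum(stance_score(claim, s) >= threshold for s in related_stories)
    return supporting < len(related_stories) / 2

claim = "city bans all cars downtown"
stories = [
    "the city announced a ban on cars in the downtown core",
    "officials deny any plan to ban cars downtown",
    "local festival closes one street for a weekend",
]
print(looks_fake(claim, stories))  # True: most related coverage is unsupportive
```

Only one of the three invented stories clears the support threshold, so the claim is flagged for a human fact-checker, mirroring the screening role the researchers describe.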

"We need to empower journalists to uncover truth and keep us informed," said Chris Dulhanty, a graduate student who led the project. "This represents one effort in a larger body of work to mitigate the spread of disinformation."

From Science Daily

Uncertain role of natural gas in the transition to clean energy

A new MIT study examines the opposing roles of natural gas in the battle against climate change -- as a bridge toward a lower-emissions future, but also a contributor to greenhouse gas emissions.

Natural gas, which is mostly methane, is viewed as a significant "bridge fuel" to help the world move away from the greenhouse gas emissions of fossil fuels, since burning natural gas for electricity produces about half as much carbon dioxide as burning coal. But methane is itself a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas. Increasing its usage, as a strategy for decarbonizing the electricity supply, will also increase the potential for such "fugitive" methane emissions, although there is great uncertainty about how much to expect. Recent studies have documented the difficulty in even measuring today's emissions levels.

This uncertainty adds to the difficulty of assessing natural gas' role as a bridge to a net-zero-carbon energy system, and in knowing when to transition away from it. But strategic choices must be made now about whether to invest in natural gas infrastructure. This inspired MIT researchers to quantify timelines for cleaning up natural gas infrastructure in the United States or accelerating a shift away from it, while recognizing the uncertainty about fugitive methane emissions.

The study shows that in order for natural gas to be a major component of the nation's effort to meet greenhouse gas reduction targets over the coming decade, present methods of controlling methane leakage would have to improve by anywhere from 30 to 90 percent. Given current difficulties in monitoring methane, achieving those levels of reduction may be a challenge. Methane is a valuable commodity, and therefore companies producing, storing, and distributing it already have some incentive to minimize its losses. Even so, intentional natural gas venting and flaring (which emit carbon dioxide) continue.

The study also finds that policies favoring a direct move to carbon-free power sources, such as wind, solar, and nuclear, could meet the emissions targets without requiring such improvements in leakage mitigation, even though natural gas use would still be a significant part of the energy mix.

The researchers compared several different scenarios for curbing methane from the electric generation system in order to meet a target for 2030 of a 32 percent cut in carbon dioxide-equivalent emissions relative to 2005 levels, which is consistent with past U.S. commitments to mitigate climate change. The findings appear today in the journal Environmental Research Letters, in a paper by MIT postdoc Magdalena Klemun and Associate Professor Jessika Trancik.

Methane is a much stronger greenhouse gas than carbon dioxide, although how much more depends on the timeframe you choose to look at. Although methane traps heat much more, it doesn't last as long once it's in the atmosphere -- for decades, not centuries. When averaged over a 100-year timeline, which is the comparison most widely used, methane is approximately 25 times more powerful than carbon dioxide. But averaged over a 20-year period, it is 86 times stronger.

The actual leakage rates associated with the use of methane are widely distributed, highly variable, and very hard to pin down. Using figures from a variety of sources, the researchers found the overall range to be somewhere between 1.5 percent and 4.9 percent of the amount of gas produced and distributed. Some of this happens right at the wells, some occurs during processing and from storage tanks, and some is from the distribution system. Thus, a variety of different kinds of monitoring systems and mitigation measures may be needed to address the different conditions.
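
The figures above, a leakage range of 1.5 to 4.9 percent and warming potentials of roughly 25 (100-year) and 86 (20-year) times that of carbon dioxide, allow a rough back-of-envelope conversion. The sketch below is illustrative arithmetic only, not the study's model, and the one-megatonne production figure is an arbitrary example.

```python
def leaked_co2e(gas_produced_t: float, leak_rate: float, gwp: float) -> float:
    """CO2-equivalent (tonnes) of methane leaked from a quantity of gas.

    gas_produced_t: tonnes of natural gas produced and distributed
    leak_rate:      fraction lost as fugitive methane (0.015-0.049 per the study)
    gwp:            global warming potential of methane (~25 over 100 years,
                    ~86 over 20 years, per the figures quoted above)
    """
    return gas_produced_t * leak_rate * gwp

# The same leak, judged on the two time horizons quoted in the article:
for gwp in (25, 86):
    low = leaked_co2e(1_000_000, 0.015, gwp)
    high = leaked_co2e(1_000_000, 0.049, gwp)
    print(f"GWP {gwp}: {low:,.0f} to {high:,.0f} t CO2e per Mt of gas")
```

The spread is striking: on the 20-year horizon the worst-case leakage from a megatonne of gas is equivalent to over four megatonnes of carbon dioxide, which is why the choice of time horizon matters so much to the study's conclusions.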

"Fugitive emissions can be escaping all the way from where natural gas is being extracted and produced, all the way along to the end user," Trancik says. "It's difficult and expensive to monitor it along the way."

That in itself poses a challenge. "An important thing to keep in mind when thinking about greenhouse gases," she says, "is that the difficulty in tracking and measuring methane is itself a risk." If researchers are unsure how much there is and where it is, it's hard for policymakers to formulate effective strategies to mitigate it. This study's approach is to embrace the uncertainty instead of being hamstrung by it, Trancik says: the uncertainty itself should inform current strategies, by motivating investments in leak detection to reduce uncertainty, or a faster transition away from natural gas.

"Emissions rates for the same type of equipment, in the same year, can vary significantly," adds Klemun. "It can vary depending on which time of day you measure it, or which time of year. There are a lot of factors."

Much attention has focused on so-called "super-emitters," but even these can be difficult to track down. "In many data sets, a small fraction of point sources contributes disproportionately to overall emissions," Klemun says. "If it were easy to predict where these occur, and if we better understood why, detection and repair programs could become more targeted." But achieving this will require additional data with high spatial resolution, covering wide areas and many segments of the supply chain, she says.

The researchers looked at the whole range of uncertainties, from how much methane is escaping to how to characterize its climate impacts, under a variety of different scenarios. One approach places strong emphasis on replacing coal-fired plants with natural gas, for example; others increase investment in zero-carbon sources while still maintaining a role for natural gas.

In the first approach, methane emissions from the U.S. power sector would need to be reduced by 30 to 90 percent from today's levels by 2030, along with a 20 percent reduction in carbon dioxide. Alternatively, that target could be met through even greater carbon dioxide reductions, such as through faster expansion of low-carbon electricity, without requiring any reductions in natural gas leakage rates. The higher end of the published ranges reflects greater emphasis on methane's short-term warming contribution.

One question raised by the study is how much to invest in developing technologies and infrastructure for safely expanding natural gas use, given the difficulties in measuring and mitigating methane emissions, and given that virtually all scenarios for meeting greenhouse gas reduction targets call for phasing out, by mid-century, natural gas use that doesn't include carbon capture and storage. "A certain amount of investment probably makes sense to improve and make use of current infrastructure, but if you're interested in really deep reduction targets, our results make it harder to make a case for that expansion right now," Trancik says.

Read more at Science Daily

Researchers reconstruct spoken words as processed in nonhuman primate brains

Rhesus macaque
A team of Brown University researchers has used a brain-computer interface to reconstruct English words from neural signals recorded in the brains of nonhuman primates. The research, published in the journal Communications Biology, could be a step toward developing brain implants that may help people with hearing loss, the researchers say.

"What we've done is to record the complex patterns of neural excitation in the secondary auditory cortex associated with primates' hearing specific words," said Arto Nurmikko, a professor in Brown's School of Engineering, a research associate in Brown's Carney Institute for Brain Science and senior author of the study. "We then use that neural data to reconstruct the sound of those words with high fidelity.

"The overarching goal is to better understand how sound is processed in the primate brain," Nurmikko added, "which could ultimately lead to new types of neural prosthetics."

The brain systems involved in the initial processing of sound are similar in humans and non-human primates. The first level of processing, which happens in what's called the primary auditory cortex, sorts sounds according to attributes like pitch or tone. The signal then moves to the secondary auditory cortex, where it's processed further. When someone is listening to spoken words, for example, this is where the sounds are classified by phonemes -- the simplest features that enable us to distinguish one word from another. After that, the information is sent to other parts of the brain for the processing that enables human comprehension of speech.

But because that early-stage processing of sound is similar in humans and non-human primates, learning how primates process the words they hear is useful, even though they likely don't understand what those words mean.

For the study, two pea-sized implants with 96-channel microelectrode arrays recorded the activity of neurons while rhesus macaques listened to recordings of individual English words and macaque calls. In this case, the macaques heard fairly simple one- or two-syllable words -- "tree," "good," "north," "cricket" and "program."

The researchers processed the neural recordings using computer algorithms specifically developed to recognize neural patterns associated with particular words. From there, the neural data could be translated back into computer-generated speech. Finally, the team used several metrics to evaluate how closely the reconstructed speech matched the original spoken word that the macaque heard. The research showed the recorded neural data produced high-fidelity reconstructions that were clear to a human listener.
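
The article doesn't specify which fidelity metrics the team used, but one common way to compare a reconstructed waveform with the original is normalized (Pearson-style) correlation. The sketch below assumes simple equal-length sample lists and is a generic illustration, not the study's evaluation code.

```python
import math

def normalized_correlation(a: list[float], b: list[float]) -> float:
    """Similarity between two equal-length waveforms, in [-1, 1].

    Returns 1.0 for identical shapes (regardless of scale) and values
    near 0 for unrelated signals; one plausible stand-in for the
    fidelity metrics the study mentions.
    """
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    dev_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    dev_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return num / (dev_a * dev_b)

# A toy "original" waveform and a slightly noisy "reconstruction":
original = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
reconstructed = [0.1, 0.9, 0.0, -1.1, 0.1, 0.9, -0.1, -0.9]
print(round(normalized_correlation(original, reconstructed), 3))
```

A score near 1.0 would correspond to the "high-fidelity" reconstructions the researchers report, though real evaluations typically also include perceptual tests with human listeners, as the study did.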

The use of multielectrode arrays to record such complex auditory information was a first, the researchers say.

"Previously, work had gathered data from the secondary auditory cortex with single electrodes, but as far as we know this is the first multielectrode recording from this part of the brain," Nurmikko said. "Essentially we have nearly 200 microscopic listening posts that can give us the richness and higher resolution of data which is required."

One of the goals of the study, for which doctoral student Jihun Lee led the experiments, was to test whether any particular decoding model algorithm performed better than others. The research, in collaboration with Wilson Truccolo, a computational neuroscience expert, showed that recurrent neural networks (RNNs) -- a type of machine learning algorithm often used in computerized language translation -- produced the highest-fidelity reconstructions. The RNNs substantially outperformed more traditional algorithms that have been shown to be effective in decoding neural data from other parts of the brain.

Christopher Heelan, a research associate at Brown and co-lead author of the study, thinks the success of the RNNs comes from their flexibility, which is important in decoding complex auditory information.

"More traditional algorithms used for neural decoding make strong assumptions about how the brain encodes information, and that limits the ability of those algorithms to model the neural data," said Heelan, who developed the computational toolkit for the study. "Neural networks make weaker assumptions and have more parameters allowing them to learn complicated relationships between the neural data and the experimental task."

Ultimately, the researchers hope, this kind of research could aid in developing neural implants that may help restore people's hearing.

Read more at Science Daily