Aug 31, 2019

Detailed map shows how viruses infect humans

Biologists at Columbia University Vagelos College of Physicians and Surgeons have leveraged a computational method to map protein-protein interactions between all known human-infecting viruses and the cells they infect. The method, along with the data it produced, has generated a wealth of information about how viruses manipulate the cells they infect and cause disease. Among the study's findings are the role of estrogen receptors in regulating Zika virus infection and how human papillomavirus (HPV) causes cancer.

The study, led by Sagi Shapira, PhD, assistant professor of systems biology at Columbia University Vagelos College of Physicians and Surgeons, was published today in the journal Cell.

LIMITED UNDERSTANDING OF HOW VIRUSES WORK

At the molecular level, viruses invade cells and manipulate them to replicate, survive, and cause disease. Since they depend on human cells for their life cycle, one way viruses co-opt cellular machinery is through protein-protein interactions within their host cells. Similarly, cells respond to infection by initiating immune responses that control and limit viral replication -- these, too, depend on protein-protein interactions.

To date, considerable effort has been invested in identifying these key interactions, and these efforts have resulted in many fundamental discoveries, some with therapeutic implications. However, traditional methods are limited in terms of scalability, efficiency, and even access. To address this challenge, Dr. Shapira and his collaborators developed and implemented a computational framework, P-HIPSTer, that infers interactions between pathogen and human proteins -- the building blocks of viruses and cells.

Until now, our knowledge about many of the viruses that infect people has been limited to their genome sequences. For most viruses, little has been uncovered about the underlying biological interactions that drive infection and give rise to disease.

"There are over 1,000 unique viruses that are known to infect people," says Dr. Shapira. "Yet, despite their unquestionable public health importance, we know virtually nothing about the vast majority of them. We just know they infect human cells. The idea behind this effort was to systematically catalogue the interactions that viruses have with the cells they infect. And, by doing so, also reveal some really interesting biology and provide the scientific community with a resource that they can use to make interesting observations of their own."

Using a novel algorithm, P-HIPSTer exploits protein structural information to systematically interrogate virus-human protein-protein interactions with remarkable accuracy. Dr. Shapira and his collaborators applied P-HIPSTer to all 1,001 human-infecting viruses and the approximately 13,000 proteins they encode. The algorithm predicted roughly 280,000 likely pairs of interacting proteins that represent a comprehensive catalogue of human virus protein-protein interactions with an accuracy rate of almost 80 percent.
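
The article does not spell out P-HIPSTer's scoring scheme, but the general flavor of structure-based interaction inference can be sketched. Below is a minimal, invented illustration (not the published algorithm): each protein is scored against a library of structurally characterized interaction "templates," and a virus-human pair is called a likely interaction when both partners resemble the two sides of the same template. The template names, similarity values, and 0.5 cutoff are all hypothetical.

```python
# Toy sketch of template-based virus-human protein-protein interaction (PPI)
# inference, loosely in the spirit of structure-driven methods like P-HIPSTer.
# All similarity values and the 0.5 cutoff are invented for illustration; the
# published algorithm builds structural models and computes a likelihood-ratio
# score, which is not reproduced here.

from itertools import product

# Hypothetical structural similarity of each protein to known interaction
# templates (template id -> similarity in [0, 1]).
viral_proteins = {
    "ZIKV_NS5": {"tmpl_A": 0.82, "tmpl_B": 0.10},
    "HPV16_E7": {"tmpl_A": 0.05, "tmpl_B": 0.75},
}
human_proteins = {
    "ESR1": {"tmpl_A": 0.78, "tmpl_B": 0.12},
    "RB1":  {"tmpl_A": 0.08, "tmpl_B": 0.70},
}

def interaction_score(v_sims, h_sims):
    """Score a virus-human pair by the best template both partners resemble."""
    shared = set(v_sims) & set(h_sims)
    return max((v_sims[t] * h_sims[t] for t in shared), default=0.0)

CUTOFF = 0.5  # invented threshold standing in for the paper's likelihood cutoff

for (v, v_sims), (h, h_sims) in product(viral_proteins.items(), human_proteins.items()):
    score = interaction_score(v_sims, h_sims)
    if score >= CUTOFF:
        print(f"predicted interaction: {v} - {h} (score {score:.2f})")
```

Run on the toy data, the scan flags the two pairings highlighted in the article (a Zika protein with the estrogen receptor, and an HPV protein with a human tumor suppressor) while rejecting the cross pairs; at genome scale the same pair-enumeration idea is what yields a catalogue of roughly 280,000 candidate interactions.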

"This is the first step towards building a comprehensive cartography of physical interactions between different organisms," Dr. Shapira says.

ZIKA, HPV, VIRAL EVOLUTION

In addition to defining pan-viral protein interactions, P-HIPSTer has yielded new biological insights into Zika virus, HPV, and the impact of viruses in shaping human genetics.

Among their discoveries, the researchers found that Zika virus interacts with the estrogen receptor, the protein that allows cells to respond effectively to the hormone estrogen. Importantly, they found that the estrogen receptor has the potential to inhibit Zika virus replication. Says Dr. Shapira, "And, in fact, estrogen receptor inhibits viral replication even more than interferon, a protein that is the body's first line of defense to viral infection and our gold standard for anti-viral defense."

The finding is particularly relevant to clinical disease, as pregnant women are most susceptible to Zika during their first trimester, which is when estrogen levels are at their lowest. This period is also when the fetus is most susceptible to Zika, a virus for which there is no vaccine or specific treatment and that can cause severe birth defects.

Dr. Shapira and his team also explored interactions between HPV, the leading cause of cervical cancer, and the cells that it infects. HPV is the most common sexually transmitted viral infection with approximately 80 percent of sexually active individuals contracting one of the 200 different types of HPV at some point in their lives. Dr. Shapira and his team used the data generated by P-HIPSTer to identify protein-protein interactions that distinguish HPV infections associated with cancer from those that are not. In addition to providing insights into how HPV may cause disease, the finding could lead to improved diagnostics for those infected with HPV, and P-HIPSTer could potentially be used to help predict whether or not any particular virus is likely to be highly pathogenic.

The researchers also examined whether the interactions mediated by viruses have impacted human genetics. They found evidence that several dozen cellular proteins have been shaped by strong selection pressure from viral infection, unlocking new insights into how our genome has been impacted by viruses.

"One of the things we can do with this data is drill down and ask whether virus infection has changed the history of human genetics," notes Dr. Shapira. "That is certainly not a novel idea but to have a catalogue of what those proteins are is significant. There are a lot of areas that we can explore now that we couldn't before."

FUTURE WORK

Dr. Shapira and his team intend to apply P-HIPSTer to more complex pathogens, such as parasites and bacteria, and to use it to better understand how bacteria in the human gut communicate with each other. In the future, the algorithm could also be used to explore viruses or pathogens that affect agricultural plants or livestock.

Read more at Science Daily

Marathoners, take your marks...and fluid and salt!

Legend states that after the Greek army defeated the invading Persian forces near the city of Marathon in 490 B.C.E., the courier Pheidippides ran to Athens to report the victory and then immediately dropped dead. The story -- and the distance Pheidippides covered -- inspired the modern marathon, a grueling 26.2-mile contest that attracts some 1.3 million runners annually to compete in the more than 800 races held worldwide.

While Pheidippides' demise was more likely brought about by a 300-mile run he reportedly made just prior to his "marathon," today's long-distance runners face a mostly short-term but still serious physical threat known as acute kidney injury, or AKI. Now, results of a new study of marathon runners led by researchers at Johns Hopkins Medicine and Yale University suggest that sweat (fluid) volume and sweat sodium losses, rather than a rise in core body temperature, are the key contributors to post-race AKI.

"We knew from a previous study that a large number of marathoners developed short-term AKI following a race, so we wanted more specifically to pin down the causes," says Chirag Parikh, Ph.D., director of the Division of Nephrology at the Johns Hopkins University School of Medicine and senior author of the new paper. "Our findings suggest that managing fluid volume and salt losses with a personalized regimen during the time period surrounding a marathon may help reduce the number or lessen the severity of AKI incidents afterward."

The researchers say they also found that runners with AKI following a marathon had increased levels of a blood serum protein known as copeptin. If the connection is confirmed with future studies, they say, copeptin could be valuable as a biomarker during training for predicting post-marathon susceptibility to AKI.

AKI, as described by the National Kidney Foundation, is a "sudden episode of kidney failure or kidney damage that happens within a few hours or a few days." It causes waste products to build up in the blood, making it hard for kidneys to maintain the correct balance of fluids in the body. Symptoms of AKI differ depending on the cause and may include: too little urine leaving the body; swelling in legs, ankles and around the eyes; fatigue; shortness of breath; confusion; nausea; chest pain; and in severe cases, seizures or coma. The disorder is most commonly seen in hospitalized patients whose kidneys are affected by medical and surgical stress and complications.

Similarly, a marathon subjects a runner to sustained physical stress, reduced blood flow to the kidneys and significant increases in the metabolic rate. Together, these events severely challenge the body's ability to keep fluid volume, electrolytes and temperature levels -- along with the regulatory responses to changes in all three -- in balance. The result, as seen in 82% of the runners evaluated by the same researchers in a 2017 Yale University study, was AKI that averaged two days in duration.

For the latest study, the goal was to better define the risk factors and mechanism for the problem by examining 23 runners, ages 22-63, who competed in the 2017 Hartford Marathon in Connecticut.

Participants were volunteers recruited through local running clubs and the marathon's registration process. Divided nearly equally between men and women, they were all experienced runners with a body mass index ranging from 18.5 to 24.9, and all had completed at least four races longer than 20 kilometers (12.4 miles) within the previous three years.

Urine and blood samples were collected from the participants at three time points: 24 hours prior to the marathon, within 30 minutes of completing the race and 24 hours after. The researchers evaluated the samples for sodium levels; key biomolecules such as creatine phosphokinase, hemoglobin, urine protein and copeptin; and biomarkers associated with kidney injury such as interleukin-18 and kidney injury molecule-1.

Sweat collection patches were placed on the runners prior to the marathon and recovered at the 5-mile mark (because they became too saturated further in the race). Blood pressure, heart rate and weight were measured at all three time points, while a bioharness worn during the marathon continually recorded body temperature.

Twelve of the 23 runners (52%) developed AKI after the race, while 17 (74%) tested positive for markers indicating some injury to the renal tubules, the tiny portals in the kidneys where blood is filtered.

In the runners with post-race AKI, the researchers observed distinct sodium and fluid volume losses. The median salt loss was 2.3 grams, with some losing as much as 7 grams.

Fluid volume loss via sweat had a median of 2.5 liters (5.2 pints), up to a maximum of 6.8 liters (14.4 pints). For comparison, a 155-pound (70-kilogram) body contains about 42 liters (89 pints) of fluid.
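
For scale (a back-of-envelope calculation from the figures above, not one reported by the authors):

\[
\frac{2.5\ \text{L}}{42\ \text{L}} \approx 6\%,
\qquad
\frac{6.8\ \text{L}}{42\ \text{L}} \approx 16\%,
\]

so the median AKI-affected runner sweated out roughly 6 percent of total body fluid, and the most extreme case roughly 16 percent.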

Core body temperature, while significantly elevated throughout a marathon, was basically the same for all runners and therefore was not considered a causal factor for AKI. However, the researchers say that the combination of high body temperature and fluid and salt losses may add to the development of kidney injury.

"Putting the sodium and fluid volume loss numbers into perspective, the median salt loss for the AKI runners was about 1 1/4 teaspoons, or the entire daily amount recommended by the American Heart Association," Parikh says. "Their median fluid volume loss was equivalent to sweating out slightly more than a 2-liter soda bottle. Beyond that, we had evidence that runners weren't adequately keeping up with those depletions."

In turn, Parikh says, that failure to balance the sodium and fluid losses during a marathon may account for the new study's other relevant finding: the higher levels of copeptin seen in runners with post-race AKI.

Copeptin is a precursor to the release of vasopressin, a hormone secreted by the pituitary gland in response to reduced blood volume. It tells our kidneys and blood vessels to hold on to water, preventing a sudden drop in blood pressure and physical collapse.

"In the runners who developed AKI, we found copeptin levels as much as 20 times higher than those who did not," Parikh says. "This is biological evidence that the AKI sufferers were severely volume down."

Because vasopressin reduces blood flow to the kidneys, and decreases renal filtration and urine output, he adds, it also may induce inflammation and injury to the kidney tissues if secreted for an extended period of time. This may explain why a large number of marathon runners get AKI while those competing at shorter distances do not.

Parikh says future studies, using larger samples, will need to evaluate whether optimizing fluid and salt volumes in marathon runners lowers rates or reduces the severity of post-race AKI. Additionally, he says, the researchers would like to follow runners who participate in multiple marathons to look for any cumulative kidney damage.

Read more at Science Daily

Aug 30, 2019

Newly discovered giant planet slingshots around its star

Astronomers have discovered a planet three times the mass of Jupiter that travels on a long, egg-shaped path around its star. If this planet were somehow placed into our own solar system, it would swing from within our asteroid belt to out beyond Neptune. Other giant planets with highly elliptical orbits have been found around other stars, but none of those worlds were located at the very outer reaches of their star systems like this one.

"This planet is unlike the planets in our solar system, but more than that, it is unlike any other exoplanets we have discovered so far," says Sarah Blunt, a Caltech graduate student and first author on the new study publishing in The Astronomical Journal. "Other planets detected far away from their stars tend to have very low eccentricities, meaning that their orbits are more circular. The fact that this planet has such a high eccentricity speaks to some difference in the way that it either formed or evolved relative to the other planets."

The planet was discovered using the radial velocity method, a workhorse of exoplanet discovery that detects new worlds by tracking how their parent stars "wobble" in response to gravitational tugs from those planets.

However, analyses of these data usually require observations taken over a planet's entire orbital period. For planets orbiting far from their stars, this can be difficult: a full orbit can take tens or even hundreds of years.

The California Planet Search, led by Caltech Professor of Astronomy Andrew W. Howard, is one of the few groups that watches stars over the decades-long timescales necessary to detect long-period exoplanets using radial velocity.

The data needed to make the discovery of the new planet were first provided by W. M. Keck Observatory in Hawaii. In 1997, the team began using the High-Resolution Echelle Spectrometer (HIRES) on the Keck I telescope to take measurements of the planet's star, called HR 5183.

"The key was persistence," said Howard. "Our team followed this star with Keck Observatory for more than two decades and only saw evidence for the planet in the past couple years! Without that long-term effort, we never would have found this planet."

In addition to Keck Observatory, the California Planet Search also used the Lick Observatory in Northern California and the McDonald Observatory in Texas.

The astronomers have been watching HR 5183 since the 1990s, but do not have data corresponding to one full orbit of the planet, called HR 5183 b, because it circles its star roughly every 45 to 100 years. The team instead found the planet because of its strange orbit.

"This planet spends most of its time loitering in the outer part of its star's planetary system in this highly eccentric orbit, then it starts to accelerate in and does a slingshot around its star," explains Howard. "We detected this slingshot motion. We saw the planet come in and now it's on its way out. That creates such a distinctive signature that we can be sure that this is a real planet, even though we haven't seen a complete orbit."

The new findings show that it is possible to use the radial velocity method to make detections of other far-flung planets without waiting decades. And, the researchers suggest, looking for more planets like this one could illuminate the role of giant planets in shaping their solar systems.

Planets take shape out of disks of material left over after stars form. That means that planets should start off in flat, circular orbits. For the newly detected planet to be on such an eccentric orbit, it must have gotten a gravitational kick from some other object.

The most plausible scenario, the researchers propose, is that the planet once had a neighbor of similar size. When the two planets got close enough to each other, one pushed the other out of the solar system, forcing HR 5183 b into a highly eccentric orbit.

"This newfound planet basically would have come in like a wrecking ball," says Howard, "knocking anything in its way out of the system."

This discovery demonstrates that our understanding of planets beyond our solar system is still evolving. Researchers continue to find worlds that are unlike anything in our solar system or in solar systems we have already discovered.

Read more at Science Daily

Hints of a volcanically active exo-moon

Jupiter's moon Io is the most volcanically active body in our solar system. Today, there are indications that an active moon outside our solar system, an exo-Io, could be hidden in the system of the exoplanet WASP-49b. "It would be a dangerous volcanic world with a molten surface of lava, a lunar version of close-in Super Earths like 55 Cancri-e," says Apurva Oza, postdoctoral fellow at the Physics Institute of the University of Bern and associate of the NCCR PlanetS, "a place where Jedis go to die, perilously familiar to Anakin Skywalker." But the object that Oza and his colleagues describe in their work seems to be even more exotic than Star Wars science fiction: the possible exomoon would orbit a hot giant planet, which in turn would race once around its host star in less than three days -- a scenario 550 light years away in the inconspicuous constellation of Lepus, below the bright constellation of Orion.

Sodium gas as circumstantial evidence

Astronomers have not yet discovered a rocky moon beyond our solar system, and it is on the basis of circumstantial evidence that the researchers in Bern conclude that the exo-Io exists: sodium gas was detected at WASP-49b at an anomalously high altitude. "The neutral sodium gas is so far away from the planet that it is unlikely to be emitted solely by a planetary wind," says Oza. The international team's observations of Jupiter and Io in our solar system, along with mass loss calculations, show that an exo-Io could be a very plausible source of sodium at WASP-49b. "The sodium is right where it should be," says the astrophysicist.

Tides keep the system stable

As early as 2006, Bob Johnson of the University of Virginia and the late Patrick Huggins of New York University had shown that large amounts of sodium at an exoplanet could point to a hidden moon or ring of material, and ten years ago researchers at Virginia calculated that such a compact three-body system -- star, close-in giant planet and moon -- can be stable over billions of years. Apurva Oza was then a student at Virginia and, after completing his PhD on moon atmospheres in Paris, decided to pick up these researchers' theoretical calculations. He has now published the results of this work together with Johnson and colleagues in the Astrophysical Journal.

"The enormous tidal forces in such a system are the key to everything," explains the astrophysicist. The energy released by the tides to the planet and its moon keeps the moon's orbit stable, simultaneously heating it up and making it volcanically active. In their work, the researchers were able to show that a small rocky moon can eject more sodium and potassium into space through this extreme volcanism than a large gas planet, especially at high altitudes. "Sodium and potassium lines are quantum treasures to us astronomers because they are extremely bright," says Oza, "the vintage street lamps that light up our streets with yellow haze, is akin to the gas we are now detecting in the spectra of a dozen exoplanets."

"We need to find more clues"

The researchers compared their calculations with these observations and found five candidate systems where a hidden exomoon could survive against destructive thermal evaporation. For WASP-49b, the observed data are best explained by the existence of an exo-Io. However, there are other options: for example, the exoplanet could be surrounded by a ring of ionized gas, or the sodium could be produced by non-thermal processes. "We need to find more clues," Oza admits. The researchers are therefore relying on further observations with ground-based and space-based instruments.

"While the current wave of research is going towards habitability and biosignatures, our signature is a signature of destruction," says the astrophysicist. A few of these worlds could be destroyed in a few billion years due to the extreme mass loss. "The exciting part is that we can monitor these destructive processes in real time, like fireworks," says Oza.

Read more at Science Daily

A new way to measure how water moves

When a chemical spills in the environment, it's important to know how quickly the spill will spread. A farmer irrigating a crop needs to know how fast the water will move through the soil and be absorbed by the roots. In both cases, a good understanding of water pore structure is necessary.

A new method to measure pore structure and water flow is described in a study published in the journal Water Resources Research. With it, scientists should be able to more accurately determine how fast water, contaminants, nutrients and other liquids move through the soil -- and where they go.

The mathematical model was validated by researchers at the University of California, Davis, California State University, Northridge and University of North Carolina at Chapel Hill.

"This will open a whole new direction that will help us use our resources more efficiently and better understand the flow of water, contaminants and nutrients," said corresponding author and UC Davis assistant professor Majdi Abou Najm, who developed the model when he was at the American University of Beirut.

NOT ONE-SIZE-FITS-ALL

One of the most important equations in hydrology, Darcy's law, has long been used to describe the flow of fluids through a porous medium, like rocks and soil. But that equation assumes a one-size-fits-all estimation of pore size, when the reality is more complicated.
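
In its simplest one-dimensional form, Darcy's law reads

\[
q = -K \, \frac{dh}{dx},
\]

where q is the volumetric water flux, dh/dx is the hydraulic head gradient, and the hydraulic conductivity K is a single constant that lumps the entire pore-size distribution into one number -- the one-size-fits-all assumption the new model relaxes.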

"Our model finds a middle ground between reality, which has an infinite number of pore sizes, and the current model, which represents them with one average pore size," said Abou Najm.

CHEAP AND ACCESSIBLE

The new model, which was tested on four sands for the study, has the added benefit of being relatively cheap and accessible to use in a variety of environments. The study said that most pore size measurement methods require collecting samples of limited size for lab analysis. This new method provides a simple, inexpensive approach to measuring a variety of pore sizes directly in the field using items that can be bought in a typical grocery store, such as soup thickeners or food additives.

From Science Daily

Entanglement sent over 50 km of optical fiber

For the first time, a team has sent a light particle entangled with matter over 50 km of optical fiber. This paves the way for the practical use of quantum networks and sets a milestone for a future quantum internet.

The quantum internet promises absolutely tap-proof communication and powerful distributed sensor networks for new science and technology. However, because quantum information cannot be copied, it is not possible to send this information over a classical network. Quantum information must be transmitted by quantum particles, and special interfaces are required for this. The Innsbruck-based experimental physicist Ben Lanyon, who was awarded the Austrian START Prize in 2015 for his research, is studying these important interfaces for a future quantum internet. Now his team at the Department of Experimental Physics at the University of Innsbruck and at the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences has achieved a record for the transfer of quantum entanglement between matter and light: for the first time, a distance of 50 kilometers was covered using fiber optic cables. "This is two orders of magnitude further than was previously possible and is a practical distance to start building inter-city quantum networks," says Ben Lanyon.

Converted photon for transmission

Lanyon's team started the experiment with a calcium atom trapped in an ion trap. Using laser beams, the researchers write a quantum state onto the ion and simultaneously excite it to emit a photon in which quantum information is stored. As a result, the quantum states of the atom and the light particle are entangled. The challenge is to transmit the photon over fiber optic cables. "The photon emitted by the calcium ion has a wavelength of 854 nanometers and is quickly absorbed by the optical fiber," says Ben Lanyon. His team therefore first sends the light particle through a nonlinear crystal illuminated by a strong laser, converting the photon's wavelength to the optimal value for long-distance travel: the current telecommunications standard wavelength of 1550 nanometers. The researchers from Innsbruck then send this photon through a 50-kilometer-long optical fiber line. Their measurements show that the atom and the light particle are still entangled after the wavelength conversion and this long journey.
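
The article does not say which nonlinear process the crystal uses, but if it is difference-frequency generation -- a standard choice for this kind of telecom conversion, and our assumption here -- energy conservation fixes the required pump-laser wavelength:

\[
\frac{1}{\lambda_{\text{pump}}} = \frac{1}{854\ \text{nm}} - \frac{1}{1550\ \text{nm}}
\quad\Longrightarrow\quad
\lambda_{\text{pump}} \approx 1902\ \text{nm}.
\]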

Even greater distances in sight

As a next step, Lanyon and his team show that their methods would enable entanglement to be generated between ions 100 kilometers apart or more. Two nodes would each send an entangled photon over a distance of 50 kilometers to an intermediate station, where the two light particles are measured jointly in such a way that they lose their entanglement with their ions, which in turn become entangled with each other. With 100-kilometer node spacing now a possibility, one could envisage building the world's first intercity light-matter quantum network in the coming years: only a handful of trapped-ion systems would be required on the way to establish a quantum internet between Innsbruck and Vienna, for example.
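
The scheme in this paragraph is entanglement swapping. As an illustration of why the joint photon measurement entangles the distant ions, here is a toy state-vector check (our sketch, not the experiment's code): projecting the photon pair onto a Bell state leaves the two ions in a Bell state of their own.

```python
# Toy state-vector check of entanglement swapping: two ion-photon Bell pairs,
# a joint Bell-state projection on the two photons, and the two remote ions
# end up entangled with each other. Illustration only.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |Phi+> Bell state between an ion and its photon
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Full 4-qubit state; axis order: ion A, photon A, ion B, photon B
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Project the two photons (axes 1 and 3) onto |Phi+>
bell_mat = bell.reshape(2, 2)
ions = np.einsum("iajb,ab->ij", psi, bell_mat.conj())
ions /= np.linalg.norm(ions)  # renormalize after the (probabilistic) projection

print(ions)  # ~0.707 on |00> and |11>: ions A and B now share a Bell state
```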

From Science Daily

Aug 29, 2019

Busy older stars outpace stellar youngsters

The oldest stars in our Galaxy are also the busiest, moving more rapidly than their younger counterparts in and out of the disc of the Milky Way, according to new analysis carried out at the University of Birmingham.

The findings provide fresh insights into the history of our Galaxy and increase our understanding of how stars form and evolve.

Researchers calculate that the old stars are moving more quickly in and out of the disc -- the pancake-shaped mass at the heart of the Galaxy where most stars are located.

A number of theories could explain this movement -- it all depends where the star is in the disc. Stars towards the outskirts could be knocked by gravitational interactions with smaller galaxies passing by. Towards the inner parts of the disc, the stars could be disturbed by massive gas clouds which move along with the stars inside the disc. They could also be thrown out of the disc by the movement of its spiral structure.

Dr Ted Mackereth, a galactic archaeologist at the University of Birmingham, is lead author on the paper. He explains: "The specific way that the stars move tells us which of these processes has been dominant in forming the disc we see today. We think older stars are more active because they have been around the longest, and because they were formed during a period when the Galaxy was a bit more violent, with lots of star formation happening and lots of disturbance from gases and smaller satellite galaxies. There are lots of different processes at work, and untangling all these helps us to build up a picture of the history of our Galaxy."

The study uses data from the Gaia satellite, currently working to chart the movements of around 1 billion stars in the Milky Way. It also takes information from APOGEE, an experiment run by the Sloan Digital Sky Survey that uses spectroscopy to measure the distribution of elements in stars, as well as images from the recently-retired Kepler space telescope.

Measurements provided by Kepler show how the brightness of stars varies over time, which gives insights into how they vibrate. In turn, that yields information about their interior structure, which enables scientists to calculate their age.

The Birmingham team, working with colleagues at the University of Toronto and teams involved with the Sloan Digital Sky Survey, were able to take these different data strands and calculate the differences in velocity between different sets of stars grouped by age.

They found that the older stars were moving in many different directions, with some moving very quickly out from the galactic disc. Younger stars move closely together at much slower speeds out from the disc, although they are faster than the older stars as they rotate around the Galaxy within the disc.

Read more at Science Daily

Biological 'rosetta stone' brings scientists closer to deciphering how the body is built

Every animal, from an ant to a human, contains in its genome pieces of DNA called Hox genes. Architects of the body, these genes are keepers of the body's blueprints; they dictate how embryos grow into adults, including where a developing animal puts its head, legs and other body parts.

Scientists have long searched for ways to decipher how Hox genes create this body map, a key to decoding how we build our bodies.

Now an international group of researchers from Columbia University and the Spanish National Research Council (CSIC) based at the Universidad Pablo de Olavide in Seville, Spain have found one such key: a method that can systematically identify the role each Hox gene plays in a developing fruit fly. Their results, reported recently in Nature Communications, offer a new path forward for researchers hoping to make sense of a process that is equal parts chaotic and precise, and that is critical to understanding not only growth and development but also aging and disease.

"The genome, which contains thousands of genes and millions of letters of DNA, is the most complicated code ever written," said Richard Mann, PhD, principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper's co-senior author. "Deciphering this code has proven so difficult because evolution wrote it in fits and starts over hundreds of millions of years. Today's study offers a key to cracking that code, bringing us closer than ever to understanding how Hox genes build a healthy body, or how this process gets disrupted in disease."

Hox genes are ancient; they can be found across all animal species. Even primitive jellyfish have them. Each type of organism has different combinations of these genes. Fruit flies have eight Hox genes, while humans have 39.

These genes work by producing special proteins called transcription factors, which work together with similar proteins called Hox cofactors to bind to different segments of DNA and turn many other genes on and off at just the right time -- a Rube Goldberg machine of microscopic proportions.

"Because these genes are intricately involved in many aspects of development, it has proven incredibly challenging to isolate individual Hox genes and trace their activity over time," said James Castelli-Gair Hombría, PhD, a principal investigator at the Centro Andaluz de Biologi?a del Desarrollo at the Universidad Pablo de Olavide and the paper's co-senior author. "We had this incredibly complex equation to solve but too many unknowns to make significant progress."

Recently, Dr. Hombría and his team hit upon a bit of luck. While examining genetic activity in a developing fruit fly, they stumbled upon a small piece of regulatory DNA, called vvl1+2, that had an unusual -- and surprising -- attribute. Although it was active in cells across the fruit fly's entire developing body, it appeared to be regulated by all of the fruit fly's eight Hox genes.

"The ubiquity of the vvI1+2 DNA segment across the entire developing fruit fly, combined with the fact that every Hox gene touches it, made it an ideal system by which to study the Hox gene family," said Carlos Sánchez-Higueras, PhD, a postdoctoral researcher in the Hombría lab and the paper's first author. "In this single piece of DNA, we had the perfect tool; we could now devise a method to systematically manipulate vvI1+2 activity to see how each Hox gene functioned."

First, Dr. Sánchez-Higueras teamed up with Dr. Mann at Columbia's Zuckerman Institute and used a sophisticated computer algorithm called No Read Left Behind, or NRLB. NRLB was recently developed by Dr. Mann, his lab, and his collaborators, including Columbia systems biology professor Harmen Bussemaker, PhD. This powerful algorithm pinpoints the locations where transcription factors bind to a stretch of DNA, even if these binding sites are very weak and difficult to capture. For this study, the researchers focused on the Hox transcription factors and Hox cofactors that bind to vvl1+2.
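
NRLB itself fits biophysical binding models to high-throughput data; the underlying task, though, is scanning a DNA sequence and scoring every window for transcription-factor affinity, weak sites included. The toy position-weight-matrix scan below illustrates only that task -- it is not the NRLB algorithm, and the matrix values and sequence are invented:

```python
# Illustration of the task NRLB addresses: scoring every window of a DNA
# sequence for transcription-factor affinity, so that even weak binding
# sites are ranked rather than missed. Toy numbers throughout.

# Toy log-odds position weight matrix (PWM) for a 4-bp binding site.
PWM = [
    {"A": 1.2, "C": -0.8, "G": -0.5, "T": 0.3},
    {"A": -0.6, "C": 0.9, "G": -0.4, "T": -0.2},
    {"A": 0.2, "C": -0.3, "G": 1.1, "T": -0.9},
    {"A": -0.7, "C": -0.2, "G": 0.4, "T": 1.0},
]

def scan(seq):
    """Return (position, score) for every window, weak sites included."""
    w = len(PWM)
    return [
        (i, sum(PWM[j][seq[i + j]] for j in range(w)))
        for i in range(len(seq) - w + 1)
    ]

sequence = "TTACGTACGTGGA"  # invented stand-in for a stretch of vvl1+2
for pos, score in sorted(scan(sequence), key=lambda x: -x[1])[:3]:
    print(f"site at {pos}: {sequence[pos:pos + 4]} score={score:+.2f}")
```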

"Our analyses provided a precise road map of Hox binding sites in vvI1+2, which we could then apply to a living fruit fly," said Dr. Mann, who is also the Higgins Professor of Biochemistry and Molecular Biophysics (in Systems Biology) at Columbia's Vagelos College of Physicians and Surgeons.

By employing a combination of elegant genetic manipulations in living, or in vivo, fly embryos, together with advanced biochemical and computational analysis, the researchers could then systematically manipulate Hox target activity with an unprecedented level of precision.

"We now had a starting point from which to systematically decode Hox gene regulation," said Dr. Hombría, "a kind of Rosetta Stone to help us decipher the genetics of body development."

The researchers' findings are especially promising because they can be applied to the entire genome. The steps that Hox genes undergo to regulate vvl1+2 can inform how Hox genes regulate other DNA, not just in fruit flies but beyond -- including in vertebrates, such as mammals, and even humans.

Read more at Science Daily

A face for Lucy's ancestor

Australopithecus anamensis is the earliest-known species of Australopithecus and widely accepted as the progenitor of 'Lucy's' species, Australopithecus afarensis. Until now, A. anamensis was known mainly from jaws and teeth. Yohannes Haile-Selassie of the Cleveland Museum of Natural History, Stephanie Melillo of the Max Planck Institute for Evolutionary Anthropology and their colleagues have discovered the first cranium of A. anamensis at the paleontological site of Woranso-Mille, in the Afar Region of Ethiopia.

The 3.8 million-year-old fossil cranium represents a time interval between 4.1 and 3.6 million years ago, when A. anamensis gave rise to A. afarensis. Researchers used morphological features of the cranium to identify which species the fossil represents. "Features of the upper jaw and canine tooth were fundamental in determining that MRD was attributable to A. anamensis," said Melillo. "It is good to finally be able to put a face to the name." The MRD cranium, together with other fossils previously known from the Afar, show that A. anamensis and A. afarensis co-existed for approximately 100,000 years. This temporal overlap challenges the widely-accepted idea of a linear transition between these two early human ancestors. Haile-Selassie said: "This is a game changer in our understanding of human evolution during the Pliocene."

Working for the past 15 years at the site, the team discovered the cranium (MRD-VP-1/1, here referred to as "MRD") in February 2016. In the years following their discovery, paleoanthropologists of the project conducted extensive analyses of MRD, while project geologists worked on determining the age and context of the specimen. The results of the team's findings are published online in two papers in the international scientific journal Nature.

Discovery of the cranium

The Woranso-Mille project has been conducting field research in the central Afar region of Ethiopia since 2004. The project has collected more than 12,600 fossil specimens representing about 85 mammalian species. The fossil collection includes about 230 fossil hominin specimens dating to between more than 3.8 and about 3.0 million years ago. The first piece of MRD, the upper jaw, was found by Ali Bereino (a local Afar worker) on February 10, 2016 at a locality known as Miro Dora, Mille district of the Afar Regional State. The specimen was exposed on the surface and further investigation of the area resulted in the recovery of the rest of the cranium. "I couldn't believe my eyes when I spotted the rest of the cranium. It was a eureka moment and a dream come true," said Haile-Selassie.

Geology and age determination

In a companion paper published in the same issue of Nature, Beverly Saylor of Case Western Reserve University and her colleagues determined the age of the fossil as 3.8 million years by dating minerals in layers of volcanic rocks nearby. They mapped the dated levels to the fossil site using field observations and the chemistry and magnetic properties of rock layers. Saylor and her colleagues combined the field observations with analysis of microscopic biological remains to reconstruct the landscape, vegetation and hydrology where MRD died.

MRD was found in the sandy deposits of a delta where a river entered a lake. The river likely originated in the highlands of the Ethiopian plateau, while the lake developed at lower elevations where rift activity caused the Earth's surface to stretch and thin, creating the lowlands of the Afar region. Fossil pollen grains and chemical remains of fossil plants and algae preserved in the lake and delta sediments provide clues about the ancient environmental conditions. Specifically, they indicate that the watershed of the lake was mostly dry but that there were also forested areas on the shores of the delta or along the side of the river that fed the delta and lake system. "MRD lived near a large lake in a region that was dry. We're eager to conduct more work in these deposits to understand the environment of the MRD specimen, the relationship to climate change and how it affected human evolution, if at all," said Naomi Levin, a co-author on the study from the University of Michigan.

A new face in the crowd

Australopithecus anamensis is the oldest known member of the genus Australopithecus. Due to the cranium's rare near-complete state, the researchers identified never-before-seen facial features in the species. "MRD has a mix of primitive and derived facial and cranial features that I didn't expect to see on a single individual," Haile-Selassie said. Some characteristics were shared with later species, while others had more in common with those of even older and more primitive early human ancestor groups such as Ardipithecus and Sahelanthropus. "Until now, we had a big gap between the earliest-known human ancestors, which are about 6 million years old, and species like 'Lucy', which are two to three million years old. One of the most exciting aspects of this discovery is how it bridges the morphological space between these two groups," said Melillo.

Branching out

Among the most important findings was the team's conclusion that A. anamensis and its descendant species, the well-known A. afarensis, coexisted for a period of at least 100,000 years. This finding contradicts the long-held notion of an anagenetic relationship between these two taxa, instead supporting a branching pattern of evolution. Melillo explains: "We used to think that A. anamensis gradually turned into A. afarensis over time. We still think that these two species had an ancestor-descendant relationship, but this new discovery suggests that the two species were actually living together in the Afar for quite some time. It changes our understanding of the evolutionary process and brings up new questions -- were these animals competing for food or space?"

This conclusion is based on the assignment of the 3.8-million-year-old MRD to A. anamensis and the 3.9-million-year-old hominin cranial fragment commonly known as the Belohdelie frontal, to A. afarensis. The Belohdelie frontal was discovered in the Middle Awash of Ethiopia by a team of paleontologists in 1981, but its taxonomic status has been questioned in the intervening years.

Read more at Science Daily

Prehistoric puma feces reveals oldest parasite DNA ever recorded

The oldest parasite DNA ever recorded has been found in the ancient, desiccated faeces of a puma.

A team of Argentinian scientists from the National Council of Scientific and Technical Research (CONICET) made the discovery after studying a coprolite taken from a rock-shelter in the country's mountainous Catamarca Province, where the remains of now extinct megafauna have previously been recovered in stratigraphic excavations.

Radiocarbon dating revealed that the coprolite, and thus the parasitic roundworm eggs preserved inside it, dated back to between 16,570 and 17,000 years ago, towards the end of the last Ice Age.
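
For general context (a standard radiocarbon relationship, not a calculation reported in the study), an age of roughly 16,800 years implies that only a small fraction of the original carbon-14 remained. Using the mean life \(\tau \approx 8267\) years (half-life 5,730 years) and ignoring calibration details:

\[
\frac{N}{N_0} = e^{-t/\tau} = e^{-16{,}800/8267} \approx 0.13,
\]

i.e., only about 13 percent of the sample's original carbon-14 was left to measure.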

At that time, the area around the shelter at Peñas de las Trampas in the southern Andean Puna was thought to have been wetter than today, making it a suitable habitat for megafauna like giant ground sloths, and also smaller herbivores like American horses and South American camelids which the pumas may have preyed on.

Ancient mitochondrial DNA analysis was used to confirm that the coprolite came from a puma (Puma concolor) and that the eggs belonged to Toxascaris leonina, a species of roundworm still commonly found in the digestive systems of modern-day cats, dogs and foxes.

The study, published in the journal Parasitology, explains that the extremely dry, cold and salty conditions that have prevailed at the Peñas de las Trampas site since the onset of the Holocene would have helped to reduce the breakdown of the DNA, allowing it to be preserved.

Led by Romina Petrigh and Martín Fugassa, the study was carried out by an interdisciplinary team including archaeologists and biologists and is part of a project that views ancient faeces as important paleobiological reservoirs.

Dr Petrigh, from the National University of Mar del Plata and CONICET, said: "While we have found evidence of parasites in coprolites before, those remains were much more recent, dating back only a few thousand years. The latest find shows that these roundworms were infecting the fauna of South America before the arrival of the first humans in the area around 11,000 years ago."

She added: "I was very happy when I discovered how old this DNA was. It's difficult to recover DNA of such an old age as it usually suffers damage over time. Our working conditions had to be extremely controlled to avoid contamination with modern DNA, so we used special decontaminated reagents and disposable supplies. Several experiments were performed to authenticate the DNA sequences obtained, and the efforts of the team of researchers who participated were essential."

The discovery marks a number of firsts: it represents the oldest record of an ancient DNA sequence for a gastrointestinal nematode parasite of wild mammals, the oldest molecular parasite record worldwide, and also a new maximum age for the recovery of old DNA of this origin.

For Dr Petrigh, the findings also cast light on both the past and the present. She said: "This work confirms the presence of T. leonina in prehistoric times, presumably even before that of humans in the region, and it represents the oldest record in the world. The common interpretation is that the presence of T. leonina in American wild carnivores today is a consequence of their contact with domestic dogs or cats, but that should no longer be assumed as the only possible explanation.

"Our aDNA studies have also confirmed the presence of pumas in the southern Puna at the end of the Pleistocene. This has significant implications for the natural history of the region, as well as for inferring the ecological context immediately before -- as far as is known -- the first human explorers ventured into the area."

Read more at Science Daily

Aug 27, 2019

Ancient prescribed burns could revitalize communities today

It costs more than a new iPhone XS, and it's made out of hazelnut shrub stems. Traditional baby baskets of Northern California's Yurok and Karuk tribes come at a premium not only because they are handcrafted by skilled weavers, but because the stems required to make them are found only in forest understory areas experiencing a type of controlled burn once practiced by the tribes but suppressed for more than a century.

A new Stanford-led study with the U.S. Forest Service in collaboration with the Yurok and Karuk tribes found that incorporating traditional techniques into current fire suppression practices could help revitalize American Indian cultures, economies and livelihoods, while continuing to reduce wildfire risks. The findings could inform plans to incorporate the cultural burning practices into forest management across an area one and a half times the size of Rhode Island.

"Burning connects many tribal members to an ancestral practice that they know has immense ecological and social benefit especially in the aftermath of industrial timber activity and ongoing economic austerity," said study lead author Tony Marks-Block, a doctoral candidate in anthropology who worked with Lisa Curran, the Roger and Cynthia Lang Professor in Environmental Anthropology.

"We must have fire in order to continue the traditions of our people," said Margo Robbins, a Yurok basket weaver and director of the Yurok Cultural Fire Management Council who advised the researchers. "There is such a thing as good fire."

The study, published in Forest Ecology and Management, replicates Yurok and Karuk fire treatments that involve cutting and burning hazelnut shrub stems. The approach increased the production of high-quality stems (straight, unbranched and free of insect marks or bark blemishes) needed to make culturally significant items such as baby baskets and fish traps up to 10-fold compared with untreated shrubs.

Reducing fuel load

Previous studies have shown that repeated prescribed burning reduces fuel for wildfires, thus reducing their intensity and size in seasonally dry forests such as the one the researchers studied in the Klamath Basin area near the border with Oregon. This study was part of a larger exploration of prescribed burns being carried out by Stanford and U.S. Forest Service researchers who collaborated with the Yurok and Karuk tribes to evaluate traditional fire management treatments. Together, they worked with a consortium of federal and state agencies and nongovernmental organizations across 5,570 acres in the Klamath Basin.

The consortium has proposed expanding these "cultural burns" -- which have been greatly constrained throughout the tribes' ancestral lands -- across more than 1 million acres of federal and tribal lands that are currently managed with techniques including less targeted controlled burns or brush removal.

Tribes traditionally burned specific plants or landscapes as a way of generating materials or spurring food production, as opposed to modern prescribed burns that are less likely to take these considerations into account. The authors argue that increasing the number of cultural burns could ease food insecurity among American Indian communities in the region. Traditional food sources have declined precipitously due in part to the suppression of prescribed burns that kill acorn-eating pests and promote deer populations by creating beneficial habitat and increasing plants' nutritional content.

"This study was founded upon tribal knowledge and cultural practices," said co-author Frank Lake, a research ecologist with the U.S. Forest Service and a Karuk descendant with Yurok family. "Because of that, it can help us in formulating the best available science to guide fuels and fire management that demonstrate the benefit to tribal communities and society for reducing the risk of wildfires."

The researchers write that it would be easy and efficient to include traditional American Indian prescribed burning practices in existing forest management strategies. For example, federal fire managers could incorporate hazelnut shrub propane torching and pile burning into their fuel reduction plans to meet cultural needs. Managers would need to consult and collaborate with local tribes to plan these activities so that the basketry stems could be gathered post-treatment. Larger-scale pile burning treatments typically occur over a few days and require routine monitoring by forestry technicians to ensure they do not escape or harm nearby trees. As these burn, it would be easy for a technician to simultaneously use a propane torch to top-kill nearby hazelnut shrubs. This would not require a significant increase in personnel hours.

Read more at Science Daily

Arms race between parasites and their victims

Acanthocephala are parasitic worms that reproduce in the intestines of various animals, including fish. However, only certain species of fish are suitable as hosts. A study by the University of Bonn now shows how the parasites succeed in preferably infecting these types. The results will be published in the journal Behaviour, but are already available online.

The parasitic worm Pomphorhynchus laevis does not have an easy life: In order to reproduce, the parasite must first hope that its eggs will be eaten by a freshwater shrimp. The larvae hatching from the eggs then need a change of scenery: They can only develop into adult worms if they are swallowed by a fish. However, not every fish species is suitable as a final host. Some species have defense mechanisms that kill the parasite before it can mate and release new eggs into the water through the fish intestine.

In order to improve their chances of reproduction, the worms have developed several sophisticated strategies in the course of evolution. "For example, parasite-infected shrimp change their behavior," explains Dr. Timo Thünken from the Institute for Evolutionary Biology and Ecology at the University of Bonn. "They no longer avoid certain fish species and are therefore eaten more frequently." Another hypothesis has so far been controversial: Freshwater shrimp are beige-brownish, and their body shell is relatively transparent, so they barely stand out from their surroundings. Pomphorhynchus laevis larvae, on the other hand, are bright orange. It is therefore possible to see with the naked eye whether a shrimp is infected: its parasitic cargo is marked by an orange spot.

Infected shrimp attract more attention

It may be that the shrimp are less well camouflaged as a result and are more frequently eaten by fish. Study director Prof. Dr. Theo Bakker investigated this hypothesis several years ago. He was indeed able to determine that shrimp with an orange mark ended up in the stomachs of sticklebacks more frequently. Yet this finding was not confirmed in studies with brown trout.

However, the brown trout, in contrast to the stickleback, is not a suitable final host for Pomphorhynchus laevis: its immune system usually prevents a successful infection. "It is therefore possible that the orange coloring attracts particularly those fish that are especially suitable for the parasite's further reproduction," Thünken suspects. "We have now conducted experiments to put this hypothesis to the test."

The biologists marked the shrimp with an orange dot in order to simulate larval infestation. Then they tested how often the marked shrimp were eaten by different fish in comparison to unmarked ones. The mark did in fact increase the risk of being eaten -- but only for some types of fish: barbels and sticklebacks were particularly interested in the marked freshwater shrimp, while the dot made no difference to brown trout.

In another experiment, the researchers fed their fish exclusively with larvae-infested shrimp. "We were able to show that this diet often led to infection in barbels and sticklebacks, but very rarely in brown trout," explains Thünken. Evidently, their conspicuous coloring ensures that the larvae end up mainly in the stomach of suitable final hosts. However, it is unclear whether they have acquired their orange hue in the course of evolution in order to reach precisely these hosts. "Perhaps over time they have simply adapted better to the digestive tract of those fish that responded particularly strongly to the orange color," says Thünken.

Read more at Science Daily

Skin creams aren't what we thought they were

Anyone who has gone through the stress and discomfort of raw, irritated skin knows the relief that comes with slathering on a creamy lotion. Topical creams generally contain a few standard ingredients, but manufacturers know little about how these components interact to influence the performance of the product. Now, researchers report the first direct glimpse of how a cream or lotion is structured on the molecular scale, and it's not quite what they expected.

The researchers will present their results today at the American Chemical Society (ACS) Fall 2019 National Meeting & Exposition.

"The long-term stability and clinical properties of a cream are determined by its fundamental structure," says Delaram Ahmadi, the graduate student who performed the study. "If we can understand the chemical microstructure of the cream and relate that to the structure of the skin, then perhaps we can better repair the compromised skin barrier."

One of Ahmadi's research advisors, David Barlow, Ph.D., adds, "We wanted to improve the science around cream formulation so that companies could more rationally formulate them to get exactly what they want. The most significant thing we found is that the textbook picture of the structure of a cream is very naïve."

Formulators have mostly inferred the structure of these emulsions based on indirect measurements, Barlow explains. But his group took a direct approach, with Ahmadi analyzing the cream using X-ray and neutron scattering techniques to determine how the ingredients were dispersed. Ahmadi and Barlow are at King's College London, and their co-investigator, Jayne Lawrence, Ph.D., is at Manchester University.

Cream is usually thought of as stacks of lamellae, or membranes, composed of surfactants and co-surfactants that keep oil droplets dispersed within water (or vice versa). To reveal a cream's true structure, the researchers started with an aqueous cream formulation from the British Pharmacopoeia that contains two co-surfactants and a sodium dodecyl sulfate (SDS) surfactant. They also incorporated a diol known to act as a preservative. One by one, Ahmadi replaced each ingredient with a heavier isotopic version. The researchers then scattered X-rays and neutrons off the selectively isotope-labelled samples and, from the resulting patterns of scattering, determined the location of each ingredient and the aggregate it formed within the cream.

The results were surprising. Although they observed co-surfactants in the lamellar layers as predicted, the surfactant was not there. "The surfactant peak profile suggested that the molecule formed micelles in the cream," Ahmadi says. In addition, the preservative was not found in the aqueous layer, where scientists have always presumed it would be. It was, in fact, residing in the lamellae. Preservatives have an antimicrobial effect, thereby prolonging shelf-life. Formulators had assumed that to be an effective antimicrobial, the preservative had to be dissolved in the water layer. So, Ahmadi says this finding could mean the creams are essentially self-preserving.

The team is currently performing computer experiments to model the behavior of the preservatives in a bilayer system like a cream to understand why they are in the membrane layer. And they want to better understand the structure of the surfactant micelles dispersed in the layers. "I don't think anybody else has considered that there would be these micelles in the system at all," Barlow says. "This is new, and we need to think about where they are in the structure and what they are doing."

Read more at Science Daily

Gene mutations coordinate to drive malignancy in lung cancer

Scientists have shown exactly how mutations in two different genes coordinate to drive the development of malignant lung tumors, according to a new report in the open-access journal eLife.

The study in novel genetically engineered mice looked at the characteristics of lung tumors from when they are invisibly small to when they are larger and potentially deadly. The results shed new light on the mechanisms of tumor progression and will help researchers currently developing drugs for lung tumors.

There are many types of lung cancer: non-small cell lung cancer (NSCLC) is the leading cause of cancer-related death globally, and lung adenocarcinoma is the most common subtype of NSCLC. Around 75% of lung adenocarcinomas have mutations that affect two important control mechanisms for cell growth -- the MAP kinase pathway and the PI3'-kinase pathway. Neither pathway alone is sufficient to cause lung cancer; the two must coordinate to make this happen.

"We knew that mutations in the MAP kinase pathway promote the growth of benign lung tumors, but that PI3'-kinase mutations alone do not kickstart tumor formation in the same cells," explains lead author Ed van Veen, former Postdoctoral Fellow in senior author Martin McMahon's laboratory at Huntsman Cancer Institute (HCI) at the University of Utah (U of U), Salt Lake City, US. "The pathways instead cooperate to drive the growth of malignant tumors, but we didn't know what molecular changes occurred as a result of this cooperation and how the lung cells lose their characteristics as cancer develops."

The team studied mice with mutations that were only active in lung cells called Type 2 pneumocytes. They analyzed the effects of these mutations on the genes and protein molecules in individual cells at different stages of tumor development. When they looked at the gene expression of the MAP and PI3'-kinase-driven tumors, they found that the tumor cells had reduced levels of genes that are hallmarks of a Type 2 pneumocyte, suggesting that these lung cells had lost their identity.

Next, the team looked at which molecules were responsible for coordinating the MAP and PI3'-kinase pathways. Fluorescent labeling of molecules already known to be involved in lung cell specialization showed some surprising results -- these molecules did not play a role in the loss of lung cell identity that contributes to tumor progression. Rather, a molecule called PGC1α appeared to be involved.

To investigate whether PGC1α directly controls the loss of Type 2 pneumocyte identity during lung tumor development, the team studied mice with a silenced version of the molecule, alongside mutations in the MAP kinase pathway. They found that silencing PGC1α causes lung cells to lose their specialized characteristics; PGC1α appears to act together with two other molecules that are required for this specialization.

Read more at Science Daily

Aug 26, 2019

How plants measure their carbon dioxide uptake

When water is scarce, plants can close their pores to prevent losing too much water. This allows them to survive even longer periods of drought, but with the majority of pores closed, carbon dioxide uptake is also limited, which impairs photosynthetic performance and thus plant growth and yield.

Plants accomplish a balancing act -- navigating between drying out and starving in dry conditions -- through an elaborate network of sensors. An international team of plant scientists led by Rainer Hedrich, a biophysicist from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany, has now pinpointed these sensors. The results have been published in the journal Nature Plants.

Microvalves control photosynthesis and water supply

When light is abundant, plants open the pores in their leaves to take in carbon dioxide (CO2), which they subsequently convert to carbohydrates in a process called photosynthesis. At the same time, a hundred times more water escapes through these microvalves than carbon dioxide flows in.

This is not a problem when there is enough water available, but when soils are parched in the middle of summer, the plant needs to switch to eco-mode to save water. Then plants will only open their pores to perform photosynthesis for as long as necessary to barely survive. Opening and closing the pores is accomplished through specialised guard cells that surround each pore in pairs. The units comprised of pores and guard cells are called stomata.

Guard cells have sensors for CO2 and ABA

The guard cells must be able to monitor both photosynthesis and the water supply to respond appropriately to changing environmental conditions. For this purpose, they have a receptor that measures the CO2 concentration inside the leaf. When the CO2 value rises sharply, this is a sign that photosynthesis is not running optimally. The pores then close to prevent unnecessary evaporation. Once the CO2 concentration has fallen again, the pores reopen.

The water supply is measured through a hormone. When water is scarce, plants produce abscisic acid (ABA), a key stress hormone, and set their CO2 control cycle to water saving mode. This is accomplished through guard cells which are fitted with ABA receptors. When the hormone concentration in the leaf increases, the pores close.
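To make the logic of this double control loop concrete, here is a deliberately cartoonish toy model in Python: pores close when either internal CO2 climbs or ABA accumulates. Every threshold and rate in it is invented for illustration; the real signalling network is, as the study shows, far more elaborate.

```python
# A deliberately cartoonish feedback rule for the stomatal control cycle
# described above: high internal CO2 or high ABA -> pores close.
# The thresholds and the update rule are invented for illustration only.

def stomatal_aperture(co2_ppm: float, aba_level: float) -> float:
    """Return a 0..1 aperture; 1 = fully open. Toy model, not measured biology."""
    co2_closing = max(0.0, (co2_ppm - 400.0) / 400.0)  # closes as CO2 rises past ambient
    aba_closing = min(1.0, aba_level)                  # drought hormone closes pores
    return max(0.0, 1.0 - max(co2_closing, aba_closing))

for co2, aba in [(400, 0.0), (700, 0.0), (400, 0.8)]:
    print(f"CO2={co2} ppm, ABA={aba:.1f} -> aperture {stomatal_aperture(co2, aba):.2f}")
```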

Analysing the CO2-ABA network

The JMU research team wanted to shed light on the components of the guard cell control cycles. For this purpose, they exposed Arabidopsis plants to elevated levels of CO2 or ABA over several hours to trigger responses at the level of gene expression. Afterwards, the stomata were isolated from the leaves to analyse the guard cells' gene expression profiles using bioinformatics techniques. For this task, the team brought on board Tobias Müller and Marcus Dietrich, two bioinformatics experts at the University of Würzburg.

The two experts found out that the gene expression patterns differed significantly at high CO2 or ABA concentrations. Moreover, they noticed that excessive CO2 also caused the expression of some ABA genes to change. These findings led the researchers to take a closer look at the ABA signalling pathway. They were particularly interested in the ABA receptors of the PYR/PYL family (pyrabactin receptor and pyrabactin-like). Arabidopsis has 14 of these receptors, six of them in the guard cells.

ABA receptors under the microscope

"Why does a guard cell need as many as six receptors for a single hormone? To answer this question, we teamed up with Professor Pedro Luis Rodriguez from the University of Madrid, who is an expert in ABA receptors," says Hedrich. Rodriguez's team generated Arabidopsis mutants in which they could study the ABA receptors individually.

"This enabled us to assign each of the six ABA receptors a task in the network and identify the individual receptors which are responsible for the ABA- and CO2-induced closing of the stomata," Peter Ache, a colleague of Hedrich's, explains.

Read more at Science Daily

A lack of background knowledge can hinder reading comprehension

The purpose of going to school is to learn, but students may find certain topics difficult to understand if they don't have the necessary background knowledge. This is one of the conclusions of a research article published in Psychological Science, a journal of the Association for Psychological Science.

"Background knowledge plays a key role in students' reading comprehension -- our findings show that if students don't have sufficient related knowledge, they'll probably have difficulties understanding text," says lead researcher Tenaha O'Reilly of Educational Testing Service (ETS)'s Center for Research on Human Capital in Education. "We also found that it's possible to measure students' knowledge quickly by using natural language processing techniques. If a student scores below the knowledge threshold, they'll probably have trouble comprehending the text."

Previous research has shown that students who lack sufficient reading skills, including decoding and vocabulary, fare poorly relative to their peers. But the research of O'Reilly and ETS colleagues Zuowei Wang and John Sabatini suggests that a knowledge threshold may also be an essential component of reading comprehension.

The researchers examined data from 3,534 high-school students at 37 schools in the United States. The students completed a test that measured their background knowledge on ecosystems. For the topical vocabulary section of the test, the students saw a list of 44 words and had to decide which were related to the topic of ecosystems. They also completed a multiple-choice section that was designed to measure their factual knowledge.

Then, after reading a series of texts on the topic of ecosystems, the students completed 34 items designed to measure how well they understood the texts. These comprehension items tapped into their ability to summarize what they had read, recognize opinions and incorrect information, and apply what they had read to reason more broadly about the content.

The researchers used a statistical technique called broken-line regression -- often used to identify an inflection point in a data set -- to analyze the students' performance.

The results revealed that a background-knowledge score of about 33.5, or about 59% correct, functioned as a performance threshold. Below this score, background knowledge and comprehension were not noticeably correlated; above the threshold score, students' comprehension appeared to increase as their background knowledge increased.
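For readers unfamiliar with broken-line regression, the sketch below fits one to synthetic data with scipy. The data, the flat-then-rising shape, and the parameter values are all fabricated to mimic the reported pattern; this is not the study's code or data.

```python
# Illustrative broken-line (segmented) regression on synthetic data;
# the study's data are not reproduced here, so all numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
knowledge = rng.uniform(10, 50, 500)              # background-knowledge scores
comprehension = np.where(knowledge < 33.5,        # flat below the threshold,
                         15.0,                    # rising above it
                         15.0 + 0.9 * (knowledge - 33.5)) + rng.normal(0, 2, 500)

def broken_line(x, x0, y0, slope_right):
    # Flat segment up to breakpoint x0, linear increase afterwards.
    return np.where(x < x0, y0, y0 + slope_right * (x - x0))

params, _ = curve_fit(broken_line, knowledge, comprehension, p0=[30, 15, 1])
print(f"estimated breakpoint: {params[0]:.1f}")   # ~33.5 on this synthetic data
```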

Additional results indicated that the pattern could not be fully explained by the level of students' knowledge on a different topic -- what mattered was their background knowledge of ecosystems.

The researchers found that students' ability to identify specific keywords was a fairly strong predictor of whether they would perform above or below the threshold. That is, correctly identifying ecosystems, habitat, and species as topically relevant was more strongly linked with students' comprehension than was identifying bioremediation, densities, and fauna.

The findings underscore the importance of having reached a basic knowledge level to be able to read and comprehend texts across different subjects:

"Reading isn't just relevant to English Language Arts classes but also to reading in the content areas," says O'Reilly. "The Common Core State Standards highlight the increasing role of content area and disciplinary reading. We believe that the role of background knowledge in students' comprehension and learning might be more pronounced when reading texts in the subject areas."

The researchers plan to explore whether a similar kind of knowledge threshold emerges in other topic areas and domains; they note that it will be important to extend the research by focusing on diverse measures and populations.

If the pattern holds, the findings could have important applications for classroom teaching, given the availability of knowledge assessments that can be administered without taking valuable time away from instruction.

Read more at Science Daily

The flavor of chocolate is developed during the processing of the cocoa beans

Just as microbreweries making specialty beers have proliferated, the chocolate market has developed: more high-end manufacturers are trying to stand out by fine-tuning flavour and offering several different varietals. This creates a need to understand how flavour can be steered during the processing of noble cocoa.

The research was conducted on so-called noble cocoa (the varieties Criollo and Trinitario). Since the vast majority of the world's cocoa production is of the Forastero variety, much more research has been done on it than on the two aforementioned varieties.

"Criollo cocoa is less bitter than Forastero, but is still more aromatic. You could call it the Pinot Noir of cocoa. But Criollo is a hassle to cultivate and it is difficult to grow in Africa, which is why it is almost exclusively grown in Central America, South America and Madagascar," says Professor with Special Responsibilities at the Department of Food Science at the University of Copenhagen (UCPH FOOD) Dennis Sandris Nielsen and continues:

"In this study we have, together with colleagues from Belgium and Nicaragua, examined for the first time how different conditions during fermentation affect the composition and activity of the microorganisms naturally present on the Criollo beans and how this affects the flavour of the finished fermented beans.

It has long been known that processing, including fermentation and drying, is important to the final quality of the cocoa.

"Our research confirms this and we have also learned how to fine tune the cocoa by fine tuning the process itself, which means that you can get a higher quality out of your raw materials if you understand these processes," says Dennis Sandris Nielsen.

Part of the flavour is formed in the fermentation

A cocoa fruit is the size of a small honeydew melon and contains 30-40 cocoa beans, which are surrounded by a pulp. If you take a raw cocoa bean and try to make chocolate from it, it does not taste good -- fermentation is needed to release the flavour potential.

The fermentation is done by opening the cocoa fruit, removing the beans and allowing them to ferment -- for example, in a box.

The pulp surrounding the beans is inoculated with various microorganisms from the surroundings, equipment, etc. The pulp is very acidic (pH 3-3.5) and has a high sugar content (about 10%), and in such an environment only a small number of microorganisms can grow. This is why the fermentation usually goes well, even if it is not inoculated with a starter culture. There is a natural selection of microorganisms that positively affect the taste.

Initially, it is mainly yeasts that grow, followed by lactic acid bacteria. The yeasts form alcohol, while the lactic acid bacteria consume some of the citric acid that is naturally present. This means that the pH rises, making the environment more favourable for acetic acid bacteria, which convert some of the alcohol that has been formed into vinegar.

These processes generate a lot of heat, with the fermenting mass reaching approximately 45-48 degrees Celsius. The alcohol and vinegar penetrate the beans and kill the germ so that the beans can no longer germinate. The cell walls break down, allowing different substances to react with each other, and this is where flavour development begins.
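The succession of phases can be summarised in a toy timeline. In the Python sketch below, every rate and threshold is invented; only the ordering of the phases (yeasts, then lactic acid bacteria, then acetic acid bacteria) follows the description above.

```python
# Toy timeline of the microbial succession described above. All rates and
# thresholds are invented for illustration; only the ordering of the phases
# (yeasts -> lactic acid bacteria -> acetic acid bacteria) follows the text.
pH, sugar, alcohol, acetic = 3.2, 10.0, 0.0, 0.0

for hour in range(0, 145, 24):
    if sugar > 2.0:              # phase 1: yeasts ferment pulp sugars to alcohol
        sugar -= 2.0
        alcohol += 1.0
    if pH < 4.5:                 # phase 2: lactic acid bacteria consume citric acid,
        pH += 0.4                #          so the pulp becomes less acidic
    elif alcohol > 0.5:          # phase 3: acetic acid bacteria oxidise the alcohol
        alcohol -= 0.5           #          to vinegar, releasing heat
        acetic += 0.5
    print(f"t={hour:3d} h  pH={pH:.1f}  sugar={sugar:4.1f}%  "
          f"alcohol={alcohol:.1f}  acetic={acetic:.1f}")
```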

Flavour development continues as the beans dry in the sun until they are microbially stable. Here the process ends for the cocoa farmers, who can now sell the fermented and dried beans. Chocolate producers are responsible for roasting the beans, during which the substances formed during fermentation and drying react further, giving rise to the well-known and beloved flavour and aroma of cocoa.

Read more at Science Daily

Even scientists have gender stereotypes ... which can hamper the career of women researchers

However convinced we may be that science is not just for men, the concept of science remains much more strongly associated with masculinity than with femininity in people's minds. This automatic bias, which had already been identified among the general public, also exists in the minds of most scientists, who are not necessarily aware of it. And, under certain conditions, it may lead otherwise careful scientific evaluation committees to put women at a disadvantage during promotion rounds involving men and women researchers. These are the findings of a study conducted by behavioural scientists from the Social and Cognitive Psychology Laboratory (CNRS/Université Clermont Auvergne), the Laboratory of Cognitive Psychology (CNRS/Aix-Marseille Université), and the University of British Columbia (Canada), with the support of the CNRS Mission for the Place of Women. The study was published in the journal Nature Human Behaviour on 26 August 2019.

Women remain underrepresented in scientific research: at the French National Centre for Scientific Research (CNRS), across all disciplines, the average percentage of female researchers is 35%. And the higher the scientific research position, the more this percentage declines. Several reasons have been cited to explain these disparities: differences in levels of motivation, self-censorship ... but is discrimination also part of the story?

To find out, scientists in social and cognitive psychology studied 40 evaluation committees tasked with evaluating applications for research director positions at the CNRS over a period of two years. This is the first time that a research institution has carried out such a scientific study of its practices in the course of an annual nationwide competition covering the entire scientific spectrum.

This study shows that, from particle physics to the social sciences, most scientists, whether male or female, associate "science" and "masculine" in their semantic memory (the memory of concepts and words). This stereotype is implicit, which is to say that most often it is not detectable at the level of discourse. And it is equivalent to that observed among the general population.

Yet does this implicit stereotype have consequences on the decisions made by evaluation committees? Yes, when committees deny or minimise the existence of bias against women. Here, this is the case for around half of the committees. In these committees, the stronger the implicit stereotypes, the less often women are promoted. In contrast, when committees acknowledge the possibility of bias, implicit stereotypes, however strong they may be, have no influence.

Even if disparities between men and women in science have multiple causes and start at school (as the same authors have shown in other publications), this study indicates for the first time the existence of implicit gender stereotypes among male and female researchers across all disciplines -- stereotypes that can harm the careers of women scientists.

Read more at Science Daily

Aug 25, 2019

Bioprinting complex living tissue in just a few seconds

Tissue engineers create artificial organs and tissues that can be used to develop and test new drugs, repair damaged tissue and even replace entire organs in the human body. However, current fabrication methods limit their ability to produce free-form shapes and achieve high cell viability.

Researchers at the Laboratory of Applied Photonics Devices (LAPD), in EPFL's School of Engineering, working with colleagues from Utrecht University, have come up with an optical technique that takes just a few seconds to sculpt complex tissue shapes in a biocompatible hydrogel containing stem cells. The resulting tissue can then be vascularized by adding endothelial cells.

The team describes this high-resolution printing method in an article appearing in Advanced Materials. The technique will change the way cellular engineering specialists work, allowing them to create a new breed of personalized, functional bioprinted organs.

Printing a femur or a meniscus

The technique is called volumetric bioprinting. To create tissue, the researchers project a laser down a spinning tube filled with a stem-cell-laden hydrogel. They shape the tissue by focusing the light's energy at specific locations, where the gel then solidifies. After just a few seconds, a complex 3D shape appears, suspended in the gel. The stem cells in the hydrogel are largely unaffected by this process. The researchers then introduce endothelial cells to vascularize the tissue.
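The underlying principle is tomographic: patterns of light sent in from many angles add up inside the rotating volume, and only points where the accumulated dose crosses a threshold solidify. The 2-D Python sketch below illustrates that dose-accumulation idea with a crude unfiltered back-projection; the shape, projection scheme and threshold are all invented, and a real printer computes optimised projections rather than this stand-in.

```python
# Toy 2-D illustration of the tomographic principle behind volumetric printing:
# light patterns projected from many angles add up inside the volume, and only
# points whose accumulated dose crosses a threshold solidify. The shape,
# projection scheme and threshold here are all invented for illustration.
import numpy as np
from scipy.ndimage import rotate

target = np.zeros((64, 64))
target[20:44, 28:36] = 1.0          # the shape we would like to "print"

dose = np.zeros_like(target)
for angle in np.arange(0, 180, 2):
    # A real system optimises its projections; as a crude stand-in we forward-
    # project the rotated target, then smear that projection back through space.
    projection = rotate(target, angle, reshape=False, order=1).sum(axis=0)
    dose += rotate(np.tile(projection, (64, 1)), -angle, reshape=False, order=1)

printed = dose > np.percentile(dose, 90)   # "gel solidifies" above the dose threshold
print(f"solidified points: {printed.sum()} of {printed.size}")
```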

The researchers have shown that it's possible to create a tissue construct measuring several centimeters, which is a clinically useful size. Examples of their work include a valve similar to a heart valve, a meniscus and a complex-shaped part of the femur. They were also able to build interlocking structures.

"Unlike conventional bioprinting -- a slow, layer-by-layer process -- our technique is fast and offers greater design freedom without jeopardizing the cells' viability," says Damien Loterie, an LAPD researcher and one of the study's coauthors.

Replicating the human body

The researchers' work is a real game changer. "The characteristics of human tissue depend to a large extent on a highly sophisticated extracellular structure, and the ability to replicate this complexity could lead to a number of real clinical applications," says Paul Delrot, another coauthor. Using this technique, labs could mass-produce artificial tissues or organs at unprecedented speed. This sort of replicability is essential when it comes to testing new drugs in vitro, and it could help obviate the need for animal testing -- a clear ethical advantage as well as a way of reducing costs.

"This is just the beginning. We believe that our method is inherently scalable towards mass fabrication and could be used to produce a wide range of cellular tissue models, not to mention medical devices and personalized implants," says Christophe Moser, the head of the LAPD.

Read more at Science Daily

Big brains or big guts: Choose one

Big brains can help an animal mount quick, flexible behavioral responses to frequent or unexpected environmental changes. But some birds just don't need 'em.

A global study comparing 2,062 birds finds that, in highly variable environments, birds tend to have either larger or smaller brains relative to their body size. Birds with smaller brains tend to use ecological strategies that are not available to big-brained counterparts. Instead of relying on grey matter to survive, these birds tend to have large bodies, eat readily available food and make lots of babies.

The new research from biologists at Washington University in St. Louis appears Aug. 23 in the journal Nature Communications.

"The fact is that there are a great many species that do quite well with small brains," said Trevor Fristoe, formerly a postdoctoral researcher at Washington University, now at the University of Konstanz in Germany.

"What's really interesting is that we don't see any middle ground here," Fristoe said. "The resident species with intermediate brain size are almost completely absent from high latitude (colder and more climatically variable) environments. The species that don't go all in on either of the extreme strategies are forced to migrate to more benign climates during the winter."

"Having a large brain is typically associated with strong energetic demands and a slower life-history," said Carlos Botero, assistant professor of biology in Arts & Sciences and co-author of the paper. "Free from these constraints, species with small brains can exhibit traits and lifestyles that are never seen in larger-brained ones.

"What we found is that alternative ecological strategies that either increase or decrease investments in brain tissue are equally capable of coping with the challenges of living in high-latitude environments," he said.

Because the brain is such a costly organ to develop and maintain, biologists have long been interested in understanding how large brain size -- in all species -- could have evolved.

One hypothesis is based on the idea that one of the main advantages of possessing a big brain is that it allows for a high degree of behavioral flexibility. With flexibility comes the ability to respond to different conditions -- such as wide swings in temperature or changes in food availability.

The so-called cognitive buffer hypothesis is not the only possible explanation for the evolution of brain size -- but it is an important and influential one.

Relative brain size is a measure of the size of the brain as compared to the body -- think: an ostrich's brain might be much bigger than a chickadee's brain, but so is the ostrich's body. Predictably, the global distribution of relative brain size of birds follows a bell curve, with most species landing squarely in the middle, and only a handful of outliers with relatively large or relatively small brains.
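One standard way to compute such a relative measure is to regress log brain mass on log body mass across species and take the residuals, as in the Python sketch below; the species masses here are synthetic, and this illustrates the general method rather than the paper's pipeline.

```python
# A common way to quantify "relative brain size": residuals from a log-log
# regression of brain mass on body mass across species. The masses below are
# synthetic; only the method is the point.
import numpy as np

rng = np.random.default_rng(1)
body = rng.lognormal(mean=5.0, sigma=2.0, size=300)        # body mass, g
brain = 0.06 * body**0.58 * rng.lognormal(0.0, 0.3, 300)   # toy allometric scaling

slope, intercept = np.polyfit(np.log(body), np.log(brain), 1)
relative_brain = np.log(brain) - (intercept + slope * np.log(body))
# Positive residual -> a larger brain than expected for that body size.
print(f"allometric slope ~ {slope:.2f}; residual SD ~ {relative_brain.std():.2f}")
```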

Previous studies had found general trends towards larger relative brain sizes in higher latitudes, where conditions are more variable -- consistent with the cognitive buffer hypothesis. Fristoe and Botero's new study is different because it looks at the full distribution of brain sizes across environments, allowing them to test whether different sizes are over- or under-represented.

Excluding contributions from migrants -- the birds that live in polar or temperate environments only during more favorable times of the year -- the researchers found that at high latitudes, bird brain size appears to be bimodal. This morphological pattern means that bird brains are significantly more likely to be relatively large, or relatively small, compared to body size.
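A simple way to see whether a distribution is better described by two modes than one is to compare Gaussian mixture fits, as sketched below on fabricated residuals; the paper's actual statistical analysis may well differ.

```python
# Sketch of one way to test for bimodality: compare the BIC of 1- vs
# 2-component Gaussian mixtures fitted to relative brain sizes. The data are
# fabricated with two modes purely for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-0.8, 0.3, 150),    # small-brained residents
                    rng.normal(+0.8, 0.3, 150)]).reshape(-1, 1)

for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(x)
    print(f"{k} component(s): BIC = {gmm.bic(x):.1f}")  # lower BIC wins
```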

What was going on here? Fristoe, born in Alaska, had a few ideas.

In fact, Fristoe suggests that the Alaska state bird, the ptarmigan, might be a good poster child for the small-brained species. Endearing though she is -- with her plushy bosom, feathered feet and unusual chuckling call -- she's not exactly known for her smarts. The ptarmigan can, however, chow down on twigs and willow leaves with the best of them.

"In our paper, we find that small-brained species in these environments employ strategies that are unachievable with a large brain," Fristoe said. "First, these species are able to persist by foraging on readily available but difficult to digest resources such as dormant plant buds, the needles of conifers, or even twigs.

"These foods can be found even during harsh winter conditions, but they are fibrous and require a large gut to digest," he said. "Gut tissue, like brain tissue, is energetically demanding, and limited budgets mean that it is challenging to maintain a lot of both.

"We also found that these species have high reproductive rates, producing many offspring every year," Fristoe said. "This would allow their populations to recover from high mortality during particularly challenging conditions. Because big-brained species tend to invest more time in raising fewer offspring, this is a strategy that is not available to them."

In other words, maybe big brains are not all that.

"Brains are not evolving in isolation -- they are part of a broader suite of adaptations that help organisms be successful in their lives," Botero said. "Because of trade-offs between different aspects of that total phenotype, we find that two different lineages may respond to selection from environmental oscillations in completely different ways.

Read more at Science Daily