Sep 1, 2018

Sound can be used to print droplets that couldn't be printed before

In acoustophoretic printing, sound waves generate a controllable force that pulls each droplet off the nozzle when it reaches a specific size and ejects it towards the printing target, much like picking apples from a tree.
Harvard University researchers have developed a new printing method that uses sound waves to generate droplets from liquids with an unprecedented range of composition and viscosity. This technique could finally enable the manufacturing of many new biopharmaceuticals, cosmetics, and food and expand the possibilities of optical and conductive materials.

"By harnessing acoustic forces, we have created a new technology that enables myriad materials to be printed in a drop-on-demand manner," said Jennifer Lewis, the Hansjorg Wyss Professor of Biologically Inspired Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences and the senior author of the paper.

Lewis is also a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering and the Jianming Yu Professor of Arts and Sciences at Harvard.

The research is published in Science Advances.

Liquid droplets are used in many applications from printing ink on paper to creating microcapsules for drug delivery. Inkjet printing is the most common technique used to pattern liquid droplets, but it is only suitable for liquids up to roughly 10 times the viscosity of water. Yet many fluids of interest to researchers are far more viscous. For example, biopolymer and cell-laden solutions, which are vital for biopharmaceuticals and bioprinting, are at least 100 times more viscous than water. Some sugar-based biopolymers can be as viscous as honey, which is 25,000 times more viscous than water.
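The viscosity range at stake here spans roughly eleven orders of magnitude. A quick tabulation of the approximate ratios quoted in this article (reference values for illustration only):

```python
# Approximate dynamic viscosities relative to water (~1 mPa*s at 20 C),
# using the order-of-magnitude figures quoted in the article.
viscosities_mpa_s = {
    "water": 1.0,
    "inkjet ink (practical ceiling)": 10.0,   # ~10x water
    "cell-laden biopolymer solution": 100.0,  # "at least 100 times" water
    "honey": 25_000.0,                        # ~25,000x water
    "pitch": 2e11,                            # ~200 billion times water
}

for fluid, mu in viscosities_mpa_s.items():
    ratio = mu / viscosities_mpa_s["water"]
    print(f"{fluid:32s} {ratio:>16,.0f}x water")
```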

The viscosity of these fluids also changes dramatically with temperature and composition, making it even more difficult to optimize printing parameters to control droplet sizes.

"Our goal was to take viscosity out of the picture by developing a printing system that is independent from the material properties of the fluid," said Daniele Foresti, first author of the paper, the Branco Weiss Fellow and Research Associate in Materials Science and Mechanical Engineering at SEAS and the Wyss Institute.

To do that, the researchers turned to acoustic waves.

Thanks to gravity, any liquid can drip -- from water dripping out of a faucet to the decades-long pitch drop experiment. With gravity alone, however, droplet size stays large and the drip rate is difficult to control. Pitch, which has a viscosity roughly 200 billion times that of water, forms a single drop per decade.

To enhance drop formation, the research team relies on generating sound waves. These pressure waves have been typically used to defy gravity, as in the case of acoustic levitation. Now, the researchers are using them to assist gravity, dubbing this new technique acoustophoretic printing.

The researchers built a subwavelength acoustic resonator that generates a highly confined acoustic field, resulting in a pulling force at the tip of the printer nozzle exceeding 100 times the normal gravitational force (1 G) -- several times the gravitational force at the surface of the sun.
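As a sanity check on the solar comparison, the sun's surface gravity follows from Newton's law with standard values (the exact multiple depends on the peak force the resonator actually achieves):

```python
# Sun's surface gravity from Newton's law, using standard constants.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
g_earth = 9.81     # 1 G, m/s^2

g_sun = G * M_sun / R_sun**2
print(f"sun surface gravity: {g_sun:.0f} m/s^2 ({g_sun / g_earth:.0f} G)")
print(f"a 100 G pulling force is {100 * g_earth / g_sun:.1f}x solar surface gravity")
```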

This controllable force pulls each droplet off the nozzle when it reaches a specific size and ejects it towards the printing target. The higher the amplitude of the sound waves, the smaller the droplet size, irrespective of the viscosity of the fluid.

"The idea is to generate an acoustic field that literally detaches tiny droplets from the nozzle, much like picking apples from a tree," said Foresti.
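The apple-picking picture can be sketched as a simple force balance. In a dripping-faucet model (a simplified variant of Tate's law, not the authors' full treatment), a droplet detaches when the combined gravitational and acoustic pull matches the capillary force holding it to the nozzle, so a stronger acoustic field yields a smaller droplet:

```python
import math

# Dripping-faucet force balance (simplified Tate's law): the droplet
# detaches when the body force rho * V * g_eff equals the capillary force
# pi * gamma * d pinning it to the nozzle. All values are illustrative.
gamma = 0.072   # surface tension of water, N/m
rho = 1000.0    # fluid density, kg/m^3
d = 1e-3        # nozzle diameter, m
g = 9.81        # m/s^2

def droplet_diameter(acoustic_g):
    """Detached-droplet diameter when the acoustic field adds
    acoustic_g * g of pull on top of gravity."""
    g_eff = (1.0 + acoustic_g) * g
    volume = math.pi * gamma * d / (rho * g_eff)  # V = pi*gamma*d / (rho*g_eff)
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

for a in (0, 10, 100):
    print(f"acoustic pull {a:>3d} G -> droplet diameter ~{droplet_diameter(a) * 1e3:.2f} mm")
```

The cube-root scaling shows the trend described in the article: boosting the pulling force a hundredfold shrinks the droplet diameter by a factor of about 4.7, and viscosity never enters the balance.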

The researchers tested the process on a wide range of materials from honey to stem-cell inks, biopolymers, optical resins and even liquid metals. Importantly, sound waves don't travel through the droplet, making the method safe to use even with sensitive biological cargo, such as living cells or proteins.

"Our technology should have an immediate impact on the pharmaceutical industry," said Lewis. "However, we believe that this will become an important platform for multiple industries."

"This is an exquisite and impactful example of the breadth and reach of collaborative research," said Dan Finotello, director of NSF's MRSEC program. "The authors have developed a new printing platform using acoustic forces which, unlike other methods, are material-independent and thus offer tremendous printing versatility. The application space is limitless."

The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.

Read more at Science Daily

Biomechanics of chewing depend more on animal size than diet

This image of a bobcat skull 3D model shows the part of the jaw joint (red) that was the focus of the UB research.
Chewing: We don't think about it, we just do it. But biologists don't know a lot about how chewing behavior leaves telltale signs on the underlying bones. To find out, researchers at the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo have been studying the jaw joints of carnivorans, the large mammalian order that includes dogs, cats and bears.

Last week, the scientists described results that they didn't expect to find. In the paper, published online on Aug. 24 in PLOS ONE, they reported that the jaw joint bone, the center around which chewing activity revolves (literally), appears to have evolved based more on an animal's size than what it eats.

While focused on carnivorans, the research may also provide clues to how jaw joints function in general, including in humans, and could improve the understanding of temporomandibular joint (TMJ) disorders, which cause pain in the jaw joint and in the muscles that control the jaw.

"Even though it is clear that the carnivoran jaw joint is important for feeding, no one knew if jaw joint bone structure across species was related to the mechanical demands of feeding," explained M. Aleksander Wysocki, first author and a doctoral student in the new computational cell biology, anatomy and pathology graduate program in the Department of Pathology and Anatomical Sciences in the Jacobs School.

Wysocki and co-author Jack Tseng, PhD, assistant professor in the Department of Pathology and Anatomical Sciences in the Jacobs School, took a multifaceted approach. They examined 40 different carnivoran species from bobcats to wolves, looking at the jaw joint bone called the mandibular condyle.

The jaw's pivot point

"The mandibular condyle is the pivot point of the jaw; it functions similarly to the way the bolt of a door hinge does," Wysocki said. "Studies have shown that this joint is loaded with force during chewing."

He noted that the team was especially interested in the intricate, spongy bone structures inside the jaw joint, also known as trabecular bone. "We thought that this part of the skull would be the best candidate for determining relationships between food type and anatomy."

For example, because hyenas crush bone while consuming their prey, it could be assumed that their jaw joints would need to be capable of exerting significant force. "On the other hand, an animal that eats plants wouldn't be expected to require that kind of jaw joint structure," he said. "But we found that diet has a weaker relationship with skull anatomy than we thought. Mostly it's the animals' size that determines jaw joint structure and mechanical properties."

The researchers took computed tomography (CT) scan data of skulls from 40 species at the American Museum of Natural History, then built 3D models of them, from which they extracted the internal bone structure. Using a 3D printer, the scientists then printed 3D cores, based on virtual "core samples" taken from the mandibular condyle of each jaw joint, which they then scaled and tested for strength.

"Using a compression gauge, we measured how rigid these jaw joint structures were and how much force they could withstand," Wysocki said.

No significant correlation

The testing revealed no significant correlations between the shape or mechanical performance of the jaw joint bone and the diets of particular carnivorans.

"The mandibular condyle absorbs compressive force during chewing so we hypothesized that this was a part of the skull that was likely to be influenced by what the animal eats," Wysocki said. "It turns out that body size is the key factor determining the complexity of jaw joint bone structure and strength."

He noted that some previous research has revealed that despite the wide variety of diets consumed by different carnivorans, the overall skull shape is considerably influenced by non-feeding variables.

"Still, given how critical the temporomandibular joint is in capturing prey and eating it, these results are very striking," he said. "For over a century, it has been assumed that skull shape is closely related to what an animal eats. And now we have found that jaw joint bone structure is related to carnivoran body size, not what the animal is eating."

Wysocki said that the reasons for this apparent disconnect may be that larger carnivorans don't need such powerful jaws because they are proportionately larger than their prey, or possibly because they share the work involved by hunting in groups. He also said that other factors such as developmental constraints of bone structure could play a role in producing the trends observed in the study.

"Our research shows that factors other than diet need to be considered when attempting to understand jaw joint function," Wysocki concluded. "It turns out that the functional anatomy of the jaw joint is much more complex than we thought."

Read more at Science Daily

Aug 31, 2018

Water worlds could support life, study says

Depiction of a world completely covered with ocean.
The conditions for life surviving on planets entirely covered in water are more fluid than previously thought, opening up the possibility that water worlds could be habitable, according to a new paper from the University of Chicago and Pennsylvania State University.

The scientific community has largely assumed that planets covered in a deep ocean would not support the cycling of minerals and gases that keeps the climate stable on Earth, and thus wouldn't be friendly to life. But the study, published Aug. 30 in The Astrophysical Journal, found that ocean planets could stay in the "sweet spot" for habitability much longer than previously assumed. The authors based their findings on more than a thousand simulations.

"This really pushes back against the idea you need an Earth clone -- that is, a planet with some land and a shallow ocean," said Edwin Kite, assistant professor of geophysical sciences at UChicago and lead author of the study.

As telescopes get better, scientists are finding more and more planets orbiting stars in other solar systems. Such discoveries are resulting in new research into how life could potentially survive on other planets, some of which are very different from Earth -- some may be covered entirely in water hundreds of miles deep.

Because life needs an extended period to evolve, and because the light and heat on planets can change as their stars age, scientists usually look for planets that have both some water and some way to keep their climates stable over time. The primary method we know of is how Earth does it. Over long timescales, our planet cools itself by drawing down greenhouse gases into minerals and warms itself up by releasing them via volcanoes.

But this model doesn't work on a water world, with deep water covering the rock and suppressing volcanoes.

Kite and Penn State coauthor Eric Ford wanted to know if there was another way. They set up a simulation with thousands of randomly generated planets and tracked the evolution of their climates over billions of years.

"The surprise was that many of them stay stable for more than a billion years, just by luck of the draw," Kite said. "Our best guess is that it's on the order of 10 percent of them."

These lucky planets sit in the right location around their stars. They happened to have the right amount of carbon present, and they don't have too many minerals and elements from the crust dissolved in the oceans that would pull carbon out of the atmosphere. They have enough water from the start, and they cycle carbon between the atmosphere and ocean only, which in the right concentrations is sufficient to keep things stable.
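The "luck of the draw" logic can be caricatured as a toy Monte Carlo experiment. The thresholds below are hypothetical stand-ins for the real carbon-partitioning physics; only the structure of the calculation reflects the study:

```python
import random

random.seed(42)

def stays_habitable(planet):
    """Toy stability test; the thresholds are hypothetical stand-ins
    for the carbon-cycle conditions described in the study."""
    return (0.25 < planet["carbon"] < 0.75  # the right amount of carbon
            and planet["dissolved"] < 0.4   # ocean doesn't strip the atmosphere
            and planet["water"] > 0.5)      # enough water from the start

# Draw thousands of random planets and count the lucky ones.
planets = [{"carbon": random.random(),
            "dissolved": random.random(),
            "water": random.random()}
           for _ in range(10_000)]

fraction = sum(stays_habitable(p) for p in planets) / len(planets)
print(f"{fraction:.1%} of randomly drawn planets stay habitable")
```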

"How much time a planet has is basically dependent on carbon dioxide and how it's partitioned between the ocean, atmosphere and rocks in its early years," said Kite. "It does seem there is a way to keep a planet habitable long-term without the geochemical cycling we see on Earth."

Read more at Science Daily

Scientists identify protein that may have existed when life began

Researchers have designed a synthetic small protein that wraps around a metal core composed of iron and sulfur. This protein can be repeatedly charged and discharged, allowing it to shuttle electrons within a cell. Such peptides may have existed at the dawn of life, moving electrons in early metabolic cycles.
How did life arise on Earth? Rutgers researchers have found among the first and perhaps only hard evidence that simple protein catalysts -- essential for cells, the building blocks of life, to function -- may have existed when life began.

Their study of a primordial peptide, or short protein, is published in the Journal of the American Chemical Society.

In the late 1980s and early 1990s, the chemist Günter Wächtershäuser postulated that life began on iron- and sulfur-containing rocks in the ocean. Wächtershäuser and others predicted that short peptides would have bound metals and served as catalysts of life-producing chemistry, according to study co-author Vikas Nanda, an associate professor at Rutgers' Robert Wood Johnson Medical School.

Human DNA consists of genes that code for proteins that are a few hundred to a few thousand amino acids long. These complex proteins -- needed to make all living things function properly -- are the result of billions of years of evolution. When life began, proteins were likely much simpler, perhaps just 10 to 20 amino acids long. With computer modeling, Rutgers scientists have been exploring what early peptides may have looked like and their possible chemical functions, according to Nanda.

The scientists used computers to model a short, 12-amino-acid protein and then tested it in the laboratory. This peptide has several impressive and important features. It contains only two types of amino acids (rather than the full set of 20 amino acids from which the millions of different proteins needed for specific body functions are built), it is very short, and it could have emerged spontaneously on the early Earth in the right conditions. The metal cluster at the core of this peptide resembles the structure and chemistry of iron-sulfur minerals that were abundant in early Earth oceans. The peptide can also charge and discharge electrons repeatedly without falling apart, according to Nanda, a resident faculty member at the Center for Advanced Biotechnology and Medicine.
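A back-of-the-envelope calculation (not from the paper) shows why a short, two-letter peptide is a plausible spontaneous product: its sequence space is tiny compared with that of full 20-amino-acid proteins:

```python
# Sequence-space sizes for a 12-residue peptide (not a figure from the paper).
length = 12
two_letter_space = 2 ** length  # peptides built from just 2 amino-acid types
full_space = 20 ** length       # peptides drawing on all 20 standard amino acids

print(f"2-letter 12-mers:  {two_letter_space:,}")
print(f"20-letter 12-mers: {full_space:.2e}")
print(f"the full space is {full_space // two_letter_space:,}x larger")
```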

"Modern proteins called ferredoxins do this, shuttling electrons around the cell to promote metabolism," said senior author Professor Paul G. Falkowski, who leads Rutgers' Environmental Biophysics and Molecular Ecology Laboratory. "A primordial peptide like the one we studied may have served a similar function in the origins of life."

Falkowski is the principal investigator for a NASA-funded ENIGMA project led by Rutgers scientists that aims to understand how protein catalysts evolved at the start of life. Nanda leads one team that will characterize the full potential of the primordial peptide and continue to develop other molecules that may have played key roles in the origins of life.

With computers, Rutgers scientists have smashed and dissected nearly 10,000 proteins and pinpointed four "Legos of life" -- core chemical structures that can be stacked to form the innumerable proteins inside all organisms. The small primordial peptide may be a precursor to the longer Legos of life, and scientists can now run experiments on how such peptides may have functioned in early-life chemistry.

Read more at Science Daily

Most land-based ecosystems worldwide risk 'major transformation' due to climate change

Under the "business as usual" emissions scenario, the expected changes would threaten global biodiversity and derail vital services that nature provides to humanity, such as water security, carbon storage and recreation.
Without dramatic reductions in greenhouse-gas emissions, most of the planet's land-based ecosystems -- from its forests and grasslands to the deserts and tundra -- are at high risk of "major transformation" due to climate change, according to a new study from an international research team.

The researchers used fossil records of global vegetation change that occurred during a period of post-glacial warming to project the magnitude of ecosystem transformations likely in the future under various greenhouse gas emissions scenarios.

They found that under a "business as usual" emissions scenario, in which little is done to rein in heat-trapping greenhouse-gas emissions, vegetation changes across the planet's wild landscapes will likely be more far-reaching and disruptive than earlier studies suggested.

The changes would threaten global biodiversity and derail vital services that nature provides to humanity, such as water security, carbon storage and recreation, according to study co-author Jonathan Overpeck, dean of the School for Environment and Sustainability at the University of Michigan.

"If we allow climate change to go unchecked, the vegetation of this planet is going to look completely different than it does today, and that means a huge risk to the diversity of the planet," said Overpeck, who conceived the idea for the study with corresponding author Stephen T. Jackson of the U.S. Geological Survey.

The findings are scheduled for publication in the Aug. 31 edition of the journal Science. Forty-two researchers from around the world contributed to the paper. The first author is geosciences graduate student Connor Nolan of the University of Arizona.

Overpeck stressed that the team's results are not merely hypothetical. Some of the expected vegetational changes are already underway in places like the American West and Southwest, where forest dieback and massive wildfires are transforming landscapes.

"We're talking about global landscape change that is ubiquitous and dramatic," Overpeck said. "And we're already starting to see it in the United States, as well as around the globe."

Previous studies based largely on computer modeling and present-day observations also predicted sweeping vegetational changes in response to climate warming due to the ongoing buildup of carbon dioxide and other greenhouse gases.

But the new study, which took five years to complete, is the first to use paleoecological data -- the records of past vegetation change present in ancient pollen grains and plant fossils from hundreds of sites worldwide -- to project the magnitude of future ecosystem changes on a global scale.

The team focused on vegetation changes that occurred during Earth's last deglaciation, a period of warming that began 21,000 years ago and that was roughly comparable in magnitude (4 to 7 degrees Celsius, or 7 to 13 degrees Fahrenheit) to the warming expected in the next 100 to 150 years if greenhouse gas emissions are not reduced significantly.

Because the amount of warming in the two periods is similar, a post-glacial to modern comparison provides "a conservative estimate of the extent of ecological transformation to which the planet will be committed under future climate scenarios," the authors wrote.

The estimate is considered conservative in part because the rate of projected future global warming is at least an order of magnitude greater than that of the last deglaciation and is therefore potentially far more disruptive.
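The rate comparison is simple arithmetic: comparable warming (here taken as roughly 5 degrees Celsius) spread over the deglaciation's millennia versus the next century and a half:

```python
# Warming-rate comparison behind the "order of magnitude" statement.
warming_c = 5.0            # comparable warming in both periods, deg C (midrange)

deglacial_years = 10_000   # conservative duration of post-glacial warming
future_years = 150         # "the next 100 to 150 years"

past_rate = warming_c / deglacial_years    # deg C per year
future_rate = warming_c / future_years

print(f"deglacial: {past_rate * 1000:.2f} C per millennium")
print(f"projected: {future_rate * 100:.1f} C per century")
print(f"projected rate ~{future_rate / past_rate:.0f}x faster")
```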

"We're talking about the same amount of change in 10-to-20 thousand years that's going to be crammed into a century or two," said Jackson, director of the U.S. Geological Survey's Southwest Climate Adaptation Center. "Ecosystems are going to be scrambling to catch up."

To determine the extent of the vegetation change following the last glacial peak, the researchers first compiled and evaluated pollen and plant-fossil records from 594 sites worldwide -- from every continent except Antarctica. All of the sites in their global database of ecological change had been reliably radiocarbon-dated to the period between 21,000 and 14,000 years before present.

Then they used paleoclimatic data from a number of sources to infer the corresponding temperature increases responsible for the vegetation changes seen in the fossils. That, in turn, enabled them to calculate how various levels of future warming will likely affect the planet's terrestrial vegetation and ecosystems.

"We used the results from the past to look at the risk of future ecosystem change," said the University of Arizona's Nolan. "We find that as temperatures rise, there are bigger and bigger risks for more ecosystem change."

Under a business as usual emissions scenario, the probability of large-scale vegetation change is greater than 60 percent, they concluded. In contrast, if greenhouse-gas emissions are reduced to levels targeted in the 2015 Paris Agreement, the probability of large-scale vegetation change is less than 45 percent.

Much of the change could occur during the 21st century, especially where vegetation disturbance is amplified by other factors, such as climate extremes, widespread plant mortality events, habitat fragmentation, invasive species and natural resource harvesting. The changes will likely continue into the 22nd century or beyond, the researchers concluded.

The ecosystem services that will be significantly impacted include carbon storage -- currently, vast amounts of carbon are stored in the plants and soils of land-based ecosystems.

"A lot of the carbon now locked up by vegetation around the planet could be released to the atmosphere, further amplifying the magnitude of the climate change," Overpeck said.

The authors say their empirically based, paleoecological approach provides an independent perspective on climate-driven vegetation change that complements previous studies based on modeling and present-day observations.

Read more at Science Daily

Human genome could contain up to 20 percent fewer genes, researchers reveal

DNA illustration
A new study led by the Spanish National Cancer Research Centre (CNIO) reveals that up to 20% of genes classified as coding (those that produce the proteins that are the building blocks of all living things) may not be coding after all, because they have characteristics typical of non-coding genes or pseudogenes (obsolete coding genes). The resulting reduction in the size of the human gene catalog could have important effects in biomedicine, since the number of protein-coding genes and their identification is of vital importance for the investigation of multiple diseases, including cancer and cardiovascular disease.

The work, published in the journal Nucleic Acids Research, is the result of an international collaboration led by Michael Tress of the CNIO Bioinformatics Unit along with researchers from the Wellcome Trust Sanger Institute in the United Kingdom, the Massachusetts Institute of Technology in the United States, the Pompeu Fabra University and the National Center for Supercomputing (BSC-CNS) in Barcelona, and the National Center for Cardiovascular Research (CNIC) in Madrid.

Since the completion of the sequencing of the human genome in 2003, experts from around the world have been working to compile the final human proteome (the total number of proteins generated from genes) and the genes that produce them. This task is immense given the complexity of the human genome and the fact that we have about 20,000 separate coding genes.

The researchers analyzed the genes cataloged as protein coding in the main human reference proteomes: a detailed comparison of the reference proteomes from GENCODE/Ensembl, RefSeq and UniProtKB found 22,210 coding genes, but only 19,446 of these genes were present in all three annotations.

When they analyzed the 2,764 genes that were present in only one or two of these reference annotations, they were surprised to discover that experimental evidence and manual annotations suggested that almost all of these genes were more likely to be non-coding genes or pseudogenes. In fact, these genes, together with another 1,470 coding genes that are present in the three reference catalogs, were not evolving like typical protein coding genes. The conclusion of the study is that most of these 4,234 genes probably do not code for proteins.
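The reported counts follow from basic set arithmetic over the three annotations (using only the published totals; the underlying gene lists are not reproduced here):

```python
# Set arithmetic behind the reported gene counts.
total_union = 22_210    # coding genes found across the three catalogs combined
in_all_three = 19_446   # genes annotated as coding in all three

in_one_or_two = total_union - in_all_three
print(f"genes in only one or two catalogs: {in_one_or_two:,}")

# Add the 1,470 universally annotated genes that nonetheless evolve like
# non-coding sequence to get the study's headline figure.
questionable = in_one_or_two + 1_470
print(f"genes that probably do not code for proteins: {questionable:,}")
```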

The study is already paying off, according to the scientists. "We have been able to analyze many of these genes in detail," Tress explains, "and more than 300 genes have already been reclassified as non-coding." The results are already being included in the new annotations of the human genome by the GENCODE international consortium, of which the CNIO researchers are part.

Conflicting gene numbers in recent years

The work once again highlights doubts about the number of real genes present in human cells, 15 years after the sequencing of the human genome. Although the most recent data indicates that the number of genes encoding human proteins could exceed 20,000, Federico Abascal, of the Wellcome Trust Sanger Institute in the United Kingdom and first author of the work, states: "Our evidence suggests that humans may only have 19,000 coding genes, but we still do not know which 19,000 genes they are."

For his part, David Juan, of the Pompeu Fabra University and participant in the study, reiterates the importance of these results: "Surprisingly, some of these unusual genes have been well studied and have more than 100 scientific publications based on the assumption that the gene produces a protein."

This study suggests that there is still a large amount of uncertainty, since the final number of coding genes could be 2,000 more or 2,000 fewer than currently cataloged. The human proteome still requires much work, especially given its importance to the medical community.

Read more at Science Daily

Aug 30, 2018

Cold climates contributed to the extinction of the Neanderthals

Neanderthal depiction
Climate change may have played a more important role in the extinction of Neanderthals than previously believed, according to a new study published in the journal Proceedings of the National Academy of Sciences.

A team of researchers from a number of European and American research institutions, including Northumbria University, Newcastle, have produced detailed new natural records from stalagmites that highlight changes in the European climate more than 40,000 years ago.

They found several cold periods that coincide with the timings of a near complete absence of archaeological artefacts from the Neanderthals, suggesting the impact that changes in climate had on the long-term survival of Neanderthal man.

Stalagmites grow in thin layers each year and any change in temperature alters their chemical composition. The layers therefore preserve a natural archive of climate change over many thousands of years.

The researchers examined stalagmites in two Romanian caves, which revealed more detailed records of climate change in continental Europe than had previously been available.

The layers of the stalagmites showed a series of prolonged extreme cold and excessively dry conditions in Europe between 44,000 and 40,000 years ago. They highlight a cycle of temperatures gradually cooling, staying very cold for centuries to millennia and then warming again very abruptly.

The researchers compared these palaeoclimate records with archaeological records of Neanderthal artefacts and found a correlation between the cold periods -- known as stadials -- and an absence of Neanderthal tools.

This indicates the Neanderthal population greatly reduced during the cold periods, suggesting that climate change played a role in their decline.

Dr Vasile Ersek is co-author of the study and a senior lecturer in physical geography in Northumbria University's Department of Geography and Environmental Sciences. He explained: "The Neanderthals were the human species closest to ours and lived in Eurasia for some 350,000 years. However, around 40,000 years ago -- during the last Ice Age and shortly after the arrival of anatomically modern humans in Europe -- they became extinct.

"For many years we have wondered what could have caused their demise. Were they pushed 'over the edge' by the arrival of modern humans, or were other factors involved? Our study suggests that climate change may have had an important role in the Neanderthal extinction."

The researchers believe that modern humans survived these cold stadial periods because they were better adapted to their environment than the Neanderthals.

Neanderthals were skilled hunters and had learned how to control fire, but they had a less diverse diet than modern humans, living largely on meat from the animals they had successfully pursued. These food sources would naturally become scarce during colder periods, making the Neanderthals more vulnerable to rapid environmental change.

In comparison, modern humans had incorporated fish and plants into their diet alongside meat, which supplemented their food intake and potentially enabled their survival.

Dr Ersek said the research team's findings indicated that this cycle of "hostile climate intervals" over thousands of years, in which the climate varied abruptly and was characterised by extreme cold temperatures, helped determine the demographic future of Europe.

"Before now, we did not have climate records from the region where Neanderthals lived which had the necessary age accuracy and resolution to establish a link between when Neanderthals died out and the timing of these extreme cold periods," he said, "But our findings indicate that the Neanderthal populations successively decreased during the repeated cold stadials.

"When temperatures warmed again, their smaller populations could not expand as their habitat was also being occupied by modern humans, and this facilitated a staggered expansion of modern humans into Europe."

Read more at Science Daily

Unstoppable monster in the early universe

ALMA revealed the distribution of molecular gas (left) and dust particles (right). In addition to the dense cloud in the center, the research team found two dense clouds several thousand light-years away from the center. These dense clouds are dynamically unstable and thought to be the sites of intense star formation.
Astronomers have obtained the most detailed anatomy chart yet of a monster galaxy located 12.4 billion light-years away. Using the Atacama Large Millimeter/submillimeter Array (ALMA), the team revealed that the molecular clouds in the galaxy are highly unstable, leading to runaway star formation. Monster galaxies are thought to be the ancestors of the huge elliptical galaxies in today's universe, so these findings pave the way to understanding how such galaxies form and evolve.

"One of the best parts of ALMA observations is to see the far-away galaxies with unprecedented resolution," says Ken-ichi Tadaki, a postdoctoral researcher at the Japan Society for the Promotion of Science and the National Astronomical Observatory of Japan, the lead author of the research paper published in the journal Nature.

Monster galaxies, or starburst galaxies, form stars at a startling pace -- 1,000 times faster than star formation in our Galaxy. But why are they so active? To tackle this problem, researchers need to know the environment around the stellar nurseries. Drawing detailed maps of molecular clouds is an important step in scouting a cosmic monster.

Tadaki and the team targeted the chimerical galaxy COSMOS-AzTEC-1. This galaxy was first discovered with the James Clerk Maxwell Telescope in Hawai`i, and later the Large Millimeter Telescope (LMT) in Mexico found an enormous amount of carbon monoxide gas in the galaxy and revealed its hidden starburst. The LMT observations also measured the distance to the galaxy, putting it at 12.4 billion light-years.

Researchers have found that COSMOS-AzTEC-1 is rich with the ingredients of stars, but it was still difficult to figure out the nature of the cosmic gas in the galaxy. The team utilized the high resolution and high sensitivity of ALMA to observe this monster galaxy and obtain a detailed map of the distribution and the motion of the gas. Thanks to ALMA's most extended antenna configuration, with baselines of up to 16 km, this is the highest-resolution molecular gas map of a distant monster galaxy ever made.

"We found that there are two distinct large clouds several thousand light-years away from the center," explains Tadaki. "In most distant starburst galaxies, stars are actively formed in the center. So it is surprising to find off-center clouds."

The astronomers further investigated the nature of the gas in COSMOS-AzTEC-1 and found that the clouds throughout the galaxy are very unstable, which is unusual. In a normal situation, the inward gravity and outward pressure are balanced in the clouds. Once gravity overcomes pressure, the gas cloud collapses and forms stars at a rapid pace. Then, stars and supernova explosions at the end of the stellar life cycle blast out gases, which increase the outward pressure. As a result, gravity and pressure reach a balanced state and star formation continues at a moderate pace. In this way, star formation in galaxies is self-regulating. But in COSMOS-AzTEC-1, the pressure is far too weak to balance gravity. This galaxy therefore shows runaway star formation and has morphed into an unstoppable monster galaxy.

The team estimated that the gas in COSMOS-AzTEC-1 will be completely consumed in 100 million years, which is 10 times faster than in other star forming galaxies.
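The quoted depletion time is simply the available gas reservoir divided by the rate at which it is being turned into stars. A minimal sketch with illustrative round numbers (the measured values for COSMOS-AzTEC-1 are in the paper and are not reproduced here):

```python
# Gas depletion timescale: t_dep = M_gas / SFR.
# Illustrative values only: a gas reservoir of ~1e11 solar masses
# consumed at ~1,000 solar masses per year (roughly 1,000 times the
# Milky Way's star formation rate, per the article).
M_GAS_SOLAR = 1e11       # hypothetical gas mass, in solar masses
SFR_SOLAR_PER_YR = 1e3   # hypothetical star formation rate, solar masses/yr

t_dep_yr = M_GAS_SOLAR / SFR_SOLAR_PER_YR
print(f"Depletion time: {t_dep_yr / 1e6:.0f} million years")  # 100 million years
```

With these round inputs the timescale comes out to the order of 100 million years, matching the estimate in the text.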

But why is the gas in COSMOS-AzTEC-1 so unstable? Researchers do not have a definitive answer yet, but galaxy merger is a possible cause. Galaxy collision may have efficiently transported the gas into a small area and ignited intense star formation.

Read more at Science Daily

How a NASA scientist looks into the depths of the Great Red Spot to find water on Jupiter

The Great Red Spot is the dark patch in the middle of this infrared image. It is dark due to the thick clouds that block thermal radiation. The yellow strip denotes the portion of the Great Red Spot used in astrophysicist Gordon L. Bjoraker's analysis.
For centuries, scientists have worked to understand the makeup of Jupiter. It's no wonder: this mysterious planet is the biggest one in our solar system by far, and chemically, the closest relative to the Sun. Understanding Jupiter is a key to learning more about how our solar system formed, and even about how other solar systems develop.

But one critical question has bedeviled astronomers for generations: Is there water deep in Jupiter's atmosphere, and if so, how much?

Gordon L. Bjoraker, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, reported in a recent paper in the Astronomical Journal that he and his team have brought the Jovian research community closer to the answer.

By looking from ground-based telescopes at wavelengths sensitive to thermal radiation leaking from the depths of Jupiter's persistent storm, the Great Red Spot, they detected the chemical signatures of water above the planet's deepest clouds. The pressure of the water, the researchers concluded, combined with their measurements of another oxygen-bearing gas, carbon monoxide, implies that Jupiter has 2 to 9 times more oxygen than the sun. This finding supports theoretical and computer-simulation models that have predicted abundant water (H2O) on Jupiter, made of oxygen (O) tied up with molecular hydrogen (H2).

The revelation was stirring given that the team's experiment could have easily failed. The Great Red Spot is full of dense clouds, which makes it hard for electromagnetic energy to escape and teach astronomers anything about the chemistry within.

"It turns out they're not so thick that they block our ability to see deeply," said Bjoraker. "That's been a pleasant surprise."

New spectroscopic technology and sheer curiosity gave the team a boost in peering deep inside Jupiter, which has an atmosphere thousands of miles deep, Bjoraker said: "We thought, well, let's just see what's out there."

The data Bjoraker and his team collected will supplement the information NASA's Juno spacecraft is gathering as it circles the planet from north to south once every 53 days.

Among other things, Juno is looking for water with its own infrared spectrometer and with a microwave radiometer that can probe deeper than anyone has seen -- to 100 bars, or 100 times the atmospheric pressure at Earth's surface. (Altitude on Jupiter is measured in bars, which represent atmospheric pressure, since the planet does not have a surface, like Earth, from which to measure elevation.)

If Juno returns similar water findings, thereby backing Bjoraker's ground-based technique, it could open a new window into solving the water problem, said Goddard's Amy Simon, a planetary atmospheres expert.

"If it works, then maybe we can apply it elsewhere, like Saturn, Uranus or Neptune, where we don't have a Juno," she said.

Juno is the latest spacecraft tasked with finding water, likely in gas form, on this giant gaseous planet.

Water is a significant and abundant molecule in our solar system. It spawned life on Earth and now lubricates many of its most essential processes, including weather. It's a critical factor in Jupiter's turbulent weather, too, and in determining whether the planet has a core made of rock and ice.

Jupiter is thought to be the first planet to have formed by siphoning the elements left over from the formation of the Sun as our star coalesced from an amorphous nebula into the fiery ball of gases we see today. A widely accepted theory until several decades ago was that Jupiter was identical in composition to the Sun: a ball of hydrogen with a hint of helium -- all gas, no core.

But evidence is mounting that Jupiter has a core, possibly 10 times Earth's mass. Spacecraft that previously visited the planet found chemical evidence that it formed a core of rock and water ice before it mixed with gases from the solar nebula to make its atmosphere. The way Jupiter's gravity tugs on Juno also supports this theory. There's even lightning and thunder on the planet, phenomena fueled by moisture.

"The moons that orbit Jupiter are mostly water ice, so the whole neighborhood has plenty of water," said Bjoraker. "Why wouldn't the planet -- which is this huge gravity well, where everything falls into it -- be water rich, too?"

The water question has stumped planetary scientists; virtually every time evidence of H2O materializes, something happens to put them off the scent. A favorite example among Jupiter experts is NASA's Galileo spacecraft, which dropped a probe into the atmosphere in 1995 that wound up in an unusually dry region. "It's like sending a probe to Earth, landing in the Mojave Desert, and concluding the Earth is dry," pointed out Bjoraker.

In their search for water, Bjoraker and his team used radiation data collected from the summit of Maunakea in Hawaii in 2017. They relied on the most sensitive infrared telescope on Earth at the W.M. Keck Observatory, and also on a new instrument that can detect a wider range of gases at the NASA Infrared Telescope Facility.

The idea was to analyze the light energy emitted through Jupiter's clouds in order to identify the altitudes of its cloud layers. This would help the scientists determine temperature and other conditions that influence the types of gases that can survive in those regions.

Planetary atmosphere experts expect that there are three cloud layers on Jupiter: a lower layer made of water ice and liquid water, a middle one made of ammonia and sulfur, and an upper layer made of ammonia.

To confirm this through ground-based observations, Bjoraker's team looked at wavelengths in the infrared range of light where most gases don't absorb heat, allowing chemical signatures to leak out. Specifically, they analyzed the absorption patterns of a form of methane gas. Because Jupiter is too warm for methane to freeze, its abundance should not change from one place to another on the planet.

"If you see that the strength of methane lines vary from inside to outside of the Great Red Spot, it's not because there's more methane here than there," said Bjoraker, "it's because there are thicker, deep clouds that are blocking the radiation in the Great Red Spot."

Bjoraker's team found evidence for the three cloud layers in the Great Red Spot, supporting earlier models. The deepest cloud layer is at 5 bars, the team concluded, right where the temperature reaches the freezing point for water, said Bjoraker, "so I say that we very likely found a water cloud." The location of the water cloud, plus the amount of carbon monoxide that the researchers identified on Jupiter, confirms that Jupiter is rich in oxygen and, thus, water.

Bjoraker's technique now needs to be tested on other parts of Jupiter to get a full picture of global water abundance, and his data squared with Juno's findings.

Read more at Science Daily

Solar eruptions may not have slinky-like shapes after all

An image from NASA's Solar Dynamics Observatory (SDO) satellite showing an example of a commonly assumed Slinky-like shaped coronal mass ejection (CME) -- in this case a long filament of solar material hovering in the sun's atmosphere, or corona. This CME traveled at 900 miles per second, connected with Earth's magnetic environment, and caused auroras to appear four days later, on Sept. 3, 2012.
As the saying goes, everything old is new again. While the common phrase often refers to fashion, design, or technology, scientists at the University of New Hampshire have found there is some truth to this mantra even when it comes to research. Revisiting some older data, the researchers discovered new information about the shape of coronal mass ejections (CMEs) -- large-scale eruptions of plasma and magnetic field from the sun -- that could one day help protect satellites in space as well as the electrical grid on Earth.

"Since the late 1970s, coronal mass ejections have been assumed to resemble a large Slinky -- one of those spring toys -- with both ends anchored at the sun, even when they reach Earth about one to three days after they erupt," said Noe Lugaz, research associate professor in the UNH Space Science Center. "But our research suggests their shapes are possibly different."

Knowing the shape and size of CMEs is important because it can help better forecast when and how they will impact Earth. While they are one of the main sources of beautiful and intense auroras, like the Northern and Southern Lights, they can also damage satellites, disrupt radio communications and wreak havoc on the electrical transmission system, causing massive and long-lasting power outages. Right now, only single-point measurements exist for CMEs, making it hard for scientists to judge their shapes. But these measurements have been helpful to space forecasters, allowing them a 30- to 60-minute warning before impact. The goal is to lengthen that notice to hours -- ideally 24 hours -- to make more informed decisions on whether to power down satellites or the grid.

In their study, published in Astrophysical Journal Letters, the researchers took a closer look at data from two NASA spacecraft, Wind and ACE, which typically orbit upstream of Earth. They analyzed data from 21 CMEs over a two-year period between 2000 and 2002, when Wind had separated from ACE. Wind had separated by only one percent of an astronomical unit (AU), the distance from the sun to the Earth (93,000,000 miles). So instead of sitting in front of Earth with ACE, Wind was offset to the side, perpendicular to the Sun-Earth line.
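The quoted separation is easy to check: one percent of an astronomical unit, using the article's rounded figure of 93,000,000 miles for 1 AU:

```python
AU_MILES = 93_000_000                 # Earth-Sun distance, as rounded in the article
separation_miles = 0.01 * AU_MILES    # Wind-ACE separation: 1% of 1 AU
print(f"{separation_miles:,.0f} miles")  # 930,000 miles
```

Small in solar-system terms, but enough of a sideways offset for the two spacecraft to sample genuinely different parts of each passing CME.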

"Because they are usually so close to one another, very few people compare the data from both Wind and ACE," said Lugaz. "But 15 years ago, they were apart and in the right place for us to go back and notice the difference in measurements, and the differences became larger with increasing separations, making us question the Slinky shape."

The data points toward a few other possibilities: either CMEs are not simple Slinky shapes (they might be deformed ones, or something else entirely), or they are Slinky-shaped but on a much smaller scale (roughly four times smaller) than previously thought.

While the researchers say more studies are needed, Lugaz says this information could be important for future space weather forecasting. With other missions being considered by NASA and NOAA, the researchers say this study shows that future spacecraft may first need to investigate how close to the Sun-Earth line they have to remain to make helpful and more advanced forecast predictions.

Read more at Science Daily

'Archived' heat has reached deep into the Arctic interior, researchers say

Heat currently trapped below the surface has the potential to melt the Arctic region's entire sea-ice pack if it reaches the surface, according to researchers.
Arctic sea ice isn't just threatened by the melting of ice around its edges, a new study has found: Warmer water that originated hundreds of miles away has penetrated deep into the interior of the Arctic.

That "archived" heat, currently trapped below the surface, has the potential to melt the region's entire sea-ice pack if it reaches the surface, researchers say.

The study appears online Aug. 29 in the journal Science Advances.

"We document a striking ocean warming in one of the main basins of the interior Arctic Ocean, the Canadian Basin," said lead author Mary-Louise Timmermans, a professor of geology and geophysics at Yale University.

The upper ocean in the Canadian Basin has seen a two-fold increase in heat content over the past 30 years, the researchers said. They traced the source to waters hundreds of miles to the south, where reduced sea ice has left the surface ocean more exposed to summer solar warming. In turn, Arctic winds are driving the warmer water north, but below the surface waters.

"This means the effects of sea-ice loss are not limited to the ice-free regions themselves, but also lead to increased heat accumulation in the interior of the Arctic Ocean that can have climate effects well beyond the summer season," Timmermans said. "Presently this heat is trapped below the surface layer. Should it be mixed up to the surface, there is enough heat to entirely melt the sea-ice pack that covers this region for most of the year."

The co-authors of the study are John Toole and Richard Krishfield of the Woods Hole Oceanographic Institution.

The National Science Foundation Division of Polar Programs provided support for the research.

From Science Daily

Aug 29, 2018

Mammal forerunner that reproduced like a reptile sheds light on brain evolution

Researchers from The University of Texas at Austin found a fossil of an extinct mammal relative with a clutch of 38 babies that were near miniatures of their mother.
Compared with the rest of the animal kingdom, mammals have the biggest brains and produce some of the smallest litters of offspring. A newly described fossil of an extinct mammal relative -- and her 38 babies -- is among the best evidence that a key development in the evolution of mammals was trading brood power for brain power.

The find is among the rarest of the rare because it contains the only known fossils of babies from any mammal precursor, said researchers from The University of Texas at Austin who discovered and studied the fossilized family. But the presence of so many babies -- more than twice the average litter size of any living mammal -- revealed that it reproduced in a manner akin to reptiles. Researchers think the babies were probably developing inside eggs or had just recently hatched when they died.

The study, published in the journal Nature on Aug. 29, describes specimens that researchers say may help reveal how mammals evolved a different approach to reproduction than their ancestors, which produced large numbers of offspring.

"These babies are from a really important point in the evolutionary tree," said Eva Hoffman, who led research on the fossil as a graduate student at the UT Jackson School of Geosciences. "They had a lot of features similar to modern mammals, features that are relevant in understanding mammalian evolution."

Hoffman co-authored the study with her graduate adviser, Jackson School Professor Timothy Rowe.

The mammal relative belonged to an extinct species of beagle-size plant-eaters called Kayentatherium wellesi that lived alongside dinosaurs about 185 million years ago. Like mammals, Kayentatherium probably had hair.

When Rowe collected the fossil more than 18 years ago from a rock formation in Arizona, he thought that he was bringing a single specimen back with him. He had no idea about the dozens of babies it contained.

Sebastian Egberts, a former graduate student and fossil preparator at the Jackson School, spotted the first sign of the babies years later when a grain-sized speck of tooth enamel caught his eye in 2009 as he was unpacking the fossil.

"It didn't look like a pointy fish tooth or a small tooth from a primitive reptile," said Egberts, who is now an instructor of anatomy at the Philadelphia College of Osteopathic Medicine. "It looked more like a molariform tooth (molar-like tooth) -- and that got me very excited."

A CT scan of the fossil revealed a handful of bones inside the rock. However, it took advances in CT-imaging technology during the next 18 years, the expertise of technicians at UT Austin's High-Resolution X-ray Computed Tomography Facility, and extensive digital processing by Hoffman to reveal the rest of the babies -- not only jaws and teeth, but complete skulls and partial skeletons.

The 3D visualizations Hoffman produced allowed her to conduct an in-depth analysis of the fossil that verified that the tiny bones belonged to babies and were the same species as the adult. Her analysis also revealed that the skulls of the babies were like scaled-down replicas of the adult, with skulls a tenth the size but otherwise proportional. This finding is in contrast to mammals, which have babies that are born with shortened faces and bulbous heads to account for big brains.

The brain is an energy-intensive organ, and pregnancy -- not to mention childrearing -- is an energy-intensive process. The discovery that Kayentatherium had a tiny brain and many babies, despite otherwise having much in common with mammals, suggests that a critical step in the evolution of mammals was trading big litters for big brains, and that this step happened later in mammalian evolution.

"Just a few million years later, in mammals, they unquestionably had big brains, and they unquestionably had a small litter size," Rowe said.

The mammalian approach to reproduction directly relates to human development -- including the development of our own brains. By looking back at our early mammalian ancestors, humans can learn more about the evolutionary process that helped shape who we are as a species, Rowe said.

"There are additional deep stories on the evolution of development, and the evolution of mammalian intelligence and behavior and physiology that can be squeezed out of a remarkable fossil like this now that we have the technology to study it," he said.

Read more at Science Daily

Getting to the roots of our ancient cousins' diet

Paranthropus robustus fossil from South Africa SK 46 (discovered 1936, estimated age 1.9-1.5 million years) and the virtually reconstructed first upper molar used in the analyses.
Since the discovery of the fossil remains of Australopithecus africanus from Taung nearly a century ago, and subsequent discoveries of Paranthropus robustus, there have been disagreements about the diets of these two South African hominin species. By analyzing the splay and orientation of fossil hominin tooth roots, researchers of the MPI for Evolutionary Anthropology, the University of Chile and the University of Oxford now suggest that Paranthropus robustus had a unique way of chewing food not seen in other hominins.

Food needs to be broken down in the mouth before it can be swallowed and digested further. How this is done depends on many factors, such as the mechanical properties of the foods and the morphology of the masticatory apparatus. Palaeoanthropologists spend a great deal of their time reconstructing the diets of our ancestors, as diet holds the key to understanding our evolutionary history. For example, a high-quality diet (and meat-eating) likely facilitated the evolution of our large brains, whilst the lack of a nutrient-rich diet probably underlies the extinction of some other species (e.g., P. boisei). The diet of South African hominins, however, has remained particularly controversial.

Using non-invasive high-resolution computed tomography technology and shape analysis the authors deduced the main direction of loading during mastication (chewing) from the way the tooth roots are oriented within the jaw. By comparing the virtual reconstructions of almost 30 hominin first molars from South and East Africa they found that Australopithecus africanus had much wider splayed roots than both Paranthropus robustus and the East African Paranthropus boisei. "This is indicative of increased laterally-directed chewing loads in Australopithecus africanus, while the two Paranthropus species experienced rather vertical loads," says Kornelius Kupczik of the Max Planck Institute for Evolutionary Anthropology.

Paranthropus robustus, unlike any of the other species analysed in this study, exhibits an unusual orientation, i.e. "twist," of the tooth roots, which suggests a slight rotational and back-and-forth movement of the mandible during chewing. Other morphological traits of the P. robustus skull support this interpretation. For example, the structure of the enamel also points towards a complex, multidirectional loading, whilst their unusual microwear pattern can conceivably also be reconciled with a different jaw movement rather than by mastication of novel food sources. Evidently, it is not only what hominins ate and how hard they bit that determines its skull morphology, but also the way in which the jaws are being brought together during chewing.

The new study demonstrates that the orientation of tooth roots within the jaw has much to offer for an understanding of the dietary ecology of our ancestors and extinct cousins. "Perhaps palaeoanthropologists have not always been asking the right questions of the fossil record: rather than focusing on what our extinct cousins ate, we should equally pay attention to how they masticated their foods," concludes Gabriele Macho of the University of Oxford.

Read more at Science Daily

Researchers 3D print prototype for 'bionic eye'

Researchers at the University of Minnesota have fully 3D printed an image sensing array on a hemisphere, which is a first-of-its-kind prototype for a "bionic eye."
A team of researchers at the University of Minnesota has, for the first time, fully 3D printed an array of light receptors on a hemispherical surface. This discovery marks a significant step toward creating a "bionic eye" that could someday help blind people see or sighted people see better.

The research is published today in Advanced Materials, a peer-reviewed scientific journal covering materials science. One of the authors also holds the patent for 3D-printed semiconducting devices.

"Bionic eyes are usually thought of as science fiction, but now we are closer than ever using a multimaterial 3D printer," said Michael McAlpine, a co-author of the study and University of Minnesota Benjamin Mayhugh Associate Professor of Mechanical Engineering.

Researchers started with a hemispherical glass dome to show how they could overcome the challenge of printing electronics on a curved surface. Using their custom-built 3D printer, they started with a base ink of silver particles. The dispensed ink stayed in place and dried uniformly instead of running down the curved surface. The researchers then used semiconducting polymer materials to print photodiodes, which convert light into electricity. The entire process takes about an hour.

McAlpine said the most surprising part of the process was the 25 percent efficiency in converting the light into electricity they achieved with the fully 3D-printed semiconductors.

"We have a long way to go to routinely print active electronics reliably, but our 3D-printed semiconductors are now starting to show that they could potentially rival the efficiency of semiconducting devices fabricated in microfabrication facilities," McAlpine said. "Plus, we can easily print a semiconducting device on a curved surface, and they can't."

McAlpine and his team are known for integrating 3D printing, electronics, and biology on a single platform. They received international attention a few years ago for printing a "bionic ear." Since then, they have 3D printed life-like artificial organs for surgical practice, electronic fabric that could serve as "bionic skin," electronics directly on a moving hand, and cells and scaffolds that could help people living with spinal cord injuries regain some function.

McAlpine's drive to create a bionic eye is a little more personal.

"My mother is blind in one eye, and whenever I talk about my work, she says, 'When are you going to print me a bionic eye?'" McAlpine said.

McAlpine says the next steps are to create a prototype with more light receptors that are even more efficient. They'd also like to find a way to print on a soft hemispherical material that can be implanted into a real eye.

McAlpine's research team includes University of Minnesota mechanical engineering graduate student Ruitao Su, postdoctoral researchers Sung Hyun Park, Shuang-Zhuang Guo, Kaiyan Qiu, Daeha Joung, Fanben Meng, and undergraduate student Jaewoo Jeong.

Read more at Science Daily

Three previously unknown ancient primates identified

Kirk's father and Austin-based artist Randy Kirk produced his own rendering of what the species might have looked like.
Biological anthropologists from The University of Texas at Austin have described three new species of fossil primates that were previously unknown to science. All of the new primates were residents of San Diego County at a time when southern California was filled with lush tropical forests.

Since the 1930s, numerous primate fossils have been uncovered in the sandstones and claystones that make up the Friars Formation in San Diego County. Paleontologist Stephen Walsh and fieldworkers from the San Diego Museum of Natural History (SDNHM) built up a large collection of fossil primates from the San Diego area, but Walsh was unable to describe these specimens before his death in 2007.

A decade later, UT Austin graduate student Amy Atwater and anthropology professor Chris Kirk took up the challenge, describing and naming three previously unknown omomyoid primates that lived 42 million to 46 million years ago. The researchers named these new species Ekwiiyemakius walshi, Gunnelltarsius randalli and Brontomomys cerutti.

These findings double the number of known primate genera represented in the Friars Formation and increase the total number of known omomyine primates of that period from 15 to 18.

Atwater and Kirk's descriptions were published in the Journal of Human Evolution.

"The addition of these primates provides for a better understanding of primate richness in the middle Eocene," said Atwater, who is now the paleontology collection manager at the Museum of the Rockies in Bozeman, Montana. "Previous research in the Rocky Mountain basins suggested the primate richness declined during this time period, but we argue that primate richness increased concurrently in other locations."

Studying the teeth, researchers concluded that the three new genera, which represent the bulk of the undescribed Friars Formation omomyoid sample at SDNHM, ranged in size from 113 to 796 grams and are most likely related to a group of extinct species that make up the primate subfamily Omomyinae.

"Teeth can tell us a lot about evolutionary history and give us a good handle on the size and diet of an extinct primate," Kirk said. "Enamel is the hardest tissue in the body. And as a result, teeth are more likely to be preserved in the fossil record."

Ekwiiyemakius walshi, the smallest of the three new species, was estimated to weigh between 113 and 125 grams -- comparable in size to some modern bushbabies. It was named for Walsh, who collected and prepared many of the specimens, and also derives from the Native American Kumeyaay tribe's place name, Ekwiiyemak -- meaning "behind the clouds" -- for the location of the headwaters of the San Diego and Sweetwater Rivers.

Gunnelltarsius randalli was named for Gregg Gunnell, the researchers' late colleague and expert on Eocene mammals, and for SDNHM fossil collections manager Kesler Randall. It was estimated to weigh between 275 and 303 grams, about the size of today's fat-tailed dwarf lemur.

Read more at Science Daily

Aug 28, 2018

Many Arctic pollutants decrease after market removal and regulation

Polar bears
Levels of some persistent organic pollutants (POPs) regulated by the Stockholm Convention are decreasing in the Arctic, according to an international team of researchers who have been actively monitoring the northern regions of the globe.

POPs are a diverse group of long-lived chemicals that can travel long distances from their source of manufacture or use. Many POPs were used extensively in industry, consumer products or as pesticides in agriculture. Well-known POPs include chemicals such as DDT and PCBs (polychlorinated biphenyls), and some of the products they were used in included flame retardants and fabric coatings.

Because POPs were found to cause health problems for people and wildlife, they were largely banned or phased out of production in many countries. Many have been linked to reproductive, developmental, neurological and immunological problems in mammals. The accumulation of DDT, a well-known and heavily used POP, was also linked to eggshell-thinning in fish-eating birds, such as eagles and pelicans, in the late 20th century, and caused catastrophic population declines for those animals.

In 2001, 152 countries signed a United Nations treaty in Stockholm, Sweden, intended to eliminate, restrict or minimize the unintentional production of 12 of the most widely used POPs. Later amendments added more chemicals to the initial list. Today, more than 33 POP chemicals or groups are covered by what is commonly called the "Stockholm Convention," which has been recognized by 182 countries.

"This paper shows that following the treaty and earlier phase-outs have largely resulted in a decline of these contaminants in the Arctic," says John Kucklick, a biologist from the National Institute of Standards and Technology (NIST) and the senior U.S. author on the paper, published August 23 in Science of the Total Environment. "When POP use was curtailed, the change was reflected by declining concentrations in the environment."

"In general, the contaminants that are being regulated are decreasing," says Frank Rigét from the Department of Bioscience, Aarhus University, Denmark, and lead author.

POPs are particularly problematic in the Arctic because the ecosystem there is especially fragile, and pollution can come from both local sources and from thousands of miles away due to air and water currents. POPs also bioaccumulate. This means that they build up faster in animals and humans than they can be excreted, and that exposure can increase up the food chain. Plankton exposed to POPs in water are eaten by schools of fish, which are in turn eaten by seals or whales, and with each jump up the food chain the amount of POPs increases. The same is true for terrestrial animals. A large mammal's exposure, therefore, can be large and long-lasting.

Indigenous people living in northern coastal areas such as Alaska often consume more fish and other animals that come from higher on the food chain than the average American. Such communities, therefore, are potentially exposed to larger amounts of these pollutants.

For almost two decades beginning in 2000, Kucklick and Rigét worked in conjunction with scientists from Denmark, Sweden, Canada, Iceland and Norway to track POPs in the fat of several marine mammals and in the tissue of shellfish and seabirds. They also monitored air in the Arctic circle for pollution.

To gain a fuller picture of how the deposition of POPs might have changed over time, the study included specimens archived since the 1980s and '90s in special storage facilities around the globe. The U.S. specimens were provided by the NIST Biorepository, located in Charleston, South Carolina. Samples archived in that facility are part of the Alaska Marine Mammal Tissue Archival Project (AMMTAP) or the Seabird Tissue Archival and Monitoring Project (STAMP). Both collections are conducted in collaboration with other federal agencies.

The study pooled more than 1,000 samples taken over the course of several decades from many different locations throughout the Arctic Circle. In general, the so-called legacy POPs -- those that have been eliminated or restricted from production -- were shown to be decreasing over the past two to three decades, although some had decreased more than others.

The biggest decreases were in α-HCH, a byproduct of the pesticide lindane, with a mean annual decline of 9 percent in Arctic wildlife.

The research team found PCBs had decreased as well. Most industrial countries banned PCBs in the 1970s and '80s, and their production was reduced under the Stockholm Convention in 2004. Previously, the compounds had been widely used in electrical systems. The study found that their presence had decreased by almost 4 percent per year across the Arctic region since being pulled from the market.

Two of the legacy POPs listed under Stockholm, β-HCH and HCB, showed only small declines of less than 3 percent per year. β-HCH was part of a heavily used pesticide mixture with the active ingredient lindane, and HCB was used in both agriculture and industry.

A small number of the legacy POPs had increased in a few locations, although some of those were at sites suspected to be influenced by strong, still-existing local pollution sources.

Notably, the flame retardant hexabromocyclododecane (HBCDD) showed an annual increase of 7.6 percent. HBCDD was one of 16 additional POPs added to the Stockholm Convention as of 2017 and is recommended for elimination from use, with certain exemptions.
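These annual rates compound over time. As a rough back-of-the-envelope sketch (assuming a constant exponential trend, which the study's multi-decade data only approximate), the fraction of a contaminant remaining after a given number of years is (1 + r)^years:

```python
# Rough sketch: compound the reported mean annual rates over two decades.
# A constant exponential trend is an assumption for illustration; real
# time series vary by species, tissue, and site.

def compounded_fraction(annual_rate: float, years: int) -> float:
    """Fraction of the starting concentration after `years` at
    `annual_rate` (negative for a decline, positive for an increase)."""
    return (1 + annual_rate) ** years


rates = {
    "a-HCH (-9%/yr)": -0.09,
    "PCBs (-4%/yr)": -0.04,
    "HBCDD (+7.6%/yr)": 0.076,
}

for name, rate in rates.items():
    frac = compounded_fraction(rate, 20)
    print(f"{name}: {frac:.0%} of baseline after 20 years")
```

For example, a 9 percent annual decline leaves only about 15 percent of the original concentration after 20 years, while a 7.6 percent annual increase more than quadruples it over the same span.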

Most of the research conducted for this paper was a direct result of the 2001 treaty stipulations, which included a requirement that sponsors participate in ongoing, long-term biological monitoring. Although the U.S. participated in the research, it has not ratified the treaty. It is expected that work on the treaty will continue as new POPs are identified.

Read more at Science Daily

The science behind blowing bubbles

Soap bubbles
What exactly happens when you blow on a soap film to make a bubble? Behind this simple question about a favorite childhood activity is some real science, researchers at New York University have found.

In a series of experiments replicating bubble blowing, NYU's Applied Math Lab has discovered two ways in which bubbles can be made: one, by pushing with a steady but strong wind on a soap film through a circular wand, which causes it to grow into a bubble, and two, by pushing with a gentle wind on an already-inflated film in order to drive its further growth.

"This second method might explain how we often blow bubbles as kids: a quick puff bends the film outward and thereafter the film keeps growing even as the flow of air slows," says Leif Ristroph, an assistant professor at NYU's Courant Institute of Mathematical Sciences who led the study.

The first method is more intuitive, but less common.

"This is used by the bubble blowers we see in parks in the summertime," explains Ristroph. "They simply walk, sufficiently fast, it seems, with a soapy loop of rope, which provides the relative wind needed to stretch out the film."

The results, reported in the journal Physical Review Letters, point to potential applications in consumer products that contain bubbles or droplets, such as sprays, foams, and emulsions, which are combinations of unmixable liquids.

The paper's other researchers were Likhit Ganedi, an NYU undergraduate at the time of the work and now a graduate student at Carnegie Mellon University, Anand Oza, a postdoctoral fellow at the time of the research and now a professor at the New Jersey Institute of Technology, and Michael Shelley, a professor at the Courant Institute.

As a physics problem, blowing bubbles is a question of how a liquid film -- typically soapy water -- interacts with an imposed flow of an external fluid, which is air in the case of bubble blowing. This dynamic is crucial in understanding how to enhance industrial production of many chemical products.

To break down the science that explains this process -- i.e., what events precede the formation of bubbles -- the researchers created an experiment, replicating the blowing of bubbles, using oil films suspended in flowing water and pushed through a wire loop wand.

"Working with water instead of air has many advantages in terms of controlling, measuring, and seeing flows," Ristroph explains. "This is the trick that made these experiments possible."

Their experimental observations, combined with predictions drawn from mathematical models, allowed the researchers to understand the forces that produced the resulting film shapes.

Their findings give a precise recipe or set of instructions for how to blow bubbles -- and with it, related production processes.

Read more at Science Daily

Secret tunnels discovered between the skull and the brain

Newly discovered channels in the skull may provide a shortcut for immune cells going to damaged tissue.
Bone marrow, the spongy tissue inside most of our bones, produces red blood cells as well as immune cells that help fight off infections and heal injuries. According to a new study of mice and humans, tiny tunnels run from skull bone marrow to the lining of the brain and may provide a direct route for immune cells responding to injuries caused by stroke and other brain disorders. The study was funded in part by the National Institutes of Health and published in Nature Neuroscience.

"We always thought that immune cells from our arms and legs traveled via blood to damaged brain tissue. These findings suggest that immune cells may instead be taking a shortcut to rapidly arrive at areas of inflammation," said Francesca Bosetti, Ph.D., program director at the NIH's National Institute of Neurological Disorders and Stroke (NINDS), which provided funding for the study. "Inflammation plays a critical role in many brain disorders and it is possible that the newly described channels may be important in a number of conditions. The discovery of these channels opens up many new avenues of research."

Using state-of-the-art tools and cell-specific dyes in mice, Matthias Nahrendorf, M.D., Ph.D., professor at Harvard Medical School and Massachusetts General Hospital in Boston, and his colleagues were able to distinguish whether immune cells traveling to brain tissue damaged by stroke or meningitis came from bone marrow in the skull or the tibia, a large leg bone. In this study, the researchers focused on neutrophils, a particular type of immune cell, which are among the first to arrive at an injury site.

Results in mouse brains showed that during stroke, the skull is more likely to supply neutrophils to the injured tissue than the tibia. In contrast, following a heart attack, the skull and tibia provided similar numbers of neutrophils to the heart, which is far from both of those areas.

Dr. Nahrendorf's group also observed that six hours after stroke, there were fewer neutrophils in the skull bone marrow than in the tibia bone marrow, suggesting that the skull marrow released many more cells to the injury site. These findings indicate that bone marrow throughout the body does not uniformly contribute immune cells to help injured or infected tissue, and suggest that the injured brain and skull bone marrow may "communicate" in some way that results in a direct response from adjacent leukocytes.

Dr. Nahrendorf's team found that differences in bone marrow activity during inflammation may be determined by stromal cell-derived factor-1 (SDF-1), a molecule that keeps immune cells in the bone marrow. When levels of SDF-1 decrease, neutrophils are released from marrow. The researchers observed levels of SDF-1 decreasing six hours after stroke, but only in the skull marrow, and not in the tibia. The results suggest that the decrease in levels of SDF-1 may be a response to local tissue damage and alert and mobilize only the bone marrow that is closest to the site of inflammation.

Next, Dr. Nahrendorf and his colleagues wanted to see how the neutrophils were arriving at the injured tissue.

"We started examining the skull very carefully, looking at it from all angles, trying to figure out how neutrophils are getting to the brain," said Dr. Nahrendorf. "Unexpectedly, we discovered tiny channels that connected the marrow directly with the outer lining of the brain."

With the help of advanced imaging techniques, the researchers watched neutrophils moving through the channels. Blood normally flowed through the channels from the skull's interior to the bone marrow, but after a stroke, neutrophils were seen moving in the opposite direction to get to damaged tissue.

Dr. Nahrendorf's team detected the channels throughout the skull as well as in the tibia, which led them to search for similar features in the human skull. Detailed imaging of human skull samples obtained from surgery uncovered the presence of the channels. The channels in the human skull were five times larger in diameter than those found in mice. In human and mouse skulls, the channels were found in both the inner and outer layers of bone.

Read more at Science Daily

Scientists identify a new kind of human brain cell

This is a digital reconstruction of a rosehip neuron in the human brain.
One of the most intriguing questions about the human brain is also one of the most difficult for neuroscientists to answer: What sets our brains apart from those of other animals?

"We really don't understand what makes the human brain special," said Ed Lein, Ph.D., Investigator at the Allen Institute for Brain Science. "Studying the differences at the level of cells and circuits is a good place to start, and now we have new tools to do just that."

In a new study published today in the journal Nature Neuroscience, Lein and his colleagues reveal one possible answer to that difficult question. The research team, co-led by Lein and Gábor Tamás, Ph.D., a neuroscientist at the University of Szeged in Szeged, Hungary, has uncovered a new type of human brain cell that has never been seen in mice and other well-studied laboratory animals.

Tamás and University of Szeged doctoral student Eszter Boldog dubbed these new cells "rosehip neurons" -- to them, the dense bundle each brain cell's axon forms around the cell's center looks just like a rose after it has shed its petals. The newly discovered cells belong to a class of neurons known as inhibitory neurons, which put the brakes on the activity of other neurons in the brain.

The study hasn't proven that this special brain cell is unique to humans. But the fact that the special neuron doesn't exist in rodents is intriguing, adding these cells to a very short list of specialized neurons that may exist only in humans or only in primate brains.

The researchers don't yet understand what these cells might be doing in the human brain, but their absence in the mouse points to how difficult it is to model human brain diseases in laboratory animals, Tamás said. One of his laboratory team's immediate next steps is to look for rosehip neurons in postmortem brain samples from people with neuropsychiatric disorders to see if these specialized cells might be altered in human disease.

When different techniques converge


In their study, the researchers used tissue samples from the postmortem brains of two men in their 50s who had donated their bodies to research. They took sections of the top layer of the cortex, the outermost region of the brain that is responsible for human consciousness and many other functions that we think of as unique to our species. It's much larger, compared to our body size, than in other animals.

"It's the most complex part of the brain, and generally accepted to be the most complex structure in nature," Lein said.

Tamás' research lab in Hungary studies the human brain using a classical approach to neuroscience, conducting detailed examinations of cells' shapes and electrical properties. At the Allen Institute, Lein leads a team working to uncover the suite of genes that make human brain cells unique from each other and from the brain cells of mice.

Several years ago, Tamás visited the Allen Institute to present his latest research on specialized human brain cell types, and the two research groups quickly saw that they'd hit on the same cell using very different techniques.

"We realized that we were converging on the same cell type from absolutely different points of view," Tamás said. So they decided to collaborate.

The Allen Institute group, in collaboration with researchers from the J. Craig Venter Institute, found that the rosehip cells turn on a unique set of genes, a genetic signature not seen in any of the mouse brain cell types they've studied. The University of Szeged researchers found that the rosehip neurons form synapses with another type of neuron in a different part of the human cortex, known as pyramidal neurons.

This is one of the first studies of the human cortex to combine these different techniques to study cell types, said Rebecca Hodge, Ph.D., Senior Scientist at the Allen Institute for Brain Science and an author on the study.

"Alone, these techniques are all powerful, but they give you an incomplete picture of what the cell might be doing," Hodge said. "Together, they tell you complementary things about a cell that can potentially tell you how it functions in the brain."

How do you study humanity?

What appears to be unique about rosehip neurons is that they only attach to one specific part of their cellular partner, indicating that they might be controlling information flow in a very specialized way.

If you think of all inhibitory neurons like brakes on a car, the rosehip neurons would let your car stop in very particular spots on your drive, Tamás said. They'd be like brakes that only work at the grocery store, for example, and not all cars (or animal brains) have them.

"This particular cell type -- or car type -- can stop at places other cell types cannot stop," Tamás said. "The car or cell types participating in the traffic of a rodent brain cannot stop in these places."

The researchers' next step is to look for rosehip neurons in other parts of the brain, and to explore their potential role in brain disorders. Although scientists don't yet know whether rosehip neurons are truly unique to humans, the fact that they don't appear to exist in rodents is another strike against the laboratory mouse as a perfect model of human disease -- especially for neurological diseases, the researchers said.

"Our brains are not just enlarged mouse brains," said Trygve Bakken, M.D., Ph.D., Senior Scientist at the Allen Institute for Brain Science and an author on the study. "People have commented on this for many years, but this study gets at the issue from several angles."

Read more at Science Daily

Aug 27, 2018

How do fruit flies grow legs? Solving a molecular mystery

What do cancer and the growing legs of a fruit fly have in common? They can both be influenced by a single molecule, a protein that tends to call the shots inside of embryos as they develop into living, breathing animals. Present in virtually every creature on the planet, this protein goes by the name Epidermal Growth Factor Receptor protein, or EGFR.

Now a team of neuroscientists at Columbia University has figured out how to tease apart the many roles EGFR plays in the body -- challenging conventional wisdom in the process. They report their findings in PLOS Genetics.

"The results of our research first and foremost solve a long-standing question about the nature of EGFR signaling and how it drives development," said Richard Mann, PhD, a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper's senior author. "But even more importantly, researchers can now use knowledge gleaned from our work to shed light on the link between EGFR and disease at a level of detail that until now would not have been possible."

EGFR signaling is woven into the fabric of development. It guides the formation of many body parts and has been an intense area of research focus for decades. Indeed, EGFR is so critical to development that disruptions to its normal activity likely play a role in everything from developmental disorders to Alzheimer's disease to cancer.

Scientists long believed that the molecules that bind to EGFR and activate it, known as EGFR ligands, act as morphogens. Morphogens are a type of signaling protein that spreads and triggers different responses in developing tissues depending on its concentration -- high levels result in one outcome, while low morphogen levels lead to another response in the underlying cells.

"Morphogens coordinate the development of entire tissues from a single point source," said Dr. Mann, who is also the Higgins Professor of Biochemistry and Molecular Biophysics at Columbia University Irving Medical Center. "Cells closest to a morphogen's source get the largest, most intense concentration of signals, while cells farther away would get progressively less, like ripples in a pool."

But some scientists have argued that EGFR signaling worked differently. Rather than sending out a single signal blast from a specific location, they said, multiple sources of EGFR ligands at different locations might send their own signals at different points of time -- and this combination of signals together could guide development.

Distinguishing between these competing hypotheses is critical, the researchers point out. Without knowing how EGFR activation guides development, it is difficult to decipher what happens when the normal developmental process gets disrupted.

In addition, actually finding a way to test these two hypotheses has proven difficult. Scientists could not simply switch off EGFR signaling entirely and see what happens, as they often do to figure out how a protein affects development. Because EGFR signaling is so ubiquitous, too many other systems would be affected to pinpoint exactly how it works.

To get around this problem, the Columbia team tried a new approach. They focused on the EGFR ligands' many enhancers: small stretches of DNA that govern the ligands' activity and trigger EGFR activity in a precise manner in different parts of the body.

"If you think of EGFR signaling as a soundboard, like what you'd find in a recording studio, enhancers are similar to the soundboard's knobs and dials," said Dr. Mann. "Just like turning those dials up or down changes the output of the soundboard, turning individual enhancers up or down changes the ligand expression and, therefore, the resulting activity of signals."

In this paper, the researchers first located the specific enhancers of the EGFR ligands that guided the fly's leg development. They then switched off only those enhancers.

"In this way, we've managed to eliminate one small aspect of EGFR activity, leaving the rest of the signaling largely intact," said Dr. Mann.

When doing so, they saw that the growth of the fly leg was not guided by a single source, the way a morphogen would act. Instead, the researchers found that EGFR sent signals from multiple different sources, located at different parts of the developing leg -- knowledge made possible by the scientists' focus on enhancers.

Because EGFR exists throughout the animal kingdom, these findings in the fly can be applied to studies of EGFR disruptions related to disease, such as developmental disorders and cancer.

Read more at Science Daily