Jun 1, 2019

Birds perceive 'warm' colors differently from 'cool' ones

Zebra finch.
Birds may not have a word for maroon. Or burnt sienna. But show a zebra finch a sunset-colored object, and she'll quickly decide whether it looks more like "red" or "orange."

A Duke University study shows that birds mentally sort the range of hues on the blue-green side of the spectrum into two categories too, but the line between them is fuzzier.

It may be that "either/or" thinking is less useful in this part of the color spectrum, the researchers say. Deciding whether, say, a reddish-beaked male is good mate material, or which fruits are ripe is vital for survival, whereas the differences between shades of green grass or blue sky may be less so.

The findings come from a study of something called categorical perception, a mental hack in which the brain subdivides the smooth and continuous range of wavelengths in the visible spectrum into distinct groups of basic colors, such as red, orange, yellow, green, blue and purple.

Categorical perception of color was long thought to be unique to humans, perhaps because human language labels the millions of colors our eyes can distinguish with just a few words. But a previous study led by Duke biology professor Stephen Nowicki showed that birds categorize colors too.

In that study, the researchers tested the birds' ability to tell whether two shades of red or orange are the same or different.

Male zebra finches have beaks that range from light orange to dark red. Females prefer red-beaked males over orange ones, presumably because red is a sign of better health.

Using different pairwise combinations of eight hues representing the range of male beak colors, the researchers showed female zebra finches a set of colored paper discs, some two-toned and some solid colored.

The birds learned that each time they flipped over a two-toned disc with their beak, they found seeds hidden underneath. If they flipped over a solid colored disc, they got nothing. Picking a particular disc before the others was a sign that a bird perceived it as having two colors rather than one.

In the new study, Nowicki, Duke Ph.D. student Matthew Zipple and colleagues tried the same experiment with a different set of colors ranging from green to blue, watching 17 birds as they puzzled through the colored discs in search of treats.

In both studies, the birds were better at distinguishing some color pairs than others, even when those pairs were equally far apart on the color spectrum. By analyzing where these shifts in discernment occurred along the spectrum, the researchers were able to show a threshold effect at work -- a clear perceptual boundary where red turns to orange, or blue turns to green.

But for blues and greens, the boundary appears less distinct. The birds were better at recognizing subtle differences within each color category, but less prone to treat colors from opposite sides of the boundary as "either/or" compared to the red-orange range.
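To see how such a boundary can be inferred from behavior, here is a minimal, hypothetical sketch (with made-up numbers rather than the Duke team's data): when pairs of hues are equally spaced along the spectrum, a spike in discrimination accuracy for one particular pair marks the category boundary.

```python
# Hypothetical sketch: locating a perceptual category boundary from
# pairwise discrimination accuracy (illustrative data, not the study's).
import numpy as np

hues = np.arange(1, 9)  # 8 hues spanning one end of the range to the other

# Fraction of trials on which birds correctly treated adjacent hue pairs
# (1-2, 2-3, ..., 7-8) as "different"; invented numbers for illustration.
accuracy = np.array([0.55, 0.58, 0.62, 0.93, 0.61, 0.57, 0.54])

# Under categorical perception, accuracy should spike for the pair that
# straddles the boundary, even though all pairs are equally far apart.
best = np.argmax(accuracy)
print(f"Inferred boundary lies between hue {hues[best]} and hue {hues[best + 1]} "
      f"(discrimination accuracy {accuracy[best]:.0%})")
```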

The results parallel research on how human brains make sense of color. Previous studies show that people are better at labeling warm colors -- reds and oranges -- than cool colors. "In many languages there are fewer terms in the blue-green range," Nowicki said.

But the finding that birds perceive these parts of the spectrum differently lends support to the idea that color categories are more than arbitrary subdivisions of the spectrum shaped by the words we use to describe them, as some researchers have proposed.

"It's not just driven by language. It's the inherent way that visual processing works," Nowicki said.

One possible explanation for their results is that perhaps this kind of "either/or" thinking is more helpful for some color ranges than others, the researchers say. To a zebra finch, red is a color of attraction. It's in a female's best interest to classify a potential mate's bill as either hot or not, rather than something in the middle.

"It could be that the red-orange color range matters more to them than the blue-green range," Zipple said.

Another possibility is that, for animals that live on land rather than in the water, the visible light reflecting off the objects in their environment -- tree bark, dead leaves, flowers, fruits, fur, feathers and scales -- simply contains more variation in the orange-red part of the spectrum.

"It could be that vertebrates are generally better at categorizing colors in the red-orange range more commonly found on land because that's where they've evolved," Zipple said.

Whatever the reason, the researchers say, the world is awash in color. Categorical perception may be a cognitive shortcut that helps animals take in this barrage of information and focus on what's important.

"Categorization allows us to take the cloud of stimuli that are always around us, and not have to focus on every one of them individually," Zipple said. "Instead we're just able to bin them into different kinds."

Read more at Science Daily

Physicists create stable, strongly magnetized plasma jet in laboratory

When you peer into the night sky, much of what you see is plasma, a soupy amalgam of ultra-hot atomic particles. Studying plasma in the stars and various forms in outer space requires a telescope, but scientists can recreate it in the laboratory to examine it more closely.

Now, a team of scientists led by physicists Lan Gao of the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Edison Liang of Rice University, has for the first time created a particular form of coherent and magnetized plasma jet that could deepen the understanding of the workings of much larger jets that stream from newborn stars and possibly black holes -- stellar objects so massive that they trap light and warp both space and time.

"We are now creating stable, supersonic, and strongly magnetized plasma jets in a laboratory that might allow us to study astrophysical objects light years away," said astrophysicist Liang, co-author of the paper reporting the results in the Astrophysical Journal Letters.

The team created the jets using the OMEGA Laser Facility at the University of Rochester's Laboratory for Laser Energetics (LLE). The researchers aimed 20 of OMEGA's individual laser beams into a ring-shaped area on a plastic target. Each laser created a tiny puff of plasma; as the puffs expanded, they put pressure on the inner region of the ring. That pressure then squeezed out a plasma jet reaching over four millimeters in length and created a magnetic field that had a strength of over 100 tesla.

"This is the first step in studying plasma jets in a laboratory," said Gao, who was the primary author of the paper. "I'm excited because we not only created a jet. We also successfully used advanced diagnostics on OMEGA to confirm the jet's formation and characterize its properties."

The diagnostic tools, developed with teams from LLE and the Massachusetts Institute of Technology (MIT), measured the jet's density, temperature, length, how well it stayed together as it grew through space, and the shape of the magnetic field around it. The measurements help scientists determine how the laboratory phenomena compare to jets in outer space. They also provide a baseline that scientists can tinker with to observe how the plasma behaves under different conditions.

"This is groundbreaking research because no other team has successfully launched a supersonic, narrowly beamed jet that carries such a strong magnetic field, extending to significant distances," said Liang. "This is the first time that scientists have demonstrated that the magnetic field does not just wrap around the jet, but also extends parallel to the jet's axis," he said.

The researchers hope to expand their research with larger laser facilities and investigate other types of phenomena. "The next step involves seeing whether an external magnetic field could make the jet longer and more collimated," Gao said.

"We would also like to replicate the experiment using the National Ignition Facility at Lawrence Livermore National Laboratory, which has 192 laser beams, half of which could be used to create our plasma ring. It would have a larger radius and thus produce a longer jet than that produced using OMEGA. This process would help us figure out under which conditions the plasma jet is strongest."

Read more at Science Daily

May 31, 2019

Early humans used northern migration routes to reach eastern Asia

Traditional Mongolian gers.
Northern and Central Asia have been neglected in studies of early human migration, with deserts and mountains being considered uncompromising barriers. However, a new study by an international team argues that humans may have moved through these extreme settings in the past under wetter conditions. We must now reconsider where we look for the earliest traces of our species in northern Asia, as well as the zones of potential interaction with other hominins such as Neanderthals and Denisovans.

Archaeologists and palaeoanthropologists are increasingly interested in discovering the environments facing the earliest members of our species, Homo sapiens, as it moved into new parts of Eurasia in the Late Pleistocene (125,000-12,000 years ago). Much attention has focused on a 'southern' route around the Indian Ocean, with Northern and Central Asia being somewhat neglected. However, in a paper published in PLOS ONE, scientists of the Max Planck Institute for the Science of Human History in Jena, Germany, and colleagues at the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, China, argue that climate change may have made this a particularly dynamic region of hominin dispersal, interaction, and adaptation, and a crucial corridor for movement.

'Heading North' Out of Africa and into Asia

"Archaeological discussions of the migration routes of Pleistocene Homo sapiens have often focused on a 'coastal' route from Africa to Australia, skirting around India and Southeast Asia," says Professor Michael Petraglia of the Max Planck Institute for the Science of Human History, a co-author of the new study. "In the context of northern Asia, a route into Siberia has been preferred, avoiding deserts such as the Gobi." Yet over the past ten years, a variety of evidence has emerged that has suggested that areas considered inhospitable today might not have always been so in the past.

"Our previous work in Saudi Arabia, and work in the Thar Desert of India, has been key in highlighting that survey work in previously neglected regions can yield new insights into human routes and adaptations," says Petraglia. Indeed, if Homo sapiens could cross what is now the Arabian Deserts then what would have stopped it crossing other currently arid regions such as the Gobi Desert, the Junggar Basin, and the Taklamakan Desert at different points in the past? Similarly, the Altai Mountains, the Tien Shan and the Tibetan Plateau represent a potentially new high altitude window into human evolution, especially given the recent Denisovan findings from Denisova Cave in Russia and at the Baishiya Karst Cave in China.

Nevertheless, traditional research areas, the existing density of archaeological sites, and assumptions about the persistence of environmental 'extremes' in the past have led to a focus on Siberia, rather than on potential interior routes of human movement across northern Asia.

A "Green Gobi"?


Indeed, palaeoclimatic research in Central Asia has increasingly accumulated evidence of past lake extents, past records of changing precipitation amounts, and changing glacial extents in mountain regions, which suggest that environments could have varied dramatically in this part of the world over the course of the Pleistocene. However, the dating of many of these environmental transitions has remained broad in scale, and these records have not yet been incorporated into archaeological discussions of human arrival in northern and Central Asia.

"We factored in climate records and geographical features into GIS models for glacials (periods during which the polar ice caps were at their greatest extent) and interstadials (periods during the retreat of these ice caps) to test whether the direction of past human movement would vary, based on the presence of these environmental barriers," says Nils Vanwezer, PhD student at the Max Planck Institute for the Science of Human History and a joint lead-author of the study.

"We found that while during 'glacial' conditions humans would indeed likely have been forced to travel via a northern arc through southern Siberia, during wetter conditions a number of alternative pathways would have been possible, including across a 'green' Gobi Desert," he continues. Comparisons with the available palaeoenvironmental records confirm that local and regional conditions would have been very different in these parts of Asia in the past, making these 'route' models a definite possibility for human movement.

Where did you come from, where did you go?


"We should emphasize that these routes are not 'real', definite pathways of Pleistocene human movement. However, they do suggest that we should look for human presence, migration, and interaction with other hominins in new parts of Asia that have been neglected as static voids of archaeology," says Dr. Patrick Roberts also of the Max Planck Institute for the Science of Human History, co-author of the study. "Given what we are increasingly discovering about the flexibility of our species, it would be of no surprise if we were to find early Homo sapiens in the middle of modern deserts or mountainous glacial sheets."

Read more at Science Daily

Ancient DNA tells the story of the first herders and farmers in east Africa

Shepherd and goats.
A collaborative study led by archaeologists, geneticists and museum curators is providing answers to previously unsolved questions about life in sub-Saharan Africa thousands of years ago. The results were published online in the journal Science Thursday, May 30.

Researchers from North American, European and African institutions analyzed ancient DNA from 41 human skeletons curated in the National Museums of Kenya and Tanzania, and the Livingstone Museum in Zambia.

"The origins of food producers in East Africa have remained elusive because of gaps in the archaeological record," said co-first author Mary Prendergast, Ph.D., professor of anthropology and chair of humanities at Saint Louis University's campus in Madrid, Spain.

"This study uses DNA to answer previously unresolvable questions about how people were moving and interacting," added Prendergast.

The research provides a look at the origins and movements of early African food producers.

The first form of food production to spread through most of Africa was the herding of cattle, sheep and goats. This way of life continues to support millions of people living on the arid grasslands that cover much of sub-Saharan Africa.

"Today, East Africa is one of the most genetically, linguistically, and culturally diverse places in the world," explains Elizabeth Sawchuk, Ph.D., a bioarchaeologist at Stony Brook University and co-first author of the study. "Our findings trace the roots of this mosaic back several millennia. Distinct peoples have coexisted in the Rift Valley for a very long time."

Previous archaeological research shows that the Great Rift Valley of Kenya and Tanzania was a key site for the transition from foraging to herding. Herders of livestock first appeared in northern Kenya around 5000 years ago, associated with elaborate monumental cemeteries, and then spread south into the Rift Valley, where Pastoral Neolithic cultures developed.

The new genetic results reveal that this spread of herding into Kenya and Tanzania involved groups with ancestry derived from northeast Africa, who appeared in East Africa and mixed with local foragers there between about 4,500 and 3,500 years ago. Previously, the origins and timing of these population shifts were unclear, and some archaeologists hypothesized that domestic animals spread through exchange networks, rather than by movement of people.

After around 3500 years ago, herders and foragers became genetically isolated in East Africa, even though they continued to live side by side. Archaeologists have hypothesized substantial interaction among foraging and herding groups, but the new results reveal that there were strong and persistent social barriers that lasted long after the initial encounters.

Another major genetic shift occurred during the Iron Age around 1200 years ago, with movement into the region of additional peoples from both northeastern and western Africa. These groups contributed to ancient ancestry profiles similar to those of many East Africans today. This genetic shift parallels two major cultural changes: farming and iron-working.

The study provided insight into the history of East Africa as an independent center of evolution of lactase persistence, which enables people to digest milk into adulthood. This genetic adaptation is found in high proportions among Kenyan and Tanzanian herders today.

The study was supported by funding from the Howard Hughes Medical Institute, with additional funding from the U.S. National Institutes of Health (5R01GM100233), the Allen Discovery Center, the John Templeton Foundation, the NSF Archaeometry Program, and the Radcliffe Institute for Advanced Study.

Read more at Science Daily

Subaru Telescope captures 1800 exploding stars

Supernova illustration.
By combining one of the world's most powerful digital cameras with a telescope capable of capturing a wider shot of the night sky than other big telescopes, a team of researchers from Japan has identified about 1,800 new supernovae, including 58 Type Ia supernovae located 8 billion light years away, reports a new study released online on 30 May.

A supernova is the name given to an exploding star that has reached the end of its life. The star often becomes as bright as its host galaxy, shining one billion times brighter than the Sun for anywhere from one to six months before dimming. Supernovae classed as Type Ia are useful because their consistent maximum brightness allows researchers to calculate how far the star is from Earth. This is particularly useful for researchers who want to measure the expansion of the Universe.
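For readers who want the arithmetic behind the "standard candle" idea, the sketch below applies the textbook distance-modulus relation, assuming a typical Type Ia peak absolute magnitude of about -19.3; the values are illustrative and ignore the redshift corrections that matter at the distances probed in this survey.

```python
# Illustrative sketch of the "standard candle" idea behind Type Ia supernovae:
# with a known peak absolute magnitude M, an observed apparent magnitude m gives
# the distance through the distance modulus  m - M = 5*log10(d_pc) - 5.
# M of about -19.3 is a commonly quoted textbook value, not a figure from this
# study, and the simple formula ignores redshift corrections at very large distances.

def distance_light_years(apparent_mag, absolute_mag=-19.3):
    d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsecs * 3.26  # 1 parsec is roughly 3.26 light years

# Example: a Type Ia supernova observed to peak at apparent magnitude 19.0
print(f"{distance_light_years(19.0):.2e} light years")  # ~1.5e9 light years
```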

In recent years, researchers have begun reporting a new type of supernova five to ten times brighter than Type Ia supernovae, named super-luminous supernovae, and many teams have been trying to learn more about these stars. Their unusual brightness enables researchers to spot stars in the farthest parts of the Universe that would usually be too faint to observe. Since the distant Universe is also the early Universe, studying this kind of star could reveal characteristics of the first massive stars created after the Big Bang.

But supernovae are rare events, and there are only a handful of telescopes in the world capable of capturing sharp images of distant stars. In order to maximize the chances of observing a supernova, a team led by Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Professor Naoki Yasuda, and researchers from Tohoku University, Konan University, the National Astronomical Observatory of Japan, School of Science, the University of Tokyo, and Kyoto University, used the Subaru Telescope.

This telescope is capable of generating sharp stellar images, and the Hyper Suprime-Cam, an 870 mega-pixel digital camera attached at its top, captures a very wide area of the night sky in one shot.

By taking repeated images of the same area of night sky over a six month period, the researchers could identify new supernovae by looking for stars that suddenly appeared brighter before gradually fading out.
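In essence this is difference imaging: subtract a reference image from a later image of the same field and flag anything that has brightened sharply. The snippet below is a toy illustration of that idea, not the survey's actual pipeline.

```python
# Simplified illustration of difference imaging for transient detection:
# subtract a reference image from a new image of the same field and flag
# pixels that brightened well above the background noise. Toy example only,
# not the Subaru/Hyper Suprime-Cam pipeline.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(100.0, 5.0, size=(50, 50))   # earlier image of the field
new_image = reference + rng.normal(0.0, 5.0, size=(50, 50))
new_image[20, 30] += 400.0                           # a "new" bright source

difference = new_image - reference
threshold = 5 * difference.std()                     # crude 5-sigma cut
candidates = np.argwhere(difference > threshold)
print("candidate transient pixels (row, col):", candidates.tolist())
```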

As a result, the team identified 5 super luminous supernovae, and about 400 Type Ia supernovae. Fifty-eight of these Type Ia supernovae were located more than 8 billion light years away from Earth. In comparison, it took researchers using the Hubble Space Telescope about 10 years to discover a total of 50 supernovae located more than 8 billion light years away from Earth.

"The Subaru Telescope and Hyper Suprime-Cam have already helped researchers create a 3D map of dark matter, and observation of primordial black holes, but now this result proves that this instrument has a very high capability finding supernovae very, very far away from Earth. I want to thank all of my collaborators for their time and effort, and look forward to analyzing our data to see what kind of picture of the Universe it holds," said Yasuda.

Read more at Science Daily

Circadian clocks: Body parts respond to day and night independently from brain, studies show

Body clocks concept.
Can your liver sense when you're staring at a television screen or cellphone late at night? Apparently so, and when such activity is detected, the organ can throw your circadian rhythms out of whack, leaving you more susceptible to health problems.

That's one of the takeaways from two new studies by University of California, Irvine scientists working in collaboration with the Institute for Research in Biomedicine in Barcelona, Spain.

The studies, published today in the journal Cell, used specially bred mice to analyze the network of internal clocks that regulate metabolism. Although researchers had suspected that the body's various circadian clocks could operate independently from the central clock in the hypothalamus of the brain, there was previously no way to test the theory, said Paolo Sassone-Corsi, director of UCI's Center for Epigenetics and Metabolism and senior author of one of the studies.

To overcome that obstacle, scientists figured out how to disable the entire circadian system of the mice, then jump-start individual clocks. For the experiments reported in the Cell papers, they activated clocks inside the liver or skin.

"The results were quite surprising," said Sassone-Corsi, Donald Bren Professor of Biological Chemistry. "No one realized that the liver or skin could be so directly affected by light."

For example, despite the shutdown of all other body clocks, including the central brain clock, the liver knew what time it was, responded to light changes as day shifted to night and maintained critical functions, such as preparing to digest food at mealtime and converting glucose to energy.

Somehow, the liver's circadian clock was able to detect light, presumably via signals from other organs. Only when the mice were subjected to constant darkness did the liver's clock stop functioning.

In upcoming studies, UCI and Barcelona researchers will phase in other internal clocks to see how different organs communicate with each other, Sassone-Corsi said.

"The future implications of our findings are vast," he noted. "With these mice, we can now begin deciphering the metabolic pathways that control our circadian rhythms, aging processes and general well-being."

In earlier studies, Sassone-Corsi has examined how circadian clocks can be rewired by such factors as sleep deprivation, diet and exercise. Exposure to computer, television or cellphone light just before bed can also scramble internal clocks.

Because of modern lifestyles, it's easy for people's circadian systems to get confused, he said. In turn, that can lead to depression, allergies, premature aging, cancer and other health problems. Further mice experiments could uncover ways to make human internal clocks "less misaligned," Sassone-Corsi added.

Read more at Science Daily

Declining fertility rates may explain Neanderthal extinction

Neanderthal depiction.
A new hypothesis for Neanderthal extinction supported by population modelling is put forward in a new study by Anna Degioanni from Aix Marseille Université, France and colleagues, published May 29, 2019 in the open-access journal PLOS ONE.

The lack of empirical data allowing testing of hypotheses is one of the biggest challenges for researchers studying Neanderthal extinction. Many hypotheses involve catastrophic events such as disease or climate change. In order to test alternative hypothetical extinction scenarios, Degioanni and colleagues created a Neanderthal population model allowing them to explore demographic factors which might have resulted in declining populations and population extinction over a period of 4,000-10,000 years (a time frame compatible with known Neanderthal history). The researchers created baseline demographic parameters for their Neanderthal extinction model (e.g. survival, migration, and fertility rates) based on observational data on modern hunter-gatherer groups and extant large apes, as well as available Neanderthal paleo-genetic and empirical data from earlier studies. The authors defined populations as extinct when they fell below 5,000 individuals.

The authors saw that in their model, extinction would have been possible within 10,000 years with a decrease in fertility rates of young (less than 20 year-old) Neanderthal women of just 2.7 percent; if the fertility rate decreased by 8 percent, extinction occurred within 4,000 years. If this decrease in fertility was amplified by a reduction in the survival of infants (children less than one year old), a drop of just 0.4 percent in infant survival could have led to extinction within 10,000 years.
The authors intended to explore possible Neanderthal extinction scenarios rather than to posit any definitive explanation. However, the researchers note that this study is the first to use empirical data to suggest that relatively minor demographic changes, such as a reduction in fertility or an increase in infant mortality, might have led to Neanderthal extinction. The authors note that modelling can be a useful tool in studying Neanderthals.
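To make the threshold idea concrete, the sketch below projects a toy population forward under a slight net decline and reports when it drops below the 5,000-individual extinction threshold mentioned above; the starting size and growth rates are invented for illustration and are not the parameters of the authors' model.

```python
# Deliberately simplified, hypothetical sketch of threshold-based extinction
# in a projected population. The starting size and growth rates are made up;
# only the 5,000-individual threshold echoes the article. This is NOT the
# demographic model of Degioanni et al.
def years_to_extinction(start_size, annual_growth_rate, threshold, max_years=20000):
    size = start_size
    for year in range(1, max_years + 1):
        size *= 1.0 + annual_growth_rate
        if size < threshold:
            return year
    return None  # population never fell below the threshold

stable    = years_to_extinction(70000, 0.0000, 5000)   # flat growth
declining = years_to_extinction(70000, -0.0003, 5000)  # slight net decline

print("stable growth:", stable)        # None: no extinction
print("slight decline:", declining)    # extinction after several millennia
```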

The authors add: "This study of the disappearance of the Neanderthals published today in PLOS ONE does not attempt to explain 'why' the Neanderthals disappeared, but to identify 'how' their demise may have taken place. This original approach is made on the basis of demographic modeling. The results suggest that a very small reduction in fertility may account for the disappearance of the Neanderthal population. According to this research, this decrease did not concern all female Neanderthals, but only the youngest (less than 20 years old)."

From Science Daily

May 30, 2019

Astronomers find 'Forbidden' planet in 'Neptunian Desert' around its star

Exoplanet illustration.
An exoplanet smaller than Neptune with its own atmosphere has been discovered in the Neptunian Desert around its star by an international collaboration of astronomers, with the University of Warwick taking a leading role.

The planet was identified in new research led by Dr Richard West, together with Professor Peter Wheatley, Dr Daniel Bayliss and Dr James McCormac from the Astronomy and Astrophysics Group at the University of Warwick.

The Next-Generation Transit Survey (NGTS) is situated at the European Southern Observatory's Paranal Observatory in the heart of the Atacama Desert, Chile. It is a collaboration between the UK universities of Warwick, Leicester and Cambridge, and Queen's University Belfast, together with Observatoire de Genève, DLR Berlin and Universidad de Chile.

NGTS-4b, also nick-named 'The Forbidden Planet' by researchers, is a planet smaller than Neptune but three times the size of Earth.

It has a mass of 20 Earth masses, a radius 20% smaller than Neptune's, and a temperature of about 1,000 degrees Celsius. It orbits its star in only 1.3 days -- the equivalent of Earth's year-long orbit around the Sun.

It is the first exoplanet of its kind to have been found in the Neptunian Desert.

The Neptunian Desert is the region close to stars where no Neptune-sized planets are found. This area receives strong irradiation from the star, meaning planets there do not retain their gaseous atmospheres: they evaporate, leaving just a rocky core. However, NGTS-4b still has its atmosphere of gas.

When looking for new planets, astronomers look for a dip in the light of a star -- this is the planet orbiting it and blocking the light. Usually only dips of 1% or more are picked up by ground-based searches, but the NGTS telescopes can pick up a dip of just 0.2%.
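The size of the dip relates directly to the planet's size: to a first approximation the fractional drop in starlight equals the squared ratio of planetary to stellar radius, so a 0.2% dip implies a planet whose radius is only a few percent of its star's. A quick illustrative check:

```python
# The fractional dip in starlight during a transit is roughly the squared
# ratio of planet radius to star radius: depth ~ (Rp / Rs)**2.
# Quick check of what a 0.2% dip implies (illustration, not the paper's fit).
depth = 0.002
radius_ratio = depth ** 0.5
print(f"Rp/Rs is roughly {radius_ratio:.3f}")  # ~0.045, i.e. about 4.5% of the star's radius
```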

Researchers believe the planet may have moved into the Neptunian Desert recently, within the last one million years, or that it was originally very big and its atmosphere is still evaporating.

Dr Richard West, from the Department of Physics at the University of Warwick comments:

"This planet must be tough -- it is right in the zone where we expected Neptune-sized planets could not survive. It is truly remarkable that we found a transiting planet via a star dimming by less than 0.2% -- this has never been done before by telescopes on the ground, and it was great to find after working on this project for a year.

Read more at Science Daily

Earth recycles ocean floor into diamonds

Diamond and coal
The diamond on your finger is most likely made of recycled seabed cooked deep in the Earth.

Traces of salt trapped in many diamonds show the stones are formed from ancient seabeds that became buried deep beneath the Earth's crust, according to new research led by Macquarie University geoscientists in Sydney, Australia.

Most diamonds found at the Earth's surface are formed this way; others are created by crystallization of melts deep in the mantle.

In experiments recreating the extreme pressures and temperatures found 200 kilometres underground, Dr Michael Förster, Professor Stephen Foley, Dr Olivier Alard, and colleagues at Goethe Universität and Johannes Gutenberg Universität in Germany, have demonstrated that seawater in sediment from the bottom of the ocean reacts in the right way to produce the balance of salts found in diamond.

The study, published in Science Advances, settles a long-standing question about the formation of diamonds. "There was a theory that the salts trapped inside diamonds came from marine seawater, but it couldn't be tested," says lead author Michael. "Our research showed that they came from marine sediment."

Diamonds are crystals of carbon that form beneath the Earth's crust in very old parts of the mantle. They are brought to the surface in volcanic eruptions of a special kind of magma called kimberlite.

While gem diamonds are usually made of pure carbon, so-called fibrous diamonds, which are cloudy and less appealing to jewellers, often include small traces of sodium, potassium and other minerals that reveal information about the environment where they formed.

These fibrous diamonds are commonly ground down and used in technical applications like drill bits.

Fibrous diamonds grow more quickly than gem diamonds, which means they trap tiny samples of fluids around them while they form.

"We knew that some sort of salty fluid must be around while the diamonds are growing, and now we have confirmed that marine sediment fits the bill," says Michael.

For this process to occur, a large slab of sea floor would have to slip down to a depth of more than 200 kilometres below the surface quite rapidly, in a process known as subduction in which one tectonic plate slides beneath another.

The rapid descent is required because the sediment must be compressed to more than four gigapascals (40,000 times atmospheric pressure) before it begins to melt in the temperatures of more than 800°C found in the ancient mantle.

To test the idea, team members at the Johannes Gutenberg Universität Mainz and Goethe Universität Frankfurt in Germany carried out a series of high-pressure, high-temperature experiments.

They placed marine sediment samples in a vessel with a rock called peridotite that is the most common kind of rock found in the part of the mantle where diamonds form. Then they turned up the pressure and the heat, giving the samples time to react with one another in conditions like those found at different places in the mantle.

At pressures between four and six gigapascals and temperatures between 800°C and 1100°C, corresponding to depths of between 120 and 180 kilometres below the surface, they found salts formed with a balance of sodium and potassium that closely matches the small traces found in diamonds.
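The quoted pressure-depth correspondence follows from the rough hydrostatic relation P ≈ ρgh. As a back-of-envelope check, using an assumed average overburden density of about 3,300 kg/m³ (a typical textbook value, not a figure from the paper):

```python
# Back-of-envelope check that 4-6 GPa corresponds to depths of roughly
# 120-180 km, via the hydrostatic relation P ~ rho * g * h, with an assumed
# average overburden density of ~3,300 kg/m^3 (not a value from the paper).
rho = 3300.0      # kg/m^3, typical crust/upper-mantle average
g = 9.8           # m/s^2

for pressure_gpa in (4.0, 6.0):
    depth_km = pressure_gpa * 1e9 / (rho * g) / 1000.0
    print(f"{pressure_gpa} GPa is roughly {depth_km:.0f} km deep")
```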

"We demonstrated that the processes that lead to diamond growth are driven by the recycling of oceanic sediments in subduction zones," says Michael.

Read more at Science Daily

May 29, 2019

'Fettuccine' may be most obvious sign of life on Mars, researchers report

Travertine terraces in Mammoth Hot Springs, Yellowstone National Park
A rover scanning the surface of Mars for evidence of life might want to check for rocks that look like pasta, researchers report in the journal Astrobiology.

The bacterium that controls the formation of such rocks on Earth is ancient and thrives in harsh environments that are similar to conditions on Mars, said University of Illinois geology professor Bruce Fouke, who led the new, NASA-funded study.

"It has an unusual name, Sulfurihydrogenibium yellowstonense," he said. "We just call it 'Sulfuri.'"

The bacterium belongs to a lineage that evolved prior to the oxygenation of Earth roughly 2.35 billion years ago, Fouke said. It can survive in extremely hot, fast-flowing water bubbling up from underground hot springs. It can withstand exposure to ultraviolet light and survives only in environments with extremely low oxygen levels, using sulfur and carbon dioxide as energy sources.

"Taken together, these traits make it a prime candidate for colonizing Mars and other planets," Fouke said.

And because it catalyzes the formation of crystalline rock formations that look like layers of pasta, it would be a relatively easy life form to detect on other planets, he said.

The unique shape and structure of rocks associated with Sulfuri result from its unusual lifestyle, Fouke said. In fast-flowing water, Sulfuri bacteria latch on to one another "and hang on for dear life," he said.

"They form tightly wound cables that wave like a flag that is fixed on one end," he said. The waving cables keep other microbes from attaching. Sulfuri also defends itself by oozing a slippery mucus.

"These Sulfuri cables look amazingly like fettuccine pasta, while further downstream they look more like capellini pasta," Fouke said. The researchers used sterilized pasta forks to collect their samples from Mammoth Hot Springs in Yellowstone National Park.

The team analyzed the microbial genomes, evaluated which genes were being actively translated into proteins and deciphered the organism's metabolic needs, Fouke said.

The team also looked at Sulfuri's rock-building capabilities, finding that proteins on the bacterial surface speed up the rate at which calcium carbonate -- also called travertine -- crystallizes in and around the cables "1 billion times faster than in any other natural environment on Earth," Fouke said. The result is the deposition of broad swaths of hardened rock with an undulating, filamentous texture.

"This should be an easy form of fossilized life for a rover to detect on other planets," Fouke said.

Read more at Science Daily

Early humans deliberately recycled flint to create tiny, sharp tools

Flint.
A new Tel Aviv University study finds that prehistoric humans "recycled" discarded or broken flint tools 400,000 years ago to create small, sharp utensils with specific functions. These recycled tools were then used with great precision and accuracy to perform specific tasks involved in the processing of animal products and vegetal materials.

The site of Qesem Cave, located just outside Tel Aviv, was discovered during a road construction project in 2000. It has since offered up countless insights into life in the region hundreds of thousands of years ago.

The research was led jointly by postdoctoral fellow Dr. Flavia Venditti and Profs. Ran Barkai and Avi Gopher, in collaboration with Prof. Cristina Lemorini of Sapienza University of Rome. All three are members of TAU's Department of Archaeology and Ancient Near Eastern Cultures. The study was published on April 11 in the Journal of Human Evolution.

In recent years, archaeologists working in caves in Spain and North Africa and digs in Italy and Israel have unearthed evidence that prehistoric people recycled objects they used in daily life. Just as we recycle materials such as paper and plastic to manufacture new items today, early hominids collected discarded or broken tools made of flint to create new utensils for specific purposes hundreds of thousands of years ago.

"Recycling was a way of life for these people," Prof. Barkai says. "It has long been a part of human evolution and culture. Now, for the first time, we are discovering the specific uses of the recycled 'tool kit' at Qesem Cave."

Exceptional conditions in the cave allowed for the immaculate preservation of the materials, including micro residue on the surface of the flint tools.

"We used microscopic and chemical analyses to discover that these small and sharp recycled tools were specifically produced to process animal resources like meat, hide, fat and bones," Venditti explains. "We also found evidence of plant and tuber processing, which demonstrated that they were also part of the hominids' diet and subsistence strategies."

According to the study, signs of use were found on the outer edges of the tiny objects, indicating targeted cutting activities related to the consumption of food: butchery activities and tuber, hide and bone processing. The researchers used two different and independent spectroscopic chemical techniques: Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy coupled with energy dispersive X-ray spectroscopy (SEM-EDX).

"The meticulous analysis we conducted allowed us to demonstrate that the small recycled flakes were used in tandem with other types of utensils. They therefore constituted a larger, more diversified tool kit in which each tool was designed for specific objectives," Venditti says.

She adds, "The research also demonstrates that the Qesem inhabitants practiced various activities in different parts of the cave: The fireplace and the area surrounding it were eventually a central area of activity devoted to the consumption of the hunted animal and collected vegetal resources, while the so-called 'shelf area' was used to process animal and vegetal materials to obtain different by-products."

"This research highlights two debated topics in the field of Paleolithic archaeology: the meaning of recycling and the functional role of small tools," Prof. Barkai observes. "The data from the unique, well-preserved and investigated Qesem Cave serve to enrich the discussion of these phenomena in the scientific community."

Read more at Science Daily

Did ancient supernovae prompt human ancestors to walk upright?

Supernova illustration.
Did ancient supernovae induce proto-humans to walk on two legs, eventually resulting in Homo sapiens with hands free to build cathedrals, design rockets and snap iPhone selfies?

A paper published today in the Journal of Geology makes the case: Supernovae bombarded Earth with cosmic energy starting as many as 8 million years ago, with a peak some 2.6 million years ago, initiating an avalanche of electrons in the lower atmosphere and setting off a chain of events that feasibly ended with bipedal hominins such as Homo habilis, dubbed "handy man."

The authors believe atmospheric ionization probably triggered an enormous upsurge in cloud-to-ground lightning strikes that ignited forest fires around the globe. These infernos could be one reason ancestors of Homo sapiens developed bipedalism -- to adapt in savannas that replaced torched forests in northeast Africa.

"It is thought there was already some tendency for hominins to walk on two legs, even before this event," said lead author Adrian Melott, professor emeritus of physics & astronomy at the University of Kansas. "But they were mainly adapted for climbing around in trees. After this conversion to savanna, they would much more often have to walk from one tree to another across the grassland, and so they become better at walking upright. They could see over the tops of grass and watch for predators. It's thought this conversion to savanna contributed to bipedalism as it became more and more dominant in human ancestors."

Based on a "telltale" layer of iron-60 deposits lining the world's sea beds, astronomers have high confidence supernovae exploded in Earth's immediate cosmic neighborhood -- between 100 and only 50 parsecs (163 light years) away -- during the transition from the Pliocene Epoch to the Ice Age.

"We calculated the ionization of the atmosphere from cosmic rays which would come from a supernova about as far away as the iron-60 deposits indicate," Melott said. "It appears that this was the closest one in a much longer series. We contend it would increase the ionization of the lower atmosphere by 50-fold. Usually, you don't get lower-atmosphere ionization because cosmic rays don't penetrate that far, but the more energetic ones from supernovae come right down to the surface -- so there would be a lot of electrons being knocked out of the atmosphere."

According to Melott and co-author Brian Thomas of Washburn University, ionization in the lower atmosphere meant an abundance of electrons would form more pathways for lightning strikes.

"The bottom mile or so of atmosphere gets affected in ways it normally never does," Melott said. "When high-energy cosmic rays hit atoms and molecules in the atmosphere, they knock electrons out of them -- so these electrons are running around loose instead of bound to atoms. Ordinarily, in the lightning process, there's a buildup of voltage between clouds or the clouds and the ground -- but current can't flow because not enough electrons are around to carry it. So, it has to build up high voltage before electrons start moving. Once they're moving, electrons knock more electrons out of more atoms, and it builds to a lightning bolt. But with this ionization, that process can get started a lot more easily, so there would be a lot more lightning bolts."

The KU researcher said the probability that this lightning spike touched off a worldwide upsurge in wildfires is supported by the discovery of carbon deposits found in soils that correspond with the timing of the cosmic-ray bombardment.

"The observation is that there's a lot more charcoal and soot in the world starting a few million years ago," Melott said. "It's all over the place, and nobody has any explanation for why it would have happened all over the world in different climate zones. This could be an explanation. That increase in fires is thought to have stimulated the transition from woodland to savanna in a lot of places -- where you had forests, now you had mostly open grassland with shrubby things here and there. That's thought to be related to human evolution in northeast Africa. Specifically, in the Great Rift Valley where you get all these hominin fossils."

Melott said no such event is likely to occur again anytime soon. The nearest star capable of exploding into a supernova in the next million years is Betelgeuse, some 200 parsecs (652 light years) from Earth.

Read more at Science Daily

Comet inspires chemistry for making breathable oxygen on Mars

Oxygen element in periodic table.
Science fiction stories are chock full of terraforming schemes and oxygen generators for a very good reason -- we humans need molecular oxygen (O2) to breathe, and space is essentially devoid of it. Even on other planets with thick atmospheres, O2 is hard to come by.

So, when we explore space, we need to bring our own oxygen supply. That is not ideal because a lot of energy is needed to hoist things into space atop a rocket, and once the supply runs out, it is gone.

One place molecular oxygen does appear outside of Earth is in the wisps of gas streaming off comets. The source of that oxygen remained a mystery until two years ago when Konstantinos P. Giapis, a professor of chemical engineering at Caltech, and his postdoctoral fellow Yunxi Yao, proposed the existence of a new chemical process that could account for its production. Giapis, along with Tom Miller, professor of chemistry, has now demonstrated a new reaction for generating oxygen that Giapis says could help humans explore the universe and perhaps even fight climate change at home. More fundamentally though, he says the reaction represents a new kind of chemistry discovered by studying comets.

Most chemical reactions require energy, which is typically provided as heat. Giapis's research shows that some unusual reactions can occur by providing kinetic energy. When water molecules are shot like extremely tiny bullets onto surfaces containing oxygen, such as sand or rust, the water molecule can rip off that oxygen to produce molecular oxygen. This reaction occurs on comets when water molecules vaporize from the surface and are then accelerated by the solar wind until they crash back into the comet at high speed.

Comets, however, also emit carbon dioxide (CO2). Giapis and Yao wanted to test if CO2 could also produce molecular oxygen in collisions with the comet surface. When they found O2 in the stream of gases coming off the comet, they wanted to confirm that the reaction was similar to water's reaction. They designed an experiment to crash CO2 onto the inert surface of gold foil, which cannot be oxidized and should not produce molecular oxygen. Nonetheless, O2 continued to be emitted from the gold surface. This meant that both atoms of oxygen come from the same CO2 molecule, effectively splitting it in an extraordinary manner.

"At the time we thought it would be impossible to combine the two oxygen atoms of a CO2 molecule together because CO2 is a linear molecule, and you would have to bend the molecule severely for it to work," Giapis says. "You're doing something really drastic to the molecule."

To understand the mechanism of how CO2 breaks down to molecular oxygen, Giapis approached Miller and his postdoctoral fellow Philip Shushkov, who designed computer simulations of the entire process. Understanding the reaction posed a significant challenge because of the possible formation of excited molecules. These molecules have so much energy that their constituent atoms vibrate and rotate around to an enormous degree. All that motion makes simulating the reaction in a computer more difficult because the atoms within the molecules move in complex ways.

"In general, excited molecules can lead to unusual chemistry, so we started with that," Miller says. "But, to our surprise, the excited state did not create molecular oxygen. Instead, the molecule decomposed into other products. Ultimately, we found that a severely bent CO2 can also form without exciting the molecule, and that could produce O2."

The apparatus Giapis designed to perform the reaction works like a particle accelerator, turning the CO2 molecules into ions by giving them a charge and then accelerating them using an electric field, albeit at much lower energies than are found in a particle accelerator. However, he adds that such a device is not necessary for the reaction to occur.

"You could throw a stone with enough velocity at some CO2 and achieve the same thing," he says. "It would need to be traveling about as fast as a comet or asteroid travels through space."

That could explain the presence of small amounts of oxygen that have been observed high in the Martian atmosphere. There has been speculation that the oxygen is being generated by ultraviolet light from the sun striking CO2, but Giapis believes the oxygen is also generated by high-speed dust particles colliding with CO2 molecules.

He hopes that a variation of his reactor could be used to do the same thing at more useful scales -- perhaps one day serving as a source of breathable air for astronauts on Mars or being used to combat climate change by pulling CO2, a greenhouse gas, out of Earth's atmosphere and turning it into oxygen. He acknowledges, however, that both of those applications are a long way off because the current version of the reactor has a low yield, creating only one to two oxygen molecules for every 100 CO2 molecules shot through the accelerator.

"Is it a final device? No. Is it a device that can solve the problem with Mars? No. But it is a device that can do something that is very hard," he says. "We are doing some crazy things with this reactor."

Read more at Science Daily

May 27, 2019

Climate change affects the genetic diversity of a species

Marmot family.
What effects does climate change have on the genetic diversity of living organisms? In a study led by Charité -- Universitätsmedizin Berlin, an international team of researchers studied the genome of the alpine marmot, an ice-age remnant that now lives in large numbers in high-altitude Alpine meadows. The results were unexpected: the species was found to be the least genetically diverse of any wild mammal studied to date. An explanation was found in the marmot's genetic past. The alpine marmot lost its genetic diversity during ice-age related climate events and has been unable to recover its diversity since. Results from this study have been published in the journal Current Biology.

A large rodent from the squirrel family, the alpine marmot lives in the high-altitude mountainous terrain found beyond the tree line. An international team of researchers has now successfully deciphered the animal's genome and found the individual animals tested to be genetically very similar. In fact, the animal's genetic diversity is lower than that of any other wild mammal whose genome has been genetically sequenced. "We were very surprised by this finding. Low genetic diversity is primarily found among highly endangered species such as, for instance, the mountain gorilla. Population numbers for the alpine marmot, however, are in the hundreds of thousands, which is why the species is not considered to be at risk," explains Prof. Dr. Markus Ralser, the Director of Charité's Institute of Biochemistry and the investigator with overall responsibility for the study, which was co-led by the Francis Crick Institute.

As the alpine marmot's low genetic diversity could not be explained by the animal's current living and breeding habits, the researchers used computer-based analysis to reconstruct the marmot's genetic past. After combining the results of comprehensive genetic analyses with data from fossil records, the researchers came to the conclusion that the alpine marmot lost its genetic diversity as a result of multiple climate-related adaptations during the last ice age. One of these adaptations occurred during the animal's colonization of the Pleistocene steppe at the beginning of the last ice age (between 110,000 and 115,000 years ago). A second occurred when the Pleistocene steppe disappeared again towards the end of the ice age (between 10,000 and 15,000 years ago). Since then, marmots have inhabited the high-altitude grasslands of the Alps, where temperatures are similar to those of the Pleistocene steppe habitat. The researchers found evidence to suggest that the marmot's adaptation to the colder temperatures of the Pleistocene steppe resulted in longer generation time and a decrease in the rate of genetic mutations. These developments meant that the animals were unable to effectively regenerate their genetic diversity. Overall results suggest that the rate of genome evolution is exceptionally low in alpine marmots.

Commenting on the significance of their results, Prof. Ralser says: "Our study shows that climate change can have extremely long-term effects on the genetic diversity of a species. This had not previously been shown in such clear detail. When a species displays very little genetic diversity, this can be due to climate events which occurred many thousands of years ago." He adds: "It is remarkable that the alpine marmot managed to survive for thousands of years despite its low genetic diversity." After all, a lack of genetic variation can mean a reduced ability to adapt to change, rendering the affected species more susceptible to both diseases and altered environmental conditions -- including changes in the local climate.

Summarizing the study's findings, Prof. Ralser explains: "We should take the results of the study seriously, as we can see similar warnings from the past. In the 19th century, the passenger pigeon was one of the most abundant species of land birds in the Northern Hemisphere, yet, it was completely wiped out within just a few years. It is possible that low genetic diversity played a role in this." Outlining his plans for further research, he adds: "An important next step would be to study other animals more closely which, like the alpine marmot, managed to survive the ice age. These animals might be trapped in a similar state of low genetic diversity. Currently, estimates of a particular species' extinction risk are primarily based on the number of animals capable of breeding. We ought to reconsider whether this should be the only criterion we use."

Read more at Science Daily

GRACE data contributes to understanding of climate change

The University of Texas at Austin team that led a twin satellite system launched in 2002 to take detailed measurements of the Earth, called the Gravity Recovery and Climate Experiment (GRACE), reports in the most recent issue of the journal Nature Climate Change on the contributions that their nearly two decades of data have made to our understanding of global climate patterns.

Among the many contributions that GRACE has made:

  • GRACE recorded three times the mass of ice lost in the polar and mountainous regions since first beginning measurements -- a consequence of global warming.
  • GRACE enabled a measure of how much heat has been added to the ocean and where that heat remains stored. GRACE has provided detailed observations, confirming that the majority of the warming occurs in the upper 2,000 meters of the oceans.
  • GRACE has observed that of the 37 largest land-based aquifers, 13 have undergone critical mass loss. This loss, due to both a climate-related effect and an anthropogenic (human-induced) effect, documents the reduced availability of clean, fresh water supplies for human consumption.
  • The information gathered from GRACE provides vital data for the federal agency United States Drought Monitor and has shed light on the causes of drought and aquifer depletion in places worldwide, from India to California.

Intended to last just five years in orbit for a limited, experimental mission to measure small changes in the Earth's gravitational fields, GRACE operated for more than 15 years and has provided unprecedented insight into our global water resources, from more accurate measurements of polar ice loss to a better view of the ocean currents, and the rise in global sea levels. The mission was a collaboration between NASA and the German Aerospace Centre and was led by researchers in the Center for Space Research (CSR) in UT's Cockrell School of Engineering.

UT's Texas Advanced Computing Center (TACC) has played a critical role in this international project over the last 15 years, according to Byron Tapley, the Clare Cockrell Williams Centennial Chair Emeritus in the Department of Aerospace Engineering and Engineering Mechanics who established the Center for Space Research at UT in 1981 and who served as principal investigator of the GRACE mission.

"As the demand for the GRACE science deliverables have grown, TACC's ability to support these demands have grown. It has been a seamless transition to a much richer reporting environment," he said.

By measuring changes in mass that cause deviations in the strength of gravity's pull on the Earth's various systems -- water systems, ice sheets, atmosphere, land movements, and more -- the satellites can measure small changes in the Earth system interactions.

"By monitoring the physical components of the Earth's dynamical system as a whole, GRACE provides a time variable and holistic overview of how our oceans, atmosphere and land surface topography interact," Tapley said.

The data system for the mission is highly distributed and requires significant data storage and computation through an internationally distributed network. Although the final data products for the CSR solutions are generated at TACC, there is considerable effort in Germany by the Geophysics Center in Potsdam and the NASA Jet Propulsion Laboratory (JPL) in Pasadena, California. The final CSR analysis at TACC starts with a data downlink from the satellites to a raw data collection center in Germany. The data is then transmitted to JPL where the primary measurements are converted into the geophysical measurements consisting of GPS, accelerometer, attitude quaternions, and the high accuracy intersatellite ranging measurements collected by each satellite during a month-long observation span.

"The collection of information from this international community are brought together by the fundamental computing capability and the operational philosophy at TACC to undergo the challenging data analysis required to obtain the paradigm-shifting view of the Earth's interactions," Tapley said.

Despite being a risky venture operating on minimal funding, the GRACE mission surpassed all expectations and continues to provide a critical set of measurements.

"The concept of using the changing gravimetric patterns on Earth as a means to understanding major changes in the Earth system interactions had been proposed before," Tapley said. "But we were the first to make it happen at a measurement level that supported the needs of the diverse Earth-science community."

One of the remarkable benefits of working with TACC, according to Tapley, is the ability to pose questions whose solutions would not have been feasible before TACC, and then to find the computing capability to answer them.

"As an example, when we began the GRACE mission, our capability was looking at gravity models that were characterized by approximately 5,000 model parameters, whose solution was obtained at approximately yearly analysis intervals. The satellite-only GRACE models today are based on approximately 33,000 parameters that we have the ability to determine at a daily interval. In the final re-analysis of the GRACE data, we're looking to expand this parameterization to 4,000,000 parameters for the mean model. The interaction with TACC has always been in the context of: 'If the answer to a meaningful question requires extensive computations, let's find a way to satisfy that requirement,'" Tapley said.

Read more at Science Daily

New causes of autism found in 'junk' DNA

DNA illustration.
Leveraging artificial intelligence techniques, researchers have demonstrated that mutations in so-called 'junk' DNA can cause autism. The study, published May 27 in Nature Genetics, is the first to functionally link such mutations to the neurodevelopmental condition.

The research was led by Olga Troyanskaya in collaboration with Robert Darnell. Troyanskaya is deputy director for genomics at the Flatiron Institute's Center for Computational Biology (CCB) in New York City and a professor of computer science at Princeton University. Darnell is the Robert and Harriet Heilbrunn Professor of Cancer Biology at Rockefeller University and an investigator at the Howard Hughes Medical Institute.

Their team used machine learning to analyze the whole genomes of 1,790 individuals with autism and their unaffected parents and siblings. These individuals had no family history of autism, meaning the genetic cause of their condition probably lay in spontaneous rather than inherited mutations.

The analysis predicted the ramifications of genetic mutations in parts of the genome that do not encode proteins, regions often mischaracterized as 'junk' DNA. The number of autism cases linked to the noncoding mutations was comparable to the number of cases linked to protein-coding mutations that disable gene function.

The implications of the work extend beyond autism, Troyanskaya says. "This is the first clear demonstration of non-inherited, noncoding mutations causing any complex human disease or disorder."

Scientists can apply the same techniques used in the new study to explore the role noncoding mutations play in diseases such as cancer and heart disease, says study co-author Jian Zhou of CCB and Princeton. "This enables a new perspective on the cause of not just autism, but many human diseases."

Only 1 to 2 percent of the human genome is made up of genes that encode the blueprints for making proteins. Those proteins carry out tasks throughout our bodies, such as regulating blood sugar levels, fighting infections and sending communications between cells. The other 98 percent of our genome isn't genetic dead weight, though. The noncoding regions help regulate when and where genes make proteins.

Mutations in protein-coding regions account for at most 30 percent of autism cases in individuals without a family history of autism. Evidence suggested that autism-causing mutations must happen elsewhere in the genome as well.

Uncovering which noncoding mutations may cause autism is tricky. A single individual may have dozens of noncoding mutations, most of which are unique to that individual. This makes the traditional approach of identifying common mutations among affected populations nonviable.

Troyanskaya and her colleagues took a new approach. They trained a machine learning model to predict how a given sequence would affect gene expression.
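In outline, the scoring idea can be sketched as below. This is a hedged illustration rather than the authors' published code: predict_regulatory_effects is a hypothetical stand-in for any trained sequence model that maps a DNA window to predicted regulatory signals, and a variant is scored by how far it shifts those predictions.

```python
import numpy as np

def predict_regulatory_effects(sequence: str) -> np.ndarray:
    """Hypothetical stand-in for a trained deep learning sequence model."""
    raise NotImplementedError("plug a trained model in here")

def mutation_impact(reference_window: str, mutant_window: str) -> float:
    """Score a variant by how much it shifts the model's predicted regulatory profile."""
    ref = predict_regulatory_effects(reference_window)
    alt = predict_regulatory_effects(mutant_window)
    return float(np.abs(alt - ref).max())  # largest predicted change across regulatory features
```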

"This is a shift in thinking about genetic studies that we're introducing with this analysis," says Chandra Theesfeld, a research scientist in Troyanskaya's lab at Princeton. "In addition to scientists studying shared genetic mutations across large groups of individuals, here we're applying a set of smart, sophisticated tools that tell us what any specific mutation is going to do, even those that are rare or never observed before."

The researchers studied the genetic basis of autism by applying the machine learning model to a treasure trove of genetic data called the Simons Simplex Collection. The Simons Foundation, the Flatiron Institute's parent organization, produced and maintains the repository. The Simons Simplex Collection contains the whole genomes of nearly 2,000 'quartets' made up of a child with autism, an unaffected sibling and their unaffected parents.

These foursomes had no previous family history of autism, meaning that non-inherited mutations were probably responsible for the affected child's condition. (Such mutations occur spontaneously in sperm and egg cells as well as in embryos.)

The researchers used their model to predict the impact of non-inherited, noncoding mutations in each child with autism. They then compared those predictions with the predicted effects of the corresponding, unmutated sequences carried by the child's unaffected sibling.

"The design of the Simons Simplex Collection is what allowed us to do this study," says Zhou. "The unaffected siblings are a built-in control."

Noncoding mutations in many of the children with autism altered gene regulation, the analysis suggested. Moreover, the results suggested that the mutations affected gene expression in the brain and genes already linked to autism, such as those responsible for neuron migration and development. "This is consistent with how autism most likely manifests in the brain," says study co-author Christopher Park, a research scientist at CCB. "It's not just the number of mutations occurring, but what kind of mutations are occurring."

The researchers tested the effects of some of the noncoding mutations in laboratory experiments. They inserted predicted high-impact mutations found in children with autism into cells and observed the resulting changes in gene expression. These changes affirmed the model's predictions.

Read more at Science Daily

Scientists uncover a trove of genes that could hold key to how humans evolved

DNA sequence illustration
Researchers at the Donnelly Centre in Toronto have found that dozens of genes, previously thought to have similar roles across different organisms, are in fact unique to humans and could help explain how our species came to exist.

These genes code for a class of proteins known as transcription factors, or TFs, which control gene activity. TFs recognize specific snippets of the DNA code called motifs, and use them as landing sites to bind the DNA and turn genes on or off.

Previous research had suggested that TFs which look similar across different organisms also bind similar motifs, even in species as diverse as fruit flies and humans. But a new study from Professor Timothy Hughes' lab, at the Donnelly Centre for Cellular and Biomolecular Research, shows that this is not always the case.

Writing in the journal Nature Genetics, the researchers describe a new computational method which allowed them to more accurately predict motif sequences each TF binds in many different species. The findings reveal that some sub-classes of TFs are much more functionally diverse than previously thought.

"Even between closely related species there's a non-negligible portion of TFs that are likely to bind new sequences," says Sam Lambert, former graduate student in Hughes' lab who did most of the work on the paper and has since moved to the University of Cambridge for a postdoctoral stint.

"This means they are likely to have novel functions by regulating different genes, which may be important for species differences," he says.

Even between chimps and humans, whose genomes are 99 per cent identical, there are dozens of TFs that recognize different motifs in the two species, in a way that would affect the expression of hundreds of different genes.

"We think these molecular differences could be driving some of the differences between chimps and humans," says Lambert, who won the Jennifer Dorrington Graduate Research Award for outstanding doctoral research at U of T's Faculty of Medicine.

To reanalyze motif sequences, Lambert developed new software that looks for structural similarities between the TFs' DNA-binding regions that relate to their ability to bind the same or different DNA motifs. If two TFs from different species have a similar composition of amino acids, the building blocks of proteins, they probably bind similar motifs. But unlike older methods, which compare these regions as a whole, Lambert's automatically assigns greater value to those amino acids -- a fraction of the entire region -- that directly contact the DNA. In this case, two TFs may look similar overall, but if they differ at these key positions, they are more likely to bind different motifs.

When Lambert compared all TFs across different species and matched them to all available motif sequence data, he found that many human TFs recognize different sequences -- and therefore regulate different genes -- than versions of the same proteins in other animals.
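A toy version of that weighting idea, written as a sketch rather than Lambert's actual software, might look like the following: positions assumed to contact the DNA directly simply count more when two pre-aligned binding regions are compared.

```python
from typing import Set

# Toy sketch only, not the study's software: compare two pre-aligned
# DNA-binding regions, weighting assumed DNA-contacting positions more.
def weighted_similarity(domain_a: str, domain_b: str,
                        dna_contact_positions: Set[int],
                        contact_weight: float = 5.0) -> float:
    """Weighted fraction of positions at which two aligned domains match."""
    assert len(domain_a) == len(domain_b), "domains must be pre-aligned"
    total = matched = 0.0
    for i, (a, b) in enumerate(zip(domain_a, domain_b)):
        w = contact_weight if i in dna_contact_positions else 1.0
        total += w
        if a == b:
            matched += w
    return matched / total

# A mismatch away from the contact positions barely lowers the score...
print(weighted_similarity("RKDELQAY", "RKDELQAF", {2, 5}))  # ~0.94
# ...while a mismatch at a contact position lowers it much more.
print(weighted_similarity("RKDELQAY", "RKCELQAY", {2, 5}))  # ~0.69
```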

The finding contradicts earlier research, which held that human and fruit fly TFs almost always bind the same motif sequences, and it is a call for caution to scientists hoping to draw insights about human TFs by studying only their counterparts in simpler organisms.

"There is this idea that has persevered, which is that the TFs bind almost identical motifs between humans and fruit flies," says Hughes, who is also a professor in U of T's Department of Molecular Genetics and Fellow of the Canadian Institute for Advanced Research. "And while there are many examples where these proteins are functionally conserved, this is by no means to the extent that has been accepted."

As for the TFs with unique human roles, these belong to the rapidly evolving class of so-called C2H2 zinc finger TFs, named for the zinc-ion-containing, finger-like protrusions with which they bind DNA.

Their role remains an open question but it is known that organisms with more diverse TFs also have more cell types, which can come together in novel ways to build more complicated bodies.

Hughes is excited by the tantalizing possibility that some of these zinc finger TFs could be responsible for unique features of human physiology and anatomy -- our immune system and our brain, which are the most complex among animals. Another possibility concerns sexual dimorphism: the countless visible, and often less obvious, differences between the sexes that guide mate selection -- decisions that have an immediate impact on reproductive success and can also have a profound impact on physiology in the long term. The peacock's tail and facial hair in men are classic examples of such features.

"Almost nobody in human genetics studies the molecular basis of sexual dimorphism, yet these are features that all human beings see in each other and that we are all fascinated with," says Hughes. "I'm tempted to spend the last half of my career working on this, if I can figure out how to do it!"

Read more at Science Daily

May 26, 2019

On Mars, sands shift to a different drum

Illustration of sand dunes on Mars
Wind has shaped the face of Mars for millennia, but its exact role in piling up sand dunes, carving out rocky escarpments or filling impact craters has eluded scientists until now.

In the most detailed analysis of how sands move around on Mars, a team of planetary scientists led by Matthew Chojnacki at the University of Arizona Lunar and Planetary Lab set out to uncover the conditions that govern sand movement on Mars and how they differ from those on Earth.

The results, published in the current issue of the journal Geology, reveal that processes not involved in controlling sand movement on Earth play major roles on Mars, especially large-scale features on the landscape and differences in landform surface temperature.

"Because there are large sand dunes found in distinct regions of Mars, those are good places to look for changes," said Chojnacki, associate staff scientist at the UA and lead author of the paper, "Boundary conditions controls on the high-sand-flux regions of Mars." "If you don't have sand moving around, that means the surface is just sitting there, getting bombarded by ultraviolet and gamma radiation that would destroy complex molecules and any ancient Martian biosignatures."

The Martian atmosphere is so thin that its average surface pressure is a mere 0.6 percent of Earth's air pressure at sea level. Consequently, sediments on the Martian surface move more slowly than their counterparts on Earth.

The Martian dunes observed in this study ranged from 6 to 400 feet tall and were found to creep along at a fairly uniform average speed of two feet per Earth year. For comparison, some of the faster sand dunes on Earth, such as those in North Africa, migrate at 100 feet per year.
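As a quick sanity check on those figures (a sketch using only the numbers quoted above, not anything from the paper's own analysis):

```python
# Back-of-envelope comparison of the migration rates quoted above.
mars_ft_per_year = 2.0     # typical Martian dune migration, feet per Earth year
earth_ft_per_year = 100.0  # fast North African dunes, feet per year
print(f"Fast terrestrial dunes outpace typical Martian dunes by ~{earth_ft_per_year / mars_ft_per_year:.0f}x")
```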

"On Mars, there simply is not enough wind energy to move a substantial amount of material around on the surface," Chojnacki said. "It might take two years on Mars to see the same movement you'd typically see in a season on Earth."

Planetary geologists had been debating whether the sand dunes on the red planet were relics from a distant past, when the atmosphere was much thicker, or whether drifting sands still reshape the planet's face today, and if so, to what degree.

"We wanted to know: Is the movement of sand uniform across the planet, or is it enhanced in some regions over others?" Chojnacki said. "We measured the rate and volume at which dunes are moving on Mars."

The team used images taken by the HiRISE camera aboard NASA's Mars Reconnaissance Orbiter, which has been surveying Earth's next-door neighbor since 2006. HiRISE, which stands for High Resolution Imaging Science Experiment, is led by the UA's Lunar and Planetary Laboratory and has captured about three percent of the Martian surface in stunning detail.

The researchers mapped sand volumes, dune migration rates and heights for 54 dune fields, encompassing 495 individual dunes.

"This work could not have been done without HiRISE," said Chojnacki, who is a member of the HiRISE team. "The data did not come just from the images, but was derived through our photogrammetry lab that I co-manage with Sarah Sutton. We have a small army of undergraduate students who work part time and build these digital terrain models that provide fine-scale topography."

Across Mars, the survey found active, wind-shaped beds of sand and dust in structural fossae -- craters, canyons, rifts and cracks -- as well as volcanic remnants, polar basins and plains surrounding craters.

In the study's most surprising finding, the researchers discovered that the largest movements of sand in terms of volume and speed are restricted to three distinct regions: Syrtis Major, a dark spot larger than Arizona that sits directly west of the vast Isidis basin; Hellespontus Montes, a mountain range about two-thirds the length of the Cascades; and North Polar Erg, a sea of sand lapping around the north polar ice cap. All three areas are set apart from other parts of Mars by conditions not known to affect terrestrial dunes: stark transitions in topography and surface temperatures.

"Those are not factors you would find in terrestrial geology," Chojnacki said. "On Earth, the factors at work are different from Mars. For example, ground water near the surface or plants growing in the area retard dune sand movement."

On a smaller scale, basins filled with bright dust were found to have higher rates of sand movement, as well.

"A bright basin reflects the sunlight and heats up the air above much more quickly than the surrounding areas, where the ground is dark," Chojnacki said, "so the air will move up the basin toward the basin rim, driving the wind, and with it, the sand."

Read more at Science Daily

Mites and ticks are close relatives, new research shows

Mite illustration.
Scientists from the University of Bristol and the Natural History Museum in London have reconstructed the evolutionary history of the chelicerates, the mega-diverse group of roughly 110,000 arthropod species that includes spiders, scorpions, mites and ticks.

They found, for the first time, genomic evidence that mites and ticks do not constitute two distantly related lineages; rather, they are part of the same evolutionary line. This makes them the most diverse group of chelicerates, changing our perspective on their biodiversity.

Arthropoda, the jointed-legged animals, make up the majority of animal biodiversity. They pollinate our crops (bees) and destroy them (locusts), are major food sources (shrimps and crabs), and are vectors of serious diseases like malaria and Lyme disease (mosquitoes and ticks).

Arthropods are ancient, and fossils show that they have been around for more than 500 million years. The secret of their evolutionary success, which is reflected in their outstanding species diversity, is still unknown. To clarify what makes arthropods so successful, we first need to understand how the different arthropod lineages relate to each other.

Co-author of the study, Professor Davide Pisani, from the University of Bristol's School of Earth Sciences and Biological Sciences, said: "Finding that mites and ticks constitute a single evolutionary lineage is really important for our understanding of how biodiversity is distributed within Chelicerata.

"Spiders, with more than 48,000 described species, have long been considered the most biodiverse chelicerate lineage, but 42,000 mite and 12,000 tick species have been described. So, if mites and ticks are a single evolutionary entity rather than two distantly related ones, they are more diverse than the spiders."

Dr Greg Edgecombe of the Natural History Museum London added: "Because of their anatomical similarities it has long been suspected that mites and ticks form a natural evolutionary group, which has been named Acari. However, not all anatomists agreed, and genomic data never found any support for this idea before."

Lead author, Dr Jesus Lozano Fernandez, from Bristol's School of Biological Sciences, said: "Spiders are iconic terrestrial animals that have always been part of the human imagination and folklore, representing mythological and cultural symbols, as well as often being objects of inner fears or admiration.

"Spiders have long been considered the most biodiverse chelicerate lineage, but our findings show that Acari is, in fact, bigger."

To arrive at their findings, the researchers used an almost even representation of mites and ticks (10 and 11 species, respectively), the most complete species-level sampling at the genomic level for these groups so far.

Dr Lozano-Fernandez added: "Regardless of the methods we used, our results converge on the same answer -- mites and ticks really do form a natural group. Evolutionary trees like the one we've reconstructed provide us with the background information we need to interpret processes of genomic change."
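To illustrate the kind of question being asked (do mites and ticks group together regardless of method?), here is a toy clustering sketch with invented pairwise genomic distances. It is not the study's phylogenomic pipeline, and every number in it is made up for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

taxa = ["mite_1", "mite_2", "tick_1", "tick_2", "spider", "scorpion"]
# Hypothetical distances: mites and ticks are close to one another,
# and all are further from the spider and the scorpion.
D = np.array([
    [0.00, 0.10, 0.20, 0.22, 0.45, 0.47],
    [0.10, 0.00, 0.21, 0.23, 0.46, 0.48],
    [0.20, 0.21, 0.00, 0.09, 0.44, 0.46],
    [0.22, 0.23, 0.09, 0.00, 0.45, 0.47],
    [0.45, 0.46, 0.44, 0.45, 0.00, 0.30],
    [0.47, 0.48, 0.46, 0.47, 0.30, 0.00],
])
tree = linkage(squareform(D), method="average")  # UPGMA-style average linkage
print(dendrogram(tree, labels=taxa, no_plot=True)["ivl"])
# With these toy distances, the mites and ticks cluster together (an
# 'Acari'-like group) before either joins the spider or the scorpion.
```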

Read more at Science Daily