Nov 24, 2018

Gigantic mammal 'cousin' discovered

During the Triassic period (252-201 million years ago) mammal-like reptiles called therapsids co-existed with ancestors to dinosaurs, crocodiles, mammals, pterosaurs, turtles, frogs, and lizards. One group of therapsids is the dicynodonts. Researchers at Uppsala University in Sweden, together with colleagues in Poland, have discovered fossils from a new genus of gigantic dicynodont. The new species Lisowicia bojani is described in the journal Science.

The earth is about 4.5 billion years old and has gone through many geological periods and dramatic change. During the Triassic period, about 252-201 million years ago, all land on Earth came together and formed the massive continent called Pangea. During this time, the first dinosaurs came into being as well as ancestors to crocodiles, mammals, pterosaurs, turtles, frogs, and lizards. Recently, scientists have become interested in another type of animal, therapsids. Therapsids were "mammal-like" reptiles and are ancestors to the mammals, including humans, found today. One group of therapsids is called dicynodonts. All species of dicynodonts were herbivores (plant eaters) and their sizes ranged from small burrowers to large browsers. Most of them were also toothless. They survived the Permian mass extinction and became the dominant terrestrial herbivores in the Middle and Late Triassic. They were thought to have died out before the dinosaurs became the dominant form of tetrapod on land.

For the first time, researchers in the research programme Evolution and Development at Uppsala University, in collaboration with researchers at the Polish Academy of Sciences (Warsaw), have discovered fossils from a new species of dicynodont in the Polish village of Lisowice. The species was named Lisowicia bojani after the village and after Ludwig Heinrich Bojanus, a German comparative anatomist who worked in Vilnius and is known for making several important anatomical discoveries. The findings show that Lisowicia was about the size of a modern-day elephant: about 4.5 metres long, 2.6 metres high and approximately 9 tons in weight, some 40 percent larger than any previously identified dicynodont. Analysis of the limb bones showed that it grew fast, much like a mammal or a dinosaur. It lived during the Late Triassic, about 210-205 million years ago, roughly 10 million years later than previous dicynodont finds.

"The discovery of Lisowicia changes our ideas about the latest history of dicynodonts, mammal Triassic relatives. It also raises far more questions about what really make them and dinosaurs so large," says Dr Tomasz Sulej, Polish Academy of Sciences.

"Dicynodonts were amazingly successful animals in the Middle and Late Triassic. Lisowicia is the youngest dicynodont and the largest non-dinosaurian terrestrial tetrapod from the Triassic. It's natural to want to know how dicynodonts became so large. Lisowicia is hugely exciting because it blows holes in many of our classic ideas of Triassic 'mammal-like reptiles'," says Dr Grzegorz Niedzwiedzki, Uppsala University.

The first findings of fossils from Lisowice in Poland were made in 2005 by Robert Borzęcki and Piotr Menducki. Since then, more than 1,000 bones and bone fragments have been collected from the area, including fossils from Lisowicia. The area is thought to have been a river deposit during the Late Triassic period.

The discovery of Lisowicia provides the first evidence that mammal-like, elephant-sized dicynodonts were present at the same time as the more well-known long-necked sauropodomorph dinosaurs, contrary to previous belief. Sauropodomorphs include species like Diplodocus and Brachiosaurus. The find fills a gap in the dicynodont fossil record and shows that some anatomical features of the limbs thought to characterize large mammals or dinosaurs also evolved in non-mammalian synapsids. Finally, these findings from Poland are the first substantial finds of dicynodonts from the Late Triassic in Europe.

Read more at Science Daily

Engineers fly first-ever plane with no moving parts

A new MIT plane is propelled via ionic wind. Batteries in the fuselage (tan compartment in front of plane) supply voltage to electrodes (blue/white horizontal lines) strung along the length of the plane, generating a wind of ions that propels the plane forward.
Since the first airplane took flight over 100 years ago, virtually every aircraft in the sky has flown with the help of moving parts such as propellers, turbine blades, and fans, which are powered by the combustion of fossil fuels or by battery packs that produce a persistent, whining buzz.

Now MIT engineers have built and flown the first-ever plane with no moving parts. Instead of propellers or turbines, the light aircraft is powered by an "ionic wind" -- a silent but mighty flow of ions that is produced aboard the plane, and that generates enough thrust to propel the plane over a sustained, steady flight.

Unlike turbine-powered planes, the aircraft does not depend on fossil fuels to fly. And unlike propeller-driven drones, the new design is completely silent.

"This is the first-ever sustained flight of a plane with no moving parts in the propulsion system," says Steven Barrett, associate professor of aeronautics and astronautics at MIT. "This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions."

He expects that in the near-term, such ion wind propulsion systems could be used to fly less noisy drones. Further out, he envisions ion propulsion paired with more conventional combustion systems to create more fuel-efficient, hybrid passenger planes and other large aircraft.

Barrett and his team at MIT have published their results in the journal Nature.

Hobby crafts

Barrett says the inspiration for the team's ion plane comes partly from the movie and television series, "Star Trek," which he watched avidly as a kid. He was particularly drawn to the futuristic shuttlecrafts that effortlessly skimmed through the air, with seemingly no moving parts and hardly any noise or exhaust.

"This made me think, in the long-term future, planes shouldn't have propellers and turbines," Barrett says. "They should be more like the shuttles in 'Star Trek,' that have just a blue glow and silently glide."

About nine years ago, Barrett started looking for ways to design a propulsion system for planes with no moving parts. He eventually came upon "ionic wind," also known as electroaerodynamic thrust -- a physical principle that was first identified in the 1920s and describes a wind, or thrust, that can be produced when a current is passed between a thin and a thick electrode. If enough voltage is applied, the air in between the electrodes can produce enough thrust to propel a small aircraft.
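To get a feel for the scale involved, the sketch below runs a first-order estimate of electroaerodynamic thrust, assuming the idealized one-dimensional relation T = I·d/μ (ion current I, electrode gap d, ion mobility μ). The numerical values are illustrative assumptions, not the MIT aircraft's actual parameters, and real devices fall well short of this ideal.

```python
# First-order estimate of electroaerodynamic ("ionic wind") thrust.
# Idealized 1-D model: thrust T = I * d / mu, where I is the ion current,
# d is the gap between the thin (emitter) and thick (collector) electrodes,
# and mu is the ion mobility in air. Values below are illustrative assumptions,
# not the MIT design's parameters; real thrusters are less efficient.

MU_AIR_IONS = 2.0e-4   # m^2 / (V*s), approximate mobility of ions in air
GAP = 0.3              # m, assumed emitter-collector spacing
VOLTAGE = 40_000.0     # V, potential applied across the gap
CURRENT = 2.0e-3       # A, assumed total corona current

thrust = CURRENT * GAP / MU_AIR_IONS          # newtons
electrical_power = CURRENT * VOLTAGE          # watts
thrust_per_watt = thrust / electrical_power   # N/W, ideal upper bound

print(f"Ideal thrust:      {thrust:.2f} N")
print(f"Electrical power:  {electrical_power:.0f} W")
print(f"Thrust-to-power:   {thrust_per_watt * 1000:.1f} mN/W (ideal upper bound)")
```

Even this generous bound shows why tens of kilovolts and large electrode arrays are needed; losses in the corona discharge push real thrust-to-power well below it.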

For years, electroaerodynamic thrust has mostly been a hobbyist's project, and designs have for the most part been limited to small, desktop "lifters" tethered to large voltage supplies that create just enough wind for a small craft to hover briefly in the air. It was largely assumed that it would be impossible to produce enough ionic wind to propel a larger aircraft over a sustained flight.

"It was a sleepless night in a hotel when I was jet-lagged, and I was thinking about this and started searching for ways it could be done," he recalls. "I did some back-of-the-envelope calculations and found that, yes, it might become a viable propulsion system," Barrett says. "And it turned out it needed many years of work to get from that to a first test flight."

Ions take flight

The team's final design resembles a large, lightweight glider. The aircraft, which weighs about 5 pounds and has a 5-meter wingspan, carries an array of thin wires, which are strung like horizontal fencing along and beneath the front end of the plane's wing. The wires act as positively charged electrodes, while similarly arranged thicker wires, running along the back end of the plane's wing, serve as negative electrodes.

The fuselage of the plane holds a stack of lithium-polymer batteries. Barrett's ion plane team included members of Professor David Perreault's Power Electronics Research Group in the Research Laboratory of Electronics, who designed a power supply that would convert the batteries' output to a sufficiently high voltage to propel the plane. In this way, the batteries supply electricity at 40,000 volts to positively charge the wires via a lightweight power converter.

Once the wires are energized, they act to attract and strip away negatively charged electrons from the surrounding air molecules, like a giant magnet attracting iron filings. The air molecules that are left behind are newly ionized, and are in turn attracted to the negatively charged electrodes at the back of the plane.

As the newly formed cloud of ions flows toward the negatively charged wires, each ion collides millions of times with other air molecules, creating a thrust that propels the aircraft forward.

The team, which also included Lincoln Laboratory staff Thomas Sebastian and Mark Woolston, flew the plane in multiple test flights across the gymnasium in MIT's duPont Athletic Center -- the largest indoor space they could find to perform their experiments. The team flew the plane a distance of 60 meters (the maximum distance within the gym) and found the plane produced enough ionic thrust to sustain flight the entire time. They repeated the flight 10 times, with similar performance.

"This was the simplest possible plane we could design that could prove the concept that an ion plane could fly," Barrett says. "It's still some way away from an aircraft that could perform a useful mission. It needs to be more efficient, fly for longer, and fly outside."

Barrett's team is working on increasing the efficiency of their design, to produce more ionic wind with less voltage. The researchers are also hoping to increase the design's thrust density -- the amount of thrust generated per unit area. Currently, flying the team's lightweight plane requires a large area of electrodes, which essentially makes up the plane's propulsion system. Ideally, Barrett would like to design an aircraft with no visible propulsion system or separate control surfaces such as rudders and elevators.

"It took a long time to get here," Barrett says. "Going from the basic principle to something that actually flies was a long journey of characterizing the physics, then coming up with the design and making it work. Now the possibilities for this kind of propulsion system are viable."

Read more at Science Daily

Nov 23, 2018

Widely used reference for the human genome is missing 300 million bits of DNA

For the past 17 years, most scientists around the globe have been using a single assembly of DNA information -- a genome derived primarily from one individual -- as a kind of "baseline" reference and representation of the human species for comparing genetic variety among groups of people.

Known as the GRCh38 reference genome, it is periodically updated with DNA sequences from other individuals, but in a new analysis, Johns Hopkins scientists now say that the collective genomes of 910 people of African descent have a large chunk -- about 300 million bits -- of genetic material that is missing from the basic reference genome.

"There's so much more human DNA than we originally thought," says Steven Salzberg, Ph.D., the Bloomberg Distinguished Professor of Biomedical Engineering, Computer Science, and Biostatistics at The Johns Hopkins University.

Knowing the variations in genomes across populations is essential for designing research that reveals why certain people or groups of people may be more or less susceptible to common health conditions, such as heart disease, cancer and diabetes. Salzberg says that scientists need to build more reference genomes that more closely reflect different populations.

"The whole world is relying on what is essentially a single reference genome, and when a particular DNA analysis doesn't match the reference and you throw away those non-matching sequences, those discarded bits may in fact hold the answers and clues you are seeking," says Salzberg.

Rachel Sherman, the first author on the report and a Ph.D. student in computer science at Johns Hopkins, says, "If you are a scientist looking for genome variations linked to a condition that is more prevalent in a certain population, you'd want to compare the genomes to a reference genome more representative of that population."

Specifically, the world's reference genome was assembled from the nucleic acid sequences of a handful of anonymous volunteers. Other researchers later determined that 70 percent of the reference genome derives from a single individual who was half European and half African, and the rest derives from multiple individuals of European and Chinese descent, according to Salzberg.

"These results underscore the importance of research on populations from diverse backgrounds and ancestries to create a comprehensive and inclusive picture of the human genome," said James P. Kiley, Ph.D., director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute (NHLBI), which supported the study. "A more complete picture of the human genome may lead to a better understanding of variations in disease risk across different populations."

For the new analysis, described online Nov. 19 in Nature Genetics, Salzberg and Sherman began their project with DNA collected from 910 individuals of African descent who live in 20 regions around the globe, including the U.S., Central Africa and the Caribbean. Their DNA had been collected for an NHLBI-supported study at Johns Hopkins led by Kathleen Barnes, Ph.D., who is now at the University of Colorado and continues to lead this program on genetic factors that may contribute to asthma and allergy, conditions known to be overrepresented in this population.

Many researchers look for small differences between the reference genome and the genomes of the individuals they are studying -- sometimes only a single change in chemical base pairs within the DNA. These small changes are called single nucleotide polymorphisms, or SNPs.

However, Salzberg's team focuses on larger variations in the genome. "SNPs correlate really well to figure out an individual's ancestry, but they haven't worked as well to determine genetic variations that may contribute to common conditions and diseases," says Salzberg. "Some conditions may be due to variations across larger sections of the genome."

Over a two-year period, Salzberg and Sherman analyzed the DNA sequences of the 910 people, looking for sections of DNA at least 1,000 base pairs long that did not align with or match the reference genome. "Within these DNA sequences are what makes one individual unique," says Sherman.

They assembled those sequences and looked for overlaps and redundancies, filtering out sequences shorter than 1,000 base pairs as well as DNA likely derived from bacteria, which are found in all humans.

Then they compared the assembled sequences of all 910 individuals to the standard reference genome to find what Salzberg calls "chunks of DNA that you may have and I don't."
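The filtering step described above boils down to a few simple checks. The sketch below is a schematic illustration only, with hypothetical inputs and helper names; the actual analysis used dedicated assembly, alignment and contaminant-screening tools that are not shown here.

```python
# Schematic of the contig-filtering step described above (illustration only).
# `contigs` maps contig IDs to assembled sequences that did not align to the
# GRCh38 reference; `contaminants` holds sequences flagged by a separate
# bacterial screen. Both inputs and this helper are hypothetical.

MIN_LENGTH = 1000  # keep only stretches of at least 1,000 base pairs

def filter_contigs(contigs: dict[str, str], contaminants: set[str]) -> dict[str, str]:
    kept: dict[str, str] = {}
    seen: set[str] = set()
    for contig_id, seq in contigs.items():
        if len(seq) < MIN_LENGTH:
            continue          # too short to count as a novel sequence
        if seq in contaminants:
            continue          # likely bacterial DNA, present in all human samples
        if seq in seen:
            continue          # redundant copy of a sequence already kept
        seen.add(seq)
        kept[contig_id] = seq
    return kept

# Example with toy data (real contigs are thousands of base pairs long):
demo = {"c1": "ACGT" * 300, "c2": "ACGT" * 10, "c3": "ACGT" * 300}
print(filter_contigs(demo, contaminants=set()))  # c1 kept; c2 too short; c3 duplicate
```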

In all, they found 300 million base pairs of DNA -- which is about 10 percent of the estimated size of the entire human genome -- that the reference genome did not account for. The largest section of unique DNA they found was 152,000 base pairs long, but most chunks were about 1,000-5,000 base pairs long.

A small portion of these DNA sequences may overlap with genes that encode proteins or other cellular functions, but, Salzberg says, they have not mapped the function of each sequence.

Read more at Science Daily

Removing toxic mercury from contaminated water

When mercury ions (light purple) in a liquid come near an electrode of platinum, they are attracted to the electrode's surface where they get reduced to metallic mercury. On the electrode, mercury atoms (dark purple) and platinum atoms (grey) develop into a very strong alloy, and the mercury is thus removed from the water.
Water contaminated with mercury and other toxic heavy metals is a major cause of environmental damage and health problems worldwide. Now, researchers from Chalmers University of Technology, Sweden, present a totally new way to clean contaminated water, through an electrochemical process. The results are published in the scientific journal Nature Communications.

"Our results have really exceeded the expectations we had when we started with the technique," says the research leader Björn Wickman, from Chalmers' Department of Physics. "Our new method makes it possible to reduce the mercury content in a liquid by more than 99%. This can bring the water well within the margins for safe human consumption."

According to the World Health Organisation (WHO), mercury is one of the most harmful substances for human health. It can influence the nervous system, the development of the brain, and more. It is particularly harmful for children and can also be transmitted from a mother to a child during pregnancy. Furthermore, mercury spreads very easily through nature, and can enter the food chain. Freshwater fish, for example, often contain high levels of mercury.

In the last two years, Björn Wickman and Cristian Tunsu, researcher at the Department of Chemistry and Chemical Engineering at Chalmers, have studied an electrochemical process for cleaning mercury from water. Their method works by extracting the heavy metal ions from the water and encouraging them to form an alloy with another metal.

"Today, cleaning away the low, yet harmful, levels of mercury from large amounts of water is a major challenge. Industries need better methods to reduce the risk of mercury being released in nature," says Björn Wickman.

Their new method involves a metal plate -- an electrode -- that binds specific heavy metals to it. The electrode is made of the noble metal platinum, and through an electrochemical process it draws the toxic mercury out of the water to form an alloy of the two. In this way, the water is cleaned of the mercury contamination. The alloy formed by the two metals is very stable, so there is no risk of the mercury re-entering the water.

"An alloy of this type has been made before, but with a totally different purpose in mind. This is the first time the technique with electrochemical alloying has been used for decontamination purposes," says Cristian Tunsu.

One strength of the new cleaning technique is that the electrode has a very high capacity. Each platinum atom can bond with four mercury atoms. Furthermore, the mercury atoms do not only bond on the surface, but also penetrate deeper into the material, creating thick layers. This means the electrode can be used for a long time. After use, it can be emptied in a controlled way, so the electrode can be recycled and the mercury disposed of safely. A further advantage of the process is that it is very energy efficient.
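To put that 1:4 ratio in perspective, the quick calculation below estimates how much mercury a gram of platinum could bind if the entire electrode reached the four-to-one stoichiometry. This is an idealized, illustrative upper bound, not a capacity figure reported by the researchers.

```python
# Idealized capacity estimate: every platinum atom binds four mercury atoms
# (as described in the article), and the whole electrode participates.
MOLAR_MASS_PT = 195.08   # g/mol, platinum
MOLAR_MASS_HG = 200.59   # g/mol, mercury
HG_ATOMS_PER_PT = 4

grams_hg_per_gram_pt = HG_ATOMS_PER_PT * MOLAR_MASS_HG / MOLAR_MASS_PT
print(f"~{grams_hg_per_gram_pt:.1f} g of mercury captured per gram of platinum (upper bound)")
# -> roughly 4.1 g of mercury per gram of platinum
```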

"Another great thing with our technique is that it is very selective. Even though there may be many different types of substance in the water, it just removes the mercury. Therefore, the electrode doesn't waste capacity by unnecessarily taking away other substances from the water," says Björn Wickman.

A patent for the new method is being sought, and the new company Atium has been set up to commercialise the discovery. The innovation has already received a number of prizes and awards, both in Sweden and internationally. The researchers and their colleagues in the company have also had a strong response from industry.

"We have already had positive interactions with a number of interested parties, who are keen to test the method. Right now, we are working on a prototype which can be tested outside the lab under real-world conditions."

Read the article, "Effective removal of mercury from aqueous streams via electrochemical alloy formation on platinum" in Nature Communications.

Potential uses for the new method

The technique could be used to reduce the amount of waste and increase the purity of waste and process water in the chemical and mining industries, and in metal production. It can contribute to better environmental cleaning of places with contaminated land and water sources.

It can even be used to clean drinking water in badly affected environments because, thanks to its low energy use, it can be powered totally by solar cells. Therefore, it can be developed into a mobile and reusable water cleaning technology.

More on heavy metals in our environment

Heavy metals in water sources create enormous environmental problems and influence the health of millions of people around the world. Heavy metals are toxic for all living organisms in the food chain. According to the WHO, mercury is one of the most dangerous substances for human health, influencing our nervous system, brain development and more. The substance is especially dangerous for children and unborn babies.

Today there are strict regulations concerning the management of toxic heavy metals to hinder their spread in nature. But many places worldwide are already contaminated, and heavy metals can be transported by rain or through the air. As a result, they can accumulate in certain environments, for example in fish in freshwater sources.

Read more at Science Daily

Wild coffee plants, Christmas trees and chocolate's tree are surprisingly poorly protected

Headlines about threatened plant species often focus on hardwood plundered from the Amazon or obscure plants known only to specialized botanists. A new way of measuring plant conservation shows that a wide range of wild plants used for food, medicine, shelter, fuel, livestock forage and other valuable purposes are at risk. These include wild populations of firs used for Christmas trees, the original types of kitchen-cupboard staples like vanilla, chamomile, cacao and cinnamon, wild relatives of crops like coffee, and non-cultivated plants used by bees to make honey.

The indicator finds that less than 3 percent of the 7,000 evaluated species are presently classified as "low priority" or "sufficiently conserved." This is worrying, say the authors, since the indicator was designed to measure countries' progress toward ambitious conservation goals that are supposed to be met by 2020, including the Convention on Biological Diversity Aichi Target 13 and Sustainable Development Goal 2.5.

"This indicator underscores the urgency to protect the world's useful wild plants," said Colin Khoury, the study's lead author and a biodiversity specialist at the International Center for Tropical Agriculture (CIAT), a global research organization based in Colombia. "The indicator not only helps us measure where countries and the world stand with regard to safeguarding this natural and cultural heritage, but it provides actual information per species that can be used to take action to improve their conservation status."

The indicator was developed by CIAT in collaboration with the Global Crop Diversity Trust, the U.S. Department of Agriculture, and a number of universities and conservation organizations.

The Useful Plants Indicator scores almost 7,000 useful wild plants from 220 countries on a scale of 0-100, with 100 meaning fully protected. Any plant rated 75 or higher is "sufficiently conserved." Low, medium and high priority for conservation are reflected by scores of 50-74, 25-49 and 0-24, respectively. The indicator gives equal weight to in situ plant conservation -- in protected areas like national parks -- and ex situ conservation, which covers plants safeguarded in gene banks, botanical gardens and other conservation repositories.
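Those score bands translate into a simple lookup. The helper below is a hypothetical illustration using the thresholds quoted in the article, not code from the indicator project.

```python
# Map a Useful Plants Indicator score to its conservation category,
# using the thresholds described in the article (hypothetical helper).

def conservation_category(score: float) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 75:
        return "sufficiently conserved"
    if score >= 50:
        return "low priority"
    if score >= 25:
        return "medium priority"
    return "high priority"

print(conservation_category(13.5))   # Nordmann fir (see below) -> high priority
print(conservation_category(59.5))   # Veitch's silver fir -> low priority
```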

The findings will be published in the journal Ecological Indicators in March 2019. An open-access online version of the study was published in November and a digital Useful Plants Indicator to explore the results was launched by CIAT on Nov. 21. The Biodiversity Indicators Partnership of the Convention on Biological Diversity is considering adopting the indicator as a tool to measure progress toward the Aichi Targets.

Coffea liberica, a wild coffee plant that is used to make a caffeinated brew in parts of Africa and is sought by coffee breeders for its disease resistance, scores only 32.3 out of 100 on the indicator. The wild ancestor of the connoisseur's preferred bean, C. arabica, does not fare much better, scoring 33.8. Of the 32 coffee species listed, none scores higher than 35.3. Of those, more than two thirds have no known viable genetic material stored in gene banks or other repositories.

Other wild species of note:

  • Abies nordmanniana, the Nordmann fir, a popular European Christmas tree, scores 13.5, placing it in the center of the high-priority category scale (0-24). Of the list's 13 fir species, only Japan's A. veitchii, Veitch's silver fir, is a low-priority species, with a score of 59.5.
  • Theobroma cacao, the wild ancestor of chocolate native to the tropical Americas, scores 35.4.
  • Wild natural remedy plants such as valerian (Valeriana officinalis), chamomile (Matricaria chamomilla) and the famous Chinese remedy Ginkgo biloba score 29.1, 27.8 and 26.7, respectively.
  • The United States' paw paw tree (Asimina triloba) stands at 25.4, on the cusp between high and medium priority status.
  • Vanilla (Vanilla planifolia) and cinnamon (Cinnamomum verum) score 39.8 and 23.0, respectively.

Most, but not all, of these plants are weighed down by their lack of ex situ conservation, said Khoury.

Researchers expected lower results for plants in natural settings, which makes the 40.7 average in situ score surprising, said Khoury. They cautioned, however, that relying on plant preservation strictly in natural protected areas is no longer a sure bet. Rapid climate change can force species to shift their ranges beyond park borders, and the edges of many protected areas are subject to unchecked habitat destruction.

"The indicator shows that the network of protected areas around the world is doing something significant for useful plants," Khoury said. "But if we want to get serious about protecting these species, especially the ones that are vulnerable, we have a long way to go before they are fully protected."

How is your country conserving?

The impetus to build the indicator came from international agreements to conserve the thousands of wild plants that provide valuable ecosystem and cultural services to humanity. These include the Convention on Biological Diversity, in particular the Biodiversity Indicators Partnership (BIP) project "Mind the Gap," which funded the project to produce the indicator for Aichi Target 13, as well as the United Nations' Sustainable Development Goals and the International Treaty on Plant Genetic Resources for Food and Agriculture. Some call for fully safeguarding this plant biodiversity by the end of this decade.

"There's no way we're going to hit these 2020 targets," Khoury said.

The indicator draws from some 43 million plant records from almost every country on Earth. The combined indicator (both in situ and ex situ) shows that the world's top useful wild plant conservationists are South Korea, Botswana, and Chile.

Regional in situ conservation scores are highest in Northern Europe, which scores almost 90 out of 100 as a region. For the world's centers of biodiversity, South America's northern countries (Colombia, 72.9; Venezuela 78.9; Ecuador 70.6) and Central America's Panama (76) and Costa Rica (75.7) are among the leaders for conservation of the evaluated plants in natural settings. China (26.3), India (24.3) and Southeast Asia (19.8) have some of the lowest regional in situ conservation scores.

Canada (35.3) and the United States (36.5) lag behind all regions of Africa, which score from 42 to 59.7 on in situ conservation.

Read more at Science Daily

Being fair: The benefits of early childhood education

Children from low-income families who got intensive education early in life treat others with high levels of fairness in midlife, more than 40 years later, even when being fair comes at a high personal cost, according to a new study published today in Nature Communications.

The 78 people in the study were followed as part of the Abecedarian Project, begun in the 1970s and to this day one of the longest running randomized controlled studies of the effects of early childhood education in low-income and high-risk families.

Participants played games designed to measure their adherence to social norms and their social decision-making processes. In one game, a player was asked to split a sum of money -- $20 -- with another participant. The participant could either accept the amount proposed, or reject it, in which case neither received any money. When faced with unequal offers, participants had to make trade-offs between self-interest and the enforcement of social norms of equality.

This is where the value of early childhood education became apparent. Players who, in the 1970s, had been given intensive educational training including cognitive and social stimulation when they were young children, strongly rejected unequal division of money among players when they were in midlife, even if it meant they would miss out on hefty financial gains themselves.

"When someone rejects an offer, they are sending a very strong signal to the other player about the decision regarding how the money should be divided," said Université de Montréal assistant psychology professor Sébastien Hétu, a first-author of the study. "People who received educational training through the Abecedarian Project were inclined to accept generally equal offers, but would reject disadvantageous and advantageous offers. In effect, they punished transgressions that they judged to be outside of the social norm of equality."

Originally developed and led by Craig Ramey, a professor and distinguished research scholar at the Virginia Tech Carilion Research Institute, the Abecedarian Project investigates the impacts of intensive early childhood educational interventions on language and learning in disadvantaged children. The new research involves an international group of scientists led by Virginia Tech neuroscientist Read Montague, in whose laboratory Hétu was a postdoctoral associate before coming to Montreal.

Using computational modeling, the study's researchers also discovered differences in social decision-making strategies between participants. For example, in another game, players who had received educational interventions early in life planned further into the future than people who didn't.

"The participants who received early educational interventions were very sensitive to inequality, whether it was to their advantage or their disadvantage," said Yi Luo, first author of the study and a postdoctoral associate in Montague's lab. "Our results also suggest that they placed more value on the long-term benefits of promoting social norms as opposed to short-term benefits for personal gain."

Read more at Science Daily

Nov 22, 2018

Orange juice, leafy greens and berries may be tied to decreased memory loss in men

Eating leafy greens, dark orange and red vegetables and berry fruits, and drinking orange juice may be associated with a lower risk of memory loss over time in men, according to a study published in the November 21, 2018, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"One of the most important factors in this study is that we were able to research and track such a large group of men over a 20-year period of time, allowing for very telling results," said study author Changzheng Yuan, ScD, of Harvard T.H. Chan School of Public Health in Boston. "Our studies provide further evidence dietary choices can be important to maintain your brain health."

The study looked at 27,842 men with an average age of 51 who were all health professionals. Participants filled out questionnaires about how many servings of fruits, vegetables and other foods they had each day at the beginning of the study and then every four years for 20 years. A serving of fruit is considered one cup of fruit or ½ cup of fruit juice. A serving of vegetables is considered one cup of raw vegetables or two cups of leafy greens.

Participants also took subjective tests of their thinking and memory skills at least four years before the end of the study, when they were an average age of 73. The test is designed to detect changes that people can notice in how well they are remembering things before those changes would be detected by objective cognitive tests. Changes in memory reported by the participants would be considered precursors to mild cognitive impairment. The six questions include "Do you have more trouble than usual remembering a short list of items, such as a shopping list?" and "Do you have more trouble than usual following a group conversation or a plot in a TV program due to your memory?"

A total of 55 percent of the participants had good thinking and memory skills, 38 percent had moderate skills, and 7 percent had poor thinking and memory skills.

The participants were divided into five groups based on their fruit and vegetable consumption. For vegetables, the highest group ate about six servings per day, compared to about two servings for the lowest group. For fruits, the top group ate about three servings per day, compared to half a serving for the bottom group.

The men who consumed the most vegetables were 34 percent less likely to develop poor thinking skills than the men who consumed the least amount of vegetables. A total of 6.6 percent of men in the top group developed poor cognitive function, compared to 7.9 percent of men in the bottom group.

The men who drank orange juice every day were 47 percent less likely to develop poor thinking skills than the men who drank less than one serving per month. This association was mainly observed for regular consumption of orange juice among the oldest men. A total of 6.9 percent of men who drank orange juice every day developed poor cognitive function, compared to 8.4 percent of men who drank orange juice less than once a month. This difference in risk was adjusted for age but not adjusted for other factors related to reported changes in memory.

The men who ate the most fruit each day were less likely to develop poor thinking skills, but that association was weakened after researchers adjusted for other dietary factors that could affect the results, such as consumption of vegetables, fruit juice, refined grains, legumes and dairy products.

The researchers also found that people who ate larger amounts of fruits and vegetables 20 years earlier were less likely to develop thinking and memory problems, whether or not they kept eating larger amounts of fruits and vegetables about six years before the memory test.

The study does not show that eating fruits and vegetables and drinking orange juice reduces memory loss; it only shows a relationship between them.

A limitation of the study was that participants' memory and thinking skills were not tested at the beginning of the study to see how they changed over the course of the study. However, because all participants completed professional training, they can be assumed to have started with relatively high cognitive function in early adult life. In addition, the study participants were all male health professionals such as dentists, optometrists, and veterinarians. Thus, the results may not apply to women and other groups of men.

Read more at Science Daily

Current climate models underestimate warming by black carbon aerosol

Soot particles suspended in the atmosphere are coated with organic matter. This coating produces a non-linear enhancement of solar light absorption, and the particles subsequently heat the surrounding air.
Soot belches out of diesel engines, rises from wood- and dung-burning cookstoves and shoots out of oil refinery stacks. According to recent research, air pollution, including soot, is linked to heart disease, some cancers and, in the United States, as many as 150,000 cases of diabetes every year.

Beyond its impact on health, soot, known as black carbon by atmospheric scientists, is a powerful global warming agent. It absorbs sunlight and traps heat in the atmosphere, with a warming effect second in magnitude only to that of the notorious carbon dioxide. Recent commentaries in the journal Proceedings of the National Academy of Sciences called the absence of consensus on soot's light absorption magnitude "one of the grand challenges in atmospheric climate science."

Rajan Chakrabarty, assistant professor in the School of Engineering & Applied Science at Washington University in St. Louis, and William R. Heinson, a National Science Foundation postdoctoral fellow in Chakrabarty's lab, took on that challenge and discovered something new about soot: a universal law that describes its ability to absorb light. With it, scientists will be able to better understand soot's role in climate change.

The research has been selected as an "Editors' Suggestion" and was published online Nov. 19 in the journal Physical Review Letters.

Because of its ability to absorb sunlight and directly heat the surrounding air, climate scientists incorporate soot into their models -- computational systems which try to replicate conditions of the real world -- and then predict future warming trends. Scientists use real world observations to program their models.

But there hasn't been a consensus on how to incorporate soot's light absorption into these models, which treat it over-simplistically, using a sphere to represent a pure black carbon aerosol.

"But nature is funny, it has its own ways to add complexity," Chakrabarty said. "By mass, 80 percent of all black carbon you find is always mixed. It's not perfect, like the models treat it."

The particles are mixed, or coated, with organic aerosols that are co-emitted with soot from a combustion system. It turns out, black carbon absorbs more light when it is coated with these organic materials, but the magnitude of absorption enhancement varies non-linearly depending on how much coating is present.

Chakrabarty and Heinson wanted to figure out a universal relationship between the amount of coating and the ability of soot to absorb light.

First, they created simulated particles that looked just like those found in nature, with varying degrees of organic coating. Then, using techniques borrowed from Chakrabarty's work with fractals, the team went through exacting calculations, measuring light absorption in particles bit-by-bit.

When they plotted the absorption magnitudes against the percentage of organic coating, the result was what mathematicians and scientists call a "universal power law." This means that as the amount of coating increases, soot's light absorption rises in a fixed, predictable proportion, regardless of the starting amount.

(The length and area of a square are related by a universal power law: If you double the length of the sides of a square, the area increases by a factor of four. It does not matter what the initial length of the side was; the relationship will always hold.)

They then turned to work done by different research groups who measured ambient soot light absorption across the globe, from Houston to London to Beijing. Chakrabarty and Heinson again plotted absorption enhancements against the percentage of coating.

The result was a universal power law with the same one-third ratio as was found in their simulated experiments.
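As an illustration of what such a fit involves, the sketch below generates synthetic absorption-enhancement values that follow an assumed one-third power law and recovers the exponent from a log-log regression. The data and the exact functional form are assumptions for demonstration only, not the study's parameterization or measurements.

```python
import numpy as np

# Illustrative power-law fit: generate synthetic "absorption enhancement"
# values that follow E = a * f**(1/3) with noise, where f is the fraction of
# organic coating, then recover the exponent from a log-log linear fit.
# Synthetic data only -- not measurements from the study.

rng = np.random.default_rng(0)
coating_fraction = np.linspace(0.1, 0.9, 30)           # fraction of coating by mass
true_a, true_b = 1.8, 1.0 / 3.0                         # assumed prefactor and exponent
enhancement = true_a * coating_fraction**true_b
enhancement *= rng.normal(1.0, 0.03, coating_fraction.size)   # 3% scatter

# On log-log axes a power law is a straight line: log E = log a + b * log f.
slope, intercept = np.polyfit(np.log(coating_fraction), np.log(enhancement), 1)
print(f"fitted exponent b ~= {slope:.2f} (expected ~0.33)")
print(f"fitted prefactor a ~= {np.exp(intercept):.2f}")
```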

With so many differing values for light absorption enhancement in soot, Chakrabarty said that the climate modelers are confused. "What on earth do we do? How do we account for the reality in our models?

"Now you have order in chaos and a law," he said. "And now you can apply it in a computationally inexpensive manner."

Their findings also point to the fact that warming due to black carbon could have been underestimated by climate models. Assuming spherical shape for these particles and not properly accounting for light absorption enhancement could result in significantly lower heating estimates.

Rahul Zaveri, senior scientist and developer of the comprehensive aerosol model MOSAIC at Pacific Northwest National Laboratory, calls the findings a significant and timely advancement.

"I am particularly excited about the mathematical elegance and extreme computational efficiency of the new parameterization," he said, "which can be quite readily implemented in climate models once the companion parameterization for light scattering by coated black carbon particles is developed."

Read more at Science Daily

Climate of small star TRAPPIST-1's seven intriguing worlds

The small, cool M dwarf star TRAPPIST-1 and its seven worlds. New research from the University of Washington speculates on possible climates of these worlds and how they may have evolved.
Not all stars are like the sun, so not all planetary systems can be studied with the same expectations. New research from a University of Washington-led team of astronomers gives updated climate models for the seven planets around the star TRAPPIST-1.

The work also could help astronomers more effectively study planets around stars unlike our sun, and better use the limited, expensive resources of the James Webb Space Telescope, now expected to launch in 2021.

"We are modeling unfamiliar atmospheres, not just assuming that the things we see in the solar system will look the same way around another star," said Andrew Lincowski, UW doctoral student and lead author of a paper published Nov. 1 in Astrophysical Journal. "We conducted this research to show what these different types of atmospheres could look like."

The team found, briefly put, that due to an extremely hot, bright early stellar phase, all seven of the star's worlds may have evolved like Venus, with any early oceans they may have had evaporating and leaving dense, uninhabitable atmospheres. However, one planet, TRAPPIST-1 e, could be an Earthlike ocean world worth further study, as previous research also has indicated.

TRAPPIST-1, 39 light-years or about 235 trillion miles away, is about as small as a star can be and still be a star. A relatively cool "M dwarf" star -- the most common type in the universe -- it has about 9 percent the mass of the sun and about 12 percent its radius. TRAPPIST-1 has a radius only a little bigger than the planet Jupiter, though it is much greater in mass.

All seven of TRAPPIST-1's planets are about the size of Earth and three of them -- planets labeled e, f and g -- are believed to be in its habitable zone, that swath of space around a star where a rocky planet could have liquid water on its surface, thus giving life a chance. TRAPPIST-1 d rides the inner edge of the habitable zone, while farther out, TRAPPIST-1 h, orbits just past that zone's outer edge.

"This is a whole sequence of planets that can give us insight into the evolution of planets, in particular around a star that's very different from ours, with different light coming off of it," said Lincowski. "It's just a gold mine."

Previous papers have modeled TRAPPIST-1 worlds, Lincowski said, but he and this research team "tried to do the most rigorous physical modeling that we could in terms of radiation and chemistry -- trying to get the physics and chemistry as right as possible."

The team's radiation and chemistry models create spectral, or wavelength, signatures for each possible atmospheric gas, enabling observers to better predict where to look for such gases in exoplanet atmospheres. Lincowski said when traces of gases are actually detected by the Webb telescope, or others, some day, "astronomers will use the observed bumps and wiggles in the spectra to infer which gases are present -- and compare that to work like ours to say something about the planet's composition, environment and perhaps its evolutionary history."

He said people are used to thinking about the habitability of a planet around stars similar to the sun. "But M dwarf stars are very different, so you really have to think about the chemical effects on the atmosphere(s) and how that chemistry affects the climate."

Combining terrestrial climate modeling with photochemistry models, the researchers simulated environmental states for each of TRAPPIST-1's worlds.

Their modeling indicates that:

  • TRAPPIST-1 b, the closest to the star, is a blazing world too hot even for clouds of sulfuric acid, as on Venus, to form.
  • Planets c and d receive slightly more energy from their star than Venus and Earth do from the sun and could be Venus-like, with a dense, uninhabitable atmosphere.
  • TRAPPIST-1 e is the most likely of the seven to host liquid water on a temperate surface, and would be an excellent choice for further study with habitability in mind.
  • The outer planets f, g and h could be Venus-like or could be frozen, depending on how much water formed on the planet during its evolution.

Lincowski said that in actuality, any or all of TRAPPIST-1's planets could be Venus-like, with any water or oceans long burned away. He explained that when water evaporates from a planet's surface, ultraviolet light from the star breaks apart the water molecules, releasing hydrogen, which is the lightest element and can escape a planet's gravity. This could leave behind a lot of oxygen, which could remain in the atmosphere and irreversibly remove water from the planet. Such a planet may have a thick oxygen atmosphere -- but not one generated by life, and different from anything yet observed.

"This may be possible if these planets had more water initially than Earth, Venus or Mars," he said. "If planet TRAPPIST-1 e did not lose all of its water during this phase, today it could be a water world, completely covered by a global ocean. In this case, it could have a climate similar to Earth."

Lincowski said this research was done more with an eye on climate evolution than to judge the planets' habitability. He plans future research focusing more directly on modeling water planets and their chances for life.

"Before we knew of this planetary system, estimates for the detectability of atmospheres for Earth-sized planets were looking much more difficult," said co-author Jacob Lustig-Yaeger, a UW astronomy doctoral student.

The star being so small, he said, will make the signatures of gases (like carbon dioxide) in the planet's atmospheres more pronounced in telescope data.

"Our work informs the scientific community of what we might expect to see for the TRAPPIST-1 planets with the upcoming James Webb Space Telescope."

Lincowski's other UW co-author is Victoria Meadows, professor of astronomy and director of the UW's Astrobiology Program. Meadows is also principal investigator for the NASA Astrobiology Institute's Virtual Planetary Laboratory, based at the UW. All of the authors were affiliates of that research laboratory.

"The processes that shape the evolution of a terrestrial planet are critical to whether or not it can be habitable, as well as our ability to interpret possible signs of life," Meadows said. "This paper suggests that we may soon be able to search for potentially detectable signs of these processes on alien worlds."

Read more at Science Daily

Evolution: South Africa's hominin record is a fair-weather friend

Field photograph of massive flowstone layers from one of the South African hominin caves, with red cave sediments underneath.
New research from an international team of scientists led by University of Cape Town isotope geochemist Dr Robyn Pickering is the first to provide a timeline for fossils from the caves within the Cradle of Humankind. It also sheds light on the climate conditions of our earliest ancestors in the area.

Published online in the journal Nature on 21 November 2018, the work corrects assumptions that the region's fossil-rich caves could never be related to each other. In fact, the research suggests fossils from Cradle caves date to just six specific time periods.

"Unlike previous dating work, which often focused on one cave, sometimes even just one chamber of the cave, we are providing direct ages for eight caves and a model to explain the age of all the fossils from the entire region," says Dr Robyn Pickering.

"Now we can link together the findings from separate caves and create a better picture of evolutionary history in southern Africa."

The Cradle of Humankind is a World Heritage Site made up of complex fossil-bearing caves. It's the world's richest early hominin site and home to nearly 40% of all known human ancestor fossils, including the famous Australopithecus africanus skull nicknamed Mrs Ples.

Using uranium-lead dating, researchers analysed 28 flowstone layers that were found sandwiched between fossil-rich sediment in eight caves across the Cradle. The results revealed that the fossils in these caves date to six narrow time-windows between 3.2 and 1.3 million years ago.
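For readers unfamiliar with the method, the standard age relation for the uranium-lead system is shown below; dating young cave carbonates additionally requires corrections for initial lead and uranium-series disequilibrium, which this sketch omits.

```latex
% Standard U-Pb age equation (radiogenic 206Pb only); lambda_238 is the decay
% constant of 238U. Corrections for initial Pb are omitted in this sketch.
\frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} = e^{\lambda_{238} t} - 1
\quad\Longrightarrow\quad
t = \frac{1}{\lambda_{238}} \ln\!\left(1 + \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}}\right),
\qquad \lambda_{238} \approx 1.55 \times 10^{-10}\,\mathrm{yr^{-1}}
```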

"The flowstones are the key," says Pickering. "We know they can only grow in caves during wet times, when there is more rain outside the cave. By dating the flowstones, we are picking out these times of increased rainfall. We therefore know that during the times in between, when the caves were open, the climate was drier and more like what we currently experience."

This means the early hominins living in the Cradle experienced big changes in local climate, from wetter to drier conditions, at least six times between 3 and 1 million years ago. However, only the drier times are preserved in the caves, skewing the record of early human evolution.

Up until now, the lack of dating methods for Cradle fossils made it difficult for scientists to understand the relationship between East African and South African hominin species. Moreover, the South African record has often been considered undateable compared with East Africa, where volcanic ash layers allow for high-resolution dating.

Professor Andy Herries, a co-author of the study at La Trobe University in Australia, notes that "while the South African record was the first to show Africa as the origin point for humans, the complexity of the caves and the difficulty of dating them have meant that the South African record has remained difficult to interpret."

"In this study we show that the flowstones in the caves can act almost like the volcanic layers of East Africa, forming in different caves at the same time, allowing us to directly relate their sequences and fossils into a regional sequence," he says.

Dr Pickering began dating the Cradle caves back in 2005 as part of her PhD research. This new publication is the result of 13 years of work and brings together a team of 10 scientists from South Africa, Australia and the US. The results return the Cradle to the forefront and open new opportunities for scientists to answer complex questions about human history in the region.

"Robyn and her team have made a major contribution to our understanding of human evolution," says leading palaeoanthropologist Professor Bernard Wood, of the Center for the Advanced Study of Human Paleobiology at the George Washington University in the USA, who is not an author on the study.

Read more at Science Daily

Nov 21, 2018

Exoplanet stepping stones

This is an artist's impression based on published scientific data on the HR 8799 planetary system. The planet HR 8799c, shown in magenta, is in the foreground. Compared to Jupiter, this gas giant is about seven times more massive and has a radius that is 20 percent larger. HR 8799c's planetary companions, d and b, are in the background, orbiting their host star.
Astronomers have gleaned some of the best data yet on the composition of a planet known as HR 8799c -- a young giant gas planet about 7 times the mass of Jupiter that orbits its star every 200 years.

The team used state-of-the-art instrumentation at the W. M. Keck Observatory on Maunakea, Hawaii, to confirm the existence of water in the planet's atmosphere, as well as a lack of methane.

While other researchers had previously made similar measurements of this planet, these new, more robust data demonstrate the power of combining high-resolution spectroscopy with a technique known as adaptive optics, which corrects for the blurring effect of Earth's atmosphere.

"This type of technology is exactly what we want to use in the future to look for signs of life on an Earth-like planet. We aren't there yet but we are marching ahead," says Dimitri Mawet, an associate professor of astronomy at Caltech and a research scientist at JPL, which Caltech manages for NASA.

Mawet is co-author of a new paper on the findings published today in the Astronomical Journal. The lead author is Ji Wang, formerly a postdoctoral scholar at Caltech and now an assistant professor at Ohio State University.

Taking pictures of planets that orbit other stars -- exoplanets -- is a formidable task. Light from the host stars far outshines the planets, making them difficult to see.

More than a dozen exoplanets have been directly imaged so far, including HR 8799c and three of its planetary companions. In fact, HR 8799 is the only multiple-planet system to have had its picture taken. Discovered using adaptive optics on the Keck II telescope, the direct images of HR 8799 are the first ever taken of a planetary system orbiting a star other than our sun.

Once an image is obtained, astronomers can use instruments, called spectrometers, to break apart the planet's light, like a prism turning sunlight into a rainbow, thereby revealing the fingerprints of chemicals. So far, this strategy has been used to learn about the atmospheres of several giant exoplanets.

The next step is to do the same thing only for smaller planets that are closer to their stars (the closer a planet is to its star and the smaller its size, the harder it is to see).

The ultimate goal is to look for chemicals in the atmospheres of Earth-like planets that orbit in the star's "habitable zone" -- including any biosignatures that might indicate life, such as water, oxygen, and methane.

Mawet's group hopes to do just this with an instrument on the upcoming Thirty Meter Telescope, a giant telescope being planned for the late 2020s by several national and international partners, including Caltech.

But for now, the scientists are perfecting their technique using Keck Observatory -- and, in the process, learning about the compositions and dynamics of giant planets.

"Right now, with Keck, we can already learn about the physics and dynamics of these giant exotic planets, which are nothing like our own solar system planets," says Wang.

In the new study, the researchers used an instrument on the Keck II telescope called NIRSPEC (near-infrared cryogenic echelle spectrograph), a high-resolution spectrometer that works in infrared light.

They coupled the instrument with Keck Observatory's powerful adaptive optics, a method for creating crisper pictures using a guide star in the sky as a means to measure and correct the blurring turbulence of Earth's atmosphere.

This is the first time the technique has been demonstrated on directly imaged planets using what's known as the L-band, a type of infrared light with a wavelength of around 3.5 micrometers, and a region of the spectrum with many detailed chemical fingerprints.

"The L-band has gone largely overlooked before because the sky is brighter at this wavelength," says Mawet. "If you were an alien with eyes tuned to the L-band, you'd see an extremely bright sky. It's hard to see exoplanets through this veil."

The researchers say that the addition of adaptive optics made the L-band more accessible for the study of the planet HR 8799c. In their study, they made the most precise measurements yet of the atmospheric constituents of the planet, confirming it has water and lacks methane as previously thought.

"We are now more certain about the lack of methane in this planet," says Wang. "This may be due to mixing in the planet's atmosphere. The methane, which we would expect to be there on the surface, could be diluted if the process of convection is bringing up deeper layers of the planet that don't have methane."

The L-band is also good for making measurements of a planet's carbon-to-oxygen ratio -- a tracer of where and how a planet forms. Planets form out of swirling disks of material around stars, specifically from a mix of hydrogen, oxygen, and carbon-rich molecules, such as water, carbon monoxide, and methane.

These molecules freeze out of the planet-forming disks at different distances from the star -- at boundaries called snowlines. By measuring a planet's carbon-to-oxygen ratio, astronomers can thus learn about its origins.
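As a back-of-the-envelope illustration (not a calculation from the paper), the ratio can be tallied from the abundances of the main carbon- and oxygen-bearing molecules. The volume mixing ratios in the sketch below are hypothetical placeholders.

```python
# Illustrative only: how a carbon-to-oxygen ratio can be tallied once the
# abundances of the main carbon- and oxygen-bearing molecules are known.
# The mixing ratios below are hypothetical placeholders, not values from the study.

abundances = {
    "H2O": 1.0e-3,
    "CO":  4.0e-4,
    "CH4": 1.0e-7,   # essentially absent, consistent with the reported lack of methane
    "CO2": 1.0e-7,
}

carbon = abundances["CO"] + abundances["CH4"] + abundances["CO2"]
oxygen = abundances["H2O"] + abundances["CO"] + 2 * abundances["CO2"]

print(f"C/O ratio ~ {carbon / oxygen:.2f}")   # ~0.29 for these placeholder numbers
```

Broadly speaking, gas accreted beyond the water snowline tends to be carbon-rich relative to oxygen, so a higher ratio hints at formation farther from the star.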

Read more at Science Daily

To predict the future, the brain uses two clocks

Time concept
That moment when you step on the gas pedal a split second before the light changes, or when you tap your toes even before the first piano note of Camila Cabello's "Havana" is struck. That's anticipatory timing.

One type relies on memories from past experiences. The other on rhythm. Both are critical to our ability to navigate and enjoy the world.

New University of California, Berkeley, research shows the neural networks supporting each of these timekeepers are split between two different parts of the brain, depending on the task at hand.

"Whether it's sports, music, speech or even allocating attention, our study suggests that timing is not a unified process, but that there are two distinct ways in which we make temporal predictions and these depend on different parts of the brain," said study lead author Assaf Breska, a postdoctoral researcher in neuroscience at UC Berkeley.

The findings, published online in the Proceedings of the National Academy of Sciences, offer a new perspective on how humans calculate when to make a move.

"Together, these brain systems allow us to not just exist in the moment, but to also actively anticipate the future," said study senior author Richard Ivry, a UC Berkeley neuroscientist.

Breska and Ivry studied the anticipatory timing strengths and deficits of people with Parkinson's disease and people with cerebellar degeneration.

They connected rhythmic timing to the basal ganglia, and interval timing -- an internal timer based largely on our memory of prior experiences -- to the cerebellum. Both are primal brain regions associated with movement and cognition.

Moreover, their results suggest that if one of these neural clocks is misfiring, the other could theoretically step in.

"Our study identifies not only the anticipatory contexts in which these neurological patients are impaired, but also the contexts in which they have no difficulty, suggesting we could modify their environments to make it easier for them to interact with the world in face of their symptoms," Breska said.

Non-pharmaceutical fixes for neurological timing deficits could include brain-training computer games and smartphone apps, deep brain stimulation and environmental design modifications, he said.

To arrive at their conclusion, Breska and Ivry compared how well Parkinson's and cerebellar degeneration patients used timing or "temporal" cues to focus their attention.

Both groups viewed sequences of red, white and green squares as they flashed by at varying speeds on a computer screen, and pushed a button the moment they saw the green square. The white squares alerted them that the green square was coming up.

In one sequence, the red, white and green squares followed a steady rhythm, and the cerebellar degeneration patients responded well to these rhythmic cues.

In another, the colored squares followed a more complex pattern, with differing intervals between the red and green squares. This sequence was easier for the Parkinson's patients to follow and succeed at.
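To make the two conditions concrete, here is a minimal sketch of how such cue streams can be generated. The timings, stimulus counts, and function names are hypothetical, not taken from the published experiment.

```python
import random

# A minimal sketch of the two cue conditions described above; the exact timings
# and stimulus counts are invented for illustration.

def rhythmic_sequence(n_items=8, interval=0.75):
    """Squares appear on a steady beat; the green target falls on the beat too."""
    onsets = [i * interval for i in range(n_items)]
    colors = ["red"] * (n_items - 2) + ["white", "green"]   # white cue, then target
    return list(zip(colors, onsets))

def non_rhythmic_sequence(n_items=8, cue_to_target=0.75):
    """Intervals between squares vary, so only the memorized cue-to-target
    interval (not a beat) predicts when the green square appears."""
    onsets, t = [], 0.0
    for _ in range(n_items - 1):
        onsets.append(t)
        t += random.uniform(0.4, 1.2)          # jittered spacing breaks the rhythm
    onsets.append(onsets[-1] + cue_to_target)  # target follows the white cue by a fixed interval
    colors = ["red"] * (n_items - 2) + ["white", "green"]
    return list(zip(colors, onsets))

print(rhythmic_sequence())
print(non_rhythmic_sequence())
```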

"We show that patients with cerebellar degeneration are impaired in using non-rhythmic temporal cues while patients with basal ganglia degeneration associated with Parkinson's disease are impaired in using rhythmic cues," Ivry said.

Ultimately, the results confirm that the brain uses two different mechanisms for anticipatory timing, challenging theories that a single brain system handles all our timing needs, researchers said.

Read more at Science Daily

4,000-year-old termite mounds found in Brazil are visible from space

This image shows mound fields. The mounds are found in dense, low, dry forest caatinga vegetation and can be seen when the land is cleared for pasture.
Researchers reporting in Current Biology on November 19 have found that a vast array of regularly spaced, still-inhabited termite mounds in northeastern Brazil -- covering an area the size of Great Britain -- are up to about 4,000 years old.

The mounds, which are easily visible on Google Earth, are not nests. Rather, they are the result of the insects' slow and steady excavation of a network of interconnected underground tunnels. The termites' activities over thousands of years have resulted in huge quantities of soil deposited in approximately 200 million cone-shaped mounds, each about 2.5 meters tall and 9 meters across.

"These mounds were formed by a single termite species that excavated a massive network of tunnels to allow them to access dead leaves to eat safely and directly from the forest floor," says Stephen Martin of the University of Salford in the UK. "The amount of soil excavated is over 10 cubic kilometers, equivalent to 4,000 great pyramids of Giza, and represents one of the biggest structures built by a single insect species."

"This is apparently the world's most extensive bioengineering effort by a single insect species," adds Roy Funch of Universidade Estadual de Feira de Santana in Brazil. "Perhaps most exciting of all -- the mounds are extremely old -- up to 4,000 years, similar to the ages of the pyramids."

The mounds are largely hidden from view in the fully deciduous, semiarid, thorny-scrub caatinga forests unique to northeastern Brazil. They only really came into view for "outsiders," including scientists, when some of the land was cleared for pasture in recent decades.

Soil samples collected from the centers of 11 mounds were dated, indicating that the mounds were filled between 690 and 3,820 years ago. That makes them about as old as the world's oldest known termite mounds in Africa.

The researchers investigated whether the strangely regular spatial pattern of the mounds was driven by competition amongst termites in neighboring mounds. Their behavioral tests found little aggression between termites from neighboring mounds, compared with the obvious aggression amongst termites collected at greater distances from one another.

The findings lead the researchers to suggest that the over-dispersed spatial mound pattern isn't generated by aggressive interactions. Instead, Martin and his colleagues propose that the mound pattern arose through self-organizational processes facilitated by the increased connectivity of the tunnel network and driven by episodic leaf-fall in the dry forest.

They say that a pheromone map might allow the termites to minimize their travel time from any location in the colony to the nearest waste mound. The vast tunnel network apparently allows safe access to a sporadic food supply, similar to what's been seen in naked mole-rats, which also live in arid regions and construct very extensive burrow networks to obtain food, the researchers report.

"It's incredible that, in this day and age, you can find an 'unknown' biological wonder of this sheer size and age still existing, with the occupants still present," Martin says.

Read more at Science Daily

Scientists discover new 'pinwheel' star system

This is an image of Apep captured at 8 microns in the thermal infrared with the VISIR camera on the European Southern Observatory's VLT telescope, Mt Paranal, Chile.
An international team of scientists has discovered a new, massive star system -- one that also challenges existing theories of how large stars eventually die.

"This system is likely the first of its kind ever discovered in our own galaxy," says Benjamin Pope, a NASA Sagan fellow at New York University's Center for Cosmology and Particle Physics and one of the researchers.

Specifically, the scientists detected a likely gamma-ray burst progenitor -- a system expected to end in a type of supernova that blasts out an extremely powerful and narrow jet of plasma, and which was thought to occur only in distant galaxies.

"It was not expected such a system would be found in our galaxy -- only in younger galaxies much further away," adds Pope. "Given its brightness, it is surprising it was not discovered a lot sooner."

The discovery of the system, dubbed "Apep" and reported in the journal Nature Astronomy, also involved scientists from the Netherlands Institute for Radio Astronomy, the University of Sydney, the University of Edinburgh, the University of Sheffield, and the University of New South Wales.

The system, an estimated 8,000 light years from Earth, is adorned with a dust "pinwheel" -- whose strangely slow motion suggests current theories on star deaths may be incomplete.

When the most massive stars in our universe near the end of their lives, they produce fast winds -- typically moving at more than 1,000 kilometers per second -- that carry away large amounts of a star's mass. These fast winds should carry away the star's rotational energy and slow it down long before it dies.

"These massive stars are often found with a partner, in which the fast winds from the dying star can collide with its companion to produce a shock that emits at X-ray and radio frequencies and produces exotic dust patterns," explains Joseph Callingham, a postdoctoral fellow at the Netherlands Institute for Radio Astronomy and lead author of the paper.

Read more at Science Daily

Nov 20, 2018

Jumping genes shed light on how advanced life may have emerged

Caenorhabditis elegans.
A previously unappreciated interaction in the genome turns out to have possibly been one of the driving forces in the emergence of advanced life, billions of years ago.

This discovery began with a curiosity for retrotransposons, known as "jumping genes," which are DNA sequences that copy and paste themselves within the genome, multiplying rapidly. Nearly half of the human genome is made up of retrotransposons, but bacteria hardly have them at all.

Nigel Goldenfeld, Swanlund Endowed Chair of Physics at the University of Illinois and Carl R. Woese Institute for Genomic Biology, and Thomas Kuhlman, a former physics professor at Illinois who is now at the University of California, Riverside, wondered why this is.

"We thought a really simple thing to try was to just take one (retrotransposon) out of my genome and put it into the bacteria just to see what would happen," Kuhlman said. "And it turned out to be really quite interesting."

Their results, published in the Proceedings of the National Academy of Sciences, give more depth to the history of how advanced life may have emerged billions of years ago -- and could also help determine the possibility and nature of life on other planets.

Along the way to explaining life, the researchers first encountered death -- bacterial death, that is. When they put retrotransposons in bacteria, the outcome was fatal.

"As they jump around and make copies of themselves, they jump into genes that the bacteria need to survive," Kuhlman said. "It's incredibly lethal to them."

When retrotransposons copy themselves within the genome, they first find a spot in the DNA and cut it open. To survive, the organism then has to repair this cut. Some bacteria, like E. coli, only have one way to perform this repair, which usually ends up removing the new retrotransposon. But advanced organisms (eukaryotes) have an additional "trick" called nonhomologous end-joining, or NHEJ, that gives them another way to repair cuts in their DNA.

Goldenfeld and Kuhlman decided to see what would happen if they gave bacteria the ability to do NHEJ, thinking that it would help them tolerate the damage to their DNA. But it just made the retrotransposons better at multiplying, causing even more damage than before.

"It just completely killed everything," Kuhlman said. "At the time, I thought I was just doing something wrong."

They realized that the interaction between NHEJ and retrotransposons may be more important than they previously thought.

Eukaryotes typically have many retrotransposons in their genome, along with a lot of other "junk" DNA, which doesn't have a well-understood function. Within the genome, there must be a constant interplay between NHEJ and retrotransposons, as NHEJ tries to control how rapidly the retrotransposons multiply. This gives the organism more control over its genome, and it makes the presence of "junk" DNA important.

"As you get more and more junk in your DNA, you can start taking these pieces and combining them together in different ways, more ways than you could without all the junk in there," Kuhlman said.

These conditions -- the accumulation of "junk" DNA, the presence of retrotransposons and their interactions with NHEJ -- make the genome more complex. This is one feature that may distinguish advanced organisms, like humans, from simpler ones, like bacteria.

Advanced organisms can also manage their genome using the spliceosome, a molecular machine that removes the intervening "junk" sequences from gene transcripts and splices the coding pieces back together.

Some parts of the spliceosome are similar to group II introns, bacteria's primitive version of retrotransposons. Introns are also found in eukaryotes, and along with the spliceosome are evolutionarily derived from group II introns. Goldenfeld said this poses an evolutionary question.

"What came first, the spliceosome or the group II introns? Clearly the group II introns," he said. "So then you can ask: where did the eukaryotic cell first get those group II introns in order to build up the spliceosome early on?"

This study suggests that group II introns, the ancestors of introns in the spliceosome and retrotransposons in eukaryotes, somehow invaded early eukaryotic cells. Then, their interactions with NHEJ created a "selection pressure" that helped lead to the emergence of the spliceosome, which helped life become advanced billions of years ago.

The spliceosome helped life become advanced by enabling eukaryotes to do more with their DNA. For example, even though humans have roughly the same number of genes as C. elegans, a worm, humans can do more with those genes.

"There's not much difference between this very simple worm and humans, which is obviously insane," Goldenfeld said. "What's happening is that humans are able to take these genes and mix and match them in many combinations to do much more complicated functions than C. elegans does."

Not only did NHEJ and retrotransposons help with the creation of the spliceosome; this study suggests that they may also have assisted in making chromosomes -- DNA molecules that contain genetic material -- more advanced. Interactions between NHEJ and retrotransposons may have aided in the transition from circular chromosomes (which bacteria generally have) to linear ones (which more advanced organisms have), another indicator of advanced life.

Goldenfeld said that before this research, many researchers studied the role of retrotransposons, but the importance of NHEJ was not fully appreciated. This research shows that NHEJ likely played a part, billions of years ago, in eukaryotes becoming the advanced organisms we know today.

"This certainly was not the only thing that was going on," Goldenfeld said. "But if it hadn't happened, it's hard to see how you could have complex life."

This study contributes to the larger questions that the Institute for Universal Biology, a NASA Astrobiology Institute that Goldenfeld directs, seeks to answer -- questions like: what had to happen in order for life to become advanced?

Answering this question in greater detail could help scientists determine the possibility of life on other planets.

"If life exists on other planets, presumably one would expect it to be microbial. Could it ever have made this transition to complex life?" Goldenfeld said. "It's not that you're inevitably going to get advanced life, because there are a bunch of things that have to happen."

The physics perspective helps to quantify these theoretical questions: laboratory measurements can be fed directly into models of evolution, as was done in this study.

Read more at Science Daily

The taming of the dog, cow, horse, pig and rabbit

Research at the Earlham Institute into one of the "genetic orchestra conductors", microRNAs, sheds light on our selectively guided evolution of domestic pets and farmyard animals such as dogs and cows.

What do a cow, a horse, a pig, a rabbit and a dog have in common, and how similar or dissimilar are their genetic conductors?

They're all animals domesticated by humans -- and the results have been quite remarkable. Over just a few thousand years of selective breeding, these creatures have been bred for specific characteristics, leading to an incredible morphological and behavioral variety across breeds. A better understanding of the domestication process can lead to better, more sustainable food and a greater understanding of human diseases.

Dr Luca Penso-Dolfin of the Prof Federica Di Palma Group at Earlham Institute has been looking at these species' brains, testicles, hearts and kidneys for clues into how microRNAs (miRNAs), important regulators of cell development in mammals, differ across all of these tissues, and how this might affect the evolution of gene regulation.

The research is published in Nature's open access journal Scientific Reports, titled: "The evolutionary dynamics of miRNA in domestic mammals."

It turns out that brains and testes were particularly abundant in recently evolved "new" miRNAs, some of which might be linked to important domestication genes in dogs and cows. Perhaps this is one of the drivers behind the fascinating diversity in dog breeds that we have produced?

Dr Luca Penso-Dolfin told us: "The evolution of different miRNAs in different species might lead to important changes in how genes are used and controlled. These modifications, which might happen both in space (different tissues) and time (for example, different developmental stages) are still not fully understood. Further research is needed to better clarify the role of miRNAs in the evolution of mammalian gene regulation."

miRNAs are essentially one of the main reasons why, even though the DNA of every cell in an individual (give or take a few) is the same, we are able to produce 200 different types of cell in a human body. If DNA is an orchestra, miRNA is the conductor. Sometimes we don't need the violin to be constantly playing and occasionally the oboe needs to be muted.

By comparing the five domestic animals in the study to a database of miRNAs, Dr Penso-Dolfin was able to identify which miRNAs were "old" miRNAs, present across many mammal species, and which miRNAs had evolved more recently, in only one or a few species. These "new" miRNAs were found to be more specific to certain tissues, with most found in the brain and the testes.
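A minimal sketch of that kind of comparison is shown below. This is not the team's pipeline: the toy database, species sets, and conservation cut-off are all hypothetical, and serve only to show the old-versus-new logic.

```python
# Hypothetical sketch of classifying miRNA families as "old" (widely conserved
# across mammals) or "new" (lineage-specific). All names and data are invented.

known_mirnas = {
    "mir-1":   {"human", "mouse", "dog", "cow", "pig", "horse", "rabbit"},
    "mir-124": {"human", "mouse", "dog", "cow", "pig", "horse", "rabbit"},
    "mir-X1":  {"dog"},             # annotated in only one lineage
    "mir-X2":  {"cow", "pig"},      # annotated in a couple of lineages
}

CONSERVATION_THRESHOLD = 5   # arbitrary cut-off for calling a family "old"

def classify(species_set):
    """Label a miRNA family by how widely it is conserved."""
    return "old" if len(species_set) >= CONSERVATION_THRESHOLD else "new"

for family, species in known_mirnas.items():
    print(f"{family}: {classify(species)}")
```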

From the perspective of evolution, it looks like these "new" miRNAs have an important role to play in the emergence of novel traits, especially in cows and dogs. Analysis of the genes that these miRNAs act on suggests that their targets are enriched for genes under artificial selection -- so it appears that the team have found some miRNAs that are important to the domestication process.

Domestic mammals are of great economic and biomedical interest. A better understanding of gene regulation in these species might help us to shed light on some diseases common to ourselves. The diseases presenting in dogs, for example, have many commonalities with those in humans. The pig's high resemblance to human anatomy is of further interest. Moreover, some of these species (especially the cow and the pig) represent an important food source, meaning that the same discoveries might also be relevant for milk production, meat quality, and resistance to disease or stress.

Professor Federica Di Palma, Director of Science at EI, said: "the study of small regulatory RNAs as crucial regulators of diverse biological processes is an exciting area of research that continues to open up new avenues to explore when it comes to unravelling the complex evolution of economically important traits. Understanding the roles of miRNAs and their co-evolution with target genes in domestication can help not only improve understanding of fundamental biological processes but will also help us to better understand key traits of domestication selection with important applications to food security."

We're not so different from the mammals we live in such close proximity to; the better we understand how they develop, the more light we can shed on our own evolution -- and perhaps the closer we come to answering difficult questions about how to cure certain diseases.

Read more at Science Daily

Mars moon got its grooves from rolling stones

Much of Phobos' surface is covered with strange linear grooves. New research bolsters the idea that boulders blasted free from Stickney crater (the large depression on the right) carved those iconic grooves.
A new study bolsters the idea that strange grooves crisscrossing the surface of the Martian moon Phobos were made by rolling boulders blasted free from an ancient asteroid impact.

The research, published in Planetary and Space Science, uses computer models to simulate the movement of debris from Stickney crater, a huge gash on one end of Phobos' oblong body. The models show that boulders rolling across the surface in the aftermath of the Stickney impact could have created the puzzling patterns of grooves seen on Phobos today.

"These grooves are a distinctive feature of Phobos, and how they formed has been debated by planetary scientists for 40 years," said Ken Ramsley, a planetary science researcher at Brown University who led the work. "We think this study is another step toward zeroing in on an explanation."

Phobos' grooves, which are visible across most of the moon's surface, were first glimpsed in the 1970s by NASA's Mariner and Viking missions. Over the years, there has been no shortage of explanations put forward for how they formed. Some scientists have posited that large impacts on Mars have showered the nearby moon with groove-carving debris. Others think that Mars' gravity is slowly tearing Phobos apart, and the grooves are signs of structural failure.

Still other researchers have made the case that there's a connection between the grooves and the Stickney impact. In the late 1970s, planetary scientists Lionel Wilson and Jim Head proposed the idea that ejecta -- bouncing, sliding and rolling boulders -- from Stickney may have carved the grooves. Head, a professor in Brown's department of Earth, Environmental and Planetary Sciences, was also a coauthor of this new paper.

For a moon the size of the diminutive Phobos (27 kilometers across at its widest point), Stickney is a huge crater at 9 kilometers across. The impact that formed it would have blown free tons of giant rocks, making the rolling boulder idea entirely plausible, Ramsley says. But there are also some problems with the idea.

For example, not all of the grooves are aligned radially from Stickney as one might intuitively expect if Stickney ejecta did the carving. And some grooves are superposed on top of each other, which suggests some must have already been there when superposed ones were created. How could there be grooves created at two different times from one single event? What's more, a few grooves run through Stickney itself, suggesting that the crater must already have been there when the grooves formed. There's also a conspicuous dead spot on Phobos where there are no grooves at all. Why would all those rolling boulders just skip one particular area?

To explore those questions, Ramsley designed computer models to see if there was any chance that the "rolling boulder model" could recreate these confounding patterns. The models simulate the paths of the boulders ejected from Stickney, taking into account Phobos' shape and topography, as well as its gravitational environment, rotation and orbit around Mars.

Ramsley said he had no expectations for what the models might show. He wound up being surprised at how well the model recreated the groove patterns seen on Phobos.

"The model is really just an experiment we run on a laptop," Ramsley said. "We put all the basic ingredients in, then we press the button and we see what happens."

The models showed that the boulders tended to align themselves in sets of parallel paths, which jibes with the sets of parallel grooves seen on Phobos. The models also provide a potential explanation for some of the other more puzzling groove patterns.

The simulations show that because of Phobos' small size and relatively weak gravity, Stickney stones just keep on rolling, rather than stopping after a kilometer or so like they might on a larger body. In fact, some boulders would have rolled and bounded their way all the way around the tiny moon. That circumnavigation could explain why some grooves aren't radially aligned to the crater. Boulders that start out rolling across the eastern hemisphere of Phobos produce grooves that appear to be misaligned from the crater when they reach the western hemisphere.

That round-the-globe rolling also explains how some grooves are superposed on top of others. The models show that grooves laid down right after the impact were crossed minutes to hours later by boulders completing their global journeys. In some cases, those globetrotting boulders rolled all the way back to where they started -- Stickney crater. That explains why Stickney itself has grooves.
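A rough, order-of-magnitude estimate (not the paper's simulation) shows why circumnavigation is plausible. Phobos' mass and mean radius below are standard published values; the rolling-friction coefficient and initial boulder speed are assumed purely for illustration.

```python
import math

# Rough illustration of why boulders keep rolling on Phobos.
# The friction coefficient and initial speed are assumptions, not model inputs
# from the study.

G        = 6.674e-11          # m^3 kg^-1 s^-2
M_phobos = 1.07e16            # kg
R_phobos = 11_100.0           # m (mean radius)

g = G * M_phobos / R_phobos**2          # surface gravity ~0.006 m/s^2
mu, v0 = 0.1, 10.0                      # assumed rolling friction, initial speed (m/s)

stopping_distance = v0**2 / (2 * mu * g)   # constant deceleration mu*g
circumference     = 2 * math.pi * R_phobos

print(f"surface gravity:   {g * 1000:.1f} mm/s^2")
print(f"stopping distance: {stopping_distance / 1000:.0f} km")   # ~86 km
print(f"circumference:     {circumference / 1000:.0f} km")       # ~70 km
# Under Earth gravity, the same boulder would stop within roughly 50 m.
```

With these assumed values the stopping distance exceeds Phobos' circumference, which is consistent with the models' finding that some boulders circle the moon entirely.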

Then there's the dead spot where there are no grooves at all. That area turns out to be a fairly low-elevation area on Phobos surrounded by a higher-elevation lip, Ramsley says. The simulations showed that boulders hit that lip and take a flying leap over the dead spot, before coming down again on the other side.

"It's like a ski jump," Ramsley said. "The boulders keep going but suddenly there's no ground under them. They end up doing this suborbital flight over this zone."

All told, Ramsley says, the models answer some key questions about how ejecta from Stickney could have been responsible for Phobos' complicated groove patterns.

Read more at Science Daily

The 'Swiss Army knife of prehistoric tools' found in Asia, suggests homegrown technology

These artifacts found in China are among the nearly four dozen that reflect the Levallois technique of toolmaking. In a paper published Nov. 19 in Nature, researchers date these artifacts to between 80,000 and 170,000 years ago.
New analysis of artifacts found at a South China archaeological site shows that sophisticated tool technology emerged in East Asia earlier than previously thought.

A study by an international team of researchers, including scientists from the University of Washington, determines that carved stone tools, also known as Levallois cores, were used in Asia 80,000 to 170,000 years ago. Developed in Africa and Western Europe as far back as 300,000 years ago, the cores are a sign of more-advanced toolmaking -- the "multi-tool" of the prehistoric world -- but, until now, were not believed to have emerged in East Asia until 30,000 to 40,000 years ago.

With the find -- and absent human fossils linking the tools to migrating populations -- researchers believe people in Asia developed the technology independently, evidence of similar sets of skills evolving throughout different parts of the ancient world.

The study is published online Nov. 19 in Nature.

"It used to be thought that Levallois cores came to China relatively recently with modern humans," said Ben Marwick, UW associate professor of anthropology and one of the paper's corresponding authors. "Our work reveals the complexity and adaptability of people there that is equivalent to elsewhere in the world. It shows the diversity of the human experience."

Levallois-shaped cores -- the "Swiss Army knife of prehistoric tools," Marwick said -- were efficient and durable, indispensable to a hunter-gatherer society in which a broken spear point could mean certain death at the claws or jaws of a predator. The cores were named for the Levallois-Perret suburb of Paris, where stone flakes were found in the 1800s.

Featuring a distinctive faceted surface, created through a sequence of steps, Levallois flakes were versatile "blanks," used to spear, slice, scrape or dig. The knapping process represents a more sophisticated approach to tool manufacturing than the simpler, oval-shaped stones of earlier periods.

The Levallois artifacts examined in this study were excavated from Guanyindong Cave in Guizhou Province in the 1960s and 1970s. Previous research using uranium-series dating estimated a wide age range of the archaeological site -- between 50,000 and 240,000 years old -- but that earlier technique focused on fossils found away from the stone artifacts, Marwick said. Analyzing the sediments surrounding the artifacts provides more specific clues as to when the artifacts would have been in use.

Marwick and other members of the team, from universities in China and Australia, used optically stimulated luminescence (OSL) to date the artifacts. OSL can establish age by determining when a sediment sample, down to a grain of sand, was last exposed to sunlight -- and thus, how long an artifact may have been buried in layers of sediment.
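In principle, the age falls out of a simple division: the radiation dose a buried grain has absorbed since it last saw sunlight, divided by the rate at which the surrounding sediment delivers that dose. The numbers in the sketch below are hypothetical, not values from the Guanyindong samples.

```python
# How an OSL age is obtained in principle: the dose a grain has absorbed since
# it was last exposed to sunlight (the "equivalent dose"), divided by the
# environmental dose rate. Both numbers below are hypothetical placeholders.

equivalent_dose_gray  = 320.0   # Gy, measured from the grain's luminescence signal
dose_rate_gray_per_ka = 2.0     # Gy per thousand years, from the sediment's radioactivity

age_ka = equivalent_dose_gray / dose_rate_gray_per_ka
print(f"burial age ~ {age_ka:.0f} thousand years")   # 160 ka for these numbers
```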

"Dating for this site was challenging because it had been excavated 40 years ago, and the sediment profile was exposed to air and without protection. So trees, plants, animals, insects could disturb the stratigraphy, which may affect the dating results if conventional methods were used for dating," said Bo Li , an associate professor of archaeology at the University of Wollongong in Australia and one of the paper's corresponding authors. "To solve this problem we used a new single-grain dating technique recently developed in our OSL lab at the University of Wollongong to date individual mineral grains in the sediment. Luckily we found residual sediment left over by the previous excavations, so that allowed us to take samples for dating."

The researchers analyzed more than 2,200 artifacts found at Guanyindong Cave, narrowing down the number of Levallois-style stone cores and flakes to 45. Among those believed to be in the older age range, about 130,000 to 180,000 years old, the team also was able to identify the environment in which the tools were used: an open woodland on a rocky landscape, in "a reduced rainforest area compared to today," the authors note.

In Africa and Europe these kinds of stone tools are often found at archaeological sites dating from 300,000 to 200,000 years ago. They are known as Mode III technology, part of a broad evolutionary sequence that was preceded by hand-axe technology (Mode II) and followed by blade tool technology (Mode IV). Archaeologists had thought that Mode III technologies arrived in China by migration from the West, but these new finds suggest they could have been locally invented.

At the time people were making tools in Guanyindong Cave, the Denisovans -- archaic relatives of Homo sapiens and contemporaries of Neandertals elsewhere in the world -- roamed East Asia. But while hundreds of fossils of archaic humans and related artifacts, dating as far back as more than 3 million years ago, have been found in Africa and Europe, the archaeological record in East Asia is sparser.

That's partly why a stereotype exists that ancient peoples in the region were behind in terms of technological development, Marwick said.

"Our work shows that ancient people there were just as capable of innovation as anywhere else. Technological innovations in East Asia can be homegrown, and don't always walk in from the West," he said.

The independent emergence of the Levallois technique at different times and places in the world is not unique among prehistoric innovations. Pyramid construction, for one, appeared in at least three separate societies: the Egyptians, the Aztecs and the Maya. Boatbuilding arose independently in different places, shaped by local geography and the materials available to each community. And writing, of course, developed in various forms with distinct alphabets and characters.

In the evolution of tools, Levallois cores represent something of a middle stage. Subsequent manufacturing processes yielded more-refined blades made of rocks and minerals that were more resistant to flaking, and composites that, for example, combined a spear point with blades along the edge. The appearance of blades later in time indicates a further increase in the complexity and the number of steps required to make the tools.

Read more at Science Daily