Dec 8, 2018

Molecular insights into spider silk

Schematic of a spidroin, consisting of an assembled C-terminal domain (cyan), the unfolded central domain (white line) and the N-terminal domains (green). Right-hand side: schematic of a tapering spinning duct.
They are lightweight, almost invisible, highly extensible and strong, and of course biodegradable: the threads spiders use to build their webs. In fact, spider silk is among the toughest fibres found in nature. Relative to its weight, it even outperforms high-tech fibres such as Kevlar or carbon fibre. Its unique combination of strength and extensibility makes it particularly attractive for industry. Whether in aviation, textiles, or medicine -- potential applications of this remarkable material are manifold.

Materials scientists have long sought to reproduce the fibre in the laboratory, but with limited success. Today it is possible to manufacture artificial spider silk with properties similar to those of the natural original, but the molecular-level structural details responsible for those properties remain to be uncovered. Now, scientists from the Julius-Maximilians-Universität Würzburg (JMU) have delivered new insights. Dr Hannes Neuweiler, lecturer at the Institute of Biotechnology and Biophysics at the JMU, is in charge of the project. The results are published in the journal Nature Communications.

A molecular clamp connects protein building blocks


"The silk fibres consist of protein building blocks, so-called spidroins, which are assembled by spiders within their spinning gland," explains Neuweiler. The terminal ends of building blocks take special roles in this process. The two ends of a spidroin are terminated by an N- and a C-terminal domain.

The domains at both ends connect protein building blocks. In the present study, Neuweiler and colleagues took a close look at the C-terminal domain. The C-terminal domain connects two spidroins through formation of an intertwined structure that resembles a molecular clamp. Neuweiler describes the central result of the study: "We observed that the clamp self-assembles in two discrete steps. While the first step comprises association of two chain ends, the second step involves the folding of labile helices in the periphery of the domain."

This two-step process of self-assembly was previously unknown and may contribute to the extensibility of spider silk. It is known that stretching of spider silk is associated with the unfolding of helices. Previous work, however, traced extensibility back to the unfolding of helices in the central segment of spidroins. "We propose that the C-terminal domain might also act as a module that contributes to extensibility," explains Neuweiler.

Assisting material science

In their study, Neuweiler and co-workers investigated protein building blocks of the nursery web spider Euprosthenops australis. They used genetic engineering to exchange individual moieties of the building blocks and modified the proteins chemically with fluorescent dyes. Finally, measurements of how light interacts with the soluble proteins revealed that the domain assembles in two discrete steps.
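
As a rough illustration of how a two-step assembly can reveal itself in such measurements -- not the team's actual analysis -- the sketch below fits a double-exponential to a synthetic fluorescence trace, separating a fast association phase from a slower folding phase. All rate constants, amplitudes and noise levels are invented for the example.

```python
# Illustrative sketch (not the study's analysis): fitting a
# double-exponential to a fluorescence time trace to resolve two
# kinetic phases, e.g. fast chain association followed by slower
# helix folding. All rates and amplitudes below are made up.
import numpy as np
from scipy.optimize import curve_fit

def two_step(t, a1, k1, a2, k2, offset):
    """Sum of two exponential decays plus a baseline offset."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + offset

# Synthetic "observed" trace: fast phase (k ~ 50 /s) + slow phase (k ~ 2 /s)
t = np.linspace(0, 3, 300)                                   # seconds
signal = two_step(t, 0.6, 50.0, 0.3, 2.0, 0.1)
signal += np.random.default_rng(0).normal(0, 0.01, t.size)   # measurement noise

popt, _ = curve_fit(two_step, t, signal, p0=[0.5, 30.0, 0.5, 1.0, 0.0])
a1, k1, a2, k2, offset = popt
print(f"fast phase: k1 ~ {k1:.1f} 1/s, slow phase: k2 ~ {k2:.1f} 1/s")
```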

Read more at Science Daily

What sets primates apart from other mammals?

Brain organoids, or 'mini-brains' growing in culture.
University of Otago researchers have discovered information about a gene that sets primates -- including great apes and humans -- apart from other mammals, through the study of a rare developmental brain disorder.

Dr Adam O'Neill carried out the research as part of his PhD at the University of Otago, under the supervision of Professor Stephen Robertson, discovering that the PLEKHG6 gene has qualities that drive aspects of brain development differently in primates compared with other species.

"Broadly speaking, this gene can be thought of as one of the genetic factors that make us human in a neurological sense," Dr O'Neill who now works in the Department of Physiological Genomics at Ludwig Maximilian Universität in Munich, Germany, explains.

Professor Robertson says the research, just published in the international journal Cell Reports, aimed to address the idea that humans must possess genes that have made our brains bigger and, in some respects, better functioning than those of other animals. However, that increased complexity could come at a cost, potentially predisposing humans to the development of a whole suite of neurological or psychiatric conditions.

"Such genes have been hard to find, but using an approach where we studied children with a certain brain malformation called periventricular nodular heterotopia, we found a 'damaged' genomic element in a child that had the attributes of such a primate specific genetic factor," he explains. In this particular condition a subset of neurons in the developing brain fail to take up their correct position resulting in a variety of symptoms including epilepsy and delayed development.

Dr O'Neill and research collaborators from the Max Planck Institute of Psychiatry, Germany, then set out to test whether the gene drives aspects of brain development that are unique to primates. They obtained striking data using a novel approach: studying human "mini-brains" in culture. It is now possible to take a skin cell and, using a set of genetic tricks, trigger it to form a tiny brain-like structure in the lab.

Their results showed that the particular genetic change that disabled a component of this gene (PLEKHG6) altered its ability to support the growth and proliferation of specialised stem cells in the developing brain. In addition, some of these cells also failed to migrate to their correct position in the growing "mini-brain" during the first few weeks of brain development.

Professor Robertson says it has been known for a while that these stem cells behave differently between primates/humans and other animals, but understanding what genes regulate these differences has been a mystery.

"Adam's achievement has been to show that this particular component of the PLEKHG6 gene is one such regulator that humans have 'acquired' very recently in their evolution to make their brains 'exceptional'."

Dr O'Neill says there are very few genetic elements that are primate specific in our genome, so this discovery adds to a very short list of genetic factors that, at least in one sense, make us human.

"Such an understanding positions us to better understand how a brain builds itself- knowledge that will add to our ability to design strategies to repair the damaged brain, especially early in infancy where there are still lots of stem cells around," Dr O'Neill says.

The work also helps provide more information about the list of genes that are altered to cause this particular type of brain malformation.

"Personally, I also think it does underscore how it is very subtle nuanced differences that separate us from other animals. Our anthropocentrism could be a whole lot more humble," Dr O'Neill says.

Read more at Science Daily

Dec 7, 2018

Parrot genome analysis reveals insights into longevity, cognition

Blue-fronted parrot
Parrots are famously talkative, and a blue-fronted Amazon parrot named Moises -- or at least its genome -- is telling scientists volumes about the longevity and highly developed cognitive abilities that give parrots so much in common with humans. Perhaps someday, it will also provide clues about how parrots learn to vocalize so well.

Morgan Wirthlin, a BrainHub post-doctoral fellow in Carnegie Mellon University's Computational Biology Department and first author of a report to appear in the Dec. 17 issue of the journal Current Biology, said she and her colleagues sequenced the genome of the blue-fronted Amazon and used it to perform the first comparative study of parrot genomes.

By comparing the blue-fronted Amazon with 30 other long- and short-lived birds -- including four additional parrot species -- she and colleagues at Oregon Health and Science University (OHSU), the Federal University of Rio de Janeiro and other institutions identified a suite of genes not previously known to play a role in longevity that deserve further study. They also identified genes associated with longevity in fruit flies and worms.

"In many cases, this is the first time we've connected those genes to longevity in vertebrates," she said.

Wirthlin, who began the study while a Ph.D. student in behavioral neuroscience at OHSU, said parrots are known to live up to 90 years in captivity -- a lifespan that would be equivalent to hundreds of years for humans. The genes associated with longevity include telomerase, responsible for DNA repair of telomeres (the ends of chromosomes), which are known to shorten with age. Changes in these DNA repair genes can potentially turn cells malignant. The researchers have found evidence that changes in the DNA repair genes of long-lived birds appear to be balanced with changes in genes that control cell proliferation and cancer.

The researchers also discovered changes in gene-regulating regions of the genome -- which seem to be parrot-specific -- that were situated near genes associated with neural development. Those same genes are also linked with cognitive abilities in humans, suggesting that both humans and parrots evolved similar methods for developing higher cognitive abilities.

"Unfortunately, we didn't find as many speech-related changes as I had hoped," said Wirthlin, whose research is focused on the evolution of vocal behaviors, including speech. Animals that learn songs or speech are relatively rare -- parrots, hummingbirds, songbirds, whales, dolphins, seals and bats -- which makes them particularly interesting to scientists, such as Wirthlin, who hope to gain a better understanding of how humans evolved this capacity.

"If you're just analyzing genes, you hit the end of the road pretty quickly," she said. That's because learned speech behaviors are thought be more of a function of gene regulation than of changes in genes themselves. Doing comparative studies of these "non-coding" regulatory regions, she added, is difficult, but she and Andreas Pfenning, assistant professor of computational biology, are working on the computational and experimental techniques that may someday reveal more of their secrets.

Read more at Science Daily

Biggest mass extinction caused by global warming leaving ocean animals gasping for breath

This roughly 1.5-foot slab of rock from southern China shows the Permian-Triassic boundary. The bottom section is pre-extinction limestone. The upper section is microbial limestone deposited after the extinction.
The largest extinction in Earth's history marked the end of the Permian period, some 252 million years ago. Long before dinosaurs, our planet was populated with plants and animals that were mostly obliterated after a series of massive volcanic eruptions in Siberia.

Fossils in ancient seafloor rocks display a thriving and diverse marine ecosystem, then a swath of corpses. Some 96 percent of marine species were wiped out during the "Great Dying," followed by millions of years when life had to multiply and diversify once more.

What has been debated until now is exactly what made the oceans inhospitable to life -- the high acidity of the water, metal and sulfide poisoning, a complete lack of oxygen, or simply higher temperatures.

New research from the University of Washington and Stanford University combines models of ocean conditions and animal metabolism with published lab data and paleoceanographic records to show that the Permian mass extinction in the oceans was caused by global warming that left animals unable to breathe. As temperatures rose and the metabolism of marine animals sped up, the warmer waters could not hold enough oxygen for them to survive.

The study is published in the Dec. 7 issue of Science.

"This is the first time that we have made a mechanistic prediction about what caused the extinction that can be directly tested with the fossil record, which then allows us to make predictions about the causes of extinction in the future," said first author Justin Penn, a UW doctoral student in oceanography.

Researchers ran a climate model with Earth's configuration during the Permian, when the land masses were combined in the supercontinent of Pangaea. Before ongoing volcanic eruptions in Siberia created a greenhouse-gas planet, oceans had temperatures and oxygen levels similar to today's. The researchers then raised greenhouse gases in the model to the level required to make tropical ocean temperatures at the surface some 10 degrees Celsius (20 degrees Fahrenheit) higher, matching conditions at that time.

The model reproduces the resulting dramatic changes in the oceans: they lost about 80 percent of their oxygen, and about half of the seafloor, mostly at greater depths, became completely oxygen-free.

To analyze the effects on marine species, the researchers considered the varying oxygen and temperature sensitivities of 61 modern marine species -- including crustaceans, fish, shellfish, corals and sharks -- using published lab measurements. The tolerance of modern animals to high temperature and low oxygen is expected to be similar to Permian animals because they had evolved under similar environmental conditions. The researchers then combined the species' traits with the paleoclimate simulations to predict the geography of the extinction.
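
The core logic -- that warming simultaneously raises an animal's oxygen demand and lowers the ocean's oxygen supply -- can be sketched with a toy calculation. This is only a schematic illustration with assumed parameter values, not the study's actual model, which couples measured species traits to full paleoclimate simulations.

```python
# Toy sketch of the general idea (not the study's model): metabolic
# oxygen demand rises with temperature (Q10 scaling) while the oxygen
# the water holds falls, so the supply/demand ratio shrinks as the
# ocean warms. All parameter values below are illustrative.
Q10 = 2.0          # assumed: demand roughly doubles per 10 degC of warming
T_REF = 15.0       # reference temperature, degC

def o2_demand(temp_c, demand_ref=1.0):
    """Relative metabolic O2 demand, Q10 temperature scaling."""
    return demand_ref * Q10 ** ((temp_c - T_REF) / 10.0)

def o2_supply(temp_c, supply_ref=1.0):
    """Relative dissolved O2, crudely assuming ~2% less per degC of warming."""
    return supply_ref * (1.0 - 0.02 * (temp_c - T_REF))

for warming in (0.0, 5.0, 10.0):
    T = T_REF + warming
    ratio = o2_supply(T) / o2_demand(T)
    print(f"+{warming:4.1f} degC: supply/demand ratio = {ratio:.2f}")

# A habitat becomes unviable once this ratio falls below a species-specific
# critical threshold -- the kind of criterion the researchers combined with
# paleoclimate simulations to map where extinction risk was highest.
```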

"Very few marine organisms stayed in the same habitats they were living in -- it was either flee or perish," said second author Curtis Deutsch, a UW associate professor of oceanography.

The model shows that the organisms hardest hit were those most sensitive to oxygen and living far from the tropics. Many species that lived in the tropics also went extinct in the model, but it predicts that high-latitude species, especially those with high oxygen demands, were nearly completely wiped out.

To test this prediction, co-authors Jonathan Payne and Erik Sperling at Stanford analyzed late-Permian fossil distributions from the Paleobiology Database, a virtual archive of published fossil collections. The fossil record shows where species were before the extinction, and which were wiped out completely or restricted to a fraction of their former habitat.

The fossil record confirms that species far from the equator suffered most during the event.

"The signature of that kill mechanism, climate warming and oxygen loss, is this geographic pattern that's predicted by the model and then discovered in the fossils," Penn said. "The agreement between the two indicates this mechanism of climate warming and oxygen loss was a primary cause of the extinction."

The study builds on previous work led by Deutsch showing that as oceans warm, marine animals' metabolism speeds up, meaning they require more oxygen, while warmer water holds less. That earlier study shows how warmer oceans push animals away from the tropics.

The new study combines the changing ocean conditions with various animals' metabolic needs at different temperatures. Results show that the most severe effects of oxygen deprivation are for species living near the poles.

"Since tropical organisms' metabolisms were already adapted to fairly warm, lower-oxygen conditions, they could move away from the tropics and find the same conditions somewhere else," Deutsch said. "But if an organism was adapted for a cold, oxygen-rich environment, then those conditions ceased to exist in the shallow oceans."

The so-called "dead zones" that are completely devoid of oxygen were mostly below depths where species were living, and played a smaller role in the survival rates. "At the end of the day, it turned out that the size of the dead zones really doesn't seem to be the key thing for the extinction," Deutsch said. "We often think about anoxia, the complete lack of oxygen, as the condition you need to get widespread uninhabitability. But when you look at the tolerance for low oxygen, most organisms can be excluded from seawater at oxygen levels that aren't anywhere close to anoxic."

Warming leading to insufficient oxygen explains more than half of the marine diversity losses. The authors say that other changes, such as acidification or shifts in the productivity of photosynthetic organisms, likely acted as additional causes.

The situation in the late Permian -- increasing greenhouse gases in the atmosphere that create warmer temperatures on Earth -- is similar to today.

Read more at Science Daily

More bioplastics do not necessarily contribute to climate change mitigation

The map assumes that a tax on conventional plastics increases the share of bioplastics to 5 percent of total plastic consumption. The darker the colouring, the greater the loss of forest. In the most severely affected areas, up to 1 percent of the forest cover is lost.
Bioplastics are often promoted as an environmentally and climate-friendly alternative to conventional petroleum-based plastics. However, a recent study from the University of Bonn suggests that shifting to plant-based plastics could be less beneficial than expected. Specifically, increased consumption of bioplastics in the coming years is likely to generate additional greenhouse gas emissions from cropland expansion on a global scale. The study will soon be published in the journal Environmental Research Letters and is already available online.

Plastics are usually made from petroleum, with associated impacts in terms of fossil fuel depletion but also climate change: the carbon embodied in fossil resources is released to the atmosphere when the plastic degrades or is burned, contributing to global warming. This corresponds to about 400 million metric tonnes of CO2 per year worldwide, almost half of the total greenhouse gases that Germany emitted to the atmosphere in 2017. It is estimated that by 2050, plastics could already be responsible for 15% of global CO2 emissions.

Bioplastics, on the other hand, are in principle climate-neutral, since they are based on renewable raw materials such as maize, wheat or sugar cane. These plants take up the CO2 they need from the air through their leaves. Producing bioplastics therefore consumes CO2, which compensates for the amount released later at end-of-life. Overall, their net greenhouse gas balance is assumed to be zero. Bioplastics are thus often regarded as an environmentally friendly alternative.

But at least at the current level of technology, the picture is probably not as clear-cut as often assumed. "The production of bioplastics in large amounts would change land use globally," explains Dr. Neus Escobar from the Institute of Food and Resource Economics at the University of Bonn. "This could potentially lead to an increase in the conversion of forest areas to arable land. However, forests absorb considerably more CO2 than maize or sugar cane annually, if only because of their larger biomass." Experience with biofuels has shown that this effect is not just theoretical speculation: the increasing demand for these "green" energy sources has driven massive deforestation in some tropical countries.

Dr. Neus Escobar and her colleagues Salwa Haddad, Prof. Dr. Jan Börner and Dr. Wolfgang Britz have simulated the effects of an increased demand for bioplastics in major producing countries. They used and extended a computer model that had already been used to calculate the impacts of biofuel policies. It is based on a database that depicts the entire world economy.

"For our experiment, we assume that the share of bioplastics relative to total plastic consumption increases to 5% in Europe, China, Brazil and the USA," she explains. "We run two different scenarios: a tax on conventional plastics compared with a subsidy on bioplastics." The most dramatic effects are found for the tax scenario: As fossil-based plastics consequently become considerably more expensive, the demand for them falls significantly. Worldwide, 0.08% fewer greenhouse gases would be released each year. However, part of this decline is due to economic distortions, as the tax also slows economic growth.

More fields, fewer forests

At the same time, the area of land used for agriculture increases in the tax scenario, while the forest area decreases by 0.17%. This translates into enormous quantities of CO2 being emitted into the atmosphere. "This is considered to occur as a one-time effect," Escobar explains. "Nevertheless, according to our calculations, it will take more than 20 years for it to be offset by the savings achieved by fossil substitution."

All in all, it takes a long time for the switch to bioplastics to pay off. Furthermore, the researchers estimate the societal cost of reducing emissions by one tonne of CO2 under this policy at more than 2,000 US dollars -- a high figure compared with biofuel mandates. A subsidy for bioplastics would have very different effects on the global economy. However, both the compensation period and the costs of climate change mitigation would remain almost the same as with the tax.
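
The "more than 20 years" figure follows from simple payback arithmetic: a one-time carbon debt from land-use change divided by the annual CO2 saved through substituting fossil plastics. The sketch below uses purely illustrative numbers, not the study's values.

```python
# Back-of-the-envelope sketch of the "carbon payback" logic described
# above, with purely illustrative numbers (not the study's figures):
# a one-time emission from converting forest to cropland must be repaid
# by the annual CO2 saved when bioplastics substitute for fossil plastics.
one_time_land_use_emission = 100.0     # Mt CO2, assumed carbon debt from deforestation
annual_saving_from_substitution = 4.0  # Mt CO2 per year, assumed

payback_years = one_time_land_use_emission / annual_saving_from_substitution
print(f"Payback time: {payback_years:.0f} years")  # > 20 years with these numbers
```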

"Consuming bioplastics from food crops in greater amounts does not seem to be an effective strategy to protect the climate," said the scientist. Especially because this would trigger many other negative effects, such as rising food prices. "But this would probably look different if other biomass resources were used for production, such as crop residues," says Escobar. "We recommend concentrating research efforts on these advanced bioplastics and bring them to market."

Read more at Science Daily

Unknown treasure trove of planets found hiding in dust

The Taurus Molecular Cloud, pictured here by ESA's Herschel Space Observatory, is a star-forming region about 450 light-years away. The image frame covers roughly 14 by 16 light-years and shows the glow of cosmic dust in the interstellar material that pervades the cloud, revealing an intricate pattern of filaments dotted with a few compact, bright cores -- the seeds of future stars.
"Super-Earths" and Neptune-sized planets could be forming around young stars in much greater numbers than scientists thought, new research by an international team of astronomers suggests.

Observing a sampling of young stars in a star-forming region in the constellation Taurus, researchers found many of them to be surrounded by structures that can best be explained as traces created by invisible, young planets in the making. The research, published in the Astrophysical Journal, helps scientists better understand how our own solar system came to be.

Some 4.6 billion years ago, our solar system was a roiling, billowing swirl of gas and dust surrounding our newborn sun. At the early stages, this so-called protoplanetary disk had no discernable features, but soon, parts of it began to coalesce into clumps of matter -- the future planets. As they picked up new material along their trip around the sun, they grew and started to plow patterns of gaps and rings into the disk from which they formed. Over time, the dusty disk gave way to the relatively orderly arrangement we know today, consisting of planets, moons, asteroids and the occasional comet.

Scientists base this scenario of how our solar system came to be on observations of protoplanetary disks around other stars that are young enough to currently be in the process of birthing planets. Using the Atacama Large Millimeter Array, or ALMA, comprising 45 radio antennas in Chile's Atacama Desert, the team performed a survey of young stars in the Taurus star-forming region, a vast cloud of gas and dust located a modest 450 light-years from Earth. When the researchers imaged 32 stars surrounded by protoplanetary disks, they found that 12 of them -- 40 percent -- have rings and gaps, structures that according to the team's measurements and calculations can be best explained by the presence of nascent planets.

"This is fascinating because it is the first time that exoplanet statistics, which suggest that super-Earths and Neptunes are the most common type of planets, coincide with observations of protoplanetary disks," said the paper's lead author, Feng Long, a doctoral student at the Kavli Institute for Astronomy and Astrophysics at Peking University in Bejing, China.

While some protoplanetary disks appear as uniform, pancake-like objects lacking any features or patterns, others show concentric bright rings separated by gaps. Because previous surveys focused on the brightest of these objects, which are easier to find, it was unclear how common disks with ring and gap structures really are. This study presents the results of the first unbiased survey, in that the target disks were selected independently of their brightness -- in other words, the researchers did not know whether any of their targets had ring structures when they selected them for the survey.

"Most previous observations had been targeted to detect the presence of very massive planets, which we know are rare, that had carved out large inner holes or gaps in bright disks," said the paper's second author Paola Pinilla, a NASA Hubble Fellow at the University of Arizona's Steward Observatory. "While massive planets had been inferred in some of these bright disks, little had been known about the fainter disks."

The team, which also includes Nathan Hendler and Ilaria Pascucci at the UA's Lunar and Planetary Laboratory, measured the properties of the rings and gaps observed with ALMA and analyzed the data to evaluate possible mechanisms that could cause them. While these structures may be carved by planets, previous research has suggested that they may also be created by other effects. In one commonly suggested scenario, so-called ice lines -- changes in the chemistry of the dust particles across the disk depending on their distance from the host star -- together with the star's magnetic field create pressure variations across the disk. These effects can create variations in the disk that manifest as rings and gaps.

The researchers performed analyses to test these alternative explanations and could not establish any correlations between stellar properties and the patterns of gaps and rings they observed.

"We can therefore rule out the commonly proposed idea of ice lines causing the rings and gaps," Pinilla said. "Our findings leave nascent planets as the most likely cause of the patterns we observed, although some other processes may also be at work."

Since detecting the individual planets directly is impossible because of the overwhelming brightness of the host star, the team performed calculations to get an idea of the kinds of planets that might be forming in the Taurus star-forming region. According to the findings, Neptune-sized gas planets or so-called super-Earths -- terrestrial planets of up to 20 Earth masses -- should be the most common. Only two of the observed disks could potentially harbor behemoths rivaling Jupiter, the largest planet in the solar system.

"Since most of the current exoplanet surveys can't penetrate the thick dust of protoplanetary disks, all exoplanets, with one exception, have been detected in more evolved systems where a disk is no longer present," Pinilla said.

Going forward, the research group plans to move ALMA's antennas farther apart, which should increase the array's resolution to around five astronomical units (one AU equals the average distance between the Earth and the sun), and to observe at other frequencies that are sensitive to different types of dust.
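
For a sense of why a larger antenna separation helps, the diffraction-limit relation theta ~ wavelength / baseline can be converted into a physical scale at the distance of Taurus. The wavelength and baseline below are assumed round numbers for illustration, not the survey's actual configuration.

```python
# Rough sketch of how an interferometer's baseline sets the linear
# resolution at the Taurus distance quoted above. theta ~ lambda / B is
# the usual diffraction-limit approximation; the wavelength and baseline
# are assumed values, not ALMA's actual setup for this survey.
AU_M = 1.496e11           # metres per astronomical unit
LY_M = 9.4607e15          # metres per light-year

wavelength = 1.3e-3       # m, typical ALMA millimetre band (assumed)
baseline = 8.0e3          # m, assumed maximum antenna separation
distance = 450 * LY_M     # Taurus star-forming region, ~450 light-years

theta = wavelength / baseline            # angular resolution in radians
linear_res_au = theta * distance / AU_M  # projected size at Taurus

print(f"Angular resolution: {theta * 206265 * 1e3:.1f} milliarcsec")
print(f"Linear resolution at Taurus: ~{linear_res_au:.1f} AU")
# Spreading the antennas farther apart (a larger baseline) shrinks theta,
# which is how the team hopes to reach ~5 AU scales.
```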

Read more at Science Daily

Dec 6, 2018

Harmful, unfounded myths about migration and health have become accepted, used to justify policies of exclusion

Silhouette of a family with children.
Public health protection and cost savings are often used as reasons to restrict migrants' access to health care, or to deny them entry. Yet, as the new UCL-Lancet Commission on Migration and Health lays out with new international data and analysis, the most common myths about migration and health are not supported by the available evidence and ignore the important contribution of migration to global economies.

In 2018, there were more than one billion people on the move, a quarter of whom were migrants crossing international borders. The Commission is the result of a two-year project led by 20 leading experts from 13 countries; it includes new data analysis and two original research papers, and represents the most comprehensive review of the available evidence to date. The report, including its recommendations to improve the public health response to migration, will be launched on 8th December at the UN Intergovernmental Conference to Adopt the Global Compact for Safe, Orderly and Regular Migration in Marrakech.

"Populist discourse demonises the very same individuals who uphold economies and bolster social care and health services. Questioning the deservingness of migrants for health care on the basis of inaccurate beliefs supports practices of exclusion, harming the health of individuals, our society, and our economies," says Commission Chair Professor Ibrahim Abubakar, UCL (UK). "Migration is the defining issue of our time. How the world addresses human mobility will determine public health and social cohesion for decades ahead. Creating health systems that integrate migrant populations will benefit entire communities with better health access for all and positive gains for local populations. Failing to do so could be more expensive to national economies, health security, and global health than the modest investments required to protect migrants' right to health, and ensure migrants can be productive members of society."

The Lancet editor Dr Richard Horton adds: "In too many countries, the issue of migration is used to divide societies and advance a populist agenda. With one billion people on the move today, growing populations in many regions of the world, and the rising aspirations of a new generation of young people, migration is not going away. Migrants commonly contribute more to the economy than they cost, and how we shape their health and wellbeing today will impact our societies for generations to come. There is no more pressing issue in global health."

Myths about migration and health not supported by the available evidence:

Are high income countries being overwhelmed by migrants?

Discussions about migration often focus on rising numbers of people crossing international borders and overwhelming high-income countries, but changes in migration are more complex. Although international migration receives the most political and public attention, most movement globally is internal migration. A quarter of all migrants (an estimated 258 million people) are international migrants. In the past four decades, the percentage of the world's population that is considered an international migrant has changed very little -- from 2.9% in 1990 to 3.4% in 2017 globally.

Most international migrants are labour migrants (approximately 65%) -- and a much smaller proportion are refugees and asylum seekers.

While high-income countries have seen a greater rise in the percentage of international migrants (from 7.6% in 1990 to 13.4% in 2017), they are more likely to be students who pay for their education or labour migrants who are net contributors to the economy. Refugees make up a larger proportion of the total population in low income countries compared to high income countries (0.7% vs 0.2%).

Are migrants damaging economies?

An overwhelming consensus of evidence exists on the positive economic benefits of migration, yet it is insufficiently acknowledged. In advanced economies, each 1% increase in migrants in the adult population increases gross domestic product per person by up to 2%. Migration also contributes to global wealth distribution: an estimated US$613 billion was sent by migrants to their families in their countries of origin in 2017. Approximately three quarters of these remittances go to low- and middle-income countries -- an amount three times larger than official development assistance.

Are migrants a burden on health services?

Migrants constitute a substantial proportion of the health care workforce in many high-income countries. Rather than being a burden, migrants are more likely to bolster services by providing medical care, teaching children, caring for older people, and supporting understaffed services. In the UK, 37% of doctors received their medical qualification in another country.

A new, comprehensive systematic review and meta-analysis concludes that international migrants in high-income countries have lower rates of mortality compared to general populations across the majority of disease categories. The study used mortality estimates on more than 15.2 million migrants from 92 countries and found that international migrants had lower rates of deaths for cardiovascular, digestive, endocrine, neoplasms, nervous and respiratory diseases, mental and behavioural disorders and injuries than people in the general population in the receiving country. There was no evidence of a difference for blood, genitourinary, or musculoskeletal disorders.

The only two exceptions were for infections such as viral hepatitis, tuberculosis, and HIV, and external causes, such as assault, where migrants had increased rates of mortality. However, as the report also highlights, several studies (eg on tuberculosis) have shown that the risk of transmission of infections is elevated only within migrant communities, and is negligible in host populations.

The findings are most likely to apply to international migrants in high-income countries who are studying, working, or have joined family members there. Vulnerable groups, such as refugees, asylum seekers, and undocumented migrants, may have different health needs, but, as the authors note, rather than form policies based on exceptions, evidence about the health benefits of migration should be at the forefront of decisions.

Are migrants disease carriers that pose risks to resident populations?

The stereotype of migrants as disease carriers is perhaps one of the most prevalent, and the most harmful. However, there is no systematic association between migration and importation of infectious diseases, and the evidence shows that the risk of transmission from migrating populations to host populations is generally low. Studies on tuberculosis suggest that the risk of transmission is elevated within migrant households and communities, but not in host populations.

Migrants may come from regions with a higher disease burden, especially if they come from regions of conflict with weak public health systems. But illness and infection can also be acquired during transit -- for example, air travel can facilitate the rapid spread of infection. Indeed, recent examples of the spread of resistant pathogens were driven mainly by international travel, tourism, and the movement of livestock rather than by migration.

Strong public health systems are needed to prevent outbreaks of disease, whether associated with migration or not.

Do migrants have higher fertility rates than host populations?

Populist rhetoric often claims that migrants have many more children than host populations. The Commission collates data from several long-term studies that suggest the birth rates among migrants are barely at the level of population replacement (2.1 births per woman) and often falling. A study of six European countries found that fertility rates among migrant women were, in general, lower than host populations.

Studies in India and Ethiopia, for example, have shown that internal migrants are more likely to use contraception than host populations. Ensuring access to services is key to ensuring the sexual and reproductive healthcare of migrant women and girls.

Unfounded myths: harmful to individuals and society

Unfounded myths about migration have wide-ranging impacts on how migrants are treated within society. Despite evidence that migrants bring health benefits to societies, many men and women who migrate are subjected to laws, restrictions, and discrimination that put them at risk of ill health. Protection of the public is often invoked as a reason for denial of entry, detention or deportation, but too often these policies leave migrants facing worse health situations.

The Commission calls for governments to improve migrants' access to services, strengthen migrants' right to health and tackle the wider determinants of migrant health, including taking a zero-tolerance approach to racism and discrimination.

Restricting entry based on health status is increasingly common. In Australia, a permanent residency application can be rejected because the applicant has a health condition -- the five most common reasons being intellectual impairment, functional impairment, HIV, cancer and renal disease. Thirty-five countries have imposed some form of travel ban on people with HIV. Too often, policies are not based on the overall contribution of migrants to host societies, but only on their costs to the state. Restrictions on entry or deportation for diseases with a low risk of casual transmission are impermissible on both public health and human rights grounds.

Linking health status to migration enforcement also reinforces distrust in the health profession and limits migrants' ability to access health care on a non-discriminatory basis. The fear of deportation can mean migrants will not seek health care or assistance when needed, hindering individual and public health. In practice, health-related enforcement regimes can pressure health workers to act as immigration control agents. The Commission points to a growing trend of states limiting access to health care for migrants, despite commitments to provide "health for all." In the UK, the hostile environment policy was highlighted by the Windrush scandal of 2018, with long-term migrants being deported midway through medical treatment. Upfront charging regulations are still in place.

"Migrants are healthier and contribute to our economy and the NHS. There is no evidence that migrants are a drain on the NHS or that they spread infectious disease. Exclusion of migrants in health systems and the increasing negative rhetoric is political and not evidence based. The hostile environment towards migrants in the UK has led directly to migrants and British citizens being denied health care, with direct severe public health and health economic consequences. Migrants constitute a considerable portion of the heath care workforce in the UK, making important contributions to the country which should be recognised," says Professor Abubakar.

States are increasingly treating unauthorised border crossings as a criminal offence, leading to detention, at times indefinitely. Indefinite offshore detention of migrants on Nauru was introduced as an immigration policy in Australia in 2013, and the USA recently announced a zero-tolerance policy resulting in migrants being arrested or jailed and children separated from their parents. Such detention constitutes a clear violation of international law, and findings from a systematic review of 38 studies show that detention is associated with negative health outcomes, especially for mental health.

"Contrary to the current political narrative portraying migrants as disease carriers who are a blight on society, migrants are an essential part of economic stability in the US. The separation of migrant children from their parents creates long term psychological damage -- and is a cruel and unnecessary aspect of US policy. The criminalization and detention of migrants seeking internationally protected refuge violates international law, and puts them at greater risk of ill health. Migrants are vital to our wellbeing as a society. Addressing the healthcare needs of migrant populations is an essential strategy to stemming costs associated with any avoidable disease burden in these populations," says co-author Professor Terry McGovern, Columbia University, USA.

Finally, the health of migrants depends on the social and structural context of their journey and destination. Migration-related discrimination is a profound determinant of health, especially mental health and social wellbeing. Access to justice and education are also important determinants of health. Yet a study of 28 developed and developing countries found that nearly half did not allow immediate access to education for irregular migrant children, and migrants face many barriers to accessing justice, through poor information, employer intimidation, language barriers or unfamiliarity with the system.

Professor Bernadette Kumar, Norwegian Institute for Public Health (Norway) adds: "Too often, government policies prioritise the politics of xenophobia and racism over their responsibilities to act forcefully to counter them. Racial and ethnic discrimination fuel the exclusion of migrant populations, not only violating the rights of individuals, but hindering social cohesion and progress of society at large. Racism and prejudice should be confronted with a zero-tolerance approach."

The Commission is accompanied by four linked Comments, including by Louise Arbour, UN Special Representative for International Migration; David Miliband and Mesfin Teklu Tessema, International Rescue Committee; Kolitha Wickramage, International Organization for Migration, and Giuseppe Annunziata, Regional Office for Europe, WHO; and Walid Ammar, Director General of the Lebanese Ministry of Public Health.

Commission co-author Dr Nyovani Madise, African Institute for Development Policy (Kenya), adds: "Africans have always been highly mobile people, moving mostly within national and regional borders. The Commission's report confirms that migrants boost the economies of their destination countries while many also remit money back to their places of origin. Most African countries find themselves as host to refugees and displaced people running away from conflicts in neighbouring countries. Migration policies in Africa must take on board the Commission's recommendations to provide good healthcare for all people on the move and to acknowledge the positive benefits of migration when formulating and implementing development policies."

Read more at Science Daily

Largest continuous oil and gas resource potential ever

Assessment units of the Wolfcamp Shale and Bone Spring Formation in the Delaware Basin of Texas and New Mexico.
Today, the U.S. Department of the Interior announced that the Wolfcamp Shale and the overlying Bone Spring Formation in the Delaware Basin portion of Texas and New Mexico's Permian Basin province contain an estimated mean of 46.3 billion barrels of oil, 281 trillion cubic feet of natural gas, and 20 billion barrels of natural gas liquids, according to an assessment by the U.S. Geological Survey (USGS). The estimate is for continuous (unconventional) oil and consists of undiscovered, technically recoverable resources.

"Christmas came a few weeks early this year," said U.S. Secretary of the Interior Ryan Zinke. "American strength flows from American energy, and as it turns out, we have a lot of American energy. Before this assessment came down, I was bullish on oil and gas production in the United States. Now, I know for a fact that American energy dominance is within our grasp as a nation."

"In the 1980's, during my time in the petroleum industry, the Permian and similar mature basins were not considered viable for producing large new recoverable resources. Today, thanks to advances in technology, the Permian Basin continues to impress in terms of resource potential. The results of this most recent assessment and that of the Wolfcamp Formation in the Midland Basin in 2016 are our largest continuous oil and gas assessments ever released," said Dr. Jim Reilly, USGS Director. "Knowing where these resources are located and how much exists is crucial to ensuring both our energy independence and energy dominance."

Although the USGS has previously assessed conventional oil and gas resources in the Permian Basin province, this is the first assessment of continuous resources in the Wolfcamp shale and Bone Spring Formation in the Delaware Basin portion of the Permian. Oil and gas companies are currently producing oil here using both traditional vertical well technology and horizontal drilling and hydraulic fracturing.

The Wolfcamp shale in the Midland Basin portion of the Permian Basin province was assessed separately in 2016, and at that time it was the largest assessment of continuous oil conducted by the USGS. The Delaware Basin assessment of the Wolfcamp Shale and Bone Spring Formation is more than two times larger than that of the Midland Basin. The Permian Basin province includes a series of basins and other geologic formations in West Texas and southern New Mexico. It is one of the most productive areas for oil and gas in the entire United States.

"The results we've released today demonstrate the impact that improved technologies such as hydraulic fracturing and directional drilling have had on increasing the estimates of undiscovered, technically recoverable continuous (i.e., unconventional) resources," said Walter Guidroz, Program Coordinator of the USGS Energy Resources Program.

Read more at Science Daily

An exoplanet inflated like a balloon

Artist's impression of the exoplanet HAT-P-11b with its extended helium atmosphere blown away by the star, an orange dwarf star smaller, but more active, than the Sun.
Although helium is a rare element on Earth, it is ubiquitous in the Universe. After hydrogen, it is the main component of stars and giant gaseous planets. Despite its abundance, helium was only recently detected in the atmosphere of a gaseous giant, by an international team including astronomers from the University of Geneva (UNIGE), Switzerland. The team, this time led by Genevan researchers, has now observed in detail, and for the first time, how this gas escapes from the overheated atmosphere of an exoplanet that is literally inflated with helium. The results are published in Science.

Helium is the second most abundant element in the Universe. Although predicted since 2000 to be one of the best possible tracers of the atmospheres of exoplanets -- planets orbiting stars other than the Sun -- it took astronomers 18 years to actually detect it. It was hard to spot because its observational signature lies in the infrared, out of range for most of the instruments used previously. The discovery occurred earlier this year thanks to Hubble Space Telescope observations, which proved difficult to interpret. Team members from UNIGE, members of the National Centre for Competence in Research PlanetS, had the idea of pointing another telescope, equipped with a brand-new instrument -- a spectrograph called Carmenes -- at the planet.

Detecting colours of planets with Carmenes

A spectrograph decomposes the light of a star into its component colours, like a rainbow. The "resolution" of a spectrograph indicates how many distinct colours it can reveal. While the human eye cannot distinguish any colour beyond red without an adapted camera, the infrared eye of Hubble is capable of identifying hundreds of colours there. This proved sufficient to identify the coloured signature of helium. The Carmenes instrument, installed on the 4-metre telescope at the Calar Alto observatory in Andalusia, Spain, is capable of identifying more than 100,000 colours in the infrared!
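
Reading "number of colours" loosely as the resolving power R = lambda / delta-lambda, a short calculation shows what such resolution buys: the smallest measurable Doppler shift scales roughly as c / R. This interpretation is an illustration, not a statement from the paper.

```python
# Loose illustration of what a resolving power of ~100,000 means in
# practice, treating the "number of colours" as the resolving power
# R = lambda / delta_lambda (an interpretation, not a claim from the
# paper). Higher R resolves smaller wavelength -- and hence velocity --
# differences, via delta_v ~ c / R.
C_KM_S = 299_792.458  # speed of light, km/s

for resolving_power in (100, 1_000, 100_000):
    delta_v = C_KM_S / resolving_power
    print(f"R = {resolving_power:>7,}: velocity resolution ~ {delta_v:,.0f} km/s")

# At R ~ 100,000 the instrument distinguishes Doppler shifts of a few km/s,
# which is what makes it possible to track the speed of escaping helium.
```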

This high spectral resolution allowed the team to observe the position and speed of helium atoms in the upper atmosphere of a gaseous Neptune-sized exoplanet four times larger than the Earth. Located in the constellation Cygnus (the Swan), 124 light-years from home, HAT-P-11b is a "warm Neptune" (a decent 550°C!), twenty times closer to its star than the Earth is to the Sun. "We suspected that this proximity to the star could impact the atmosphere of this exoplanet," says Romain Allart, PhD student at UNIGE and first author of the study. "The new observations are so precise that the exoplanet atmosphere is undoubtedly inflated by the stellar radiation and escapes to space," he adds.

A planet inflated with helium

These observations are supported by a numerical simulation led by Vincent Bourrier, co-author of the study and member of the European project FOUR ACES*. The simulation makes it possible to track the trajectory of the helium atoms: "Helium is blown away from the day side of the planet to its night side at over 10,000 km/h," Vincent Bourrier explains. "Because it is such a light gas, it escapes easily from the attraction of the planet and forms an extended cloud all around it." This gives HAT-P-11b the shape of a helium-inflated balloon.

This result opens a new window to observe the extreme atmospheric conditions prevailing in the hottest exoplanets. The Carmenes observations demonstrate that such studies, long thought feasible only from space, can be achieved with greater precision by ground-based telescopes equipped with the right kind of instruments. "These are exciting times for the search of atmospheric signatures in exoplanets," says Christophe Lovis, senior lecturer at UNIGE and co-author of the study. In fact, UNIGE astronomers are also heavily involved in the design and exploitation of two new high-resolution infrared spectrographs, similar to Carmenes. One of them, called SPIRou, has just started an observational campaign from Hawaii, while the UNIGE Department of astronomy houses the first tests of the Near Infrared Planet Searcher (NIRPS), which will be installed in Chile at the end of 2019. "This result will enhance the interest of the scientific community for these instruments. Their number and their geographical distribution will allow us to cover the entire sky, in search for evaporating exoplanets," concludes Lovis.

Read more at Science Daily

An ancient strain of plague may have led to the decline of Neolithic Europeans

This image shows the remains of a 20-year-old woman (Gokhem2) from around 4,900 years before present, who was killed by the first plague pandemic. She was one of the victims of a plague pandemic that likely led to the decline of Neolithic societies in Europe.
A team of researchers from France, Sweden, and Denmark have identified a new strain of Yersinia pestis, the bacteria that causes plague, in DNA extracted from 5,000-year-old human remains. Their analyses, publishing December 6 in the journal Cell, suggest that this strain is the closest ever identified to the genetic origin of plague. Their work also suggests that plague may have been spread among Neolithic European settlements by traders, contributing to the settlements' decline at the dawn of the Bronze Age.

"Plague is maybe one of the deadliest bacteria that has ever existed for humans. And if you think of the word 'plague,' it can mean this infection by Y. pestis, but because of the trauma plague has caused in our history, it's also come to refer more generally to any epidemic. The kind of analyses we do here let us go back through time and look at how this pathogen that's had such a huge effect on us evolved," says senior author Simon Rasmussen, a metagenomics researcher at the Technical University of Denmark and the University of Copenhagen.

To better understand the evolutionary history of the plague, Rasmussen and his colleagues trawled through publicly available genetic data from ancient humans, screening for sequences similar to more modern plague strains. They found a strain they had never seen before in the genetic material of a 20-year-old woman who died approximately 5,000 years ago in Sweden. The strain had the same genes that make the pneumonic plague deadly today and traces of it were also found in another individual at the same grave site -- suggesting that the young woman did likely die of the disease.

This strain of the plague is the oldest that's ever been discovered. But what makes it particularly interesting is that, by comparing it to other strains, the researchers were able to determine that it's also the most basal -- meaning that it's the closest strain we have to the genetic origin of Y. pestis. It likely diverged from other strains around 5,700 years ago, while the plague that was common in the Bronze Age and the plague that is the ancestor of the strains in existence today diverged 5,300 and 5,100 years ago, respectively. This suggests that there were multiple strains of plague in existence at the end of the Neolithic period.

Rasmussen also believes that this finding offers a new theory about how plague spreads. Massive human migrations from the Eurasian steppe down into Europe are known to have occurred around 5,000 years ago, but how these cultures were able to displace the Neolithic farming culture that was present in Europe at the time is still debated. Previous researchers have suggested that the invaders brought the plague with them, wiping out the large settlements of Stone Age farmers when they arrived.

But if the strain of plague the researchers found in the Swedish woman diverged from the rest of Y. pestis 5,700 years ago, that means it likely evolved before these migrations began and around the time that the Neolithic European settlements were already starting to collapse.

At the time, mega-settlements of 10,000-20,000 inhabitants were becoming common in Europe, which made job specialization, new technology, and trade possible. But they also may have been the breeding ground for plague. "These mega-settlements were the largest settlements in Europe at that time, ten times bigger than anything else. They had people, animals, and stored food close together, and, likely, very poor sanitation. That's the textbook example of what you need to evolve new pathogens," says Rasmussen.

"We think our data fit. If plague evolved in the mega-settlements, then when people started dying from it, the settlements would have been abandoned and destroyed. This is exactly what was observed in these settlements after 5,500 years ago. Plague would also have started migrating along all the trade routes made possible by wheeled transport, which had rapidly expanded throughout Europe in this period," he says.

Eventually, he suggests, the plague would have arrived through these trade interactions at the small settlement in Sweden where the woman his team studied lived. Rasmussen argues that the woman's own DNA also provides further evidence for this theory -- she isn't genetically related to the people who invaded Europe from the Eurasian steppe, supporting the idea that this strain of plague arrived before the mass migrations did. The archaeology also supports this hypothesis, as there were still no signs of the invaders by the time she died.

Of course, there are some limitations to what the data from this study can tell us. Most importantly, the researchers have not yet identified the plague in individuals from the mega-settlements where it may have evolved. "We haven't really found the smoking gun, but it's partly because we haven't looked yet. And we'd really like to do that, because if we could find plague in those settlements, that would be strong support for this theory," says Rasmussen.

Regardless, he believes that this study is a step toward understanding how plague -- and other pathogens -- became deadly. "We often think that these superpathogens have always been around, but that's not the case," he says. "Plague evolved from an organism that was relatively harmless. More recently, the same thing happened with smallpox, malaria, Ebola, and Zika. This process is very dynamic -- and it keeps happening. I think it's really interesting to try to understand how we go from something harmless to something extremely virulent."

Read more at Science Daily

Dec 4, 2018

Volcanoes fed by 'mush' reservoirs rather than molten magma chambers

Etna eruption - Catania, Sicily.
Volcanoes are not fed by molten magma formed in large chambers, finds a new study, overturning classic ideas about volcanic eruptions.

Instead, the study suggests that volcanoes are fed by so-called 'mush reservoirs' -- areas of mostly solid crystals with magma in the small spaces between the crystals.

Our understanding of volcanic processes, including those leading to the largest eruptions, has been based on magma being stored in liquid-filled 'magma' chambers -- large, underground caves full of liquid magma. However, these have never been observed.

The new study, by researchers at Imperial College London and the University of Bristol and published today in Nature, suggests the fundamental assumption of a magma chamber needs a re-think.

Lead author Professor Matthew Jackson, from the Department of Earth Sciences and Engineering at Imperial, said: "We now need to look again at how and why eruptions occur from mush reservoirs. We can apply our findings to understanding volcanic eruptions with implications for public safety and also to understand the formation of metal ore deposits associated with volcanic systems."

In order to erupt, volcanoes need a source of magma -- melted, liquid rock -- containing relatively few solid crystals. Traditionally, this magma was thought to be formed and stored in a large underground cave, called a magma chamber.

Recent studies of magma chemistry have challenged this view, leading to the suggestion of the mush reservoir model, where smaller pools of magma sit in the small gaps between solid crystals. However, the mush reservoir model could not explain how magmas containing relatively few crystals arise and are delivered to volcanoes in order for them to erupt at the surface.

Now, with sophisticated modelling of mush reservoirs, the research team has come up with a solution. Within the mush reservoir scenario, the magma is less dense than the crystals, causing it to rise up through the spaces between them.

As it rises, the magma reacts with the crystals, melting them and creating local pockets of magma that contain relatively few crystals. It is these short-lived, crystal-poor pockets of magma that can lead to eruptions.
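
To give a feel for this buoyancy-driven percolation of melt through the pore space, here is a back-of-the-envelope Darcy-flow estimate. It is not the authors' model, and the permeability, density contrast and viscosity values are illustrative assumptions rather than figures from the study.

    # Back-of-the-envelope sketch, not the authors' model: Darcy-type estimate of
    # melt rising buoyantly through a crystal mush. All parameter values are
    # illustrative assumptions rather than figures from the study.
    def darcy_ascent_velocity(permeability_m2, delta_rho_kg_m3, melt_viscosity_pa_s, g=9.81):
        """Darcy flux (m/s) of melt driven upward by its density deficit."""
        return permeability_m2 * delta_rho_kg_m3 * g / melt_viscosity_pa_s

    # Assumed values: mush permeability ~1e-12 m^2, melt ~300 kg/m^3 lighter than
    # the surrounding crystals, melt viscosity ~100 Pa*s.
    v = darcy_ascent_velocity(1e-12, 300.0, 100.0)
    print(f"melt ascent ~ {v:.1e} m/s, or roughly {v * 3.15e7 * 1000:.1f} mm per year")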

Co-author Professor Stephen Sparks, from the University of Bristol's School of Earth Sciences, said: "A major mystery about volcanoes is that they were thought to be underlain by large chambers of molten rock. Such magma chambers, however, were very difficult to find.

"The new idea developed by geologists at Imperial and Bristol is that molten rock forms within largely crystalline hot rocks, spending most of its time in little pores within the rock rather than in large magma chambers. However, the rock melt is slowly squeezed out to form pools of melt, which can then erupt or form ephemeral magma chambers."

Read more at Science Daily

Dynamics of chromatin during organ and tissue regeneration

Researchers from the Department of Genetics, Microbiology and Statistics and the Institute of Biomedicine of the University of Barcelona (UB), in collaboration with the Centre for Genomic Regulation (CRG), have described the genes, and the regulatory elements controlling their expression, that are required during the process of tissue and organ regeneration. The study, which appears on the cover of the journal Genome Research, combines classic genetic analysis with new techniques for studying chromatin through next-generation sequencing, and provides a new perspective in the field of regenerative medicine.

The study's first authors are Elena Vizcaya-Molina (UB) and Cecilia C. Klein (UB-CRG), and the work was led by Montserrat Corominas (UB). Other collaborators are the researchers Roderic Guigó (CRG, Pompeu Fabra University), Florenci Serras (UB) and Rakesh K. Mishra (CCMB, Hyderabad, India).

Regeneration genes

In this article, the authors analysed the transcriptome of Drosophila melanogaster wing imaginal discs at different time points during regeneration. Through massive RNA sequencing, they identified the genes that are differentially expressed during the process, and found that more than 30% of these genes are located in gene clusters. A comparative analysis with other species (mouse and zebrafish) revealed a group of regeneration genes that are conserved across all of them. "Knowing which genes these organisms -- which are able to regenerate -- have in common can help us understand what is necessary to activate this process in organisms with more limited regenerative abilities, such as humans," notes Elena Vizcaya-Molina.
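
As a rough illustration of what "differentially expressed" means here -- a toy sketch, not the authors' actual pipeline or data -- one can compare normalized expression between control and regenerating samples and flag genes whose levels change strongly. Gene names and numbers below are hypothetical, and a real analysis would also require a statistical test across replicates.

    # Minimal illustrative sketch (not the authors' pipeline): flagging genes whose
    # expression changes between regenerating and control samples in a toy table
    # of normalized counts. Gene names and values are hypothetical.
    import math

    counts = {
        # gene: (mean control expression, mean regeneration expression)
        "geneA": (50.0, 400.0),
        "geneB": (120.0, 110.0),
        "geneC": (300.0, 20.0),
    }

    def log2_fold_change(control, regen, pseudocount=1.0):
        return math.log2((regen + pseudocount) / (control + pseudocount))

    for gene, (ctrl, regen) in counts.items():
        lfc = log2_fold_change(ctrl, regen)
        status = "differentially expressed" if abs(lfc) >= 1.0 else "unchanged"
        print(f"{gene}: log2FC = {lfc:+.2f} -> {status}")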

"This study shows the growing importance of omics and bioinformatics to understand basic biological processes," notes the postdoc CRG researcher and UB lecturer, Cecilia Klein. The combination of new sequencing techniques and bioinformatics analysis with the experimental study allows researchers to progress regarding the understanding of gene regulation, in this case, regeneration.

Regulatory elements in regeneration

In this study, the researchers also identified, for the first time, three kinds of regulatory elements related to regeneration: those that increase their activity during regeneration, those that are reused from other developmental stages or other tissues, and, lastly, a group of elements unique to regeneration. "These regulatory elements are DNA sequences able to direct and shape gene expression," says Elena Vizcaya. The authors also found that these elements could be activated by genes conserved among species (fly, mouse and zebrafish). "The ectopic activation of regeneration-specific regulatory elements could be a key tool to boost the regenerative ability of organs that cannot regenerate on their own," concludes Montserrat Corominas.

Read more at Science Daily

Life has a new ingredient

Somewhere in the hostile environment of early Earth, life was born.
Our prehistoric Earth, bombarded with asteroids and lightning, rife with bubbling geothermal pools, may not seem hospitable today. But somewhere in the chemical chaos of our early planet, life did form. How? For decades, scientists have attempted to create miniature replicas of infant Earth in the lab. There, they hunt for the primordial ingredients that created the essential building blocks for life.

It's attractive to chase our origin story. But this pursuit can bring more than just thrill. Knowledge of how Earth built its first cells could inform our search for extraterrestrial life. If we identify the ingredients and environment required to spark spontaneous life, we could search for similar conditions on planets across our universe.

Today, much of the origin-of-life research focuses on one specific building block: RNA. While some scientists believe that life formed from simpler molecules and only later evolved RNA, others look for evidence to prove (or disprove) that RNA formed first. A complex but versatile molecule, RNA stores and transmits genetic information and helps synthesize proteins, making it a capable candidate for the backbone of the first cells.

To verify this "RNA World Hypothesis," researchers face two challenges. First, they need to identify which ingredients reacted to create RNA's four nucleotides -- adenine, guanine, cytosine, and uracil (A, G, C, and U). And, second, they need to determine how RNA stored and copied genetic information in order to replicate itself.

So far, scientists have made significant progress finding precursors to C and U. But A and G remain elusive. Now, in a paper published in PNAS, Jack W. Szostak, Professor of Chemistry and Chemical Biology at Harvard University, along with first author and graduate student Seohyun (Chris) Kim, suggests that RNA could have started with a different set of nucleotide bases. In place of guanine, RNA could have relied on a surrogate -- inosine.

"Our study suggests that the earliest forms of life (with A, U, C, and I) may have arisen from a different set of nucleobases than those found in modern life (A, U, C, and G)," said Kim. How did he and his team arrive at this conclusion? Lab attempts to craft A and G, purine-based nucleotides, produced too many undesired side products. Recently, however, researchers discovered a way to make versions of adenosine and inosine -- 8-oxo-adenosine and 8-oxo-inosine -- from materials available on primeval Earth. So, Kim and his colleagues set out to investigate whether RNA constructed with these analogs could replicate efficiently.

But, the substitutes failed to perform. Like a cake baked with honey instead of sugar, the final product may look and taste similar, but it doesn't function as well. The honey-cake burns and drowns in liquid. The 8-oxo-purine RNA still performs, but it loses both the speed and accuracy needed to copy itself. If it replicates too slowly, it falls apart before completing the process. If it makes too many errors, it cannot serve as a faithful tool for propagation and evolution.

Despite their inadequate performance, the 8-oxo-purines brought an unexpected surprise. As part of the test, the team compared 8-oxo-inosine's abilities against a control, inosine. Unlike its 8-oxo counterpart, inosine enabled RNA to replicate with high speed and few errors. It "turns out to exhibit reasonable rates and fidelities in RNA copying reactions," the team concluded. "We propose that inosine could have served as a surrogate for guanosine in the early emergence of life."

Read more at Science Daily

NASA's OSIRIS-REx spacecraft arrives at asteroid Bennu

This image of Bennu was taken by the OSIRIS-REx spacecraft from a distance of around 50 miles (80 km).
NASA's Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft completed its 1.2 billion-mile (2 billion-kilometer) journey to arrive at the asteroid Bennu Monday. The spacecraft executed a maneuver that transitioned it from flying toward Bennu to operating around the asteroid.

Now, at about 11.8 miles (19 kilometers) from Bennu's Sun-facing surface, OSIRIS-REx will begin a preliminary survey of the asteroid. The spacecraft will commence flyovers of Bennu's north pole, equatorial region, and south pole, getting as close as nearly 4 miles (7 kilometers) above Bennu during each flyover.

The primary science goals of this survey are to refine estimates of Bennu's mass and spin rate, and to generate a more precise model of its shape. The data will help determine potential sites for later sample collection.

OSIRIS-REx's mission will help scientists investigate how planets formed and how life began, as well as improve our understanding of asteroids that could impact Earth. Asteroids are remnants of the building blocks that formed the planets and enabled life. Those like Bennu contain natural resources, such as water, organics and metals. Future space exploration and economic development may rely on asteroids for these materials.

"As explorers, we at NASA have never shied away from the most extreme challenges in the solar system in our quest for knowledge," said Lori Glaze, acting director for NASA's Planetary Science Division. "Now we're at it again, working with our partners in the U.S. and Canada to accomplish the Herculean task of bringing back to Earth a piece of the early solar system."

The mission's navigation team will use the preliminary survey of Bennu to practice the delicate task of navigating around the asteroid. The spacecraft will enter orbit around Bennu on Dec. 31, making Bennu -- which is only about 1,600 feet (492 meters) across, or about the length of five football fields -- the smallest object ever orbited by a spacecraft. It's a critical step in OSIRIS-REx's years-long quest to collect and eventually deliver at least two ounces (60 grams) of regolith -- dirt and rocks -- from Bennu to Earth.

Starting in October, OSIRIS-REx performed a series of braking maneuvers to slow the spacecraft down as it approached Bennu. These maneuvers also targeted a trajectory to set up Monday's maneuver, which initiates the first north pole flyover and marks the spacecraft's arrival at Bennu.

"The OSIRIS-REx team is proud to cross another major milestone off our list -- asteroid arrival," said Dante Lauretta, OSIRIS-REx principal investigator at the University of Arizona, Tucson. "Initial data from the approach phase show this object to have exceptional scientific value. We can't wait to start our exploration of Bennu in earnest. We've been preparing for this moment for years, and we're ready."

The OSIRIS-REx mission marks many firsts in space exploration. It will be the first U.S. mission to carry samples from an asteroid back to Earth, returning the largest sample from space since the Apollo era. It is the first to study a primitive B-type asteroid -- an asteroid rich in carbon and the organic molecules that make up life on Earth. It is also the first mission to study a potentially hazardous asteroid and try to determine the factors that alter such asteroids' courses and bring them close to Earth.

"During our approach toward Bennu, we have taken observations at much higher resolution than were available from Earth," said Rich Burns, the project manager of OSIRIS-REx at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "These observations have revealed an asteroid that is both consistent with our expectations from ground-based measurements and an exceptionally interesting small world. Now we embark on gaining experience flying our spacecraft about such a small body."

When OSIRIS-REx begins to orbit Bennu at the end of this month, it will come to within approximately three quarters of a mile (1.25 kilometers) of its surface. In February 2019, the spacecraft will begin efforts to globally map Bennu to determine the best site for sample collection. After the collection site is selected, the spacecraft will briefly touch the surface of Bennu to retrieve a sample. OSIRIS-REx is scheduled to return the sample to Earth in September 2023.

Read more at Science Daily

Dec 3, 2018

Artificial intelligence for studying the ancient human populations of Patagonia

Technological landscapes of nautical mobility (red circles, along with some blue circles that the algorithm classifies less reliably) and pedestrian mobility (orange and purple circles) in hunter-gatherer groups that lived in the extreme south of South America.
Argentine and Spanish researchers have used machine learning techniques to analyze the mobility patterns and technology of the hunter-gatherer groups that inhabited the Southern Cone of America, from the time they arrived about 12,000 years ago until the end of the 19th century. Big data from archaeological sites located in the extreme south of Patagonia were used for this study.

The presence of humans on the American continent dates back to at least 14,500 years ago, according to datings made at archaeological sites such as Monte Verde, in Chile's Los Lagos Region. But the first settlers continued moving towards the southernmost confines of America.

Now, researchers from Argentina's National Council for Scientific and Technical Research (CONICET) and two Spanish institutions (the Spanish National Research Council and the University of Burgos) have analyzed the relationships between mobility and technology developed by those societies that originated in the far south of Patagonia.

The study, published in the Royal Society Open Science journal, is based on an extensive database of all available archaeological evidence of human presence in this region, from the time the first groups arrived in the early Holocene (12,000 years ago) until the end of the 19th century.

This was followed by the application of machine learning techniques -- statistical methods that allow a computer to learn from large amounts of data (in this case, big data on the characteristic technological elements of the sites) in order to carry out classifications and predictions.

"It is by means of automatic classification algorithms that we have identified two technological packages or 'landscapes': one that characterizes pedestrian hunter-gatherer groups (with their own stone and bone tools) and the other characterizing those that had nautical technology, such as canoes, harpoons and mollusc shells used to make beads," explains Ivan Briz i Godino, an archaeologist of the National Council for Scientific and Technical Research (CONICET) of Argentina and co-author of the work.

"In future excavations, when sets of technological elements such as those we have detected appear, we'll be able to directly deduce the type of mobility of the group or the connections with other communities," adds Briz.

The results of the study have also made it possible to obtain maps with the settlements of the two communities, and this, in turn, has made it possible to locate large regions in which they interacted and shared their technological knowledge. In the case of groups with nautical technology, it has been confirmed that they arrived at around the beginning of the Mid-Holocene (some 6,000 years ago) from the channels and islands of the South Pacific, moving along the coast of what is now Chile.

Read more at Science Daily

Scientists use EEG to decode how people navigate complex sequences

To perform a song or a dance, or to write computer code, people need to call upon the basic elements of their craft and then order and recombine them in creative ways.

University of Oregon scientists have captured how the brain builds such complex sequences from a small set of basic elements.

Doctoral student Atsushi Kikumoto and Ulrich Mayr, a professor in the Department of Psychology, detailed their National Science Foundation-supported work in a paper published online Nov. 14 in the journal eLife.

In the study, electrical activity and oscillation patterns were measured by electroencephalogram (EEG), using electrodes on the scalp, in 88 study participants -- all university students -- while they reproduced complex sequential patterns.

"Basic elements -- the alphabet of any type of performance -- need to be combined in a certain order within larger chunks, and these chunks, in turn, need to be combined in a certain order to arrive at the complete sequence," said Mayr, who directs the UO's Cognitive Dynamics Lab. "This is at the heart of a lot of human creativity.

"For example, if you are playing a piece on the piano, your brain needs to keep track in which larger musical phrase, which bar, and which exact note you are currently at," he said. "So, you need a kind of mental addressing system. It is this addressing system that we discovered with our EEG methods."

Subjects memorized sequential patterns that consisted of three different angles of lines as basic elements. When participants subsequently tried to reconstruct the succession of lines, the EEG showed oscillatory patterns that Kikumoto and Mayr decoded using machine learning techniques.

It turns out that the EEG patterns kept track of the precise location within the sequence -- which chunk, which position within the chunk, and which line angle people were focusing on.
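
A hedged sketch of the general decoding approach -- not the authors' exact pipeline -- is shown below: per-trial spectral features are fed to a cross-validated classifier that predicts a position label. The data here are random stand-ins, so accuracy sits near chance; with real EEG features, above-chance decoding of chunk and position is the signature of interest.

    # Hedged sketch (not the authors' exact pipeline): decoding a sequence-position
    # label from EEG-like oscillatory features with a cross-validated classifier.
    # The feature matrix below is a random stand-in for spectral features per trial.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_features = 200, 64          # e.g., band power per channel, per trial
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 3, size=n_trials)   # e.g., position within a chunk (0, 1, 2)

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance ~0.33 for three classes)")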

The findings from the basic research help to understand why some people have difficulties with executing complex sequential plans, Mayr said.

Within the hierarchically organized addressing system, not everyone showed a robust EEG expression of the more abstract levels, he said. Only people with strong working memory scores -- a reflection of the capacity of an individual's mental workspace -- seemed to have a crisp record of the current chunk.

"Without the chunk information they literally got lost within the mental landscape of the overall sequence," he said.

Read more at Science Daily

In death, Lonesome George reveals why giant tortoises live so long

The late, famous tortoise, Lonesome George, Galapagos Islands
Lonesome George's species may have died with him in 2012, but he and other giant tortoises of the Galapagos are still providing genetic clues to individual longevity through a new study by researchers at Yale University, the University of Oviedo in Spain, the Galapagos Conservancy, and the Galapagos National Park Service.

Genetic analysis of DNA from Lonesome George and samples from other giant tortoises of the Galapagos -- which can live more than 100 years in captivity -- found they possessed a number of gene variants linked to DNA repair, immune response, and cancer suppression not possessed by shorter-lived vertebrates.

The findings were reported Dec. 3 in the journal Nature Ecology & Evolution.

"Lonesome George is still teaching us lessons," said Adalgisa "Gisella" Caccone, senior researcher in Yale's Department of Ecology and Evolutionary Biology and co-senior author of the paper.

In 2010, Caccone began sequencing the whole genome of Lonesome George, the last of the species Chelonoidis abingdonii, to study the evolution of the tortoise population on the Galapagos. Carlos Lopez-Otin at the University of Oviedo in Spain analyzed these data, along with data from other tortoise species, to look for gene variants associated with longevity.

"We had previously described nine hallmarks of aging, and after studying 500 genes on the basis of this classification, we found interesting variants potentially affecting six of those hallmarks in giant tortoises, opening new lines for aging research," Lopez-Otin said.

From Science Daily

LIGO and Virgo announce four new gravitational-wave detections

LIGO and Virgo black holes.
On Saturday, December 1, scientists attending the Gravitational Wave Physics and Astronomy Workshop in College Park, Maryland, presented new results from the National Science Foundation's LIGO (Laser Interferometer Gravitational-Wave Observatory) and the European-based Virgo gravitational-wave detector regarding their searches for coalescing cosmic objects, such as pairs of black holes and pairs of neutron stars. The LIGO and Virgo collaborations have now confidently detected gravitational waves from a total of 10 stellar-mass binary black hole mergers and one merger of neutron stars, which are the dense, spherical remains of stellar explosions. Six of the black hole merger events had been reported before, while four are newly announced.

From September 12, 2015, to January 19, 2016, during the first LIGO observing run since undergoing upgrades in a program called Advanced LIGO, gravitational waves from three binary black hole mergers were detected. The second observing run, which lasted from November 30, 2016, to August 25, 2017, yielded one binary neutron star merger and seven additional binary black hole mergers, including the four new gravitational-wave events being reported now. The new events are known as GW170729, GW170809, GW170818, and GW170823, in reference to the dates they were detected.

All of the events are included in a new catalog, also released Saturday, with some of the events breaking records. For instance, the new event GW170729, detected in the second observing run on July 29, 2017, is the most massive and distant gravitational-wave source ever observed. In this coalescence, which happened roughly 5 billion years ago, an equivalent energy of almost five solar masses was converted into gravitational radiation.
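
For readers who want to check that figure, converting roughly five solar masses into energy via E = mc^2 (using standard values for the solar mass and the speed of light) gives on the order of 9 x 10^47 joules:

    # Quick check of the mass-energy figure above: E = m c^2 for ~5 solar masses.
    M_SUN = 1.989e30      # kg
    C = 2.998e8           # m/s
    energy_joules = 5 * M_SUN * C**2
    print(f"~{energy_joules:.1e} J radiated as gravitational waves")  # roughly 9e47 J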

GW170814 was the first binary black hole merger measured by the three-detector network, and allowed for the first tests of gravitational-wave polarization (analogous to light polarization).

The event GW170817, detected three days after GW170814, represented the first time that gravitational waves were ever observed from the merger of a binary neutron star system. What's more, this collision was seen in gravitational waves and light, marking an exciting new chapter in multi-messenger astronomy, in which cosmic objects are observed simultaneously in different forms of radiation.

One of the new events, GW170818, which was detected by the global network formed by the LIGO and Virgo observatories, was very precisely pinpointed in the sky. The position of the binary black holes, located 2.5 billion light-years from Earth, was identified with a precision of 39 square degrees. That makes it the next best localized gravitational-wave source after the GW170817 neutron star merger.

Caltech's Albert Lazzarini, Deputy Director of the LIGO Laboratory, says "The release of four additional binary black hole mergers further informs us of the nature of the population of these binary systems in the universe and better constrains the event rate for these types of events."

"In just one year, LIGO and VIRGO working together have dramatically advanced gravitational- wave science, and the rate of discovery suggests the most spectacular findings are yet to come," says Denise Caldwell, Director of NSF's Division of Physics. "The accomplishments of NSF's LIGO and its international partners are a source of pride for the agency, and we expect even greater advances as LIGO's sensitivity becomes better and better in the coming year."

"The next observing run, starting in Spring 2019, should yield many more gravitational-wave candidates, and the science the community can accomplish will grow accordingly," says David Shoemaker, spokesperson for the LIGO Scientific Collaboration and senior research scientist in MIT's Kavli Institute for Astrophysics and Space Research. "It's an incredibly exciting time."

"It is gratifying to see the new capabilities that become available through the addition of Advanced Virgo to the global network," says Jo van den Brand of Nikhef (the Dutch National Institute for Subatomic Physics) and VU University Amsterdam, who is the spokesperson for the Virgo Collaboration. "Our greatly improved pointing precision will allow astronomers to rapidly find any other cosmic messengers emitted by the gravitational-wave sources." The enhanced pointing capability of the LIGO-Virgo network is made possible by exploiting the time delays of the signal arrival at the different sites and the so-called antenna patterns of the interferometers.

"The new catalog is another proof of the exemplary international collaboration of the gravitational wave community and an asset for the forthcoming runs and upgrades" adds EGO Director Stavros Katsanevas.

The scientific papers describing these new findings, initially published on the arXiv repository of electronic preprints, present detailed information in the form of a catalog of all the gravitational-wave detections and candidate events from the two observing runs, and describe the characteristics of the merging black hole population. Most notably, the analysis finds that almost all black holes formed from stars are lighter than 45 times the mass of the Sun. Thanks to more advanced data processing and better calibration of the instruments, the accuracy of the astrophysical parameters of the previously announced events has improved considerably.

Read more at Science Daily

Combination of space-based and ground-based telescopes reveals more than 100 exoplanets

This is an artist's impression of the planets orbiting K2-187.
An international team of astronomers using a combination of ground-based and space-based telescopes has reported more than 100 extrasolar planets (hereafter, exoplanets) in only three months. These planets are quite diverse and are expected to play a large role in developing the research field of exoplanets and life in the Universe.

Exoplanets, planets that revolve around stars other than the Sun, have been actively researched in recent years. One reason is the success of the Kepler Space Telescope, which launched in 2009 to search for exoplanets. If a planet crosses (transits) in front of its parent star, the observed brightness of the star drops by a small amount. The Kepler Space Telescope detected many exoplanets using this method. However, such dimming can also have other causes, so confirming that the signals are really produced by exoplanets is very important. The Kepler Space Telescope experienced mechanical trouble in 2013, which led to a successor mission called K2. Astronomers around the world are competing to confirm exoplanets suggested by the K2 data.
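
The size of the dip the transit method looks for is set, to a good approximation, by the square of the planet-to-star radius ratio. A quick sketch with standard solar-system values (not figures from the K2 survey itself) shows why Earth-sized transits are far harder to detect than Jupiter-sized ones.

    # Rough sketch of the transit method's signal size: the fractional dip in
    # starlight is approximately (planet radius / star radius) squared.
    R_SUN = 6.957e8       # m
    R_EARTH = 6.371e6     # m
    R_JUPITER = 6.991e7   # m

    def transit_depth(r_planet, r_star=R_SUN):
        return (r_planet / r_star) ** 2

    print(f"Earth-size planet, Sun-like star:   {transit_depth(R_EARTH):.2e}  (~0.008% dip)")
    print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER):.2e}  (~1% dip)")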

An international research team involving researchers at the University of Tokyo and Astrobiology Center of the National Institutes of Natural Sciences investigated 227 K2 exoplanet candidates using other space telescopes and ground-based telescopes. They confirmed that 104 of them are really exoplanets. Seven of the confirmed exoplanets have ultra-short orbital periods less than 24 hours. The formation process of exoplanets with such short orbital periods is still unclear. Further study of these ultra-short period planets will help to advance research into the processes behind their formation. They also confirmed many low-mass rocky exoplanets with masses less than twice that of the Earth as well as some planetary systems with multiple exoplanets.

Mr. John Livingston, a Ph.D. student at the University of Tokyo and lead author of the papers reporting the exoplanets, explains, "Although the Kepler Space Telescope has been officially retired by NASA, its successor space telescope, called TESS, has already started collecting data. In just the first month of operations, TESS has already found many new exoplanets, and it will continue to discover many more. We can look forward to many new exciting discoveries in the coming years."

From Science Daily

Dec 2, 2018

Scientists reveal substantial water loss in global landlocked regions

This illustration shows terrestrial water storage changes in global endorheic basins from GRACE satellite observations, April 2002 to March 2016. In the top image, terrestrial water storage trends -- in millimeters of equivalent water thickness per year -- for each endorheic unit are highlighted, followed by animated monthly terrestrial water storage anomalies, also in millimeters. The bottom image shows monthly net terrestrial water storage anomalies in gigatonnes, in global endorheic and exorheic systems -- excluding Greenland, Antarctica and the oceans -- and linkage to the El Niño-Southern Oscillation, right axis. Terrestrial water storage anomalies are relative to the time-mean baseline in each unit or system, with removal of seasonality. For comparison, 360 gigatonnes of terrestrial water storage equals 1 millimeter of sea level equivalent.
Amid a warming climate and intensified human activities, water storage in global landlocked basins has undergone a widespread decline in recent years. A new study reveals this decline has aggravated local water stress and caused potential sea level rise.

The study, "Recent Global Decline in Endorheic Basin Water Storage," was carried out by a team of scientists from six countries and appears in the current issue of Nature Geoscience.

"Water resources are extremely limited in the continental hinterlands where streamflow does not reach the ocean. Scientifically, these regions are called endorheic basins," said Jida Wang, a Kansas State University geographer and the study's lead author.

"Over the past few decades, we have seen increasing evidence of perturbations to the endorheic water balance," said Wang, an assistant professor of geography. "This includes, for example, the desiccating Aral Sea, the depleting Arabian aquifer and the retreating Eurasian glaciers. This evidence motivated us to ask: Is the total water storage across the global endorheic system, about one-fifth of the continental surface, undergoing a net decline?"

Using gravity observations from NASA/German Aerospace Center's Gravity Recovery and Climate Experiment, or GRACE, satellites, Wang and his colleagues quantified a net water loss in global endorheic basins of approximately 100 billion tons of water per year since the start of the current millennium. This means a water mass equivalent to five Great Salt Lakes or three Lake Meads is gone every year from the arid endorheic regions.

Surprisingly, this amount of endorheic water loss is double the rate of concurrent water changes across the remaining landmass except Greenland and Antarctica, Wang said. Opposite to endorheic basins, the remaining regions are exorheic, meaning streamflow originating from these basins drains to the ocean. Exorheic basins account for most of the continental surface and are home to many of the world's greatest rivers, such as the Nile, Amazon, Yangtze and Mississippi.

Wang noted that the signature of water storage changes in exorheic basins resembles some prominent oscillations of the climate system, such as El Niño and La Niña in multiyear cycles. However, the water loss in endorheic basins appears less responsive to such short-term natural variability. This contrast may suggest a profound impact of longer-term climate conditions and direct human water management, such as river diversion, damming and groundwater withdrawal, on the water balance in the dry hinterlands.

This endorheic water loss has dual ramifications, according to the researchers. Not only does it aggravate water stress in the arid endorheic regions, but it could also contribute to a significant global environmental concern: sea level rise. Sea level rise has two main causes: thermal expansion of sea water as global temperature increases, and the addition of water mass to the ocean.

"The hydrosphere is mass conserved," said Chunqiao Song, researcher with the Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, and a co-lead author of the study. "When water storage in endorheic basins is in deficit, the reduced water mass doesn't disappear. It was reallocated chiefly through vapor flux to the exorheic system. Once this water is no longer landlocked, it has the potential to affect the sea level budget."

Over an observation period of just 14 years, the endorheic water loss amounts to an additional 4 millimeters of sea level rise, the study found. The researchers said this impact is nontrivial. It accounts for approximately 10 percent of the observed sea level rise during the same period; compares to nearly half of the concurrent loss in mountain glaciers, excluding Greenland and Antarctica; and matches the entire contribution of global groundwater consumption.
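
Those numbers are mutually consistent: taking the roughly 100 gigatonnes lost per year over the 14-year record, and the 360-gigatonnes-per-millimeter conversion quoted with the figure above, gives about 4 millimeters of sea level equivalent.

    # Arithmetic check of the figures quoted in the article: ~100 Gt of net endorheic
    # water loss per year over the 14-year GRACE record, with 360 Gt corresponding
    # to 1 mm of sea level equivalent.
    loss_rate_gt_per_yr = 100       # approximate net loss, gigatonnes per year
    years = 14                      # April 2002 - March 2016 observation period
    GT_PER_MM_SEA_LEVEL = 360       # gigatonnes per millimeter of sea level equivalent

    total_loss_gt = loss_rate_gt_per_yr * years
    sea_level_mm = total_loss_gt / GT_PER_MM_SEA_LEVEL
    print(f"{total_loss_gt} Gt over {years} years -> ~{sea_level_mm:.1f} mm sea level equivalent")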

"We are not saying the recent endorheic water loss has completely ended up in the ocean," said Yoshihide Wada, deputy director of the water program at the International Institute for Applied Systems Analysis in Austria and a co-author of the study. "Instead, we are showing a perspective of how substantial the recent endorheic water loss has been. If it persists, such as beyond the decadal timescale, the water surplus added to the exorheic system may signify an important source of sea level rise."

By synergizing multi-mission satellite observations and hydrological modeling, Wang and his colleagues attributed this global endorheic water loss to comparable contributions from the surface -- such as lakes, reservoirs and glaciers -- as well as soil moisture and aquifers.

"Such comparable losses are, however, an aggregation of distinct regional variations," Wang said. "In endorheic Central Eurasia, for instance, about half of the water loss came from the surface, particularly large terminal lakes such as the Aral Sea, the Caspian Sea and Lake Urmia, and retreating glaciers in High Mountain Asia."

While glacial retreat was a response to warming temperature, water losses in the terminal lakes were a combined result of meteorological droughts and long-term water diversions from the feeding rivers.

The net water loss in endorheic Sahara and Arabia, on the other hand, was dominated by unsustainable groundwater withdrawal, according to the researchers. In endorheic North America, including the Great Basin of the U.S., a drought-induced soil moisture loss was likely responsible for most of the regional water loss. Despite a lesser extent, the surface water loss in the Great Salt Lake and the Salton Sea was at a substantial rate of 300 million tons per year, which was partially induced by mineral mining and diversion-based irrigation.

"The water losses from the world's endorheic basins are yet another example of how climate change is further drying the already dry arid and semi-arid regions of the globe. Meanwhile, human activities such as groundwater depletion are significantly accelerating this drying," said Jay Famiglietti, director of the Global Institute of Water Security, Canada 150 research chair in hydrology and remote sensing at the University of Saskatchewan, Canada and co-author of the study.

Wang said the team wants to convey three takeaway messages from their research.

"First, water storage in the endorheic system, albeit limited in total mass, can dominate the water storage trend in the entire land surface during at least decadal timescales," Wang said. "Second, the recent endorheic water loss is less sensitive to natural variability of the climate system, suggesting a possible response to longer-term climate conditions and human water management.

Read more at Science Daily