Aug 15, 2020

200,000 years ago, humans preferred to kip cozy

Researchers working in South Africa's Border Cave, a well-known archaeological site perched on a cliff between eSwatini (Swaziland) and KwaZulu-Natal in South Africa, have found evidence that people were using grass bedding to create comfortable areas for sleeping and working at least 200,000 years ago.

These beds, consisting of sheaves of grass of the broad-leafed Panicoideae subfamily, were placed near the back of the cave on layers of ash. The ash protected the sleepers against crawling insects. Today, the bedding layers are visually ephemeral traces of silicified grass, but they can be identified using high magnification and chemical characterisation.

The Border Cave study was conducted by a multidisciplinary team from the University of the Witwatersrand, South Africa, the CNRS (University of Bordeaux), and Université Côte d'Azur, France, the Instituto Superior de Estudios Sociales, Tucumán, Argentina, and the Royal Institute for Cultural Heritage, Belgium.

"We speculate that laying grass bedding on ash was a deliberate strategy, not only to create a dirt-free, insulated base for the bedding, but also to repel crawling insects," says Professor Lyn Wadley, principal researcher and lead author.

"Sometimes the ashy foundation of the bedding was a remnant of older grass bedding that had been burned to clean the cave and destroy pests. On other occasions, wood ash from fireplaces was also used as the clean surface for a new bedding layer."

Several cultures have used ash as an insect repellent because insects cannot easily move through fine powder. Ash blocks insects' breathing and biting apparatus, and eventually dehydrates them. Tarchonanthus (camphor bush) remains were identified on the top of the grass from the oldest bedding in the cave. This plant is still used to deter insects in rural parts of East Africa.

"We know that people worked as well as slept on the grass surface because the debris from stone tool manufacture is mixed with the grass remains. Also, many tiny, rounded grains of red and orange ochre were found in the bedding where they may have rubbed off human skin or coloured objects," says Wadley.

Modern hunter-gatherer camps have fires as focal points; people regularly sleep alongside them and perform domestic tasks in social contexts. People at Border Cave also lit fires regularly, as seen by stacked fireplaces throughout the sequence dated between about 200,000 and 38,000 years ago.

"Our research shows that before 200,000 years ago, close to the origin of our species, people could produce fire at will, and they used fire, ash, and medicinal plants to maintain clean, pest-free camps. Such strategies would have had health benefits that advantaged these early communities."

Read more at Science Daily

Weight change between young adulthood and midlife linked to early mortality

 A new Boston University School of Public Health (BUSPH) study finds that changes in weight between young adulthood and midlife may have important consequences for a person's risk of early death.

Published in JAMA Network Open, the study found that participants whose BMIs went from the "obese" range in early adulthood down to the "overweight" range in midlife halved their risk of dying during the study period, compared with individuals whose BMIs stayed in the "obese" range. On the other hand, weight loss after midlife did not significantly reduce participants' risk of death.

The researchers estimate that 12.4% of early deaths in the US may be attributable to having a higher body mass index (BMI) at any point between early- and mid-adulthood.
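Population-level estimates like that 12.4% figure are attributable fractions. The paper's exact estimator is not reproduced here, but the classic form (Levin's formula) conveys the idea:

```latex
\mathrm{PAF} = \frac{p\,(RR - 1)}{p\,(RR - 1) + 1}
```

where \(p\) is the prevalence of the exposure in the population (here, elevated BMI between early and mid-adulthood) and \(RR\) is the relative risk of death among the exposed.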

"The results indicate an important opportunity to improve population health through primary and secondary prevention of obesity, particularly at younger ages," says study corresponding author Dr. Andrew Stokes, assistant professor of global health at BUSPH.

"The present study provides important new evidence on the benefit of maintaining a healthy weight across the life course," says lead author Dr. Wubin Xie, a postdoctoral associate in global health at BUSPH.

The researchers used data from 1998 through 2015 for 24,205 participants in the National Health and Nutrition Examination Survey. The participants were 40-74 years old when they entered the study, and the data included each participant's BMI at three points: at age 25, 10 years before study entry, and at study entry. The researchers then analyzed the relationship between BMI change and the likelihood that a participant died over the observed period, controlling for other factors such as sex, past and current smoking, and education level.
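BMI itself is simply weight in kilograms divided by height in metres squared, and the category labels used throughout the study correspond to standard adult cutoffs. A minimal sketch of the trajectory idea (the weights and heights are invented for illustration, and the study's exact operational cutoffs may differ slightly):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO/CDC adult BMI categories (assumed here)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

# Illustrative "obese" -> "overweight" trajectory from age 25 to midlife.
early = bmi_category(bmi(95, 1.75))   # BMI ~31.0 -> "obese"
mid = bmi_category(bmi(88, 1.75))     # BMI ~28.7 -> "overweight"
print(early, "->", mid)
```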

They found that study participants whose BMIs went from the "obese" range at age 25 down to the "overweight" range in midlife were 54% less likely to have died than participants whose BMIs stayed in the "obese" range. These participants with an "obese"-to-"overweight" trajectory instead had a risk of death closer to that of participants whose BMIs had been in the "overweight" range all along.

The researchers estimated that 3.2% of deaths in the study would have been avoided if everyone with a BMI in the "obese" range at age 25 had been able to bring their BMIs down to the "overweight" range by midlife. However, they noted that weight loss was rare overall, and only 0.8% of participants had BMIs that went from the "obese" to the "overweight" range.

The researchers did not find a similar reduction in risk of death for participants who lost weight later in their lives. They wrote that this may be because weight loss later in life is more likely to be tied to an aging person's worsening health.

Read more at Science Daily

Aug 14, 2020

Syphilis may have spread through Europe before Columbus

Syphilis is a sexually transmitted disease -- and while commonly dismissed because modern treatments are available, it is in fact spreading at an alarming rate: over the last decades, more than 10 million people around the world have been infected with syphilis, caused by the subspecies pallidum of the bacterium Treponema pallidum. Other treponematoses, such as yaws and bejel, are caused by other subspecies of Treponema pallidum. The origins of syphilis, which wreaked havoc in Europe from the late 15th to the 18th century, are still unclear. The most popular hypothesis so far holds Christopher Columbus and his sailors liable for bringing the disease to Europe from the New World.

Yaws already widespread in Europe

The new study indicates a fair possibility that Treponema pallidum already existed in Europe before Columbus ever set sail for the Americas. The researchers found evidence of treponematoses in archaeological human remains from Finland, Estonia and the Netherlands. Both molecular dating of the ancient bacterial genomes and traditional radiocarbon dating of the samples were used to estimate the age of the pathogens causing these diseases. The results indicate that the genomes date back to between the early 15th and 18th century.

In addition to the syphilis cases, the researchers found yaws in one of the individuals. Like syphilis, yaws is transmitted via skin contact, although rarely through sexual intercourse. Today, the disease is only found in tropical and subtropical regions. "Our data indicates that yaws was spread through all of Europe. It was not limited to the tropics, as it is today," says last author Verena Schünemann, professor of paleogenetics at the Institute of Evolutionary Medicine of the University of Zurich.

Genome of a previously unknown pathogen discovered

The research team also discovered something else: The skeleton found in the Netherlands contained a pathogen belonging to a new, unknown and basal treponemal lineage. This lineage evolved in parallel to syphilis and yaws but is no longer present as a modern-day disease. "This unforeseen discovery is particularly exciting for us, because this lineage is genetically similar to all present treponemal subspecies, but also has unique qualities that differ from them," says first author Kerttu Majander from UZH.

Because several closely related subspecies of Treponema pallidum existed throughout Europe, it is possible that the diseases persisted in overlapping regions, and sometimes infected the same patient. The spatial distribution in the northern periphery of Europe also suggests that endemic treponematoses had already spread widely in Europe in the early modern period.

Not just Columbus

"Using our ancient genomes, it is now possible for the first time to apply a more reliable dating to the treponema family tree," says Schünemann. The genetic analyses conducted in this study suggest that the predecessor of all modern Treponema pallidum subspecies likely evolved at least 2,500 years ago. For venereal syphilis in particular, the most recent common ancestor existed between the 12th and 16th century.

Read more at Science Daily

How people and ecosystems fit together on the Great Barrier Reef

 A world-first study examining the scales of management of the Great Barrier Reef has the potential to help sustain other ecosystems across the world.

Massive marine ecosystems like the Great Barrier Reef aren't just a vibrant home to fish, corals and other creatures, they are also an important source of people's food, livelihoods and recreation.

The new study suggests the way people are managed when undertaking various activities within the marine park -- like fishing, boating, and scientific research -- could serve as an exemplary model for sustainably managing other ecosystems that humans use.

"There is plenty of evidence to suggest that the Great Barrier Reef is managed at appropriate scales within its boundaries," said lead author Professor Graeme Cumming, incoming Director of the ARC Centre of Excellence for Coral Reef Studies.

The reef served as a case study for mapping and measuring different scale matches between people and ecosystems. Prof Cumming explains the concept of scale matches using a backyard garden as an example of an ecosystem.

"For a house with a garden, you already have permission to manage that garden -- to mow the lawn and trim the trees inside your fences. To look after all the parts of it. That's a scale match," Prof Cumming said.

He says being able to manage only a flower bed within the garden is a small-scale match. "If you only have permission to manage the flower bed in your garden, you can manage the flowers, but your lawn and trees become unkempt. The weeds and pests affecting the flowers may come from an adjacent part of the garden, which you'd then have no control over," he said.

The Great Barrier Reef Marine Park Authority (GBRMPA) manages the entire marine park. Some permits, such as permission to access areas by boat as part of a commercial operation, may cover most of the park.

GBRMPA also manages smaller scale permits within the marine park boundaries -- small-scale matches that work best for activities like commercial tourism, lobster fisheries or the installation of certain structures like jetties or moorings.

The study found the permits issued for human activities generally occurred at larger scales than the particular individual marine features of interest, such as reefs or islands.

"The finding that people are managed at a broader scale than ecological variation suggests a general principle for permitting and management," Prof Cumming said. "In essence, people like to have choices about where they go and how they respond to change. This means that they prefer to operate at a broader spatial scale than the ecological features they are interested in, rather than the same scale."

The findings suggest this approach to managing people at broader rather than finer scales may be more effective. For small protected areas, increasing the size of the permissible area may even be critical.

However, GBRMPA can't manage the ecosystem's biggest impact, which lies outside park boundaries: climate change.

"Broad scale problems, like climate change, can only be managed with broad scale solutions, like global action," Prof Cumming said. "This is a scale mismatch because these impacts come from well outside the marine park boundaries."

GBRMPA also doesn't have control over what happens on the land directly adjacent to the reef. Not being able to stop pollutants and pesticides in storm water from reaching the reef is another scale mismatch.

Prof Cumming says comparing the results of this study to similar data from other marine parks, including those that are recognised as dysfunctional, will help determine if the management of the Great Barrier Reef Marine Park is unusual or typical.

 Read more at Science Daily

New catalyst efficiently turns carbon dioxide into useful fuels and chemicals

 As levels of atmospheric carbon dioxide continue to climb, scientists are looking for new ways of breaking down CO2 molecules to make useful carbon-based fuels, chemicals and other products. Now, a team of Brown University researchers has found a way to fine-tune a copper catalyst to produce complex hydrocarbons -- known as C2-plus products -- from CO2 with remarkable efficiency.

In a study published in Nature Communications, the researchers report a catalyst that can produce C2-plus compounds with up to 72% faradaic efficiency (a measure of how efficiently electrical energy is used to convert carbon dioxide into chemical reaction products). That's far better than the reported efficiencies of other catalysts for C2-plus reactions, the researchers say. And the preparation process can be scaled up to an industrial level fairly easily, which gives the new catalyst potential for use in large-scale CO2 recycling efforts.
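Faradaic efficiency follows directly from Faraday's law: the charge that ended up in the desired product divided by the total charge passed through the cell. A minimal sketch, with an invented run whose numbers are chosen only to land near the reported ~72% (reducing CO2 to ethylene, a typical C2-plus product, consumes 12 electrons per molecule):

```python
FARADAY = 96485.0  # Faraday constant, coulombs per mole of electrons

def faradaic_efficiency(moles_product: float, electrons_per_molecule: int,
                        total_charge_coulombs: float) -> float:
    """Fraction of the total charge passed that went into the target product."""
    return electrons_per_molecule * moles_product * FARADAY / total_charge_coulombs

# Hypothetical electrolysis run: 1 mmol of ethylene (12-electron reduction)
# produced while 1600 C of charge was passed.
fe = faradaic_efficiency(1e-3, 12, 1600.0)
print(f"{fe:.1%}")  # ~72.4%
```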

"There had been reports in the literature of all kinds of different treatments for copper that could produce these C2-plus compounds with a range of different efficiencies," said Tayhas Palmore, a professor of engineering at Brown who co-authored the paper with Ph.D. student Taehee Kim. "What Taehee did was a set of experiments to unravel what each of these treatment steps was actually doing to the catalyst in terms of reactivity, which pointed the way to optimizing a catalyst for these multi-carbon compounds."

There have been great strides in recent years in developing copper catalysts that can make single-carbon molecules, Palmore says. For example, Palmore and her team at Brown recently developed a copper foam catalyst that can efficiently produce formic acid, an important single-carbon commodity chemical. But interest is increasing in reactions that can produce C2-plus products.

"Ultimately, everyone seeks to increase the number of carbons in the product to the point of producing higher carbon fuels and chemicals," Palmore said.

There had been evidence from prior research that halogenation of copper -- a reaction that coats a copper surface with atoms of chlorine, bromine or iodine in the presence of an electrical potential -- could increase a catalyst's selectivity for C2-plus products. Kim experimented with a variety of different halogenation methods, zeroing in on which halogen elements and which electrical potentials yielded catalysts with the best performance in CO2-to-C2-plus reactions. He found that the optimal preparations could yield faradaic efficiencies of between 70.7% and 72.6%, far higher than any other copper catalyst.

The research helps to reveal the attributes that make a copper catalyst good for C2-plus products. The preparations with the highest efficiencies had a large number of surface defects -- tiny cracks and crevices in the halogenated surface -- that are critical for carbon-carbon coupling reactions. These defect sites appear to be key to the catalysts' high selectivity toward ethylene, a C2-plus product that can be polymerized and used to make plastics.

Ultimately, such a catalyst will aid in large-scale recycling of CO2. The idea is to capture CO2 produced by industrial facilities like power plants, cement manufacturing or directly from air, and convert it into other useful carbon compounds. That requires an efficient catalyst that is easy to produce and regenerate, and inexpensive enough to operate on an industrial scale. This new catalyst is a promising candidate, the researchers say.

 Read more at Science Daily

Recalling memories from a third-person perspective changes how our brain processes them

 Adopting a third-person, observer point of view when recalling your past activates different parts of your brain than recalling a memory seen through your own eyes, according to a new paper.

"Our perspective when we remember changes which brain regions support memory and how these brain regions interact together," explained Peggy St Jacques, assistant professor in the Faculty of Science's Department of Psychology and co-author on the paper.

Specifically, the results show that recalling memories from an observer-like perspective, instead of through your own eyes, leads to greater interaction between the anterior hippocampus and the posterior medial network.

"These findings contribute to a growing body of research showing that retrieving memories is an active process that can bias and even distort our memories," added St Jacques.

"Adopting an observer-like perspective involves viewing the past in a novel way, which requires greater interaction among brain regions that support our ability to recall the details of a memory and to recreate mental images in our mind's eye."

Adopting an observer-like perspective may also serve a therapeutic purpose, explained St Jacques. "This may be an effective way of dealing with troubling memories by viewing the past from a distance and reducing the intensity of the emotions we feel."

This work builds on St Jacques' previous research on visual perspective in memory, which found that the perspective from which we recall a memory can influence how we remember it over time.

From Science Daily

Aug 13, 2020

How stars form in the smallest galaxies

 The question of how small, dwarf galaxies have sustained the formation of new stars over the course of the Universe has long confounded the world's astronomers. An international research team led by Lund University in Sweden has found that dormant small galaxies can slowly accumulate gas over many billions of years. When this gas suddenly collapses under its own weight, new stars are able to arise.

There are around 2,000 billion galaxies in our Universe and, while our own Milky Way encompasses between 200 and 400 billion stars, small dwarf galaxies contain roughly a thousand times fewer. How stars are formed in these tiny galaxies has long been shrouded in mystery.

However, in a new study published in the research journal Monthly Notices of the Royal Astronomical Society, a research team led from Lund University has established that dwarf galaxies are capable of lying dormant for several billion years before starting to form stars again.

"It is estimated that these dwarf galaxies stopped forming stars around 12 billion years ago. Our study shows that this can be a temporary hiatus," says Martin Rey, an astrophysicist at Lund University and the leader of the study.

Through high-resolution computer simulations, the researchers demonstrate that star formation in dwarf galaxies ceased as a result of the heating and ionisation from the strong light of newborn stars. Explosions of so-called white dwarfs -- small faint stars made of the core that remains when normal-sized stars die -- further contribute to preventing the star formation process in dwarf galaxies.

"Our simulations show that dwarf galaxies are able to accumulate fuel in the form of gas, which eventually condenses and gives birth to stars. This explains the observed star formation in existing faint dwarf galaxies that has long puzzled astronomers," states Martin Rey.

The computer simulations used by the researchers in the study are amongst the most expensive that can be carried out within physics. Each simulation takes as long as two months and requires the equivalent of 40 laptop computers operating around the clock. The work is continuing with the development of methods to better explain the processes behind star formation in our Universe's smallest galaxies.

Read more at Science Daily

Hubble finds that Betelgeuse's mysterious dimming is due to a traumatic outburst

 Observations by NASA's Hubble Space Telescope are showing that the unexpected dimming of the supergiant star Betelgeuse was most likely caused by an immense amount of hot material ejected into space, forming a dust cloud that blocked starlight coming from Betelgeuse's surface.

Hubble researchers suggest that the dust cloud formed when superhot plasma unleashed from an upwelling of a large convection cell on the star's surface passed through the hot atmosphere to the colder outer layers, where it cooled and formed dust grains. The resulting dust cloud blocked light from about a quarter of the star's surface, beginning in late 2019. By April 2020, the star returned to normal brightness.

Betelgeuse is an aging, red supergiant star that has swelled in size due to complex, evolving changes in its nuclear fusion furnace at the core. The star is so huge now that if it replaced the Sun at the center of our solar system, its outer surface would extend past the orbit of Jupiter.

Betelgeuse's unprecedented great dimming, eventually noticeable even to the naked eye, started in October 2019. By mid-February 2020, the monster star had lost more than two-thirds of its brilliance.

This sudden dimming has mystified astronomers, who scrambled to develop several theories for the abrupt change. One idea was that a huge, cool, dark "star spot" covered a wide patch of the visible surface. But the Hubble observations, led by Andrea Dupree, associate director of the Center for Astrophysics | Harvard & Smithsonian (CfA), Cambridge, Massachusetts, suggest a dust cloud covering a portion of the star.

Several months of Hubble's ultraviolet-light spectroscopic observations of Betelgeuse, beginning in January 2019, yield a timeline leading up to the darkening. These observations provide important new clues to the mechanism behind the dimming.

Hubble captured signs of dense, heated material moving through the star's atmosphere in September, October, and November 2019. Then, in December, several ground-based telescopes observed the star decreasing in brightness in its southern hemisphere.

"With Hubble, we see the material as it left the star's visible surface and moved out through the atmosphere, before the dust formed that caused the star to appear to dim," Dupree said. "We could see the effect of a dense, hot region in the southeast part of the star moving outward.

"This material was two to four times more luminous than the star's normal brightness," she continued. "And then, about a month later, the south part of Betelgeuse dimmed conspicuously as the star grew fainter. We think it is possible that a dark cloud resulted from the outflow that Hubble detected. Only Hubble gives us this evidence that led up to the dimming."

The team's paper will appear online Aug. 13 in The Astrophysical Journal.

Massive supergiant stars like Betelgeuse are important because they expel heavy elements such as carbon into space that become the building blocks of new generations of stars. Carbon is also a basic ingredient for life as we know it.

Tracing a Traumatic Outburst

Dupree's team began using Hubble early last year to analyze the behemoth star. Their observations are part of a three-year Hubble study to monitor variations in the star's outer atmosphere. Betelgeuse is a variable star that expands and contracts, brightening and dimming, on a 420-day cycle.

Hubble's ultraviolet-light sensitivity allowed researchers to probe the layers above the star's surface, which are so hot -- more than 20,000 degrees Fahrenheit -- they cannot be detected at visible wavelengths. These layers are heated partly by the star's turbulent convection cells bubbling up to the surface.

Hubble spectra, taken in early and late 2019, and in 2020, probed the star's outer atmosphere by measuring magnesium II (singly ionized magnesium) lines. In September through November 2019, the researchers measured material moving about 200,000 miles per hour passing from the star's surface into its outer atmosphere.

This hot, dense material continued to travel beyond Betelgeuse's visible surface, reaching millions of miles from the seething star. At that distance, the material cooled down enough to form dust, the researchers said.

This interpretation is consistent with Hubble ultraviolet-light observations in February 2020, which showed that the behavior of the star's outer atmosphere returned to normal, even though visible-light images showed that it was still dimming.

Although Dupree does not know the outburst's cause, she thinks it was aided by the star's pulsation cycle, which continued normally through the event, as recorded by visible-light observations. The paper's co-author, Klaus Strassmeier, of the Leibniz Institute for Astrophysics Potsdam, used the institute's automated telescope called STELLar Activity (STELLA) to measure changes in the velocity of the gas on the star's surface as it rose and fell during the pulsation cycle. The star was expanding in its cycle at the same time as the upwelling of the convective cell. The pulsation rippling outward from Betelgeuse may have helped propel the outflowing plasma through the atmosphere.

Dupree estimates that about two times the normal amount of material from the southern hemisphere was lost over the three months of the outburst. Betelgeuse, like all stars, is losing mass all the time, in this case at a rate 30 million times higher than the Sun's.

Betelgeuse is so close to Earth, and so large, that Hubble has been able to resolve surface features -- making it the only such star, except for our Sun, where surface detail can be seen.

Hubble images taken by Dupree in 1995 first revealed a mottled surface containing massive convection cells that shrink and swell, which cause them to darken and brighten.

A Supernova Precursor?

The red supergiant is destined to end its life in a supernova blast. Some astronomers think the sudden dimming may be a pre-supernova event. The star is relatively nearby, about 725 light-years away, which means the dimming would have happened around the year 1300. But its light is just reaching Earth now.
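The "around the year 1300" figure is just light-travel time: light from 725 light-years away takes 725 years to arrive, so an event observed from Earth in 2019 happened roughly 725 years earlier. (The distance itself is approximate, so the emission year is too.)

```python
DISTANCE_LY = 725      # approximate distance to Betelgeuse, from the article
OBSERVED_YEAR = 2019   # year the great dimming was first seen from Earth

# Light travels one light-year per year, so subtract the distance in
# light-years from the observation year to get the emission year.
emission_year = OBSERVED_YEAR - DISTANCE_LY
print(emission_year)  # 1294, i.e. around the year 1300
```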

"No one knows what a star does right before it goes supernova, because it's never been observed," Dupree explained. "Astronomers have sampled stars maybe a year ahead of them going supernova, but not within days or weeks before it happened. But the chance of the star going supernova anytime soon is pretty small."

Dupree will get another chance to observe the star with Hubble in late August or early September. Right now, Betelgeuse is in the daytime sky, too close to the Sun for Hubble observations. But NASA's Solar Terrestrial Relations Observatory (STEREO) has taken images of the monster star from its location in space. Those observations show that Betelgeuse dimmed again from mid-May to mid-July, although not as dramatically as earlier in the year.

Read more at Science Daily

Young children would rather explore than get rewards

 Young children will pass up rewards they know they can collect to explore other options, a new study suggests.

Researchers found that when adults and 4- to 5-year-old children played a game where certain choices earned them rewards, both adults and children quickly learned what choices would give them the biggest returns.

But while adults then used that knowledge to maximize their prizes, children continued exploring the other options, just to see if their value may have changed.

"Exploration seems to be a major driving force during early childhood -- even outweighing the importance of immediate rewards," said Vladimir Sloutsky, co-author of the study and professor of psychology at The Ohio State University.

"We believe it is because young children need to explore to help them understand how the world works."

And despite what adults may think, kids' search for new discoveries is anything but random. Results showed children approached exploration systematically, to make sure they didn't miss anything.

"When adults think of kids exploring, they may think of them as running around aimlessly, opening drawers and cupboards, picking up random objects," Sloutsky said.

"But it turns out their exploration isn't random at all."

Sloutsky conducted the study with Nathaniel Blanco, a postdoctoral researcher in psychology at Ohio State. Their results were published online recently in the journal Developmental Science.

The researchers conducted two studies. One study involved 32 4-year-olds and 34 adults.

On a computer screen, participants were shown four alien creatures. When participants clicked on each creature, they were given a set number of virtual candies.

One creature was clearly the best, giving 10 candies, while the others gave 1, 2 and 3 candies, respectively. Those amounts never changed for each creature over the course of the experiment.

The goal was to earn as much candy as possible over 100 trials. (The children could turn their virtual candies into real stickers at the end of the experiment.)

As expected, the adults learned quickly which creature gave the most candies and selected that creature 86 percent of the time. But children selected the highest-reward creature only 43 percent of the time.

And it wasn't because the children didn't realize which choice would reap them the largest reward. In a memory test after the study, 20 of 22 children correctly identified which creature delivered the most candy.

"The children were not motivated by achieving the maximum reward to the extent that adults were," Blanco said. "Instead, children seemed primarily motivated by the information gained through exploring."

But what was interesting was that the children didn't just click randomly on the creatures, Sloutsky said.

When they didn't click on the option with the highest reward, they were most likely to go through the other choices systematically, to ensure they never went too long without testing each individual choice.

"The longer they didn't check a particular option, the less certain they were about its value, and the more they wanted to check it again," he said.
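The contrast between the adults' exploit-everything strategy and the children's systematic revisiting can be sketched as two choice policies on the game described above. This is an illustration under the article's setup (four options with fixed payoffs of 10, 3, 2 and 1 candies over 100 trials), not the paper's actual model:

```python
REWARDS = [10, 3, 2, 1]   # fixed payoff of each creature, as in the article
TRIALS = 100

def run(policy):
    """Total candy earned by a choice policy over the whole experiment."""
    last_checked = [0, 0, 0, 0]  # trial on which each option was last chosen
    total = 0
    for t in range(1, TRIALS + 1):
        choice = policy(t, last_checked)
        last_checked[choice] = t
        total += REWARDS[choice]
    return total

def greedy(t, last_checked):
    # Adult-like: exploit the known best option on every trial.
    return REWARDS.index(max(REWARDS))

def staleness_driven(t, last_checked):
    # Child-like sketch: revisit whichever option has gone unchecked longest.
    return min(range(len(REWARDS)), key=lambda i: last_checked[i])

print(run(greedy), run(staleness_driven))
```

Over 100 trials the greedy policy banks 1,000 candies while the staleness-driven policy, which settles into a round-robin sweep of all four options, banks 400 -- echoing the gap between the adults' 86 percent and the children's 43 percent best-option choices reported above.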

In a second study, the game was similar but the value of three of the four choices was visible -- only one was hidden. The option that was hidden was randomly determined in each trial, so it changed nearly every time. But the values of all four choices never changed, even when it was the hidden one.

Like in the first experiment, the 37 adults chose the best option on almost every trial, 94 percent of the time. That was much more than the 36 4- and 5-year-old children, who selected the highest-value option only 40 percent of the time.

When the hidden option was the highest-value option, adults chose it 84 percent of the time, but otherwise they almost never selected it (2 percent of the time).

Children chose the hidden option about 40 percent of the time -- and it didn't matter if it was the highest value one or not.

"The majority of the children were attracted to the uncertainty of the hidden option. They wanted to explore that choice," Sloutsky said.

However, there were some individual differences in children, he noted. A few children, for example, acted much like adults and nearly always chose the highest-value option. In the second experiment, a few children almost always avoided the hidden option.

These variations may have to do with different levels of cognitive maturation in children, he said.

But it appears that all children go through a phase where systematic exploration is one of their main goals.

"Even though we knew that children like to run around and investigate things, we're now learning that there is a lot of regularity to their behavior," Sloutsky said.

Read more at Science Daily

Ancient genomes suggest woolly rhinos went extinct due to climate change, not overhunting

 The extinction of prehistoric megafauna like the woolly mammoth, cave lion, and woolly rhinoceros at the end of the last ice age has often been attributed to the spread of early humans across the globe. Although overhunting led to the demise of some species, a study appearing August 13 in the journal Current Biology found that the extinction of the woolly rhinoceros may have had a different cause: climate change. By sequencing ancient DNA from 14 of these megaherbivores, researchers found that the woolly rhinoceros population remained stable and diverse until only a few thousand years before it disappeared from Siberia, when temperatures likely rose too high for the cold-adapted species.

"It was initially thought that humans appeared in northeastern Siberia fourteen or fifteen thousand years ago, around when the woolly rhinoceros went extinct. But recently, there have been several discoveries of much older human occupation sites, the most famous of which is around thirty thousand years old," says senior author Love Dalén, a professor of evolutionary genetics at the Centre for Palaeogenetics, a joint venture between Stockholm University and the Swedish Museum of Natural History. "So, the decline towards extinction of the woolly rhinoceros doesn't coincide so much with the first appearance of humans in the region. If anything, we actually see something looking a bit like an increase in population size during this period."

To learn about the size and stability of the woolly rhinoceros population in Siberia, the researchers studied the DNA from tissue, bone, and hair samples of 14 individuals. "We sequenced a complete nuclear genome to look back in time and estimate population sizes, and we also sequenced fourteen mitochondrial genomes to estimate the female effective population sizes," says co-first author Edana Lord, a PhD student at the Centre for Palaeogenetics.

By looking at the heterozygosity, or genetic diversity, of these genomes, the researchers were able to estimate the woolly rhino populations for tens of thousands of years before their extinction. "We examined changes in population size and estimated inbreeding," says co-first author Nicolas Dussex, a postdoctoral researcher at the Centre for Palaeogenetics. "We found that after an increase in population size at the start of a cold period some 29,000 years ago, the woolly rhino population size remained constant and that at this time, inbreeding was low."
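Heterozygosity is, at its simplest, the fraction of genome sites where an individual's two chromosome copies differ. A minimal sketch with toy genotypes (illustrative values, not real woolly rhino data):

```python
# Minimal sketch of heterozygosity: the fraction of sites where an
# individual's two chromosome copies differ. Toy genotypes below are
# illustrative, not real woolly rhino data.
genotypes = [("A", "A"), ("A", "G"), ("C", "C"), ("T", "C"), ("G", "G")]

het_sites = sum(1 for a, b in genotypes if a != b)
heterozygosity = het_sites / len(genotypes)
print(heterozygosity)  # 0.4 -- higher values suggest a larger, more diverse population
```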

This stability lasted until well after humans began living in Siberia, in contrast to the decline that would be expected if the woolly rhinos had gone extinct due to hunting. "That's the interesting thing," says Lord. "We actually don't see a decrease in population size after 29,000 years ago. The data we looked at only goes up to 18,500 years ago, which is approximately 4,500 years before their extinction, so it implies that they declined sometime in that gap."

The DNA data also revealed genetic mutations that helped the woolly rhinoceros adapt to colder weather. One of these mutations, a type of receptor in the skin for sensing warm and cold temperatures, has also been found in woolly mammoths. Adaptations like this suggest the woolly rhinoceros, which was particularly suited to the frigid northeast Siberian climate, may have declined due to the heat of a brief warming period, known as the Bølling-Allerød interstadial, that coincided with their extinction towards the end of the last ice age.

"We're coming away from the idea of humans taking over everything as soon as they come into an environment, and instead elucidating the role of climate in megafaunal extinctions," says Lord. "Although we can't rule out human involvement, we suggest that the woolly rhinoceros' extinction was more likely related to climate."

The researchers hope to study the DNA of additional woolly rhinoceroses that lived in that crucial 4,500-year gap between the last genome they sequenced and their extinction. "What we want to do now is to try to get more genome sequences from rhinos that are between eighteen and fourteen thousand years old, because at some point, surely they must decline," says Dalén. The researchers are also looking at other cold-adapted megafauna to see what further effects the warming, unstable climate had. "We know the climate changed a lot, but the question is: how much were different animals affected, and what do they have in common?"

 Read more at Science Daily

Aug 12, 2020

Aging memories may not be 'worse', just 'different'

 "Memory is the first thing to go."

Everyone has heard it, and decades of research studies seem to confirm it: While it may not always be the first sign of aging, some faculties, including memory, do get worse as people age.

It may not be that straightforward.

Zachariah Reagh, assistant professor of psychological and brain sciences in Arts & Sciences at Washington University in St. Louis, studied the brain activity of older people not by requiring them to recite a group of words or remember a string of numbers. Instead, he took a "naturalistic approach," one that more closely resembled real-world activities.

He found that brain activity in older adults isn't necessarily quieter when it comes to memory.

"It's just different," he said.

The study results were published today in the journal Nature Communications.

Common tests of memory involve a person's ability to remember a string of words, count backward, or recognize repeated images. "How many times do you suspect a 75-year-old is going to have to remember, 'tree, apple, cherry, truck?'" asked Reagh, first author on the paper with Angelique Delarazan, Alexander Garber and Charan Ranganath, all of University of California, Davis.

Instead, he used a data set from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) that included functional MRI (fMRI) scans of people watching an 8-minute movie. "There were no specific instructions, or a 'gotcha' moment," Reagh said. "They just got to kick back, relax and enjoy the film."

But while they may have been relaxing, the subjects' brains were hard at work recognizing, interpreting and categorizing events in the movies. One particular way people categorize events is by marking boundaries -- where one event ends and another begins.

An "event" can be pretty much anything, Reagh said. "This conversation, or a component of it, for example. We take these meaningful pieces and extract them out of a continuous stream."

And what constitutes a boundary is actually consistent among people.

"If you and I watch the same movie, and we are given the instruction to press a button when we feel one meaningful unit has ended, you and I will be much more similar in our responses than we are different," Reagh said.

When looking at the fMRI results -- which use changes in blood flow and blood oxygen to highlight brain activity -- older adults showed increased activity at event boundaries, much as a control group did. That's not to say that brains of all ages are processing the information similarly.

"It's just different," Reagh said. "In some areas, activity goes down and, in some, it actually goes up."

Overall activity did decline pretty reliably across ages 18-88, Reagh said, and when grouped into "younger, middle aged, and older," there was a statistically reliable drop in activity from one group to another.

"But we did find a few regions where activity was ramped up across age ranges," he said. "That was unexpected."

Much of the activity he was interested in is in an area of the brain referred to as the posterior medial network -- which includes regions in the midline and toward the backside of the brain. In addition to memory, these areas are heavily involved in representing context and situational awareness. Some of those areas showed decreased activity in the older adults.

"We do think the differences are memory-related," Reagh said. At the boundaries, they saw differences in the levels of activity in the hippocampus that was related to memory in a different measurement -- "story memory," he called it.

"There might be a broad sense in which the hippocampus's response to event boundaries predicts how well you are able to parse and remember stories and complex narratives," no matter one's age, Reagh said.

But for older adults, closer to the front of the brain, particularly the medial prefrontal cortex, things were looking up.

Activity in that area of the brain was ramped up in older adults. This area is implicated in broad, schematic knowledge -- what it's like to go to a grocery store as opposed to a particular grocery store.

"What might be happening is as older adults lose some responsiveness in posterior parts of the brain, they may be shifting away from the more detailed contextual information," Reagh said. But as activity levels heighten in the anterior portions, "things might become more schematic. More 'gist-like.'"

In practice, this might mean that a 20-year-old noting an event boundary in a movie might be more focused on the specifics -- what specific room are the characters in? What is the exact content of the conversation? An older viewer might be paying more attention to the broader picture -- What kind of room are the characters in? Have the characters transitioned from a formal dinner setting to a more relaxed, after-dinner location? Did a loud, tense conversation resolve into a friendly one?

"Older adults might be representing events in different ways, and transitions might be picked up differently than, say, a 20-year-old," Reagh said.

 Read more at Science Daily

New way to make bacteria more sensitive to antibiotics discovered

 Researchers from Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, have discovered a new way to reverse antibiotic resistance in some bacteria using hydrogen sulphide (H2S).

Growing antimicrobial resistance is a major threat to the world, with a projected 10 million deaths each year by 2050 if no action is taken. The World Health Organisation also warns that by 2030, drug-resistant diseases could force up to 24 million people into extreme poverty and cause catastrophic damage to the world economy.

In most bacteria studied, the production of endogenous H2S has been shown to cause antibiotic tolerance, so H2S has been speculated as a universal defence mechanism in bacteria against antibiotics.

A team at SMART's Antimicrobial Resistance (AMR) Interdisciplinary Research Group (IRG) tested that theory by adding H2S releasing compounds to Acinetobacter baumannii -- a pathogenic bacteria that does not produce H2S on its own. They found that rather than causing antibiotic tolerance, exogenous H2S sensitised the A. baumannii to multiple antibiotic classes. It was even able to reverse acquired resistance in A. baumannii to gentamicin, a very common antibiotic used to treat several types of infections.

The results of their study, supported by the Singapore National Medical Research Council's Young Investigator Grant, are discussed in a paper titled "Hydrogen sulfide sensitizes Acinetobacter baumannii to killing by antibiotics" published in the journal Frontiers in Microbiology.

"Until now, hydrogen sulfide was regarded as a universal bacterial defense against antibiotics," says Dr Wilfried Moreira, the corresponding author of the paper and Principal Investigator at SMART's AMR IRG. "This is a very exciting discovery because we are the first to show that H2S can, in fact, improve sensitivity to antibiotics and even reverse antibiotic resistance in bacteria that do not naturally produce the agent."

While the study focused on the effects of exogenous H2S on A. baumannii, the scientists believe the results will be mirrored in all bacteria that do not naturally produce H2S.

"Acinetobacter baumannii is a critically important antibiotic-resistant pathogen that poses a huge threat to human health," says Say Yong Ng, lead author of the paper and Laboratory Technologist at SMART AMR. "Our research has found a way to make the deadly bacteria and others like it more sensitive to antibiotics, and can provide a breakthrough in treating many drug-resistant infections."

Read more at Science Daily

'Black dwarf supernova': Physicist calculates when the last supernova ever will happen

 The end of the universe as we know it will not come with a bang. Most stars will very, very slowly fizzle as their temperatures fade to zero.

"It will be a bit of a sad, lonely, cold place," said theoretical physicist Matt Caplan, who added no one will be around to witness this long farewell happening in the far far future. Most believe all will be dark as the universe comes to an end. "It's known as 'heat death,' where the universe will be mostly black holes and burned-out stars," said Caplan, who imagined a slightly different picture when he calculated how some of these dead stars might change over the eons.

Punctuating the darkness could be silent fireworks -- explosions of the remnants of stars that were never supposed to explode. New theoretical work by Caplan, an assistant professor of physics at Illinois State University, finds that many white dwarfs may explode in supernovae in the distant future, long after everything else in the universe has died and gone quiet.

In the universe now, the dramatic death of massive stars in supernova explosions comes when internal nuclear reactions produce iron in the core. Iron cannot be burnt by stars -- it accumulates like a poison, triggering the star's collapse and creating a supernova. But smaller stars tend to die with a bit more dignity, shrinking and becoming white dwarfs at the end of their lives.

"Stars less than about 10 times the mass of the sun do not have the gravity or density to produce iron in their cores the way massive stars do, so they can't explode in a supernova right now," said Caplan. "As white dwarfs cool down over the next few trillion years, they'll grow dimmer, eventually freeze solid, and become 'black dwarf' stars that no longer shine." Like white dwarfs today, they'll be made mostly of light elements like carbon and oxygen and will be the size of the Earth but contain about as much mass as the sun, their insides squeezed to densities millions of times greater than anything on Earth.

But just because they're cold doesn't mean nuclear reactions stop. "Stars shine because of thermonuclear fusion -- they're hot enough to smash small nuclei together to make larger nuclei, which releases energy. White dwarfs are ash, they're burnt out, but fusion reactions can still happen because of quantum tunneling, only much slower," Caplan said. "Fusion happens, even at zero temperature, it just takes a really long time." He noted this is the key for turning black dwarfs into iron and triggering a supernova.

Caplan's new work, accepted for publication by Monthly Notices of the Royal Astronomical Society, calculates how long these nuclear reactions take to produce iron, and how much iron black dwarfs of different sizes need to explode. He calls his theoretical explosions "black dwarf supernova" and calculates that the first one will occur in about 10 to the 1100th years. "In years, it's like saying the word 'trillion' almost a hundred times. If you wrote it out, it would take up most of a page. It's mindbogglingly far in the future."
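As a quick sense check on those figures (our arithmetic, not Caplan's calculation): each "trillion" contributes 12 orders of magnitude, and 10^1100 written out runs to 1,101 digits.

```python
# Quick sense check on the scale of 10**1100 years (our arithmetic,
# not Caplan's calculation). Saying "trillion" multiplies by 10**12:
repetitions = 1100 // 12
print(repetitions)  # 91 -- "almost a hundred times"

# Written out in full, the number takes 1101 digits -- most of a page.
print(len(str(10**1100)))  # 1101
```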

Of course, not all black dwarfs will explode. "Only the most massive black dwarfs, about 1.2 to 1.4 times the mass of the sun, will blow." Still, that means as many as 1 percent of all stars that exist today, about a billion trillion stars, can expect to die this way. As for the rest, they'll remain black dwarfs. "Even with very slow nuclear reactions, our sun still doesn't have enough mass to ever explode in a supernova, even in the far far future. You could turn the whole sun to iron and it still wouldn't pop."
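The "billion trillion" figure follows from simple arithmetic, assuming roughly 10^23 stars in the observable universe (a commonly cited estimate; the exact count is not stated in the article):

```python
# The "billion trillion" figure from the article's 1 percent claim,
# assuming roughly 10**23 stars in the observable universe (a commonly
# cited estimate; the exact count is an assumption, not from the text).
total_stars = 10**23
candidates = total_stars // 100   # 1 percent, in integer arithmetic
print(candidates == 10**21)       # True: 10**21 is a billion trillion
```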

Caplan calculates that the most massive black dwarfs will explode first, followed by progressively less massive stars, until there are no more left to go off after about 10^32,000 years. At that point, the universe may truly be dead and silent. "It's hard to imagine anything coming after that, black dwarf supernova might be the last interesting thing to happen in the universe. They may be the last supernova ever." By the time the first black dwarfs explode, the universe will already be unrecognizable. "Galaxies will have dispersed, black holes will have evaporated, and the expansion of the universe will have pulled all remaining objects so far apart that none will ever see any of the others explode. It won't even be physically possible for light to travel that far."

Read more at Science Daily

Mystery solved: Bright areas on Ceres come from salty water below

 NASA's Dawn spacecraft gave scientists extraordinary close-up views of the dwarf planet Ceres, which lies in the main asteroid belt between Mars and Jupiter. By the time the mission ended in October 2018, the orbiter had dipped to less than 22 miles (35 kilometers) above the surface, revealing crisp details of the mysterious bright regions Ceres had become known for.

Scientists had figured out that the bright areas were deposits made mostly of sodium carbonate -- a compound of sodium, carbon, and oxygen. They likely came from liquid that percolated up to the surface and evaporated, leaving behind a highly reflective salt crust. But what they hadn't yet determined was where that liquid came from.

By analyzing data collected near the end of the mission, Dawn scientists have concluded that the liquid came from a deep reservoir of brine, or salt-enriched water. By studying Ceres' gravity, scientists learned more about the dwarf planet's internal structure and were able to determine that the brine reservoir is about 25 miles (40 kilometers) deep and hundreds of miles wide.

Ceres doesn't benefit from internal heating generated by gravitational interactions with a large planet, as is the case for some of the icy moons of the outer solar system. But the new research, which focuses on Ceres' 57-mile-wide (92-kilometer-wide) Occator Crater -- home to the most extensive bright areas -- confirms that Ceres is a water-rich world like these other icy bodies.

The findings, which also reveal the extent of geologic activity in Occator Crater, appear in a special collection of papers published by Nature Astronomy, Nature Geoscience, and Nature Communications on Aug. 10.

"Dawn accomplished far more than we hoped when it embarked on its extraordinary extraterrestrial expedition," said Mission Director Marc Rayman of NASA's Jet Propulsion Laboratory in Southern California. "These exciting new discoveries from the end of its long and productive mission are a wonderful tribute to this remarkable interplanetary explorer."

Solving the Bright Mystery

Long before Dawn arrived at Ceres in 2015, scientists had noticed diffuse bright regions with telescopes, but their nature was unknown. From its close orbit, Dawn captured images of two distinct, highly reflective areas within Occator Crater, which were subsequently named Cerealia Facula and Vinalia Faculae. ("Faculae" means bright areas.)

Scientists knew that micrometeorites frequently pelt the surface of Ceres, roughing it up and leaving debris. Over time, that sort of action should darken these bright areas. So their brightness indicates that they likely are young. Trying to understand the source of the areas, and how the material could be so new, was a main focus of Dawn's final extended mission, from 2017 to 2018.

The research not only confirmed that the bright regions are young -- some less than 2 million years old; it also found that the geologic activity driving these deposits could be ongoing. This conclusion depended on scientists making a key discovery: salt compounds (sodium chloride chemically bound with water and ammonium chloride) concentrated in Cerealia Facula.

On Ceres' surface, water-bearing salts dehydrate quickly, within hundreds of years. But Dawn's measurements show they still have water, so the fluids must have reached the surface very recently. This is evidence both for the presence of liquid below the region of Occator Crater and for ongoing transfer of material from the deep interior to the surface.

The scientists found two main pathways that allow liquids to reach the surface. "For the large deposit at Cerealia Facula, the bulk of the salts were supplied from a slushy area just beneath the surface that was melted by the heat of the impact that formed the crater about 20 million years ago," said Dawn Principal Investigator Carol Raymond. "The impact heat subsided after a few million years; however, the impact also created large fractures that could reach the deep, long-lived reservoir, allowing brine to continue percolating to the surface."

Active Geology: Recent and Unusual

In our solar system, icy geologic activity happens mainly on icy moons, where it is driven by their gravitational interactions with their planets. But that's not the case with the movement of brines to the surface of Ceres, suggesting that other large ice-rich bodies that are not moons could also be active.

Some evidence of recent liquids in Occator Crater comes from the bright deposits, but other clues come from an assortment of interesting conical hills reminiscent of Earth's pingos -- small ice mountains in polar regions formed by frozen pressurized groundwater. Such features have been spotted on Mars, but the discovery of them on Ceres marks the first time they've been observed on a dwarf planet.

On a larger scale, scientists were able to map the density of Ceres' crust structure as a function of depth -- a first for an ice-rich planetary body. Using gravity measurements, they found Ceres' crustal density increases significantly with depth, well beyond the simple effect of pressure. Researchers inferred that at the same time Ceres' reservoir is freezing, salt and mud are incorporating into the lower part of the crust.

Dawn is the only spacecraft ever to orbit two extraterrestrial destinations -- Ceres and the giant asteroid Vesta -- thanks to its efficient ion propulsion system. When Dawn used the last of a key fuel, hydrazine, for a system that controls its orientation, it was neither able to point to Earth for communications nor to point its solar arrays at the Sun to produce electrical power. Because Ceres was found to have organic materials on its surface and liquid below the surface, planetary protection rules required Dawn to be placed in a long-duration orbit that will prevent it from impacting the dwarf planet for decades.

Read more at Science Daily

Aug 11, 2020

Most close relatives of birds neared the potential for powered flight but few crossed its thresholds

 Uncertainties in the evolutionary tree of birds and their closest relatives have impeded deeper understanding of early flight in theropods, the group of dinosaurs that includes birds. To help address this, an international study led by HKU Research Assistant Professor Dr. Michael Pittman (Vertebrate Palaeontology Laboratory, Division of Earth and Planetary Science & Department of Earth Sciences) and co-first-authored by his former Postdoctoral Fellow Dr. Rui Pei (now an Associate Professor at the Institute of Vertebrate Paleontology and Paleoanthropology, Beijing), produced an updated evolutionary tree of early birds and their closest relatives to reconstruct powered flight potential, showing it evolved at least three times. Many ancestors of the closest bird relatives neared the thresholds of powered flight potential, suggesting broad experimentation with wing-assisted locomotion before flight evolved.

"Our revised evolutionary tree supports the traditional relationship of dromaeosaurid ("raptors") and troodontid theropods as the closest relatives of birds. It also supports the status of the controversial anchiornithine theropods as the earliest birds," said Dr. Pei. With this improved evolutionary tree, the team reconstructed the potential of bird-like theropods for power flight, using proxies borrowed from the study flight in living birds. The team found that the potential for powered flight evolved at least three times in theropods: once in birds and twice in dromaeosaurids. "The capability for gliding flight in some dromaeosaurids is well established so us finding at least two origins of powered flight potential among dromaeosaurids is really exciting," said Dr. Pittman. Crucially, the team found that many ancestors of bird relatives neared the thresholds of powered flight potential. "This suggests that theropod dinosaurs broadly experimented with the use of their feathered wings before flight evolved, overturning the paradigm that this was limited to a much more exclusive club," added Dr. Pittman.

This study is the latest in the Vertebrate Palaeontology Laboratory's long-term research into the evolution of early birds and their closest relatives (see Notes). Asked about future plans, Dr. Pittman replied: "We have helped to better constrain the broader functional landscape of theropods just before flight evolved and in its earliest stages. We plan to now focus on the dromaeosaurids and early birds that we have shown to have the potential for powered flight to improve our understanding of what it took to fly and why."

 From Science Daily

Explosive nuclear astrophysics

 Analysis of meteorite content has been crucial in advancing our knowledge of the origin and evolution of our solar system. Some meteorites also contain grains of stardust. These grains predate the formation of our solar system and are now providing important insights into how the elements in the universe formed.

Working in collaboration with an international team, nuclear physicists at the U.S. Department of Energy's (DOE's) Argonne National Laboratory have made a key discovery related to the analysis of "presolar grains" found in some meteorites. This discovery has shed light on the nature of stellar explosions and the origin of chemical elements. It has also provided a new method for astronomical research.

"Tiny presolar grains, about one micron in size, are the residue from stellar explosions in the distant past, long before our solar system existed," said Dariusz Seweryniak, experimental nuclear physicist in Argonne's Physics division. The stellar debris from the explosions eventually became wedged into meteorites that crashed into the Earth.

The major stellar explosions are of two types. One, called a "nova," involves a binary star system in which a main star orbits a white dwarf -- an extremely dense star that can be the size of Earth but has the mass of our sun. Matter from the main star is continually being pulled away by the white dwarf because of its intense gravitational field. This deposited material initiates a thermonuclear explosion every 1,000 to 100,000 years, and the white dwarf ejects the equivalent of the mass of more than thirty Earths into interstellar space. In a "supernova," a single collapsing star explodes and ejects most of its mass.

Novae and supernovae are the most frequent and violent stellar eruptions in our galaxy, and for that reason they have been the subject of intense astronomical investigation for decades. Much has been learned from them -- for example, about the origin of the heavier elements.

"A new way of studying these phenomena is analyzing the chemical and isotopic composition of the presolar grains in meteorites," explained Seweryniak. "Of particular importance to our research is a specific nuclear reaction that occurs in nova and supernova -- proton capture on an isotope of chlorine -- which we can only indirectly study in the lab."

In conducting their research, the team pioneered a new approach for astrophysics research. It entails use of the Gamma-Ray Energy Tracking In-beam Array (GRETINA) coupled to the Fragment Mass Analyzer at the Argonne Tandem Linac Accelerator System (ATLAS), a DOE Office of Science User Facility for nuclear physics. GRETINA is a state-of-the-art detection system able to trace the path of gamma rays emitted from nuclear reactions. It is one of only two such systems in the world.

Using GRETINA, the team completed the first detailed gamma-ray spectroscopy study of an astronomically important nucleus of an isotope, argon-34. From the data, they calculated the nuclear reaction rate involving proton capture on a chlorine isotope (chlorine-33).

"In turn, we were able to calculate the ratios of various sulfur isotopes produced in stellar explosions, which will allow astrophysicists to determine whether a particular presolar grain is of nova or supernova origin," said Seweryniak. The team also applied their acquired data to gain deeper understanding of the synthesis of elements in stellar explosions.

 Read more at Science Daily

New method to determine the origin of stardust in meteorites

 Scientists have made a key discovery thanks to stardust found in meteorites, shedding light on the origin of crucial chemical elements.

Meteorites are critical to understanding the beginning of our solar system and how it has evolved over time. However, some meteorites contain grains of stardust that predate the formation of our solar system and are now providing important information about how the elements in the universe formed.

In a study published by Physical Review Letters, researchers from the University of Surrey detail how they made a key discovery connected to the "pre-solar grains" found in primitive meteorites. This discovery has provided new insights into the nature of stellar explosions and the origin of the chemical elements. It has also provided a new method for astronomical research.

Dr Gavin Lotay, Nuclear Astrophysicist and Director of Learning and Teaching at the University of Surrey, said: "Tiny pre-solar grains, about one micron in size, are the residuals of stellar explosions that occurred in the distant past, long before our solar system existed. Stellar debris eventually became wedged into meteorites that, in turn, crashed into the Earth."

One of the most frequent stellar explosions to occur in our galaxy is called a nova, which involves a binary star system consisting of a main sequence star orbiting a white dwarf star -- an extremely dense star that can be the size of Earth but has the mass of our Sun. Matter from the main star is continually pulled away by the white dwarf because of its intense gravitational field. This deposited material initiates a thermonuclear explosion every 1,000 to 100,000 years and the white dwarf ejects the equivalent of the mass of more than thirty Earths into interstellar space. In contrast, a supernova involves a single collapsing star and, when it explodes, it ejects almost all of its mass.

As novae continually enrich our galaxy with chemical elements, they have been the subject of intense astronomical investigations for decades. Much has been learned from them about the origin of the heavier elements, for example. However, a number of key puzzles remain.

Dr Lotay continues: "A new way of studying these phenomena is by analysing the chemical and isotopic composition of the pre-solar grains in meteorites. Of particular importance to our research is a specific nuclear reaction that occurs in novae and supernovae -- proton capture on an isotope of chlorine -- which we can only indirectly study in the laboratory."

In conducting their experiment, the team, led by Dr Lotay and Surrey PhD student Adam Kennington (also a former Surrey undergraduate), pioneered a new research approach. It involves the use of the Gamma-Ray Energy Tracking In-beam Array (GRETINA) coupled to the Fragment Mass Analyzer at the Argonne Tandem Linac Accelerator System (ATLAS), USA. GRETINA is a state-of-the-art detection system able to trace the path of gamma rays (γ-rays) emitted from nuclear reactions. It is one of only two such systems in the world that utilise this novel technology.

Using GRETINA, the team completed the first detailed γ-ray spectroscopy study of an astronomically important nucleus, argon-34, and were able to calculate the expected abundance of sulfur isotopes produced in nova explosions.

 Read more at Science Daily

Research exposes new vulnerability for SARS-CoV-2

 Northwestern University researchers have uncovered a new vulnerability in the novel coronavirus' infamous spike protein -- illuminating a relatively simple, potential treatment pathway.

The spike protein contains the virus' binding site, which adheres to host cells and enables the virus to enter and infect the body. Using nanometer-level simulations, the researchers discovered a positively charged site (known as the polybasic cleavage site) located 10 nanometers from the actual binding site on the spike protein. The positively charged site allows strong bonding between the virus protein and the negatively charged human-cell receptors.

Leveraging this discovery, the researchers designed a negatively charged molecule to bind to the positively charged cleavage site. Blocking this site inhibits the virus from bonding to the host cell.

"Our work indicates that blocking this cleavage site may act as a viable prophylactic treatment that decreases the virus' ability to infect humans," said Northwestern's Monica Olvera de la Cruz, who led the work. "Our results explain experimental studies showing that mutations of the SARS-CoV-2 spike protein affected the virus transmissibility."

The research was published online last week in the journal ACS Nano.

Olvera de la Cruz is the Lawyer Taylor Professor of Materials Science and Engineering in Northwestern's McCormick School of Engineering. Baofu Qiao, a research assistant professor in Olvera de la Cruz's research group, is the paper's first author.

Made up of amino acids, SARS-CoV-2's polybasic cleavage sites have remained elusive since the COVID-19 outbreak began. But previous research indicates that these mysterious sites are essential for virulence and transmission. Olvera de la Cruz and Qiao discovered that the polybasic cleavage site is located 10 nanometers from human cell receptors -- a finding that provided unexpected insight.

"We didn't expect to see electrostatic interactions at 10 nanometers," Qiao said. "Under physiological conditions, electrostatic interactions are normally screened out at distances beyond 1 nanometer."
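The roughly 1-nanometer figure Qiao cites matches the Debye screening length of a physiological salt solution, which can be estimated from textbook electrostatics. The sketch below shows that estimate; the salt concentration (150 mM), temperature (310 K) and relative permittivity of water (~74) are generic illustrative values, not numbers taken from the study.

```python
import math

# Physical constants (SI units)
EPS0 = 8.854e-12      # vacuum permittivity, F/m
KB = 1.381e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602e-19  # elementary charge, C
N_A = 6.022e23        # Avogadro's number, 1/mol

def debye_length(ionic_strength_molar, rel_permittivity=74.0, temp_K=310.0):
    """Debye screening length (metres) for a 1:1 electrolyte."""
    I = ionic_strength_molar * 1000.0               # mol/L -> mol/m^3
    num = rel_permittivity * EPS0 * KB * temp_K
    den = 2.0 * N_A * E_CHARGE**2 * I
    return math.sqrt(num / den)

lam = debye_length(0.150)   # ~150 mM salt at body temperature
print(f"Debye length: {lam * 1e9:.2f} nm")   # roughly 0.8 nm
```

Beyond this length, mobile ions in the solution cancel out a charge's electric field, which is why an electrostatic attraction acting over 10 nanometers was so unexpected.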

"The function of the polybasic cleavage site has remained elusive," Olvera de la Cruz said. "However, it appears to be cleaved by an enzyme (furin) that is abundant in lungs, which suggests the cleavage site is crucial for virus entry into human cells."

With this new information, Olvera de la Cruz and Qiao next plan to work with Northwestern chemists and pharmacologists to design a new drug that could bind to the spike protein.

Read more at Science Daily

Aug 10, 2020

Individual differences in the brain

 Personality varies widely. There are bold and reserved individuals, who behave very differently when faced with the same environmental stimulus. What is true for humans also applies to fish: their behavior shows a range of individual differences. By selectively breeding zebrafish, scientists from the Max Planck Institute of Neurobiology were able to show that distinct personality traits rapidly emerge and manifest not only in the behavior, but also through far-reaching changes in the brain.

Young zebrafish are just five millimeters long and almost transparent. Nevertheless, the tiny fish display a spectrum of behavior in response to external stimuli. While some animals flee in panic at a loud sound, other fish remain calm. If the sound is repeated, fish in one group learn to ignore it quickly, while others never really get used to it. Between these two extremes -- relaxed or skittish -- there is a whole range of behavioral expressions.

Carlos Pantoja and colleagues in Herwig Baier's team were now able to show that selection for a specific behavioral trait can also change the fishes' brain activity surprisingly quickly. The researchers mated animals only within the extremely relaxed and the extremely skittish groups. After just two generations, the brains of the fry selected for skittishness differed significantly from the brains of the calm offspring.

In the transparent fish larvae, the scientists were able to observe which brain regions were activated by the loud sound. The offspring of the two behavioral extremes showed clear differences in neuronal activity in a part of the hypothalamus and in the so-called dorsal raphe nucleus. A noticeable difference between these two brain regions is that the plastic part of the hypothalamus contains neurons that secrete dopamine, while the raphe nucleus mainly produces serotonin. Dopamine and serotonin are two prominent neuromodulators that have also been associated with personality differences and even psychiatric conditions in humans.

"The ratio of cell activity in these two brain regions could regulate the sensitivity of an individual fish's reaction to the sound and how quickly it gets used to it," explains Carlos Pantoja. "However, this is certainly only one component, as there are also differences in a whole range of other brain areas."

Interestingly, the offspring of the two fish groups differed in more than just their startle response. While still in the larval stage, the more relaxed fry were also significantly less spontaneously active. As adults, these fish then adapted much more slowly to a new environment than adult skittish fish. "At first glance, this sounds paradoxical. But it could be that the early tendency to fearful overreactions tends to dampen the later stress response," says Pantoja. Similar long-term effects of early stress processing have been reported in mammals.

In both groups of fish, the dopamine-releasing part of the hypothalamus was activated during the startle reaction. However, while this region was only switched on by the sound in the relaxed fish, it was permanently active in the skittish fish. After a mere two generations of behavioral selection, these animals already seemed to be constantly prepared to escape.

Read more at Science Daily

Detailed molecular workings of a key system in learning and memory formation

 One of the new realities in biomedical research is that it's increasingly difficult to use a general approach to score advances. Now, investigations into disease mechanisms, for example, are often conducted at the molecular level by specialists who dedicate years to interrogating a single protein or signaling pathway.

One such scientist is biochemist Margaret Stratton at the University of Massachusetts Amherst, whose lab reports how they used advanced sequencing technology to clear up uncertainty and determine all variants of a single protein/enzyme known as calcium/calmodulin-dependent protein kinase II (CaMKII) in the hippocampus, the brain's memory center.

It plays a central role in calcium signaling throughout the body, Stratton explains. In the hippocampus, CaMKII is required for learning and memory, and when mutations occur they contribute to conditions such as autism spectrum disorders and developmental disabilities, or problems in other systems relating to cardiac pacing and fertility.

Stratton and first authors Roman Sloutsky and Noelle Dziedzic, with others, report in Science Signaling that they found an unexpected new role for the hub domain, or organizational center of the CaMKII molecular complex. Stratton says, "In addition to this known role, we show that this domain affects how sensitive CaMKII is to calcium; it acts like a tuner for sensitivity. This was a surprise. It opens a whole new area for investigation. We also show evidence for how we think it works at the molecular level."

Kinases are quite prevalent in biology, she adds, with more than 500 kinds in humans, but CaMKII is unique with its hub domain. Their unexpected discovery that "the hub actually plays a role in regulating activity gives us a unique handle on CaMKII to potentially control its activity with high specificity."

Vertebrate genomes, including the human genome, encode four CaMKII genes, and each gives rise to many different protein variants.

"We collaborated with Luke Chao, a structural biologist at Mass General Hospital, and a postdoc in his lab, Sivakumar Boopathy, to use cutting-edge techniques to structurally characterize the different flavors of CaMKII to understand how they may react differently to calcium." They hoped to identify any that have a modulatory or regulatory role and might serve as a new therapeutic target for controlling it or correcting mutations, she notes.

"All CaMKIIs consist of a catalytic kinase domain, a regulatory segment, a variable linker and a hub domain," Stratton explains. When called upon, this molecule adds phosphates where they are needed for cell function. "When calcium levels rise, CaMKII turns on. When they drop, CaMKII activity does too. Our goal was to unravel the differences to better understand how CaMKII does its job in memory formation."

In the CaMKII structure, the hub domain's job is to gather the other domains around it. A kidney bean-shaped kinase domain is attached to the hub by a spaghetti-like linker. When subunits are assembled into a working complex it looks like a flower, where the kinase domains are petals around the central hub domain, she points out.

In their sequencing experiments, Stratton explains, "We found something quite surprising. We discovered that there are more than 70 different CaMKII variants present in hippocampus. That's an extraordinary number."

Chao's group used cryo-electron microscopy to make images of purified CaMKII, allowing the researchers to see that CaMKII's "action" domain adopts different conformations relative to the hub, Stratton says, "In the 70 or so different variants, the petals are likely in a different orientation around the hub. It still looks like a flower, but all the petals are not exactly the same. This orientation we think is dependent on the hub identity, which is dictated by the sequence of the gene."

 Read more at Science Daily

Biodiversity may limit invasions: Lessons from lizards on Panama Canal islands

When the U.S. flooded Panama's Chagres River valley in 1910, Gatun Lake held the record as the world's biggest reservoir. That record has since been surpassed, but researchers at the Smithsonian Tropical Research Institute (STRI), who are now studying invading lizards on the tiny islands that dot the lake, discovered that islands with native lizards act as another kind of reservoir, harboring the parasites that control invaders. The study, published in the journal Biology Letters, provides valuable experimental evidence that biodiversity makes ecosystems more resistant to invasion.

As part of another study to find out how many generations it takes for slender anole lizards (Anolis apletophallus) to adapt to climate change, a research team led by Christian Cox, a visiting scientist at STRI from Florida International University, and Mike Logan from the University of Nevada, Reno, transplanted lizards from the tropical forest on the mainland to the islands, which tend to be hotter and drier. Before the transplant, they did a general health check of the lizards that included counting the number of parasites (mites) on their bodies.

When they came back several times during the next two years to see how the lizards were doing in their new habitats, they recounted the number of mites.

"We found that on the islands with no resident species of anole lizard, the slender anole lizards that were transplanted to the islands lost their mites within a single generation, and the mites are still gone several generations later (up until the present)," Cox said. "Indeed, individual founding lizards that had mites during the initial transplant had no mites when they were later recaptured. In contrast, anole lizards that were transplanted to an island with another resident (native) species of anole lizard kept their mites for three generations, and some of the founders on the two-species island never lost their mites."

"Our study turned out to be a large-scale experimental test of the enemy release hypothesis," said Logan, who did this work as a three-year STRI/Tupper postdoctoral fellow. "Often, when an invasive animal shows up in a new place, all of its pathogens and parasites are left behind or do not survive, giving it an extra survival advantage in the new place: thus the term enemy release."

The team also found that the two-species island had lower density and lower biomass per unit area of the invasive lizard species, indicating that the continued presence of the mites may be keeping their populations under control.

"Our study is a clear example of something that conservationists have been trying to communicate to the public for some time," Logan said. "Diverse native communities sometimes function as 'enemy reservoirs' for parasites and diseases that keep down the numbers of invaders."

 Read more at Science Daily

Previously undescribed lineage of Archaea illuminates microbial evolution

 In a publication in Nature Communications last Friday, NIOZ scientists Nina Dombrowski and Anja Spang and their collaboration partners describe a previously unknown phylum of aquatic Archaea that are likely dependent on partner organisms for growth while potentially being able to conserve some energy by fermentation. In contrast to initial analyses, this study shows that the new phylum is part of a group of Archaea that are believed to mainly comprise symbionts. Further, the study yields new insights into the diversity and evolutionary history of the Archaea.

Archaea make up one of the main divisions of life, next to the Bacteria and the Eukaryotes, the latter of which comprise, for example, fungi, plants and animals. Archaea are a large group of microorganisms that live in all habitats on Earth, ranging from soils and sediments to marine and freshwater environments, and from human-made to host-associated habitats including the gut. Accordingly, Archaea are now thought to play a major role in biogeochemical nutrient cycles.

In a publication in Nature Communications last Friday, evolutionary microbiologists Nina Dombrowski and Anja Spang from the Royal Netherlands Institute of Sea Research (NIOZ) describe a previously unknown archaeal lineage (phylum). The authors named them the Undinarchaeota, in reference to the female water spirit or nymph Undina. For the study, Dombrowski and Spang cooperated with partners from Bristol University, the University of Queensland and the Australian National University.

Diverse symbionts and parasites

Because of their great resemblance to Bacteria, Archaea were only described as a separate lineage about 40 years ago and were not studied intensely until very recently, when it became possible to sequence DNA directly from environmental samples and to reconstruct genomes from uncultivated organisms. This field of genetic research, generally referred to as metagenomics, has not only revealed that microbial life including the Archaea is much more diverse than originally thought, but also provided data needed to shed light on the function of these microbes in their environments.

The newly described Undinarchaeota were discovered in genetic material from marine (Indian, Mediterranean and Atlantic Ocean) and aquifer (Rifle aquifer, Colorado River) environments. The authors could show that they belong to a very diverse and until recently unknown group of so-called DPANN archaea. Members of the DPANN include organisms with very small genomes and limited metabolic capabilities, which suggests that these organisms depend on other microbes for growth and survival. In fact, the few DPANN archaea cultivated so far are obligate symbionts or parasites that cannot live on their own.

"In line with this, the Undinarchaeota seem to lack several anabolic pathways, indicating that they, too, depend on various metabolites from so far unknown partner organisms," says research leader Anja Spang. "However, Undinarchaeota seem to have certain metabolic pathways that are lacking in some of the most parasitic DPANN archaea, and they may be able to conserve energy by fermentation."

Complex evolutionary history

While DPANN have only been discovered recently, it is becoming increasingly clear that they are widespread and that representatives inhabit all conceivable environments on Earth. Yet little is known about their evolutionary and ecological role. "In some way, some of the DPANN archaea resemble viruses, needing a host organism, likely other archaea or bacteria, for survival," says Spang. "However, and in contrast to viruses, we currently know very little about the DPANN archaea and how they affect food webs and host evolution. It is also unclear whether DPANN are an ancient archaeal lineage that resembles early cellular life or have evolved later or in parallel with their hosts."

With their study, the authors could shed more light on the complex evolution of Archaea. "Our work revealed that many DPANN archaea frequently exchange genes with their hosts, which makes it very challenging to reconstruct their evolutionary history," says first author Nina Dombrowski. Tom Williams (Bristol University) adds: "However, we could show that DPANN have probably evolved in parallel with their hosts over a long evolutionary time scale, by identifying and studying those genes that were inherited from parent to offspring instead of having been transferred between host and symbiont."

Role in marine biogeochemical cycles

Spang expects that certain DPANN, including the Undinarchaeota, may be important for biogeochemical nutrient cycles within the oceans and sediments. "One reason that DPANN were discovered relatively recently is that, due to their small cell sizes, they were not retained on the filters originally used for concentrating cells from environmental samples." But since their discovery, DPANN have turned out to be much more widespread than originally anticipated. Chris Rinke from the University of Queensland adds: "Prospective research on the Undinarchaeota and other DPANN archaea will be essential to obtain a better understanding of marine biogeochemical cycles and the role symbionts play in the transformation of organic matter."

Read more at Science Daily

Aug 9, 2020

Oldest enzyme in cellular respiration isolated

 In the first billion years, there was no oxygen on Earth. Life developed in an anoxic environment. Early bacteria probably obtained their energy by breaking down various substances by means of fermentation. However, there also seems to have been a kind of "oxygen-free respiration." This was suggested by studies on primordial microbes that are still found in anoxic habitats today.

"We already saw ten years ago that there are genes in these microbes that perhaps encode for a primordial respiration enzyme. Since then, we -- as well as other groups worldwide -- have attempted to prove the existence of this respiratory enzyme and to isolate it. For a long time unsuccessfully because the complex was too fragile and fell apart at each attempt to isolate it from the membrane. We found the fragments, but were unable to piece them together again," explains Professor Volker Müller from the Department of Molecular Microbiology and Bioenergetics at Goethe University.

Through hard work and perseverance, his doctoral researchers Martin Kuhns and Dragan Trifunovic then achieved a breakthrough in two successive doctoral theses. "In our desperation, we at some point took a heat-loving bacterium, Thermotoga maritima, which grows at temperatures between 60 and 90°C," explains Dragan Trifunovic, who will shortly complete his doctorate. "Thermotoga also contains Rnf genes, and we hoped that the Rnf enzyme in this bacterium would be a bit more stable. Over the years, we then managed to develop a method for isolating the entire Rnf enzyme from the membrane of these bacteria."

As the researchers report in their current paper, the enzyme complex functions a bit like a pumped-storage power plant that pumps water into a lake higher up and produces electricity via a turbine from the water flowing back down again.

In the bacterial cell, however, the Rnf enzyme (biochemical name: ferredoxin:NAD-oxidoreductase) transports sodium ions from the cell's interior across the cell membrane to the outside, producing an electric field. This field drives a cellular "turbine" (ATP synthase): it lets the sodium ions flow back along the electric field into the cell's interior, and in doing so it harvests energy in the form of the cellular energy currency, ATP.
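The pumped-storage analogy can be made quantitative with a back-of-envelope calculation: how much energy does one returning sodium ion deliver, and how many ions does one ATP cost? In the sketch below, the membrane potential (~180 mV) and the in-vivo cost of ATP synthesis (~60 kJ/mol) are generic textbook values assumed for illustration, not measurements from this study.

```python
F = 96485.0   # Faraday constant, C/mol

# Assumed illustrative values (not from the study):
membrane_potential_V = 0.18   # ~180 mV electric potential across the membrane
atp_cost_kJ_per_mol = 60.0    # typical free energy needed to make ATP in vivo

# Free energy released per mole of Na+ flowing back down the field (kJ/mol)
g_per_na = F * membrane_potential_V / 1000.0

# Sodium ions ATP synthase must let through per ATP synthesised
na_per_atp = atp_cost_kJ_per_mol / g_per_na

print(f"~{g_per_na:.1f} kJ/mol per Na+, so ~{na_per_atp:.1f} Na+ ions per ATP")
```

Under these assumptions each sodium ion delivers roughly 17 kJ/mol, so the "turbine" needs on the order of three to four ions per ATP, which is in line with how rotary ATP synthases are generally understood to operate.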

The biochemical proof and the bioenergetic characterization of this primordial Rnf enzyme explain how the first forms of life produced the central energy currency ATP. The Rnf enzyme evidently functions so well that it is still found in many bacteria and some archaea today, including some pathogenic bacteria, where its role is still entirely unclear.

 Read more at Science Daily

Updating Turing's model of pattern formation

In 1952, Alan Turing published a study that described mathematically how systems composed of many living organisms can form rich and diverse arrays of orderly patterns. He proposed that this 'self-organisation' arises from instabilities in un-patterned systems, which can form as different species jostle for space and resources. So far, however, researchers have struggled to reproduce Turing patterns in laboratory conditions, raising serious doubts about the theory's applicability. In a new study published in EPJ B, researchers led by Malbor Asllani at the University of Limerick, Ireland, have revisited Turing's theory to prove mathematically how instabilities can occur through simple reactions and in widely varied environmental conditions.

The team's results could help biologists to better understand the origins of many ordered structures in nature, from spots and stripes on animal coats to clusters of vegetation in arid environments. In Turing's original model, he introduced two diffusing chemical species to different points on a closed ring of cells. As they diffused across adjacent cells, these species 'competed' with each other as they interacted, eventually organising to form patterns. This pattern formation depended on the fact that the symmetry during this process could be broken to different degrees, depending on the ratio between the diffusion speeds of each species; a mechanism now named the 'Turing instability.' However, a significant drawback of Turing's mechanism was that it relied on the unrealistic assumption that the chemicals diffuse at markedly different rates.

Through their calculations, Asllani's team showed that in sufficiently large rings of cells, where diffusion asymmetry causes both species to travel in the same direction, the instabilities which generate ordered patterns will always arise -- even when competing chemicals diffuse at the same rate. Once formed, the patterns will either remain stationary, or propagate steadily around the ring as waves. The team's result addresses one of Turing's key concerns about his own theory, and is an important step forward in our understanding of the innate drive for living systems to organise themselves.
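The instability at the heart of both the original theory and this extension can be reproduced with a short linear-stability calculation: perturb a stable uniform state and ask whether any spatial wavelength grows. The sketch below uses the classical textbook setup, in which the inhibitor diffuses much faster than the activator; it does not implement the paper's extension (equal diffusion rates plus directed transport on a ring), and the Jacobian entries are illustrative values chosen to satisfy Turing's conditions.

```python
import numpy as np

# Linearised reaction terms at a uniform steady state (illustrative values):
# species u activates itself, species v inhibits it.
J = np.array([[0.9, -1.0],
              [1.0, -1.1]])

# Classical Turing case: the inhibitor diffuses much faster than the activator.
D = np.diag([1.0, 40.0])

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D at wavenumber k."""
    return np.linalg.eigvals(J - k**2 * D).real.max()

ks = np.linspace(0.0, 2.0, 400)
rates = [growth_rate(k) for k in ks]

print(f"growth at k=0: {rates[0]:.3f}")   # negative: uniform state is stable
print(f"max growth:    {max(rates):.3f}") # positive: a patterned mode grows
```

The uniform state is stable against uniform perturbations (k = 0) yet unstable at a band of nonzero wavenumbers, and it is that fastest-growing wavelength that sets the spacing of the resulting spots or stripes. Asllani's team showed that with directed transport on a sufficiently large ring, this same kind of dispersion relation develops a growing mode even when the two diffusion constants are equal.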

 From Science Daily