Sep 25, 2021

Insights from our genome and epigenome will help prevent, diagnose and treat cancer

In 2020, an estimated 10 million people lost their lives to cancer. This devastating disease is underpinned by changes to our DNA -- the instruction manual for all our cells.

It has been 20 years since scientists first unveiled the sequence of the human genome. This momentous achievement was followed by major technological advances that allow us to today read the layers of information of our DNA in enormous detail -- from the first changes to DNA that occur as a cell becomes cancerous to the complex microenvironments of advanced tumours.

Now, to accelerate discoveries for cancer patients, we need new ways to bring together the different types of complex data we generate to provide new biological insights into cancer evolution.

For today's issue of Science, my colleagues Professor Toshikazu Ushijima (Chief of the Epigenomics Division at the National Cancer Center Research Institute, Japan), Professor Patrick Tan (Executive Director of the Genome Institute of Singapore) and I were invited to review the cancer insights we can currently obtain from analysing DNA in its full complexity, and to define the future challenges we need to tackle to yield the next step-changes for patients.

The complexity of our DNA

Many imagine our DNA -- our genome -- as simply a string of letters. In reality, many layers of information -- known as the epigenome -- completely change its activity.

Our genome can be compared to the different geographical environments of our planet. Much like mountains, islands and oceans are made up of the same basic elements, our genetic sequence of As, Ts, Gs and Cs forms the basis of complex structural features within our cells.

These geographical environments are created by our epigenome -- additional layers of information, which include chemical markers that attach to our DNA (called DNA methylation) and chemical changes to proteins (histones) that wrap around it, which together orchestrate how DNA is organised in three dimensions inside our cells.

Both our genome and epigenome evolve during the cancer life cycle, and we need to understand these complex changes to improve cancer risk assessment and accelerate therapeutic discoveries for patients.

From cancer formation to metastasis

It was previously thought that genetic changes were sufficient to cause a cancer, but it is becoming clear that changes to both the genome and the epigenome together play a significant role in cancer evolution. There is some evidence, for instance, that changes to DNA methylation that occur with ageing may predispose cells to genetic changes that cause cancer.

Take cigarette smoking, where scientists have observed DNA methylation changes in the cells lining the lung well before genetic changes or a lung cancer could be detected. To gain new insights into what drives carcinogenesis, we need to map the precise order of genomic and epigenomic changes.

We are also becoming aware that whilst a cancer can accumulate genetic changes, the epigenome is also 'reprogrammed' as the cancer transitions from a primary to a metastasising tumour, and eventually may develop resistance to treatment. Understanding these changes may lead to new therapeutic targets that can more precisely treat advanced cancers.

New insight through advanced technologies

Cancer cells reside in a tumour ecosystem with other diverse cell types, including immune cells, and connective cells, called stromal cells. Today, advanced imaging and single-cell technologies are helping us map these cells, as well as genomic and epigenomic changes, in the three-dimensional context of a tumour, and at unprecedented resolution. At Garvan, our researchers are conducting these studies at our intravital microscopy facilities and the Garvan-Weizmann Centre for Cellular Genomics.

A number of international research consortia, including the Human Tumour Atlas Network and the Cancer Research UK Grand Challenge project have been established to study cancers at the single-cell and spatial level. However, these consortia will have to tackle enormous challenges in data integration. In today's global research environment, we need globally standardised methods to integrate data from different analysis techniques and laboratories.

By revealing not just associations, but the full integration of DNA and cellular changes that occur during cancer formation and progression, we will understand how cancer can be better diagnosed, treated and prevented.

Big data -- opportunities and challenges

The last 20 years have seen us develop the technology to show that our genome and epigenome are far more complex than we appreciated. We're at a point where new cancer insights will come from solving mathematical problems generated from complex and diverse sequencing and imaging data sets.

Our advanced technologies are allowing us to generate a wealth of data. But the challenge now is data integration -- humans simply cannot digest all the information we generate. Addressing this challenge will require artificial intelligence, which is where we will need to incorporate computational expertise, looking at and modelling data in innovative ways.

Another critical future challenge will be to translate basic findings into tangible clinical applications. A precise understanding of the multiple steps that lead to cancer formation inside cells may allow us to improve our screening of cancer risk and early detection of cancer. In the future, studies of genetic and epigenetic signatures may help us remove carcinogenic agents and processes from our environment altogether.

For advanced cancers, integrated DNA analyses may help pinpoint overlooked mechanisms that cancer cells use to metastasise, which may be promising targets for therapy development.

For geneticists and epigeneticists, the challenge of integrating our data to study cancer is not unlike the challenge of modelling climate change. Climate modelling requires a huge amount of data from different sources to be combined and contextualised to make predictions about the planet's future.

This is the same for genomics and epigenomics -- we need to understand how the multiple different layers of DNA information work together to elicit the damaging effects of 'climate change' in our cells as they become cancerous.

Read more at Science Daily

When it comes to communication skills, maybe we’re born with it?

From inside the womb and as soon as they enter the world, babies absorb information from their environment and the adults around them, quickly learning after birth how to start communicating through cries, sounds, giggles, and other kinds of baby talk. But are a child's long-term language skills shaped by how their brain develops during infancy, and how much of their language development is influenced by their environment and upbringing?

Following dozens of children over the course of five years, a Boston University researcher has taken the closest look yet at the link between how babies' brains are structured in infancy and their ability to learn a language at a young age, and to what degree their environment plays a role in brain and language development.

The new research, described in a paper published in Developmental Cognitive Neuroscience, finds that the brain's organizational pathways might set a foundation for a child's language learning abilities within the first year of life. These pathways are known as white matter, and they act as the connectors between the billions of neurons -- called gray matter -- that comprise the brain tissue. This allows for the exchange of signals and for all of the different tasks and functions we need to perform, as well as all of the biological processes that sustain us.

"A helpful metaphor often used is: white matter pathways are the 'highways,' and gray matter areas are the 'destinations'," says BU neuroscientist and licensed speech pathologist Jennifer Zuk, who led the study. Zuk, a College of Health & Rehabilitation Sciences: Sargent College assistant professor of speech, language, and hearing sciences, says the more someone does a certain task, like learning a new language, the stronger and more refined the pathways become in the areas of the brain responsible for that task, allowing information to flow more efficiently through the white matter highways. Recent evidence suggests that white matter most rapidly develops within the first two years of life, according to Zuk.

In addition to white matter development, scientists have long known that the environment also plays an important role in shaping a person's language abilities, Zuk says. But many uncertainties remain about whether nature or nurture is more dominant in determining the makeup of white matter and how well a baby learns to communicate.

In their study, Zuk says, she and her colleagues sought answers to several specific questions: from very early on, to what extent does predisposed brain structure play a role in development? Does the brain develop in tandem with language, and is the environment ultimately driving the progress of both? And to what extent does brain structure in early infancy set children up for success with language?

To investigate this, Zuk and Boston Children's Hospital researcher and study senior author Nadine Gaab met with 40 families with babies to take images of the infants' brains using magnetic resonance imaging (MRI) and gather first-of-its-kind data on white matter development. No small feat, considering the babies needed to be sound asleep to allow for crisp capture of their brain activity and structure using MRI.

"It was such a fun process, and also one that calls for a lot of patience and perseverance," says Zuk, who had to master the challenge of getting 4-to-18-month-old babies comfortable enough to snooze through the MRI process -- the loud sounds of an MRI could be very disruptive to a sleeping baby. "There are very few researchers in the world using this approach," she says, "because the MRI itself involves a rather noisy background…and having infants in a naturally deep sleep is very helpful in accomplishing this pretty crazy feat."

It's also the first time that scientists have used MRI to look at the relationship between brain structure and language development in full-term, typically developing children from infancy to school age.

One important white matter pathway the researchers looked at using MRI is called the arcuate fasciculus, which connects two regions of the brain responsible for language production and comprehension. Using MRI, the researchers measured the organization of white matter by looking at how easily water diffuses through the tissue, indicating the pathway's density.
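
One standard way to quantify this in diffusion MRI -- not necessarily the exact measure used in this study -- is fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor:

FA = \sqrt{\tfrac{3}{2}}\,\sqrt{\dfrac{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}{\lambda_1^2+\lambda_2^2+\lambda_3^2}}, \qquad \bar{\lambda}=\tfrac{\lambda_1+\lambda_2+\lambda_3}{3}

Values near 0 indicate water diffusing equally easily in all directions, while values near 1 indicate diffusion channelled along the pathway, which is typically read as more organised white matter.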

Five years after first rocking babies to sleep and gently tucking them inside an MRI machine, Zuk and her collaborators met up with the children and their families again to assess each child's emerging language abilities. Their assessments tested each one's vocabulary knowledge, their ability to identify sounds within individual words, and their ability to blend individual sounds together to understand the word they make.

According to their findings, children born with higher indications of white matter organization had better language skills five years later, suggesting that communication skills could be strongly linked to predisposed brain structure. But, Zuk says, this is only the first piece of a very complicated puzzle.

"Perhaps the individual differences in white matter we observed in infancy might be shaped by some combination of a child's genetics and their environment," she says. "But it is intriguing to think about what specific factors might set children up with more effective white matter organization early on."

Although their findings indicate a foundation for language is established in infancy, "ongoing experience and exposure [to language] then builds upon this foundation to support a child's ultimate outcomes," Zuk says.

She says this means that during the first year of a child's life "there's a real opportunity for more environmental exposure [to language] and to set children up for success in the long term."

Read more at Science Daily

Sep 24, 2021

Carbon dioxide reactor makes 'Martian fuel'

Engineers at the University of Cincinnati are developing new ways to convert greenhouse gases to fuel to address climate change and get astronauts home from Mars.

UC College of Engineering and Applied Science assistant professor Jingjie Wu and his students used a carbon catalyst in a reactor to convert carbon dioxide into methane. Known as the "Sabatier reaction," after the late French chemist Paul Sabatier, it's a process the International Space Station uses to scrub the carbon dioxide from the air the astronauts breathe and generate rocket fuel to keep the station in high orbit.
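
In its standard balanced form, the Sabatier reaction combines carbon dioxide with hydrogen over a catalyst to produce methane and water, releasing heat:

\mathrm{CO_2 + 4\,H_2 \;\rightarrow\; CH_4 + 2\,H_2O}, \qquad \Delta H \approx -165\ \mathrm{kJ/mol}

On Mars, the hydrogen would still have to be supplied separately, for example by electrolysing water, so a reactor of this kind would be one part of a larger propellant-production chain.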

But Wu is thinking much bigger.

The Martian atmosphere is composed almost entirely of carbon dioxide. Astronauts could save half the fuel they need for a return trip home by making what they need on the red planet once they arrive, Wu said.

"It's like a gas station on Mars. You could easily pump carbon dioxide through this reactor and produce methane for a rocket," Wu said.

UC's study was published in the journal Nature Communications with collaborators from Rice University, Shanghai University and East China University of Science and Technology.

Wu began his career in chemical engineering by studying fuel cells for electric vehicles but began looking at carbon dioxide conversion in his chemical engineering lab about 10 years ago.

"I realized that greenhouse gases were going to be a big issue in society," Wu said. "A lot of countries realized that carbon dioxide is a big issue for the sustainable development of our society. That's why I think we need to achieve carbon neutrality."

The Biden Administration has set a goal of achieving a 50% reduction in greenhouse gas pollutants by 2030 and an economy that relies on renewable energy by 2050.

"That means we'll have to recycle carbon dioxide," Wu said.

Wu and his students, including lead author and UC doctoral candidate Tianyu Zhang, are experimenting with different catalysts such as graphene quantum dots -- layers of carbon just nanometers across -- that can increase the yield of methane.

Wu said the process holds promise to help mitigate climate change. But it also has a big commercial advantage in producing fuel as a byproduct.

"The process is 100 times more productive than it was just 10 years ago. So you can imagine that progress will come faster and faster," Wu said. "In the next 10 years, we'll have a lot of startup companies to commercialize this technique."

Wu's students are using different catalysts to produce not only methane but ethylene. Called the world's most important chemical, ethylene is used in the manufacture of plastics, rubber, synthetic clothing and other products.

"Green energy will be very important. In the future, it will represent a huge market. So I wanted to work on it," Zhang said.

Synthesizing fuel from carbon dioxide becomes even more commercially viable when coupled with renewable energy such as solar or wind power, Wu said.

"Right now we have excess green energy that we just throw away. We can store this excess renewable energy in chemicals," he said.

The process is scalable for use in power plants that can generate tons of carbon dioxide. And it's efficient since the conversion can take place right where excess carbon dioxide is produced.

Wu said advances in fuel production from carbon dioxide make him more confident that humans will set foot on Mars in his lifetime.

Read more at Science Daily

Ancient DNA analysis sheds light on dark event in medieval Spain

An international research team led by the University of Huddersfield's Archaeogenetics Research Group, including geneticists, archaeological scientists, and archaeologists, has published the genome sequence of a unique individual from Islamic medieval Spain -- al-Andalus -- the results of which have shed light on a brutal event that took place in medieval Spain.

The individual, who was discovered in an eleventh century Islamic necropolis from the city of Segorbe, near Valencia in Spain, is known to local archaeologists as the 'Segorbe Giant' because of his unusual height.

His skeleton had suggested that he might have some African ancestry. Most of Spain had been progressively conquered by Arabs and Berbers from Northwest Africa from the eighth century onwards, creating one of the major centres of medieval European civilisation.

The ancient DNA analysis was carried out by Dr Marina Silva and Dr Gonzalo Oteo-Garcia, who had been working on the University's Leverhulme Trust doctoral scholarship programme in evolutionary genomics.

They found that the "Giant" carried highly specific North African genetic lineages on both his male and female lines of descent -- the Y-chromosome and the mitochondrial DNA -- the oldest individual known to have this particular pattern of ancestry. This suggested that his recent ancestry was indeed amongst the newly Islamicised Berber populations of medieval Northwest Africa.

But a more detailed examination revealed a more complex situation. The male and female lines of descent account for only a small fraction of our overall ancestry -- that from our father's father's father and our mother's mother's mother, and so on.

His genome-wide ancestry showed that he also carried a significant amount -- likely more than half -- of local Spanish ancestry in his chromosomes. Moreover, stable isotope analyses suggested that he most likely grew up locally, meaning the "Giant's" Berber ancestry was in fact due to migration in an earlier generation. He therefore belonged to a settled community that had thoroughly intermixed local Spanish and immigrant North African ancestry.

What was especially striking, revealed Professor Martin Richards, Director of the University's Evolutionary Genomics Research Centre, was that he was very unlike modern people from Valencia, who carry little or none of his Berber genetic heritage.

This can be explained by the changing political situation following the Christian reconquest of Spain, as Dr Oteo-Garcia, who recently commenced work at the University of Parma, explained: "The decree of expulsion of Moriscos from the Valencia region, that is, Muslims who had already been forcibly converted to Christianity, was followed by the resettlement by people from further north, who had little North African ancestry, thereby transforming the genetic variation in the region."

Read more at Science Daily

Earliest evidence of human activity found in the Americas, researchers report

Footprints found at White Sands National Park in New Mexico provide the earliest unequivocal evidence of human activity in the Americas and provide insight into life over 23,000 years ago, scientists report.

The findings are described in an article in the journal Science.

Researchers Jeff Pigati and Kathleen Springer, with the U.S. Geological Survey, used radiocarbon dating of seed layers above and below the footprints to determine their age. The dates range in age and confirm human presence over at least two millennia, with the oldest tracks dating back 23,000 years.
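
The calibration details are not spelled out here, but radiocarbon dating in general rests on the exponential decay of carbon-14, which has a half-life of about 5,730 years; an uncalibrated age follows from the fraction of carbon-14 remaining in the seeds:

t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right)

where N_0/N is the ratio of the original to the measured carbon-14 content. Raw ages are then calibrated against records such as tree rings to give calendar dates.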

This corresponds to the height of the last glacial cycle, during something known as the Last Glacial Maximum, and makes them the oldest known human footprints in the Americas.

It was previously thought that humans entered the Americas much later, after the melting of the North American ice sheets, which opened up migration routes.

"Our dates on the seeds are tightly clustered and maintain stratigraphic order above and below multiple footprint horizons -- this was a remarkable outcome," Springer said.

The footprints tell an interesting tale of what life was like at this time. Judging by their size, the tracks were left mainly by teenagers and younger children, with the occasional adult.

"The footprints left at White Sands give a picture of what was taking place, teenagers interacting with younger children and adults," said lead study author Matthew Bennett from Bournemouth University in England. "We can think of our ancestors as quite functional, hunting and surviving, but what we see here is also activity of play, and of different ages coming together. A true insight into these early people."

"For decades, archaeologists have debated when people first arrived in the Americas," said co-author Vance Holliday, a professor in the UArizona School of Anthropology and Department of Geosciences. "Few archaeologists see reliable evidence for sites older than about 16,000 years. Some think the arrival was later, no more than 13,000 years ago by makers of artifacts called Clovis points. The White Sands tracks provide a much earlier date. There are multiple layers of well-dated human tracks in streambeds where water flowed into an ancient lake. This was 10,000 years before Clovis people."

Holliday and study co-author Brendan Fenerty, a UArizona doctoral student in the Department of Geosciences, documented basic geologic layering and dating in trenches on the White Sands Missile Range near the discovery site several years before the tracks were found.

"We were interested in reconstructing the evolution of the landscape in the context of environmental changes and some younger archaeological sites in the area," Holliday said. "We had no idea what was buried nearby."

Tracks of mammoth, giant ground sloth, dire wolves and birds are also all present at the White Sands site.

"It is an important site because all of the trackways we've found there show an interaction of humans in the landscape alongside extinct animals, like mammoths and giant sloths," said study co-author Sally Reynolds of Bournemouth University. "We can see the co-existence between humans and animals on the site as a whole, and by being able to accurately date these footprints, we're building a greater picture of the landscape."

The human tracks at White Sands were first discovered by David Bustos, resources manager at the park.

"It is incredible to have the confirmation on the age of the human prints, and exciting but also sad to know that this is only a small portion of the 80,000 acres where the prints have been revealed bare and are also being rapidly lost to ongoing soil erosion," Bustos said.

The team also pioneered non-invasive geophysical techniques to help locate the site. Tommy Urban, from Cornell University, led this part of the work.

"Detection and imaging with nondestructive technology has greatly expanded our capacity to study these remarkable footprints in their broader context," he said.

Traditional archaeology relies on the discovery of bones and tools but can often be difficult to interpret. Human footprints provide unequivocal evidence of presence and also of behavior.

Read more at Science Daily

New online tool to help residents reduce the impact of traffic-related air pollution

Researchers at the University of Surrey have released a new online tool to help schools, hospitals and residents understand and reduce the impact of traffic-related air pollution.

The Hedge Design for the Abatement of Traffic Emissions (HedgeDATE) online tool allows users to describe their environment -- such as a road with buildings on one side -- and then recommends actions citizens can take to improve its air quality.

For example, HedgeDATE advises that a resident living in a shallow street canyon should install hedges and a green wall; the tool can also give more detailed information on the species of plants suitable for different locations.

The development of HedgeDATE builds on years of research and public engagement work by the University of Surrey's Global Centre for Clean Air Research (GCARE). HedgeDATE grew out of workshops on air pollution mitigation conducted with local communities as part of Guildford Living Lab.

Professor Prashant Kumar, founding director of GCARE and founder of the Guildford Living Lab at the University of Surrey, said: "The best use of research is for public benefit. However, this poses the challenge of translating complex science into simple, evidence-based actions. At GCARE, we proactively engage with the community and produce co-designed solutions such as HedgeDATE. Soon, when the world is back on its feet after the pandemic, we will need to come to terms with the air pollution and climate change crisis.

"HedgeDATE is about giving ordinary people -- our schools, hospitals and councils -- easy-to-use tools that can inspire them to make a real difference to the quality of the air they breathe."

Justine Fuller, co-author of the paper on HedgeDATE from Guildford Borough Council, said: "We have been collaborating with Professor Kumar's team for many years. The HedgeDATE tool is an excellent means of engaging the public on air pollution issues. It has been a pleasure to support this project and will look forward to sharing this with the interested user groups."

Dr Stuart Cole, another co-author from Oxfordshire County Council, said: "We're proud to be collaborators on the HedgeDATE project. HedgeDATE will be a useful educational tool for councils, schools, other organisations and members of the public to consult when planning green areas for improved air quality."

Read more at Science Daily

Systems approach helps assess public health impacts of changing climate, environmental policies

A team co-led by a Washington State University scientist offers an alternative way to understand and minimize health impacts from human-caused changes to the climate and environment in a new study published in the journal One Earth.

Based at WSU Vancouver, lead author Deepti Singh, assistant professor in the School of the Environment, drew on hundreds of studies of climate change, air quality, agriculture, and public health to propose a "systems lens," or scientific approach, that connects health risks with simultaneous environmental changes driven by human practices.

"The health consequences of air pollution, climate change, and transformations in agriculture are often discussed separately," Singh said. "But these issues are all related -- they have similar sources, and each one affects the others. Agricultural activities contribute to air pollution and affect regional climate patterns, while farm production and quality of crops are sensitive to air quality and climate conditions."

Collaborating with researchers at Columbia University, the Indian School of Business, Boston University, and the University of Delaware, Singh studied the situation in South Asia, where rapid industrialization and modern farming practices have aided economic development and increased food production, but also compromised multiple dimensions of human health.

"We're offering a framework to assess the overall health impacts from multiple parts of Earth's natural systems, which are all changing simultaneously because of human activities," Singh said. "The research could help identify policies and solutions that will have multiple co-benefits for the environment and human health."

"Our work sheds new light on the ways that food systems affect, and are affected by, climate change and air pollution," said Kyle Davis, co-author and assistant professor at the University of Delaware.

The scientists reviewed multiple examples of health impacts from changes in climate, air quality, and agricultural output, as well as co-benefits and unintended consequences of efforts to curb emissions and save water, for example. They found these examples share the need for better tools and local, high-resolution data on health, weather, emissions, air pollution, and land use to better measure human and environmental impacts.

"This study points out how useful and effective policy responses need to take multiple factors and interactions into account, and highlights the problem with simplistic explanations," said Ashwini Chhatre, co-author and associate professor of public policy at the Indian School of Business.

Use of fossil fuels, burning of crop residue, and changes to the landscape from expansion and intensification of agriculture have contributed to extremely poor air quality in South Asia, changed the main source of rainfall, the summer monsoon, and also increased health risks for nearly a quarter of the world's population living in the region.

"Late autumn is 'pollution season' in north India, and also brings vicious debates in our society about who and what are contributing to it," Chhatre said.

Additionally, more frequent and intense heat waves and floods have killed thousands, displaced millions, lowered labor productivity, and caused disease outbreaks. Severe air pollution has contributed to increased heart and lung diseases as well as millions of premature deaths and weakened monsoonal rains. At the same time, air pollution and climate change have reduced yields of important food crops.

Read more at Science Daily

Sep 23, 2021

Hubble finds early, massive galaxies running on empty

When the universe was about 3 billion years old, just 20% of its current age, it experienced the most prolific period of star birth in its history. But when NASA's Hubble Space Telescope and the Atacama Large Millimeter/submillimeter Array (ALMA) in northern Chile gazed toward cosmic objects in this period, they found something odd: six early, massive, "dead" galaxies that had run out of the cold hydrogen gas needed to make stars.

Without more fuel for star formation, these galaxies were literally running on empty. The findings are published in the journal Nature.

"At this point in our universe, all galaxies should be forming lots of stars. It's the peak epoch of star formation," explained lead author Kate Whitaker, assistant professor of astronomy at the University of Massachusetts, Amherst. Whitaker is also associate faculty at the Cosmic Dawn Center in Copenhagen, Denmark. "So what happened to all the cold gas in these galaxies so early on?"

This study is a classic example of the harmony between Hubble and ALMA observations. Hubble pinpointed where in the galaxies the stars exist, showing where they formed in the past. By detecting the cold dust that serves as a proxy for the cold hydrogen gas, ALMA showed astronomers where stars could form in the future if enough fuel were present.

Using Nature's Own Telescopes

The study of these early, distant, dead galaxies was part of the appropriately named REQUIEM program, which stands for Resolving QUIEscent Magnified Galaxies At High Redshift. (Redshift happens when light is stretched by the expansion of space and appears shifted toward the red part of the spectrum. The farther away a galaxy is with respect to the observer, the redder it appears.)
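
In the usual convention, the redshift z simply compares the observed wavelength of a spectral feature with the wavelength at which it was emitted:

z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}

so a line observed at twice its emitted wavelength corresponds to z = 1, and larger z means an earlier, more distant source.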

The REQUIEM team uses extremely massive foreground galaxy clusters as natural telescopes. The immense gravity of a galaxy cluster warps space, bending and magnifying light from background objects. When an early, massive, and very distant galaxy is positioned behind such a cluster, it appears greatly stretched and magnified, allowing astronomers to study details that would otherwise be impossible to see. This is called "strong gravitational lensing."

Only by combining the exquisite resolution of Hubble and ALMA with this strong lensing was the REQUIEM team able to understand the formation of these six galaxies, which appear as they did only a few billion years after the big bang.

"By using strong gravitational lensing as a natural telescope, we can find the distant, most massive, and first galaxies to shut down their star formation," said Whitaker. "I like to think about it like doing science of the 2030s or 40s -- with powerful next-generation space telescopes -- but today instead by combining the capabilities of Hubble and ALMA, which are boosted by strong lensing."

"REQUIEM pulled together the largest sample to date of these rare, strong-lensed, dead galaxies in the early universe, and strong lensing is the key here," said Mohammad Akhshik, principal investigator of the Hubble observing program. "It amplifies the light across all wavelengths so that it's easier to detect, and you also get higher spatial resolution when you have these galaxies stretched across the sky. You can essentially see inside of them at much finer physical scales to figure out what's happening."

Live Fast, Die Young

These sorts of dead galaxies don't appear to rejuvenate, even through later minor mergers and accretions of nearby, small galaxies and gas. Gobbling up things around them mostly just "puffs up" the galaxies. If star formation does turn back on, Whitaker described it as "a kind of a frosting." About 11 billion years later in the present-day universe, these formerly compact galaxies are thought to have evolved to be larger but are still dead in terms of any new star formation.

These six galaxies lived fast and furious lives, creating their stars in a remarkably short time. Why they shut down star formation so early is still a puzzle.

Whitaker proposes several possible explanations: "Did a supermassive black hole in the galaxy's center turn on and heat up all the gas? If so, the gas could still be there, but now it's hot. Or it could have been expelled and now it's being prevented from accreting back onto the galaxy. Or did the galaxy just use it all up, and the supply is cut off? These are some of the open questions that we'll continue to explore with new observations down the road."

Read more at Science Daily

Blowing up medieval gunpowder recipes

First used for battle in China in about 900 A.D., gunpowder spread throughout Eurasia by the end of the 13th century, eventually revolutionizing warfare as a propellant in firearms and artillery. Meanwhile, master gunners tinkered with gunpowder formulas, trying to find the ideal concoction. Now, researchers reporting in ACS Omega have recreated medieval gunpowder recipes and analyzed the energies released during combustion, revealing that the evolution of the perfect powder was a slow, trial-and-error process.

Although largely obsolete in modern weaponry, gunpowder, also known as black powder, is still used in historical weapons, fireworks and pyrotechnics. The explosive is a combination of varying ratios of potassium nitrate (or "saltpeter"), sulfur and charcoal. Medieval recipes sometimes included interesting additives, such as camphor, varnish or brandy, with obscure purposes. Dawn Riegner, Cliff Rogers and their team of chemists and historians wanted to analyze the energetics of medieval gunpowder recipes to help understand the intent of master gunners in creating these formulas, as well as to provide important technical information about early gunpowder manufacturing.

To do this, the researchers identified over 20 gunpowder recipes from medieval texts dated 1336 to 1449 A.D. They prepared the powders and measured the energies released just before and during combustion using differential scanning calorimetry and bomb calorimetry. They also tested a few of the recipes at a West Point firing range using a replica of an early 15th-century stone-throwing cannon.

In general, in the period 1338-1400 A.D., the percentage of saltpeter increased and charcoal decreased, causing lower heats of combustion, which could have produced safer recipes for medieval gunners. After 1400 A.D., the percentage of saltpeter (the most expensive ingredient) decreased slightly, while sulfur and charcoal increased, raising the heat of combustion, although not as high as for the earliest recipes. Certain additives, such as the combination of camphor and ammonium chloride, appeared to make gunpowder stronger, whereas others, such as water or brandy, did not show energetic advantages, but might have served other purposes. For example, they might have made the material more stable during transport or storage. Although the researchers have characterized the gunpowders in the lab and in limited experiments on the firing range, more field work must be done to evaluate which formulation would perform the best in historical contexts, they say.
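
As a rough guide to how such measurements work (the study's calorimeter constants and sample masses are not given here), bomb calorimetry recovers the gross heat of combustion per gram of powder from the temperature rise of a calorimeter of known heat capacity:

q_{\mathrm{comb}} = \frac{C_{\mathrm{cal}}\,\Delta T}{m_{\mathrm{sample}}}

so, purely for illustration, a 1 g sample that warms a 10 kJ/K calorimeter by 0.3 K releases about 3 kJ per gram.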

From Science Daily

Early Homo sapiens groups in Europe faced subarctic climates

The process by which our species dispersed into new environments at that time represents an important evolutionary turning point that ultimately led to Homo sapiens populating all continents and a large diversity of climate zones and environments. The mechanisms that facilitated the initial waves of expansion remain debated, but the majority of models, based on correlating archaeological sites with spatially distant climatic archives, have so far indicated that human groups relied on warmer climatic conditions to spread into new, more northern environments.

Using evidence directly from the archaeological layers of Bacho Kiro Cave, the Max Planck team was now able to show that humans endured very cold climatic conditions, similar to those typical of present-day northern Scandinavia, for several thousand years. "Our evidence shows that these human groups were more flexible with regard to the environments they used and more adaptable to different climatic conditions than previously thought," says lead author Sarah Pederzani, a researcher at the Max Planck Institute for Evolutionary Anthropology and the University of Aberdeen. Jean-Jacques Hublin, director of the Department of Human Evolution at the Max Planck Institute, adds: "Using these new insights, new models of the spread of our species across Eurasia will now need to be constructed, taking into account their higher degree of climatic flexibility."

Archaeological materials from Bacho Kiro Cave in Bulgaria

By directly using archaeological materials, such as the remains of herbivores butchered by humans, to generate climatic data the palaeoclimate research team -- led by Pederzani and Kate Britton, also a researcher at Max Planck Institute for Evolutionary Anthropology and the University of Aberdeen -- was able to establish a very robust record of local climatic conditions that specifically relates to the times when humans were inhabiting Bacho Kiro Cave.

"This technique enables a more confident assignment of local climatic context compared to the more commonly used chronological correlation between archaeological data and climatic archives from different localities that formed the basis of much of the existing research on human climatic adaptability -- it really gives us insight into what life was like 'on the ground'," says Britton. "However, due to the time consuming nature of the analysis and the reliance on the availability of particular animal remains, oxygen isotope studies or other ways of generating climatic data directly from archaeological sites remain scarce for the time period when Homo sapiens first spread across Eurasia," adds Pederzani. Indeed, this Max Planck study is the first study conducted in the context of the Initial Upper Palaeolithic and could therefore yield such surprising results.

Highly resolved record of past temperatures spanning more than 7,000 years

Pederzani spent one year on lab work -- from drilling a series of small samples from the animal teeth, through wet chemistry preparation, to stable isotope ratio mass spectrometry -- to obtain all the necessary data. "Through this time intensive analysis that included a total of 179 samples, it was possible to obtain a very highly resolved record of past temperatures, including summer, winter and mean annual temperature estimates for human occupations spanning more than 7,000 years," says Pederzani.
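
Oxygen isotope results of this kind are conventionally reported in delta notation, comparing the ratio of heavy to light oxygen in a sample with that of an agreed reference standard:

\delta^{18}\mathrm{O} = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right)\times 1000\ \text{‰}, \qquad R = {}^{18}\mathrm{O}/{}^{16}\mathrm{O}

Converting tooth-enamel values of this kind into the seasonal temperature estimates quoted above requires further calibration against relationships between drinking water and local climate.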

Read more at Science Daily

Those earrings are so last year – but the reason you're wearing them is ancient

The necklace, nametag, earrings or uniform you chose to put on this morning might say more than you realize about your social status, job or some other aspect of your identity.

Anthropologists say humans have been doing this -- finding ways to communicate about themselves without the fuss of conversation -- for millennia.

But shell beads recovered from a cave in western Morocco, determined to be between 142,000 and 150,000 years old, suggest that this behavior may go back much farther than previously thought. The finding, detailed Wednesday in the journal Science Advances, was made by a team of archaeologists that includes Steven L. Kuhn, a professor of anthropology in the University of Arizona College of Social and Behavioral Sciences.

The beads, Kuhn and his colleagues say, are the earliest known evidence of a widespread form of nonverbal human communication, and they shed new light on how humans' cognitive abilities and social interactions evolved.

"They were probably part of the way people expressed their identity with their clothing," Kuhn said. "They're the tip of the iceberg for that kind of human trait. They show that it was present even hundreds of thousands of years ago, and that humans were interested in communicating to bigger groups of people than their immediate friends and family."

How does this ancient form of communication show up today? It happens often, Kuhn said.

"You think about how society works -- somebody's tailgating you in traffic, honking their horn and flashing their lights, and you think, 'What's your problem?'" Kuhn said. "But if you see they're wearing a blue uniform and a peaked cap, you realize it's a police officer pulling you over."

Kuhn and an international team of archaeologists recovered the 33 beads between 2014 and 2018 near the mouth of Bizmoune Cave, about 10 miles inland from Essaouira, a city on Morocco's Atlantic coast.

Kuhn co-directs archaeological research at Bizmoune Cave with Abdeljalil Bouzouggar, a professor at the National Institute of Archaeological Sciences and Heritage in Rabat, Morocco, and Phillipe Fernandez, from the University Aix-Marseille in France, who are also authors on the study. El Mehdi Sehasseh, a graduate student at the National Institute of Archaeological Sciences and Heritage, who did the detailed study of the beads, is the study's lead author.

The beads uncovered by Kuhn and his collaborators were made from sea snail shells, and each measures roughly half an inch long. Holes in the center of the beads, as well as other markings from wear and tear, indicate that they were hung on strings or from clothing, Kuhn said.

The beads are like many others found at sites throughout northern and southern Africa, but previous examples are no more than 130,000 years old. Ancient beads from North Africa are associated with the Aterian, a Middle Stone Age culture known for its distinctive stemmed spear points, whose people hunted gazelles, wildebeest, warthogs and rhinoceros, among other animals.

The beads serve as potential clues for anthropologists studying the evolution of human cognition and communication. Researchers have long been interested in when language appeared. But there was no material record of language until just a few thousand years ago, when humans began writing things down.

The beads, Kuhn said, are essentially a fossilized form of basic communication.

"We don't know what they meant, but they're clearly symbolic objects that were deployed in a way that other people could see them," he said.

The beads are also notable for their lasting form. Rather than painting their bodies or faces with ochre or charcoal, as many people did, the beads' makers made something more permanent, Kuhn said, suggesting the message they intended to convey was a lasting and important one.

In many ways, the beads raise more questions than they answer. Kuhn said he and his colleagues are now interested in learning why the Aterian people felt the need to make the beads when they did. They're exploring several possible explanations. One, Kuhn said, involves a growing population; as more people began occupying North Africa, they may have needed ways to identify themselves.

It is also possible that people in North Africa started using the method of communication at a time when the climate was cold and dry. They may have developed clans or other allegiances to protect limited resources, then perhaps used the beads to express their ethnicity or other identity to show they belonged in a certain area, Kuhn said.

"It's one thing to know that people were capable of making them," Kuhn said, "but then the question becomes, 'OK, what stimulated them to do it?'"

Read more at Science Daily

How poxviruses multiply

The last case of smallpox worldwide occurred in Somalia in October 1977. In 1980, the World Health Organization (WHO) declared the eradication of smallpox. According to official sources, the virus continues to exist today only in two high-security laboratories in Russia and the USA, where it is used for research purposes.

But although this means that poxviruses are no longer an immediate threat to humans, this virus family is still of great interest to scientists. On the one hand, modified strains are used in the treatment of cancer, and on the other hand, they possess highly intriguing multiplication properties.

Smallpox viruses build their own multiplication machine

While many viruses draw largely on the biochemical resources of the host cell for their multiplication, poxviruses encode their own molecular machinery in their genome for that purpose. The important components of this machinery are two enzymes: DNA polymerase to multiply the viral genes, and RNA polymerase to transcribe the viral genes into mRNA. The RNA polymerase of the vaccinia poxvirus strain, for example, is a large complex comprising 15 different protein subunits with different biochemical functions.

A team of researchers from the Biocenter of the Julius-Maximilian's University of Würzburg (JMU) has now for the first time been able to watch the polymerase of vaccinia viruses doing its work at the atomic level. Before that, the team had already reported on the three-dimensional structure of the RNA polymerase at atomic resolution. The group in charge of the work is led by Utz Fischer, who holds the JMU's Chair of the Department of Biochemistry I. The results of their work have now been presented in a publication in the journal Nature Structural & Molecular Biology.

Three-dimensional structures on an atomic scale

"We have mixed isolated RNA polymerase with a piece of DNA containing the promoter, i.e. the start signal for the transcription of viral genes. The enzyme recognized precisely this DNA element and started producing mRNA," explains Julia Bartuli, in charge of the biochemical work of the study. In a next step, the samples were examined in the cryo-electron microscope, in cooperation with Bettina Böttcher from the Department of Biochemistry II. On the basis of the data collected, the scientists were able to reconstruct the three-dimensional structure of the sample down to the atomic scale, using modern computerized methods.

They were enthusiastic about the final result of this lengthy process: "One single sample we examined in the microscope allowed us to reconstruct a total of six different polymerase complexes, which we could finally allocate to individual phases of the transcription process," says Clemens Grimm, in charge of structural analysis in Fischer's department. "We can string the individual pictures together as in a movie and thus represent the early transcription phase with time resolution."

Smallpox continues to be a threat to humans


But why bother to do research on poxviruses if the virus that is so dangerous to humans is eradicated already? There are good reasons for this, replies Professor Fischer: "There is still no reliable cure for a smallpox infection, it can only be prevented by a vaccination. If the still existing virus samples were to be spread again, for example by a terrorist attack, they would hit a population that has no immunization."

Another threat, which may be more real, is zoonotic diseases caused by animal-specific viruses jumping to humans, explains biochemist Utz Fischer. For example, there are sporadic infections of humans by monkeypox, which can make the infected persons severely ill. "If such a zoonotic disease picks up speed, by further adaptation to its human host and human-to-human transmission, a dangerous epidemic could emerge," he says.

Using computers to develop new drugs

Inhibitors of viral gene expression would therefore be highly relevant as antiviral drugs. Understanding the atomic structures of the RNA polymerase in its different states now allows researchers to take a rational, structure-based computational approach to developing such inhibitors. Such studies, which are fundamentally different in method from the classic experimental procedure, are already well underway.

About Smallpox Viruses

People born before 1976 -- in Germany at any rate -- bear on their upper arm a visible scar from their smallpox vaccination. Up to that date vaccination was mandatory in Germany. This vaccination is among the most prominent successes of modern infection protection. It resulted in the eradication of the deadly smallpox pathogen. This pathogen, scientifically known as variola virus, had been the cause of smallpox epidemics that scourged humankind periodically until well into the 20th century, and took the lives of millions of people.

Early forms of an inoculation of sorts have been known since antiquity, when people introduced the scab of a healed smallpox blister into a small wound, hoping to thus prevent a severe illness. This procedure, called "variolation," was performed in the 18th century in Europe, among other places at the Juliusspital in Würzburg. The breakthrough in the fight against smallpox was achieved in 1796 by the British physician Edward Jenner, who substituted the harmless horsepox or cowpox pathogen for the much more dangerous smallpox virus.

Read more at Science Daily

Sep 22, 2021

Unveiling galaxies at cosmic dawn that were hiding behind the dust

When astronomers peer deep into the night sky, they observe what the Universe looked like a long time ago. Because the speed of light is finite, studying the most distant observable galaxies allows us to glimpse billions of years into the past when the Universe was very young and galaxies had just started to form stars. Studying this "early Universe" is one of the last frontiers in astronomy and is essential for constructing accurate and consistent astrophysics models. A key goal of scientists is to identify all the galaxies in the first billion years of cosmic history and to measure the rate at which galaxies were growing by forming new stars.

Various efforts have been made over the past decades to observe distant galaxies, which are characterized by electromagnetic emissions that become strongly redshifted (shifted towards longer wavelengths) before reaching the Earth. So far, our knowledge of early galaxies has mostly relied on observations with the Hubble Space Telescope (HST) and large ground-based telescopes, which probe their ultra-violet (UV) emission. However, recently, astronomers have started to use the unique capability of the Atacama Large Millimeter/submillimeter Array (ALMA) telescope to study distant galaxies at submillimeter wavelengths. This could be particularly useful for studying dusty galaxies missed in the HST surveys due to the dust absorbing UV emission. Since ALMA observes in submillimeter wavelengths, it can detect these galaxies by observing the dust emissions instead.

In an ongoing large program called REBELS (Reionization-Era Bright Emission Line Survey), astronomers are using ALMA to observe the emissions of 40 target galaxies at cosmic dawn. Using this dataset, they have recently discovered that the regions around some of these galaxies contain more than meets the eye.

While analyzing the observed data for two REBELS galaxies, Dr. Yoshinobu Fudamoto of the Research Institute for Science and Engineering at Waseda University, Japan, and the National Astronomical Observatory of Japan (NAOJ), noticed strong emission by dust and singly ionized carbon in positions substantially offset from the initial targets. To his surprise, even highly sensitive equipment like the HST couldn't detect any UV emission from these locations. To understand these mysterious signals, Fudamoto and his colleagues investigated matters further.

In their latest paper published in Nature, they presented a thorough analysis, revealing that these unexpected emissions came from two previously unknown galaxies located near the two original REBELS targets. These galaxies are not visible in the UV or visible wavelengths as they are almost completely obscured by cosmic dust. One of them represents the most distant dust-obscured galaxy discovered so far.

What is most surprising about this serendipitous finding is that the newly discovered galaxies, which formed more than 13 billion years ago, are not strange at all when compared with typical galaxies at the same epoch. "These new galaxies were missed not because they are extremely rare, but only because they are completely dust-obscured," explains Fudamoto. However, it is uncommon to find such "dusty" galaxies in the early period of the Universe (less than 1 billion years after the Big Bang), suggesting that the current census of early galaxy formation is most likely incomplete, and would call for deeper, blind surveys. "It is possible that we have been missing up to one out of every five galaxies in the early Universe so far," Fudamoto adds.

The researchers expect that the unprecedented capability of the James Webb Space Telescope (JWST) and its strong synergy with ALMA will lead to significant advances in this field in the coming years. "Completing our census of early galaxies with the currently missing dust-obscured galaxies, like the ones we found this time, will be one of the main objectives of JWST and ALMA surveys in the near future," states Pascal Oesch from the University of Geneva.

Read more at Science Daily

Roman-era mixers and millstones made with geology in mind

A study on stone tools from an outpost of the Roman Empire has found that for ancient bakers and millers, having the right tools was a matter of geology.

A team of geoscientists and archaeologists made the discovery by analyzing samples of the tools at a University of Texas at Austin geology lab, finding that dough mixing vats and millstones from Roman-era ruins of Volubilis, a city in Morocco, were made from specific rock types that probably improved each tool's function.

Furthermore, the researchers determined that the stones were sourced locally, a discovery that challenges a theory that some millstones had been imported from afar. It also means that the craftspeople who made the tools may have received input directly from the workers who used them.

"It is interesting because it is a very local source and seemingly from one source," said Jared Benton, a study co-author and an assistant professor at Old Dominion University who studies trade between Roman-era workshops. "One wonders if there's not a group of bakers that are coming together and saying let's buy our stuff from this one quarry, or maybe there's just one guy who [sells the stones], and that's it."

The results were published in the Journal of Archaeological Science: Reports.

Derek Weller, a postdoctoral researcher at the University of Tokyo's Earthquake Research Institute, led the study. Additional co-authors include Omero "Phil" Orlandini, research associate and manager of the Electron Microbeam Laboratory at the UT Jackson School of Geosciences; Lauren LoBue and Scott Culotta, both undergraduates at the Jackson School; and Christy Schirmer, a graduate student in UT's Department of Classics.

The study got its start in early 2020, when Schirmer showed up at Orlandini's lab with a box of rocks. They were pieces of the stone tools that she and Benton had collected in Volubilis -- and they were curious about where learning more about the tools' geological makeup could lead them.

"They sort of look the same when they're in tool form, but as soon as we started looking, it was clear that they were completely different," Orlandini said.

Orlandini got LoBue and Culotta on the case. The undergraduates put all 16 samples through a detailed scientific workup to determine their composition at the geochemical level.

Their research revealed a rock type for each tool type. Grain millstones were made from vesicular basalts (a volcanic stone full of sharp-edged pores); olive mills were made from clastic, fossiliferous limestone (a limestone containing fragments of other rocks and small fossil shells); and dough mixers were made from limestone with no clastic material or fossils.

The study notes how the rocks' attributes relate to each tool's function. For example, the pores in the basalt may have provided fresh cutting edges that helped grind wheat into flour as the stone wore down.

Weller also used the geochemical data to determine that all the stones came from sources near Volubilis. Limestone is plentiful in the region, and two limestone quarries were already known to be active during the Roman era near Volubilis. But archaeologists previously thought the basalt -- which Weller found came from the nearby Middle Atlas Mountains -- was imported from Italy.

In addition, the research found that each rock type came from a single location rather than being sourced from different places around Volubilis. Benton said this suggests that a single supplier for each stone type might have been meeting all demand in the city and getting input from local people.

Elizabeth Fentress, an archaeologist specializing in Roman settlements in North Africa, said that the study is a great example of collaborative research.

Read more at Science Daily

Is your child a fussy eater?

Whether it's an exclusive appetite for 'white' foods or an all-out refusal on veggies, when you have a fussy eater on your hands, mealtime can be more than a challenge.

While picky eating is part of the norm for developing toddlers, when it extends into the school years it takes a toll on all involved, children and parents alike.

Now, new research from USC, the University of South Australia, and the University of Queensland is providing a better understanding of what influences fussy eaters, and what is more likely to increase or decrease picky eating in children under 10.

Reviewing 80 health industry studies, the research found that a range of factors contributed to a child's likelihood of being a fussy eater.

The study found that pressuring a child to eat, offering rewards for eating, and very strict parenting all negatively influenced fussy eaters. Conversely, a more relaxed parenting style, eating together as a family, and involving a child in the preparation of food all reduced the likelihood of fussy eating.

Lead researcher and USC PhD student Laine Chilman says the researchers hope the findings will help parents and carers better understand fussy eating in children.

"For parents with a fussy eater, mealtimes can be especially stressful -- juggling the family meal and a picky eater is no small feat," Chilman says.

"Some families have kids who turn their noses up at any vegetable. Others are dealing with kids who dislike certain textures or colours of food.

"Some of these preferences relate to a child's characteristics or personality, which are difficult to change, if at all. But others are external factors that could help reduce fussy eating in kids.

"Eating together as a family, with siblings, and having a single meal at a regular time all helped reduce food fussiness. As did getting the fussy child involved in the meal, either by helping to choose the menu, or helping to prepare the meal.

"Yet if fussy eaters were allowed to eat in front of the TV, or if they were rewarded for eating certain foods, these behaviours negatively influenced picky children."

According to the Australian Nutrition and Physical Activity Survey, most children do not meet recommended diet and nutrition guidelines.

UniSA researcher Dr Ann Kennedy-Behr says stress can contribute to fussy eating.

"When you have a child who is a picky eater, it's very stressful for a parent or carer -- they're forever questioning whether their child is getting enough nutrients, enough food, and often enough weight gain," Dr Kennedy-Behr says.

"Yet it's important to understand that being overtly anxious or worried can actually contribute to increased picky eating.

"Avoiding getting cross and limiting any negativity around mealtime will be benefit everyone.

Read more at Science Daily

New research 'sniffs out' how associative memories are formed

Has the scent of freshly baked chocolate chip cookies ever taken you back to afternoons at your grandmother's house? Has an old song ever brought back memories of a first date? The ability to remember relationships between unrelated items (an odor and a location, a song and an event) is known as associative memory.

Psychologists began studying associative memory in the 1800s, with William James describing the phenomenon in his 1890 classic The Principles of Psychology. Scientists today agree that the structures responsible for the formation of associative memory are found in the medial temporal lobe, or the famous "memory center" of the brain, but the particular cells involved, and how those cells are controlled, have remained a mystery until now.

Neuroscientists at the University of California, Irvine have discovered specific types of neurons within the memory center of the brain that are responsible for acquiring new associative memories. Additionally, they have discovered how these associative memory neurons are controlled. We rely on associative memories in our everyday lives and this research is an important step in understanding the detailed mechanism of how these types of memories are formed in the brain.

"Although associative memory is one of the most basic forms of memory in our everyday life, mechanisms underlying associative memory remain unclear" said lead researcher Kei Igarashi, faculty fellow of the Center for the Neurobiology of Learning and Memory and assistant professor of anatomy & neurobiology at the UCI School of Medicine.

The study, published today in the journal Nature, reports for the first time that specific cells in the lateral entorhinal cortex of the medial temporal lobe, called fan cells, are required for the acquisition of new associative memories, and that these cells are controlled by dopamine, a brain chemical known to be involved in our experience of pleasure or reward.

In the study, researchers used electrophysiological recordings and optogenetics to record and control activity from fan cells in mice as the animals learned to associate specific odors with rewards. This approach led the researchers to discover that fan cells compute and represent the association of the two unrelated items (odor and reward). These fan cells are required for successful acquisition of new associative memories: without them, pre-learned associations can still be retrieved, but new associations cannot be acquired. Acquiring new associations also requires dopamine.

"We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved," said Igarashi. "These experiments were like a detective story for us, and we are excited about the results."

Read more at Science Daily

Sep 21, 2021

Mars habitability limited by its small size, isotope study suggests

Water is essential for life on Earth and other planets, and scientists have found ample evidence of water in Mars' early history. But Mars has no liquid water on its surface today. New research from Washington University in St. Louis suggests a fundamental reason: Mars may be just too small to hold onto large amounts of water.

Remote sensing studies and analyses of Martian meteorites dating back to the 1980s posit that Mars was once water-rich, compared with Earth. NASA's Viking orbiter spacecraft -- and, more recently, the Curiosity and Perseverance rovers on the ground -- returned dramatic images of Martian landscapes marked by river valleys and flood channels.

Despite this evidence, no liquid water remains on the surface. Researchers proposed many possible explanations, including a weakening of Mars' magnetic field that could have resulted in the loss of a thick atmosphere.

But a study published the week of Sept. 20 in the Proceedings of the National Academy of Sciences suggests a more fundamental reason why today's Mars looks so drastically different from the "blue marble" of Earth.

"Mars' fate was decided from the beginning," said Kun Wang, assistant professor of earth and planetary sciences in Arts & Sciences at Washington University, senior author of the study. "There is likely a threshold on the size requirements of rocky planets to retain enough water to enable habitability and plate tectonics, with mass exceeding that of Mars."

For the new study, Wang and his collaborators used stable isotopes of the element potassium (K) to estimate the presence, distribution and abundance of volatile elements on different planetary bodies.

Potassium is a moderately volatile element, but the scientists decided to use it as a kind of tracer for more volatile elements and compounds, such as water. This is a relatively new method that diverges from previous attempts to use potassium-to-thorium (Th) ratios gathered by remote sensing and chemical analysis to determine the amount of volatiles Mars once had. In previous research, members of the research group used a potassium tracer method to study the formation of the moon.

Wang and his team measured the potassium isotope compositions of 20 previously confirmed Martian meteorites, selected to be representative of the bulk silicate composition of the red planet.

Using this approach, the researchers determined that Mars lost more potassium and other volatiles than Earth during its formation, but retained more of these volatiles than the moon and asteroid 4-Vesta, two much smaller and drier bodies than Earth and Mars.

The researchers found a well-defined correlation between body size and potassium isotopic composition.
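
To illustrate the kind of relationship the team reports, the minimal Python sketch below computes a correlation between body size (using the logarithm of mass as a proxy) and potassium isotopic composition. The delta-41K values are illustrative placeholders rather than the study's measurements, so the sketch only shows the shape of the analysis, not its actual numbers.

import math

# Illustrative sketch only: the delta-41K values are placeholders, not the
# study's measurements; log10(mass) stands in for body size.
bodies = {
    "4 Vesta": (2.6e20, 0.6),   # (mass in kg, illustrative delta-41K in per mil)
    "Moon":    (7.3e22, 0.4),
    "Mars":    (6.4e23, -0.3),
    "Earth":   (6.0e24, -0.5),
}

sizes = [math.log10(mass) for mass, _ in bodies.values()]
isotopes = [d41k for _, d41k in bodies.values()]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strongly negative value mirrors the reported trend: larger bodies retain
# more volatiles and show lighter potassium isotope compositions.
print(f"correlation(log mass, delta-41K) = {pearson(sizes, isotopes):.2f}")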

"The reason for far lower abundances of volatile elements and their compounds in differentiated planets than in primitive undifferentiated meteorites has been a longstanding question," said Katharina Lodders, research professor of earth and planetary sciences at Washington University, a coauthor of the study. "The finding of the correlation of K isotopic compositions with planet gravity is a novel discovery with important quantitative implications for when and how the differentiated planets received and lost their volatiles."

"Martian meteorites are the only samples available to us to study the chemical makeup of the bulk Mars," Wang said. "Those Martian meteorites have ages varying from several hundred millions to 4 billion years and recorded Mars' volatile evolution history. Through measuring the isotopes of moderately volatile elements, such as potassium, we can infer the degree of volatile depletion of bulk planets and make comparisons between different solar system bodies.

"It's indisputable that there used to be liquid water on the surface of Mars, but how much water in total Mars once had is hard to quantify through remote sensing and rover studies alone," Wang said. "There are many models out there for the bulk water content of Mars. In some of them, early Mars was even wetter than the Earth. We don't believe that was the case."

Zhen Tian, a graduate student in Wang's laboratory and a McDonnell International Academy Scholar, is first author of the paper. Postdoctoral research associate Piers Koefoed is a co-author, as is Hannah Bloom, who graduated from Washington University in 2020. Wang and Lodders are faculty fellows of the university's McDonnell Center for the Space Sciences.

The findings have implications for the search for life on other planets besides Mars, the researchers noted.

Being too close to the sun (or, for exoplanets, being too close to their star) can affect the amount of volatiles that a planetary body can retain. This distance-from-star measurement is often factored into indexes of "habitable zones" around stars.

"This study emphasizes that there is a very limited size range for planets to have just enough but not too much water to develop a habitable surface environment," said Klaus Mezger of the Center for Space and Habitability at the University of Bern, Switzerland, a co-author of the study. "These results will guide astronomers in their search for habitable exoplanets in other solar systems."

Wang now thinks that, for planets that are within habitable zones, planetary size probably should be more emphasized and routinely considered when thinking about whether an exoplanet could support life.

Read more at Science Daily

New discovery about meteorites informs atmospheric entry threat assessment

Researchers at the University of Illinois Urbana-Champaign watched fragments of two meteorites as they heated them from room temperature to the temperatures reached during entry into Earth's atmosphere, and made a significant discovery: the iron sulfide inside vaporizes, leaving behind voids that make the material more porous. This information will help in predicting a meteorite's weight, its likelihood of breaking apart, and the damage it could cause if it were to land.

"We extracted samples from the interiors that had not already been exposed to the high heat of the entry environment," said Francesco Panerai, professor in the Department of Aerospace Engineering at UIUC. "We wanted to understand how the microstructure of a meteorite changes as it travels through the atmosphere."

Panerai and collaborators at NASA Ames Research Center used an X-ray microtomography technique that allowed them to observe the samples in place as they were heated up to 2,200 degrees Fahrenheit and create images in three dimensions. The experiments were performed using the synchrotron Advanced Light Source at Lawrence Berkeley National Laboratory.

"The iron sulfide inside the meteorite vaporized as it heated. Some of the grains actually disappeared leaving large voids in the material," Panerai said. "We were surprised by this observation. The ability to look at the interior of the meteorite in 3D, while being heated, led us to discover a progressive increase of material porosity with heating. After that, we took cross sections of the material and looked at the chemical composition to understand the phase that had been modified by the heating, changing its porosity.

"This discovery provides evidence that meteorite materials become porous and permeable, which we speculate will have an effect on its strength and propensity for fragmentation."

NASA selected Tamdakht, a meteorite that landed in a Moroccan desert a few years ago, as a case study. But the team of researchers wanted to corroborate what they'd seen, so they repeated the experiments on Tenham to see if a meteorite with a different composition would behave in the same way. Both specimens belong to the same class of meteorite, called chondrites, the most common type among meteorite finds; they contain iron and nickel, which are high-density elements.

"Both became porous, but the porosity that develops depends upon the content of the sulfides," Panerai said. "One of the two had higher iron sulfides, which is what evaporates. We found that the vaporizing of iron sulfides happens at mild entry temperatures. This is something that would happen, not at the external fusion crust of the meteorite where the temperature is a lot higher, but just underneath the surface."

The study was motivated by the potential threat meteorites pose to humans -- the clearest example being the Chelyabinsk meteor that exploded in Earth's atmosphere over Russia in 2013 and injured about 1,500 people through indirect effects such as broken glass from the shock wave. After that incident, NASA created the Asteroid Threat Assessment Program to provide scientific tools that can help decision makers understand potential meteorite threats to the population.

"Most of the cosmic material burns away as it enters. The atmosphere protects us," Panerai said. "But there are significant sized meteorites that can be harmful. For these larger objects that have a non-zero probability of hitting us, we need to have tools to predict what damage they would do if they would hit Earth. Based on these tools, we can predict how it enters the atmosphere, its size, how it behaves as it goes through the atmosphere, etc. so decision makers can take counter measures."

Panerai said the Asteroid Threat Assessment Program is currently developing models to show how meteorites behave and models require a lot of data. "We used machine learning for the data analysis because the amount of data to analyze is huge and we need efficient techniques.

Read more at Science Daily

Elephants benefit from having older siblings, especially sisters

A study of semi-captive Asian elephants in Myanmar has found that calves benefit from having older sisters more than older brothers. The findings are published in the British Ecological Society's Journal of Animal Ecology.

Researchers at universities in Finland, the UK and Myanmar have found that Asian elephant siblings influence younger offspring from early through to late-life. Being raised with older siblings strongly increased calves' long-term survival compared to not having a sibling, with elder sisters having a bigger impact than elder brothers.

In female elephants, those raised with older sisters had higher long-term survival and reproduced for the first time an average of two years earlier, compared to those with older brothers. Reproducing at an earlier age is generally associated with more offspring over the course of an elephant's lifetime.

In male elephants, those raised with older sisters had lower survival but higher body weight, compared to those with older brothers. This seemingly detrimental effect may be explained by a 'live-fast, die young' strategy, where the positive early increase in body mass could lead to survival costs later in life.

Dr Vérane Berger at the University of Turku and lead author of the study said: "Our research confirms that sibling relationships shape individual lives, particularly in social species, such as the elephants, where cooperative behaviours are essential to the development, survival and reproductive potential of individuals."

The long-term consequences of sibling effects are understudied in long-lived animals. One reason is that the logistical challenges of field studies make it hard to investigate effects over an animal's entire lifespan.

In this study, the researchers were able to overcome this barrier by studying a population of government-owned, semi-captive timber elephants in Myanmar, for which extensive life history records are kept.

These elephants are used during the day as riding, transport and draft animals. At night the elephants live unsupervised in forests and can interact and mate with both wild and tame elephants. Calves are raised by their mothers until the age of five when they are trained for work. The Myanmar Timber Enterprise (MTE) imposes regulations on the daily and annual workload of elephants.

Dr Mirkka Lahdenperä at the University of Turku and co-author of the study said: "Because the elephants live in their natural habitats, there are many similarities to wild elephants, such as natural foraging and no assistance in breeding. While there are differences -- in the wild, family groups are probably bigger -- there are more similarities than differences, and we could assume that some of the associations found in our study would also hold true for wild elephants. But of course, these should be studied."

The researchers used a large, multi-generational dataset of semi-captive Asian elephants to look at the influence of the presence and sex of elder siblings on the body mass, reproduction, sex and survival of the next calf. The records contained precise reproductive and longevity information for 2,344 calves born between 1945 and 2018.

As the study was correlational, the influence of external factors outside sibling effects, such as the quality of maternal care and elephants' workload and management, cannot be excluded.

On the next steps for this research project, Dr Berger said: "By collecting more information on the body mass of mothers at birth, we hope to disentangle maternal effects from sibling effects.

"More data will also let us explore the effects of the environment on sibling relationships and go into more detail on the effects siblings have on specific aspects of a younger calf's health, such as immunity, muscular function and hormonal variations.

Read more at Science Daily

Meeting sleep recommendations could lead to smarter snacking

Missing out on the recommended seven or more hours of sleep per night could lead to more opportunities to make poorer snacking choices than those made by people who meet shut-eye guidelines, a new study suggests.

The analysis of data on almost 20,000 American adults showed a link between not meeting sleep recommendations and eating more snack-related carbohydrates, added sugar, fats and caffeine.

It turns out that the favored non-meal food categories -- salty snacks and sweets and non-alcoholic drinks -- are the same among adults regardless of sleep habits, but those getting less sleep tend to eat more snack calories in a day overall.

The research also revealed what appears to be a popular American habit not influenced by how much we sleep: snacking at night.

"At night, we're drinking our calories and eating a lot of convenience foods," said Christopher Taylor, professor of medical dietetics in the School of Health and Rehabilitation Sciences at The Ohio State University and senior author of the study.

"Not only are we not sleeping when we stay up late, but we're doing all these obesity-related behaviors: lack of physical activity, increased screen time, food choices that we're consuming as snacks and not as meals. So it creates this bigger impact of meeting or not meeting sleep recommendations."

The American Academy of Sleep Medicine and Sleep Research Society recommend that adults sleep seven hours or longer per night on a regular basis to promote optimal health. Getting less sleep than recommended is associated with higher risk for a number of health problems, including weight gain and obesity, diabetes, high blood pressure and heart disease.

"We know lack of sleep is linked to obesity from a broader scale, but it's all these little behaviors that are anchored around how that happens," Taylor said.

The study abstract is published in the Journal of the Academy of Nutrition and Dietetics and the research will be presented in a poster session on Oct. 18 at the 2021 Food & Nutrition Conference & Expo.

Researchers analyzed data from 19,650 U.S. adults between the ages of 20 and 60 who had participated from 2007 to 2018 in the National Health and Nutrition Examination Survey.

The survey collects 24-hour dietary recalls from each participant -- detailing not just what, but when, all food was consumed -- and questions people about their average amount of nightly sleep during the work week.

The Ohio State team divided participants into those who either did or didn't meet sleep recommendations based on whether they reported sleeping seven or more hours or fewer than seven hours each night. Using U.S. Department of Agriculture databases, the researchers estimated participants' snack-related nutrient intake and categorized all snacks into food groups. Three snacking time frames were established for the analysis: 2:00-11:59 a.m. for morning, noon-5:59 p.m. for afternoon, and 6 p.m.-1:59 a.m. for evening.
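
A minimal Python sketch of that grouping logic is below. The participant records and field layout are hypothetical illustrations, not the NHANES data or the team's actual analysis code, but the seven-hour cut-off and the three snacking windows follow the description above.

# Hypothetical sketch of the grouping described above; records are invented.

def meets_sleep_recommendation(hours_per_night):
    # Seven or more hours of reported sleep counts as meeting the recommendation.
    return hours_per_night >= 7.0

def snack_period(hour_of_day):
    # Time frames from the study: 2:00-11:59 a.m. morning,
    # noon-5:59 p.m. afternoon, 6:00 p.m.-1:59 a.m. evening.
    if 2 <= hour_of_day < 12:
        return "morning"
    if 12 <= hour_of_day < 18:
        return "afternoon"
    return "evening"

# (reported sleep hours, hours of day at which snacks were eaten)
participants = [
    (6.0, [9, 22, 23]),
    (7.5, [15]),
]

for sleep_hours, snack_hours in participants:
    group = "meets recommendation" if meets_sleep_recommendation(sleep_hours) else "short sleeper"
    print(group, "->", [snack_period(h) for h in snack_hours])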

Statistical analysis showed that almost everyone -- 95.5% -- ate at least one snack a day, and over 50% of snacking calories among all participants came from two broad categories that included soda and energy drinks and chips, pretzels, cookies and pastries.

Compared to participants who slept seven or more hours a night, those who did not meet sleep recommendations were more likely to eat a morning snack and less likely to eat an afternoon snack, and ate higher quantities of snacks with more calories and less nutritional value.

Though there are lots of physiological factors at play in sleep's relationship to health, Taylor said changing behavior by avoiding the nightly nosh in particular could help adults not only meet the sleep guidelines, but also improve their diet.

"Meeting sleep recommendations helps us meet that specific need for sleep related to our health, but is also tied to not doing the things that can harm health," said Taylor, a registered dietitian. "The longer we're awake, the more opportunities we have to eat. And at night, those calories are coming from snacks and sweets. Every time we make those decisions, we're introducing calories and items related to increased risk for chronic disease, and we're not getting whole grains, fruits and vegetables.

Read more at Science Daily

Darwin’s short-beak enigma solved

Charles Darwin was obsessed with domestic pigeons. He thought they held the secrets of selection in their beaks. Free from the bonds of natural selection, the 350-plus breeds of domestic pigeons have beaks of all shapes and sizes within a single species (Columba livia). The most striking are beaks so short that they sometimes prevent parents from feeding their own young. Centuries of interbreeding taught early pigeon fanciers that beak length was likely regulated by just a few heritable factors. Yet modern geneticists have failed to solve Darwin's mystery by pinpointing the molecular machinery controlling short beaks -- until now.

In a new study, biologists from the University of Utah discovered that a mutation in the ROR2 gene is linked to beak size reduction in numerous breeds of domestic pigeons. Surprisingly, mutations in ROR2 also underlie a human disorder called Robinow syndrome.

"Some of the most striking characteristics of Robinow syndrome are the facial features, which include a broad, prominent forehead and a short, wide nose and mouth, and are reminiscent of the short-beak phenotype in pigeons," said Elena Boer, lead author of the paper who completed the research as a postdoctoral fellow at the U and is now a clinical variant scientist at ARUP Laboratories. "It makes sense from a developmental standpoint, because we know that the ROR2 signaling pathway plays an important role in vertebrate craniofacial development."

The paper published in the journal Current Biology on Sept. 21, 2021.

Mapping genes and skulls

The researchers bred two pigeons with short and medium beaks -- the medium-beaked male was a Racing Homer, a bird bred for speed with a beak length similar to the ancestral rock pigeon. The small-beaked female was an Old German Owl, a fancy pigeon breed that has a little, squat beak.

"Breeders selected this beak purely for aesthetics to the point that it's detrimental -- it would never appear in nature. So, domestic pigeons are a huge advantage for finding genes responsible for size differences," said Michael Shapiro, the James E. Talmage Presidential Endowed Chair in Biology at the U and senior author of the paper. "One of Darwin's big arguments was that natural selection and artificial selection are variations of the same process. Pigeon beak sizes were instrumental in figuring out how that works."

The short- and medium-beaked parents produced an initial F1 brood of children with intermediate-length beaks. When the biologists mated the F1 birds to one another, the resulting F2 grandchildren had beaks ranging from big to little, and all sizes in between. To quantify the variation, Boer measured beak size and shape in the 145 F2 individuals using micro-CT scans generated at the University of Utah Preclinical Imaging Core Facility.

"The cool thing about this method is that it allows us to look at size and shape of the entire skull, and it turns out that it's not just beak length that differs -- the braincase changes shape at the same time," Boer said. "These analyses demonstrated that beak variation within the F2 population was due to actual differences in beak length and not variation in overall skull or body size."

Next, the researchers compared the pigeons' genomes. First, using a technique called quantitative trait loci (QTL) mapping, they identified DNA sequence variants scattered throughout the genome, and then looked to see if those mutations appeared in the F2 grandkids' chromosomes.

"The grandkids with small beaks had the same piece of chromosome as their grandparent with the small beak, which told us that piece of chromosome has something to do with small beaks," said Shapiro. "And it was on the sex chromosome, which classical genetic experiments had suggested, so we got excited."

The team then compared the entire genome sequences of many different pigeon breeds: 56 pigeons from 31 short-beaked breeds and 121 pigeons from 58 medium- or long-beaked breeds. The analysis showed that all individuals with small beaks had the same DNA sequence in an area of the genome that contains the ROR2 gene.
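
Conceptually, that comparison asks whether every short-beaked bird carries the same sequence across the candidate region while beak length varies in the other group. The toy Python sketch below illustrates that check with invented genotype strings; it is not the study's sequence data or analysis pipeline.

# Toy check: do all short-beaked birds share one haplotype at the candidate
# region (the stretch of genome containing ROR2)? Sequences are invented.
short_beaked = {"bird_A": "ACGT", "bird_B": "ACGT", "bird_C": "ACGT"}
long_beaked = {"bird_X": "ACGA", "bird_Y": "ATGT", "bird_Z": "ACGA"}

short_haplotypes = set(short_beaked.values())

print("Short-beaked birds share a single haplotype:", len(short_haplotypes) == 1)
print("That haplotype also occurs in longer-beaked birds:",
      bool(short_haplotypes & set(long_beaked.values())))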

"The fact that we got the same strong signal from two independent approaches was really exciting and provided an additional level of evidence that the ROR2 locus is involved," said Boer.

The authors speculate that the short-beak mutation causes the ROR2 protein to fold in a new way, but the team plans to do functional experiments to figure out how the mutation impacts craniofacial development.

Pigeon enthusiasts

The lure of the domestic pigeon that mesmerized Darwin is still captivating the curious to this day. Many of the blood samples that the research team used for genome sequencing were donated from members of the Utah Pigeon Club and National Pigeon Association, groups of pigeon enthusiasts who continue to breed pigeons and participate in competitions to show off the striking variation among breeds.

Read more at Science Daily

Sep 20, 2021

Astrophysicists solve 'empty sky' gamma-ray mystery

Star-forming galaxies are responsible for creating gamma-rays that until now had not been associated with a known origin, researchers from The Australian National University (ANU) have confirmed.

Lead author Dr Matt Roth, from the ANU Research School of Astronomy and Astrophysics, said until now it has been unclear what created gamma-rays -- one of the most energetic forms of light in the Universe -- that appear in patches of seemingly 'empty sky'.

The discovery could offer clues to help astronomers solve other mysteries of the Universe, such as what kind of particles make up Dark Matter -- one of the holy grails of astrophysics.

"It's a significant milestone to finally discover the origins of this gamma-ray emission, solving a mystery of the Universe astronomers have been trying to decipher since the 1960s," Dr Roth said.

"There are two obvious sources that produce large amounts of gamma-rays seen in the Universe. One when gas falls into the supermassive black holes which are found at the centres of all galaxies -- called an active galactic nucleus (AGN) -- and the other associated with star formation in the disks of galaxies.

"We modelled the gamma-ray emission from all the galaxies in the Universe and compared our results with the predictions for other sources and found that it is star-forming galaxies that produce the majority of this diffuse gamma-ray radiation and not the AGN process."

ANU researchers were able to pinpoint what created these mysterious gamma-rays after obtaining a better understanding of how cosmic rays -- particles that travel at speeds very close to the speed of light -- move through the gas between the stars. Cosmic rays are important because they create large amounts of gamma-ray emission in star-forming galaxies when they collide with the interstellar gas.

Data from NASA's Hubble Space Telescope and Fermi Gamma-Ray Space Telescope was a key resource used to uncover the unknown origins of the gamma-rays. Researchers analysed information about many galaxies such as their star-formation rates, total masses, physical size and distances from Earth.

"Our model can also be used to make predictions for radio emission -- the electromagnetic radiation that has a frequency similar to a car radio -- from star-forming galaxies, which could help researchers understand more about the internal structure of galaxies," Dr Roth said.

"We are currently looking at producing maps of the gamma-ray sky that can be used to inform upcoming gamma-ray observations from next-generation telescopes. This includes the Cherenkov Telescope Array, which Australia is involved in.

Read more at Science Daily

Extreme volcanism did not cause the massive extinction of species in the late Cretaceous

A study published in the journal Geology rules out the possibility that extreme volcanic episodes had any influence on the massive extinction of species in the late Cretaceous. The results confirm the hypothesis that it was a giant meteorite impact that caused the great biological crisis that wiped out the non-avian dinosaur lineages and other marine and terrestrial organisms 66 million years ago.

The study was carried out by the researcher Sietske Batenburg, from the Faculty of Earth Sciences of the University of Barcelona, and the experts Vicente Gilabert, Ignacio Arenillas and José Antonio Arz, from the University Research Institute on Environmental Sciences of Aragon (IUCA-University of Zaragoza).

K/Pg boundary: the great Cretaceous extinction on the Zumaia coast

The setting of this study was the Zumaia cliffs (Basque Country), which expose an exceptional section of strata revealing the geological history of the Earth between 115 and 50 million years ago (Ma). There, the team analyzed microfossil-rich sediments and rocks deposited between 66.4 and 65.4 Ma, a time interval that includes the Cretaceous/Paleogene (K/Pg) boundary. Dated to 66 Ma, the K/Pg boundary divides the Mesozoic and Cenozoic eras and coincides with one of the five great extinctions in the planet's history.

This study analysed the climate changes that occurred just before and after the massive extinction marked by the K/Pg boundary, as well as their potential relation to this large biological crisis. For the first time, researchers examined whether these climate changes coincide in time with their potential causes: the massive Deccan volcanism (India) -- one of the most violent volcanic episodes in the geological history of the planet -- and the orbital variations of the Earth.

"The particularity of the Zumaia outcrops lies in that two types of sediments accumulated there ─some richer in clay and others richer in carbonate─ that we can now identify as strata or marl and limestone that alternate with each other to form rhythms," notes the researcher Sietske Batenburg, from the Department of Earth and Ocean Dynamics of the UB. "This strong rhythmicity in sedimentation is related to cyclical variations in the orientation and inclination of the Earth axis in the rotation movement, as well as in the translational movement around the Sun."

These astronomical configurations -- the well-known Milankovitch cycles, which repeat every 405,000, 100,000, 41,000 and 21,000 years -- regulate the amount of solar radiation the Earth receives, modulate the global temperature of our planet and condition the type of sediment that reaches the oceans. "Thanks to these periodicities identified in the Zumaia sediments, we have been able to determine the most precise dating of the climatic episodes that took place around the time when the last dinosaurs lived," says PhD student Vicente Gilabert, from the Department of Earth Sciences at UZ, who will present his thesis defence by the end of this year.
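
As a rough illustration of how those nested periodicities combine, the Python sketch below sums sinusoids with the quoted Milankovitch periods into a toy forcing curve. The amplitudes and phases are arbitrary assumptions, not a reconstruction of the actual insolation record; the point is only that superimposed cycles of this kind produce the rhythmic alternation recorded in the sediments.

import math

# Toy forcing curve built from the Milankovitch periods quoted above
# (405, 100, 41 and 21 thousand years). Amplitudes and phases are arbitrary.
PERIODS_KYR = [405.0, 100.0, 41.0, 21.0]

def toy_orbital_signal(t_kyr):
    return sum(math.sin(2.0 * math.pi * t_kyr / p) for p in PERIODS_KYR)

# Sample every 50,000 years across one 405-kyr eccentricity cycle; peaks and
# troughs in a curve like this are what alternating marl and limestone beds
# can record.
for t in range(0, 410, 50):
    print(t, round(toy_orbital_signal(t), 2))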

Planktonic foraminifera: revealing the climate of the past

Carbon-13 isotopic analysis of the rocks, in combination with the study of planktonic foraminifera -- microfossils used as high-precision biostratigraphic indicators -- has made it possible to reconstruct the paleoclimate and chronology of that time in the Zumaia sediments. More than 90% of the Cretaceous planktonic foraminiferal species from Zumaia became extinct 66 Ma ago, coinciding with a major disruption in the carbon cycle and an accumulation of impact glass spherules originating from the asteroid that struck Chicxulub, on the Yucatan Peninsula (Mexico).

In addition, the conclusions of the study reveal the existence of three intense climatic warming events -- known as hyperthermal events -- that are not related to the Chicxulub impact. The first, known as the LMWE and occurring prior to the K/Pg boundary, has been dated to between 66.25 and 66.10 Ma. The other two events, after the mass extinction, are called Dan-C2 (between 65.8 and 65.7 Ma) and LC29n (between 65.48 and 65.41 Ma).

In the last decade, there has been intense debate over whether the hyperthermal events mentioned above were caused by increased Deccan volcanic activity, which emitted large amounts of gases into the atmosphere. "Our results indicate that all these events are in sync with extreme orbital configurations of the Earth known as eccentricity maxima. Only the LMWE, which produced an estimated global warming of 2-5°C, appears to be temporally related to a Deccan eruptive episode, suggesting that it was caused by a combination of the effects of volcanism and the latest Cretaceous eccentricity maximum," the experts add.

Earth's orbital variations around the Sun

The global climate changes that occurred in the late Cretaceous and early Palaeogene -- between 250,000 years before and 200,000 years after the K/Pg boundary -- were due to eccentricity maxima of the Earth's orbit around the Sun.

However, the orbital eccentricity that influenced climate changes before and after the K/Pg boundary is not related to the late Cretaceous mass extinction of species. The climatic changes caused by the eccentricity maxima and augmented by the Deccan volcanism occurred gradually at a scale of hundreds of thousands of years.

Read more at Science Daily

Researchers call for a focus on fitness over weight loss for obesity-related health conditions

The prevalence of obesity around the world has tripled over the past 40 years, and, along with that rise, dieting and attempts to lose weight also have soared. But according to a review article publishing September 20 in the journal iScience, when it comes to getting healthy and reducing mortality risk, increasing physical activity and improving fitness appear to be superior to weight loss. The authors say that employing a weight-neutral approach to the treatment of obesity-related health conditions also reduces the health risks associated with yo-yo dieting.

"We would like people to know that fat can be fit, and that fit and healthy bodies come in all shapes and sizes," says co-author Glenn Gaesser of the College of Health Solutions at Arizona State University. "We realize that in a weight-obsessed culture, it may be challenging for programs that are not focused on weight loss to gain traction. We're not necessarily against weight loss; we just think that it shouldn't be the primary criterion for judging the success of a lifestyle intervention program."

"This is especially important when you consider the physiological realities of obesity," says co-author Siddhartha Angadi of the School of Education and Human Development at the University of Virginia. "Body weight is a highly heritable trait, and weight loss is associated with substantial metabolic alterations that ultimately thwart weight loss maintenance."

Obesity is associated with a number of health conditions, including cardiovascular disease, diabetes, cancer, and problems with the bones and joints. But weight cycling, commonly called yo-yo dieting, is also associated with health problems, including muscle loss, fatty liver disease, and diabetes. The authors say that by focusing on fitness rather than weight loss, people can gain the benefits of exercise while avoiding the risks associated with weight cycling.

Current public health guidelines recommend that adults accumulate 150-300 minutes per week of moderate-intensity physical activity (the intensity equivalent to walking at casual-to-brisk pace) or 75-150 minutes per week of vigorous-intensity physical activity (the intensity equivalent to jogging or running). "But it's important to note that the benefits of exercise are dose dependent, with the biggest benefits coming from just moving out of the couch-potato zone to doing at least some moderate-intensity activity," Gaesser says. "It's also important to emphasize that physical activity can be accumulated throughout the day. For example, multiple short walks during the day (even as short as two to ten minutes each) are just as beneficial as one long walk for health benefits."
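
For a concrete reading of those numbers, the small Python sketch below converts a mixed week of activity into moderate-equivalent minutes and checks it against the 150-minute floor. Counting one vigorous minute as two moderate-equivalent minutes is an assumption inferred from the quoted ranges, not a rule stated in the article.

# Sketch based on the guideline quoted above (150-300 min moderate or
# 75-150 min vigorous per week). The two-for-one weighting of vigorous
# minutes is an assumption inferred from those ranges.

def moderate_equivalent_minutes(moderate_min, vigorous_min):
    return moderate_min + 2.0 * vigorous_min

def meets_guideline(moderate_min, vigorous_min):
    return moderate_equivalent_minutes(moderate_min, vigorous_min) >= 150.0

# Example week: five 20-minute brisk walks plus two 15-minute jogs.
print(meets_guideline(moderate_min=100, vigorous_min=30))  # 160 equivalent minutes -> True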

In the review, the authors cite recent research focused on the magnitude of mortality risk reduction associated with weight loss compared to that associated with an increase in physical activity or cardiorespiratory fitness. The risk reduction associated with increasing fitness and physical activity was consistently greater than that associated with intentional weight loss. They also looked at the magnitude of reduction in the risk markers of cardiovascular disease that are associated with either weight loss or increased physical activity. They used meta-analyses from several studies done over a range of time periods and across a broad geographical area. "Science has generally supported the main points proposed in Big Fat Lies, a book on this topic that I first published in 1996," Gaesser notes.

Read more at Science Daily