Dec 9, 2023

Ancient stars made extraordinarily heavy elements

How heavy can an element be? An international team of researchers has found that ancient stars were capable of producing elements with atomic masses greater than 260, heavier than any element on the periodic table found naturally on Earth. The finding deepens our understanding of element formation in stars.

We are, literally, made of star stuff. Stars are element factories, where nuclei constantly fuse or break apart to create lighter or heavier elements.

When we refer to light or heavy elements, we're talking about their atomic mass.

Broadly speaking, atomic mass is based on the number of protons and neutrons in the nucleus of one atom of that element.
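That counting can be shown in a minimal sketch (the values below are standard textbook figures for uranium-238, not data from the study):

```python
# Atomic mass number A = number of protons (Z) + number of neutrons (N).
def mass_number(protons: int, neutrons: int) -> int:
    return protons + neutrons

# Uranium-238, the heaviest element found naturally on Earth in quantity:
# 92 protons + 146 neutrons. The nuclei inferred in this study exceed A = 260.
print(mass_number(92, 146))  # 238
```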

The heaviest elements are only known to be created in neutron stars via the rapid neutron capture process, or r-process.

Picture a single atomic nucleus floating in a soup of neutrons.

Suddenly, a bunch of those neutrons get stuck to the nucleus in a very short time period -- usually in less than one second -- then undergo some internal neutron-to-proton changes, and voila!

A heavy element, such as gold, platinum or uranium, forms.
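The capture-then-decay sequence above can be sketched as a toy calculation (the seed nucleus and step counts here are illustrative choices, not figures from the research):

```python
# Toy sketch of the r-process: a seed nucleus rapidly captures neutrons,
# then beta decays convert neutrons into protons. Each beta decay raises
# the atomic number Z by one while the mass number A = Z + N is unchanged.
def r_process(z: int, n: int, captures: int, beta_decays: int):
    n += captures                # rapid neutron capture phase
    for _ in range(beta_decays):
        n -= 1                   # beta decay: neutron -> proton
        z += 1
    return z, n, z + n           # final Z, N, and mass number A

# Illustrative path from an iron-56 seed (Z=26, N=30) to gold-197 (Z=79):
# capture 141 neutrons, then undergo 53 beta decays.
z, n, a = r_process(26, 30, 141, 53)
print(z, a)  # 79 197
```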

The heaviest elements are unstable or radioactive, meaning they decay over time.

One way that they do this is by splitting, a process called fission.

"The r-process is necessary if you want to make elements that are heavier than, say, lead and bismuth," says Ian Roederer, associate professor of physics at North Carolina State University and lead author of the research.

Roederer was previously at the University of Michigan.

"You have to add many neutrons very quickly, but the catch is that you need a lot of energy and a lot of neutrons to do so," Roederer says.

"We have a general idea of how the r-process works, but the conditions of the process are quite extreme," Roederer says.

"We don't have a good sense of how many different kinds of sites in the universe can generate the r-process, we don't know how the r-process ends, and we can't answer questions like, how many neutrons can you add? Or, how heavy can an element be? So we decided to look at elements that could be made by fission in some well-studied old stars to see if we could start to answer some of these questions."

The team took a fresh look at the amounts of heavy elements in 42 well-studied stars in the Milky Way.

The stars were known to have heavy elements formed by the r-process in earlier generations of stars.

By taking a broader view of the amounts of each heavy element found in these stars collectively, rather than individually as is more common, they identified previously unrecognized patterns.

Those patterns signaled that some elements listed near the middle of the periodic table -- such as silver and rhodium -- were likely the remnants of heavy element fission.

The team was able to determine that the r-process can produce atoms with an atomic mass of at least 260 before they fission.

"That 260 is interesting because we haven't previously detected anything that heavy in space or naturally on Earth, even in nuclear weapon tests," Roederer says.

"But seeing them in space gives us guidance for how to think about models and fission -- and could give us insight into how the rich diversity of elements came to be."

Read more at Science Daily

Three proposals from researchers to meet EU climate goals

The EU countries have decided that the EU is to be climate neutral by 2050. By 2030, greenhouse gas emissions must have been reduced by at least 55% compared to 1990. To meet this target, continued vigorous efforts are needed to reduce emissions, but that alone will not be enough. This is the conclusion of seven researchers from Sweden and Germany in an article in the journal Communications Earth & Environment. One of them is Mathias Fridahl, associate professor at the Department of Thematic Studies -- Environmental Change at Linköping University, Sweden.

"We have painted humanity into a corner. It's no longer possible to solve the climate crisis simply by reducing emissions. We also need to clean the atmosphere of carbon dioxide," says Mathias Fridahl.

The problem is that there are currently no incentives for companies and countries to invest in new technologies to remove carbon dioxide.

That is why a change in the EU's climate policy is needed. "There are many technologies that are quite well developed, but which aren't economically viable," says Mathias Fridahl.

He and his colleagues have three proposals that they believe could soon make a difference.

Anyone contributing to the removal of carbon dioxide should be able to get paid for it under the EU emissions trading scheme.

This should only apply to methods that have a long life span, that is, capture linked to the storage of carbon dioxide for thousands of years.

To get the trading scheme up and running, the researchers propose that the EU set up a central bank for carbon dioxide.

The bank would give investors a good price for the carbon dioxide removed from the atmosphere.

To maintain the drive to keep reducing emissions at the same time, the researchers propose that the bank strictly regulate how removals may be used to compensate for continued emissions.

The bank's financial muscle could come from revenues from carbon tariffs on goods from outside the Union.

To stimulate other measures with a shorter life span, the researchers propose an extension of the EU's land use regulation.

This regulation sets out which carbon dioxide removal measures member states may be credited with when reporting their climate emissions.

Today, only a limited number of removal methods in forestry and agriculture qualify.

The researchers contend that if the regulation were extended to more measures, it would encourage countries to invest resources in carbon removal.

The researchers also want the EU to identify which emissions will be very difficult or impossible to do anything about.

Greater clarity would reduce the risk of companies and member states postponing measures in the hope that their emissions will belong to the group that is difficult to tackle.

This would stimulate innovation and efforts to reduce emissions in parallel with initiatives to remove carbon dioxide.

Read more at Science Daily

Major breakthrough for severe asthma treatment

A landmark study has shown that severe asthma can be controlled using biologic therapies, without the addition of regular high-dose inhaled steroids which can have significant side effects.

The findings from the multinational SHAMAL study, published in The Lancet, demonstrated that 92% of patients using the biologic therapy benralizumab could safely reduce inhaled steroid dose and more than 60% could stop all use.

The study's results could be transformative for severe asthma patients by minimising or eliminating the unpleasant, and often serious, side effects of inhaled steroids.

These include osteoporosis, which increases the risk of fractures, as well as diabetes and cataracts.

Asthma is one of the most common respiratory diseases worldwide -- affecting almost 300 million people -- and around 3 to 5% of these have severe asthma.

This leads to daily symptoms of breathlessness, chest tightness and cough, along with repeated asthma attacks which require frequent hospitalisation.

The SHAMAL study was led by Professor David Jackson, head of the Severe Asthma Centre at Guy's and St Thomas' and Professor of Respiratory Medicine at King's College London.

Professor Jackson said: "Biological therapies such as benralizumab have revolutionised severe asthma care in many ways, and the results of this study show for the first time that steroid related harm can be avoided for the majority of patients using this therapy."

Benralizumab is a biologic therapy that reduces the number of inflammatory cells called eosinophils.

These cells are produced in abnormal numbers in the airways of patients with severe asthma and are critically involved in the development of asthma attacks.

Benralizumab is injected every four to eight weeks and is available in specialist NHS asthma centres.

The SHAMAL study took place across 22 sites in four countries -- the UK, France, Italy and Germany.

The 208 patients were randomly assigned to taper their high-dose inhaled steroid by varying amounts over 32 weeks, followed by a 16-week maintenance period.

Approximately 90% of patients experienced no worsening of asthma symptoms and remained free of any exacerbations throughout the 48-week study.

Similar studies to SHAMAL will be necessary before firm recommendations can be made regarding the safety and efficacy of reducing or eliminating high dose steroid use with other biologic therapies.

Read more at Science Daily

Dec 8, 2023

Unlocking neutron star rotation anomalies: Insights from quantum simulation

A collaboration between quantum physicists and astrophysicists, led by Francesca Ferlaino and Massimo Mannarelli, has achieved a significant breakthrough in understanding neutron star glitches. They were able to numerically simulate this enigmatic cosmic phenomenon with ultracold dipolar atoms. This research establishes a strong link between quantum mechanics and astrophysics and paves the way for quantum simulation of stellar objects from Earth.

Neutron stars have fascinated and puzzled scientists since their first detected signal in 1967.

Known for their periodic flashes of light and rapid rotation, neutron stars are among the densest objects in the universe, with a mass comparable to that of the Sun but compressed into a sphere only about 20 kilometers in diameter.

These stellar objects exhibit a peculiar behavior known as a "glitch," where the star suddenly speeds up its spin.

This phenomenon suggests that neutron stars might be partly superfluid.

In a superfluid, rotation is characterized by numerous tiny vortices, each carrying a fraction of angular momentum.

A glitch occurs when these vortices escape from the star's inner crust to its solid outer crust, thereby increasing the star's rotational speed.

The key to this study lies in the concept of a "supersolid" -- a state that exhibits both crystalline and superfluid properties -- which is predicted to be a necessary ingredient of neutron star glitches.

Quantized vortices nest within the supersolid until they collectively escape and are consequently absorbed by the outer crust of the star, accelerating its rotation.

Recently, the supersolid phase has been realized in experiments with ultracold dipolar atoms, providing a unique opportunity to simulate the conditions within a neutron star.

The recent study by researchers at the University of Innsbruck and the Austrian Academy of Sciences as well as the Laboratori Nazionali del Gran Sasso and the Gran Sasso Science Institute in Italy demonstrates that glitches can occur in ultracold supersolids, serving as versatile analogues for the inside of neutron stars.

This groundbreaking approach allows for a detailed exploration of the glitch mechanism, including its dependence on the quality of the supersolid.

"Our research establishes a strong link between quantum mechanics and astrophysics and provides a new perspective on the inner nature of neutron stars," says first author Elena Poli.

Glitches provide valuable insights into the internal structure and dynamics of neutron stars.

By studying these events, scientists can learn more about the properties of matter under extreme conditions.

"This research shows a new approach to gain insights into the behavior of neutron stars and opens new avenues for the quantum simulation of stellar objects from low-energy Earth laboratories," emphasizes Francesca Ferlaino.

Read more at Science Daily

Geoscientists map changes in atmospheric CO2 over past 66 million years

Today atmospheric carbon dioxide is at its highest level in at least several million years, thanks to widespread combustion of fossil fuels by humans over the past couple of centuries.

But where does 419 parts per million (ppm) -- the current concentration of the greenhouse gas in the atmosphere -- fit in Earth's history?

That's a question an international community of scientists, featuring key contributions by University of Utah geologists, is sorting out by examining a plethora of markers in the geologic record that offer clues about the contents of ancient atmospheres. Their initial study, published this week in the journal Science, reconstructs CO2 concentrations going back through the Cenozoic, the era that began with the demise of the dinosaurs and rise of mammals 66 million years ago.

Glaciers contain air bubbles, providing scientists with direct evidence of CO2 levels going back 800,000 years, according to U geology professor Gabe Bowen, one of the study's corresponding authors. But this record does not extend very deep into the geological past.

"Once you lose the ice cores, you lose direct evidence. You no longer have samples of atmospheric gas that you can analyze," Bowen said. "So you have to rely on indirect evidence, what we call proxies. And those proxies are tough to work with because they are indirect."

"Proxies" in the geologic record

These proxies include isotopes in minerals, the morphology of fossilized leaves and other lines of geological evidence that reflect atmospheric chemistry. One of the proxies stems from the foundational discoveries of U geologist Thure Cerling, himself a co-author on the new study, whose past research determined carbon isotopes in ancient soils are indicative of past CO2 levels.

But the strength of these proxies varies, and most cover narrow slices of the past. The research team, called the Cenozoic CO2 Proxy Integration Project, or CenCO2PIP, and organized by Columbia University climate scientist Bärbel Hönisch, set out to evaluate, categorize and integrate available proxies to create a high-fidelity record of atmospheric CO2.

"This represents some of the most inclusive and statistically refined approaches to interpreting CO2 over the last 66 million years," said co-author Dustin Harper, a U postdoctoral researcher in Bowen's lab. "Some of the new takeaways are we're able to combine multiple proxies from different archives of sediment, whether that's in the ocean or on land, and that really hasn't been done at this scale."

The new research is a community effort involving some 90 scientists from 16 countries. Funded by dozens of grants from multiple agencies, the group hopes to eventually reconstruct the CO2 record back 540 million years to the dawn of complex life.

At the start of the Industrial Revolution -- when humans began burning coal, then oil and gas, to fuel their economies -- atmospheric CO2 was around 280 ppm. The heat-trapping gas is released into the air when these fossil fuels burn.

Looking forward, concentrations are expected to climb up to 600 to 1,000 ppm by the year 2100, depending on the rate of future emissions. It is not clear exactly how these future levels will influence the climate.

But having a reliable map of past CO2 levels could help scientists more accurately predict what future climates may look like, according to U biology professor William Anderegg, director of the U's Wilkes Center for Climate & Policy.

"This is an incredibly important synthesis and has implications for future climate change as well, particularly the key processes and components of the Earth system that we need to understand to project the speed and magnitude of climate change," Anderegg said.

Today's 419 ppm is the highest CO2 in 14 million years

At times in the past when Earth was a far warmer place, levels of CO2 were much higher than now. Still, the 419 ppm recorded today represents a steep and perhaps dangerous spike and is unprecedented in recent geologic history.

"By 8 million years before present, there's maybe a 5% chance that CO2 levels were higher than today," Bowen said, "but really we have to go back 14 million years before we see levels we think were like today."

In other words, human activity has significantly altered the atmosphere within the span of a few generations. As a result, climate systems around the globe are showing alarming signs of disruption, such as powerful storms, prolonged drought, deadly heat waves and ocean acidification.

A solid understanding of atmospheric CO2 variation through geological time is also essential to deciphering and learning from various features of Earth's history. Changes in atmospheric CO2 and climate likely contributed to mass extinctions, as well as evolutionary innovations.

Read more at Science Daily

Serotonin loss may contribute to cognitive decline in the early stages of Alzheimer's disease

Comparing PET scans of more than 90 adults with and without mild cognitive impairment (MCI), Johns Hopkins Medicine researchers say relatively lower levels of the so-called "happiness" chemical, serotonin, in parts of the brain of those with MCI may play a role in memory problems including Alzheimer's disease.

The findings, first published online Sept. 13 in the Journal of Alzheimer's Disease, lend support to growing evidence that measurable changes in the brain happen in people with mild memory problems long before an Alzheimer's diagnosis, and may offer novel targets for treatments to slow or stop disease progression.

"The study shows that people with mild cognitive impairment already display loss of the serotonin transporter. This measure that reflects serotonin degeneration is associated with problems with memory, even when we take into account in our statistical model MRI measures of neurodegeneration and PET measures of the amyloid protein that are associated with Alzheimer's Disease," says Gwenn Smith, Ph.D., professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine.

MCI describes the diagnostic stage between normal brain function in aging and Alzheimer's Disease (AD). Symptoms of MCI include frequent forgetfulness of recent events, word finding difficulty, and loss of the sense of smell.

Those with MCI may stay in this stage indefinitely, or progress to more severe forms of cognitive deficits, giving urgency to the search for predictive markers, and possible early prevention interventions, investigators say.

The investigators cautioned that their study showed a correlation between lower serotonin transporter levels and memory problems in MCI, and was not designed to show causation or the role of serotonin in the progression from MCI to AD. To answer these questions, further research is needed to study over time healthy controls and individuals with MCI to demonstrate the role of serotonin in disease progression.

For the study, the Hopkins scientists recruited 49 volunteers with MCI, and 45 healthy adults ages 55 and older who underwent an MRI to measure changes in brain structure and two positron emission tomography (PET) scans of their brains at Johns Hopkins between 2009 and 2022.

The research team used PET scans to look specifically at the serotonin transporter -- a protein that regulates levels of serotonin, a brain chemical long associated with positive mood, appetite and sleep -- and to look at the amyloid-beta protein (Aβ) distribution in the brain.

Aβ is thought to play a central role in the pathology of AD. Studies in mice done at Johns Hopkins have shown that serotonin degeneration occurs before the development of widespread beta-amyloid deposits in the brain.

Loss of serotonin is often associated with depression, anxiety, and psychological disorders.

Researchers found that MCI patients had lower levels of the serotonin transporter, and higher levels of Aβ than healthy controls.

The MCI patients had up to 25% lower serotonin transporter levels in cortical and limbic regions than healthy controls.

In particular, they report, lower serotonin transporter levels were found in cortical, limbic, and subcortical regions of the brains in those with MCI, areas specifically responsible for executive function, emotion, and memory.

"The correlation we observed between lower serotonin transporters and memory problems in MCI is important because we may have identified a brain chemical that we can safely target that may improve cognitive deficits and, potentially, depressive symptoms," says Smith.

"If we can show that serotonin loss over time is directly involved in the transition from MCI to AD, recently developed antidepressant medications may be an effective way to improve memory deficits and depressive symptoms and thus, may be a powerful way forward to slow disease progression."

Researchers say future studies will include longitudinal follow-up of individuals with MCI, comparing serotonin degeneration with increases in Aβ levels, as well as in levels of the tau protein also associated with AD, relative to healthy adults.

They are also studying multi-modal antidepressant drugs to treat depression and memory deficits in hopes of mitigating and halting symptoms.

Read more at Science Daily

'Shocking' discovery: Electricity from electric eels may transfer genetic material to nearby animals

The electric eel is the biggest power-making creature on Earth. It can release up to 860 volts, which is enough to run a machine. In a recent study, a research group from Nagoya University in Japan found electric eels can release enough electricity to genetically modify small fish larvae. They published their findings in PeerJ -- Life and Environment.

The researchers' findings add to what we know about electroporation, a gene delivery technique.

Electroporation uses an electric field to create temporary pores in the cell membrane.

This lets molecules, like DNA or proteins, enter the target cell.

The research group was led by Professor Eiichi Hondo and Assistant Professor Atsuo Iida from Nagoya University.

They thought that if electricity flows in a river, it might affect the cells of nearby organisms.

Cells can take up DNA fragments present in the water, known as environmental DNA.

To test this, they exposed young zebrafish in their laboratory to a DNA solution containing a marker that glowed under light, to see whether the fish had taken up the DNA.

Then, they introduced an electric eel and prompted it to bite a feeder to discharge electricity.

According to Iida, electroporation is commonly viewed as a process only found in the laboratory, but he was not convinced.

"I thought electroporation might happen in nature," he said.

"I realized that electric eels in the Amazon River could well act as a power source, organisms living in the surrounding area could act as recipient cells, and environmental DNA fragments released into the water would become foreign genes, causing genetic recombination in the surrounding organisms because of electric discharge."

The researchers discovered that 5% of the larvae had markers showing gene transfer.

"This indicates that the discharge from the electric eel promoted gene transfer to the cells, even though eels have different shapes of pulse and unstable voltage compared to machines usually used in electroporation," said Iida.

"Electric eels and other organisms that generate electricity could affect genetic modification in nature."

Read more at Science Daily

Dec 7, 2023

Stellar winds regulate growth of galaxies

Galactic winds enable the exchange of matter between galaxies and their surroundings. In this way, they limit the growth of galaxies, that is, their star formation rate. Although this had already been observed in the local universe, an international research team led by a CNRS scientist has just revealed -- using MUSE, an instrument integrated into the European Southern Observatory's (ESO) Very Large Telescope -- the existence of the phenomenon in galaxies which are more than 7 billion years old and actively forming stars, the category to which most galaxies belong.

The team's findings, to be published in Nature on 6 December 2023, thus show this is a universal process.

Galactic winds are created by the explosion of massive stars.

As they are diffuse and of low density, they are usually hard to spot.

To see them, the scientists combined images of more than a hundred galaxies obtained through very long exposure times.

By studying magnesium atom emission signals, the team was also able to map the morphology of these winds, which appear as cones of matter perpendicularly ejected from both sides of the galactic plane.

In the future, the researchers hope to measure how far these winds extend and the quantity of matter they transport.

Read more at Science Daily

Climate change shown to cause methane to be released from the deep ocean

New research has shown that fire-ice -- frozen methane which is trapped as a solid under our oceans -- is vulnerable to melting due to climate change and could be released into the sea.

An international team of researchers led by Newcastle University found that as frozen methane and ice melts, methane -- a potent greenhouse gas -- is released and moves from the deepest parts of the continental slope to the edge of the underwater shelf.

They even discovered a pocket which had moved 25 miles (40 kilometres).

Publishing in the journal Nature Geoscience, the researchers say this means that much more methane could potentially be vulnerable and released into the atmosphere as a result of climate warming.

Methane hydrate

Methane hydrate, also known as fire-ice, is an ice-like structure found buried in the ocean floor that contains methane.

Vast amounts of methane are stored as hydrate beneath the oceans.

It thaws when the oceans warm, releasing methane -- known as dissociated methane -- into the ocean and atmosphere, contributing to global warming.

The scientists used advanced three-dimensional seismic imaging techniques to examine the portion of the hydrate that dissociated during climatic warming off the coast of Mauritania in Northwest Africa.

They identified a specific case where dissociated methane migrated over 40 kilometres and was released through a field of underwater depressions, known as pockmarks, during past warm periods.

Lead author, Professor Richard Davies, Pro-Vice-Chancellor, Global and Sustainability, Newcastle University, said: "It was a Covid lockdown discovery, I revisited imaging of strata just under the modern seafloor offshore of Mauritania and pretty much stumbled over 23 pockmarks. Our work shows they formed because methane released from hydrate, from the deepest parts of the continental slope vented into the ocean. Scientists had previously thought this hydrate was not vulnerable to climatic warming, but we have shown that some of it is."

Researchers have previously studied how changes in bottom water temperature near continental margins can affect the release of methane from hydrates.

However, these studies mainly focused on areas where only a small portion of global methane hydrates are located.

This study is one of only a small number to investigate the release of methane from the base of the hydrate stability zone, which lies deeper underwater.

The results show that methane released from the hydrate stability zone travelled a significant distance towards land.

Professor Dr Christian Berndt, Head of the Research Unit Marine Geodynamics, GEOMAR, in Kiel, Germany, added:

"This is an important discovery. So far, research efforts focused on the shallowest parts of the hydrate stability zone, because we thought that only this portion is sensitive to climate variations.

"The new data clearly show that far larger volumes of methane may be liberated from marine hydrates and we really have to get to the bottom of this to understand better the role of hydrates in the climate system."

Methane is the second most abundant anthropogenic greenhouse gas after carbon dioxide (CO2). Figures from the United States Environmental Protection Agency show that methane accounts for about 16% of global greenhouse gas emissions.

The study results can play a key role in helping to predict and address the impact of methane on our changing climate.

Read more at Science Daily

Paleolithic humans may have understood the properties of rocks for making stone tools

A research group led by the Nagoya University Museum and Graduate School of Environmental Studies in Japan has clarified differences in the physical characteristics of rocks used by early humans during the Paleolithic. They found that humans selected rock for a variety of reasons and not just because of how easy it was to break off. This suggests that early humans had the technical skill to discern the best rock for the tool. The researchers published the results in the Journal of Paleolithic Archaeology.

As Homo sapiens moved from Africa to Eurasia, they used stone tools made of rocks, such as obsidian and flint, to cut, slice, and craft ranged weapons.

Because of the significant role they played in their culture, understanding how early humans made stone tools is important to archaeologists.

Since the geographic expansion of Homo sapiens in Eurasia started in the Middle East, archaeologists Eiki Suga and Seiji Kadowaki from Nagoya University focused on the prehistoric sites belonging to three chronological periods in the Jebel Qalkha area, southern Jordan.

The team analyzed flint nodules in the outcrops that were exploited during the Middle and Upper Paleolithic (70,000 to 30,000 years ago).

They believe Paleolithic humans understood which rocks were appropriate for making tools and, therefore, intentionally searched for them.

According to their hypothesis, Paleolithic humans intentionally searched for flint that was translucent and smooth, as it could be easily broken off the rock face and shaped into sharp edges.

The group used a Schmidt hammer and a Rockwell hardness device to test the mechanical properties of the rocks.

The Schmidt hammer measures the elastic behavior of a material after the hammer strikes it, which tells researchers its rebound hardness.

The Rockwell hardness device presses a diamond indenter into the rock surface to test its strength.

At first, as Suga and Kadowaki expected, fine-grained flint was found to require less force to fracture than medium-grained flint.

This would have made the fine-grained flint more attractive in producing small stone tools.

Indeed, many stone tools from the Early Upper Paleolithic (40,000 to 30,000 years ago) contain fine-grained flint.

However, an earlier study by the same team found that during the Late Middle Paleolithic and the Initial Upper Paleolithic (70,000 to 40,000 years ago), medium-grained flint was more commonly used in stone tools than fine-grained flint.

But if fine-grained flint was so easy to use, why did our ancestors not make all their tools from it?

On further investigation, the researchers found that much of the fine-grained flint in the area suffered from abundant internal fractures caused by geological activities, which would have made it unsuitable for large stone tools, such as Levallois products and robust blades.

Therefore, it seems that Paleolithic humans selected medium-grained flint for large tools, even though it was tougher to shape, because it was more likely to last longer.

This offers a fascinating insight into our ancestors' behavior: they selected flint based on many factors beyond how easy it was to fracture, and could discern the most suitable rock for making stone tools.

Suga is enthusiastic about the findings, which suggest the complexity of our ancestors' behaviour.

"This study illustrates that the Paleolithic humans changed their choice of raw material to suit their stone tool morphologies and production techniques," he said.

"We believe that these prehistoric humans had a sensory understanding of the characteristics of the rocks and intentionally selected the stone material to be used according to the form and production technique of the desired stone tools. This intentional selection of the lithic raw material may have been an important component of the production of stone tools. This may show some aspect of flexible technological behavior adapted to the situation."

Read more at Science Daily

First map of human limb development reveals unexpected growth processes and explains syndromes found at birth

Human fingers and toes do not grow outward; instead, they form from within a larger foundational bud, as intervening cells recede to reveal the digits beneath. This is among many processes captured for the first time as scientists unveil a spatial cell atlas of the entire developing human limb, resolved in space and time.

Researchers at the Wellcome Sanger Institute, Sun Yat-sen University, EMBL's European Bioinformatics Institute and collaborators applied cutting-edge single-cell and spatial technologies to create an atlas characterising the cellular landscape of the early human limb, pinpointing the exact location of cells.

This study is part of the international Human Cell Atlas initiative to map every cell type in the human body, to transform understanding of health and disease.

The atlas, published today (6 December) in Nature, provides an openly available resource that captures the intricate processes governing the limbs' rapid development during the early stages of limb formation.

The atlas also uncovers new links between developmental cells and some congenital limb syndromes, such as short fingers and extra digits.

Limbs are known to initially emerge as undifferentiated cell pouches on the sides of the body, without a specific shape or function.

However, after eight weeks of development, they are well differentiated, anatomically complex and immediately recognisable as limbs, complete with fingers and toes.

This requires a very rapid and precise orchestration of cells.

Any small disturbances to this process can have a downstream effect, which is why variations in the limbs are among the most frequently reported syndromes at birth, affecting approximately one in 500 births globally.

While limb development has been extensively studied in mouse and chick models, the extent to which they mirror the human situation remained unclear.

However, advances in technology now enable researchers to explore the early stages of human limb formation.

In this new study, scientists from the Wellcome Sanger Institute, Sun Yat-sen University, and their collaborators analysed tissues between 5 and 9 weeks of development.

This allowed them to trace specific gene expression programs, activated at certain times and in specific areas, which shape the forming limbs.

Special staining of the tissue revealed clearly how cell populations differentially arrange themselves into patterns of the forming digits.

As part of the study, researchers demonstrated that certain gene patterns have implications for how the hands and feet form, identifying certain genes, which when disrupted, are associated with specific limb syndromes like brachydactyly -- short fingers -- and polysyndactyly -- extra fingers or toes.

The team were also able to confirm that many aspects of limb development are shared between humans and mice.

Overall, these findings not only provide an in-depth characterisation of human limb development but also offer critical insights that could impact the diagnosis and treatment of congenital limb syndromes.

Professor Hongbo Zhang, senior author of the study from Sun Yat-sen University, Guangzhou, said: "Decades of studying model organisms established the basis for our understanding of vertebrate limb development. However, characterising this in humans has been elusive until now, and we couldn't assume the relevance of mouse models for human development. What we reveal is a highly complex and precisely regulated process. It is like watching a sculptor at work, chiselling away at a block of marble to reveal a masterpiece. In this case, nature is the sculptor, and the result is the incredible complexity of our fingers and toes."

Dr Sarah Teichmann, senior author of the study from the Wellcome Sanger Institute, and co-founder of the Human Cell Atlas, said: "For the first time, we have been able to capture the remarkable process of limb development down to single cell resolution in space and time. Our work in the Human Cell Atlas is deepening our understanding of how anatomically complex structures form, helping us uncover the genetic and cellular processes behind healthy human development, with many implications for research and healthcare. For instance, we discovered novel roles of key genes MSC and PITX1 that may regulate muscle stem cells. This could offer potential for treating muscle-related disorders or injuries."

Read more at Science Daily

Dec 5, 2023

Astronomers determine the age of three mysterious baby stars at the heart of the Milky Way

Through analysis of high-resolution data from a ten-metre telescope in Hawaii, researchers at Lund University in Sweden have succeeded in generating new knowledge about three stars at the very heart of the Milky Way. The stars proved to be unusually young with a puzzling chemical composition that surprised the researchers.

The study, which has been published in The Astrophysical Journal Letters, examined a group of stars located in the nuclear star cluster that makes up the heart of the galaxy.

It concerns three stars that are difficult to study because they are extremely far away from our solar system, and hidden behind enormous clouds of dust and gas that block out light.

The fact that the area is also crowded with stars makes it very complicated to discern individual ones.

In a previous study, the researchers put forward a hypothesis that these specific stars in the middle of the Milky Way could be unusually young.

"We can now confirm this. In our study we have been able to date three of these stars as relatively young, at least as far as astronomers are concerned, with ages of 100 million to about 1 billion years. This can be compared with the sun, which is 4.6 billion years old," says Rebecca Forsberg, researcher in astronomy at Lund University.

The nuclear star cluster has mainly been seen, quite rightly, as a very ancient part of the galaxy.

But the researchers' new discovery of such young stars indicates that there is also active star formation going on in this ancient component of the Milky Way.

However, dating stars 25,000 light years from Earth is not something that can be done in a hurry.

The researchers used high-resolution data from the Keck II telescope in Hawaii, one of the world's largest telescopes with a mirror ten metres in diameter.

For further verification, they then measured how much of the heavy element iron the stars contained.

The element is important for tracing the galaxy's development: theories of how stars form and galaxies evolve predict that younger stars contain more heavy elements, because heavy elements have built up in the universe over time.

To determine the iron level, the astronomers observed the stars' spectra in infrared light, which shines through the densely dust-laden parts of the Milky Way more easily than optical light does.

The measurements showed that the iron levels varied considerably, which surprised the researchers.
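Astronomers conventionally express such iron measurements as the metallicity [Fe/H], the logarithm of a star's iron-to-hydrogen ratio relative to the Sun's. The article does not give the formula, so this is a sketch of the standard convention, with made-up abundances (the solar reference value is approximate):

```python
import math

def fe_h(n_fe_over_n_h_star, n_fe_over_n_h_sun=2.9e-5):
    """Metallicity [Fe/H]: log10 of the star's iron-to-hydrogen number
    ratio relative to the Sun's (solar value here is approximate)."""
    return math.log10(n_fe_over_n_h_star) - math.log10(n_fe_over_n_h_sun)

# A star with ten times the solar iron fraction has [Fe/H] of about +1;
# one with a tenth of the solar fraction has [Fe/H] of about -1.
print(fe_h(2.9e-4))  # ~ +1.0
print(fe_h(2.9e-6))  # ~ -1.0
```

A "very wide spread" in iron levels thus corresponds to a wide range of [Fe/H] values among stars in the same cluster.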

"The very wide spread of iron levels could indicate that the innermost parts of the galaxy are incredibly inhomogeneous, i.e. unmixed. This is something we had not expected and not only says something about how the centre of the galaxy appears, but also how the early universe may have looked," says Brian Thorsbro, researcher in astronomy at Lund University.

The study sheds significant light on our understanding of the early universe and the functioning of the very centre of the Milky Way.

The results may also be of benefit to inspire continued and future explorations of the heart of the galaxy, as well as the further development of models and simulations of the formation of galaxies and stars.

"Personally, I think it is very exciting that we can now study the very centre of our galaxy with such a high level of detail. These types of measurements have been standard for observations of the galactic disc where we are located, but have been an unreachable goal for more faraway and exotic parts of the galaxy. We can learn a lot about how our home galaxy was formed and developed from such studies," concludes Rebecca Forsberg.

Read more at Science Daily

Can signs of life be detected from Saturn's frigid moon?

As astrophysics technology and research continue to advance, one question persists: is there life elsewhere in the universe? The Milky Way galaxy alone has hundreds of billions of celestial bodies, but scientists often look for three crucial elements in their ongoing search: water, energy and organic material. Evidence indicates that Saturn's icy moon Enceladus is an 'ocean world' that contains all three, making it a prime target in the search for life.

During its 20-year mission, NASA's Cassini spacecraft discovered that ice plumes spew from Enceladus' surface at approximately 800 miles per hour (400 m/s). These plumes provide an excellent opportunity to collect samples and study the composition of Enceladus' oceans and potential habitability.

However, until now it was not known if the speed of the plumes would fragment any organic compounds contained within the ice grains, thus degrading the samples.

Now researchers from the University of California San Diego have shown unambiguous laboratory evidence that amino acids transported in these ice plumes can survive impact speeds of up to 4.2 km/s, supporting their detection during sampling by spacecraft.

Their findings appear in The Proceedings of the National Academy of Sciences (PNAS).

Beginning in 2012, UC San Diego Distinguished Professor of Chemistry and Biochemistry Robert Continetti and his co-workers custom-built a unique aerosol impact spectrometer, designed to study collision dynamics of single aerosols and particles at high velocities.

Although not built specifically to study ice grain impacts, it turned out to be exactly the right machine to do so.

"This apparatus is the only one of its kind in the world that can select single particles and accelerate or decelerate them to chosen final velocities," stated Continetti.

"From several micron diameters down to hundreds of nanometers, in a variety of materials, we're able to examine particle behavior, such as how they scatter or how their structures change upon impact."

In 2024 NASA will launch the Europa Clipper, which will travel to Jupiter.

Europa, one of Jupiter's largest moons, is another ocean world, and has a similar icy composition to Enceladus.

There is hope that the Clipper or any future probes to Saturn will be able to identify a specific series of molecules in the ice grains that could point to whether life exists in the subsurface oceans of these moons, but the molecules need to survive their speedy ejection from the moon and collection by the probe.

Although there has been research into the structure of certain molecules in ice particles, Continetti's team is the first to measure what happens when a single ice grain impacts a surface.

To run the experiment, ice grains were created using electrospray ionization, where water is pushed through a needle held at a high voltage, inducing a charge that breaks the water into increasingly smaller droplets.

The droplets were then injected into a vacuum, where they froze.

The team measured their mass and charge, then used image charge detectors to observe the grains as they flew through the spectrometer.

A key element to the experiment was installing a microchannel plate ion detector to accurately time the moment of impact down to the nanosecond.

The results showed that amino acids -- often called the building blocks of life -- can be detected with limited fragmentation up to impact velocities of 4.2 km/s.

"To get an idea of what kind of life may be possible in the solar system, you want to know there hasn't been a lot of molecular fragmentation in the sampled ice grains, so you can get that fingerprint of whatever it is that makes it a self-contained life form," said Continetti.

"Our work shows that this is possible with the ice plumes of Enceladus."

Continetti's research also raises interesting questions for chemistry itself, including how salt affects the detectability of certain amino acids.

It is believed that Enceladus contains vast salty oceans -- more than is present on Earth.

Because salt changes the properties of water as a solvent as well as the solubility of different molecules, this could mean that some molecules cluster on the surface of the ice grains, making them more likely to be detected.

"The implications this has for detecting life elsewhere in the solar system without missions to the surface of these ocean-world moons is very exciting, but our work goes beyond biosignatures in ice grains," stated Continetti.

"It has implications for fundamental chemistry as well. We are excited by the prospect of following in the footsteps of Harold Urey and Stanley Miller, founding faculty at UC San Diego in looking at the formation of the building blocks of life from chemical reactions activated by ice grain impact."

Read more at Science Daily

Fossil CO2 emissions at record high in 2023

Global carbon emissions from fossil fuels have risen again in 2023 -- reaching record levels, according to new research from the Global Carbon Project science team.

The annual Global Carbon Budget projects fossil carbon dioxide (CO2) emissions of 36.8 billion tonnes in 2023, up 1.1% from 2022.

Fossil CO2 emissions are falling in some regions, including Europe and the USA, but rising overall -- and the scientists say global action to cut fossil fuels is not happening fast enough to prevent dangerous climate change.

Emissions from land-use change (such as deforestation) are projected to decrease slightly but are still too high to be offset by current levels of reforestation and afforestation (new forests).

The report projects that total global CO2 emissions (fossil + land-use change) will be 40.9 billion tonnes in 2023.

This is about the same as 2022 levels, and part of a 10-year "plateau" -- far from the steep reduction in emissions that is urgently needed to meet global climate targets.

The research team included the University of Exeter, the University of East Anglia (UEA), CICERO Center for International Climate Research, Ludwig-Maximilian-University Munich and 90 other institutions around the world.

"The impacts of climate change are evident all around us, but action to reduce carbon emissions from fossil fuels remains painfully slow," said Professor Pierre Friedlingstein, of Exeter's Global Systems Institute, who led the study.

"It now looks inevitable we will overshoot the 1.5°C target of the Paris Agreement, and leaders meeting at COP28 will have to agree rapid cuts in fossil fuel emissions even to keep the 2°C target alive."

Professor Corinne Le Quéré, Royal Society Research Professor at UEA's School of Environmental Sciences said: "The latest CO2 data shows that current efforts are not profound or widespread enough to put global emissions on a downward trajectory towards Net Zero, but some trends in emissions are beginning to budge, showing climate policies can be effective.

"Global emissions at today's level are rapidly increasing the CO2 concentration in our atmosphere, causing additional climate change and increasingly serious and growing impacts."

"All countries need to decarbonise their economies faster than they are at present to avoid the worst impacts of climate change."

How long until we cross 1.5°C of global warming?

This study also estimates the remaining carbon budget before the 1.5°C target is breached consistently over multiple years, not just for a single year.

At the current emissions level, the Global Carbon Budget team estimates a 50% chance global warming will exceed 1.5°C consistently in about seven years.

This estimate is subject to large uncertainties, primarily due to uncertainty in the additional warming from non-CO2 agents, especially for the 1.5°C target, which is close to the current warming level.

However, it's clear that the remaining carbon budget -- and therefore the time left to meet the 1.5°C target and avoid the worst impacts of climate change -- is running out fast.
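The "about seven years" estimate follows from simple division of the remaining budget by current emissions. A minimal sketch, assuming a remaining budget of roughly 275 billion tonnes of CO2 -- an illustrative figure consistent with the article's numbers, not one quoted in it:

```python
# Rough years remaining before the 1.5C budget is exhausted, assuming
# emissions stay flat at the 2023 level. The budget figure below is an
# illustrative assumption, not a number quoted in the article.
remaining_budget_gt = 275.0   # GtCO2 for a 50% chance of staying under 1.5C
annual_emissions_gt = 40.9    # projected total CO2 emissions in 2023

years_left = remaining_budget_gt / annual_emissions_gt
print(f"~{years_left:.1f} years at constant emissions")
```

Any decline in annual emissions stretches this horizon out; any growth shortens it, which is why the estimate is given only as "about seven years."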

Other key findings from the 2023 Global Carbon Budget include:

  • Regional trends vary dramatically. Emissions in 2023 are projected to increase in India (8.2%) and China (4.0%), and decline in the EU (-7.4%), the USA (-3.0%) and the rest of the world (-0.4%).
  • Global emissions from coal (1.1%), oil (1.5%) and gas (0.5%) are all projected to increase.
  • Atmospheric CO2 levels are projected to average 419.3 parts per million in 2023, 51% above pre-industrial levels.
  • About half of all CO2 emitted continues to be absorbed by land and ocean "sinks," with the rest remaining in the atmosphere where it causes climate change.
  • Global CO2 emissions from fires in 2023 have been larger than the average (based on satellite records since 2003) due to an extreme wildfire season in Canada, where emissions were six to eight times higher than average.
  • Current levels of technology-based Carbon Dioxide Removal (ie excluding nature-based means such as reforestation) amount to about 0.01 million tonnes CO2, more than a million times smaller than current fossil CO2 emissions.
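The "51% above pre-industrial levels" bullet can be reproduced from the projected concentration, assuming the commonly used pre-industrial baseline of about 278 ppm (the baseline is an assumption here; the article gives only the percentage):

```python
co2_2023_ppm = 419.3        # projected 2023 average from the report
pre_industrial_ppm = 278.0  # commonly cited ~1750 baseline (assumed here)

increase_pct = (co2_2023_ppm / pre_industrial_ppm - 1) * 100
print(f"{increase_pct:.0f}% above pre-industrial")
```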


The Global Carbon Budget report, produced by an international team of more than 120 scientists, provides an annual, peer-reviewed update, building on established methodologies in a fully transparent manner.

Read more at Science Daily

More than a meteorite: New clues about the demise of dinosaurs

What wiped out the dinosaurs? A meteorite plummeting to Earth is only part of the story, a new study suggests. Climate change triggered by massive volcanic eruptions may have ultimately set the stage for the dinosaur extinction, challenging the traditional narrative that a meteorite alone delivered the final blow to the ancient giants.

That's according to a study published in Science Advances, co-authored by Don Baker, a professor in McGill University's Department of Earth and Planetary Sciences.

The research team delved into volcanic eruptions of the Deccan Traps -- a vast and rugged plateau in Western India formed by molten lava.

The eruptions released a staggering one million cubic kilometres of molten rock, and may have played a key role in cooling the global climate around 65 million years ago.

The work took researchers around the world, from hammering out rocks in the Deccan Traps to analyzing the samples in England and Sweden.

A new season? 'Volcanic winters'

In the lab, the scientists estimated how much sulfur and fluorine was injected into the atmosphere by massive volcanic eruptions in the 200,000 years before the dinosaur extinction.

Remarkably, they found the sulfur release could have triggered a global drop in temperature around the world -- a phenomenon known as a volcanic winter.

"Our research demonstrates that climatic conditions were almost certainly unstable, with repeated volcanic winters that could have lasted decades, prior to the extinction of the dinosaurs. This instability would have made life difficult for all plants and animals and set the stage for the dinosaur extinction event. Thus our work helps explain this significant extinction event that led to the rise of mammals and the evolution of our species," said Prof. Don Baker.

New technique

Uncovering clues within ancient rock samples was no small feat.

In fact, a new technique developed at McGill helped decode the volcanic history.

The technique for estimating sulfur and fluorine releases, a complex combination of chemistry and experiments, is a bit like cooking pasta.

"Imagine making pasta at home. You boil the water, add salt, and then the pasta. Some of the salt from the water goes into the pasta, but not much of it," explains Baker.

Similarly, some elements become trapped in minerals as they cool following a volcanic eruption.

Just as you could calculate the salt concentration of the cooking water by analyzing the salt in the pasta itself, the new technique allowed scientists to measure sulfur and fluorine in rock samples.

With this information, the scientists could calculate the amount of these gases released during the eruptions.
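The pasta analogy describes a partition-coefficient calculation: the concentration measured in a cooled mineral is divided by the mineral-melt partition coefficient to recover the concentration in the magma, and the drop in dissolved concentration times the erupted mass gives the gas released. A sketch with entirely hypothetical numbers -- the study's actual coefficients and concentrations are not given in the article:

```python
def melt_concentration_ppm(mineral_ppm, partition_coefficient):
    # D = C_mineral / C_melt, so the melt concentration is C_mineral / D.
    return mineral_ppm / partition_coefficient

def gas_released_gt(magma_mass_gt, pre_eruption_ppm, degassed_ppm):
    # Gas released = erupted magma mass x drop in dissolved concentration
    # (ppm converted to a mass fraction).
    return magma_mass_gt * (pre_eruption_ppm - degassed_ppm) * 1e-6

# Hypothetical example: 200 ppm sulfur in a mineral with D = 0.1 implies
# 2000 ppm dissolved in the melt; degassing down to 500 ppm across 1e6 Gt
# of erupted magma would release 1500 ppm x 1e6 Gt = 1500 Gt of sulfur.
melt_s_ppm = melt_concentration_ppm(200.0, 0.1)
released_gt = gas_released_gt(1.0e6, melt_s_ppm, 500.0)
```

The hard part of the actual study is experimental: determining the partition coefficient for each element under the relevant pressure and temperature conditions.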

Read more at Science Daily

Dec 4, 2023

Rocky planets can form in extreme environments

An international team of astronomers has used NASA's James Webb Space Telescope to provide the first observation of water and other molecules in the highly irradiated inner, rocky-planet-forming regions of a disk in one of the most extreme environments in our galaxy. These results suggest that the conditions for terrestrial planet formation can occur in a broader range of environments than previously thought.

These are the first results from the eXtreme Ultraviolet Environments (XUE) James Webb Space Telescope program, which focuses on the characterization of planet-forming disks (vast, spinning clouds of gas, dust, and chunks of rock where planets form and evolve) in massive star-forming regions.

These regions are likely representative of the environment in which most planetary systems formed.

Understanding the impact of environment on planet formation is important for scientists to gain insights into the diversity of the different types of exoplanets.

The XUE program targets a total of 15 disks in three areas of the Lobster Nebula (also known as NGC 6357), a large emission nebula roughly 5,500 light-years away from Earth in the constellation Scorpius.

The Lobster Nebula is one of the youngest and closest massive star-formation complexes, and is host to some of the most massive stars in our galaxy.

Massive stars are hotter, and therefore emit more ultraviolet (UV) radiation.

This can disperse the gas, making the expected disk lifetime as short as a million years.

Thanks to Webb, astronomers can now study the effect of UV radiation on the inner rocky-planet forming regions of protoplanetary disks around stars like our Sun.

"Webb is the only telescope with the spatial resolution and sensitivity to study planet-forming disks in massive star-forming regions," said team lead María Claudia Ramírez-Tannus of the Max Planck Institute for Astronomy in Germany.

Astronomers aim to characterize the physical properties and chemical composition of the rocky-planet-forming regions of disks in the Lobster Nebula using the Medium Resolution Spectrometer on Webb's Mid-Infrared Instrument (MIRI). This first result focuses on the protoplanetary disk termed XUE 1, which is located in the star cluster Pismis 24.

"Only the MIRI wavelength range and spectral resolution allow us to probe the molecular inventory and physical conditions of the warm gas and dust where rocky planets form," added team member Arjan Bik of Stockholm University in Sweden.

Due to its location near several massive stars in NGC 6357, scientists expect XUE 1 to have been constantly exposed to high amounts of ultraviolet radiation throughout its life.

However, in this extreme environment the team still detected a range of molecules that are the building blocks for rocky planets.

"We find that the inner disk around XUE 1 is remarkably similar to those in nearby star-forming regions," said team member Rens Waters of Radboud University in the Netherlands.

"We've detected water and other molecules like carbon monoxide, carbon dioxide, hydrogen cyanide, and acetylene. However, the emission found was weaker than some models predicted. This might imply a small outer disk radius."

"We were surprised and excited because this is the first time that these molecules have been detected under these extreme conditions," added Lars Cuijpers of Radboud University.

The team also found small, partially crystalline silicate dust at the disk's surface.

These are considered to be building blocks of rocky planets.

These results are good news for rocky planet formation, as the science team finds that the conditions in the inner disk resemble those found in the well-studied disks located in nearby star-forming regions, where only low-mass stars form.

This suggests that rocky planets can form in a much broader range of environments than previously believed.

The team notes that the remaining observations from the XUE program are crucial to establish the commonality of these conditions.

Read more at Science Daily

Study identifies key algae species helping soft corals survive warming oceans

Scleractinian corals, or hard corals, have been disappearing globally over the past four decades, a result of climate change, pollution, unsustainable coastal development and overfishing. However, some Caribbean octocorals, or soft corals, are not meeting the same fate.

During a two-year survey of soft corals in the Florida Keys, Mary Alice Coffroth, professor emerita of geology at the University at Buffalo, along with a small team of UB researchers, identified three species of octocorals that have survived heat waves. While the coral animal itself may be heat tolerant, Coffroth said that her team concluded that the symbiotic algae inside the coral serve as a protector of sorts.

"The resistance and resilience of Caribbean octocorals offers clues for the future of coral reefs," Coffroth said.

A recent paper outlining their research, "What makes a winner? Symbiont and host dynamics determine Caribbean octocoral resilience to bleaching," was published on Nov. 22, in Science Advances by the American Association for the Advancement of Science (AAAS).

Coffroth is the lead author on the study she conducted between 2015 and 2017 with graduate student Louis Buccella, undergraduates Katherine Eaton and Alyssa Gooding and technician Harleena Franklin. Howard Lasker, professor emeritus in the departments of Environment and Sustainability and Geology, also contributed to the study.

Algae helps corals survive heat waves

Both hard and soft coral depend on a nutritional symbiosis with single-celled algae living within their tissues. Warmer waters can cause the symbiosis to break down, resulting in a loss of the algal symbionts, which turns the corals white, a phenomenon known as bleaching.

"Bleaching can lead to coral death," said Coffroth, who has studied coral reefs in the Florida Keys since 1998, including a more recent study in 2020-21. "It's unclear if the algae leave or are ejected from the coral.

"In this study, we examined possible mechanisms that contribute to the heightened resistance and resilience of three octocoral species in the face of the recurring marine heat waves leading to bleaching events," Coffroth said, noting that this is the first study that follows both symbiont genetic makeup and density in Caribbean octocorals before, during and after a major heat wave.

By and large, Caribbean octocorals harbor symbionts within the genus Breviolum, she said. And this symbiont is helping to make the octocoral better able to handle the rising heat.

"The Breviolum densities declined during the heatwaves but recovered quickly," she explained. "Octocoral mortality was low compared to their scleractinian relatives."

2014 El Niño prompted research

When Coffroth saw bleached corals during the 2014 El Niño and knew that a similar event was predicted for the following summer, she applied for a Rapid Response Research (RAPID) grant from the National Science Foundation. She was awarded $56,305 and with her master's student, Buccella, conducted the study in the Keys, following the fate of the octocorals and their symbionts for 28 months.

She and other members of the team made trips to the Keys Marine Lab at the Florida Institute of Oceanography to study the octocorals in the spring and fall of 2015 and 2016 and spring and summer of 2017, recording coral coloration and taking samples to study density of the symbionts and their genetic identity.

"We knew it was critical to follow individual colonies across an event with long-term monitoring of both host and symbiont responses," she said, "and to examine the response at least at the level of symbiont species, if not the genotype, to identify potentially resilient species."

Climate change moving faster than coral evolution

Although the study began almost a decade ago, Coffroth said the findings are extremely relevant because they mirror what is happening right now, with the continuing warming of ocean waters, increased storms and major bleaching events across the globe.

"There is evidence that corals are withstanding higher temperature now than they did in the 1960s," she said. "That signals evolution, but the problem is that climate change is moving too fast, faster than evolution."

In addition to their beautiful aesthetics, coral reefs provide many benefits to the planet and its inhabitants, including barriers to coastal regions that are susceptible to hurricanes and other tropical storms; habitat for large fish such as grouper and snapper; a tourist destination for snorkeling, fishing and diving; and a source for bioactive compounds used in drugs to treat inflammation and certain kinds of cancer.

"If you see a picture of coral reefs from when I started diving in the 1970s and compare it with one now, it makes you want to cry," she said. "The change is just amazing."

While she noted that this study has some important observations, further study is needed to better understand what is happening to the ecosystem.

"I'm seeing species bleach that have never bleached before but also ones showing more resilience," she said. "There is a lot of variation within both the animal and symbiont genera. We need to understand the variation."

The hope is to continue research into coral reef relationships and the durability of the symbiotic algae while also taking steps to halt the damage to the environment by human action, such as overfishing and the burning of fossil fuels.

Read more at Science Daily

'Bone biographies' reveal lives of medieval England's common people -- and illuminate early benefits system

A series of 'bone biographies' created by a major research project tell the stories of medieval Cambridge residents as recorded on their skeletons, illuminating everyday lives during the era of Black Death and its aftermath.

The work is published alongside a new study investigating medieval poverty by examining remains from the cemetery of a former hospital that housed the poor and infirm.

University of Cambridge archaeologists analysed close to 500 skeletal remains excavated from burial grounds across the city, dating between the 11th and 15th centuries. Samples came from a range of digs dating back to the 1970s.

The latest techniques were used to investigate diets, DNA, activities, and bodily traumas of townsfolk, scholars, friars and merchants. Researchers focused on sixteen of the most revealing remains that are representative of various "social types."

The full "osteobiographies" are available on a new website launched by the After the Plague project at Cambridge University.

"An osteobiography uses all available evidence to reconstruct an ancient person's life," said lead researcher Prof John Robb from Cambridge's Department of Archaeology. "Our team used techniques familiar from studies such as Richard III's skeleton, but this time to reveal details of unknown lives -- people we would never learn about in any other way."

"The importance of using osteobiography on ordinary folk rather than elites, who are documented in historical sources, is that they represent the majority of the population but are those that we know least about," said After the Plague researcher Dr Sarah Inskip (now at University of Leicester).

The project used a statistical analysis of likely names drawn from written records of the period to give pseudonyms to the people studied.

"Journalists report anonymous sources using fictitious names. Death and time ensure anonymity for our sources, but we wanted them to feel relatable," said Robb.

Meet 92 ('Wat'), who survived the plague, eventually dying as an older man with cancer in the city's charitable hospital, and 335 ('Anne'), whose life was beset by repeated injuries, leaving her to hobble on a shortened right leg.

Meet 730 ('Edmund'), who suffered from leprosy but -- contrary to stereotypes -- lived among ordinary people, and was buried in a rare wooden coffin. And 522 ('Eudes'), the poor boy who grew into a square-jawed friar with a hearty diet, living long despite painful gout.

Inside the medieval benefits system

The website coincides with a study from the team published in the journal Antiquity, which investigates the inhabitants of the hospital of St. John the Evangelist.

Founded around 1195, this institution helped the "poor and infirm," housing a dozen or so inmates at any one time. It lasted for some 300 years before being replaced by St. John's College in 1511. The site was excavated in 2010.

"Like all medieval towns, Cambridge was a sea of need," said Robb. "A few of the luckier poor people got bed and board in the hospital for life. Selection criteria would have been a mix of material want, local politics, and spiritual merit."

The study gives an inside look at how a "medieval benefits system" operated. "We know that lepers, pregnant women and the insane were prohibited, while piety was a must," said Robb. Inmates were required to pray for the souls of hospital benefactors, to speed them through purgatory. "A hospital was a prayer factory."

Molecular, bone and DNA data from over 400 remains in the hospital's main cemetery show that inmates were an inch shorter on average than townsfolk. They were more likely to die younger and to show signs of tuberculosis.

Inmates were more likely to bear traces on their bones of childhoods blighted by hunger and disease. However, they also had lower rates of bodily trauma, suggesting life in the hospital reduced physical hardship or risk.

Children buried in the hospital were small for their age by up to five years' worth of growth. "Hospital children were probably orphans," said Robb. Signs of anaemia and injury were common, and about a third had rib lesions denoting respiratory diseases such as TB.

As well as the long-term poor, up to eight hospital residents had isotope levels indicating a lower-quality diet in older age, and may be examples of the "shame-faced poor": those fallen from comfort into destitution, perhaps after they became unable to work.

"Theological doctrines encouraged aid for the shame-faced poor, who threatened the moral order by showing that you could live virtuously and prosperously but still fall victim to twists of fortune," said Robb.

The researchers suggest that the variety of people within the hospital -- from orphans and pious scholars to the formerly prosperous -- may have helped appeal to a range of donors.

Finding the university scholars

The researchers were also able to identify some skeletons as probably those of early university scholars. The clue was in the arm bones.

Almost all townsmen had asymmetric arm bones, with their right humerus (upper arm bone) built more strongly than their left one, reflecting tough working regimes, particularly in early adulthood.

However, about ten men from the hospital had symmetrical humeri, yet they showed no signs of a poor upbringing, limited growth, or chronic illness. Most dated from the later 14th and 15th centuries.

"These men did not habitually do manual labour or craft, and they lived in good health with decent nutrition, normally to an older age. It seems likely they were early scholars of the University of Cambridge," said Robb.

"University clerics did not have the novice-to-grave support of clergy in religious orders. Most scholars were supported by family money, earnings from teaching, or charitable patronage.

"Less well-off scholars risked poverty once illness or infirmity took hold. As the university grew, more scholars would have ended up in hospital cemeteries."

Isotope work suggests the first Cambridge students came mainly from eastern England, with some from the dioceses of Lincoln and York.

Cambridge and the Black Death

Most remains for this study came from three sites. In addition to the Hospital, an overhaul of the University's New Museums Site in 2015 yielded remains from a former Augustinian Friary, and the project also used skeletons excavated in the 1970s from the grounds of a medieval parish church: 'All Saints by the Castle'.

The team laid out each skeleton to do an inventory, then took samples for radiocarbon dating and DNA analysis. "We had to keep track of hundreds of bone samples zooming all over the place," said Robb.

In 1348-9 the bubonic plague -- the Black Death -- hit Cambridge, killing 40-60% of the population. Most of the dead were buried in town cemeteries or plague pits such as one on Bene't Street next to the former friary.

However, the team used the World Health Organization's method of calculating "Disability-Adjusted Life Years" (DALYs) -- the years of human life and life quality a disease costs a population -- to show that bubonic plague may have ranked only tenth or twelfth among the serious health problems facing medieval Europeans.

"Everyday diseases, such as measles, whooping cough and gastrointestinal infections, ultimately took a far greater toll on medieval populations," said Robb.
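The WHO metric the team borrowed has a compact definition: a disease's DALY burden is years of life lost to early death (YLL) plus years lived with disability (YLD). The sketch below illustrates the arithmetic in Python; the function name and all figures are hypothetical illustrations, not the project's actual data or code.

```python
def daly(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    """Disability-Adjusted Life Years: DALY = YLL + YLD.

    YLL = deaths * remaining life expectancy at age of death
    YLD = cases * disability weight (0..1) * average duration of illness in years
    """
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# Hypothetical figures for one disease in a medieval town:
burden = daly(deaths=500, years_lost_per_death=20,
              cases=600, disability_weight=0.6, duration_years=0.05)
# 500*20 + 600*0.6*0.05 = 10,018 DALYs
```

Because YLD accumulates with every case, common recurring illnesses such as measles or gastrointestinal infections can outrank a rarer but deadlier epidemic over the long run, which is the comparison the researchers drew.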

Read more at Science Daily

A mixed origin made maize successful

Maize is one of the world's most widely grown crops. It is used for both human and animal foods and holds great cultural significance, especially for indigenous peoples in the Americas. Yet despite its importance, the origins of the grain have been hotly debated for more than a century. Now new research, published Dec. 1 in Science, shows that all modern maize descends from a hybrid created just over 5,000 years ago in central Mexico, thousands of years after the plant was first domesticated.

The work has implications both for improving one of the world's most important crops and for understanding how the histories of people and their crops influence each other.

"It's a new model for the origins and spread of maize, and how it became a staple across the Americas," said Jeffrey Ross-Ibarra, professor in the Department of Evolution and Ecology at the University of California, Davis and senior author on the paper.

For the last few decades, the consensus has been that maize (Zea mays) was domesticated once from a single wild grass -- called teosinte -- in the lowlands of southwest Mexico about 9,000 to 10,000 years ago.

Known as corn in the United States, maize is not only a staple of diets around the globe, but can also be processed into sweeteners, ethanol fuel and other products.

More recently, though, it's become clear that the genome of modern maize also contains a hefty dose of DNA from a second teosinte that grows in the highlands of central Mexico.

Ross-Ibarra and collaborators in the U.S., China and Mexico analyzed the genomes of over a thousand samples of maize and wild relatives.

They found that about 20 percent of the genome of all maize worldwide comes from this second highland teosinte.

New model for spread of maize

These new findings suggest that, though maize was domesticated around 10,000 years ago, it was not until 4,000 years later, when it hybridized with highland teosinte, that maize really took off as a popular crop and food staple.

This is also supported by archaeological evidence of the increasing importance of maize around the same time.

The new crop spread rapidly through the Americas and later worldwide.

Today, about 1.2 billion metric tons are harvested each year globally.

The hunt for why highland teosinte enabled maize to become a staple is still underway, Ross-Ibarra said.

The researchers did find genes related to cob size -- perhaps representing an increased yield potential -- and flowering time, which likely helped maize, a tropical crop, to grow at higher latitudes with longer days.

Hybridization may also have brought "hybrid vigor," where a hybrid organism is more vigorous than either of its parents.

The researchers observed that genomic segments from highland teosinte contained fewer harmful mutations than did other parts of the genome.

While the initial hybridization may have been accidental, it's likely that indigenous farmers recognized and took advantage of the novel variation introduced from highland maize, Ross-Ibarra said.

Even today, he said, "If you talk to Mexican farmers, some will tell you that letting wild maize grow near the fields makes their crops stronger."

A team led by Ross-Ibarra with Professor Graham Coop at UC Davis, archaeologists at UC Santa Barbara and geneticists at Swedish University of Agricultural Sciences was recently awarded a $1.6 million grant from the National Science Foundation to study the co-evolution of humans and maize in the Americas.

They will use genetics to look at how humans and maize spread across the continent and how populations of both maize and humans grew and shrank as they interacted with each other.

"We will incorporate human genetic data, maize genetics and archaeological data in an effort to answer many of the questions raised by our new model of maize origins," Ross-Ibarra said.
