Jul 3, 2021

'There may not be a conflict after all' in expanding universe debate

Our universe is expanding, but our two main ways to measure how fast this expansion is happening have resulted in different answers. For the past decade, astrophysicists have been gradually dividing into two camps: one that believes that the difference is significant, and another that thinks it could be due to errors in measurement.

If it turns out that errors are causing the mismatch, that would confirm our basic model of how the universe works. The other possibility presents a thread that, when pulled, would suggest that some fundamental new physics is missing and needed to stitch the picture back together. For several years, each new piece of evidence from telescopes has seesawed the argument back and forth, giving rise to what has been called the 'Hubble tension.'

Wendy Freedman, the John and Marion Sullivan University Professor in Astronomy and Astrophysics at the University of Chicago, made some of the original measurements of the expansion rate of the universe that resulted in a higher value of the Hubble constant. But in a new review paper accepted to the Astrophysical Journal, Freedman gives an overview of the most recent observations. Her conclusion: the latest observations are beginning to close the gap.

That is, there may not be a conflict after all, and our standard model of the universe does not need to be significantly modified.

The rate at which the universe is expanding is called the Hubble constant, named for UChicago alum Edwin Hubble, SB 1910, PhD 1917, who is credited with discovering the expansion of the universe in 1929. Scientists want to pin down this rate precisely, because the Hubble constant is tied to the age of the universe and how it evolved over time.
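The relationship the Hubble constant describes can be written as a single line of arithmetic: under Hubble's law, a galaxy's recession velocity is its distance multiplied by H0. A minimal sketch, using the CMB-derived value of 67.4 km/s/Mpc discussed in this article (local measurements give values closer to 70-73):

```python
# Hubble's law: recession velocity v = H0 * d.
# Illustrative only; H0 = 67.4 km/s/Mpc is the value inferred from the
# cosmic microwave background, as described in the article.

H0 = 67.4  # km/s per megaparsec

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at roughly 6,740 km/s.
print(round(recession_velocity(100)))  # -> 6740
```

The tension in the article amounts to a disagreement over the value of `H0` in this equation, not over the form of the law itself.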

A substantial wrinkle emerged in the past decade when results from the two main measurement methods began to diverge. But scientists are still debating the significance of the mismatch.

One way to measure the Hubble constant is by looking at very faint light left over from the Big Bang, called the cosmic microwave background. This has been done both in space and on the ground with facilities like the UChicago-led South Pole Telescope. Scientists can feed these observations into their 'standard model' of the early universe and run it forward in time to predict what the Hubble constant should be today; they get an answer of 67.4 kilometers per second per megaparsec.

The other method is to look at stars and galaxies in the nearby universe, and measure their distances and how fast they are moving away from us. Freedman has been a leading expert on this method for many decades; in 2001, her team made one of the landmark measurements using the Hubble Space Telescope to image stars called Cepheids. The value they found was 72. Freedman has continued to measure Cepheids in the years since, reviewing more telescope data each time; however, in 2019, she and her colleagues published an answer based on an entirely different method using stars called red giants. The idea was to cross-check the Cepheids with an independent method.

Red giants are very large and luminous stars that always reach the same peak brightness before rapidly fading. If scientists can accurately measure the actual, or intrinsic, peak brightness of the red giants, they can then measure the distances to their host galaxies, an essential but difficult part of the equation. The key question is how accurate those measurements are.

The first version of this calculation in 2019 used a single, very nearby galaxy to calibrate the red giant stars' luminosities. Over the past two years, Freedman and her collaborators have run the numbers for several different galaxies and star populations. "There are now four independent ways of calibrating the red giant luminosities, and they agree to within 1% of each other," said Freedman. "That indicates to us this is a really good way of measuring the distance."

"I really wanted to look carefully at both the Cepheids and red giants. I know their strengths and weaknesses well," said Freedman. "I have come to the conclusion that we do not require fundamental new physics to explain the differences in the local and distant expansion rates. The new red giant data show that they are consistent."

University of Chicago graduate student Taylor Hoyt, who has been making measurements of the red giant stars in the anchor galaxies, added, "We keep measuring and testing the red giant branch stars in different ways, and they keep exceeding our expectations."

The value of the Hubble constant Freedman's team gets from the red giants is 69.8 km/s/Mpc -- virtually the same as the value derived from the cosmic microwave background experiment. "No new physics is required," said Freedman.

The calculations using Cepheid stars still give higher numbers, but according to Freedman's analysis, the difference may not be troubling. "The Cepheid stars have always been a little noisier and a little more complicated to fully understand; they are young stars in the active star-forming regions of galaxies, and that means there's potential for things like dust or contamination from other stars to throw off your measurements," she explained.

To her mind, the conflict can be resolved with better data.

The James Webb Space Telescope is expected to launch next year, and scientists will then begin to collect those new observations. Freedman and collaborators have already been awarded time on the telescope for a major program to make more measurements of both Cepheid and red giant stars. "The Webb will give us higher sensitivity and resolution, and the data will get better really, really soon," she said.

But in the meantime, she wanted to take a careful look at the existing data, and what she found was that much of it actually agrees.

"That's the way science proceeds," Freedman said. "You kick the tires to see if something deflates, and so far, no flat tires."

Some scientists who have been rooting for a fundamental mismatch might be disappointed. But for Freedman, either answer is exciting.

Read more at Science Daily

Researchers explore how children learn language

Small children learn language at a pace far faster than teenagers or adults. One explanation for this learning advantage comes not from differences between children and adults, but from the differences in the way that people talk to children and adults.

For the first time, a team of researchers developed a method to experimentally evaluate how parents use what they know about their children's language when they talk to them. They found that parents have extremely precise models of their children's language knowledge, and use these models to tune the language they use when speaking to them. The results are available in an advance online publication of the journal Psychological Science.

"We have known for years that parents talk to children differently than to other adults in a lot of ways, for example simplifying their speech, reduplicating words and stretching out vowel sounds," said Daniel Yurovsky, assistant professor in psychology at Carnegie Mellon University. "This stuff helps young kids get a toehold into language, but we didn't know whether parents change the way they talk as children are acquiring language, giving children language input that is 'just right' for learning the next thing."

Adults tend to speak to children more slowly and at a higher pitch. They also use more exaggerated enunciation, repetition and simplified language structure. Adults also pepper their communication with questions to gauge the child's comprehension. As the child's language fluency increases, the sentence structure and complexity used by adults increases.

Yurovsky likens this to the progression a student follows when learning math in school.

"When you go to school, you start with algebra and then take plane geometry before moving on to calculus," said Yurovsky. "People talk to kids using the same kind of structure without thinking about it. They are tracking how much their child knows about language and modifying how they speak so that their children understand them."

Yurovsky and his team sought to understand exactly how caregivers tune their interactions to match their child's speech development. The team developed a game where parents helped their children to pick a specific animal from a set of three, a game that toddlers (aged 15 to 23 months) and their parents play routinely in their daily lives. Half of the animals in the matching game were animals that children typically learn before age 2 (e.g. cat, cow), and the other half were animals that are typically learned later (e.g. peacock, leopard).

The researchers asked 41 child-adult pairs to play the game in a naturalistic setting in the laboratory. They measured the differences in how parents talked about animals they thought their children knew as compared to those they thought their children did not know.

"Parents have an incredibly precise knowledge of their child's language because they have witnessed them grow and learn," said Yurovsky. "These results show that parents leverage their knowledge of their children's language development to fine-tune the linguistic information they provide."

The researchers found that the caregiver used a variety of techniques to convey the 'unknown' animal to the child. The most common approach was to use additional descriptors familiar to the child.

"This [research] approach lets us confirm experimentally ideas that we have developed based on observations of how children and parents engage in the home," said Yurovsky. "We found that parents not only used what they already knew about their children's language knowledge before the study, but also that if they found out they were wrong -- their child didn't actually know 'leopard' for example -- they changed the way they talked about that animal the next time around."

The study consisted of 36 experimental trials where each animal appeared as a target at least twice in the game. The participants represented a racial composition similar to the United States (56% white, 27% Black and 8% Hispanic).

The results reflect a western parenting perspective as well as caregivers with a higher educational background than is representative in the country. The researchers did not independently measure the children's knowledge of each animal. The results of this study cannot differentiate whether the children learned any new animals while playing the game.

Yurovsky believes the results may have some relevance for researchers working in the field of machine learning.

"These results could help us understand how to think about machine learning language systems," he said. "Right now we train language models by giving them all of the language data we can get our hands on all at once. But we might do better if we could give them the right data at the right time, keeping it at just the right level of complexity that they are ready for."

Read more at Science Daily

Jul 2, 2021

Astronauts demonstrate CRISPR/Cas9 genome editing in space

Researchers have developed and successfully demonstrated a novel method for studying how cells repair damaged DNA in space. Sarah Stahl-Rommel of Genes in Space and colleagues present the new technique in the open-access journal PLOS ONE on June 30, 2021.

Damage to an organism's DNA can occur during normal biological processes or as a result of environmental causes, such as UV light. In humans and other animals, damaged DNA can lead to cancer. Fortunately, cells have several different natural strategies by which damaged DNA can be repaired. Astronauts traveling outside of Earth's protective atmosphere face increased risk of DNA damage due to the ionizing radiation that permeates space. Therefore, which specific DNA-repair strategies are employed by the body in space may be particularly important. Previous work suggests that microgravity conditions may influence this choice, raising concerns that repair might not be adequate. However, technological and safety obstacles have so far limited investigation into the issue.

Now, Stahl-Rommel and colleagues have developed a new method for studying DNA repair in yeast cells that can be conducted entirely in space. The technique uses CRISPR/Cas9 genome editing technology to create precise damage to DNA strands so that DNA repair mechanisms can then be observed in better detail than would be possible with non-specific damage via radiation or other causes. The method focuses on a particularly harmful type of DNA damage known as a double-strand break.

The researchers successfully demonstrated the viability of the novel method in yeast cells aboard the International Space Station. They hope the technique will now enable extensive research into DNA repair in space. This study marks the first time that CRISPR/Cas9 genome editing has successfully been conducted in space, as well as the first time in space that live cells have undergone successful transformation -- incorporation of genetic material originating from outside the organism.

Future research could refine the new method to better mimic the complex DNA damage caused by ionizing radiation. The technique could also serve as a foundation for investigations into numerous other molecular biology topics related to long-term space exposure and exploration.

"It's not just that the team successfully deployed novel technologies like CRISPR genome editing, PCR, and nanopore sequencing in an extreme environment, but also that we were able to integrate them into a functionally complete biotechnology workflow applicable to the study of DNA repair and other fundamental cellular processes in microgravity," said senior author Sebastian Kraves. "These developments fill this team with hope in humanity's renewed quest to explore and inhabit the vast expanse of space."

First author Sarah Stahl Rommel adds, "Being a part of Genes in Space-6 has been a highlight of my career. I saw firsthand just how much can be accomplished when the ideas of innovative students are supported by the best from academia, industry, and NASA. The expertise of the team resulted in the ability to perform high-quality, complex science beyond the bounds of Earth. I hope this impactful collaboration continues to show students and senior researchers alike what is possible onboard our laboratory in space."

Read more at Science Daily

How long can a person live? The 21st century may see a record-breaker

The number of people who live past the age of 100 has been on the rise for decades, up to nearly half a million people worldwide.

There are, however, far fewer "supercentenarians," people who live to age 110 or even longer. The longest-lived person on record, Jeanne Calment of France, was 122 when she died in 1997; currently, the world's oldest living person is 118-year-old Kane Tanaka of Japan.

Such extreme longevity, according to new research by the University of Washington, will likely continue to rise slowly through the end of this century, and estimates show that a lifespan of 125 years, or even 130 years, is possible.

"People are fascinated by the extremes of humanity, whether it's going to the moon, how fast someone can run in the Olympics, or even how long someone can live," said lead author Michael Pearce, a UW doctoral student in statistics. "With this work, we quantify how likely we believe it is that some individual will reach various extreme ages this century."

Longevity has ramifications for government and economic policies, as well as individuals' own health care and lifestyle decisions, rendering what's probable, or even possible, relevant at all levels of society.

The new study, published June 30 in Demographic Research, uses statistical modeling to examine the extremes of human life. With ongoing research into aging, the prospects of future medical and scientific discoveries and the relatively small number of people to have verifiably reached age 110 or older, experts have debated the possible limits to what is referred to as the maximum reported age at death. While some scientists argue that disease and basic cell deterioration lead to a natural limit on human lifespan, others maintain there is no cap, as evidenced by record-breaking supercentenarians.

Pearce and Adrian Raftery, a professor of sociology and of statistics at the UW, took a different approach. They asked what the longest individual human lifespan could be anywhere in the world by the year 2100. Using Bayesian methods, a common tool in modern statistics, the researchers estimated that the world record of 122 years almost certainly will be broken, with a strong likelihood of at least one person living to anywhere between 125 and 132 years.

To calculate the probability of living past 110 -- and to what age -- Raftery and Pearce turned to the most recent iteration of the International Database on Longevity, created by the Max Planck Institute for Demographic Research. That database tracks supercentenarians from 10 European countries, plus Canada, Japan and the United States.

Using a Bayesian approach to estimate probability, the UW team created projections for the maximum reported age at death in all 13 countries from 2020 through 2100.

Among their findings:

  • Researchers estimated a near-100% probability that the current record of maximum reported age at death -- Calment's 122 years, 164 days -- will be broken;
  • The probability of a person living even longer remains strong: 99% for age 124 and 68% for age 127;
  • An even longer lifespan is possible but much less likely, with a 13% probability of someone living to age 130;
  • It is "extremely unlikely" that someone would live to 135 in this century.

As it is, supercentenarians are outliers, and the likelihood of breaking the current age record increases only if the number of supercentenarians grows significantly. With a continually expanding global population, that's not impossible, researchers say.

People who achieve extreme longevity are still rare enough that they represent a select population, Raftery said. Even with population growth and advances in health care, there is a flattening of the mortality rate after a certain age. In other words, someone who lives to be 110 has about the same probability of living another year as, say, someone who lives to 114, which is about one-half.
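The flat mortality rate described above makes survival past 110 geometric: each additional year is a coin flip. A back-of-the-envelope sketch of that arithmetic (the 0.5 annual survival figure is the approximation quoted in the article, not the study's actual Bayesian model):

```python
# Under a flat hazard past 110, the chance a supercentenarian survives each
# additional year is roughly 0.5, so reaching a given age is geometric.

def p_reach(age: int, plateau_age: int = 110, p_year: float = 0.5) -> float:
    """Probability that a 110-year-old reaches `age` under a flat hazard."""
    return p_year ** (age - plateau_age)

print(p_reach(122))  # ~0.00024: only ~1 in 4,000 supercentenarians gets this far
print(p_reach(130))  # ~1e-6: why age 130 needs a far larger pool of 110-year-olds
```

This is why the record can only fall if the number of supercentenarians grows substantially: each individual's odds stay fixed, so the record-breaking probability scales with how many people reach the plateau at all.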

"It doesn't matter how old they are, once they reach 110, they still die at the same rate," Raftery said. "They've gotten past all the various things life throws at you, such as disease. They die for reasons that are somewhat independent of what affects younger people."

Read more at Science Daily

Brain circuit for spirituality?

More than 80 percent of people around the world consider themselves to be religious or spiritual. But research on the neuroscience of spirituality and religiosity has been sparse. Previous studies have used functional neuroimaging, in which an individual undergoes a brain scan while performing a task to see what areas of the brain light up. But these correlative studies have given a spotty and often inconsistent picture of spirituality.

A new study led by investigators at Brigham and Women's Hospital takes a new approach to mapping spirituality and religiosity and finds that spiritual acceptance can be localized to a specific brain circuit. This brain circuit is centered in the periaqueductal gray (PAG), a brainstem region that has been implicated in numerous functions, including fear conditioning, pain modulation, altruistic behaviors and unconditional love. The team's findings are published in Biological Psychiatry.

"Our results suggest that spirituality and religiosity are rooted in fundamental, neurobiological dynamics and deeply woven into our neuro-fabric," said corresponding author Michael Ferguson, PhD, a principal investigator in the Brigham's Center for Brain Circuit Therapeutics. "We were astonished to find that this brain circuit for spirituality is centered in one of the most evolutionarily preserved structures in the brain."

To conduct their study, Ferguson and colleagues used a technique called lesion network mapping that allows investigators to map complex human behaviors to specific brain circuits based on the locations of brain lesions in patients. The team leveraged a previously published dataset that included 88 neurosurgical patients who were undergoing surgery to remove a brain tumor. Lesion locations were distributed throughout the brain. Patients completed a survey that included questions about spiritual acceptance before and after surgery. The team validated their results using a second dataset made up of more than 100 patients with lesions caused by penetrating head trauma from combat during the Vietnam War. These participants also completed questionnaires that included questions about religiosity (such as, "Do you consider yourself a religious person? Yes or No?").

Of the 88 neurosurgical patients, 30 reported a decrease in spiritual belief after brain tumor resection, 29 reported an increase, and 29 reported no change. Using lesion network mapping, the team found that self-reported spirituality mapped to a specific brain circuit centered on the PAG. The circuit included positive nodes and negative nodes -- lesions that disrupted these respective nodes either decreased or increased self-reported spiritual beliefs. Results on religiosity from the second dataset aligned with these findings. In addition, in a review of the literature, the researchers found several case reports of patients who became hyper-religious after experiencing brain lesions that affected the negative nodes of the circuit.

Lesion locations associated with other neurological and psychiatric symptoms also intersected with the spirituality circuit. Specifically, lesions causing parkinsonism intersected positive areas of the circuit, as did lesions associated with decreased spirituality. Lesions causing delusions and alien limb syndrome intersected with negative regions, associated with increased spirituality and religiosity.

"It's important to note that these overlaps may be helpful for understanding shared features and associations, but these results should not be over-interpreted," said Ferguson. "For example, our results do not imply that religion is a delusion, that historical religious figures suffered from alien limb syndrome, or that Parkinson's disease arises due to a lack of religious faith. Instead, our results point to the deep roots of spiritual beliefs in a part of our brain that's been implicated in many important functions."

The authors note that the datasets they used do not provide rich information about the patients' upbringing, which can have an influence over spiritual beliefs, and that patients in both datasets were from predominantly Christian cultures. To understand the generalizability of their results, they would need to replicate their study across many backgrounds. The team is also interested in untangling religiosity and spirituality to understand brain circuits that may be driving differences. Additionally, Ferguson would like to pursue clinical and translational applications for the findings, including understanding the role that spirituality and compassion may have in clinical treatment.

"Only recently have medicine and spirituality been fractionated from one another. There seems to be this perennial union between healing and spirituality across cultures and civilizations," said Ferguson. "I'm interested in the degree to which our understanding of brain circuits could help craft scientifically grounded, clinically-translatable questions about how healing and spirituality can co-inform each other."

Read more at Science Daily

High physical activity levels may counter serious health harms of poor sleep

Those who had both the poorest sleep quality and who exercised the least were most at risk of death from heart disease, stroke, and cancer, the findings indicate, prompting the researchers to suggest a likely synergy between the two activities.

Both physical inactivity and poor sleep are independently associated with a heightened risk of death and/or cardiovascular disease and cancer. But it's not clear if they might exert a combined effect on health.

To explore this further, the researchers drew on information provided by 380,055 middle-aged (average age 55) men and women taking part in the UK Biobank study. The UK Biobank is tracking the long-term health of more than half a million 37-73 year olds, who were recruited from across the UK between 2006 and 2010.

Participants supplied information on their normal weekly physical activity levels, which were measured in Metabolic Equivalent of Task (MET) minutes. These are roughly equivalent to the amount of energy (calories) expended per minute of physical activity.

For example, 600 MET minutes a week is the equivalent of 150 minutes of moderate intensity activity, or more than 75 minutes of vigorous intensity physical activity a week.
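The example above follows from the conventional intensity values of roughly 4 METs for moderate and 8 METs for vigorous activity (an assumption here; the study may use slightly different figures). A minimal sketch of the conversion:

```python
# MET-minutes per week = minutes of activity * intensity in METs.
# Intensity values below are conventional approximations, assumed for
# illustration: ~4 METs moderate, ~8 METs vigorous.

MODERATE_METS = 4.0
VIGOROUS_METS = 8.0

def met_minutes(moderate_min: float = 0, vigorous_min: float = 0) -> float:
    """Weekly MET-minutes from minutes of moderate and vigorous activity."""
    return moderate_min * MODERATE_METS + vigorous_min * VIGOROUS_METS

print(met_minutes(moderate_min=150))  # -> 600.0, the article's first example
print(met_minutes(vigorous_min=75))   # -> 600.0, the vigorous equivalent
```

Either route reaches the 600 MET-minute threshold, which is why the guideline can be stated as 150 moderate or 75 vigorous minutes per week.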

Physical activity levels were categorised as: high (1200 or more MET minutes/week); medium (600 to less than 1200); or low (1 to less than 600); and no moderate to vigorous physical activity, according to World Health Organization guidelines.

Sleep quality was categorised using a 0-5 sleep score derived from chronotype ('night owl' or 'morning lark' preference), sleep duration, insomnia, snoring and daytime sleepiness: healthy (4+); intermediate (2-3); or poor (0-1).
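The scoring scheme above can be sketched as a sum of five binary components mapped onto the three categories. The component definitions here (e.g. what counts as a healthy duration or chronotype) are assumptions for illustration, not taken from the paper; only the 0-5 range and the category cut-offs come from the text:

```python
# Sleep score: one point per "healthy" component, five components total.
# Categories per the article: healthy (4+), intermediate (2-3), poor (0-1).

def sleep_score(morning_chronotype: bool, good_duration: bool,
                no_insomnia: bool, no_snoring: bool,
                no_daytime_sleepiness: bool) -> int:
    """Sum the five binary components into a 0-5 score."""
    return sum([morning_chronotype, good_duration, no_insomnia,
                no_snoring, no_daytime_sleepiness])

def sleep_category(score: int) -> str:
    """Map a 0-5 sleep score to the study's three categories."""
    if score >= 4:
        return "healthy"
    if score >= 2:
        return "intermediate"
    return "poor"

print(sleep_category(sleep_score(True, True, True, False, True)))    # healthy
print(sleep_category(sleep_score(False, True, False, False, False))) # poor
```

Crossing the three sleep categories with the four physical activity groups yields the dozen combinations the researchers analysed.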

A dozen physical activity and sleep pattern combinations were derived from the information supplied.

Participants' health was then tracked for an average of 11 years up to May 2020 or death, whichever came first, to assess their risk of dying from any cause as well as from all types of cardiovascular disease; coronary heart disease; stroke; all types of cancer; and lung cancer.

During the monitoring period, 15,503 died: 4095 were from any type of cardiovascular disease and 9064 were from all types of cancer.

Of these, 1932 people died from coronary heart disease, 359 from a brain bleed (haemorrhagic) stroke, 450 from a blood clot (ischaemic) stroke and 1595 from lung cancer.

Some 223,445 (59%) participants were in the high physical activity group; 57,771 (15%) in the medium group; 39,298 (10%) in the low group; and 59,541 (16%) in the no moderate to vigorous physical activity group.

More than half (56%) the participants had a healthy sleep pattern; 42% were classified as having intermediate quality sleep; and 3% were classified as poor sleepers. (Figures are rounded up.)

Those who were younger, female, thinner, better off financially, ate more fruit and vegetables, spent less of their day seated, had no mental health issues, never smoked, didn't work shifts, drank less alcohol and were more physically active tended to have healthier sleep scores.

The lower the sleep score, the higher were the risks of death from any cause, from all types of cardiovascular disease, and from ischaemic stroke.

Compared with those with the high physical activity + healthy sleep score combination, those at the other end of the scale, with the no moderate to vigorous physical activity + poor sleep combination, had the highest risks of death from any cause (57% higher).

They also had the highest risk of death from any type of cardiovascular disease (67% higher), from any type of cancer (45% higher), and from lung cancer (91% higher).

Lower levels of physical activity amplified the unfavourable associations between poor sleep and all health outcomes, with the exception of stroke.

This is an observational study, and as such, can't establish causality, acknowledge the researchers. The study also relied on self-reported data, and the key information on sleep patterns and physical activity was collected at one point in time only, and excluded potentially influential factors, such as job type and household size.

Nevertheless, the researchers conclude: "Physical activity levels at or above the WHO guideline (600 metabolic equivalent task mins/week) threshold eliminated most of the deleterious associations of poor sleep with mortality."

The findings lend weight to efforts to target both physical activity and sleep quality in a bid to improve health, they say.

Read more at Science Daily

Jul 1, 2021

Global climate dynamics drove the decline of mastodonts and elephants, new study suggests

Elephants and their forebears were pushed into wipeout by waves of extreme global environmental change, rather than overhunting by early humans, according to new research.

The study, published today in Nature Ecology & Evolution, challenges claims that early human hunters slaughtered prehistoric elephants, mammoths and mastodonts to extinction over millennia. Instead, its findings indicate the extinction of the last mammoths and mastodonts at the end of the last Ice Age marked the end of progressive climate-driven global decline among elephants over millions of years.

Although elephants today are restricted to just three endangered species in the African and Asian tropics, these are survivors of a once far more diverse and widespread group of giant herbivores, known as the proboscideans, which also include the now completely extinct mastodonts, stegodonts and deinotheres. Only 700,000 years ago, England was home to three types of elephants: two giant species of mammoths and the equally prodigious straight-tusked elephant.

An international group of palaeontologists from the universities of Alcalá, Bristol and Helsinki conducted the most detailed analysis to date of the rise and fall of elephants and their predecessors, examining how 185 different species adapted across 60 million years of evolution that began in North Africa. To probe into this rich evolutionary history, the team surveyed museum fossil collections across the globe, from London's Natural History Museum to Moscow's Paleontological Institute. By investigating traits such as body size, skull shape and the chewing surface of their teeth, the team discovered that all proboscideans fell within one of eight sets of adaptive strategies.

"Remarkably for 30 million years, the entire first half of proboscidean evolution, only two of the eight groups evolved," said Dr Zhang Hanwen, study coauthor and Honorary Research Associate at the University of Bristol's School of Earth Sciences.

"Most proboscideans over this time were nondescript herbivores ranging from the size of a pug to that of a boar. A few species got as big as a hippo, yet these lineages were evolutionary dead-ends. They all bore little resemblance to elephants."

The course of proboscidean evolution changed dramatically some 20 million years ago, as the Afro-Arabian plate collided into the Eurasian continent. Arabia provided a crucial migration corridor for the diversifying mastodont-grade species to explore new habitats in Eurasia, and then into North America via the Bering Land Bridge.

"The immediate impact of proboscidean dispersals beyond Africa was quantified for the very first time in our study," said lead author Dr Juan Cantalapiedra, Senior Research Fellow at the University of Alcalá in Spain.

"Those archaic North African species were slow-evolving with little diversification, yet we calculated that once out of Africa proboscideans evolved 25 times faster, giving rise to a myriad of disparate forms, whose specialisations permitted niche partition between several proboscidean species in the same habitats. One case in point being the massive, flattened lower tusks of the 'shovel-tuskers'. Such coexistence of giant herbivores was unlike anything in today's ecosystems."

Dr Zhang added: "The aim of the game in this boom period of proboscidean evolution was 'adapt or die'. Habitat perturbations were relentless, tied to the ever-changing global climate, continuously promoting new adaptive solutions, while proboscideans that didn't keep up were literally left for dead. The once greatly diverse and widespread mastodonts were eventually reduced to less than a handful of species in the Americas, including the familiar Ice Age American mastodon."

By 3 million years ago the elephants and stegodonts of Africa and eastern Asia seemingly emerged victorious in this unremitting evolutionary ratchet. However, environmental disruption connected to the coming Ice Ages hit them hard, with surviving species forced to adapt to the new, more austere habitats. The most extreme example was the woolly mammoth, with thick, shaggy hair and big tusks for retrieving vegetation covered under thick snow.

The team's analyses identified final proboscidean extinction peaks starting at around 2.4 million years ago, 160,000 and 75,000 years ago for Africa, Eurasia and the Americas, respectively.

"It is important to note that these ages do not demarcate the precise timing of extinctions, but rather indicate the points in time at which proboscideans on the respective continents became subject to higher extinction risk," said Dr Cantalapiedra.

Unexpectedly, the results do not correlate with the expansion of early humans and their enhanced capabilities to hunt down megaherbivores.

"We didn't foresee this result. It appears as if the broad global pattern of proboscidean extinctions in recent geological history could be reproduced without accounting for impacts of early human diasporas. Conservatively, our data refutes some recent claims regarding the role of archaic humans in wiping out prehistoric elephants, ever since big game hunting became a crucial part of our ancestors' subsistence strategy around 1.5 million years ago," said Dr Zhang.

Read more at Science Daily

Physicists observationally confirm Hawking's black hole theorem for the first time

There are certain rules that even the most extreme objects in the universe must obey. A central law for black holes predicts that the area of their event horizons -- the boundary beyond which nothing can ever escape -- should never shrink. This law is Hawking's area theorem, named after physicist Stephen Hawking, who derived the theorem in 1971.

Fifty years later, physicists at MIT and elsewhere have now confirmed Hawking's area theorem for the first time, using observations of gravitational waves. Their results appear in Physical Review Letters.

In the study, the researchers take a closer look at GW150914, the first gravitational wave signal detected by the Laser Interferometer Gravitational-wave Observatory (LIGO), in 2015. The signal was a product of two inspiraling black holes that generated a new black hole, along with a huge amount of energy that rippled across space-time as gravitational waves.

If Hawking's area theorem holds, then the horizon area of the new black hole should not be smaller than the total horizon area of its parent black holes. In the new study, the physicists reanalyzed the signal from GW150914 before and after the cosmic collision and found that indeed, the total event horizon area did not decrease after the merger -- a result that they report with 95 percent confidence.

Their findings mark the first direct observational confirmation of Hawking's area theorem, which has been proven mathematically but never observed in nature until now. The team plans to test future gravitational-wave signals to see if they might further confirm Hawking's theorem or be a sign of new, law-bending physics.

"It is possible that there's a zoo of different compact objects, and while some of them are the black holes that follow Einstein and Hawking's laws, others may be slightly different beasts," says lead author Maximiliano Isi, a NASA Einstein Postdoctoral Fellow in MIT's Kavli Institute for Astrophysics and Space Research. "So, it's not like you do this test once and it's over. You do this once, and it's the beginning."

Isi's co-authors on the paper are Will Farr of Stony Brook University and the Flatiron Institute's Center for Computational Astrophysics, Matthew Giesler of Cornell University, Mark Scheel of Caltech, and Saul Teukolsky of Cornell University and Caltech.

An age of insights

In 1971, Stephen Hawking proposed the area theorem, which set off a series of fundamental insights about black hole mechanics. The theorem predicts that the total area of a black hole's event horizon -- and all black holes in the universe, for that matter -- should never decrease. The statement was a curious parallel of the second law of thermodynamics, which states that the entropy, or degree of disorder within an object, should also never decrease.

The similarity between the two theories suggested that black holes could behave as thermal, heat-emitting objects -- a confounding proposition, as black holes by their very nature were thought to never let energy escape, or radiate. Hawking eventually squared the two ideas in 1974, showing that black holes could have entropy and emit radiation over very long timescales if their quantum effects were taken into account. This phenomenon was dubbed "Hawking radiation" and remains one of the most fundamental revelations about black holes.

"It all started with Hawking's realization that the total horizon area in black holes can never go down," Isi says. "The area law encapsulates a golden age in the '70s where all these insights were being produced."

Hawking and others have since shown that the area theorem works out mathematically, but there had been no way to check it against nature until LIGO's first detection of gravitational waves.

Hawking, on hearing of the result, quickly contacted LIGO co-founder Kip Thorne, the Feynman Professor of Theoretical Physics at Caltech. His question: Could the detection confirm the area theorem?

At the time, researchers did not have the ability to pick out the necessary information within the signal, before and after the merger, to determine whether the final horizon area did not decrease, as Hawking's theorem would require. It wasn't until several years later, with the development of a technique by Isi and his colleagues, that testing the area law became feasible.

Before and after

In 2019, Isi and his colleagues developed a technique to extract the reverberations immediately following GW150914's peak -- the moment when the two parent black holes collided to form a new black hole. The team used the technique to pick out specific frequencies, or tones of the otherwise noisy aftermath, that they could use to calculate the final black hole's mass and spin.

A black hole's mass and spin are directly related to the area of its event horizon, and Thorne, recalling Hawking's query, approached them with a follow-up: Could they use the same technique to compare the signal before and after the merger, and confirm the area theorem?

The researchers took on the challenge, and again split the GW150914 signal at its peak. They developed a model to analyze the signal before the peak, corresponding to the two inspiraling black holes, and to identify the mass and spin of both black holes before they merged. From these estimates, they calculated their total horizon area -- roughly 235,000 square kilometers, or about nine times the area of Massachusetts.

They then used their previous technique to extract the "ringdown," or reverberations of the newly formed black hole, from which they calculated its mass and spin, and ultimately its horizon area, which they found was equivalent to 367,000 square kilometers (approximately 13 times the Bay State's area).
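These before-and-after figures can be reproduced from the Kerr horizon-area formula, A = 8π(GM/c²)²(1 + √(1 − χ²)), where χ is the dimensionless spin. A rough sketch using the published GW150914 parameter estimates (progenitors of about 36 and 29 solar masses with small spins, a final black hole of about 62 solar masses with χ ≈ 0.67 -- values assumed here for illustration, not taken from this article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def horizon_area_km2(mass_msun, spin=0.0):
    """Kerr event-horizon area: A = 8*pi*(GM/c^2)^2 * (1 + sqrt(1 - chi^2))."""
    r_g = G * mass_msun * M_SUN / C**2                      # gravitational radius, m
    area_m2 = 8 * math.pi * r_g**2 * (1 + math.sqrt(1 - spin**2))
    return area_m2 / 1e6                                    # m^2 -> km^2

# Published GW150914 estimates (assumed here): ~36 and ~29 solar-mass
# progenitors with negligible spin, merging into a ~62 solar-mass
# remnant with dimensionless spin chi ~ 0.67.
area_before = horizon_area_km2(36) + horizon_area_km2(29)
area_after = horizon_area_km2(62, spin=0.67)

print(f"before: {area_before:,.0f} km^2")   # ~234,000 km^2, matching the quoted ~235,000
print(f"after:  {area_after:,.0f} km^2")    # ~367,000 km^2
assert area_after > area_before             # Hawking's area theorem holds
```

The actual analysis infers these masses and spins from the waveform itself, with full posterior distributions; the point of the sketch is only that the area comparison follows directly from mass and spin.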

"The data show with overwhelming confidence that the horizon area increased after the merger, and that the area law is satisfied with very high probability," Isi says. "It was a relief that our result does agree with the paradigm that we expect, and does confirm our understanding of these complicated black hole mergers."

The team plans to further test Hawking's area theorem, and other longstanding theories of black hole mechanics, using data from LIGO and Virgo, its counterpart in Italy.

"It's encouraging that we can think in new, creative ways about gravitational-wave data, and reach questions we thought we couldn't before," Isi says. "We can keep teasing out pieces of information that speak directly to the pillars of what we think we understand. One day, this data may reveal something we didn't expect."

Read more at Science Daily

5-minute breathing workout lowers blood pressure as much as exercise, drugs

Working out just five minutes daily via a practice described as "strength training for your breathing muscles" lowers blood pressure and improves some measures of vascular health as well as, or even more than, aerobic exercise or medication, new CU Boulder research shows.

The study, published June 29 in the Journal of the American Heart Association, provides the strongest evidence yet that the ultra-time-efficient maneuver known as High-Resistance Inspiratory Muscle Strength Training (IMST) could play a key role in helping aging adults fend off cardiovascular disease -- the nation's leading killer.

In the United States alone, 65% of adults over age 50 have above-normal blood pressure -- putting them at greater risk of heart attack or stroke. Yet fewer than 40% meet recommended aerobic exercise guidelines.

"There are a lot of lifestyle strategies that we know can help people maintain cardiovascular health as they age. But the reality is, they take a lot of time and effort and can be expensive and hard for some people to access," said lead author Daniel Craighead, an assistant research professor in the Department of Integrative Physiology. "IMST can be done in five minutes in your own home while you watch TV."

Developed in the 1980s as a way to help critically ill respiratory disease patients strengthen their diaphragm and other inspiratory (breathing) muscles, IMST involves inhaling vigorously through a hand-held device which provides resistance. Imagine sucking hard through a tube that sucks back.

Initially, when prescribing it for breathing disorders, doctors recommended a 30-minute-per-day regimen at low resistance. But in recent years, Craighead and colleagues have been testing whether a more time-efficient protocol -- 30 inhalations per day at high resistance, six days per week -- could also reap cardiovascular, cognitive and sports performance improvements.

For the new study, they recruited 36 otherwise healthy adults ages 50 to 79 with above normal systolic blood pressure (120 millimeters of mercury or higher). Half did High-Resistance IMST for six weeks and half did a placebo protocol in which the resistance was much lower.

After six weeks, the IMST group saw their systolic blood pressure (the top number) dip nine points on average, a reduction which generally exceeds that achieved by walking 30 minutes a day five days a week. That decline is also equal to the effects of some blood pressure-lowering drug regimens.

Even six weeks after they quit doing IMST, the IMST group maintained most of that improvement.

"We found that not only is it more time-efficient than traditional exercise programs, the benefits may be longer lasting," Craighead said.

The treatment group also saw a 45% improvement in vascular endothelial function, or the ability for arteries to expand upon stimulation, and a significant increase in levels of nitric oxide, a molecule key for dilating arteries and preventing plaque buildup. Nitric oxide levels naturally decline with age.

Markers of inflammation and oxidative stress, which can also boost heart attack risk, were significantly lower after people did IMST.

And, remarkably, those in the IMST group completed 95% of the sessions.

"We have identified a novel form of therapy that lowers blood pressure without giving people pharmacological compounds and with much higher adherence than aerobic exercise," said senior author Doug Seals, a Distinguished Professor of Integrative Physiology. "That's noteworthy."

The practice may be particularly helpful for postmenopausal women.

In previous research, Seals' lab showed that postmenopausal women who are not taking supplemental estrogen don't reap as much benefit from aerobic exercise programs as men do when it comes to vascular endothelial function. IMST, the new study showed, improved it just as much in these women as in men.

"If aerobic exercise won't improve this key measure of cardiovascular health for postmenopausal women, they need another lifestyle intervention that will," said Craighead. "This could be it."

Preliminary results suggest IMST also improved some measures of brain function and physical fitness. And previous studies from other researchers have shown it can be useful for improving sports performance.

"If you're running a marathon, your respiratory muscles get tired and begin to steal blood from your skeletal muscles," said Craighead, who uses IMST in his own marathon training. "The idea is that if you build up endurance of those respiratory muscles, that won't happen and your legs won't get as fatigued."

Seals said they're uncertain exactly how a maneuver to strengthen breathing muscles ends up lowering blood pressure, but they suspect it prompts the cells lining blood vessels to produce more nitric oxide, enabling them to relax.

The National Institutes of Health recently awarded Seals $4 million to launch a larger follow-up study of about 100 people, comparing a 12-week IMST protocol head-to-head with an aerobic exercise program.

Meanwhile, the research group is developing a smartphone app to enable people to do the protocol at home using already commercially available devices.

Read more at Science Daily

Consuming a diet with more fish fats, less vegetable oils can reduce migraine headaches, study finds

A diet higher in fatty fish helped frequent migraine sufferers reduce their monthly number of headaches and intensity of pain compared to participants on a diet higher in vegetable-based fats and oils, according to a new study. The findings by a team of researchers from the National Institute on Aging (NIA) and the National Institute on Alcohol Abuse and Alcoholism (NIAAA), parts of the National Institutes of Health; and the University of North Carolina (UNC) at Chapel Hill, were published in the July 3 issue of The BMJ.

This study of 182 adults with frequent migraines expanded on the team's previous work on the impact of linoleic acid and chronic pain. Linoleic acid is a polyunsaturated fatty acid commonly derived in the American diet from corn, soybean, and other similar oils, as well as some nuts and seeds. The team's previous smaller studies explored if linoleic acid inflamed migraine-related pain processing tissues and pathways in the trigeminal nerve, the largest and most complex of the body's 12 cranial nerves. They found that a diet lower in linoleic acid and higher in levels of omega-3 fatty acids (like those found in fish and shellfish) could soothe this pain pathway inflammation.

In a 16-week dietary intervention, participants were randomly assigned to one of three healthy diet plans. Participants all received meal kits that included fish, vegetables, hummus, salads, and breakfast items. One group received meals that had high levels of fatty fish or oils from fatty fish and lowered linoleic acid. A second group received meals that had high levels of fatty fish and higher linoleic acid. The third group received meals with high linoleic acid and lower levels of fatty fish to mimic average U.S. intakes.

During the intervention period, participants monitored their number of migraine days, duration, and intensity, along with how their headaches affected their abilities to function at work, school, and in their social lives, and how often they needed to take pain medications. When the study began, participants averaged more than 16 headache days per month, over five hours of migraine pain per headache day, and had baseline scores showing a severe impact on quality of life despite using multiple headache medications.

The diet lower in vegetable oil and higher in fatty fish produced between 30% and 40% reductions in total headache hours per day, severe headache hours per day, and overall headache days per month compared to the control group. Blood samples from this group of participants also had lower levels of pain-related lipids. Despite the reduction in headache frequency and pain, these same participants reported only minor improvements in migraine-related overall quality of life compared to other groups in the study.

Migraine, a neurological disease, ranks among the most common causes of chronic pain, lost work time, and lowered quality of life. More than 4 million people worldwide have chronic migraine (at least 15 migraine days per month) and over 90% of sufferers are unable to work or function normally during an attack, which can last anywhere from four hours to three days. Women between the ages of 18 and 44 are especially prone to migraines, and an estimated 18% of all American women are affected. Current medications for migraine usually offer only partial relief and can have negative side effects including sedation, and the possibility of dependence or addiction.

"This research found intriguing evidence that dietary changes have potential for improving a very debilitating chronic pain condition like migraine without the related downsides of often prescribed medications," said Luigi Ferrucci, M.D., Ph.D., scientific director of NIA.

The NIH team was led by Chris Ramsden, a clinical investigator in the NIA and NIAAA intramural research programs, and UNC adjunct faculty member. Ramsden and his team specialize in the study of lipids -- fatty acid compounds found in many natural oils -- and their role in aging, especially chronic pain and neurodegenerative conditions. The UNC team was led by Doug Mann, M.D., of the Department of Neurology, and Kim Faurot, Ph.D., of the Program on Integrative Medicine. Meal plans were designed by Beth MacIntosh, M.P.H., of UNC Healthcare's Department of Nutrition and Food Services.

"Changes in diet could offer some relief for the millions of Americans who suffer from migraine pain," said Ramsden. "It's further evidence that the foods we eat can influence pain pathways."

The researchers noted that these findings validate that diet-based interventions which increase omega-3 fats while reducing linoleic acid sources show more promise for helping people with migraines reduce the number and impact of headache days, and the need for pain medications, than fish-oil-based supplements. They hope to expand this work to study the effects of diet on other chronic pain conditions.

Read more at Science Daily

Study with healthcare workers supports that immunity to SARS-CoV-2 is long-lasting

One year after infection by SARS-CoV-2, most people maintain anti-Spike antibodies regardless of the severity of their symptoms, according to a study with healthcare workers co-led by the Barcelona Institute for Global Health (ISGlobal), the Catalan Health Institute (ICS) and the Jordi Gol Institute (IDIAP JG), with the collaboration of the Daniel Bravo Andreu Private Foundation. The results suggest that vaccine-generated immunity will also be long-lasting.

One of the key questions to better predict the pandemic's evolution is the duration of natural immunity. A growing number of studies suggest that most people generate a humoral (antibody) and cellular (T cells) response that is maintained during several months, maybe years.

During the first wave of the pandemic, the team at ICS/IDIAP JG in collaboration with Carlota Dobaño's team at ISGlobal started a follow-up study of a cohort of healthcare workers with COVID-19 -- a total of 173 people working in healthcare centers of central Catalonia. Most infections were mild to moderate, although some cases required hospitalization.

The research team took regular blood samples from September 2020 onwards to measure the level and type of SARS-CoV-2-specific antibodies in these patients. This work was possible thanks to the support of the Daniel Bravo Foundation, which equipped ISGlobal with the latest technology and necessary resources to perform the study and rapidly reach conclusions during the subsequent waves.

"The results obtained until now lead us to believe that immunity to SARS-CoV-2 will last longer than we originally thought. Being a new virus, it is very important to understand how it behaves and affects different people," says Anna Ruiz Comellas, researcher at the Catalan Institute of Health and co-author of the study.

No significant decay in antibody levels was observed over the first five months, and at 9 months, 92.4% of participants remained seropositive -- 90% of them had IgG, 76% had IgA and 61% had IgM recognising the Spike protein or the receptor binding domain (RBD). The results were similar among healthcare workers who had not been vaccinated in April (95% had IgG, 83% IgA and 25% IgM).

"These data confirm that IgG have a longer duration, but IgM levels, which are supposed to last less, were unexpectedly quite sustained over time," says Gemma Moncunill, ISGlobal researcher and senior co-author of the study, together with Ruíz-Comellas. Hospitalization, fever, and loss of smell and taste were associated with higher antibody levels at five or nine months.

Four reinfections were observed among the participants. Two of them were symptomatic and occurred in seronegative individuals. Another asymptomatic reinfection occurred in a subject with very low antibody levels. These results indicate that anti-Spike antibodies protect against symptomatic infections. "They also indicate that people who have not been previously infected should be prioritised for vaccination, since those who have already been infected may be protected for at least one year," says Anna Ramírez-Morros, first co-author of the study.

Read more at Science Daily

Jun 30, 2021

Hunting dark energy with gravity resonance spectroscopy

Dark Energy is widely believed to be the driving force behind the universe's accelerating expansion, and several theories have now been proposed to explain its elusive nature. However, these theories predict that its influence on quantum scales must be vanishingly small, and experiments so far have not been accurate enough to either verify or discredit them. In new research published in EPJ ST, a team led by Hartmut Abele at TU Wien in Austria demonstrates a robust experimental technique for studying one such theory, using ultra-cold neutrons. Named 'Gravity Resonance Spectroscopy' (GRS), their approach could bring researchers a step closer to understanding one of the greatest mysteries in cosmology.

Previously, phenomena named 'scalar symmetron fields' have been proposed as a potential candidate for Dark Energy. If they exist, these fields would be far weaker than gravity -- currently the weakest fundamental force known to physics. Therefore, by searching for extremely subtle anomalies in the behaviours of quantum particles trapped in gravitational fields, researchers could prove the existence of these fields experimentally. Within a gravitational field, ultra-cold neutrons can assume several discrete quantum states, which vary depending on the strength of the field. Through GRS, these neutrons are made to transition to higher-energy quantum states by the finely tuned mechanical oscillations of a near-perfect mirror. Any shifts from the expected values for the energy differences between these states could then indicate the influence of Dark Energy.
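The energy scales involved follow from the textbook "quantum bouncer" problem: a neutron bouncing above a mirror in Earth's gravity has quantized energies E_n = (mg²ℏ²/2)^(1/3)·|a_n|, where the a_n are zeros of the Airy function. A sketch of these levels (standard physics, not taken from the paper; constants and Airy zeros hardcoded):

```python
import math

HBAR = 1.0546e-34    # reduced Planck constant, J*s
H = 6.626e-34        # Planck constant, J*s
M_N = 1.6749e-27     # neutron mass, kg
G_ACC = 9.81         # gravitational acceleration, m/s^2
EV = 1.602e-19       # joules per electronvolt

# Absolute values of the first three zeros of the Airy function Ai
AIRY_ZEROS = [2.33811, 4.08795, 5.52056]

# "Quantum bouncer" energy levels: E_n = (m g^2 hbar^2 / 2)^(1/3) * |a_n|
e_scale = (M_N * G_ACC**2 * HBAR**2 / 2) ** (1 / 3)
levels_pev = [e_scale * a / EV * 1e12 for a in AIRY_ZEROS]   # pico-eV

print([round(e, 2) for e in levels_pev])   # ~[1.41, 2.46, 3.32] peV

# A |1> -> |2> transition driven by the oscillating mirror lies near:
freq_hz = (levels_pev[1] - levels_pev[0]) * 1e-12 * EV / H
print(f"{freq_hz:.0f} Hz")                 # ~255 Hz, i.e. mechanical frequencies
```

Energies of about a pico-electronvolt and transition frequencies of a few hundred hertz are what make mechanically driven resonance spectroscopy feasible, and any symmetron contribution would show up as a shift in those frequencies.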

In their study, Abele's team designed and demonstrated a GRS experiment named 'qBOUNCE,' which they based around a technique named Ramsey spectroscopy. This involved causing neutrons in an ultra-cold beam to transition to higher-energy quantum states -- before scattering away any unwanted states, and picking up the remaining neutrons in a detector. Through precise measurements of the energy differences between particular states, the researchers could place far more stringent bounds on the parameters of scalar symmetron fields. Their technique now paves the way for even more precise searches for Dark Energy in future research.

From Science Daily

Astronomers have identified a white dwarf so massive that it might collapse

Maunakea and Haleakala, Hawai'i -- Astronomers have discovered the smallest and most massive white dwarf ever seen. The smoldering cinder, which formed when two less massive white dwarfs merged, is heavy, "packing a mass greater than that of our Sun into a body about the size of our Moon," says Ilaria Caiazzo, the Sherman Fairchild Postdoctoral Scholar Research Associate in Theoretical Astrophysics at Caltech and lead author of the new study appearing in the July 1 issue of the journal Nature. "It may seem counterintuitive, but smaller white dwarfs happen to be more massive. This is due to the fact that white dwarfs lack the nuclear burning that keeps normal stars up against their own self-gravity, and their size is instead regulated by quantum mechanics."

The discovery was made by the Zwicky Transient Facility, or ZTF, which operates at Caltech's Palomar Observatory; two Hawai'i telescopes -- W. M. Keck Observatory on Maunakea, Hawai'i Island and University of Hawai'i Institute for Astronomy's Pan-STARRS (Panoramic Survey Telescope and Rapid Response System) on Haleakala, Maui -- helped characterize the dead star, along with the 200-inch Hale Telescope at Palomar, the European Gaia space observatory, and NASA's Neil Gehrels Swift Observatory.

White dwarfs are the collapsed remnants of stars that were once about eight times the mass of our Sun or lighter. Our Sun, for example, after it first puffs up into a red giant in about 5 billion years, will ultimately slough off its outer layers and shrink down into a compact white dwarf. About 97 percent of all stars become white dwarfs.

While our Sun is alone in space without a stellar partner, many stars orbit around each other in pairs. The stars grow old together, and if they are both less than eight solar-masses, they will both evolve into white dwarfs.

The new discovery provides an example of what can happen after this phase. The pair of white dwarfs, which spiral around each other, lose energy in the form of gravitational waves and ultimately merge. If the dead stars are massive enough, they explode in what is called a type Ia supernova. But if they are below a certain mass threshold, they combine together into a new white dwarf that is heavier than either progenitor star. This process of merging boosts the magnetic field of that star and speeds up its rotation compared to that of the progenitors.

Astronomers say that the newfound tiny white dwarf, named ZTF J1901+1458, took the latter route of evolution; its progenitors merged and produced a white dwarf 1.35 times the mass of our Sun. The white dwarf has an extreme magnetic field almost 1 billion times stronger than our Sun's and whips around on its axis at a frenzied pace of one revolution every seven minutes (the zippiest white dwarf known, called EPIC 228939929, rotates every 5.3 minutes).

"We caught this very interesting object that wasn't quite massive enough to explode," says Caiazzo. "We are truly probing how massive a white dwarf can be."

What's more, Caiazzo and her collaborators think that the merged white dwarf may be massive enough to evolve into a neutron-rich dead star, or neutron star, which typically forms when a star much more massive than our Sun explodes in a supernova.

"This is highly speculative, but it's possible that the white dwarf is massive enough to further collapse into a neutron star," says Caiazzo. "It is so massive and dense that, in its core, electrons are being captured by protons in nuclei to form neutrons. Because the pressure from electrons pushes against the force of gravity, keeping the star intact, the core collapses when a large enough number of electrons are removed."

If this neutron star formation hypothesis is correct, it may mean that a significant portion of other neutron stars take shape in this way. The newfound object's close proximity (about 130 light-years away) and its young age (about 100 million years old or less) indicate that similar objects may occur more commonly in our galaxy.

MAGNETIC AND FAST
The white dwarf was first spotted by Caiazzo's colleague Kevin Burdge, a postdoctoral scholar at Caltech, after searching through all-sky images captured by ZTF. This particular white dwarf, when analyzed in combination with data from Gaia, stood out for being very massive and having a rapid rotation.

"No one has systematically been able to explore short-timescale astronomical phenomena on this kind of scale until now. The results of these efforts are stunning," says Burdge, who, in 2019, led the team that discovered a pair of white dwarfs zipping around each other every seven minutes.

The team then analyzed the spectrum of the star using Keck Observatory's Low Resolution Imaging Spectrometer (LRIS), and that is when Caiazzo was struck by the signatures of a very powerful magnetic field and realized that she and her team had found something "very special," as she says. The strength of the magnetic field together with the seven-minute rotational speed of the object indicated that it was the result of two smaller white dwarfs coalescing into one.

Data from Swift, which observes ultraviolet light, helped nail down the size and mass of the white dwarf. With a diameter of 2,670 miles, ZTF J1901+1458 secures the title for the smallest known white dwarf, edging out previous record holders, RE J0317-853 and WD 1832+089, which each have diameters of about 3,100 miles.
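The counterintuitive mass-size pairing can be made concrete with a back-of-the-envelope mean-density estimate from the quoted figures (1.35 solar masses inside a sphere 2,670 miles across); the calculation below is illustrative, not from the study:

```python
import math

M_SUN = 1.989e30          # solar mass, kg
MILE = 1609.34            # meters per mile

mass = 1.35 * M_SUN                       # quoted mass of ZTF J1901+1458
radius = 2670 / 2 * MILE                  # quoted diameter is 2,670 miles
volume = 4 / 3 * math.pi * radius**3      # m^3

density = mass / volume                   # mean density, kg/m^3
print(f"{density:.1e} kg/m^3")            # ~6e10 kg/m^3

# For comparison, a teaspoon (~5 mL) of this material would weigh roughly:
print(f"{density * 5e-6 / 1000:.0f} tonnes")   # hundreds of tonnes
```

That is some ten orders of magnitude denser than ordinary matter, which is why electron-degeneracy pressure, rather than nuclear burning, is what holds the star up.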

Read more at Science Daily

Just enough information will motivate young children to learn, drive curiosity

Preschool children are sensitive to the gap between how much they know and how much there is to learn, according to a Rutgers University-New Brunswick study.

The research, published in the journal Psychological Science, found preschool children are more likely to choose to gather more information about something if they know just enough about it to find it interesting, but not so much that it becomes boring.

Researchers say this "optimal" amount of existing knowledge creates the perfect mix of uncertainty and curiosity in children and motivates them to learn more.

"There is an infinite amount of information in the real world," said lead author Jenny Wang, an assistant professor of cognitive psychology at Rutgers. "Yet despite having to learn so much in such a short amount of time, young children seem to learn happily and effectively. We wanted to understand what drives their curiosity."

The study focused on how children's knowledge level influences what information they find interesting. The findings suggest that children are not simply attracted to information by its novelty.

According to Wang, children are naturally curious but the difficult question is how to harness this natural curiosity.

"Ultimately, findings like this will help parents and educators better support children when they actively explore and learn about the world," Wang said.

In a series of experiments, Wang and her coauthors designed in-person and online storybooks to measure how much 3- to 5-year-old preschool children know about different "knowledge domains." The experiment also assessed their comprehension of a specific topic, such as contagion, and asked how children's current knowledge level predicted their interest in learning more about it -- for example, whether someone will get sick after playing with a sneezing friend.

"Intuitively, curiosity seems to belong to those who know the most, like scientists, and those who know the least, like babies," said Wang, who directs the Rutgers Cognition and Learning Center (CALC). "But what we found here is quite surprising: it was children in the middle who showed the most interest in learning more about contagion, compared to children who knew too little or too much."

Read more at Science Daily

Investigational malaria vaccine gives strong, lasting protection

Two U.S. Phase 1 clinical trials of a novel candidate malaria vaccine have found that the regimen conferred unprecedentedly high levels of durable protection when volunteers were later exposed to disease-causing malaria parasites. The vaccine combines live parasites with either of two widely used antimalarial drugs -- an approach termed chemoprophylaxis vaccination. A Phase 2 clinical trial of the vaccine is now underway in Mali, a malaria-endemic country. If the approach proves successful there, chemoprophylaxis vaccination, or CVac, potentially could help reverse the stalled decline of global malaria. Currently, there is no vaccine in widespread use for the mosquito-transmitted disease.

The trials were conducted at the National Institutes of Health (NIH) Clinical Center in Bethesda, Maryland. They were led by Patrick E. Duffy, M.D., of the NIH National Institute of Allergy and Infectious Diseases (NIAID), and Stephen L. Hoffman, M.D., CEO of Sanaria Inc., Rockville, Maryland.

The Sanaria vaccine, called PfSPZ, is composed of sporozoites, the form of the malaria parasite transmitted to people by mosquito bites. Sporozoites travel through blood to the liver to initiate infection. In the CVac trials, healthy adult volunteers received PfSPZ along with either pyrimethamine, a drug that kills liver-stage parasites, or chloroquine, which kills blood-stage parasites. Three months later, under carefully controlled conditions, the volunteers were exposed to either an African malaria parasite strain that was the same as that in the vaccine (homologous challenge) or a variant South American parasite (heterologous challenge) that was more genetically distant from the vaccine strain than hundreds of African parasites. Exposure in both cases was via inoculation into venous blood, which infects all unvaccinated individuals.

At the lowest PfSPZ dosage, the CVac approach conferred modest protection: only two of nine volunteers (22.2%) who received the pyrimethamine combination were protected from homologous challenge. In contrast, seven out of eight volunteers (87.5%) who received the highest PfSPZ dosage combined with pyrimethamine were protected from homologous challenge, and seven out of nine volunteers (77.8%) were protected from heterologous challenge. In the case of the chloroquine combination, all six volunteers (100%) who received the higher PfSPZ dosage were completely protected from heterologous challenge. The high levels of cross-strain protection lasted at least three months (the time elapsed between vaccination and challenge) for both higher-dose regimens. One hundred percent protection for three months against heterologous variant parasites is unprecedented for any malaria vaccine in development, the authors note. These data suggest that CVac could be a promising approach for vaccination of travelers to and people living in malaria-endemic areas.
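The protection percentages quoted above are simple fractions of protected volunteers per trial arm. As a quick illustrative check (the arm labels and variable names here are my own shorthand, not the trial's nomenclature):

```python
# Recompute the protection rates quoted in the article from the raw
# protected/total counts (labels are shorthand, not trial terminology).
trial_arms = {
    "low-dose PfSPZ + pyrimethamine, homologous": (2, 9),
    "high-dose PfSPZ + pyrimethamine, homologous": (7, 8),
    "high-dose PfSPZ + pyrimethamine, heterologous": (7, 9),
    "higher-dose PfSPZ + chloroquine, heterologous": (6, 6),
}
for arm, (protected, total) in trial_arms.items():
    print(f"{arm}: {100 * protected / total:.1f}% protected")
```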

From Science Daily

Frequent COVID-19 testing key to efficient, early detection, study finds

The chance of detecting the virus that causes COVID-19 increases with more frequent testing, no matter the type of test, a new study found. Both polymerase chain reaction and antigen tests, paired with rapid results reporting, can achieve 98% sensitivity if deployed at least every three days.

"This study shows that frequent testing can be really effective at catching COVID-19 infections and potentially blocking transmission," said study leader Christopher Brooke, a virologist and professor of microbiology at the University of Illinois Urbana-Champaign. "There are many places where vaccination is not yet widespread. With the rise of variants, testing remains an important tool for blocking the spread of the virus."

Part of the Rapid Acceleration of Diagnostics Tech program of the National Institutes of Health, the study brought together researchers at Illinois; the University of Massachusetts Medical School, Worcester; Johns Hopkins School of Medicine, Baltimore; and the NIH National Institute of Biomedical Imaging and Bioengineering. The researchers published their results in the Journal of Infectious Diseases.

Students and employees at the U. of I. who had tested positive for COVID-19 or were identified as close contacts of a person who tested positive were invited to participate. Because of the SHIELD Illinois screening program, which required students and employees to take multiple saliva-based tests each week and returned results in less than 24 hours, the university provided an ideal location for identifying cases before they became symptomatic, the researchers said.

The 43 study participants received three tests daily for 14 days: a PCR nasal swab, a PCR saliva test and an antigen nasal swab. The results of each were compared with live viral cultures taken from the PCR nasal swab, which show when a person is actively infectious. The study also examined how the frequency of testing affected each method's efficacy at detecting an infection.

"Different tests have different advantages and limitations. Antigen tests are fast and cheap, but they are not as sensitive as PCR tests. PCR are the gold standard, but they take some time to return results and are more expensive," said Rebecca Lee Smith, a professor of epidemiology at Illinois and the first author of the study. "This study was to show, based on real data, which test is best under which circumstances and for what purpose."

The results showed that the PCR tests -- particularly saliva-based ones -- were best at detecting cases before the person had an infectious viral load, a key to isolating individuals before they can spread the virus, Smith said. For all three methods, testing every three days achieved 98% sensitivity for detecting infection.

If that testing frequency declined to once a week, the PCR methods maintained their high sensitivity but the antigen tests dropped to around 80%. That means organizations that wish to deploy antigen testing as part of a reopening strategy or individuals who wish to monitor their status at home should use antigen tests multiple times each week to achieve similar results to PCR testing, the researchers said.
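The intuition behind the frequency effect can be sketched with a toy probability model (this is my simplification, not the study's statistical method): if an infection is detectable only during a window of some days, testing more often means more tests land inside that window, so the chance of at least one positive rises.

```python
# Toy model of testing frequency vs. detection probability.
# Assumes a fixed detectable window, evenly spaced independent tests,
# and a constant per-test sensitivity -- all illustrative assumptions.
def detection_chance(window_days: float, interval_days: float,
                     per_test_sensitivity: float) -> float:
    """Probability of at least one positive test during the window."""
    # Number of tests expected to fall inside the detectable window
    n_tests = max(int(window_days // interval_days), 0)
    return 1 - (1 - per_test_sensitivity) ** n_tests

# e.g. a 7-day detectable window and a 90%-sensitive test:
for interval in (1, 3, 7):
    print(f"every {interval} day(s):",
          round(detection_chance(7, interval, 0.9), 3))
```

Under these assumptions, daily or every-three-day testing is near-certain to catch the infection, while weekly testing falls back to the single-test sensitivity, mirroring the drop the study observed for antigen tests.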

"This work also shows how the PCR and antigen tests could be used in combination," Smith said. "For example, I work with a lot of school districts, helping them to plan for fall, since vaccines are not yet available to those under 12 years old. If a student had a known exposure or comes to school symptomatic, give them both tests. Antigen tests are really good at finding those highly infectious people, so that can tell administrators right away if the child needs to be sent home, rather than waiting 24 hours for PCR results. If the antigen test is negative, the PCR test is a backup, as it may detect the infection earlier than an antigen test would, before the student becomes contagious."

The results of the study helped inform the U.S. Food and Drug Administration's recommendations and instructions on how to use at-home antigen tests that recently received emergency use authorization. The researchers said they hope the results assist schools, businesses and other organizations as they reopen.

Read more at Science Daily

Jun 29, 2021

Astrophysicists detect first black hole-neutron star mergers

A long time ago, in two galaxies about 900 million light-years away, two black holes each gobbled up their neutron star companions, triggering gravitational waves that finally hit Earth in January 2020.

Discovered by an international team of astrophysicists including Northwestern University researchers, two events -- detected just 10 days apart -- mark the first-ever detection of a black hole merging with a neutron star. The findings will enable researchers to draw the first conclusions about the origins of these rare binary systems and how often they merge.

"Gravitational waves have allowed us to detect collisions of pairs of black holes and pairs of neutron stars, but the mixed collision of a black hole with a neutron star has been the elusive missing piece of the family picture of compact object mergers," said Chase Kimball, a Northwestern graduate student who co-authored the study. "Completing this picture is crucial to constraining the host of astrophysical models of compact object formation and binary evolution. Inherent to these models are their predictions of the rates that black holes and neutron stars merge amongst themselves. With these detections, we finally have measurements of the merger rates across all three categories of compact binary mergers."

The research will be published June 29 in the Astrophysical Journal Letters. The team includes researchers from the LIGO Scientific Collaboration (LSC), the Virgo Collaboration and the Kamioka Gravitational Wave Detector (KAGRA) project. An LSC member, Kimball led calculations of the merger rate estimates and how they fit into predictions from the various formation channels of neutron stars and black holes. He also contributed to discussions about the astrophysical implications of the discovery.

Kimball is co-advised by Vicky Kalogera, the principal investigator of Northwestern's LSC group, director of the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and the Daniel I. Linzer Distinguished Professor of Physics and Astronomy in the Weinberg Colleges of Arts and Sciences; and by Christopher Berry, an LSC member and the CIERA Board of Visitors Research Professor at Northwestern as well as a lecturer at the Institute for Gravitational Research at the University of Glasgow. Other Northwestern co-authors include Maya Fishbach, a NASA Einstein Postdoctoral Fellow and LSC member.

Two events in ten days

The team observed the two new gravitational-wave events -- dubbed GW200105 and GW200115 -- on Jan. 5, 2020, and Jan. 15, 2020, during the second half of the LIGO and Virgo detectors' third observing run, called O3b. Although multiple observatories carried out several follow-up observations, none observed light from either event, consistent with the measured masses and distances.

"Following the tantalizing discovery, announced in June 2020, of a black-hole merger with a mystery object, which may be the most massive neutron star known, it is exciting also to have the detection of clearly identified mixed mergers, as predicted by our theoretical models for decades now," Kalogera said. "Quantitatively matching the rate constraints and properties for all three population types will be a powerful way to answer the foundational questions of origins."

All three large detectors (both LIGO instruments and the Virgo instrument) detected GW200115, which resulted from the merger of a 6-solar mass black hole with a 1.5-solar mass neutron star, roughly 1 billion light-years from Earth. With observations of the three widely separated detectors on Earth, the direction to the waves' origin can be determined to a part of the sky equivalent to the area covered by 2,900 full moons.

Just 10 days earlier, LIGO detected a strong signal from GW200105, using just one detector while the other was temporarily offline. While Virgo also was observing, the signal was too quiet in its data for Virgo to help detect it. From the gravitational waves, the astronomers inferred that the signal was caused by a 9-solar mass black hole colliding with a 1.9-solar mass compact object, which they ultimately concluded was a neutron star. This merger happened at a distance of about 900 million light-years from Earth.

Because the signal was strong in only one detector, the astronomers could not precisely determine the direction of the waves' origin. Although the signal was too quiet for Virgo to confirm its detection, its data did help narrow down the source's potential location to about 17% of the entire sky, which is equivalent to the area covered by 34,000 full moons.
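The "full moons" comparison can be sanity-checked with rough numbers: the full Moon subtends about half a degree, so its disk covers roughly 0.2 square degrees, while the whole sky spans about 41,253 square degrees. A quick sketch (the 0.5-degree figure is an approximation, so the answer only needs to land in the right ballpark of the quoted 34,000):

```python
import math

# Whole sky in square degrees: 4*pi steradians converted to deg^2
full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 deg^2

# Solid angle of the full Moon, treating it as a 0.5-degree disk
moon_deg2 = math.pi * (0.5 / 2) ** 2                 # ~0.196 deg^2

localization = 0.17 * full_sky_deg2                  # 17% of the sky
moons = localization / moon_deg2
print(round(moons))  # on the order of a few tens of thousands
```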

Where do they come from?

Because the two events are the first confident observations of gravitational waves from black holes merging with neutron stars, the researchers now can estimate how often such events happen in the universe. Although not all events are detectable, the researchers estimate that roughly one such merger per month occurs within a distance of one billion light-years.

While it is unclear where these binary systems form, astronomers identified three likely cosmic origins: stellar binary systems, dense stellar environments including young star clusters, and the centers of galaxies.

Read more at Science Daily

Are we missing other Earths?

Some exoplanet searches could be missing nearly half of the Earth-sized planets around other stars. New findings from a team using the international Gemini Observatory and the WIYN 3.5-meter Telescope at Kitt Peak National Observatory suggest that Earth-sized worlds could be lurking undiscovered in binary star systems, hidden in the glare of their parent stars. As roughly half of all stars are in binary systems, this means that astronomers could be missing many Earth-sized worlds.

Earth-sized planets may be much more common than previously realized. Astronomers working at NASA Ames Research Center have used the twin telescopes of the international Gemini Observatory, a Program of NSF's NOIRLab, to determine that many planet-hosting stars identified by NASA's TESS exoplanet-hunting mission are actually pairs of stars -- known as binary stars -- where the planets orbit one of the stars in the pair. After examining these binary stars, the team has concluded that Earth-sized planets in many two-star systems might be going unnoticed by transit searches like TESS's, which look for changes in the light from a star when a planet passes in front of it. The light from the second star makes it more difficult to detect the changes in the host star's light when the planet transits.

The team started out by trying to determine whether some of the exoplanet host stars identified with TESS were actually unknown binary stars. Physical pairs of stars that are close together can be mistaken for single stars unless they are observed at extremely high resolution. So the team turned to both Gemini telescopes to inspect a sample of exoplanet host stars in painstaking detail. Using a technique called speckle imaging, the astronomers set out to see whether they could spot undiscovered stellar companions.

Using the `Alopeke and Zorro instruments on the Gemini North and South telescopes in Hawai'i and Chile, respectively, the team observed hundreds of nearby stars that TESS had identified as potential exoplanet hosts. They discovered that 73 of these stars are really binary star systems that had appeared as single points of light until observed at higher resolution with Gemini. "With the Gemini Observatory's 8.1-meter telescopes, we obtained extremely high-resolution images of exoplanet host stars and detected stellar companions at very small separations," said Katie Lester of NASA's Ames Research Center, who led this work.

Lester's team also studied an additional 18 binary stars previously found among the TESS exoplanet hosts using the NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI) on the WIYN 3.5-meter Telescope at Kitt Peak National Observatory, also a Program of NSF's NOIRLab.

After identifying the binary stars, the team compared the sizes of the detected planets in the binary star systems to those in single-star systems. They realized that the TESS spacecraft found both large and small exoplanets orbiting single stars, but only large planets in binary systems.

These results imply that a population of Earth-sized planets could be lurking in binary systems and going undetected using the transit method employed by TESS and many other planet-hunting telescopes. Some scientists had suspected that transit searches might be missing small planets in binary systems, but the new study provides observational support to back it up and shows which sizes of exoplanets are affected.

"We have shown that it is more difficult to find Earth-sized planets in binary systems because small planets get lost in the glare of their two parent stars," Lester stated. "Their transits are 'filled in' by the light from the companion star," added Steve Howell of NASA's Ames Research Center, who leads the speckle imaging effort and was involved in this research.

"Since roughly 50% of stars are in binary systems, we could be missing the discovery of -- and the chance to study -- a lot of Earth-like planets," Lester concluded.

The possibility of these missing worlds means that astronomers will need to use a variety of observational techniques before concluding that a given binary star system has no Earth-like planets. "Astronomers need to know whether a star is single or binary before they claim that no small planets exist in that system," explained Lester. "If it's single, then you could say that no small planets exist. But if the host is in a binary, you wouldn't know whether a small planet is hidden by the companion star or does not exist at all. You would need more observations with a different technique to figure that out."

As part of their study, Lester and her colleagues also analyzed how far apart the stars are in the binary systems where TESS had detected large planets. The team found that the stars in the exoplanet-hosting pairs were typically farther apart than binary stars not known to have planets. This could suggest that planets do not form around stars that have close stellar companions.

"This speckle imaging survey illustrates the critical need for NSF telescope facilities to characterize newly discovered planetary systems and develop our understanding of planetary populations," said National Science Foundation Division of Astronomical Sciences Program Officer Martin Still.

Read more at Science Daily

Satellite unexpectedly detects a unique exoplanet

The exoplanet-hunting satellite CHEOPS of the European Space Agency (ESA), in which the Instituto de Astrofísica de Canarias (IAC) is participating along with other European institutions, has unexpectedly detected a third planet passing in front of its star while it was exploring two previously known planets around the same star. This transit, according to researchers, will reveal exciting details about a strange planet "without a known equivalent."

The discovery is one of the first results of CHEOPS (CHaracterising ExOPlanet Satellite) and the first time that an exoplanet has been seen with a period longer than 100 days transiting a star which is sufficiently bright to be seen with the naked eye. The discovery was published today in the journal Nature Astronomy.

This bright star similar to the sun, called Nu2 Lupi, is a little more than 50 light years from Earth, in the constellation of Lupus. In 2019, HARPS (High Accuracy Radial velocity Planet Searcher) of the European Southern Observatory (ESO) in Chile discovered three exoplanets in this system (called b, c, and d) with masses between those of the Earth and Neptune, and with orbital periods of 11.6, 27.6 and 107.6 days respectively. Afterwards NASA's TESS satellite, designed to detect transiting planets, found that the two interior planets, b and c, transit Nu2 Lupi, making it one of only three naked-eye stars known to have more than one transiting planet.

"Transiting systems such as Nu2 Lupi are of great importance in our understanding of how planets form and evolve, because we can compare several planets around the same bright star in detail," explains Laetitia Delrez, a researcher at the University of Liege (Belgium) and first author of the article.

"Our idea was to follow up previous studies of Nu2 Lupi and to observe planets b and c passing in front of Nu2 Lupi with CHEOPS, but during a transit of planet c we were amazed to see an unexpected transit of planet d, which is further out within the system," she adds.

Transits of planets give a valuable opportunity to study their atmospheres, their orbits, their sizes and their compositions. A transiting planet blocks out a tiny but detectable proportion of the light of its star when it passes in front of it, and it was this tiny drop in the light which led the researchers to their discovery. Because exoplanets with long periods orbit far away from their stars, the probability of catching a planet in transit is very small indeed, which makes the finding with CHEOPS a real surprise.

Using CHEOPS's high-precision measurements, planet d was found to have about 2.5 times the radius of the Earth, and its orbital period of a little over 107 days was confirmed. In addition, using archival observations from ground-based telescopes, its mass could be estimated at 8.8 times that of the Earth.

"The amount of radiation from the star which falls onto planet d is quite small compared to many other known exoplanets. If it were in our own solar system Nu2 Lupi d would orbit between Mercury and Venus," says Mahmoudreza Oshagh, a senior postdoctoral researcher at the IAC, and a co-author of the paper. "Combined with its bright parent star, its long orbital period and its ideal situation for follow-up, this means that planet d is very exciting: it is an exceptional object, with no known equivalent, and it will certainly be a fundamental object for future studies."

The majority of long period transiting exoplanets discovered until now are orbiting stars which are too faint to allow detailed follow-up observations, which means that we know little about their properties. Nu2 Lupi is, however, sufficiently bright to be an attractive object for other powerful space telescopes such as the NASA/ESA Hubble Space Telescope, the future James Webb Space Telescope, as well as major observatories on the ground. "Given its general properties and its orbit, planet d will be an exceptionally favourable objective to study an exoplanet with a moderate atmospheric temperature around a star similar to the Sun," adds Laetitia Delrez.

Combining the new data from CHEOPS with archival data from other observatories, the researchers found that planet b is mainly rocky, while planets c and d appear to have large quantities of water surrounded by hydrogen and helium gas. In fact, planets c and d contain much more water than the Earth: a quarter of the mass of each of them is water, compared with less than 0.1% on Earth. This water is not liquid, however; it is high-pressure ice or high-temperature water vapour.

"Although none of these planets would be habitable, their diversity makes the system very exciting and a great future perspective to show how these bodies formed and how they have changed with time," explains Enric Pallé, an IAC researcher and a co-author of the article. "We can also look for rings or moons within the Nu2 Lupi system, because the extreme accuracy and stability of CHEOPS could allow us to detect bodies close to the size of Mars."

CHEOPS is designed to gather high-precision data on individual stars known to harbour planets, rather than to make a more general survey of possible exoplanets around many stars. This approach and accuracy are proving exceptionally useful for understanding the planetary systems around nearby stars.

Read more at Science Daily

COVID-19: Reduced sense of taste and smell lingers

Patients with mild Covid-19 infections experience a significantly increased prevalence of a long-lasting reduced sense of taste and smell. The same is true of long-term shortness of breath, although relatively few people are affected. Women and the elderly are particularly affected. This is shown by new research findings from Aarhus University, Aarhus University Hospital and Regional Hospital West Jutland.

The last 14 months have taught us that there are different symptoms and outcomes of Covid-19. However, the vast majority of people who fall ill with Covid-19 experience mild symptoms and get over the disease in two to three weeks.

These are precisely some of the people who have been the subject of a new study from AUH, HEV and AU. In the study, researchers have compared symptoms on a daily basis for up to 90 days in 210 healthcare workers who had tested positive and 630 with a negative test.

Each day, the participants received a link to a questionnaire on whether they had experienced one of the following symptoms within the last 24 hours: coughing, sore throat, headaches, fever, muscle pain, shortness of breath and reduced sense of taste and smell.

"We saw that the prevalence of a longer lasting reduced taste and smell is significantly increased in patients with mild Covid-19 disease who did not require hospitalisation. This pattern is also seen for shortness of breath, but far fewer people were affected," says Henrik Kolstad, who is behind the study.

Women and the elderly experience more symptoms

Thirty per cent of those who had tested positive and almost none of the participants with a negative test reported a reduced sense of taste and smell over the full ninety days. At the beginning of the project, shortness of breath was reported by twenty per cent of those who had tested positive, with the figure falling to five per cent after thirty days, though without ever reaching the level of the participants who had tested negative.

Coughing, sore throat, headaches, muscle pain and fever were more common among those who tested positive than those who tested negative in the first few days, but after thirty days no increases were seen.

Women with a positive test reported more symptoms compared to women with a negative test than was the case for men with a positive test when compared to men with a negative test. The same was true for older participants compared to younger ones. According to the researcher, this could indicate that women and the elderly are more susceptible to developing long-term COVID-19 symptoms.

"This study provides detailed knowledge of which symptom pathways you can expect after having tested positive for COVID-19 without requiring hospitalisation," says Henrik Kolstad.

Read more at Science Daily

Success in reversing dementia in mice sets the stage for human clinical trials

Researchers have identified a new treatment candidate that appears to not only halt neurodegenerative symptoms in mouse models of dementia and Alzheimer's disease, but also reverse the effects of the disorders.

The team, based at Tohoku University, published their results on June 8 in the International Journal of Molecular Sciences. The treatment candidate has been declared safe by Japan's governing board, and the researchers plan to begin clinical trials in humans in the next year.

"There are currently no disease-modifying therapeutics for neurodegenerative disorders such as Alzheimer's disease, Lewy body dementia, Huntington disease and frontotemporal dementia in the world," said paper author Kohji Fukunaga, professor emeritus in Tohoku University's Graduate School of Pharmaceutical Sciences. "We discovered the novel, disease-modifying therapeutic candidate SAK3, which, in our studies, rescued neurons in most protein-misfolding, neurodegenerative diseases."

In a previous study, the team found that the SAK3 molecule, which enhances T-type Ca2+ channel activity, appeared to help improve memory and learning in a mouse model of Alzheimer's disease.

According to previous studies, SAK3 enhances the function of a cell membrane channel, thereby promoting neuronal activity in the brain. Specifically, SAK3 promotes the release of the neurotransmitters acetylcholine and dopamine, which are significantly reduced in Alzheimer's disease and Lewy body dementia. The Ca2+ channel enhancement is thought to trigger a change from resting to active neuronal activity. When the Ca2+ channel is dysregulated in the brain, the release of acetylcholine and dopamine is reduced. The result is a dysregulated system that a person experiences as cognitive confusion and uncoordinated motor function.

SAK3 directly binds to the subunit of this channel, resulting in the enhancement of neurotransmission thereby improving cognitive deficits. The researchers found that the same process also appeared to work in a mouse model of Lewy body dementia, which is characterized by a build-up of proteins known as Lewy bodies.

"Even after the onset of cognitive impairment, SAK3 administration significantly prevented the progression of neurodegenerative behaviors in both motor dysfunction and cognition," Fukunaga said.

In comparison, Aduhelm, the Alzheimer's drug recently approved by the U.S. Food and Drug Administration, reduces the number of amyloid plaques in the brain, but it is not yet known if the amyloid reduction actually prevents further cognitive or motor decline in patients. According to Fukunaga, SAK3 helps destroy amyloid plaque - at least in mice.

SAK3 also helps manage the destruction of misfolded alpha-synuclein. Normal alpha-synuclein helps regulate neurotransmitter transmission in the brain. The protein can misfold and aggregate, contributing to what researchers suspect may be an underlying cause of neurodegenerative symptoms. This aggregation can also lead to the loss of dopamine neurons, which help with learning and memory.

"We found that chronic administration of SAK3 significantly inhibited the accumulation of alpha-synuclein in the mice," Fukunaga said, noting that the mice received a daily oral dose of SAK3.

According to Fukunaga, SAK3 enhances the activity of the system that identifies and destroys misfolded proteins. In neurodegenerative diseases, this system is often dysfunctional, leaving misfolded proteins to muck up the cell's machinery.

Read more at Science Daily

Jun 28, 2021

The Goldilocks Supernova

A worldwide team led by UC Santa Barbara scientists at Las Cumbres Observatory has discovered the first convincing evidence for a new type of stellar explosion -- an electron-capture supernova. While they have been theorized for 40 years, real-world examples have been elusive. They are thought to arise from the explosions of massive super-asymptotic giant branch (SAGB) stars, for which there has also been scant evidence. The discovery, published in Nature Astronomy, also sheds new light on the thousand-year mystery of the supernova from A.D. 1054 that was visible all over the world in the daytime, before eventually becoming the Crab Nebula.

Historically, supernovae have fallen into two main types: thermonuclear and iron-core collapse. A thermonuclear supernova is the explosion of a white dwarf star after it gains matter in a binary star system. These white dwarfs are the dense cores of ash that remain after a low-mass star (one up to about 8 times the mass of the sun) reaches the end of its life. An iron core-collapse supernova occurs when a massive star -- one more than about 10 times the mass of the sun -- runs out of nuclear fuel and its iron core collapses, creating a black hole or neutron star. Between these two main types of supernovae are electron-capture supernovae. These stars stop fusion when their cores are made of oxygen, neon and magnesium; they aren't massive enough to create iron.

While gravity is always trying to crush a star, what keeps most stars from collapsing is either ongoing fusion or, in cores where fusion has stopped, the fact that you can't pack the atoms any tighter. In an electron capture supernova, some of the electrons in the oxygen-neon-magnesium core get smashed into their atomic nuclei in a process called electron capture. This removal of electrons causes the core of the star to buckle under its own weight and collapse, resulting in an electron-capture supernova.

If the star had been slightly heavier, the core elements could have fused to create heavier elements, prolonging its life. So it is a kind of reverse Goldilocks situation: The star isn't light enough to escape its core collapsing, nor is it heavy enough to prolong its life and die later via different means.

That's the theory that was formulated beginning in 1980 by Ken'ichi Nomoto of the University of Tokyo and others. Over the decades, theorists have formulated predictions of what to look for in an electron-capture supernova and their SAGB star progenitors. The stars should have a lot of mass, lose much of it before exploding, and this mass near the dying star should be of an unusual chemical composition. Then the electron-capture supernova should be weak, have little radioactive fallout, and have neutron-rich elements in the core.

The new study is led by Daichi Hiramatsu, a graduate student at UC Santa Barbara and Las Cumbres Observatory (LCO). Hiramatsu is a core member of the Global Supernova Project, a worldwide team of scientists using dozens of telescopes around and above the globe. The team found that the supernova SN 2018zd had many unusual characteristics, some of which were seen for the first time in a supernova.

It helped that the supernova was relatively nearby -- only 31 million light-years away -- in the galaxy NGC 2146. This allowed the team to examine archival images taken by the Hubble Space Telescope prior to the explosion and to detect the likely progenitor star before it exploded. The observations were consistent with another recently identified SAGB star in the Milky Way, but inconsistent with models of red supergiants, the progenitors of normal iron core-collapse supernovae.

The authors looked through all published data on supernovae, and found that while some had a few of the indicators predicted for electron-capture supernovae, only SN 2018zd had all six: an apparent SAGB progenitor, strong pre-supernova mass loss, an unusual stellar chemical composition, a weak explosion, little radioactivity and a neutron-rich core.

"We started by asking 'what's this weirdo?'" Hiramatsu said. "Then we examined every aspect of SN 2018zd and realized that all of them can be explained in the electron-capture scenario."

The new discoveries also illuminate some mysteries of one of history's most famous supernovae. In A.D. 1054, a supernova occurred in the Milky Way Galaxy that, according to Chinese and Japanese records, was so bright it could be seen in the daytime for 23 days, and at night for nearly two years. The resulting remnant, the Crab Nebula, has been studied in great detail.

The Crab Nebula was previously the best candidate for an electron-capture supernova, but its status was uncertain, partly because the explosion happened nearly a thousand years ago. The new result increases the confidence that the historic SN 1054 was an electron-capture supernova. It also explains why that supernova was brighter than models predicted: its luminosity was probably enhanced by the supernova ejecta colliding with material cast off by the progenitor star, as was seen in SN 2018zd.

Ken'ichi Nomoto at the Kavli IPMU of the University of Tokyo expressed excitement that his theory had been confirmed. "I am very pleased that the electron-capture supernova was finally discovered, which my colleagues and I predicted to exist and have a connection to the Crab Nebula 40 years ago," he said. "I very much appreciate the great efforts involved in obtaining these observations. This is a wonderful case of the combination of observations and theory."

Hiramatsu added, "It was such a 'Eureka moment' for all of us that we can contribute to closing the 40-year-old theoretical loop, and for me personally because my career in astronomy started when I looked at the stunning pictures of the Universe in the high school library, one of which was the iconic Crab Nebula taken by the Hubble Space Telescope."

Read more at Science Daily