Mar 13, 2021

Scientists sketch aged star system using over a century of observations

 Astronomers have painted their best picture yet of an RV Tauri variable, a rare type of stellar binary where two stars -- one approaching the end of its life -- orbit within a sprawling disk of dust. Their 130-year dataset spans the widest range of light yet collected for one of these systems, from radio to X-rays.

"There are only about 300 known RV Tauri variables in the Milky Way galaxy," said Laura Vega, a recent doctoral recipient at Vanderbilt University in Nashville, Tennessee. "We focused our study on the second brightest, named U Monocerotis, which is now the first of these systems from which X-rays have been detected."

A paper describing the findings, led by Vega, was published in The Astrophysical Journal.

The system, called U Mon for short, lies around 3,600 light-years away in the constellation Monoceros. Its two stars circle each other about every six and a half years on an orbit tipped about 75 degrees from our perspective.

The primary star, an elderly yellow supergiant, has around twice the Sun's mass but has billowed to 100 times the Sun's size. A tug of war between pressure and temperature in its atmosphere causes it to regularly expand and contract, and these pulsations create predictable brightness changes with alternating deep and shallow dips in light -- a hallmark of RV Tauri systems. Scientists know less about the companion star, but they think it's of similar mass and much younger than the primary.
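
The alternating-minima signature described above is easy to visualize. Here is a toy light curve (illustrative only, not the team's model): a fundamental oscillation plus its first overtone produces one deep and one shallow dip per formal period. The ~92-day period and the amplitudes are assumptions for illustration.

```python
import numpy as np

# Toy RV Tauri light curve (illustrative only): a fundamental oscillation
# plus its first overtone yields one deep and one shallow minimum per
# formal period. Period and amplitudes are assumptions, not U Mon fits.
P = 92.0                                   # formal period in days (assumed)
t = np.linspace(0.0, 2 * P, 801)
phase = 2 * np.pi * t / P
mag = 6.0 + 0.5 * np.cos(phase) + 0.3 * np.cos(2 * phase)  # larger = fainter
# The two kinds of minima (faintness maxima) alternate every half period:
print(f"deep minimum: mag {mag.max():.2f} near t=0; "
      f"shallow minimum: mag {mag[len(t)//4]:.2f} near t=P/2")
```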

The cool disk around both stars is composed of gas and dust ejected by the primary star as it evolved. Using radio observations from the Submillimeter Array on Maunakea, Hawai'i, Vega's team estimated that the disk is around 51 billion miles (82 billion kilometers) across. The binary orbits inside a central gap that the scientists think is comparable to the distance between the two stars at their maximum separation, when they're about 540 million miles (870 million kilometers) apart.
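
For scale, those distances convert to astronomical units as follows (a simple unit check of the figures above, using the standard conversion factor):

```python
# Quick unit check of the disk and orbit figures above, in astronomical units.
KM_PER_AU = 1.496e8                 # kilometres per astronomical unit
disk_au = 82e9 / KM_PER_AU          # disk diameter: ~550 AU
sep_au = 870e6 / KM_PER_AU          # maximum stellar separation: ~5.8 AU
print(f"disk ≈ {disk_au:.0f} AU across; max separation ≈ {sep_au:.1f} AU")
```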

When the stars are farthest from each other, they're roughly aligned with our line of sight. The disk partially obscures the primary and creates another predictable fluctuation in the system's light. Vega and her colleagues think this is when one or both stars interact with the disk's inner edge, siphoning off streams of gas and dust. They suggest that the companion star funnels the gas into its own disk, which heats up and generates an X-ray-emitting outflow of gas. This model could explain X-rays detected in 2016 by the European Space Agency's XMM-Newton satellite.

"The XMM observations make U Mon the first RV Tauri variable detected in X-rays," said Kim Weaver, the XMM U.S. project scientist and an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "It's exciting to see ground- and space-based multiwavelength measurements come together to give us new insights into a long-studied system."

In their analysis of U Mon, Vega's team also incorporated 130 years of visible light observations.

The earliest available measurement of the system, collected on Dec. 25, 1888, came from the archives of the American Association of Variable Star Observers (AAVSO), an international network of amateur and professional astronomers headquartered in Cambridge, Massachusetts. AAVSO provided additional historical measurements ranging from the mid-1940s to the present.

The researchers also used archived images cataloged by the Digital Access to a Sky Century @ Harvard (DASCH), a program at the Harvard College Observatory in Cambridge dedicated to digitizing astronomical images from glass photographic plates made by ground-based telescopes between the 1880s and 1990s.

U Mon's light varies both because the primary star pulsates and because the disk partially obscures it every 6.5 years or so. The combined AAVSO and DASCH data allowed Vega and her colleagues to spot an even longer cycle, where the system's brightness rises and falls about every 60 years. They think a warp or clump in the disk, located about as far from the binary as Neptune is from the Sun, causes this extra variation as it orbits.
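
A rough Kepler's-third-law check makes that picture plausible. Assuming a clump at about Neptune's distance (~30 AU) and a combined binary mass of a few solar masses (both values are assumptions for illustration), the orbital period lands on the same order as the observed 60-year cycle:

```python
# Rough sanity check with Kepler's third law: P^2 = a^3 / M in units of
# years, AU and solar masses. Both inputs are assumptions for illustration.
a_au = 30.0      # orbital radius: roughly Neptune's distance from the Sun
m_total = 4.0    # assumed combined mass of the binary, in solar masses
period_yr = (a_au**3 / m_total) ** 0.5
print(f"orbital period ≈ {period_yr:.0f} years")  # ≈ 82 yr, same order as ~60
```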

Vega completed her analysis of the U Mon system as a NASA Harriett G. Jenkins Predoctoral Fellow, a program funded by the NASA Office of STEM Engagement's Minority University Research and Education Project.

"For her doctoral dissertation, Laura used this historical dataset to detect a characteristic that would otherwise appear only once in an astronomer's career," said co-author Rodolfo Montez Jr., an astrophysicist at the Center for Astrophysics | Harvard & Smithsonian, also in Cambridge. "It's a testament to how our knowledge of the universe builds over time."

Co-author Keivan Stassun, an expert in star formation and Vega's doctoral advisor at Vanderbilt, notes that this evolved system has many features and behaviors in common with newly formed binaries. Both are embedded in disks of gas and dust, pull material from those disks, and produce outflows of gas. And in both cases, the disks can form warps or clumps. In young binaries, those might signal the beginnings of planet formation.

Read more at Science Daily

An unusual creature is coming out of winter's slumber: Here's why scientists are excited

 If you binged on high-calorie snacks and then spent the winter crashed on the couch in a months-long food coma, you'd likely wake up worse for wear. Unless you happen to be a fat-tailed dwarf lemur.

This squirrel-sized primate lives in the forests of Madagascar, where it spends up to seven months each year mostly motionless and chilling, using the minimum energy necessary to withstand the winter. While zonked, it lives off of fat stored in its tail.

Animals that hibernate in the wild rarely do so in zoos and sanctuaries, with their climate controls and year-round access to food. But now our closest hibernating relative has gone into true, deep hibernation in captivity for the first time at the Duke Lemur Center.

"They did not disappoint," said research scientist Marina Blanco, who led the project. "Indeed, our dwarf lemurs hibernated just like their wild kin do in western Madagascar."

The researchers say recreating some of the seasonal fluctuations of the lemurs' native habitat might be good for the well-being of a species hardwired for hibernation, and also may yield insights into metabolic disorders in humans.

"Hibernation is literally in their DNA," Blanco said.

Blanco has studied dwarf lemurs for 15 years in Madagascar, fitting them with tracking collars to locate them when they are hibernating in their tree holes or underground burrows. But what she and others observed in the wild didn't square with how the animals behaved when cared for in captivity.

Captive dwarf lemurs are fed extra during the summer so they can bulk up like they do in the wild, and then they'll hunker down and let their heart rate and temperature drop for short bouts -- a physiological condition known as torpor. But they rarely stay in this suspended state for longer than 24 hours. That got Blanco wondering: after years in captivity, do dwarf lemurs still have what it takes to survive seasonal swings like their wild counterparts do? And what can these animals teach us about how to safely put the human body on pause too, slowing the body's processes long enough for, say, life-saving surgery or even space travel?

To find out, Duke Lemur Center staff teamed up to build fake tree hollows out of wooden boxes and placed them in the dwarf lemurs' indoor enclosures, as a haven for them to wait out the winter. To mimic the seasonal changes the lemurs experience over the course of the year in Madagascar, the team also gradually adjusted the lights from 12 hours a day to a more "winter-like" 9.5 hours, and lowered the thermostat from 77 degrees Fahrenheit to the low 50s.

The animals were offered food if they were awake and active, and weighed every two weeks, but otherwise they were left to lie.

It worked. In the March 11 issue of the journal Scientific Reports, the researchers show for the first time that fat-tailed dwarf lemurs can hibernate quite well in captivity.

For four months, the eight lemurs in the study spent some 70% of their time in metabolic slow-motion: curled up, cool to the touch, barely moving or breathing for up to 11 days at a stretch, showing little interest in food -- akin to their wild counterparts.

Now that spring is afoot in North Carolina and the temperatures are warming, the lemurs are waking up. Their first physical exams after they emerged showed them to be 22% to 35% lighter than they were at the start but otherwise healthy. Their heart rates are back up from just eight beats per minute to about 200, and their appetites have returned.

"We've been able to replicate their wild conditions well enough to get them to replicate their natural patterns," said Erin Ehmke, who directs research at the center.

Females were the hibernation champs, out-stuporing the males and maintaining more of their winter weight. They need what's left of their fat stores for the months of pregnancy and lactation that typically follow after they wake up, Blanco said.

Study co-author Lydia Greene says the next step is to use non-invasive research techniques such as metabolite analysis and sensors in their enclosures to better understand what dwarf lemurs do to prepare their bodies and eventually bounce back from months of standby mode -- work that could lead to new treatments for heart attacks, strokes, and other life-threatening conditions in humans.

Blanco suspects the impressive energy-saving capabilities of these lemurs may also relate to another trait they possess: longevity. The oldest dwarf lemur on record, Jonas, died at the Duke Lemur Center at the age of 29. The fact that dwarf lemurs live longer than non-hibernating species their size suggests that something intrinsic to their biological machinery may protect against aging.

Read more at Science Daily

Mar 12, 2021

Experts recreate a mechanical Cosmos for the world's first computer

 Researchers at UCL have solved a major piece of the puzzle that makes up the ancient Greek astronomical calculator known as the Antikythera Mechanism, a hand-powered mechanical device that was used to predict astronomical events.

Known to many as the world's first analogue computer, the Antikythera Mechanism is the most complex piece of engineering to have survived from the ancient world. The 2,000-year-old device was used to predict the positions of the Sun, Moon and the planets as well as lunar and solar eclipses.

Published in Scientific Reports, the paper from the multidisciplinary UCL Antikythera Research Team reveals a new display of the ancient Greek order of the Universe (Cosmos), within a complex gearing system at the front of the Mechanism.

Lead author Professor Tony Freeth (UCL Mechanical Engineering) explained: "Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the Mechanism itself.

"The Sun, Moon and planets are displayed in an impressive tour de force of ancient Greek brilliance."

The Antikythera Mechanism has generated both fascination and intense controversy since its discovery in a Roman-era shipwreck in 1901 by Greek sponge divers near the small Mediterranean island of Antikythera.

The astronomical calculator is a bronze device consisting of a complex combination of 30 surviving gears used to predict astronomical events, including eclipses, the phases of the Moon, the positions of the planets and even the dates of the Olympic Games.

Whilst great progress has been made over the last century to understand how it worked, studies in 2005 using 3D X-rays and surface imaging enabled researchers to show how the Mechanism predicted eclipses and calculated the variable motion of the Moon.

However, until now, a full understanding of the gearing system at the front of the device has eluded the best efforts of researchers. Only about a third of the Mechanism has survived, and is split into 82 fragments -- creating a daunting challenge for the UCL team.

The biggest surviving fragment, known as Fragment A, displays features of bearings, pillars and a block. Another, known as Fragment D, features an unexplained disk, 63-tooth gear and plate.

Previous research had used X-ray data from 2005 to reveal thousands of text characters hidden inside the fragments, unread for nearly 2,000 years. Inscriptions on the back cover include a description of the cosmos display, with the planets moving on rings and indicated by marker beads. It was this display that the team worked to reconstruct.

Two critical numbers in the X-rays of the front cover, 462 years and 442 years, accurately represent cycles of Venus and Saturn respectively. When observed from Earth, the planets periodically appear to reverse their motion against the background stars. Experts must track these variable cycles over long time periods in order to predict the planets' positions.

"The classic astronomy of the first millennium BC originated in Babylon, but nothing in this astronomy suggested how the ancient Greeks found the highly accurate 462-year cycle for Venus and 442-year cycle for Saturn," explained PhD candidate and UCL Antikythera Research Team member Aris Dacanalis.

Using an ancient Greek mathematical method described by the philosopher Parmenides, the UCL team not only explained how the cycles for Venus and Saturn were derived but also managed to recover the cycles of all the other planets, where the evidence was missing.

PhD candidate and team member David Higgon explained: "After considerable struggle, we managed to match the evidence in Fragments A and D to a mechanism for Venus, which exactly models its 462-year planetary period relation, with the 63-tooth gear playing a crucial role."
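
A short arithmetic check shows why those inscribed numbers work as period relations. Using modern mean synodic periods (the 583.92- and 378.09-day values below are assumptions for illustration), both cycle lengths come out to almost exactly a whole number of synodic cycles:

```python
# Plausibility check of the inscribed cycle lengths, using modern mean
# synodic periods (583.92 and 378.09 days; values assumed for illustration).
YEAR = 365.25  # days
for planet, synodic_days, years in [("Venus", 583.92, 462),
                                    ("Saturn", 378.09, 442)]:
    cycles = years * YEAR / synodic_days
    print(f"{planet}: {years} years ≈ {cycles:.2f} synodic cycles")
# Both land within about 0.01 of a whole number:
# 289 synodic cycles of Venus, 427 of Saturn.
```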

Professor Freeth added: "The team then created innovative mechanisms for all of the planets that would calculate the new advanced astronomical cycles and minimize the number of gears in the whole system, so that they would fit into the tight spaces available."

Read more at Science Daily

Astronomers have detected a moving supermassive black hole

 Scientists have long theorized that supermassive black holes can wander through space -- but catching them in the act has proven difficult.

Now, researchers at the Center for Astrophysics | Harvard & Smithsonian have identified the clearest case to date of a supermassive black hole in motion. Their results are published today in the Astrophysical Journal.

"We don't expect the majority of supermassive black holes to be moving; they're usually content to just sit around," says Dominic Pesce, an astronomer at the Center for Astrophysics who led the study. "They're just so heavy that it's tough to get them going. Consider how much more difficult it is to kick a bowling ball into motion than it is to kick a soccer ball -- realizing that in this case, the 'bowling ball' is several million times the mass of our Sun. That's going to require a pretty mighty kick."

Pesce and his collaborators have been working to observe this rare occurrence for the last five years by comparing the velocities of supermassive black holes and galaxies.

"We asked: Are the velocities of the black holes the same as the velocities of the galaxies they reside in?" he explains. "We expect them to have the same velocity. If they don't, that implies the black hole has been disturbed."

For their search, the team initially surveyed 10 distant galaxies and the supermassive black holes at their cores. They specifically studied black holes that contained water within their accretion disks -- the spiral structures that spin inward towards the black hole.

As the water orbits around the black hole, it produces a laser-like beam of radio light known as a maser. When studied with a combined network of radio antennas using a technique known as very long baseline interferometry (VLBI), masers can help measure a black hole's velocity very precisely, Pesce says.

The technique helped the team determine that nine of the 10 supermassive black holes were at rest -- but one stood out and seemed to be in motion.

Located 230 million light-years away from Earth, the black hole sits at the center of a galaxy named J0437+2456. Its mass is about three million times that of our Sun.

Using follow-up observations with the Arecibo and Gemini Observatories, the team has now confirmed their initial findings. The supermassive black hole is moving with a speed of about 110,000 miles per hour inside the galaxy J0437+2456.
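
For a sense of what that measurement involves, a back-of-envelope conversion (illustrative numbers, not taken from the paper): 110,000 mph is roughly 49 km/s, and a line-of-sight motion of that size shifts the 22.235 GHz water maser line by a few megahertz -- the kind of offset radio spectroscopy can resolve.

```python
# Illustrative only: convert the reported speed and estimate the Doppler
# shift it would imprint on the 22.235 GHz water maser line.
v = 110_000 * 0.44704      # mph -> m/s (≈ 4.9e4 m/s, i.e. about 49 km/s)
f_rest = 22.235e9          # rest frequency of the water maser line, Hz
c = 2.998e8                # speed of light, m/s
delta_f = f_rest * v / c
print(f"v ≈ {v/1e3:.0f} km/s; Doppler shift ≈ {delta_f/1e6:.1f} MHz")
```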

But what's causing the motion is not known. The team suspects there are two possibilities.

"We may be observing the aftermath of two supermassive black holes merging," says Jim Condon, a radio astronomer at the National Radio Astronomy Observatory who was involved in the study. "The result of such a merger can cause the newborn black hole to recoil, and we may be watching it in the act of recoiling or as it settles down again."

But there's another, perhaps even more exciting possibility: the black hole may be part of a binary system.

"Despite every expectation that they really ought to be out there in some abundance, scientists have had a hard time identifying clear examples of binary supermassive black holes," Pesce says. "What we could be seeing in the galaxy J0437+2456 is one of the black holes in such a pair, with the other remaining hidden to our radio observations because of its lack of maser emission."

Read more at Science Daily

With gene therapy, scientists develop opioid-free solution for chronic pain

A gene therapy for chronic pain could offer a safer, non-addictive alternative to opioids. Researchers at the University of California San Diego developed the new therapy, which works by temporarily repressing a gene involved in sensing pain. It increased pain tolerance in mice, lowered their sensitivity to pain and provided months of pain relief without causing numbness.

The researchers report their findings in a paper published Mar. 10 in Science Translational Medicine.

The gene therapy could be used to treat a broad range of chronic pain conditions, from lower back pain to rare neuropathic pain disorders -- conditions for which opioid painkillers are the current standard of care.

"What we have right now does not work," said first author Ana Moreno, a bioengineering alumna from the UC San Diego Jacobs School of Engineering. Opioids can make people more sensitive to pain over time, leading them to rely on increasingly higher doses. "There's a desperate need for a treatment that's effective, long-lasting and non-addictive."

The idea for such a treatment emerged when Moreno was a Ph.D. student in UC San Diego bioengineering professor Prashant Mali's lab. Mali had been investigating the possibility of applying CRISPR-based gene therapy approaches to rare as well as common human diseases. Moreno's project focused on exploring potential therapeutic avenues. One day, she came across a paper about a genetic mutation that causes humans to feel no pain. This mutation inactivates a protein in pain-transmitting neurons in the spinal cord, called NaV1.7. In individuals lacking functional NaV1.7, sensations like touching something hot or sharp do not register as pain. On the other hand, a gene mutation that leads to overexpression of NaV1.7 causes individuals to feel more pain.

When Moreno read this, it clicked. "By targeting this gene, we could alter the pain phenotype," she said. "What's also cool is that this gene is only involved in pain. There aren't any severe side effects observed with this mutation."

Non-permanent gene therapy

Moreno had been working on gene repression using the CRISPR gene editing tool as part of her dissertation. Specifically, she was working with a version of CRISPR that uses what's called "dead" Cas9, which lacks the ability to cut DNA. Instead, it sticks to a gene target and blocks its expression.

Moreno saw an opportunity to use this approach to repress the gene that codes for NaV1.7. She points out an appeal of this approach: "It's not cutting out any genes, so there are no permanent changes to the genome. You wouldn't want to permanently lose the ability to feel pain," she said. "One of the biggest concerns with CRISPR gene editing is off-target effects. Once you cut DNA, that's it. You can't go back. With dead Cas9, we're not doing something irreversible."

Mali, who is a co-senior author of the study, says that this use of dead Cas9 opens the door to using gene therapy to target common diseases and chronic ailments.

"In some common diseases, the issue is that a gene is being misexpressed. You don't want to completely shut it down," he said. "But if you could turn down the dose of that gene, you could bring it to a level where it is not pathogenic. That is what we are doing here. We don't completely take away the pain phenotype, we dampen it."

Moreno and Mali co-founded the spinoff company Navega Therapeutics to work on translating this gene therapy approach, which they developed at UC San Diego, into the clinic. They teamed up with Tony Yaksh, an expert in pain systems and a professor of anesthesiology and pharmacology at UC San Diego School of Medicine. Yaksh is a scientific advisor to Navega and co-senior author of the study.

Early lab studies

The researchers engineered a CRISPR/dead Cas9 system to target and repress the gene that codes for NaV1.7. They administered spinal injections of their system to mice with inflammatory and chemotherapy-induced pain. These mice displayed higher pain thresholds than mice that did not receive the gene therapy; they were slower to withdraw a paw from painful stimuli (heat, cold or pressure) and spent less time licking or shaking it after being hurt.

The treatment was tested at various timepoints. It was still effective after 44 weeks in the mice with inflammatory pain and 15 weeks in those with chemotherapy-induced pain. How long the effect ultimately lasts is still being tested, the researchers said, but it is expected to be long-lasting. Moreover, the treated mice did not lose sensitivity or display any changes in normal motor function.

To validate their results, the researchers performed the same tests using another gene editing tool called zinc finger proteins. It's an older technique than CRISPR, but it does the same job. Here, the researchers designed zinc fingers that similarly bind to the gene target and block expression of NaV1.7. Spinal injections of the zinc fingers in mice produced the same results as the CRISPR-dead Cas9 system.

"We were excited that both approaches worked," Mali said. "The beauty about zinc finger proteins is that they are built on the scaffold of a human protein. The CRISPR system is a foreign protein that comes from bacteria, so it could cause an immune response. That's why we explored zinc fingers as well, so we have an option that might be more translatable to the clinic."

The researchers say this solution could work for a large number of chronic pain conditions arising from increased expression of NaV1.7, including diabetic polyneuropathy, erythromelalgia, sciatica and osteoarthritis. It could also provide relief for patients undergoing chemotherapy.

And due to its non-permanent effects, this therapeutic platform could address a poorly met need for a large population of patients with long-lasting (weeks to months) but reversible pain conditions, Yaksh said.

"Think of the young athlete or wounded war fighter in which the pain may resolve with wound healing," he said. "We would not want to permanently remove the ability to sense pain in these people, especially if they have a long life expectancy. This CRISPR/dead Cas9 approach offers this population an alternative therapeutic intervention -- that's a major step in the field of pain management."

Researchers at UC San Diego and Navega will next work on optimizing both approaches (CRISPR and zinc fingers) for targeting the human gene that codes for NaV1.7. Trials in non-human primates to test for efficacy and toxicity will follow. The researchers expect to file an investigational new drug (IND) application and to commence human clinical trials in a couple of years.

Read more at Science Daily

Air pollution: The silent killer called PM 2.5

 Millions of people die prematurely every year from diseases and cancer caused by air pollution. The first line of defence against this carnage is ambient air quality standards. Yet, according to researchers from McGill University, over half of the world's population lives without the protection of adequate air quality standards.

Air pollution varies greatly in different parts of the world. But what about the primary weapons against it? To find answers, researchers from McGill University set out to investigate global air quality standards in a study published in the Bulletin of the World Health Organization.

The researchers focused on a type of air pollution known as PM2.5, which is responsible for an estimated 4.2 million premature deaths every year globally. This includes over a million deaths in China, over half a million in India, almost 200,000 in Europe, and over 50,000 in the United States.

"In Canada, about 5,900 people die every year from air pollution, according to estimates from Health Canada. Air pollution kills almost as many Canadians every three years as COVID-19 killed to date," says co-author Parisa Ariya, a Professor in the Department of Chemistry at McGill University.

Small but deadly

Among the different types of air pollution, PM2.5 kills the most people worldwide. It consists of particles smaller than approximately 2.5 microns in diameter -- each one only about a thirtieth the width of a human hair.

"We adopted unprecedented measures to protect people from COVID-19, yet we don't do enough to avoid the millions of preventable deaths caused by air pollution every year," says Yevgen Nazarenko, a Research Associate at McGill University who conducted the study with Devendra Pal under the supervision of Professor Ariya.

The researchers found that where standards exist, they are often far weaker than what the World Health Organization considers safe. Many of the most polluted regions, such as parts of the Middle East, do not even measure PM2.5. The team also found that the weakest air quality standards are the ones most often violated, particularly in countries like China and India, whereas the strictest standards, in places like Canada and Australia, are often met.

Surprisingly, the researchers discovered that high population density is not necessarily a barrier to fighting air pollution successfully. Several jurisdictions with densely populated areas were successful in setting and enforcing strict standards. These included Japan, Taiwan, Singapore, El Salvador, Trinidad and Tobago, and the Dominican Republic.

"Our findings show that more than half of the world urgently needs protection in the form of adequate PM2.5 ambient air quality standards. Putting these standards in place everywhere will save countless lives. And where standards are already in place, they should be harmonized globally," says Nazarenko.

Read more at Science Daily

Mar 11, 2021

Mapping the best places to plant trees

 Reforestation could help to combat climate change, but whether and where to plant trees is a complex choice with many conflicting factors. To address this problem, researchers reporting in the journal One Earth on December 18 have created the Reforestation Hub, an interactive map of reforestation opportunity in the United States. The tool will help foresters, legislators, and natural resource agency staff weigh the options while developing strategies to restore lost forests.

"Often the information we need to make informed decisions about where to deploy reforestation already exists, it's just scattered across a lot of different locations," says author Susan Cook-Patton, a Senior Forest Restoration Scientist at the Nature Conservancy. "Not everybody has the computer science experience to delve into the raw data, so we tried to bring this information together to develop a menu of options for reforestation, allowing people to choose what they would like to see in their community, state, or nation."

The culmination of these efforts is the Reforestation Hub, a web-based interactive map that color-codes individual counties by reforestation opportunity -- the level of potential for successful reforestation. And the results show that there is a great deal of reforestation opportunity in the United States.

"There are up to 51.6 million hectares (about 200,000 square miles) of opportunity to restore forest across the United States after excluding productive cropland and other places where trees are infeasible," she says. "Those additional forested areas could absorb the emissions equivalent to all the personal vehicles in California, Texas, and New York combined."

In addition to quantifying the amount of land that could yield viable forests, the Hub also identifies trends in how this opportunity is distributed throughout the country.

"While there's no single best place to restore forest cover, we did find a particularly high density of opportunity in the Southeastern United States," says Cook-Patton. "This is a region where carbon accumulation rates are high, costs are low, and there is a lot of opportunity to achieve multiple benefits like creating habitats for biodiversity, improving water quality, and climate mitigation."

The map also quantifies the acreage of 10 individual opportunity classes -- or categories based on land ownership and quality. Some of these include pastures, post-burn lands, and floodplains. "The choice to plant trees really depends on what people want out of the landscape, whether it's controlling flood waters, improving urban environments, or recovering forests after a fire," she says.

The researchers hope to create similar maps for other countries, an important next step for combating the global problem of climate change.

Read more at Science Daily

The world's oldest crater from a meteorite isn't an impact crater after all

 Several years after scientists identified what was considered the oldest meteorite impact crater on the planet, another team has found that it is actually the result of normal geological processes.

During fieldwork at the Archean Maniitsoq structure in Greenland, an international team of scientists led by the University of Waterloo's Chris Yakymchuk found the features of this region are inconsistent with an impact crater. In 2012, a different team identified it as the remnant of a three-billion-year-old meteorite crater.

"Zircon crystals in the rock are like little time capsules," said Yakymchuk, a professor in Waterloo's Department of Earth and Environmental Sciences. "They preserve ancient damage caused by shockwaves you get from a meteorite impact. We found no such damage in them."

Additionally, there are multiple places where the rocks melted and recrystallized deep in the Earth. This process -- called metamorphism -- would occur almost instantaneously if produced from an impact. The Waterloo-led team found it happened 40 million years later than the earlier group proposed.

"We went there to explore the area for potential mineral exploration, and it was through close examination of the area and data collected since 2012 that we concluded the features are inconsistent with a meteorite impact," Yakymchuk said. "While we were disappointed that we weren't working in a structure that was the result of a meteorite hitting the planet three billion years ago, science is about advancing knowledge through discovery, and our understanding of the Earth's ancient history continues to evolve. Our findings provide scientific data for resource companies and Greenlandic prospectors to find new mineral resources."

From Science Daily

Not so fast, supernova: Highest-energy cosmic rays detected in star clusters

 For decades, researchers assumed the cosmic rays that regularly bombard Earth from the far reaches of the galaxy are born when stars go supernova -- when they grow too massive to support the fusion occurring at their cores and explode.

Those gigantic explosions do indeed propel atomic particles at the speed of light great distances. However, new research suggests even supernovae -- capable of devouring entire solar systems -- are not strong enough to imbue particles with the sustained energies needed to reach petaelectronvolts (PeVs), the amount of kinetic energy attained by very high-energy cosmic rays.
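
For a sense of scale, a petaelectronvolt is an enormous energy for a single particle. A quick illustrative conversion (the LHC comparison is an editorial aside, not the researchers'):

```python
# Scale check (illustrative): one PeV expressed in joules, and compared
# with the ~6.5 TeV protons circulating in the Large Hadron Collider.
J_PER_EV = 1.602e-19                 # joules per electronvolt
pev_in_joules = 1e15 * J_PER_EV      # ≈ 1.6e-4 J carried by one particle
vs_lhc = 1e15 / 6.5e12               # ≈ 150x an LHC beam proton
print(f"1 PeV ≈ {pev_in_joules:.1e} J ≈ {vs_lhc:.0f}x an LHC proton")
```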

And yet cosmic rays have been observed striking Earth's atmosphere at exactly those energies, their passage marked, for example, by the detection tanks at the High-Altitude Water Cherenkov (HAWC) observatory near Puebla, Mexico. Instead of supernovae, the researchers posit that star clusters like the Cygnus Cocoon serve as PeVatrons -- PeV accelerators -- capable of moving particles across the galaxy at such high energies.

Their paradigm-shifting research provides compelling evidence for star forming regions to be PeVatrons and is published in two recent papers in Nature Astronomy and Astrophysical Journal Letters.

A characteristic of physics research is how collaborative it is. The research was conducted by Petra Huentemeyer, professor of physics at Michigan Technological University, along with recent graduate Binita Hona '20, doctoral student Dezhi Huang, former MTU postdoc Henrike Fleischhack (now at Catholic University/NASA GSFC/CRESST II), Sabrina Casanova at the Institute of Nuclear Physics Polish Academy of Sciences in Krakow, Ke Fang at the University of Wisconsin and Roger Blanford at Stanford, along with numerous other collaborators of the HAWC Observatory.

Huentemeyer noted that HAWC and physicists from other institutions have measured cosmic rays from all directions and across many decades of energy. It's in tracking the cosmic rays with the highest known energy, PeVs, that their origin becomes so important.

"Cosmic rays below PeV energy are believed to come from our galaxy, but the question is what are the accelerators that can produce them," Huentemeyer said.

Fleischhack said the paradigm shift the researchers have uncovered is that before, scientists thought supernova remnants were the main accelerators of cosmic rays.

"They do accelerate cosmic rays, but they are not able to get to highest energies," she said.

So, what is driving cosmic rays' acceleration to PeV energy?

"There have been several other hints that star clusters could be part of the story," Fleischhack said. "Now we are getting confirmation that they are able to go to highest energies."

Star clusters are formed from the remnants of a supernova event. Known as star cradles, they contain violent winds and clouds of swirling debris -- such as those noted by the researchers in Cygnus OB2 and the cluster [BDS2003]8. Inside, massive stars of spectral types O and B gather by the hundreds in an area about 30 parsecs (roughly 100 light-years) across.

"Spectral type O stars are the most massive," Hona said. "When their winds interact with each other, shock waves form, which is where acceleration happens."

The researchers' theoretical models suggest that the energetic gamma-ray photons seen by HAWC are more likely produced by protons than by electrons.

"We will use NASA telescopes to search for the counterpart emission by these relativistic particles at lower energies," Fang said.

The extremely high energy at which cosmic rays reach our planet is notable. Specific conditions are required to accelerate particles to such energies.

The higher the energy, the more difficult it is to confine the particles -- knowledge gleaned from particle accelerators on Earth such as Fermilab near Chicago and CERN in Switzerland. To keep particles from whizzing away, magnetic fields are required.

Stellar clusters -- with their mixture of wind and nascent but powerful stars -- are turbulent regions with different magnetic fields that can provide the confinement necessary for particles to continue to accelerate.

"Supernova remnants have very fast shocks where the cosmic ray can be accelerated; however, they don't have the type of long confinement regions," Casanova said. "This is what star clusters are useful for. They're an association of stars that can create disturbances that confine the cosmic rays and make it possible for the shocks to accelerate them."

But how is it possible to measure atomic interactions on a galactic scale 5,000 light-years from Earth? The researchers used 1,343 days of measurements from HAWC detection tanks.

Huang explained how the physicists at HAWC trace cosmic rays by measuring the gamma rays these cosmic rays produce at galactic acceleration sites: "We didn't measure gamma rays directly; we measured the secondary rays generated. When gamma rays interact with the atmosphere, they generate secondary particles in particle showers."

"When particle showers are detected at HAWC, we can measure the shower and the charge of secondary particles," Huang said. "We use the particle charge and time information to reconstruct information from the primary gamma."

In addition to HAWC, the researchers plan to work with the Southern Wide-field Gamma-ray Observatory (SWGO), an observatory currently in the planning stages that will feature Cherenkov light detectors like HAWC but will be located in the southern hemisphere.

"It would be interesting to see what we can see in the southern hemisphere," Huentemeyer said. "We will have a good view of the galactic center that we don't have in the northern hemisphere. SWGO could give us many more candidates in terms of star clusters."

Read more at Science Daily

50 new genes for eye color

 The genetics of human eye colour is much more complex than previously thought, according to a new study published today.

An international team of researchers led by King's College London and Erasmus University Medical Center Rotterdam have identified 50 new genes for eye colour in the largest genetic study of its kind to date. The study, published today in Science Advances, involved the genetic analysis of almost 195,000 people across Europe and Asia.

These findings will help to improve the understanding of eye diseases such as pigmentary glaucoma and ocular albinism, where eye pigment levels play a role.

In addition, the team found that eye colour in Asians with different shades of brown is genetically similar to eye colour in Europeans ranging from dark brown to light blue.

This study builds on previous research in which scientists had identified a dozen genes linked to eye colour, believing there to be many more. Previously, scientists thought that variation in eye colour was controlled by one or two genes only, with brown eyes dominant over blue eyes.

Co-senior author Dr Pirro Hysi, King's College London, said: "The findings are exciting because they bring us a step closer to understanding the genes that cause one of the most striking features of the human face, which has mystified generations throughout our history. This will improve our understanding of many diseases that we know are associated with specific pigmentation levels."

Co-senior author Dr Manfred Kayser, Erasmus University Medical Center Rotterdam, said:

"This study delivers the genetic knowledge needed to improve eye colour prediction from DNA as already applied in anthropological and forensic studies, but with limited accuracy for the non-brown and non-blue eye colours."

From Science Daily

Mar 10, 2021

Younger Tyrannosaurus rex bites were less ferocious than those of their adult counterparts

 By closely examining the jaw mechanics of juvenile and adult tyrannosaurids, some of the fiercest dinosaurs to inhabit Earth, scientists led by the University of Bristol have uncovered differences in how they bit into their prey.

They found that younger tyrannosaurs were incapable of delivering the bone-crunching bite that is often synonymous with the Tyrannosaurus rex and that adult specimens were far better equipped for tearing out chunks of flesh and bone with their massive, deeply set jaws.

The team also found that tension from the insertion of the lower pterygoid muscle reduces stresses near the front of the typical tyrannosaur jaw. This would be advantageous for the highly robust, conical teeth at the anterior end of the jaw, where these animals likely applied their highest-impact bite forces. Crocodilians experience the reverse situation: they possess robust teeth near the posterior end of the mandible, where they apply their highest bite forces.

Adult tyrannosaurids have been extensively studied due to the availability of relatively complete specimens that have been CT scanned.

The availability of this material has allowed for studies of their feeding mechanics. The adult Tyrannosaurus rex was capable of a 60,000-newton bite (for comparison, an adult lion averages about 1,300 newtons), and there is evidence of it having actively preyed on large, herbivorous dinosaurs.

The team were interested in inferring more about the feeding mechanics and implications for juvenile tyrannosaurs.

Their main hypotheses were twofold: first, that larger tyrannosaurid mandibles experienced lower absolute peak stress because they became more robust (deeper and wider relative to length) as the animals grew; and second, that at equalized mandible lengths, younger tyrannosaurids experienced greater stress and strain than adults, implying relatively lower bite forces consistent with their proportionally slender jaws.

At actual size, the juveniles experienced lower absolute stresses than the adult, contradicting the first hypothesis. This means that in real life an adult tyrannosaur would experience high absolute stresses during feeding but could shrug them off thanks to its immense size. When mandible lengths were equalized, however, the juvenile specimens experienced greater stresses, owing to the relatively lower bite forces typical of slender jaws.
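
The scaling argument behind that second result can be illustrated with a simple beam model (a toy example with invented numbers, not the study's finite-element analysis): for a rectangular cantilever, peak bending stress goes as 6FL/(wd^2), so at equal length a shallower, narrower jaw is more highly stressed even at a much lower bite force.

```python
# Toy cantilever-beam model of a mandible (illustrative numbers only, not
# the study's finite-element results): peak bending stress = 6*F*L/(w*d^2).
def peak_stress(force, length, width, depth):
    """Bending stress (Pa) at the fixed end of a rectangular cantilever."""
    return 6 * force * length / (width * depth**2)

# At equalized jaw length, the slender juvenile jaw is more highly
# stressed than the deep adult jaw despite a far smaller bite force.
adult    = peak_stress(force=35_000, length=1.0, width=0.12, depth=0.25)
juvenile = peak_stress(force=4_000,  length=1.0, width=0.05, depth=0.10)
print(f"adult ≈ {adult:.2e} Pa, juvenile ≈ {juvenile:.2e} Pa")
```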

Lead author Andre Rowe, a Geology PhD Student at the University of Bristol's School of Earth Sciences, said: "Tyrannosaurids were active predators and their prey likely varied based on their developmental stage.

"Based on biomechanical data, we presume that they pursued smaller prey and fulfilled an environmental role similar to the 'raptor' dinosaurs such as the dromaeosaurs. Adult tyrannosaurs were likely subduing large dinosaurs such as the duckbilled hadrosaurs and Triceratops, which would be quickly killed by their bone-crunching bite.

"This study illustrates the importance of 3D modeling and computational studies in vertebrate paleontology -- the methodology we used in our study can be applied to many different groups of extinct animals so that we can better understand how they adapted to their respective environments."

There are two major components of this research that Andre and the team would like to see future researchers delve into: continued CT and surface scanning of dinosaur cranial material, and wider application of 3D models in dinosaur biomechanics research.

Andre added: "There remains a plethora of unearthed dinosaur material that has not been utilized in studies of feeding and function -- ideally, all of our existing specimens will one day be scanned and made widely available online to researchers everywhere.

Read more at Science Daily

Quantum physicists measure the smallest gravitational force yet

 Researchers have succeeded in measuring the gravitational field of a gold sphere just 2 mm in diameter using a highly sensitive pendulum -- the smallest gravitational force ever measured. The experiment opens up new possibilities for testing the laws of gravity on previously unattained small scales.

Gravity is the weakest of all known forces in nature -- and yet it is the one most strongly present in our everyday lives. Every ball we throw, every coin we drop -- all objects are attracted by the Earth's gravity. In a vacuum, all objects near the Earth's surface fall with the same acceleration: their velocity increases by about 9.8 m/s every second. The strength of gravity is determined by the mass of the Earth and the distance from its center. On the Moon, which is about 80 times lighter and almost 4 times smaller than the Earth, all objects fall 6 times slower. And on a planet the size of a ladybug? Objects would fall 30 billion times slower there than on Earth. Gravitational forces of that magnitude are otherwise found only in the most distant regions of galaxies, keeping far-flung stars bound. A team of quantum physicists led by Markus Aspelmeyer and Tobias Westphal of the University of Vienna and the Austrian Academy of Sciences has now demonstrated such tiny forces in the laboratory for the first time. To do so, the researchers drew on a famous experiment conducted by Henry Cavendish at the end of the 18th century.

During the time of Isaac Newton, it was believed that gravity was reserved for astronomical objects such as planets. It was not until the work of Cavendish (and Nevil Maskelyne before him) that it was possible to show that objects on Earth also generate their own gravity. Using an elegant pendulum device, Cavendish succeeded in 1797 in measuring the gravitational force generated by a lead sphere 30 cm in diameter and weighing 160 kg. A so-called torsion pendulum -- two masses at the ends of a rod suspended from a thin wire and free to rotate -- is measurably deflected by the gravitational force of the lead mass. Over the following centuries, these experiments were further refined to measure gravitational forces with increasing accuracy.

The Vienna team has picked up this idea and built a miniature version of the Cavendish experiment. A 2 mm gold sphere weighing 90 mg serves as the gravitational mass. The torsion pendulum consists of a glass rod 4 cm long and half a millimeter thick, suspended from a glass fiber a few thousandths of a millimeter in diameter. Gold spheres of similar size are attached to each end of the rod. "We move the gold sphere back and forth, creating a gravitational field that changes over time," explains Jeremias Pfaff, one of the researchers involved in the experiment. "This causes the torsion pendulum to oscillate at that particular excitation frequency." The movement, which is only a few millionths of a millimeter, can then be read out with the help of a laser and allows conclusions to be drawn about the force. The difficulty is keeping other influences on the motion as small as possible. "The largest non-gravitational effect in our experiment comes from seismic vibrations generated by pedestrians and tram traffic around our lab in Vienna," says co-author Hans Hepach: "We therefore obtained the best measurement data at night and during the Christmas holidays, when there was little traffic." Other effects such as electrostatic forces could be reduced to levels well below the gravitational force by a conductive shield between the gold masses.
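
Newton's law gives a feel for how faint the signal is. For two ~90 mg spheres a few millimetres apart (the separation below is an assumed, illustrative value, not the experiment's exact geometry), the attraction amounts to tens of femtonewtons:

```python
# Order-of-magnitude estimate of the attraction being measured (the
# separation is an assumed value for illustration).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
m = 90e-6           # 90 milligrams, in kilograms
r = 2.5e-3          # assumed centre-to-centre separation, metres
force = G * m * m / r**2
print(f"F ≈ {force:.1e} N")   # ≈ 9e-14 N, i.e. tens of femtonewtons
```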

This made it possible to determine the gravitational field of an object that has roughly the mass of a ladybug for the first time. As a next step, it is planned to investigate the gravity of masses thousands of times lighter.

Read more at Science Daily

New tool makes students better at detecting fake imagery and videos

 Researchers at Uppsala University have developed a digital self-test that trains users to assess news items, images and videos presented on social media. The self-test has also been evaluated in a scientific study, which confirmed the researchers' hypothesis that the tool genuinely improved the students' ability to apply critical thinking to digital sources.

The new tool and the scientific review of it are part of the News Evaluator project to investigate new methods of enhancing young people's capacity for critical awareness of digital sources, a key component of digital literacy.

"As research leader in the project, I'm surprised how complicated it is to develop this type of tool against misleading information -- one that's usable on a large scale. Obviously, critically assessing digital sources is complicated. We've been working on various designs and tests, with major experiments in school settings, for years. Now we've finally got a tool that evidently works. The effect is clearly positive and now we launch the self-test on our News Evaluator website http://www.newsevaluator.com, so that all anyone can test themselves for free," says Thomas Nygren, associate professor at Uppsala University.

The tool is structured in a way that allows students to work with it, online, on their own. They get to see news articles in a social-media format, with pictures or videos, and the task is to determine how credible they are. Is there really wood pulp in Parmesan cheese, for instance?

"The aim is for the students to get better at uncovering what isn't true, but also improve their understanding of what may be true even if it seems unlikely at first," Nygren says.

As user support, the tool contains guidance. Students can follow how a professional would have gone about investigating the authenticity of the statements or images -- by opening a new window and doing a separate search alongside the test, or doing a reverse image search, for example. The students are encouraged to learn "lateral reading" (verifying what you read by double checking news). After solving the tasks, the students get feedback on their performance.

When the tool was tested with the help of just over 200 students, it proved to have a beneficial effect on their ability to assess sources critically. Students who had received guidance and feedback from the tool showed distinctly better results than those who had not been given this support. The tool also produced better results than other, comparable initiatives that require teacher participation and more time.

Read more at Science Daily

Variant B.1.1.7 of COVID-19 associated with a significantly higher mortality rate, research shows

 The highly infectious variant of COVID-19 discovered in Kent, which swept across the UK last year before spreading worldwide, is between 30 and 100 per cent more deadly than previous strains, new analysis has shown.

A pivotal study, by epidemiologists from the Universities of Exeter and Bristol, has shown that the SARS-CoV-2 variant, B.1.1.7, is associated with a significantly higher mortality rate amongst adults diagnosed in the community compared to previously circulating strains.

The study compared death rates among people infected with the new variant and those infected with other strains.

It showed that the new variant led to 227 deaths in a sample of 54,906 patients -- compared to 141 amongst the same number of closely matched patients who had the previous strains.

With the new variant already detected in more than 50 countries worldwide, the analysis provides crucial information to governments and health officials to help prevent its spread.

The study is published in the British Medical Journal on Wednesday, 10 March 2021.

Robert Challen, lead author of the study from the University of Exeter, said: "In the community, death from COVID-19 is still a rare event, but the B.1.1.7 variant raises the risk. Coupled with its ability to spread rapidly, this makes B.1.1.7 a threat that should be taken seriously."

The Kent variant, first detected in the UK in September 2020, has been identified as being significantly quicker and easier to spread, and was behind the introduction of new lockdown rules across the UK from January.

The study shows that the higher transmissibility of the Kent strain meant that more people who would have previously been considered low risk were hospitalised with the newer variant.

Having analysed data from 54,906 matched pairs of patients across all age groups and demographics, differing only in the strain detected, the team found 227 deaths attributed to the new strain, compared with 141 attributable to earlier strains.
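
As a back-of-envelope check of these figures (my arithmetic, not the study's matched survival analysis), the raw ratio of deaths between the two matched samples already lands inside the reported range:

```python
# Crude mortality risk ratio from the matched samples quoted above.
# Illustrative only: the study itself used matched-pair survival analysis.
deaths_new_variant = 227   # deaths among patients with B.1.1.7
deaths_old_strains = 141   # deaths among matched patients with earlier strains

risk_ratio = deaths_new_variant / deaths_old_strains
print(f"crude risk ratio: {risk_ratio:.2f} "
      f"(~{(risk_ratio - 1) * 100:.0f}% higher mortality)")
# Prints ~1.61, i.e. roughly 60% higher -- within the reported
# 30 to 100 per cent range once statistical uncertainty is included.
```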

Leon Danon, senior author of the study from the University of Bristol, said: "We focussed our analysis on cases that occurred between November 2020 and January 2021, when both the old variants and the new variant were present in the UK. This meant we were able to maximise the number of 'matches' and reduce the impact of other biases. Subsequent analyses have confirmed our results.

"SARS-CoV-2 appears able to mutate quickly, and there is a real concern that other variants will arise with resistance to rapidly rolled out vaccines. Monitoring for new variants as they arise, measuring their characteristics and acting appropriately needs to be a key part of the public health response in the future."

Read more at Science Daily

Mar 9, 2021

Northern Hemisphere summers may last nearly half the year by 2100

 Without efforts to mitigate climate change, summers spanning nearly six months may become the new normal by 2100 in the Northern Hemisphere, according to a new study. The change would likely have far-reaching impacts on agriculture, human health and the environment, according to the study authors.

In the 1950s in the Northern Hemisphere, the four seasons arrived in a predictable and fairly even pattern. But climate change is now driving dramatic and irregular changes to the length and start dates of the seasons, which may become more extreme in the future under a business-as-usual climate scenario.

"Summers are getting longer and hotter while winters shorter and warmer due to global warming," said Yuping Guan, a physical oceanographer at the State Key Laboratory of Tropical Oceanography, South China Sea Institute of Oceanology, Chinese Academy of Sciences, and lead author of the new study in Geophysical Research Letters, AGU's journal for high-impact, short-format reports with immediate implications spanning all Earth and space sciences.

Guan was inspired to investigate changes to the seasonal cycle while mentoring an undergraduate student, co-author Jiamin Wang. "More often, I read some unseasonable weather reports, for example, false spring, or May snow, and the like," Guan said.

The researchers used historical daily climate data from 1952 to 2011 to measure changes in the four seasons' length and onset in the Northern Hemisphere. They defined the start of summer as the onset of temperatures in the hottest 25% during that time period, while winter began with temperatures in the coldest 25%. Next, the team used established climate change models to predict how seasons will shift in the future.
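
A minimal sketch of that percentile definition, using synthetic data rather than the authors' climate records, might look like this:

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the percentile-based
# season definition: summer starts when daily temperatures enter the
# hottest 25% of the record.
rng = np.random.default_rng(0)
days = np.arange(365)
# Synthetic Northern Hemisphere-like annual cycle, for demonstration only.
temps = 10.0 - 15.0 * np.cos(2 * np.pi * days / 365) + rng.normal(0, 1.5, 365)

hot_cut = np.percentile(temps, 75)    # hottest 25% of days -> "summer"
summer_start = int(np.argmax(temps > hot_cut))   # first day above the cut
summer_length = int(np.sum(temps > hot_cut))     # days spent above the cut
print(f"summer begins near day {summer_start}, lasting ~{summer_length} days")
# In the study the thresholds come from the full 1952-2011 record, so an
# individual year's summer can be longer or shorter than this single-year
# demo; winter is handled analogously, with care for the calendar wrap.
```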

The new study found that, on average, summer grew from 78 to 95 days between 1952 and 2011, while winter shrank from 76 to 73 days. Spring contracted from 124 to 115 days, and autumn from 87 to 82 days. Accordingly, spring and summer began earlier, while autumn and winter started later. The Mediterranean region and the Tibetan Plateau experienced the greatest changes to their seasonal cycles.

If these trends continue without any effort to mitigate climate change, the researchers predict that by 2100, winter will last less than two months, and the transitional spring and autumn seasons will shrink further as well.

"Numerous studies have already shown that the changing seasons cause significant environmental and health risks," Guan said. For example, birds are shifting their migration patterns and plants are emerging and flowering at different times. These phenological changes can create mismatches between animals and their food sources, disrupting ecological communities.

Seasonal changes can also wreak havoc on agriculture, especially when false springs or late snowstorms damage budding plants. And with longer growing seasons, humans will breathe in more allergy-causing pollen, and disease-carrying mosquitoes can expand their range northward.

Going to extremes

This shift in the seasons may result in more severe weather events, said Congwen Zhu, a monsoon researcher at the State Key Laboratory of Severe Weather and Institute of Climate System, Chinese Academy of Meteorological Sciences, Beijing, who was not involved in the new study.

"A hotter and longer summer will suffer more frequent and intensified high-temperature events -- heatwaves and wildfires," Zhu said. Additionally, warmer, shorter winters may cause instability that leads to cold surges and winter storms, much like the recent snowstorms in Texas and Israel, he said.

"This is a good overarching starting point for understanding the implications of seasonal change," said Scott Sheridan, a climate scientist at Kent State University who was not part of the new study.

Read more at Science Daily

How fast is the universe expanding? Galaxies provide one answer

 Determining how rapidly the universe is expanding is key to understanding our cosmic fate, but with more precise data has come a conundrum: Estimates based on measurements within our local universe don't agree with extrapolations from the era shortly after the Big Bang 13.8 billion years ago.

A new estimate of the local expansion rate -- the Hubble constant, or H0 (H-naught) -- reinforces that discrepancy.

Using a relatively new and potentially more precise technique for measuring cosmic distances, which employs the average stellar brightness within giant elliptical galaxies as a rung on the distance ladder, astronomers calculate a rate -- 73.3 kilometers per second per megaparsec, give or take 2.5 km/sec/Mpc -- that lies in the middle of three other good estimates, including the gold standard estimate from Type Ia supernovae. This means that for every megaparsec -- 3.3 million light years, or about 31 million trillion kilometers -- from Earth, the universe is expanding an extra 73.3 ±2.5 kilometers per second. The average from the three other techniques is 73.5 ±1.4 km/sec/Mpc.

Perplexingly, estimates of the local expansion rate based on measured fluctuations in the cosmic microwave background and, independently, fluctuations in the density of normal matter in the early universe (baryon acoustic oscillations), give a very different answer: 67.4 ±0.5 km/sec/Mpc.
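
A common back-of-envelope way to express that mismatch (my arithmetic, assuming independent Gaussian error bars) is to divide the difference between the two values by their combined uncertainty:

```python
import math

# "Tension" between the local and early-universe values quoted above,
# in units of the combined standard error (illustrative calculation).
local, sigma_local = 73.3, 2.5    # SBF estimate, km/sec/Mpc
early, sigma_early = 67.4, 0.5    # CMB + baryon acoustic oscillations

tension = (local - early) / math.hypot(sigma_local, sigma_early)
print(f"tension: {tension:.1f} sigma")   # ~2.3 sigma for these inputs
```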

Astronomers are understandably concerned about this mismatch, because the expansion rate is a critical parameter in understanding the physics and evolution of the universe and is key to understanding dark energy -- which accelerates the rate of expansion of the universe and thus causes the Hubble constant to change more rapidly than expected with increasing distance from Earth. Dark energy comprises about two-thirds of the mass and energy in the universe, but is still a mystery.

For the new estimate, astronomers measured fluctuations in the surface brightness of 63 giant elliptical galaxies to determine their distances, then plotted distance against velocity for each to obtain H0. The surface brightness fluctuation (SBF) technique is independent of other techniques and has the potential to provide more precise distance estimates than other methods within about 100 Mpc of Earth, or 330 million light years. The 63 galaxies in the sample are at distances ranging from 15 to 99 Mpc, looking back in time a mere fraction of the age of the universe.
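
That final step -- turning distances and velocities into H0 -- amounts to fitting the slope of the velocity-distance relation. A minimal sketch with invented numbers (not the team's data or pipeline):

```python
import numpy as np

# Invented distance/velocity pairs for 63 galaxies, mimicking the ranges
# quoted above; the real analysis also corrects for peculiar velocities.
rng = np.random.default_rng(1)
d = rng.uniform(15, 99, 63)                  # SBF distances, Mpc
v = 73.3 * d + rng.normal(0, 400, d.size)    # recession velocities, km/sec

# Least-squares slope of a line through the origin: H0 = sum(v*d)/sum(d*d).
H0 = np.sum(v * d) / np.sum(d * d)
print(f"H0 ~ {H0:.1f} km/sec/Mpc")
```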

"For measuring distances to galaxies out to 100 megaparsecs, this is a fantastic method," said cosmologist Chung-Pei Ma, the Judy Chandler Webb Professor in the Physical Sciences at the University of California, Berkeley, and professor of astronomy and physics. "This is the first paper that assembles a large, homogeneous set of data, on 63 galaxies, for the goal of studying H-naught using the SBF method."

Ma leads the MASSIVE survey of local galaxies, which provided data for 43 of the galaxies -- two-thirds of those employed in the new analysis.

The data on these 63 galaxies was assembled and analyzed by John Blakeslee, an astronomer with the National Science Foundation's NOIRLab. He is first author of a paper now accepted for publication in The Astrophysical Journal that he co-authored with colleague Joseph Jensen of Utah Valley University in Orem. Blakeslee, who heads the science staff that support NSF's optical and infrared observatories, is a pioneer in using SBF to measure distances to galaxies, and Jensen was one of the first to apply the method at infrared wavelengths. The two worked closely with Ma on the analysis.

"The whole story of astronomy is, in a sense, the effort to understand the absolute scale of the universe, which then tells us about the physics," Blakeslee said, harkening back to James Cook's voyage to Tahiti in 1769 to measure a transit of Venus so that scientists could calculate the true size of the solar system. "The SBF method is more broadly applicable to the general population of evolved galaxies in the local universe, and certainly if we get enough galaxies with the James Webb Space Telescope, this method has the potential to give the best local measurement of the Hubble constant."

The James Webb Space Telescope, 100 times more powerful than the Hubble Space Telescope, is scheduled for launch in October.

Giant elliptical galaxies

The Hubble constant has been a bone of contention for decades, ever since Edwin Hubble first measured the local expansion rate and came up with an answer seven times too big, implying that the universe was actually younger than its oldest stars. The problem, then and now, lies in pinning down the location of objects in space that give few clues about how far away they are.

Astronomers over the years have laddered up to greater distances, starting with calculating the distance to objects close enough that they seem to move slightly, because of parallax, as the Earth orbits the sun. Variable stars called Cepheids get you farther, because their brightness is linked to their period of variability, and Type Ia supernovae get you even farther, because they are extremely powerful explosions that, at their peak, shine as bright as a whole galaxy. For both Cepheids and Type Ia supernovae, it's possible to figure out the absolute brightness from the way they change over time, and then the distance can be calculated from their apparent brightness as seen from Earth.

The best current estimate of H0 comes from distances determined by Type Ia supernova explosions in distant galaxies, though newer methods -- time delays caused by gravitational lensing of distant quasars and the brightness of water masers orbiting black holes -- all give around the same number.

The technique using surface brightness fluctuations is one of the newest and relies on the fact that giant elliptical galaxies are old and have a consistent population of old stars -- mostly red giant stars -- that can be modeled to give an average infrared brightness across their surface. The researchers obtained high-resolution infrared images of each galaxy with the Wide Field Camera 3 on the Hubble Space Telescope and determined how much each pixel in the image differed from the "average" -- the smoother the fluctuations over the entire image, the farther the galaxy, once corrections are made for blemishes like bright star-forming regions, which the authors exclude from the analysis.
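
In effect, the measured fluctuation amplitude yields an apparent "fluctuation magnitude" that behaves like a standard candle, so the last step is the usual distance-modulus relation. A sketch with hypothetical numbers; in practice the absolute fluctuation magnitude must be calibrated externally, as discussed below:

```python
# Standard-candle distance from a distance modulus (hypothetical numbers).
# M_bar, the absolute fluctuation magnitude, must be calibrated externally,
# e.g. against Cepheids or the tip of the red giant branch.
def distance_mpc(m_bar, M_bar):
    """Distance in megaparsecs from apparent/absolute fluctuation magnitudes."""
    return 10 ** ((m_bar - M_bar + 5) / 5) / 1e6   # parsecs -> Mpc

print(f"{distance_mpc(m_bar=29.5, M_bar=-5.5):.0f} Mpc")   # prints: 100 Mpc
```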

Neither Blakeslee nor Ma was surprised that the expansion rate came out close to that of the other local measurements. But they are equally confounded by the glaring conflict with estimates from the early universe -- a conflict that many astronomers say means that our current cosmological theories are wrong, or at least incomplete.

The extrapolations from the early universe are based on the simplest cosmological theory -- called lambda cold dark matter, or ΛCDM -- which employs just a few parameters to describe the evolution of the universe. Does the new estimate drive a stake into the heart of ΛCDM?

"I think it pushes that stake in a bit more," Blakeslee said. "But it (?CDM) is still alive. Some people think, regarding all these local measurements, (that) the observers are wrong. But it is getting harder and harder to make that claim -- it would require there to be systematic errors in the same direction for several different methods: supernovae, SBF, gravitational lensing, water masers. So, as we get more independent measurements, that stake goes a little deeper."

Ma wonders whether the uncertainties astronomers ascribe to their measurements, which reflect both systematic and statistical errors, are too optimistic, and whether the two ranges of estimates can still be reconciled.

"The jury is out," she said. "I think it really is in the error bars. But assuming everyone's error bars are not underestimated, the tension is getting uncomfortable."

In fact, one of the giants of the field, astronomer Wendy Freedman, recently published a study pegging the Hubble constant at 69.8 ±1.9 km/sec/Mpc, roiling the waters even further. The latest result from Adam Riess, an astronomer who shared the 2011 Nobel Prize in Physics for discovering dark energy, reports 73.2 ±1.3 km/sec/Mpc. Riess was a Miller Postdoctoral Fellow at UC Berkeley when he performed this research, and he shared the prize with UC Berkeley and Berkeley Lab physicist Saul Perlmutter.

MASSIVE galaxies

The new value of H0 is a byproduct of two other surveys of nearby galaxies -- in particular, Ma's MASSIVE survey, which uses space- and ground-based telescopes to exhaustively study the 100 most massive galaxies within about 100 Mpc of Earth. A major goal is to weigh the supermassive black hole at the center of each one.

To do that, precise distances are needed, and the SBF method is the best to date, she said. The MASSIVE survey team used this method last year to determine the distance to a giant elliptical galaxy, NGC 1453, in the southern sky constellation of Eridanus. Combining that distance, 166 million light years, with extensive spectroscopic data from the Gemini and McDonald telescopes -- which allowed Ma's graduate students Chris Liepold and Matthew Quenneville to measure the velocities of the stars near the center of the galaxy -- they concluded that NGC 1453 has a central black hole with a mass nearly 3 billion times that of the sun.

To determine H0, Blakeslee calculated SBF distances to 43 of the galaxies in the MASSIVE survey, based on 45 to 90 minutes of HST observing time for each galaxy. The other 20 came from another survey that employed HST to image large galaxies, specifically ones in which Type Ia supernovae have been detected.

Most of the 63 galaxies are between 8 and 12 billion years old, which means they contain a large population of old red stars that are key to the SBF method and can also be used to improve the precision of distance calculations. In the paper, Blakeslee employed both Cepheid variable stars and a technique that uses the brightest red giant stars in a galaxy -- referred to as the tip of the red giant branch, or TRGB, technique -- to ladder up to galaxies at large distances. The two methods produced consistent results. The TRGB technique takes advantage of the fact that the brightest red giants in galaxies have about the same absolute brightness.

"The goal is to make this SBF method completely independent of the Cepheid-calibrated Type Ia supernova method by using the James Webb Space Telescope to get a red giant branch calibration for SBFs," he said.

"The James Webb telescope has the potential to really decrease the error bars for SBF," Ma added. But for now, the two discordant measures of the Hubble constant will have to learn to live with one another.

"I was not setting out to measure H0; it was a great product of our survey," she said. "But I am a cosmologist and am watching this with great interest."

Read more at Science Daily

Study of coronavirus variants predicts virus evolving to escape current vaccines

 A new study of the U.K. and South Africa variants of SARS-CoV-2 predicts that current vaccines and certain monoclonal antibodies may be less effective at neutralizing these variants, and that the new variants raise the specter of more frequent reinfection.

The study was published in Nature on March 8, 2021. A preprint of the study was first posted to BioRxiv on January 26, 2021.

The study's predictions are now being borne out with the first reported results of the Novavax vaccine, says the study's lead author David Ho, MD. The company reported on Jan. 28 that the vaccine was nearly 90% effective in the company's U.K. trial, but only 49.4% effective in its South Africa trial, where most cases of COVID-19 are caused by the B.1.351 variant.

"Our study and the new clinical trial data show that the virus is traveling in a direction that is causing it to escape from our current vaccines and therapies that are directed against the viral spike," says Ho, the director of the Aaron Diamond AIDS Research Center and the Clyde'56 and Helen Wu Professor of Medicine at Columbia University Vagelos College of Physicians and Surgeons.

"If the rampant spread of the virus continues and more critical mutations accumulate, then we may be condemned to chasing after the evolving SARS-CoV-2 continually, as we have long done for influenza virus," Ho says. "Such considerations require that we stop virus transmission as quickly as is feasible, by redoubling our mitigation measures and by expediting vaccine rollout."

After vaccination, the immune system responds and makes antibodies that can neutralize the virus.

Ho and his team found that antibodies in blood samples taken from people inoculated with the Moderna or Pfizer vaccine were less effective at neutralizing the two variants, B.1.1.7, which emerged last September in England, and B.1.351, which emerged from South Africa in late 2020. Against the U.K. variant, neutralization dropped by roughly 2-fold, but against the South Africa variant, neutralization dropped by 6.5- to 8.5-fold.

"The approximately 2-fold loss of neutralizing activity against the U.K. variant is unlikely to have an adverse impact due to the large 'cushion' of residual neutralizing antibody activity," Ho says, "and we see that reflected in the Novavax results where the vaccine was 85.6% effective against the U.K. variant."

Data from Ho's study about the loss in neutralizing activity against the South Africa variant are more worrisome.

"The drop in neutralizing activity against the South Africa variant is appreciable, and we're now seeing, based on the Novavax results, that this is causing a reduction in protective efficacy," Ho says.

The new study did not examine the more recent variant found in Brazil (B.1.1.28), but given the similarities between the spike mutations of the Brazil and South Africa variants, Ho says the Brazil variant should behave similarly to the South Africa variant.

"We have to stop the virus from replicating and that means rolling out vaccine faster and sticking to our mitigation measures like masking and physical distancing. Stopping the spread of the virus will stop the development of further mutations," Ho says.

The study also found that certain monoclonal antibodies used now to treat COVID patients may not work against the South Africa variant. And based on results with plasma from COVID patients who were infected earlier in the pandemic, the B.1.351 variant from South Africa has the potential to cause reinfection.

New study contains comprehensive analysis of variants

The new study analyzed mutations in the two SARS-CoV-2 variants more extensively than other recent studies, which have reported similar findings.

The new study examined all mutations in the spike protein of the two variants. (Vaccines and monoclonal antibody treatments work by recognizing the SARS-CoV-2 spike protein.)

The researchers created SARS-CoV-2 pseudoviruses (viruses that produce the coronavirus spike protein but cannot cause infection) with the eight mutations found in the U.K. variant and the nine mutations found in the South African variant.

They then measured the sensitivity of these pseudoviruses to monoclonal antibodies developed to treat COVID patients, convalescent serum from patients who were infected earlier in the pandemic, and serum from patients who have been vaccinated with the Moderna or Pfizer vaccine.

Implications for monoclonal antibody treatments

The study measured the neutralizing activity of 18 different monoclonal antibodies -- including the antibodies in two products authorized for use in the United States.

Against the U.K. variant, most antibodies were still potent, although the neutralizing activity of two antibodies in development was modestly impaired.

Against the South Africa variant, however, the neutralizing activity of four antibodies was markedly reduced or completely abolished. Bamlanivimab (LY-CoV555, approved for use in the United States) was completely inactive against the South Africa variant, and casirivimab, one of the two antibodies in an approved antibody cocktail (REGN-COV), was 58-fold less effective at neutralizing it than the original virus. The second antibody in the cocktail, imdevimab, retained its neutralizing ability, as did the complete cocktail.

"Decisions of the use of these treatments will depend heavily on the local prevalence of the South Africa and Brazil variants," Ho says, "highlighting the importance of viral genomic surveillance and proactive development of next-generation antibody therapeutics."

Read more at Science Daily

Research shows we're surprisingly similar to Earth's first animals

 The earliest multicellular organisms may have lacked heads, legs, or arms, but pieces of them remain inside of us today, new research shows.

According to a UC Riverside study, 555-million-year-old oceanic creatures from the Ediacaran period share genes with today's animals, including humans.

"None of them had heads or skeletons. Many of them probably looked like three-dimensional bathmats on the sea floor, round discs that stuck up," said Mary Droser, a geology professor at UCR. "These animals are so weird and so different, it's difficult to assign them to modern categories of living organisms just by looking at them, and it's not like we can extract their DNA -- we can't."

However, well-preserved fossil records have allowed Droser and the study's first author, recent UCR doctoral graduate Scott Evans, to link the animals' appearance and likely behaviors to genetic analysis of currently living things. Their research on these links has been recently published in the journal Proceedings of the Royal Society B.

For their analysis, the researchers considered four animals representative of the more than 40 recognized species that have been identified from the Ediacaran period. These creatures ranged in size from a few millimeters to nearly a meter in length.

Kimberella were teardrop-shaped creatures with one broad, rounded end and one narrow end that likely scraped the sea floor for food with a proboscis. Further, they could move around using a "muscular foot" like snails today. The study included flat, oval-shaped Dickinsonia with a series of raised bands on their surface, and Tribrachidium, who spent their lives immobilized at the bottom of the sea.

Also analyzed were Ikaria, animals recently discovered by a team including Evans and Droser. They were about the size and shape of a grain of rice, and represent the first bilaterians -- organisms with a front, back, and openings at either end connected by a gut. Evans said it's likely Ikaria had mouths, though those weren't preserved in the fossil record, and they crawled through organic matter "eating as they went."

All four of the animals were multicellular, with cells of different types. Most had symmetry on their left and right sides, as well as noncentralized nervous systems and musculature.

Additionally, they seem to have been able to repair damaged body parts through a process known as apoptosis. The same genes are key elements of the human immune system, where apoptosis helps eliminate virus-infected and pre-cancerous cells.

These animals likely had the genetic parts responsible for heads and the sensory organs usually found there. However, the complexity of interaction between these genes that would give rise to such features hadn't yet been achieved.

"The fact that we can say these genes were operating in something that's been extinct for half a billion years is fascinating to me," Evans said.

The work was supported by a NASA Exobiology grant and a Peter Buck postdoctoral fellowship.

Going forward, the team plans to investigate muscle development and conduct functional studies to further understand early animal evolution.

Read more at Science Daily

Mar 8, 2021

Establishing the origin of solar-mass black holes and the connection to dark matter

 What is the origin of black holes and how is that question connected with another mystery, the nature of dark matter? Dark matter comprises the majority of matter in the Universe, but its nature remains unknown.

Multiple gravitational wave detections of merging black holes have been identified within the last few years by the Laser Interferometer Gravitational-Wave Observatory (LIGO), commemorated with the 2017 physics Nobel Prize to Kip Thorne, Barry Barish, and Rainer Weiss. A definitive confirmation of the existence of black holes was celebrated with the 2020 physics Nobel Prize awarded to Andrea Ghez, Reinhard Genzel and Roger Penrose. Understanding the origin of black holes has thus emerged as a central issue in physics.

Surprisingly, LIGO has recently observed a 2.6 solar-mass black hole candidate (event GW190814, reported in Astrophysical Journal Letters 896 (2020) 2, L44). Assuming this is a black hole, and not an unusually massive neutron star, where does it come from?

Solar-mass black holes are particularly intriguing, since they are not expected from conventional stellar evolution astrophysics. They might instead be primordial black holes, formed in the early universe long before stars and galaxies, or they might have been "transmuted" from existing neutron stars. Primordial black holes could make up some part or all of dark matter. If a neutron star captures a primordial black hole, the black hole consumes the neutron star from the inside, turning it into a solar-mass black hole; this process can produce a population of solar-mass black holes regardless of how small the primordial black holes are. Other forms of dark matter can also accumulate inside a neutron star, eventually causing its collapse into a solar-mass black hole.

A new study, published in Physical Review Letters, advances a decisive test to investigate the origin of solar-mass black holes. This work was led by the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Fellow Volodymyr Takhistov and the international team included George M. Fuller, Distinguished Professor of Physics and Director of the Center for Astrophysics and Space Science at the University of California, San Diego, as well as Alexander Kusenko, Professor of Physics and Astronomy at the University of California, Los Angeles and a Kavli IPMU Visiting Senior Scientist.

As the study discusses, "transmuted" solar-mass black holes remaining from neutron stars devoured by dark matter (either tiny primordial black holes or accumulated particle dark matter) should follow the mass distribution of the original host neutron stars. Since the neutron star mass distribution is expected to peak around 1.5 solar masses, it is unlikely that heavier solar-mass black holes originated from dark matter interacting with neutron stars. This suggests that events such as the candidate detected by LIGO, if they indeed constitute black holes, could be of primordial origin from the early Universe and thus drastically affect our understanding of astronomy. Future observations will use this test to investigate and identify the origin of black holes.
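
To make the logic of that test concrete (with stand-in numbers of mine, not the paper's), one can ask how far a 2.6-solar-mass object sits from the peak of a roughly Gaussian neutron-star mass distribution:

```python
import math

# Toy version of the mass-distribution argument above. The Gaussian shape
# and its parameters are assumptions for illustration, not the paper's fit.
mu, sigma = 1.5, 0.2      # assumed neutron-star mass peak and spread (Msun)
candidate = 2.6           # solar masses, the GW190814 candidate

z = (candidate - mu) / sigma
print(f"{candidate} Msun lies {z:.1f} sigma above the neutron-star peak")
# ~5.5 sigma: a "transmuted" origin would be very unlikely for such a mass,
# pointing toward a primordial origin if the object is indeed a black hole.
```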

Read more at Science Daily

A giant, sizzling planet may be orbiting the star Vega

 Astronomers have discovered new hints of a giant, scorching-hot planet orbiting Vega, one of the brightest stars in the night sky.

The research, published this month in The Astrophysical Journal, was led by University of Colorado Boulder student Spencer Hurt, an undergraduate in the Department of Astrophysical and Planetary Sciences.

It focuses on an iconic and relatively young star, Vega, which is part of the constellation Lyra and has a mass twice that of our own sun. This celestial body sits just 25 light-years, or about 150 trillion miles, from Earth -- pretty close, astronomically speaking.

Scientists can also see Vega with telescopes even when it's light out, which makes it a prime candidate for research, said study coauthor Samuel Quinn.

"It's bright enough that you can observe it at twilight when other stars are getting washed out by sunlight," said Quinn, an astronomer at the Harvard and Smithsonian Center for Astrophysics (CfA).

Despite the star's fame, researchers have yet to find a single planet in orbit around Vega. That might be about to change: Drawing on a decade of observations from the ground, Hurt, Quinn and their colleagues unearthed a curious signal that could be the star's first known world.

If the team's findings bear out, the alien planet would orbit so close to Vega that its years would last less than two-and-a-half Earth days. (Mercury, in contrast, takes 88 days to circle the sun.) This candidate planet could also rank as the second hottest world known to science -- with surface temperatures averaging a searing 5,390 degrees Fahrenheit.

Hurt said the group's research also helps to narrow down where other, exotic worlds might be hiding in Vega's neighborhood.

"This is a massive system, much larger than our own solar system," Hurt said. "There could be other planets throughout that system. It's just a matter of whether we can detect them."

Youthful energy

Quinn would like to try. Scientists have discovered more than 4,000 exoplanets, or planets beyond Earth's solar system, to date. Few of those, however, circle stars that are as bright or as close to Earth as Vega. That means that, if there are planets around the star, scientists could get a really detailed look at them.

"It would be really exciting to find a planet around Vega because it offers possibilities for future characterization in ways that planets around fainter stars wouldn't," Quinn said.

There's just one catch: Vega is what scientists call an A-type star, the name for objects that tend to be bigger, younger and much faster-spinning than our own sun. Vega, for example, rotates around its axis once every 16 hours -- much faster than the sun, whose rotational period clocks in at 27 Earth days. Such a lightning-fast pace, Quinn said, can make it difficult for scientists to collect precise data on the star's motion and, by extension, any planets in orbit around it.

To take on that game of celestial hide-and-seek, he and colleagues pored through roughly 10 years of data on Vega collected by the Fred Lawrence Whipple Observatory in Arizona. In particular, the team was looking for a tell-tale signal of an alien planet -- a slight jiggle in the star's velocity.

"If you have a planet around a star, it can tug on the star, causing it to wobble back and forth," Quinn said.

Hot and puffy

The search may have paid off, said Hurt, who began the study as a summer research fellow working for Quinn at the CfA. The team discovered a signal that indicates that Vega might host what astronomers call a "hot Neptune" or maybe a "hot Jupiter."

"It would be at least the size of Neptune, potentially as big as Jupiter and would be closer to Vega than Mercury is to the sun," Hurt said.

That close to Vega, he added, the candidate world might puff up like a balloon, and even iron would melt into gas in its atmosphere.

The researchers have a lot more work to do before they can definitively say that they've discovered this sizzling planet. Hurt noted that the easiest way to look for it might be to scan the stellar system directly to look for light emitted from the hot, bright planet.

Read more at Science Daily

New discovery explains antihypertensive properties of green and black tea

 A new study from the University of California, Irvine shows that compounds in both green and black tea relax blood vessels by activating ion channel proteins in the blood vessel wall. The discovery helps explain the antihypertensive properties of tea and could lead to the design of new blood pressure-lowering medications.

Published in Cellular Physiology and Biochemistry, the discovery was made by the laboratory of Geoffrey Abbott, PhD, a professor in the Department of Physiology and Biophysics at the UCI School of Medicine. Kaitlyn Redford, a graduate student in the Abbott Lab, was first author of the study titled, "KCNQ5 potassium channel activation underlies vasodilation by tea."

Results from the research revealed that two catechin-type flavonoid compounds found in tea (epicatechin gallate and epigallocatechin-3-gallate) each activate a specific type of ion channel protein named KCNQ5, which allows potassium ions to diffuse out of cells to reduce cellular excitability. As KCNQ5 is found in the smooth muscle that lines blood vessels, its activation by tea catechins was also predicted to relax blood vessels -- a prediction confirmed by collaborators at the University of Copenhagen.

"We found by using computer modeling and mutagenesis studies that specific catechins bind to the foot of the voltage sensor, which is the part of KCNQ5 that allows the channel to open in response to cellular excitation. This binding allows the channel to open much more easily and earlier in the cellular excitation process," explained Abbott.

Because as many as one third of the world's adults have hypertension, the leading modifiable risk factor for global cardiovascular disease and premature mortality, new approaches to treating hypertension have enormous potential to improve global public health. Prior studies demonstrated that consumption of green or black tea can reduce blood pressure by a small but consistent amount, and catechins were previously found to contribute to this property. Identification of KCNQ5 as a novel target for the antihypertensive properties of tea catechins may facilitate medicinal chemistry optimization for improved potency or efficacy.

In addition to its role in controlling vascular tone, KCNQ5 is expressed in various parts of the brain, where it regulates electrical activity and signaling between neurons. Pathogenic KCNQ5 gene variants exist that impair its channel function and in doing so cause epileptic encephalopathy, a developmental disorder that is severely debilitating and causes frequent seizures. Because catechins can cross the blood-brain barrier, discovery of their ability to activate KCNQ5 may suggest a future mechanism to fix broken KCNQ5 channels to ameliorate brain excitability disorders stemming from their dysfunction.

Tea has been produced and consumed for more than 4,000 years and upwards of 2 billion cups of tea are currently drunk each day worldwide, second only to water in terms of the volume consumed by people globally. The three commonly consumed caffeinated teas (green, oolong, and black) are all produced from the leaves of the evergreen species Camellia sinensis, the differences arising from different degrees of fermentation during tea production.

Black tea is commonly mixed with milk before it is consumed in countries including the United Kingdom and the United States. The researchers in the present study found that when black tea was directly applied to cells containing the KCNQ5 channel, the addition of milk prevented the beneficial KCNQ5-activating effects of tea. However, according to Abbott, "We don't believe this means one needs to avoid milk when drinking tea to take advantage of the beneficial properties of tea. We are confident that the environment in the human stomach will separate the catechins from the proteins and other molecules in milk that would otherwise block catechins' beneficial effects."

This hypothesis is borne out by other studies showing antihypertensive benefits of tea regardless of milk co-consumption. The team also found, using mass spectrometry, that warming green tea to 35 degrees Celsius alters its chemical composition in a way that renders it more effective at activating KCNQ5.

"Regardless of whether tea is consumed iced or hot, this temperature is achieved after tea is drunk, as human body temperature is about 37 degrees Celsius," explained Abbott. "Thus, simply by drinking tea we activate its beneficial, antihypertensive properties."

Read more at Science Daily

Study finds two servings of fish per week can help prevent recurrent heart disease

 An analysis of several large studies involving participants from more than 60 countries, spearheaded by researchers from McMaster University, has found that eating oily fish regularly can help prevent cardiovascular disease (CVD) in high-risk individuals, such as those who already have heart disease or stroke.

The critical ingredient is omega-3 fatty acids: researchers found that high-risk people who ate two servings of omega-3-rich fish each week had about a one-sixth lower risk of major CVD events such as heart attacks and strokes.

"There is a significant protective benefit of fish consumption in people with cardiovascular disease," said lead co-author Andrew Mente, associate professor of research methods, evidence, and impact at McMaster and a principal investigator at the Population Health Research Institute.

No benefit was observed with consumption of fish in those without heart disease or stroke.

"This study has important implications for guidelines on fish intake globally. It indicates that increasing fish consumption and particularly oily fish in vascular patients may produce a modest cardiovascular benefit."

Mente said people at low risk for cardiovascular disease can still enjoy modest protection from CVD by eating fish rich in omega-3, but the health benefits were less pronounced than in high-risk individuals.

The study was published in JAMA Internal Medicine on March 8.

The findings were based on data from nearly 192,000 people in four studies, including about 52,000 with CVD, making this the only such analysis with participants on all five continents. Previous studies focused mainly on North America, Europe, China and Japan, with little information from other regions.

"This is by far the most diverse study of fish intake and health outcomes in the world and the only one with sufficient numbers with representation from high, middle and low income countries from all inhabited continents of the world," said study co-lead Dr. Salim Yusuf, professor of medicine at the Michael G. DeGroote School of Medicine and executive director of the PHRI.

This analysis is based on data from several studies conducted by the PHRI over the last 25 years. These studies were funded by the Canadian Institutes of Health Research, several pharmaceutical companies, charities, the Population Health Research Institute and the Hamilton Health Sciences Research Institute.

From Science Daily