Jan 14, 2021

Burst of light on April 15, 2020 was likely a magnetar eruption in a nearby galaxy

 On April 15, 2020, a brief burst of high-energy light swept through the solar system, triggering instruments on several NASA and European spacecraft. Now, multiple international science teams conclude that the blast came from a supermagnetized stellar remnant known as a magnetar located in a neighboring galaxy.

This finding confirms long-held suspicions that some gamma-ray bursts (GRBs) -- cosmic eruptions detected in the sky almost daily -- are in fact powerful flares from magnetars relatively close to home.

"This has always been regarded as a possibility, and several GRBs observed since 2005 have provided tantalizing evidence," said Kevin Hurley, a Senior Space Fellow with the Space Sciences Laboratory at the University of California, Berkeley, who joined several scientists to discuss the burst at the virtual 237th meeting of the American Astronomical Society. "The April 15 event is a game changer because we found that the burst almost certainly lies within the disk of the nearby galaxy NGC 253."

Papers analyzing different aspects of the event and its implications were published on Jan. 13 in the journals Nature and Nature Astronomy.

GRBs, the most powerful explosions in the cosmos, can be detected across billions of light-years. Those lasting less than about two seconds, called short GRBs, occur when a pair of orbiting neutron stars -- both the crushed remnants of exploded stars -- spiral into each other and merge. Astronomers confirmed this scenario for at least some short GRBs in 2017, when a burst followed the arrival of gravitational waves -- ripples in space-time -- produced when neutron stars merged 130 million light-years away.

Magnetars are neutron stars with the strongest-known magnetic fields, with up to a thousand times the intensity of typical neutron stars and up to 10 trillion times the strength of a refrigerator magnet. Modest disturbances to the magnetic field can cause magnetars to erupt with sporadic X-ray bursts for weeks or longer.

Rarely, magnetars produce enormous eruptions called giant flares that produce gamma rays, the highest-energy form of light.

Most of the 29 magnetars now cataloged in our Milky Way galaxy exhibit occasional X-ray activity, but only two have produced giant flares. The most recent event, detected on Dec. 27, 2004, produced measurable changes in Earth's upper atmosphere despite erupting from a magnetar located about 28,000 light-years away.

Shortly before 4:42 a.m. EDT on April 15, 2020, a brief, powerful burst of X-rays and gamma rays swept past Mars, triggering the Russian High Energy Neutron Detector aboard NASA's Mars Odyssey spacecraft, which has been orbiting the Red Planet since 2001. About 6.6 minutes later, the burst triggered the Russian Konus instrument aboard NASA's Wind satellite, which orbits a point between Earth and the Sun located about 930,000 miles (1.5 million kilometers) away. After another 4.5 seconds, the radiation passed Earth, triggering instruments on NASA's Fermi Gamma-ray Space Telescope, as well as on the European Space Agency's INTEGRAL satellite and Atmosphere-Space Interactions Monitor (ASIM) aboard the International Space Station.

The eruption occurred beyond the field of view of the Burst Alert Telescope (BAT) on NASA's Neil Gehrels Swift Observatory, so its onboard computer did not alert astronomers on the ground. However, thanks to a new capability called the Gamma-ray Urgent Archiver for Novel Opportunities (GUANO), the Swift team can beam back BAT data when other satellites trigger on a burst. Analysis of this data provided additional insight into the event.

The pulse of radiation lasted just 140 milliseconds -- as fast as the blink of an eye or a finger snap.

The Fermi, Swift, Wind, Mars Odyssey and INTEGRAL missions all participate in a GRB-locating system called the InterPlanetary Network (IPN). Now funded by the Fermi project, the IPN has operated since the late 1970s using different spacecraft located throughout the solar system. Because the signal reached each detector at different times, any pair of them can help narrow down a burst's location in the sky. The greater the distances between spacecraft, the better the technique's precision.
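
The geometry behind the IPN technique is simple enough to sketch: a difference in arrival time between two widely separated detectors confines the burst to a ring on the sky whose opening angle depends only on that delay, the baseline between the spacecraft, and the speed of light. The short Python sketch below illustrates the idea; the 4.5-second delay and 1.5-million-kilometer Earth-Wind baseline are borrowed from the description above purely as round illustrative numbers, not as the published geometry of GRB 200415A.

```python
import math

C_KM_S = 299_792.458  # speed of light in km/s

def annulus_angle_deg(delay_s: float, baseline_km: float) -> float:
    """Opening angle of the sky ring (annulus) allowed by a timing delay.

    A burst arriving with time difference `delay_s` at two detectors
    separated by `baseline_km` must lie on a ring whose angle theta from
    the baseline satisfies cos(theta) = c * delay / baseline.
    """
    cos_theta = C_KM_S * delay_s / baseline_km
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("delay exceeds the light travel time across the baseline")
    return math.degrees(math.acos(cos_theta))

# Illustrative numbers: a 4.5 s delay over a ~1.5 million km Earth-Wind baseline.
print(f"burst lies on a ring ~{annulus_angle_deg(4.5, 1_500_000):.0f} degrees from the baseline")
```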

The IPN placed the April 15 burst, called GRB 200415A, squarely in the central region of NGC 253, a bright spiral galaxy located about 11.4 million light-years away in the constellation Sculptor. This is the most precise sky position yet determined for a magnetar located beyond the Large Magellanic Cloud, a satellite of our galaxy and host to a giant flare in 1979, the first ever detected.

Giant flares from magnetars in the Milky Way and its satellites evolve in a distinct way, with a rapid rise to peak brightness followed by a more gradual tail of fluctuating emission. These variations result from the magnetar's rotation, which repeatedly brings the flare location in and out of view from Earth, much like a lighthouse.

Observing this fluctuating tail is conclusive evidence of a giant flare. Seen from millions of light-years away, though, this emission is too dim to detect with today's instruments. Because these signatures are missing, giant flares in our galactic neighborhood may be masquerading as much more distant and powerful merger-type GRBs.

A detailed analysis of data from Fermi's Gamma-ray Burst Monitor (GBM) and Swift's BAT provides strong evidence that the April 15 event was unlike any burst associated with mergers, noted Oliver Roberts, an associate scientist at Universities Space Research Association's Science and Technology Institute in Huntsville, Alabama, who led the study.

In particular, this was the first giant flare known to occur since Fermi's 2008 launch, and the GBM's ability to resolve changes at microsecond timescales proved critical. The observations reveal multiple pulses, with the first one appearing in just 77 microseconds -- about 13 times faster than a camera flash and nearly 100 times faster than the rise of the fastest GRBs produced by mergers. The GBM also detected rapid variations in energy over the course of the flare that have never been observed before.

"Giant flares within our galaxy are so brilliant that they overwhelm our instruments, leaving them to hang onto their secrets," Roberts said. "For the first time, GRB 200415A and distant flares like it allow our instruments to capture every feature and explore these powerful eruptions in unparalleled depth."

Giant flares are poorly understood, but astronomers think they result from a sudden rearrangement of the magnetic field. One possibility is that the field high above the surface of the magnetar may become too twisted, suddenly releasing energy as it settles into a more stable configuration. Alternatively, a mechanical failure of the magnetar's crust -- a starquake -- may trigger the sudden reconfiguration.

Roberts and his colleagues say the data show some evidence of seismic vibrations during the eruption. The highest-energy X-rays recorded by Fermi's GBM reached 3 million electron volts (MeV), or about a million times the energy of blue light, itself a record for giant flares. The researchers say this emission arose from a cloud of ejected electrons and positrons moving at about 99% the speed of light. The short duration of the emission and its changing brightness and energy reflect the magnetar's rotation, ramping up and down like the headlights of a car making a turn. Roberts describes it as starting off as an opaque blob -- he pictures it as resembling a photon torpedo from the "Star Trek" franchise -- that expands and diffuses as it travels.

The torpedo also factors into one of the event's biggest surprises. Fermi's main instrument, the Large Area Telescope (LAT), also detected three gamma rays, with energies of 480 MeV, 1.3 billion electron volts (GeV), and 1.7 GeV -- the highest-energy light ever detected from a magnetar giant flare. What's surprising is that all of these gamma rays appeared long after the flare had diminished in other instruments.

Nicola Omodei, a senior research scientist at Stanford University in California, led the LAT team investigating these gamma rays, which arrived between 19 seconds and 4.7 minutes after the main event. The scientists conclude that this signal most likely comes from the magnetar flare. "For the LAT to detect a random short GRB in the same region of the sky and at nearly the same time as the flare, we would have to wait, on average, at least 6 million years," he explained.

A magnetar produces a steady outflow of fast-moving particles. As it moves through space, this outflow plows into, slows, and diverts interstellar gas. The gas piles up, becomes heated and compressed, and forms a type of shock wave called a bow shock.

In the model proposed by the LAT team, the flare's initial pulse of gamma rays travels outward at the speed of light, followed by the cloud of ejected matter, which is moving nearly as fast. After several days, they both reach the bow shock. The gamma rays pass through. Seconds later, the cloud of particles -- now expanded into a vast, thin shell -- collides with accumulated gas at the bow shock. This interaction creates shock waves that accelerate particles, producing the highest-energy gamma rays after the main burst.

The April 15 flare proves that these events constitute their own class of GRBs. Eric Burns, an assistant professor of physics and astronomy at Louisiana State University in Baton Rouge, led a study investigating additional suspects using data from numerous missions. The findings will appear in The Astrophysical Journal Letters. Bursts near the galaxy M81 in 2005 and the Andromeda galaxy (M31) in 2007 had already been suggested to be giant flares, and the team additionally identified a flare in M83, also seen in 2007 but newly reported. Add to these the giant flare from 1979 and those observed in our Milky Way in 1998 and 2004.

Read more at Science Daily

A climate in crisis calls for investment in direct air capture, new research finds

There is a growing consensus among scientists, as well as national and local governments representing hundreds of millions of people, that humanity faces a climate crisis that demands a crisis response. New research from the University of California San Diego explores one possible mode of response: a massively funded program to deploy direct air capture (DAC) systems that remove CO2 directly from the ambient air and sequester it safely underground.

The findings reveal such a program could reverse the rise in global temperature well before 2100, but only with immediate and sustained investments from governments and firms to scale up the new technology.

Even alongside the enormous undertaking explored in the study, the research reveals that governments would still need to adopt policies that achieve deep cuts in CO2 emissions. The scale of the effort needed just to meet the Paris Agreement goal of holding average global temperature rise below 2 degrees Celsius is massive.

The study, published in Nature Communications, assesses how crisis-level government funding on direct air capture -- on par with government spending on wars or pandemics -- would lead to deployment of a fleet of DAC plants that would collectively remove CO2 from the atmosphere.

"DAC is substantially more expensive than many conventional mitigation measures, but costs could fall as firms gain experience with the technology," said first-author Ryan Hanna, assistant research scientist at UC San Diego. "If that happens, politicians could turn to the technology in response to public pressure if conventional mitigation proves politically or economically difficult."

Co-author David G. Victor, professor of industrial innovation at UC San Diego's School of Global Policy and Strategy, added that atmospheric CO2 concentrations are such that meeting climate goals requires not just preventing new emissions through extensive decarbonization of the energy system, but also finding ways to remove historical emissions already in the atmosphere.

"Current pledges to cut global emissions put us on track for about 3 degrees C of warming," Victor said. "This reality calls for research and action around the politics of emergency response. In times of crisis, such as war or pandemics, many barriers to policy expenditure and implementation are eclipsed by the need to mobilize aggressively."

Emergency deployment of direct air capture

The study calculates the funding, net CO2 removal, and climate impacts of a large and sustained program to deploy direct air capture technology.

The authors find that if an emergency direct air capture program were to commence in 2025 and receive investment of 1.2-1.9% of global GDP annually, it would remove 2.2-2.3 gigatons of CO2 per year by 2050 and 13-20 gigatons per year by 2075. Cumulatively, the program would remove 570-840 gigatons of CO2 from 2025 to 2100, which falls within the range of CO2 removals that IPCC scenarios suggest will be needed to meet Paris targets.

Even with such a massive program, the globe would see a temperature rise of 2.4-2.5°C in the year 2100 without further cuts in global emissions below current trajectories.

Exploring the reality of a fleet of CO2 scrubbers in the sky

According to the authors, DAC has attributes that could prove attractive to policymakers if political pressure to act on climate change continues to mount while cutting emissions remains insurmountable.

"Policymakers might see value in the installation of a fleet of CO2 scrubbers: deployments would be highly controllable by the governments and firms that invest in them, their carbon removals are verifiable, and they do not threaten the economic competitiveness of existing industries," said Hanna.

Based on previous spending the U.S. has made in times of crisis, from the Civil War to Operation Warp Speed, the authors estimate the financial resources that might be available for emergency deployment of direct air capture -- in excess of one trillion dollars per year.

The authors then built a bottom-up deployment model that constructs, operates and retires successive vintages of DAC scrubbers, given available funds and the rates at which direct air capture technologies might improve with time. They link the technological and economic modeling to climate models that calculate the effects of these deployments on atmospheric CO2 concentration level and global mean surface temperature.
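
To make the shape of such a model concrete, here is a deliberately simplified Python sketch, not the study's actual model: each year a fixed budget pays for CO2 removal at a unit cost that falls along a one-factor learning curve, and annual removal is capped by how fast the industry can scale. Every parameter value (budget, starting cost, learning rate, growth limit) is an illustrative assumption; the published model additionally tracks the construction, operation, and retirement of individual plant vintages.

```python
import math

def simulate_dac(years=75, budget_usd=1.5e12, cost0_usd_per_t=600.0,
                 x0_t=1.0e6, learning_rate=0.15, start_capacity_t=1.0e6,
                 max_growth=0.20):
    """Toy budget- and growth-constrained DAC program (illustrative only).

    Each year the program removes the smaller of (a) what the budget can
    pay for at the current unit cost and (b) what the industry can deliver
    given a maximum annual growth rate. Unit cost falls with cumulative
    tonnes removed via a one-factor learning curve.
    """
    b = math.log2(1.0 - learning_rate)   # cost exponent per doubling of experience
    cumulative = x0_t                    # assumed prior experience, tonnes
    capacity = start_capacity_t          # deliverable removal this year, tonnes/yr
    total = 0.0
    for _ in range(years):
        unit_cost = cost0_usd_per_t * (cumulative / x0_t) ** b
        removed = min(budget_usd / unit_cost, capacity)
        total += removed
        cumulative += removed
        capacity *= 1.0 + max_growth     # industry scale-up limit
    return total / 1e9                   # gigatonnes removed over the run

print(f"~{simulate_dac():,.0f} GtCO2 removed under these toy assumptions")
```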

With massive financial resources committed to DAC, the study finds that the ability of the DAC industry to scale up is the main factor limiting CO2 removal from the atmosphere. The authors point to the ongoing pandemic as an analog: even though the FDA has authorized use of coronavirus vaccines, there is still a huge logistical challenge in scaling up production and in transporting and distributing the new vaccines quickly and efficiently to vast segments of the public.

Conventional mitigation is still needed, even with wartime spending combating climate change

"Crisis deployment of direct air capture, even at the extreme of what is technically feasible, is not a substitute for conventional mitigation," the authors write.

Nevertheless, they note that the long-term vision for combating climate change requires taking negative emissions seriously.

"For policymakers, one implication of this finding is the high value of near-term direct air capture deployments -- even if societies today are not yet treating climate change as a crisis -- because near term deployments enhance future scalability," they write. "Rather than avoiding direct air capture deployments because of high near-term costs, the right policy approach is the opposite."

Additionally, they note that such a large program would grow a new economic sector, producing a substantial number of new jobs.

Read more at Science Daily

How the brain paralyzes you while you sleep

We laugh when we see Homer Simpson falling asleep while driving, while in church, and even while operating the nuclear reactor. In reality though, narcolepsy, cataplexy, and rapid eye movement (REM) sleep behavior disorder are all serious sleep-related illnesses. Researchers at the University of Tsukuba led by Professor Takeshi Sakurai have found neurons in the brain that link all three disorders and could provide a target for treatments.

REM sleep is the phase in which we dream. Our eyes move back and forth, but our bodies remain still. This near-paralysis of muscles while dreaming is called REM-atonia, and it is lacking in people with REM sleep behavior disorder. Instead of lying still during REM sleep, their bodies move, sometimes going as far as standing up, jumping, yelling, or punching. Sakurai and his team set out to find the neurons in the brain that normally prevent this type of behavior during REM sleep.

Working with mice, the team identified a specific group of neurons as likely candidates. These cells were located in an area of the brain called the ventral medial medulla and received input from another area called the sublaterodorsal tegmental nucleus, or SLD. "The anatomy of the neurons we found matched what we know," explains Sakurai. "They were connected to neurons that control voluntary movements, but not those that control muscles in the eyes or internal organs. Importantly, they were inhibitory, meaning that they can prevent muscle movement when active." When the researchers blocked the input to these neurons, the mice began moving during their sleep, just like someone with REM sleep behavior disorder.

Narcolepsy is characterized by suddenly falling asleep at any time during the day, even in mid-sentence. Cataplexy is a related illness in which people suddenly lose muscle tone and collapse. Although they are awake, their muscles act as if they are in REM sleep. Sakurai and his team suspected that the special neurons they found were related to these two disorders. They tested their hypothesis using a mouse model of narcolepsy in which cataplexic attacks could be triggered by chocolate. "We found that silencing the SLD-to-ventral medial medulla pathway reduced the number of cataplexic bouts," says Sakurai.

Read more at Science Daily

A rift in the retina may help repair the optic nerve

 In experiments in mouse tissues and human cells, Johns Hopkins Medicine researchers say they have found that removing a membrane that lines the back of the eye may improve the success rate for regrowing nerve cells damaged by blinding diseases. The findings are specifically aimed at discovering new ways to reverse vision loss caused by glaucoma and other diseases that affect the optic nerve, the information highway from the eye to the brain.

"The idea of restoring vision to someone who has lost it from optic nerve disease has been considered science fiction for decades. But in the last five years, stem cell biology has reached a point where it's feasible," says Thomas Johnson, M.D., Ph.D., assistant professor of ophthalmology at the Wilmer Eye Institute at the Johns Hopkins University School of Medicine.

The research was published Jan. 12 in the journal Stem Cell Reports.

A human eye has more than 1 million small nerve cells, called retinal ganglion cells, that transmit signals from light-collecting cells called photoreceptors in the back of the eye to the brain. Retinal ganglion cells send out long arms, or axons, that bundle together with other retinal ganglion cell projections, forming the optic nerve that leads to the brain.

When the eye is subjected to high pressure, as occurs in glaucoma, it damages and eventually kills retinal ganglion cells. In other conditions, inflammation, blocked blood vessels, or tumors can kill retinal ganglion cells. Once they die, retinal ganglion cells don't regenerate.

"That's why it is so important to detect glaucoma early," says Johnson. "We know a lot about how to treat glaucoma and help nerve cells survive an injury, but once the cells die off, the damage to someone's vision becomes permanent."

Johnson is a member of a team of researchers at the Johns Hopkins Wilmer Eye Institute looking for ways scientists can repair or replace lost optic neurons by growing new cells.

In the current study, Johnson and his team grew mouse retinas in a laboratory dish and tracked what happens when they added human retinal ganglion cells, derived from human embryonic stem cells, to the surface of the mouse retinas. They found that most of the transplanted human cells were unable to integrate into the retinal tissue, which contains several layers of cells.

"The transplanted cells clumped together rather than dispersing from one another like on a living retina," says Johnson.

However, the researchers found that a small number of transplanted retinal cells were able to settle uniformly into certain areas of the mouse retina. Looking more closely, the areas where the transplanted cells integrated well aligned with locations where the researchers had to make incisions into the mouse retinas to get them to lie flat in the culture dish. At these incision points, some of the transplanted cells were able to crawl into the retina and integrate themselves in the proper place within the tissue.

"This suggested that there was some type of barrier that had been broken by these incisions," Johnson says. "If we could find a way to remove it, we may have more success with transplantation."

It turns out that the barrier is a well-known anatomical structure of the retina, called the internal limiting membrane. It's a translucent connective tissue created by the retina's cells to separate the fluid of the eye from the retina.

After using an enzyme to loosen the connective fibers of the internal limiting membrane, the researchers removed the membrane and applied the transplanted human cells to the retinas. They found that most of the transplanted retinal ganglion cells grew in a more normal pattern, integrating themselves more fully. The transplanted cells also showed signs of establishing new nerve connections to the rest of the retinal structure when compared with retinas that had intact membranes.

"These findings suggest that altering the internal limiting membrane may be a necessary step in our aim to regrow new cells in damaged retinas," says Johnson.

The researchers plan to continue investigating the development of transplanted retinal ganglion cells to determine the factors they need to function once integrated into the retina.

Read more at Science Daily

Jan 13, 2021

Quasar discovery sets new distance record

 An international team of astronomers has discovered the most distant quasar yet found -- a cosmic monster more than 13 billion light-years from Earth powered by a supermassive black hole more than 1.6 billion times more massive than the Sun and more than 1,000 times brighter than our entire Milky Way Galaxy.

The quasar, called J0313-1806, is seen as it was when the Universe was only 670 million years old and is providing astronomers with valuable insight on how massive galaxies -- and the supermassive black holes at their cores -- formed in the early Universe. The scientists presented their findings at the American Astronomical Society's meeting, now underway virtually, and in a paper accepted for publication in The Astrophysical Journal Letters.

The new discovery beats the previous distance record for a quasar set three years ago. Observations with the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile confirmed the distance measurement to high precision.

Quasars occur when the powerful gravity of a supermassive black hole at a galaxy's core draws in surrounding material, which forms an orbiting disk of superheated gas around the black hole. The process releases tremendous amounts of energy, making the quasar extremely bright, often outshining the rest of the galaxy.

The black hole at the core of J0313-1806 is twice as massive as that of the previous record holder, and that fact provides astronomers with a valuable clue about such black holes and their effect on their host galaxies.

"This is the earliest evidence of how a supermassive black hole is affecting the galaxy around it," said Feige Wang, a Hubble Fellow at the University of Arizona's Steward Observatory and leader of the research team. "From observations of less distant galaxies, we know that this has to happen, but we have never seen it happening so early in the Universe."

The huge mass of J0313-1806's black hole at such an early time in the Universe's history rules out two theoretical models for how such objects formed, the astronomers said. In the first of these models, individual massive stars explode as supernovae and collapse into black holes that then coalesce into larger black holes. In the second, dense clusters of stars collapse into a massive black hole. In both cases, however, the process takes too long to produce a black hole as massive as the one in J0313-1806 by the age at which we see it.
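
The "takes too long" argument can be made concrete with a standard back-of-the-envelope estimate (not a calculation from the paper): a black hole accreting continuously at the Eddington limit grows exponentially with an e-folding time of roughly 50 million years for a typical 10% radiative efficiency, so the number of e-foldings needed from a given seed mass sets a minimum growth time. In the Python sketch below, the seed masses and the assumption of uninterrupted Eddington-limited accretion are illustrative.

```python
import math

# Physical constants (SI)
SIGMA_T = 6.6524e-29   # Thomson cross-section, m^2
C       = 2.9979e8     # speed of light, m/s
G       = 6.674e-11    # gravitational constant
M_P     = 1.6726e-27   # proton mass, kg
YEAR    = 3.156e7      # seconds per year

def growth_time_yr(final_mass_msun, seed_mass_msun, efficiency=0.1):
    """Minimum time to grow a seed to `final_mass_msun` assuming continuous
    accretion at the Eddington limit with the given radiative efficiency."""
    salpeter_yr = (efficiency / (1.0 - efficiency)) * SIGMA_T * C / (4.0 * math.pi * G * M_P) / YEAR
    return salpeter_yr * math.log(final_mass_msun / seed_mass_msun)

# Illustrative seeds: a ~100-solar-mass stellar remnant vs. a ~1e5-solar-mass
# direct-collapse seed, grown to the 1.6-billion-solar-mass black hole of J0313-1806.
for seed in (1e2, 1e5):
    t = growth_time_yr(1.6e9, seed)
    print(f"seed of {seed:g} solar masses -> {t/1e6:.0f} million years "
          f"({'longer' if t > 6.7e8 else 'shorter'} than the Universe's ~670 Myr age at the time we see the quasar)")
```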

"This tells you that no matter what you do, the seed of this black hole must have formed by a different mechanism," said Xiaohui Fan, also of the University of Arizona. "In this case, it's a mechanism that involves vast quantities of primordial, cold hydrogen gas directly collapsing into a seed black hole."

The ALMA observations of J0313-1806 provided tantalizing details about the quasar host galaxy, which is forming new stars at a rate 200 times that of our Milky Way. "This is a relatively high star formation rate in galaxies of similar age, and it indicates that the quasar host galaxy is growing very fast," said Jinyi Yang, the second author of the report, who is a Peter A. Strittmatter Fellow at the University of Arizona.

The quasar's brightness indicates that the black hole is swallowing the equivalent of 25 Suns every year. The energy released by that rapid feeding, the astronomers said, is probably driving a powerful outflow of ionized gas seen moving at about 20 percent of the speed of light.

Such outflows are thought to be what ultimately stops star formation in the galaxy.

"We think those supermassive black holes were the reason why many of the big galaxies stopped forming stars at some point," Fan said. "We observe this 'quenching' at later times, but until now, we didn't know how early this process began in the history of the Universe. This quasar is the earliest evidence that quenching may have been happening at very early times."

This process will also eventually leave the black hole with nothing left to eat, halting its growth, Fan pointed out.

In addition to ALMA, the astronomers used the 6.5-meter Magellan Baade telescope, the Gemini North telescope and W.M. Keck Observatory in Hawaii, and the Gemini South telescope in Chile.

Read more at Science Daily

Could we harness energy from black holes?

 A remarkable prediction of Einstein's theory of general relativity -- the theory that connects space, time, and gravity -- is that rotating black holes have enormous amounts of energy available to be tapped.

For the last 50 years, scientists have tried to come up with methods to unleash this power. Nobel physicist Roger Penrose theorized that a particle disintegration could draw energy from a black hole; Stephen Hawking proposed that black holes could release energy through quantum mechanical emission; while Roger Blandford and Roman Znajek suggested electromagnetic torque as a main agent of energy extraction.

Now, in a study published in the journal Physical Review D, physicists Luca Comisso of Columbia University and Felipe Asenjo of Universidad Adolfo Ibáñez in Chile have found a new way to extract energy from black holes: breaking and rejoining magnetic field lines near the event horizon, the point from which nothing, not even light, can escape the black hole's gravitational pull.

"Black holes are commonly surrounded by a hot 'soup' of plasma particles that carry a magnetic field," said Luca Comisso, research scientist at Columbia University and first author on the study.

"Our theory shows that when magnetic field lines disconnect and reconnect, in just the right way, they can accelerate plasma particles to negative energies and large amounts of black hole energy can be extracted."

This finding could allow astronomers to better estimate the spin of black holes and understand what drives black hole energy emissions, and it might even provide a source of energy for the needs of an advanced civilization, Comisso said.

Comisso and Asenjo built their theory on the premise that reconnecting magnetic fields accelerate plasma particles in two different directions. One plasma flow is pushed against the black hole's spin, while the other is propelled in the spin's direction and can escape the black hole's clutches; the black hole gives up energy whenever the plasma it swallows carries negative energy.

"It is like a person could lose weight by eating candy with negative calories," said Comisso, who explained that essentially a black hole loses energy by eating negative-energy particles. "This might sound weird," he said, "but it can happen in a region called the ergosphere, where the spacetime continuum rotates so fast that every object spins in the same direction as the black hole."

Inside the ergosphere, magnetic reconnection is so extreme that the plasma particles are accelerated to velocities approaching the speed of light.

Asenjo, professor of physics at the Universidad Adolfo Ibáñez and coauthor on the study, explained that the high relative velocity between captured and escaping plasma streams is what allows the proposed process to extract massive amounts of energy from the black hole.
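
One schematic way to see how the escaping plasma can come away with more energy than went in (an illustrative energy balance in the spirit of Penrose's argument, not the paper's derivation): if the captured stream carries negative energy, conservation forces the escaping stream to carry more than the total that entered, with the difference drawn from the black hole's rotational energy.

```latex
% Schematic energy balance (illustrative). E_in is the energy of the plasma
% entering the reconnection region, E_cap the energy of the captured stream,
% and E_esc the energy of the escaping stream.
\[
  E_{\mathrm{esc}} = E_{\mathrm{in}} - E_{\mathrm{cap}}, \qquad
  \eta \equiv \frac{E_{\mathrm{esc}}}{E_{\mathrm{in}}} > 1
  \quad \text{whenever } E_{\mathrm{cap}} < 0 ,
\]
% with the excess |E_cap| paid for by the black hole's spin energy.
```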

"We calculated that the process of plasma energization can reach an efficiency of 150 percent, much higher than any power plant operating on Earth," Asenjo said. "Achieving an efficiency greater than 100 percent is possible because black holes leak energy, which is given away for free to the plasma escaping from the black hole."

The process of energy extraction envisioned by Comisso and Asenjo might already be operating in a large number of black holes. It may be what is driving black hole flares -- powerful bursts of radiation that can be detected from Earth.

"Our increased knowledge of how magnetic reconnection occurs in the vicinity of the black hole might be crucial for guiding our interpretation of current and future telescope observations of black holes, such as the ones by the Event Horizon Telescope," Asenjo said.

While it may sound like the stuff of science fiction, mining energy from black holes could be the answer to our future power needs.

"Thousands or millions of years from now, humanity might be able to survive around a black hole without harnessing energy from stars," Comisso said. "It is essentially a technological problem. If we look at the physics, there is nothing that prevents it."

The study, Magnetic reconnection as a mechanism for energy extraction from rotating black holes, was funded by the National Science Foundation's Windows on the Universe initiative, NASA, and Chile's National Fund for Scientific and Technological Development.

"The ideas and concepts discussed in this work are truly fascinating," said Vyacheslav (Slava) Lukin, a program director at the National Science Foundation. He said NSF aims to catalyze new theoretical efforts based on frontier observations at facilities such as the Event Horizon Telescope, bringing together theoretical physics and observational astronomy under one roof.

"We look forward to the potential translation of seemingly esoteric studies of black hole astrophysics into the practical realm," Lukin said.

Read more at Science Daily

How will we achieve carbon-neutral flight in future?

 Carbon-neutral aviation is possible, but in future, aircraft are likely to continue to be powered by fossil fuels. The CO2 they emit must be systematically stored underground. This is the most economical of various approaches researchers have compared in detail.

It is both politically agreed and necessary for climate protection that our entire economy become climate-neutral in the coming decades -- and that applies to air travel, too. This is a technically feasible goal, and there are numerous ways to achieve it. ETH Professor Marco Mazzotti and his team have now compared the options that appear to be the easiest to implement in the short and medium term and evaluated them according to factors such as cost-effectiveness.

The ETH researchers conclude that the most favourable option is to continue powering aircraft with fossil fuels in future, but then remove the associated CO2 emissions from the atmosphere using CO2 capture plants and store that CO2 permanently underground (carbon capture and storage, CCS). "The necessary technology already exists, and underground storage facilities have been operating for years in the North Sea and elsewhere," says Viola Becattini, a postdoc in Mazzotti's group and the study's first author.

"The approach may become a cost-competitive mitigation solution for air travel in case, for example, a carbon tax or a cap-and-trade system were imposed on emissions from fossil jet fuels, or if governments were to provide financial incentives for deploying CCS technologies and achieving climate goals," says ETH professor Mazzotti.

Directly or indirectly from the air

Basically, there are two ways to capture CO2: either directly from the air or indirectly at a site where organic material is burned, for example in a waste incineration plant. "Roughly speaking, half of the carbon in the waste burned in municipal incinerators comes from fossil sources, such as plastic that has been produced from petroleum. The other half is organic material, such as wood or wood products like paper and cardboard," Mazzotti says.

From a climate action perspective, capturing and storing the share of carbon that has fossil origin is a zero-sum game: it simply sends carbon that originated underground back to where it came from. As to the share of carbon from organic sources, this was originally absorbed from the air as CO2 by plants, so capturing and storing this carbon is an indirect way to remove CO2 from the air. This means CCS is a suitable method for putting carbon from fossil aviation fuels back underground -- and effectively making air travel carbon-neutral.

In their study, the ETH scientists were able to show that indirect carbon capture from waste incineration gases costs significantly less than direct carbon capture from the air, which is also already technically feasible.

Synthetic fuels more expensive

As a further option, the scientists investigated producing synthetic aviation fuel from CO2 captured directly or indirectly from the air (carbon capture and utilisation, CCU). Because the chemical synthesis of fuel from CO2 is energy-intensive and therefore expensive, this approach is in any case less economical than using fossil fuel and CCS. Regardless of whether the CO2 is captured directly or indirectly, CCU is about three times more expensive than CCS.

ETH Professor Mazzotti also points out one of CCU's pitfalls: depending on the energy source, this approach may even be counterproductive from a climate action perspective, namely if the electricity used to produce the fuel is from fossil fuel-fired power plants. "With Switzerland's current electricity mix or with France's, which has a high proportion of nuclear power, energy-intensive CCU is already more harmful to the climate than the status quo with fossil aviation fuels -- and even more so with the average electricity mix in the EU, which has a higher proportion of fossil fuel-fired power plants," Mazzotti says. The only situation in which CCU would make sense from a climate action perspective is if virtually all the electricity used comes from carbon-neutral sources.

More profitable over time

"Despite this limitation and the fundamentally high cost of CCU, there may be regions of the world where it makes sense. For example, where a lot of renewable electricity is generated and there are no suitable CO2 storage sites," Becattini says.

The ETH researchers calculated the costs of the various options for carbon-neutral aviation not only in the present day, but also for the period out to 2050. They expect CCS and CCU technologies to become less expensive both as technology advances and through economies of scale. The price of CO2 emissions levied as carbon taxes is likely to rise. Because of these two developments, the researchers expect CCS and CCU to become more profitable over time.

Infrastructure required

The researchers emphasise that there are other ways to make air travel carbon-neutral. For instance, there is much research underway into aircraft that run on either electricity or hydrogen. Mazzotti says that while these efforts should be taken seriously, there are drawbacks with both approaches. For one thing, electrically powered aircraft are likely to be unsuitable for long-haul flights because of how much their batteries will weigh. And before hydrogen can be used as a fuel, both the aircraft and their supply infrastructure will have to be completely developed and built from scratch. Because these approaches are currently still in the development stage, with many questions still open, the ETH scientists didn't include them in their analysis and instead focused on drop-in liquid fuels.

Read more at Science Daily

Unsure how to help reverse insect declines? Scientists suggest simple ways

 Entomologist Akito Kawahara's message is straightforward: We can't live without insects. They're in trouble. And there's something all of us can do to help.

Kawahara's research has primarily focused on answering fundamental questions about moth and butterfly evolution. But he's increasingly haunted by studies that sound the alarm about plummeting insect numbers and diversity.

Kawahara has witnessed the loss himself. As a child, he collected insects with his father every weekend, often traveling to a famous oak outside Tokyo whose dripping sap drew thousands of insects. It was there he first saw the national butterfly of Japan, the great purple emperor, Sasakia charonda. When he returned a few years ago, the oak had been replaced by a housing development. S. charonda numbers are in steep decline nationwide.

While scientists differ on the severity of the problem, many findings point to a general downward trend, with one study estimating 40% of insect species are vulnerable to extinction. In response, Kawahara has turned his attention to boosting people's appreciation for some of the world's most misunderstood animals.

"Insects provide so much to humankind," said Kawahara, associate curator at the Florida Museum of Natural History's McGuire Center for Lepidoptera and Biodiversity. "In the U.S. alone, wild insects contribute an estimated $70 billion to the economy every year through free services such as pollination and waste disposal. That's incredible, and most people have no idea."

Insects sustain flowering plants, the lynchpins of most land-based ecosystems, and provide food sources for birds, bats, freshwater fish and other animals. But they face a barrage of threats, including habitat loss, pesticides, pollution, invasive species and climate change. If human activities are driving the decline, Kawahara reasons, then people can also be a part of the solution.

In an opinion piece published in a special edition of the Proceedings of the National Academy of Sciences, Kawahara and his collaborators outline easy ways everyone can contribute to insect conservation.

Mow less

If you have a lawn, mowing less can give insect populations a boost. Kawahara suggests reserving 10% of a landscape for insects, either actively replacing a monoculture of grass with native plants or simply leaving the space unmown. These miniature nature preserves provide crucial habitat and food reservoirs for insects, he said, particularly if they remain free of chemical pesticides and herbicides. Benefits for lawn-maintainers include less yardwork and lower expenses.

"Even a tiny patch could be hugely important for insects as a place to nest and get resources," Kawahara said. "It's a stepping stone they can use to get from one place to another. If every home, school and local park in the U.S. converted 10% of lawn into natural habitat, this would give insects an extra 4 million acres of habitat."

If you don't have a lawn, you can still help by cultivating native plants in pots, in window boxes, or on balconies and patios.

Dim the lights

Nighttime light pollution has spiked since the 1990s, doubling in some of the world's most biodiverse places. Artificial lights are powerful attractants to nocturnal insects, which can exhaust themselves to death by circling bulbs or fall prey to predators that spot an easy target.

You can give insects a hand -- and reduce your electric bill -- by turning off unnecessary lights after dark and using amber or red bulbs, which are less attractive to insects.

Use insect-friendly soaps and sealants

Chemical pollutants in soaps for washing cars and building exteriors and in coal-tar-based driveway sealants can harm a variety of insect life. Kawahara recommends swapping these out for biodegradable soaps and soy-based sealants. In winter, trading rock salt for salt-free formulations is safer for both insects and pets.

Become an insect ambassador

In the U.S., insects have historically been depicted as devourers of crops, disease vectors and hallmarks of poor sanitation, even though the vast majority do not harm humans. Kawahara said rethinking your own stereotypes of insects and gaining a better understanding of their beauty, diversity and roles is a first step in helping others appreciate them, too.

He recalled leading schoolchildren on an insect-collecting trip during which a student found an elephant stag beetle, an enormous insect with massive jaws -- "one of the coolest, most amazing bugs," Kawahara said.

The student wanted to step on the beetle, thinking it was a cockroach.

"Other students were grossed out, too," Kawahara said. "When I saw that, I was dumbfounded. If this was Japan, kids would be clamoring to be the first to get it and keep it as a pet. The juxtaposition of those cultural reactions was striking."

He pointed to media characterizations of Asian giant hornets -- which he grew up seeing drink sap from the oak tree outside of Tokyo -- as "murder hornets" as another example of how framing insects as dangerous or disgusting has the power to evoke strong reactions from the public.

As antidotes to unfounded fears, Kawahara suggests walking outdoors to look for local insect life or adopting pet insects -- a simple, inexpensive way to introduce children to science. Documenting what you see on platforms such as iNaturalist not only helps you learn more about your finds, but also provides data for scientific research.

Read more at Science Daily

Grey camouflage 'better than zebra stripes'

 Dull, featureless camouflage provides better protection from predators than zebra stripes, according to a new study.

Biologists explaining the existence of such stripes have proposed the "motion dazzle hypothesis," which suggests that high-contrast patterns can make it difficult for predators to track a moving target.

University of Exeter scientists tested this using a touch-screen game called Dazzle Bug in which visitors to Cornwall's Eden Project had to catch a moving rectangular "bug."

Bug patterns were programmed to "evolve" to find the best camouflage strategy.

"Surprisingly, targets evolved to lose patterns and instead match their backgrounds," said senior author Dr Laura Kelley, of the Centre for Ecology and Conservation on Exeter's Penryn Campus in Cornwall.

"Our results indicate that low-contrast, featureless targets were hardest to catch when in motion."

Lead author Dr Anna Hughes, now at the University of Essex, added: "The presence of highly visible and striking patterns on many animals such as zebras has puzzled biologists for over a century, as these markings are conspicuous to predators.

"Early naturalists suggested that these patterns might create 'motion dazzle', making it hard for predators to estimate the speed or direction of their prey.

"Dazzle patterning was used on ships in World War One and has been tested in numerous studies, but its protective effects remain unclear -- largely due to experiments being small-scale tests of a limited range of patterns."

The scientists tackled this problem using citizen science -- more than 77,000 people played Dazzle Bug at the Eden Project, tracking more than 1.5 million "bugs" in total.

"Our findings provide the clearest evidence to date against the motion dazzle hypothesis and suggest that protection in motion may rely on completely different mechanisms to those previously assumed," Dr Hughes said.

From Science Daily

Jan 12, 2021

Study shows meaningful lockdown activity is more satisfying than busyness

 New research shows people who pursue meaningful activities -- things they enjoy doing -- during lockdown feel more satisfied than those who simply keep themselves busy.

The study, published in PLOS ONE, shows you're better off doing what you love and adapting it to suit social distancing, like swapping your regular morning walk with friends for a zoom exercise session.

Simply increasing your level of activity by doing mindless busywork will leave you unsettled and unsatisfied.

Co-lead researcher Dr Lauren Saling from RMIT University in Melbourne, Australia said while novelty lockdown activities -- like baking or painting -- have their place, trying to continue what you enjoyed before lockdown can be more rewarding.

"Busyness might be distracting but it won't necessarily be fulfilling," she said.

"Rather, think about what activities you miss most and try and find a way of doing them."

Survey participants rated their level of wellbeing as it was during social distancing and retrospectively one month beforehand.

They also indicated how much time they spent engaged in various activities and nominated how important each activity was for them.

Although participants reported feeling more positive emotions while doing novelty 'meaningless' activities like binge watching TV, they also felt more negative emotions -- they felt unhappy just as much as they felt happy.

But when participants swapped activities they had enjoyed before lockdown -- like dining with friends -- for a virtual alternative, both their positive and negative emotions were more subdued.

Saling said busyness riles you up, prompting you to change your behaviour, but meaningful activity -- doing what you enjoy -- calms you down.

"Extreme emotions are not necessarily a good thing," she said.

"Emotions are a mechanism to make you change your behaviour.

"But when you're doing what you love, it makes sense that you feel more balanced -- simply keeping busy isn't satisfying."

Saling said the study challenged assumptions that we are either happy or sad and that we can stave off sadness by keeping busy.

Rather, those who kept busy with mindless tasks felt more frustrated and even when they were happy felt less fulfilled.

"The study showed positive and negative affect worked together, not as opposites," Saling said.

"Respondents who simply stayed busy during lockdown reported an increase in both positive and negative emotions.

"This heightened emotionality will tend to shift you away from activity in general and towards meaningful activity."

The study also found the biggest change in positive emotions before and during lockdown was experienced by people aged under 40.

Read more at Science Daily

Why independent cultures think alike when it comes to categories: It's not in the brain

 Imagine you gave the exact same art pieces to two different groups of people and asked them to curate an art show. The art is radical and new. The groups never speak with one another, and they organize and plan all the installations independently. On opening night, imagine your surprise when the two art shows are nearly identical. How did these groups categorize and organize all the art the same way when they never spoke with one another?

The dominant hypothesis is that people are born with categories already in their brains, but a study from the Network Dynamics Group (NDG) at the Annenberg School for Communication has discovered a novel explanation. In an experiment in which people were asked to categorize unfamiliar shapes, individuals and small groups created many different unique categorization systems while large groups created systems nearly identical to one another.

"If people are all born seeing the world the same way, we would not observe so many differences in how individuals organize things," says senior author Damon Centola, Professor of Communication, Sociology, and Engineering at the University of Pennsylvania. "But this raises a big scientific puzzle. If people are so different, why do anthropologists find the same categories, for instance for shapes, colors, and emotions, arising independently in many different cultures? Where do these categories come from and why is there so much similarity across independent populations?"

To answer this question, the researchers assigned participants to various sized groups, ranging from 1 to 50, and then asked them to play an online game in which they were shown unfamiliar shapes that they then had to categorize in a meaningful way. All of the small groups invented wildly different ways of categorizing the shapes. Yet, when large groups were left to their own devices, each one independently invented a nearly identical category system.

"If I assign an individual to a small group, they are much more likely to arrive at a category system that is very idiosyncratic and specific to them," says lead author and Annenberg alum Douglas Guilbeault (Ph.D. '20), now an Assistant Professor at the Haas School of Business at the University of California, Berkeley. "But if I assign that same individual to a large group, I can predict the category system that they will end up creating, regardless of whatever unique viewpoint that person happens to bring to the table."

"Even though we predicted it," Centola adds, "I was nevertheless stunned to see it really happen. This result challenges many long -- held ideas about culture and how it forms."

The explanation is connected to previous work conducted by the NDG on tipping points and how people interact within networks. As options are suggested within a network, certain ones begin to be reinforced as they are repeated through individuals' interactions with one another, and eventually a particular idea has enough traction to take over and become dominant. This only applies to large enough networks, but according to Centola, even just 50 people is enough to see this phenomenon occur.
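
A toy simulation helps illustrate why group size alone can produce this pattern. In the sketch below (a generic illustration in Python, not the Network Dynamics Group's actual network model), each person's preference among candidate category systems is a weak bias shared by everyone plus strong individual noise; a lone individual follows their noisy preference, while a large group's collective choice averages the noise away, so independent large groups keep landing on the same system.

```python
import random
from collections import Counter

SYSTEMS = ["A", "B", "C", "D"]
SHARED_BIAS = {"A": 0.3, "B": 0.0, "C": 0.0, "D": 0.0}  # "A" feels slightly more natural to everyone

def group_choice(size, rng):
    """The category system a group of `size` people settles on: the option
    with the highest summed preference, where each person's preference is
    the shared bias plus idiosyncratic noise."""
    totals = dict.fromkeys(SYSTEMS, 0.0)
    for _ in range(size):
        for s in SYSTEMS:
            totals[s] += SHARED_BIAS[s] + rng.gauss(0.0, 1.0)
    return max(totals, key=totals.get)

rng = random.Random(42)
for size in (1, 5, 50):
    picks = Counter(group_choice(size, rng) for _ in range(200))
    print(f"200 independent groups of size {size:2d}: {dict(picks)}")
```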

Centola and Guilbeault say they plan to build on their findings and apply them to a variety of real-world problems. One current study involves content moderation on Facebook and Twitter. Can the process of categorizing free speech versus hate speech (and thus what should be allowed versus removed) be improved if done in networks rather than by solitary individuals? Another current study is investigating how to use network interactions among physicians and other health care professionals to decrease the likelihood that patients will be incorrectly diagnosed or treated due to prejudice or bias, like racism or sexism. These topics are explored in Centola's forthcoming book, CHANGE: How to Make Big Things Happen (Little, Brown & Co., 2021).

"Many of the worst social problems reappear in every culture, which leads some to believe these problems are intrinsic to the human condition," says Centola. "Our research shows that these problems are intrinsic to the social experiences humans have, not necessarily to humans themselves. If we can alter that social experience, we can change the way people organize things, and address some of the world's greatest problems."

Read more at Science Daily

Climate change has caused billions of dollars in flood damages

In a new study, Stanford researchers report that intensifying precipitation contributed one-third of the financial costs of flooding in the United States over the past three decades, totaling almost $75 billion of the estimated $199 billion in flood damages from 1988 to 2017.

The research, published Jan. 11 in the journal Proceedings of the National Academy of Sciences, helps to resolve a long-standing debate about the role of climate change in the rising costs of flooding and provides new insight into the financial costs of global warming overall.

"The fact that extreme precipitation has been increasing and will likely increase in the future is well known, but what effect that has had on financial damages has been uncertain," said lead author Frances Davenport, a PhD student in Earth system science at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "Our analysis allows us to isolate how much of those changes in precipitation translate to changes in the cost of flooding, both now and in the future."

The global insurance company Munich Re calls flooding "the number-one natural peril in the U.S." However, although flooding is one of the most common, widespread and costly natural hazards, whether climate change has contributed to the rising financial costs of flooding -- and if so, how much -- has been a topic of debate, including in the most recent climate change assessments from the U.S. government and the Intergovernmental Panel on Climate Change.

At the crux of that debate is the question of whether or not the increasing trend in the cost of flooding in the U.S. has been driven primarily by socioeconomic factors like population growth, housing development and increasing property values. Most previous research has focused either on very detailed case studies (for example, of individual disasters or long-term changes in individual states) or on correlations between precipitation and flood damages for the U.S. overall.

In an effort to close this gap, the researchers started with higher resolution climate and socioeconomic data. They then applied advanced methods from economics to quantify the relationship between historical precipitation variations and historical flooding costs, along with methods from statistics and climate science to evaluate the impact of changes in precipitation on total flooding costs. Together, these analyses revealed that climate change has contributed substantially to the growing cost of flooding in the U.S., and that exceeding the levels of global warming agreed upon in the United Nations Paris Agreement is very likely to lead to greater intensification of the kinds of extreme precipitation events that have been most costly and devastating in recent decades.

"Previous studies have analyzed pieces of this puzzle, but this is the first study to combine rigorous economic analysis of the historical relationships between climate and flooding costs with really careful extreme event analyses in both historical observations and global climate models, across the whole United States," said senior author and climate scientist Noah Diffenbaugh, the Kara J. Foundation Professor at Stanford Earth.

"By bringing all those pieces together, this framework provides a novel quantification not only of how much historical changes in precipitation have contributed to the costs of flooding, but also how greenhouse gases influence the kinds of precipitation events that cause the most damaging flooding events," Diffenbaugh added.

The researchers liken isolating the role of changing precipitation to other questions of cause and effect, such as determining how much an increase in minimum wage will affect local employment, or how many wins an individual player contributes to the overall success of a basketball team. In this case, the research team started by developing an economic model based on observed precipitation and monthly reports of flood damage, controlling for other factors that might affect flooding costs like increases in home values. They then calculated the change in extreme precipitation in each state over the study period. Finally, they used the model to calculate what the economic damages would have been if those changes in extreme precipitation had not occurred.
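
In schematic form, that counterfactual step amounts to a regression plus a "what if" prediction. The Python sketch below uses synthetic data and ordinary least squares purely to illustrate the logic; the actual study uses state-level damage records, observed precipitation, additional socioeconomic controls, and far more careful statistics.

```python
import numpy as np

# Synthetic example data (illustrative only).
rng = np.random.default_rng(0)
n = 360                                    # e.g., 30 years of monthly observations
trend = np.linspace(0.0, 1.0, n)           # stands in for socioeconomic growth over time
precip_shift = 5.0 * trend                 # the climate-driven increase in precipitation
precip = 50.0 + precip_shift + rng.gamma(2.0, 5.0, n)
log_damage = 1.0 + 0.04 * precip + 1.5 * trend + rng.normal(0.0, 0.3, n)

# 1) Fit the historical relationship: log(damage) ~ precipitation + controls.
X = np.column_stack([np.ones(n), precip, trend])
beta, *_ = np.linalg.lstsq(X, log_damage, rcond=None)

# 2) Counterfactual: same fitted model, but with the precipitation shift removed.
X_cf = np.column_stack([np.ones(n), precip - precip_shift, trend])

actual = np.exp(X @ beta).sum()
counterfactual = np.exp(X_cf @ beta).sum()
print(f"share of damages attributable to the precipitation change: "
      f"{1.0 - counterfactual / actual:.0%}")
```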

"This counterfactual analysis is similar to computing how many games the Los Angeles Lakers would have won, with and without the addition of LeBron James, holding all other players constant," said study co-author and economist Marshall Burke, an associate professor of Earth system science.

Applying this framework, the research team found that -- when totaled across all the individual states -- changes in precipitation accounted for 36 percent of the actual flooding costs that occurred in the U.S. from 1988 to 2017. The effect of changing precipitation was primarily driven by increases in extreme precipitation, which have been responsible for the largest share of flooding costs historically.

"What we find is that, even in states where the long-term mean precipitation hasn't changed, in most cases the wettest events have intensified, increasing the financial damages relative to what would have occurred without the changes in precipitation," said Davenport, who received a Stanford Interdisciplinary Graduate Fellowship in 2020.

The researchers emphasize that, by providing a new quantification of the scale of the financial costs of climate change, their findings have implications beyond flooding in the U.S.

"Accurately and comprehensively tallying the past and future costs of climate change is key to making good policy decisions," said Burke. "This work shows that past climate change has already cost the U.S. economy billions of dollars, just due to flood damages alone."

The authors envision their approach being applied to different natural hazards, to climate impacts in different sectors of the economy and to other regions of the globe to help understand the costs and benefits of climate adaptation and mitigation actions.

Read more at Science Daily

ALMA captures distant colliding galaxy dying out as it loses the ability to form stars

 

Atacama Large Millimeter/submillimeter Array (ALMA) in Chile.
Galaxies begin to "die" when they stop forming stars, but until now astronomers had never clearly glimpsed the start of this process in a far-away galaxy. Using the Atacama Large Millimeter/submillimeter Array (ALMA), in which the European Southern Observatory (ESO) is a partner, astronomers have seen a galaxy ejecting nearly half of its star-forming gas. This ejection is happening at a startling rate, equivalent to 10,000 Suns-worth of gas a year -- the galaxy is rapidly losing its fuel to make new stars. The team believes that this spectacular event was triggered by a collision with another galaxy, which could lead astronomers to rethink how galaxies stop bringing new stars to life.

"This is the first time we have observed a typical massive star-forming galaxy in the distant Universe about to 'die' because of a massive cold gas ejection," says Annagrazia Puglisi, lead researcher on the new study, from the Durham University, UK, and the Saclay Nuclear Research Centre (CEA-Saclay), France. The galaxy, ID2299, is distant enough that its light takes some 9 billion years to reach us; we see it when the Universe was just 4.5 billion years old.

The gas ejection is happening at a rate equivalent to 10,000 Suns per year, and is removing an astonishing 46% of the total cold gas from ID2299. Because the galaxy is also forming stars very rapidly, hundreds of times faster than our Milky Way, the remaining gas will be quickly consumed, shutting down ID2299 in just a few tens of millions of years.
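
As a rough back-of-the-envelope check on that timescale (using purely illustrative numbers, not values from the study), dividing the gas left after the ejection by a star formation rate of a few hundred Suns per year does indeed give a few tens of millions of years:

```python
# Back-of-the-envelope gas depletion time for a galaxy like ID2299.
# All input numbers are illustrative assumptions, not values from the study.

gas_before_ejection = 5.0e10   # total cold gas reservoir [solar masses]
ejected_fraction = 0.46        # fraction removed by the tidal ejection
star_formation_rate = 500.0    # [solar masses per year], "hundreds of times the Milky Way"

gas_remaining = gas_before_ejection * (1 - ejected_fraction)
depletion_time_yr = gas_remaining / star_formation_rate

print(f"Remaining gas: {gas_remaining:.1e} solar masses")
print(f"Depletion time: {depletion_time_yr / 1e6:.0f} million years")
```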

The event responsible for the spectacular gas loss, the team believes, is a collision between two galaxies, which eventually merged to form ID2299. The elusive clue that pointed the scientists towards this scenario was the association of the ejected gas with a "tidal tail." Tidal tails are elongated streams of stars and gas extending into interstellar space that result when two galaxies merge, and they are usually too faint to see in distant galaxies. However, the team managed to observe the relatively bright feature just as it was launching into space, and were able to identify it as a tidal tail.

Most astronomers believe that winds caused by star formation and the activity of black holes at the centres of massive galaxies are responsible for launching star-forming material into space, thus ending galaxies' ability to make new stars. However, the new study published today in Nature Astronomy suggests that galactic mergers can also be responsible for ejecting star-forming fuel into space.

"Our study suggests that gas ejections can be produced by mergers and that winds and tidal tails can appear very similar," says study co-author Emanuele Daddi of CEA-Saclay. Because of this, some of the teams that previously identified winds from distant galaxies could in fact have been observing tidal tails ejecting gas from them. "This might lead us to revise our understanding of how galaxies 'die'," Daddi adds.

Puglisi agrees about the significance of the team's finding, saying: "I was thrilled to discover such an exceptional galaxy! I was eager to learn more about this weird object because I was convinced that there was some important lesson to be learned about how distant galaxies evolve."

This surprising discovery was made by chance, while the team were inspecting a survey of galaxies made with ALMA (https://www.eso.org/public/teles-instr/alma/), designed to study the properties of cold gas in more than 100 far-away galaxies. ID2299 had been observed by ALMA for only a few minutes, but the powerful observatory, located in northern Chile, allowed the team to collect enough data to detect the galaxy and its ejection tail.

"ALMA has shed new light on the mechanisms that can halt the formation of stars in distant galaxies. Witnessing such a massive disruption event adds an important piece to the complex puzzle of galaxy evolution," says Chiara Circosta, a researcher at the University College London, UK, who also contributed to the research.

In the future, the team could use ALMA to make higher-resolution and deeper observations of this galaxy, enabling them to better understand the dynamics of the ejected gas. Observations with ESO's future Extremely Large Telescope could allow the team to explore the connections between the stars and gas in ID2299, shedding new light on how galaxies evolve.

Read more at Science Daily

Jan 11, 2021

Chandra X-ray Observatory studies extraordinary magnetar

 In 2020, astronomers added a new member to an exclusive family of exotic objects with the discovery of a magnetar. New observations from NASA's Chandra X-ray Observatory help support the idea that it is also a pulsar, meaning it emits regular pulses of light.

Magnetars are a type of neutron star, an incredibly dense object mainly made up of tightly packed neutrons, which forms from the collapsed core of a massive star during a supernova.

What sets magnetars apart from other neutron stars is that they also have the most powerful known magnetic fields in the universe. For context, the strength of our planet's magnetic field has a value of about one Gauss, while a refrigerator magnet measures about 100 Gauss. Magnetars, on the other hand, have magnetic fields of about a million billion Gauss. If a magnetar were located a sixth of the way to the Moon (about 40,000 miles), it would wipe the data from all of the credit cards on Earth.

On March 12, 2020, astronomers detected a new magnetar with NASA's Neil Gehrels Swift Observatory. This is only the 31st known magnetar, out of the approximately 3,000 known neutron stars.

After follow-up observations, researchers determined that this object, dubbed J1818.0-1607, was special for other reasons. First, it may be the youngest known magnetar, with an estimated age of about 500 years. This estimate is based on how quickly its rotation rate is slowing and the assumption that it was born spinning much faster. Second, it spins faster than any previously discovered magnetar, rotating once every 1.4 seconds.
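
The age quoted here is essentially a spin-down ("characteristic") age. As a hedged illustration of the standard estimate, the sketch below uses the spin period from the article together with an assumed spin-down rate chosen to reproduce an age of roughly 500 years (the measured rate is not quoted in the article), via the textbook relations tau ≈ P / (2·Ṗ) and B ≈ 3.2×10^19 √(P·Ṗ) gauss:

```python
import math

# Standard spin-down estimates for a pulsar/magnetar.
# P is taken from the article; P_dot below is an illustrative assumption.

P = 1.4                      # spin period [s]
age_target_yr = 500.0        # approximate age quoted in the article
sec_per_yr = 3.156e7

# Characteristic age: tau = P / (2 * P_dot)  ->  solve for P_dot
P_dot = P / (2 * age_target_yr * sec_per_yr)

# Conventional dipole surface-field estimate
B_gauss = 3.2e19 * math.sqrt(P * P_dot)

print(f"Implied spin-down rate: {P_dot:.1e} s/s")
print(f"Characteristic age:     {P / (2 * P_dot) / sec_per_yr:.0f} yr")
print(f"Dipole field estimate:  {B_gauss:.1e} G")
```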

Chandra's observations of J1818.0-1607 obtained less than a month after the discovery with Swift gave astronomers the first high-resolution view of this object in X-rays. The Chandra data revealed a point source where the magnetar was located, which is surrounded by diffuse X-ray emission, likely caused by X-rays reflecting off dust located in its vicinity. (Some of this diffuse X-ray emission may also be from winds blowing away from the neutron star.)

Harsha Blumer of West Virginia University and Samar Safi-Harb of the University of Manitoba in Canada recently published results from the Chandra observations of J1818.0-1607 in The Astrophysical Journal Letters.

This composite image contains a wide field of view in the infrared from two NASA missions, the Spitzer Space Telescope and the Wide-Field Infrared Survey Explorer (WISE), taken before the magnetar's discovery. X-rays from Chandra show the magnetar in purple. The magnetar is located close to the plane of the Milky Way galaxy at a distance of about 21,000 light-years from Earth.

Other astronomers have also observed J1818.0-1607 with radio telescopes, such as the NSF's Karl Jansky Very Large Array (VLA), and determined that it gives off radio waves. This implies that it also has properties similar to those of a typical "rotation-powered pulsar," a type of neutron star that gives off beams of radiation that are detected as repeating pulses of emission as it rotates and slows down. Only five magnetars, including this one, have been recorded to also act like pulsars, constituting less than 0.2% of the known neutron star population.

The Chandra observations may also provide support for this general idea. Safi-Harb and Blumer studied how efficiently J1818.0-1607 is converting energy from its decreasing rate of spin into X-rays. They concluded this efficiency is lower than that typically found for magnetars, and likely within the range found for other rotation-powered pulsars.

The explosion that created a magnetar of this age would be expected to have left behind a detectable debris field. To search for this supernova remnant, Safi-Harb and Blumer looked at the X-rays from Chandra, infrared data from Spitzer, and the radio data from the VLA. Based on the Spitzer and VLA data they found possible evidence for a remnant, but at a relatively large distance away from the magnetar. In order to cover this distance the magnetar would need to have traveled at speeds far exceeding those of the fastest known neutron stars, even assuming it is much older than expected, which would allow more travel time.

Read more at Science Daily

Study links severe COVID-19 disease to short telomeres

 Patients with severe COVID-19 disease have significantly shorter telomeres, according to a study conducted by researchers at the Spanish National Cancer Research Centre (CNIO) in collaboration with the COVID-IFEMA Field Hospital, published in the journal Aging. The study, led by Maria A. Blasco and whose first authors are Raúl Sánchez and Ana Guío-Carrión, postulates that telomere shortening as a consequence of the viral infection impedes tissue regeneration and that this is why a significant number of patients suffer prolonged sequelae.

Blasco was already developing a therapy to regenerate lung tissue in pulmonary fibrosis patients; she now believes that this treatment -- which should still take at least a year and a half to become available -- could also help those who have lung lesions remaining after overcoming COVID-19.

Telomeres and tissue regeneration

The Telomeres and Telomerase Group, led by Blasco at the CNIO, has been researching the role of telomeres in tissue regeneration for decades. Telomeres are structures that protect the chromosomes within each cell of the organism. It is known that telomere length is an indicator of ageing: each time a cell divides, its telomeres shorten until they can no longer perform their protective function; the damaged cell then stops dividing. Throughout life, cells are constantly dividing to regenerate tissues, and when they stop doing so because the telomeres are too short, the body ages.

In recent years, researchers have shown in mice that it is possible to reverse this process by activating the production of telomerase, which is the enzyme in charge of making the telomeres longer. In animals, telomerase activation is effective in treating diseases associated with ageing and telomere damage, such as pulmonary fibrosis.

COVID-19 as a regenerative disease

In pulmonary fibrosis the lung tissue develops scars and becomes rigid, causing a progressive loss of breathing capacity. The CNIO group has shown in previous studies that one of the causes of the disease is damage to the telomeres of the cells involved in regenerating the lung tissue, the alveolar type II pneumocytes. And these are precisely the cells that the SARS-CoV-2 coronavirus infects in lung tissue.

"When I read that type II alveolar pneumocytes were involved in COVID-19, I immediately thought that telomeres might be involved," says Blasco.

In the Aging paper, the researchers write: "It caught our attention that a common outcome of SARS-CoV-2 infection seems to be the induction of a fibrosis-like phenotype in lung and kidney, suggesting that the viral infection may be exhausting the regenerative potential of tissues."

The authors propose that it is the short telomeres that hamper tissue regeneration after infection. As Blasco explains, "we know that the virus infects alveolar type II pneumocytes and that these cells are involved in lung regeneration; we also know that if they have telomeric damage they cannot regenerate, which induces fibrosis. This is what is seen in patients with lung lesions after COVID-19: we think they develop pulmonary fibrosis because they have shorter telomeres, which limits the regenerative capacity of their lungs."

Samples of patients in a field hospital

The data presented in the 'Aging' paper provide evidence in favour of this hypothesis, by finding an association between greater severity of COVID-19 and shorter telomeres.

Despite the difficulties arising from conducting research at the height of the pandemic -- "the hospital facilities for COVID-19 patients were overwhelmed," Blasco says -- it was possible to analyse the telomeres of 89 patients admitted to the field hospital at the IFEMA in Madrid using several techniques.

As in the general population, the average length of the telomeres decreased with increasing age in the patients studied. Furthermore, because the most severe patients are also the oldest, there is a correlation between greater severity and shorter telomere length.

What could not be foreseen, and this is the most important finding, is that the telomeres of the most seriously ill patients were also shorter, irrespective of age.

The researchers write: "Interestingly, we also found that those patients who have more severe COVID-19 pathologies have shorter telomeres at different ages compared to patients with milder disease."

And they add: "These findings demonstrate that molecular hallmarks of ageing, such as the presence of short telomeres, can influence the severity of COVID-19 pathologies."

Gene therapy for patients with post-COVID-19 pulmonary injury

The intention of the researchers is now to demonstrate a causal relationship between reduced telomere length and the pulmonary sequelae of COVID-19. To do this, they will use SARS-CoV-2 to infect mice that have short telomeres and cannot produce telomerase; without telomerase, the telomeres cannot be repaired and, as a consequence, lung tissue regeneration cannot take place. If the hypothesis of Blasco's group is correct, mice with short telomeres and without telomerase should develop more severe pulmonary fibrosis than normal mice.

Confirmation that short telomeres hamper the recovery of severe patients would open the door to new treatment strategies, such as therapies based on telomerase activation.

"Given that short telomeres can be made longer again by telomerase, and given that in previous studies we have shown that telomerase activation has a therapeutic effect on diseases related to short telomeres, such as pulmonary fibrosis, it is tempting to speculate that this therapy could improve some of the pathologies that remain in COVID-19 patients once the viral infection has been overcome, such as pulmonary fibrosis."

Last year the CNIO and the Autonomous University of Barcelona, UAB, created a new spin-off company, Telomere Therapeutics, with the specific aim of developing a telomerase-based gene therapy for the treatment of different pathologies related to telomere shortening, such as pulmonary fibrosis and renal fibrosis. This would be a potentially useful type of therapy in patients with remaining lung damage after COVID-19.

Read more at Science Daily

Measurements of pulsar acceleration reveal Milky Way's dark side

 

Milky Way in the night sky.
It is well known that the expansion of the universe is accelerating due to a mysterious dark energy. Within galaxies, stars also experience an acceleration, though this is due to some combination of dark matter and the stellar density. In a new study to be published in Astrophysical Journal Letters, researchers have now obtained the first direct measurement of the average acceleration taking place within our home galaxy, the Milky Way.

Led by Sukanya Chakrabarti at the Institute for Advanced Study with collaborators from Rochester Institute of Technology, University of Rochester, and University of Wisconsin-Milwaukee, the team used pulsar data to clock the radial and vertical accelerations of stars within and outside of the galactic plane. Based on these new high-precision measurements and the known amount of visible matter in the galaxy, researchers were then able to calculate the Milky Way's dark matter density without making the usual assumption that the galaxy is in a steady-state.

"Our analysis not only gives us the first measurement of the tiny accelerations experienced by stars in the galaxy, but also opens up the possibility of extending this work to understand the nature of dark matter, and ultimately dark energy on larger scales," stated Chakrabarti, the paper's lead author and a current Member and IBM Einstein Fellow at the Institute for Advanced Study.

Stars hurtle through the galaxy at hundreds of kilometers per second, yet this study indicates that the change in their velocities is occurring at a literal snail's pace -- a few centimeters per second, which is about the same speed as a crawling baby. To detect this subtle motion the research team relied on the ultraprecise time-keeping ability of pulsars that are widely distributed throughout the galactic plane and halo -- a diffuse spherical region that surrounds the galaxy.
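
The trick behind those measurements, in simplified form, is that a line-of-sight acceleration Doppler-shifts a pulsar's clock, so it appears as a slow drift in an observed period: roughly Ṗ/P ≈ a/c once intrinsic effects such as spin-down, gravitational-wave decay and proper motion are removed. The numbers below are illustrative assumptions, not measurements from the study:

```python
# Illustrative conversion from a timing drift to a line-of-sight acceleration.
# The real analysis uses binary-pulsar orbital periods and carefully removes
# intrinsic spin-down, gravitational-wave decay, and proper-motion terms.

c = 2.998e8                    # speed of light [m/s]

P = 0.3 * 86400.0              # hypothetical orbital period [s] (~0.3 days)
P_dot_excess = 2.0e-14         # hypothetical residual period drift [s/s]

a_los = c * P_dot_excess / P   # line-of-sight acceleration [m/s^2]

# Express as a velocity change per decade, echoing the article's "cm/s" framing.
decade_s = 10 * 3.156e7
print(f"a_los = {a_los:.2e} m/s^2")
print(f"velocity change per decade = {a_los * decade_s * 100:.1f} cm/s")
```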

"By exploiting the unique properties of pulsars, we were able to measure very small accelerations in the Galaxy. Our work opens a new window in galactic dynamics," said co-author Philip Chang of the University of Wisconsin-Milwaukee.

Extending outwards approximately 300,000 light years from the galactic center, the halo may provide important hints to understanding dark matter, which accounts for about 90 percent of the galaxy's mass and is highly concentrated above and below the star-dense galactic plane. Stellar motion in this particular region -- a primary focus of this study -- can be influenced by dark matter. Utilizing the local density measurements obtained through this study, researchers will now have a better idea of how and where to look for dark matter.

While previous studies assumed a state of galactic equilibrium to calculate average mass density, this research is based on the natural, non-equilibrium state of the galaxy. One might analogize this to the difference between the surface of a pond before and after a stone is tossed in. By accounting for the "ripples" the team was able to obtain a more accurate picture of reality. Though in this case, rather than stones, the Milky Way is influenced by a turbulent history of galactic mergers and continues to be perturbed by external dwarf galaxies like the Small and Large Magellanic Clouds. As a result, stars do not have flat orbits and tend to follow a path similar to that of a warped vinyl record, crossing above and below the galactic plane. One of the key factors that enabled this direct observational approach was the use of pulsar data compiled from international collaborations, including NANOGrav (North American Nanohertz Observatory for Gravitational Waves) that has obtained data from the Green Bank and Arecibo telescopes.

This landmark paper expands upon the work of Jan H. Oort (1932), John Bahcall (1984), Kuijken & Gilmore (1989), Holmberg & Flynn (2000), and Jo Bovy & Scott Tremaine (2012) to calculate the average mass density in the galactic plane (the Oort limit) and the local dark matter density. IAS scholars including Oort, Bahcall, Bovy, Tremaine, and Chakrabarti have played an important role in advancing this area of research.
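
In outline, the local mass budget follows from Poisson's equation: near the Galactic midplane the total density is approximately ρ ≈ −(1/4πG) ∂a_z/∂z, and subtracting the counted stars and gas leaves the dark matter. The sketch below uses illustrative input values, not the paper's measurements:

```python
import math

# Rough Oort-limit style estimate: total midplane density from the vertical
# acceleration gradient, minus counted stars and gas, leaves dark matter.
# All numbers below are illustrative assumptions, not the study's results.

G = 4.52e-30                 # gravitational constant [pc^3 / (M_sun s^2)]

dadz = -5.2e-30              # hypothetical vertical acceleration gradient [1/s^2]
rho_total = -dadz / (4 * math.pi * G)     # [M_sun / pc^3]

rho_visible = 0.08           # assumed stars + gas near the Sun [M_sun / pc^3]
rho_dark = rho_total - rho_visible

print(f"Total midplane density:      {rho_total:.3f} M_sun/pc^3")
print(f"Implied dark matter density: {rho_dark:.3f} M_sun/pc^3")
```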

"For centuries astronomers have measured the positions and speeds of stars, but these provide only a snapshot of the complex dynamical behavior of the Milky Way galaxy," stated Scott Tremaine, Professor Emeritus at the Institute for Advanced Study. "The accelerations measured by Chakrabarti and her collaborators are directly caused by the gravitational forces from the matter in the galaxy, both visible and dark, and thereby provide a new and promising window on the distribution and the composition of the matter in the galaxy and the universe."

This particular paper will enable a wide variety of future studies. Accurate measurements of accelerations will also soon be possible using the complementary radial velocity method that Chakrabarti developed earlier this year, which measures the change in the velocity of stars with high precision. This work will also enable more detailed simulations of the Milky Way, improve constraints on general relativity, and provide clues in the search for dark matter. Extensions of this method may ultimately allow us to directly measure the cosmic acceleration as well.

Read more at Science Daily

'Galaxy-sized' observatory sees potential hints of gravitational waves

 

Curved spacetime illustration.
Scientists have used a "galaxy-sized" space observatory to find possible hints of a unique signal from gravitational waves, or the powerful ripples that course through the universe and warp the fabric of space and time itself.

The new findings, which appeared recently in The Astrophysical Journal Letters, hail from a U.S. and Canadian project called the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).

For over 13 years, NANOGrav researchers have pored over the light streaming from dozens of pulsars spread throughout the Milky Way Galaxy to try to detect a "gravitational wave background." That's what scientists call the steady flux of gravitational radiation that, according to theory, washes over Earth on a constant basis. The team hasn't yet pinpointed that target, but it's getting closer than ever before, said Joseph Simon, an astrophysicist at the University of Colorado Boulder and lead author of the new paper.

"We've found a strong signal in our dataset," said Simon, a postdoctoral researcher in the Department of Astrophysical and Planetary Sciences. "But we can't say yet that this is the gravitational wave background."

In 2017, scientists on an experiment called the Laser Interferometer Gravitational-Wave Observatory (LIGO) won the Nobel Prize in Physics for the first-ever direct detection of gravitational waves. Those waves were created when two black holes slammed into each other roughly 1.3 billion light-years from Earth, generating a cosmic shock that spread to our own solar system.

That event was the equivalent of a cymbal crash -- a violent and short-lived blast. The gravitational waves that Simon and his colleagues are looking for, in contrast, are more like the steady hum of conversation at a crowded cocktail party.

Detecting that background noise would be a major scientific achievement, opening a new window to the workings of the universe, he added. These waves, for example, could give scientists new tools for studying how the supermassive black holes at the centers of many galaxies merge over time.

"These enticing first hints of a gravitational wave background suggest that supermassive black holes likely do merge and that we are bobbing in a sea of gravitational waves rippling from supermassive black hole mergers in galaxies across the universe," said Julie Comerford, an associate professor of astrophysical and planetary science at CU Boulder and NANOGrav team member.

Simon will present his team's results at a virtual press conference on Monday at the 237th meeting of the American Astronomical Society.

Galactic lighthouses

Through their work on NANOGrav, Simon and Comerford are part of a high-stakes, albeit collaborative, international race to find the gravitational wave background. Their project joins two others out of Europe and Australia to make up a network called the International Pulsar Timing Array.

Simon said that, at least according to theory, merging galaxies and other cosmological events produce a steady churn of gravitational waves. They're humungous -- a single wave, Simon said, can take years or even longer to pass Earth by. For that reason, no other existing experiments can detect them directly.

"Other observatories search for gravitational waves that are on the order of seconds," Simon said. "We're looking for waves that are on the order of years or decades."

He and his colleagues had to get creative. The NANOGrav team uses telescopes on the ground not to look for gravitational waves but to observe pulsars. These collapsed stars are the lighthouses of the galaxy. They spin at incredibly fast speeds, sending streams of radiation hurtling toward Earth in a blinking pattern that remains mostly unchanged over the eons.

Simon explained that gravitational waves alter the steady pattern of light coming from pulsars, tugging or squeezing the relative distances that these rays travel through space. Scientists, in other words, might be able to spot the gravitational wave background simply by monitoring pulsars for correlated changes in the timing of when they arrive at Earth.

"These pulsars are spinning about as fast as your kitchen blender," he said. "And we're looking at deviations in their timing of just a few hundred nanoseconds."

Something there

To find that subtle signal, the NANOGrav team strives to observe as many pulsars as possible for as long as possible. To date, the group has observed 45 pulsars for at least three years and, in some cases, for well over a decade.

The hard work seems to be paying off. In their latest study, Simon and his colleagues report that they've detected a distinct signal in their data: Some common process seems to be affecting the light coming from many of the pulsars.

"We walked through each of the pulsars one by one. I think we were all expecting to find a few that were the screwy ones throwing off our data," Simon said. "But then we got through them all, and we said, 'Oh my God, there's actually something here.'"

The researchers still can't say for sure what's causing that signal. They'll need to add more pulsars to their dataset and observe them for longer periods to determine if it's actually the gravitational wave background at work.

"Being able to detect the gravitational wave background will be a huge step but that's really only step one," he said. "Step two is pinpointing what causes those waves and discovering what they can tell us about the universe."

Read more at Science Daily

Jan 10, 2021

We hear what we expect to hear

 Humans depend on their senses to perceive the world, themselves and each other. Even though the senses are our only window onto the outside world, people rarely question how faithfully they represent external physical reality. During the last 20 years, neuroscience research has revealed that the cerebral cortex constantly generates predictions of what will happen next, and that neurons in charge of sensory processing encode only the difference between our predictions and the actual reality.

A team of neuroscientists of TU Dresden headed by Prof Dr Katharina von Kriegstein presents new findings that show that not only the cerebral cortex, but the entire auditory pathway, represents sounds according to prior expectations.

For their study, the team used functional magnetic resonance imaging (fMRI) to measure brain responses of 19 participants while they were listening to sequences of sounds. The participants were instructed to identify which of the sounds in the sequence deviated from the others. Then, the participants' expectations were manipulated so that they would expect the deviant sound in certain positions of the sequences. The neuroscientists examined the responses elicited by the deviant sounds in the two principal nuclei of the subcortical pathway responsible for auditory processing: the inferior colliculus and the medial geniculate body. Although participants recognised the deviant faster when it was placed in positions where they expected it, the subcortical nuclei encoded the sounds only when they were placed in unexpected positions.

These results can be best interpreted in the context of predictive coding, a general theory of sensory processing that describes perception as a process of hypothesis testing. Predictive coding assumes that the brain is constantly generating predictions about how the physical world will look, sound, feel, and smell in the next instant, and that neurons in charge of processing our senses save resources by representing only the differences between these predictions and the actual physical world.
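
A toy illustration of that idea (not the study's actual model): if the encoded response is just the mismatch between the incoming sound and the predicted sound, a deviant that arrives where it is expected drives little or no response, while the same deviant in an unexpected position drives a large one.

```python
import numpy as np

# Toy predictive-coding illustration (not the study's actual model):
# the encoded response is the absolute mismatch between input and prediction.

standard, deviant = 1000.0, 1200.0                 # hypothetical tone frequencies [Hz]
sequence = np.array([standard] * 7 + [deviant])    # deviant in the last position

# Case 1: the listener expects the deviant where it actually occurs.
prediction_expected = sequence.copy()

# Case 2: the listener expects a standard tone everywhere.
prediction_unexpected = np.full_like(sequence, standard)

for label, prediction in [("expected deviant", prediction_expected),
                          ("unexpected deviant", prediction_unexpected)]:
    error = np.abs(sequence - prediction)          # prediction-error "response"
    print(f"{label}: response to deviant = {error[-1]:.0f}")
```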

Dr Alejandro Tabas, first author of the publication, comments on the findings: "Our subjective beliefs on the physical world have a decisive role on how we perceive reality. Decades of research in neuroscience had already shown that the cerebral cortex, the part of the brain that is most developed in humans and apes, scans the sensory world by testing these beliefs against the actual sensory information. We have now shown that this process also dominates the most primitive and evolutionarily conserved parts of the brain. All that we perceive might be deeply contaminated by our subjective beliefs on the physical world."

These new results point neuroscientists studying sensory processing in humans towards the subcortical pathways. Perhaps because of the axiomatic belief that subjectivity is inherently human, and the fact that the cerebral cortex is the major point of divergence between the human brain and those of other mammals, little attention had previously been paid to the role that subjective beliefs could play in subcortical sensory representations.

Read more at Science Daily

Unravelling the mystery that makes viruses infectious

 Researchers have for the first time identified the way viruses like the poliovirus and the common cold virus 'package up' their genetic code, allowing them to infect cells.

The findings, published Friday, 8 January in the journal PLOS Pathogens by a team from the Universities of Leeds and York, open up the possibility that drugs or anti-viral agents can be developed that would stop such infections.

Once a cell is infected, a virus needs to spread its genetic material to other cells. This is a complex process involving the creation of what are known as virions -- newly-formed infectious copies of the virus. Each virion is a protein shell containing a complete copy of the virus's genetic code. The virions can then infect other cells and cause disease.

What has been a mystery until now is a detailed understanding of the way the virus assembles these daughter virions.

Professor Peter Stockley, former Director of the Astbury Centre for Structural Molecular Biology at Leeds, who co-supervised the research with Professor Reidun Twarock from York, said: "This study is extremely important because of the way it shifts our thinking about how we can control some viral diseases. If we can disrupt the mechanism of virion formation, then there is the potential to stop an infection in its tracks."

"Our analysis suggests that the molecular features that control the process of virion formation are genetically conserved, meaning they do not mutate easily -- reducing the risk that the virus could change and make any new drugs ineffective."

The research at Leeds and York brings together experts in the molecular structure of viruses, electron microscopy and mathematical biology.

The study focuses on a harmless bovine virus that is non-infectious in people, Enterovirus-E, which is the universally adopted surrogate for the poliovirus. The poliovirus is a dangerous virus that infects people, causing polio, and is the target of an eradication initiative by the World Health Organization.

The enterovirus group also includes the human rhinovirus, which causes the common cold.

The study published today details the role of what are called RNA packaging signals, short regions of the RNA molecule which, together with proteins from the virus's casing, ensure accurate and efficient formation of an infectious virion.

Using a combination of molecular and mathematical biology, the researchers were able to identify possible sites on the RNA molecule that could act as packaging signals. Using advanced electron microscopes at the Astbury Biostructure Laboratory at the University of Leeds, scientists were able to directly visualise this process -- the first time that has been possible with any virus of this type.

Read more at Science Daily