Feb 11, 2021

Spectacular 'honeycomb heart' revealed in iconic stellar explosion

A unique 'heart-shape', with wisps of gas filaments showing an intricate honeycomb-like arrangement, has been discovered at the centre of the iconic supernova remnant, the Crab Nebula. Astronomers have mapped the void in unprecedented detail, creating a realistic three-dimensional reconstruction. The new work is published in Monthly Notices of the Royal Astronomical Society.

The Crab, formally known as Messier 1, exploded as a dramatic supernova in 1054 CE and was observed over the subsequent months and years by astronomers across the world. The resulting nebula -- the remnant of this enormous explosion -- has been studied by amateur and professional astronomers for centuries. However, despite this rich history of investigation, many questions remain about what type of star was originally there and how the explosion took place.

Thomas Martin, a researcher at Université Laval who led the study, hopes to answer these questions using a new 3D reconstruction of the nebula. "Astronomers will now be able to move around and inside the Crab Nebula and study its filaments one by one," said Martin.

The team used the powerful SITELLE imaging spectrometer on the Canada-France-Hawaii Telescope (CFHT) on Mauna Kea, Hawaii, to compare the 3D shape of the Crab to that of two other supernova remnants. Remarkably, they found that all three remnants had ejecta arranged in large-scale rings, suggesting a history of turbulent mixing and radioactive plumes expanding from a collapsed iron core.

Co-author Dan Milisavljevic, an assistant professor at Purdue University and supernova expert, concludes that the fascinating morphology of the Crab seems to go against the most popular explanation of the original explosion.

"The Crab is often understood as being the result of an electron-capture supernova triggered by the collapse of an oxygen-neon-magnesium core, but the observed honeycomb structure may not be consistent with this scenario," Milisavljevic said.

The new reconstruction was made possible by the ground-breaking technology used by SITELLE, which incorporates a Michelson interferometer design that allows scientists to obtain more than 300,000 high-resolution spectra, one for every point of the nebula.

"SITELLE was designed with objects like the Crab Nebula in mind; but its wide field of view and adaptability make it ideal to study nearby galaxies and even clusters of galaxies at large distances," said co-author Laurent Drissen.

Read more at Science Daily

Astronomers uncover mysterious origins of 'super-Earths'

Mini-Neptunes and super-Earths up to four times the size of our own are the most common exoplanets orbiting stars beyond our solar system. Until now, super-Earths were thought to be the rocky cores of mini-Neptunes whose gassy atmospheres were blown away. In a new study published in The Astrophysical Journal, astronomers from McGill University show that some of these exoplanets never had gaseous atmospheres to begin with, shedding new light on their mysterious origins.

From observations, we know that about 30 to 50 percent of host stars have a super-Earth or a mini-Neptune, and the two populations appear in roughly equal proportion. But where did they come from?

One theory is that most exoplanets are born as mini-Neptunes but some are stripped of their gas shells by radiation from host stars, leaving behind only a dense, rocky core. This theory predicts that our Galaxy has very few Earth-sized and smaller exoplanets known as Earths and mini-Earths. However, recent observations show this may not be the case.

To find out more, the astronomers used a simulation to track the evolution of these mysterious exoplanets. The model used thermodynamic calculations based on how massive their rocky cores are, how far they are from their host stars, and how hot the surrounding gas is.

"Contrary to previous theories, our study shows that some exoplanets can never build gaseous atmospheres to begin with," says co-author Eve Lee, Assistant Professor in the Department of Physics at McGill University and the McGill Space Institute.

The findings suggest that not all super-Earths are remnants of mini-Neptunes. Rather, the exoplanets were formed by a single distribution of rocks, born in a spinning disk of gas and dust around host stars. "Some of the rocks grew gas shells, while others emerged and remained rocky super-Earths," she says.

How mini-Neptunes and super-Earths are born

Planets are thought to form in a spinning disk of gas and dust around stars. Rocks larger than the moon have enough gravitational pull to attract surrounding gas and form a shell around their cores. Over time this shell of gas cools and shrinks, creating space for more surrounding gas to be pulled in, causing the exoplanet to grow. Once the entire shell has cooled to the same temperature as the surrounding nebular gas, it can no longer shrink and growth stops.

For smaller cores, this shell is tiny, so they remain rocky exoplanets. The distinction between super-Earths and mini-Neptunes comes about from the ability of these rocks to grow and retain gas shells.
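As a back-of-envelope illustration of why core mass matters (this is not the McGill team's thermodynamic model), one can ask whether a rocky core's gravity can hold on to hydrogen gas at all by comparing its escape velocity with the thermal speed of the gas molecules; a common rule of thumb is that a body retains a gas over long timescales only if its escape velocity is roughly six times the gas's thermal speed. The mass-radius scaling used below is an assumed rough approximation for rocky planets:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23      # Boltzmann constant, J/K
M_EARTH = 5.972e24   # Earth mass, kg
R_EARTH = 6.371e6    # Earth radius, m
M_H2 = 3.35e-27      # mass of one H2 molecule, kg

def can_hold_hydrogen(core_mass_earths, gas_temp_k, safety=6.0):
    """Rule of thumb: a core retains hydrogen over long timescales only if
    its escape velocity exceeds ~6x the thermal speed of H2 molecules."""
    mass = core_mass_earths * M_EARTH
    # assumed rough mass-radius relation for rocky planets: R ~ M^0.27
    radius = R_EARTH * core_mass_earths ** 0.27
    v_escape = math.sqrt(2 * G * mass / radius)
    v_thermal = math.sqrt(3 * K_B * gas_temp_k / M_H2)
    return v_escape > safety * v_thermal

# at ~300 K nebular gas, a half-Earth-mass core cannot hold hydrogen,
# while a 4-Earth-mass core can
print(can_hold_hydrogen(0.5, 300.0), can_hold_hydrogen(4.0, 300.0))
# → False True
```

This toy criterion ignores the cooling-and-shrinking dynamics described above; it only captures the headline point that small cores stay bare and rocky while heavier cores can build gas shells.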

Read more at Science Daily

Scientists create liquid crystals that look a lot like their solid counterparts

 A team at the University of Colorado Boulder has designed new kinds of liquid crystals that mirror the complex structures of some solid crystals -- a major step forward in building flowing materials that can match the colorful diversity of forms seen in minerals and gems, from lazulite to topaz.

The group's findings, published today in the journal Nature, may one day lead to new types of smart windows and television or computer displays that can bend and control light like never before.

The results come down to a property of solid crystals that will be familiar to many chemists and gemologists: Symmetry.

Ivan Smalyukh, a professor in the Department of Physics at CU Boulder, explained that scientists categorize all known crystals into seven main classes, plus many more sub-classes -- in part based on the "symmetry operations" of their internal atoms. In other words, how many ways can you stick an imaginary mirror inside of a crystal or rotate it and still see the same structure? Think of this classification system as Baskin-Robbins' 31 flavors but for minerals.

To date, however, scientists haven't been able to create liquid crystals -- flowing materials that are found in most modern display technologies -- that come in those same many flavors.

"We know everything about all the possible symmetries of solid crystals that we can make. There are 230 of them," said Smalyukh, senior author of the new study who is also a fellow of the Renewable and Sustainable Energy Institute (RASEI) at CU Boulder. "When it comes to nematic liquid crystals, the kind in most displays, we only have a few that have been demonstrated so far."

That is, until now.

In their latest findings, Smalyukh and his colleagues came up with a way to design the first liquid crystals that resemble monoclinic and orthorhombic crystals -- two of those seven main classes of solid crystals. The findings, he said, bring a bit more order to the chaotic world of fluids.

"There are a lot of possible types of liquid crystals, but, so far, very few have been discovered," Smalyukh said. "That is great news for students because there's a lot more to find."

Symmetry in action

To understand symmetry in crystals, first picture your body. If you place a giant mirror running down the middle of your face, you'll see a reflection that looks (more or less) like the same person.

Solid crystals have similar properties. Cubic crystals, which include diamonds and pyrite, for example, are made up of atoms arranged in the shape of a perfect cube. They have a lot of symmetry operations.

"If you rotate those crystals by 90 or 180 degrees around many special axes, for example, all of the atoms stay in the right places," Smalyukh said.

But there are other types of crystals, too. The atoms inside monoclinic crystals, which include gypsum or lazulite, are arranged in a shape that looks like a slanted column. Flip or rotate these crystals all you want, and they still have only two distinct symmetries -- one mirror plane and one axis of 180-degree rotation, or the symmetry that you can see by spinning a crystal around an axis and noticing that it looks the same every 180 degrees. Scientists call that a "low-symmetry" state.
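The mirror and rotation checks described above can be made concrete with a toy numerical test (an illustration, not code from the study): an operation counts as a symmetry if applying it maps the set of atom positions onto itself. Here the corners of a cube stand in for a cubic crystal:

```python
import numpy as np

def is_symmetry(points, op):
    """True if the 3x3 operation `op` maps the point set onto itself."""
    transformed = points @ op.T
    as_set = lambda pts: {tuple(np.round(p, 6)) for p in pts}
    return as_set(points) == as_set(transformed)

# corners of a cube centred on the origin: a toy "cubic crystal"
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)

rot90_z = np.array([[0., -1., 0.],
                    [1.,  0., 0.],
                    [0.,  0., 1.]])      # 90-degree rotation about the z-axis
mirror_yz = np.diag([-1., 1., 1.])       # mirror across the yz-plane
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
rot45_z = np.array([[c, -s, 0.],
                    [s,  c, 0.],
                    [0., 0., 1.]])       # 45-degree rotation: NOT a symmetry

print(is_symmetry(cube, rot90_z),    # True
      is_symmetry(cube, mirror_yz),  # True
      is_symmetry(cube, rot45_z))    # False
```

A cubic arrangement passes many such tests; a monoclinic one, as described above, passes only a single mirror and a single 180-degree rotation.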

Traditional liquid crystals, however, don't display those kinds of complex structures. The most common liquid crystals, for example, are made up of tiny rod-shaped molecules. Under the microscope, they tend to line up like dry pasta noodles tossed into a pot, Smalyukh said.

"When things can flow they don't usually exhibit such low symmetries," Smalyukh said.

Order in liquids

He and his colleagues wanted to see if they could change that. To begin, the team mixed together two different kinds of liquid crystals. The first was the common class made up of rod-shaped molecules. The second was made up of particles shaped like ultra-thin disks.

When the researchers brought them together, they noticed something strange: Under the right conditions in the lab, those two types of crystals pushed and squeezed each other, changing their orientation and arrangement. The end result was a nematic liquid crystal fluid with symmetry that looks a lot like that of a solid monoclinic crystal. The molecules inside displayed some symmetry, but only one mirror plane and one axis of 180-degree rotation.

The group had created, in other words, a material with the mathematical properties of a lazulite or gypsum crystal -- but theirs could flow like a fluid.

"We're asking a very fundamental question: What are the ways that you can combine order and fluidity in a single material?" Smalyukh said.

And, the team's creations are dynamic: If you heat the liquid crystals up or cool them down, for example, you can morph them into a rainbow of different structures, each with their own properties, said Haridas Mundoor, lead author of the new paper. That's pretty handy for engineers.

"This offers different avenues that can modify display technologies, which may enhance the energy efficiency and performance of devices like smartphones," said Mundoor, a postdoctoral research associate at CU Boulder.

He and his colleagues are still nowhere near making liquid crystals that can replicate the full spectrum of solid crystals. But the new paper gets them closer than ever before -- good news for fans of shiny things everywhere.

Read more at Science Daily

Virtual reality helping to treat fear of heights

 Researchers from the University of Basel have developed a virtual reality app for smartphones to reduce fear of heights. Now, they have conducted a clinical trial to study its efficacy. Trial participants who spent a total of four hours training with the app at home showed an improvement in their ability to handle real height situations.

Fear of heights is a widespread phenomenon. Approximately 5% of the general population experiences a debilitating level of discomfort in height situations. However, the people affected rarely take advantage of the available treatment options, such as exposure therapy, which involves putting the person in the anxiety-causing situation under the guidance of a professional. On the one hand, people are reluctant to confront their fear of heights. On the other hand, it can be difficult to reproduce the right kinds of height situations in a therapy setting.

This motivated the interdisciplinary research team led by Professor Dominique de Quervain to develop a smartphone-based virtual reality exposure therapy app called Easyheights. The app uses 360° images of real locations, which the researchers captured using a drone. People can use the app on their own smartphones together with a special virtual reality headset.

Gradually increasing the height

During the virtual experience, the user stands on a platform that is initially one meter above the ground. After the user has had time to acclimatize to the situation, the platform automatically rises. In this way, the perceived distance above the ground increases slowly but steadily without an increase in the person's level of fear.

The research team studied the efficacy of this approach in a randomized, controlled trial and published the results in the journal NPJ Digital Medicine. Fifty trial participants with a fear of heights either completed a four-hour height training program (one 60-minute session and six 30-minute sessions over the course of two weeks) using virtual reality, or were assigned to the control group, which did not complete these training sessions.

Before and after the training phase -- or the same period of time without training -- the trial participants ascended the Uetliberg lookout tower near Zurich as far as their fear of heights allowed. The researchers recorded the maximum height each participant reached, along with their subjective fear at each stage of the tower. At the end of the trial, the researchers evaluated the results from 22 subjects who completed the Easyheights training and 25 from the control group.

The group that completed the training with the app exhibited less fear on the tower and was able to ascend further towards the top than they could before completing the training. The control group exhibited no positive changes. The efficacy of the Easyheights training proved comparable to that of conventional exposure therapy.

Therapy in your own living room

Researchers have already been studying the use of virtual reality for treating fear of heights for more than two decades. "What is new, however, is that smartphones can be used to produce the virtual scenarios that previously required a technically complicated type of treatment, and this makes it much more accessible," explains Dr. Dorothée Bentz, lead author of the study.

The results from the study suggest that the repeated use of a smartphone-based virtual reality exposure therapy can greatly improve the behavior and subjective state of well-being in height situations. People who suffer from a mild fear of heights will soon be able to download the free app from major app stores and complete training sessions on their own. However, the researchers recommend that people who suffer from a serious fear of heights only use the app with the supervision of a professional.

Read more at Science Daily

New weapon against resistant bacteria

 Every day, people die from simple infections even though they have been treated with antibiotics. This is because more and more bacteria have become resistant to the types of antibiotics that doctors can prescribe.

"It's a huge societal problem and a crisis that we must solve. For example, by developing new antibiotics that can defeat the resistant bacteria," says professor of chemistry at the Department of Physics, Chemistry and Pharmacy, University of Southern Denmark, Poul Nielsen.

Resistant bacteria are not only known from pig farms, where it is becoming increasingly difficult to keep the pigsties disease-free. Hospitals, too, are finding with increasing regularity that infectious diseases cannot be controlled in patients. Thus, an infection in a surgical wound can become life-threatening even if the operation itself went well.

According to Poul Nielsen, it is important to be at the forefront of the development because the list of resistant bacteria will only grow, which means that the treatment options will be reduced. It is therefore important to develop alternatives that can be used when the current antibiotics no longer work.

"Resistance can occur very quickly, and then it's essential that we're ready," he says.

Together with his research assistant Christoffer Heidtmann and associate professor Janne Kudsk Klitgaard from the Department of Biochemistry and Molecular Biology as well as Clinical Microbiology, he has developed a substance that has the potential to become a new effective antibiotic, and SDU has now taken out a patent for it.

Unlike traditional antibiotics such as penicillin, sulfonamides and tetracyclines, this antibiotic is from the pleuromutilin class.

The substance is developed in a medicinal chemistry project and recently published in the Journal of Medicinal Chemistry.

The substance fights resistant Enterococcus, Streptococcus and Staphylococcus bacteria. The substance and the pleuromutilin class do this via a unique mechanism of action, which also causes resistance to develop at a very slow pace.

So far, the substance has been tested on bacteria and human cells. The next step towards becoming an approved drug is animal studies and then clinical studies in humans.

"If this substance is to reach doctors and patients as a drug, comprehensive and cost-intensive further development efforts are needed, which we can only initiate under the auspices of the university.

"The big pharmaceutical companies have that kind of money, but they are traditionally not interested in this kind of tasks, because they are not financially attractive," says Poul Nielsen.

According to Poul Nielsen, there are several reasons why it is not financially attractive to develop new antibiotics:

  • Antibiotics are only taken for days or weeks. There is more money in drugs for chronically ill people, such as antidepressants or blood pressure medicine.
  • Newly developed antibiotics will be backups, not used until the current antibiotics no longer work. So earnings are not just around the corner.
  • The bacteria can also become resistant to a new antibiotic, and then it has to be taken off the market again.

"However, this doesn't change the fact that the world community is in dire need of new effective drugs against antibiotic resistance. Maybe we should consider this a societal task, rather than a task that will only be solved if it's financially attractive," says Poul Nielsen.

He and his colleagues hope that the work of further developing their new antibiotic can continue. Whether it will happen, and whether it will be in a public or private context, only time will tell.

Resistant bacteria in Denmark

MRSA (methicillin-resistant Staphylococcus aureus) is found in pigs, among other sources. It may cause wound infections, abscesses, impetigo, infections of bones and joints, as well as blood poisoning.

ESBL (extended-spectrum beta-lactamase) is an enzyme produced by resistant intestinal bacteria, found especially in poultry, which can cause inflammation of the bladder, inflammation of the renal pelvis and blood poisoning.

Clostridium difficile is an intestinal bacterium that causes diarrhoea and is transmitted through faeces. It forms spores, which means that water, soap and alcohol have no effect.

VRE (vancomycin-resistant enterococci) are bacteria that are naturally resistant to a wide range of antibiotics. VRE typically causes inflammation of the bladder but can also cause inflammation of the heart valves (endocarditis).

Read more at Science Daily

Feb 10, 2021

Can super-Earth interior dynamics set the table for habitability?

 New research led by Carnegie's Yingwei Fei provides a framework for understanding the interiors of super-Earths -- rocky exoplanets between 1.5 and 2 times the size of our home planet -- which is a prerequisite to assess their potential for habitability. Planets of this size are among the most abundant in exoplanetary systems. The paper is published in Nature Communications.

"Although observations of an exoplanet's atmospheric composition will be the first way to search for signatures of life beyond Earth, many aspects of a planet's surface habitability are influenced by what's happening beneath the planet's surface, and that's where Carnegie researchers' longstanding expertise in the properties of rocky materials under extreme temperatures and pressures comes in," explained Earth and Planets Laboratory Director Richard Carlson.

On Earth, the interior dynamics and structure of the silicate mantle and metallic core drive plate tectonics, and generate the geodynamo that powers our magnetic field and shields us from dangerous ionizing particles and cosmic rays. Life as we know it would be impossible without this protection. Similarly, the interior dynamics and structure of super-Earths will shape the surface conditions of the planet.

With the exciting discoveries of a diversity of rocky exoplanets in recent decades, a key question has emerged: are much-more-massive super-Earths capable of creating conditions that are hospitable for life to arise and thrive?

Knowledge of what's occurring beneath a super-Earth's surface is crucial for determining whether or not a distant world is capable of hosting life. But the extreme conditions of super-Earth-mass planetary interiors challenge researchers' ability to probe the material properties of the minerals likely to exist there.

That's where lab-based mimicry comes in.

For decades, Carnegie researchers have been leaders at recreating the conditions of planetary interiors by putting small samples of material under immense pressures and high temperatures. But sometimes even these techniques reach their limitations.

"In order to build models that allow us to understand the interior dynamics and structure of super-Earths, we need to be able to take data from samples that approximate the conditions that would be found there, which could exceed 14 million times atmospheric pressure," Fei explained. "However, we kept running up against limitations when it came to creating these conditions in the lab."

A breakthrough occurred when the team -- including Carnegie's Asmaa Boujibar and Peter Driscoll, along with Christopher Seagle, Joshua Townsend, Chad McCoy, Luke Shulenburger, and Michael Furnish of Sandia National Laboratories -- was granted access to the world's most powerful, magnetically-driven pulsed power machine (Sandia's Z Pulsed Power Facility) to directly shock a high-density sample of bridgmanite -- a high-pressure magnesium silicate that is believed to be predominant in the mantles of rocky planets -- in order to expose it to the extreme conditions relevant to the interior of super-Earths.

A series of hypervelocity shockwave experiments on representative super-Earth mantle material provided density and melting temperature measurements that will be fundamental for interpreting the observed masses and radii of super-Earths.

The researchers found that under pressures representative of super-Earth interiors, bridgmanite has a very high melting point, which would have important implications for interior dynamics. Under certain thermal evolutionary scenarios, they say, massive rocky planets might have a thermally driven geodynamo early in their evolution, then lose it for billions of years when cooling slows down. A sustained geodynamo could eventually be re-started by the movement of lighter elements through inner core crystallization.

"The ability to make these measurements is crucial to developing reliable models of the internal structure of super-Earths up to eight times our planet's mass," Fei added. "These results will make a profound impact on our ability to interpret observational data."

Read more at Science Daily

Astronomers offer possible explanation for elusive dark-matter-free galaxies

 A team led by astronomers at the University of California, Riverside, has found that some dwarf galaxies may today appear to be dark-matter free even though they formed as galaxies dominated by dark matter in the past.

Galaxies that appear to have little to no dark matter -- nonluminous material thought to constitute 85% of matter in the universe -- complicate astronomers' understanding of the universe's dark matter content. Such galaxies, which have recently been found in observations, challenge a cosmological model used by astronomers called Lambda Cold Dark Matter, or LCDM, where all galaxies are surrounded by a massive and extended dark matter halo.

Dark-matter-free galaxies are not well understood in the astronomical community. One way to study the possible formation mechanisms for these elusive galaxies -- the ultradiffuse DF2 and DF4 galaxies are examples -- is to find similar objects in numerical simulations and study their time evolution and the circumstances that lead to their dark matter loss.

Jessica Doppel, a graduate student in the UC Riverside Department of Physics and Astronomy and the first author of a research paper published in the Monthly Notices of the Royal Astronomical Society, explained that in an LCDM universe all galaxies should be dark matter dominated.

"That's the challenge," she said. "Finding analogs in simulations of what observers see is significant and not guaranteed. Beginning to pin down the origins of these types of objects and their often-anomalous globular cluster populations allows us to further solidify our theoretical framework of dark matter and galaxy formation and confirms that no alternative forms of dark matter are needed. We found cold dark matter performs well."

For the study, the researchers used a cosmological and hydrodynamical simulation called Illustris, which offers a galaxy formation model that includes stellar evolution, supernova feedback, black hole growth, and mergers. The researchers found that a couple of dwarf galaxies in clusters had stellar content, globular cluster numbers, and dark matter mass similar to DF2 and DF4. As its name suggests, a dwarf galaxy is small, comprising up to several billion stars. In contrast, the Milky Way, which has more than 20 known dwarf galaxies orbiting it, has 200 to 400 billion stars. Globular clusters are often used to estimate the dark matter content of galaxies, especially dwarfs.

The researchers used the Illustris simulation to investigate the origin of odd dwarf galaxies such as DF2 and DF4. They found simulated analogs to dark-matter-free dwarfs in the form of objects that had evolved within the galaxy clusters for a long time and lost more than 90% of their dark matter via tidal stripping -- the stripping away of material by galactic tidal forces.

"Interestingly, the same mechanism of tidal stripping is able to explain other properties of dwarfs like DF2 and DF4 -- for example, the fact that they are 'ultradiffuse' galaxies," said co-author Laura Sales, an associate professor of physics and astronomy at UCR and Doppel's graduate advisor. "Our simulations suggest a combined solution to both the structure of these dwarfs and their low dark matter content. Possibly, extreme tidal mass loss in otherwise normal dwarf galaxies is how ultradiffuse objects are formed."

In collaboration with researchers at the Max Planck Institute for Astrophysics in Germany, Sales' group is currently working with improved simulations that feature more detailed physics and a numerical resolution about 16 times better than the Illustris simulation.

"With these data, we will be able to extend our study to even lower-mass dwarfs, which are more abundant in the universe and expected to be more dark matter dominated at their centers, making them more challenging to explain," Doppel said. "We will explore if tidal stripping could provide a path to deplete dwarfs of their inner dark matter content. We plan to make predictions about the dwarfs' stellar, globular cluster, and dark matter content, which we will then compare to future observations."

The research team has already been awarded time at the W. M. Keck Observatory to help answer some of the questions pertaining to observations of dwarfs in the Virgo cluster.

Read more at Science Daily

Shining a light on the true value of solar power

 Beyond the environmental benefits and lower electric bills, it turns out installing solar panels on your house actually benefits your whole community. Value estimates for grid-tied photovoltaic systems show that solar panels are beneficial for utility companies and consumers alike.

For years some utility companies have worried that solar panels drive up electric costs for people without panels. Joshua Pearce, Richard Witte Endowed Professor of Materials Science and Engineering and professor of electrical and computer engineering at Michigan Technological University, has shown the opposite is true -- grid-tied solar photovoltaic (PV) owners are actually subsidizing their non-PV neighbors.

Most PV systems are grid-tied and convert sunlight directly into electricity that is either used on-site or fed back into the grid. At night or on cloudy days, PV-owning customers use grid-sourced electricity so no batteries are needed.

"Anyone who puts up solar is being a great citizen for their neighbors and for their local utility," Pearce said, noting that when someone puts up grid-tied solar panels, they are essentially investing in the grid itself. "Customers with solar distributed generation are making it so utility companies don't have to make as many infrastructure investments, while at the same time solar shaves down peak demands when electricity is the most expensive."

Pearce and Koami Soulemane Hayibo, graduate student in the Michigan Tech Open Sustainability Technology (MOST) Lab, found that grid-tied PV-owning utility customers are undercompensated in most of the U.S., as the "value of solar" eclipses both the net metering and two-tiered rates that utilities pay for solar electricity. Their results are published online now and will be printed in the March issue of Renewable and Sustainable Energy Reviews.

The value of solar is becoming the preferred method for evaluating the economics of grid-tied PV systems. Yet value-of-solar calculations are challenging, and there is widespread disagreement in the literature on the methods and data needed. To overcome these limitations, Pearce and Hayibo's paper reviews past studies to develop a generalized model that considers realistic costs and liabilities utility companies can avoid when individual people install grid-tied solar panels. For each component of the value, a sensitivity analysis is run on the core variables, and these sensitivities are applied to the total value of solar.

The overall value of solar equation has numerous components:

  • Avoided operation and maintenance costs (fixed and variable)
  • Avoided fuel
  • Avoided generation capacity
  • Avoided reserve capacity (plants on standby that turn on if there is, for example, a large air conditioning load on a hot day)
  • Avoided transmission capacity (lines)
  • Environmental and health liability costs associated with polluting forms of electric generation

Pearce said one of the paper's goals was to provide the equations to determine the value of solar so individual utility companies can plug in their proprietary data to quickly make a complete valuation.
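As a sketch of that plug-in-your-own-data idea (every dollar figure below is a hypothetical placeholder, not a value from the study), the avoided-cost components can be tallied per kilowatt-hour:

```python
def value_of_solar(components):
    """Sum per-kWh avoided-cost components into a total value of solar ($/kWh)."""
    return sum(components.values())

# hypothetical placeholder inputs; a utility would substitute its own data
components_usd_per_kwh = {
    "avoided_o_and_m": 0.010,                # fixed + variable operation & maintenance
    "avoided_fuel": 0.035,
    "avoided_generation_capacity": 0.020,
    "avoided_reserve_capacity": 0.005,       # standby plants for peak loads
    "avoided_transmission_capacity": 0.015,
    "avoided_health_env_liability": 0.025,   # pollution-related externalities
}

print(f"value of solar: ${value_of_solar(components_usd_per_kwh):.3f}/kWh")
# → value of solar: $0.110/kWh
```

A total computed this way can then be compared against the net-metering or two-tiered rate a utility actually pays, which is the comparison the study uses to argue PV owners are undercompensated.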

"It can be concluded that substantial future regulatory reform is needed to ensure that grid-tied solar PV owners are not unjustly subsidizing U.S. electric utilities," Pearce explains. "This study provides greater clarity to decision makers so they see solar PV is truly an economic benefit in the best interest of all utility customers."

Solar PV technology is now a profitable method to decarbonize the grid, but if catastrophic climate change is to be avoided, emissions from transportation and heating must also decarbonize, Pearce argues.

One approach to renewable heating is leveraging improvements in PV with heat pumps (HPs), and it turns out investing in PV+HP tech has a better rate of return than CDs or savings accounts.

To determine the potential for PV+HP systems in Michigan's Upper Peninsula, Pearce performed numerical simulations and economic analyses using the same loads and climate but with local electricity and natural gas rates for Sault Ste. Marie, in both Canada and the U.S. North American residents can profitably install residential PV+HP systems, earning up to a 1.9% return in the U.S. and 2.7% in Canada, while providing for all of their electric and heating needs.

Read more at Science Daily

Emerging robotics technology may lead to better buildings in less time

 Emerging robotics technology may soon help construction companies and contractors create buildings in less time at higher quality and at lower costs.

Purdue University innovators developed and are testing a novel construction robotic system that uses an innovative mechanical design with advances in computer vision sensing technology to work in a construction setting.

The technology was developed with support from the National Science Foundation.

"Our work helps to address workforce shortages in the construction industry by automating key construction operations," said Jiansong Zhang, an assistant professor of construction management technology in the Purdue Polytechnic Institute. "On a construction site, there are many unknown factors that a construction robot must be able to account for effectively. This requires much more advanced sensing and reasoning technologies than those commonly used in a manufacturing environment."

The Purdue team's custom end effector design allows for material to be both placed and fastened in the same operation using the same arm, limiting the amount of equipment that is required to complete a given task.

Computer vision algorithms developed for the project allow the robotic system to sense building elements and match them to building information modeling (BIM) data in a variety of environments, and keep track of obstacles or safety hazards in the system's operational context.

"By basing the sensing for our robotic arm around computer vision technology, rather than more limited-scope and expensive sensing systems, we have the capability to complete many sensing tasks with a single affordable sensor," Zhang said. "This allows us to implement a more robust and versatile system at a lower cost."

Undergraduate researchers in Zhang's Automation and Intelligent Construction (AutoIC) Lab helped create this robotic technology.

The innovators worked with the Purdue Research Foundation Office of Technology Commercialization to patent the technology.

This work will be featured at OTC's 2021 Technology Showcase: The State of Innovation. The annual showcase, being held virtually this year Feb. 10-11, will feature novel innovations from inventors at Purdue and across the state of Indiana.

From Science Daily

Six previously FDA-approved drugs appear promising against SARS-CoV-2 in laboratory testing

 A team of investigators from the Republic of China has discovered that 6 drugs previously approved by the US Food and Drug Administration (FDA) for other indications could be repurposed to treat or prevent COVID-19. The research is published in Antimicrobial Agents and Chemotherapy, a journal of the American Society for Microbiology.

Using FDA-approved drugs saves time -- the drugs don't need to go through the FDA approval process again -- making them available quickly to treat patients who need them.

The investigators screened 2 large drug libraries, cumulatively containing 3,769 FDA-approved drugs, and found drugs that can inhibit 2 protein-cutting enzymes, called proteases, that are essential to the replication of SARS-CoV-2.

The assays for testing the drugs involved growing the virus in petri dishes, applying each drug to different petri dishes containing the virus, and then conducting plaque reduction assays to determine each drug's effectiveness. The original outbreak strain, clade S, was used in the assays. ("Clade" is a technical term for a familial group.)

The researchers, led by corresponding author Po-Huang Liang, PhD, also plan to test the drugs against the dominant GR strain, which was the first variant to emerge (in late January or February 2020), and against the recent, highly contagious United Kingdom strain. Dr. Liang is a research fellow and professor at the Institute of Biological Chemistry, Academia Sinica, Taiwan, Republic of China.

Their research also enables the investigators to better understand how coronavirus proteases recognize their substrates during replication. That will help them tweak the drugs they identified to be more effective inhibitors against the protease.

"Despite the variations of the virus strains emerging over the years, considering the significance of the 3CL protease in the viral replication, and the relatively low likelihood for this protein target to mutate, we aim to develop a broad spectrum of antiviral drugs using our platform, helping to prevent the emergence of future pathogenic SARS-CoV strains," said Dr. Liang.

Read more at Science Daily

Feb 9, 2021

1918 pandemic second wave had fatal consequences

 In the event of a pandemic, delayed reactions and a decentralized approach by the authorities at the start of a follow-up wave can lead to longer-lasting, more severe and more fatal consequences, researchers from the universities of Zurich and Toronto have found. The interdisciplinary team compared the Spanish flu of 1918 and 1919 in the Canton of Bern with the coronavirus pandemic of 2020.

The Spanish flu was the greatest demographic catastrophe in Switzerland's recent history, causing approximately 25,000 deaths in the country during 1918 and 1919. In the wake of the current coronavirus pandemic, there has been increased public and scientific interest in the events of that time. An interdisciplinary team of researchers in evolutionary medicine, history, geography and epidemiology from the universities of Zurich and Toronto has spent several years analyzing historical data on the spread of influenza-like illnesses during 1918 and 1919 in the Canton of Bern. The canton is ideally suited as a Swiss case study, because it is large and has a diverse landscape, it was hit particularly hard by the Spanish flu, and right at the start of the pandemic in July 1918 it introduced an obligation to report cases.

Public health measures effective in the first wave

The results of the new study show that the spread of Spanish flu differed depending on the region. In the first wave in July and August 1918, the Canton of Bern intervened relatively quickly, strongly and centrally, including by restricting gatherings and closing schools. "We see from the numbers that these measures -- similar to today -- were associated with a decrease in infection numbers," says co-first author Kaspar Staub of the Institute of Evolutionary Medicine at the University of Zurich. After the first wave had subsided, the canton lifted all measures entirely in September 1918, which led to a rapid resurgence of cases and the onset of a second wave after only a short time.

Delayed action at start of second wave was fatal

At the beginning of the second wave in October 1918, the Canton of Bern reacted hesitantly, unlike in the first wave. Fearing renewed economic consequences, the cantonal authorities left responsibility for new measures up to the individual municipalities for several weeks. "This hesitant and decentralized approach was fatal and contributed to the fact that the second wave became all the stronger and lasted longer," says co-first author Peter Jueni of the University of Toronto.

In addition, shortly after the peak of the second wave in November 1918, there was a national strike with demonstrations on social and labor issues and, most importantly, larger troop deployments. These mass gatherings, as well as a subsequent relaxation of the ban on gatherings when the number of cases was still far too high, were accompanied by a significant resurgence in infections. Ultimately, about 80 percent of the reported illnesses and deaths were attributable to the second wave.

History repeats itself in 2020

By comparing the weekly case counts of the Spanish flu and coronavirus, the researchers found that the second wave started in almost the same calendar week in both 1918 and 2020, and the delayed official response was similar. "While there are still considerable differences between the two pandemics, the steadily increasing parallels between 1918 and 2020 are remarkable," Staub says. The study also shows that empirical knowledge from past pandemics -- for example, on the challenges and how to deal with follow-up waves -- is available. "Since November 2020, deaths from Covid-19 have far exceeded those caused by cancer or cardiovascular disease and for around three months it has been the most common cause of death in Switzerland. In view of the high death rate during the second wave in comparison with other countries, and with the threat of a third wave due to virus mutations from England, South Africa and Brazil, lessons from the past could help the authorities and the public to rethink their response," adds Jueni.

Read more at Science Daily

Ecological interactions as a driver of evolution

Understanding the interaction of organisms in the evolution of species is an important topic in ecology. Insects and plants, for example, are two of the largest groups of organisms on Earth, and they are linked by a variety of interactions. Since the mid-20th century, theories linking this diversity to such specific interactions have proliferated.

The development of new technologies and new methods has made it possible to study the interaction between plants and insects in greater detail and to reveal the impact of these interactions on their respective evolution. In a new study, an international team of researchers, including botanist Prof. Stefan Wanke of TU Dresden, has established the link between ecological changes, genome-level adaptations and macroevolutionary consequences, confirming the importance of ecological interactions as drivers of evolution over long periods of time.

Butterflies belonging to the family Papilionidae are an exemplary group for this question. These butterflies specialize in the consumption of poisonous plants, with about 30% of the species feeding exclusively on plants in the family Aristolochiaceae.

Consumption of such plants gives the caterpillars of these butterflies an advantage: they sequester the plants' toxins, which in turn make the caterpillars themselves poisonous to predators. The larvae, however, suffer no harm from the toxin.

"We knew before we started this study that certain genes of the cytochrome P450 family in the Papillonidae are partly responsible for the adaptation to plants, especially for the detoxification of toxic compounds. However, many different genes are probably involved overall, because in addition to detoxification, this adaptation requires that the female butterfly is able to recognize its preferred plant, or also that the caterpillars can develop and survive normally in this environment" explains Prof. Wanke. Scientists had long suspected that evolutionary changes in plants must have an influence on many insect genes. From this, the international team first deduced the relationships between different Papilionidae species and reconstructed their host-plant preferences over time. This allowed them to show that Papilionidae feed on plants belonging to the family Aristolochiaceae and, in particular, the pipevine genus Aristolochia.

Based on the global distribution of these two groups of insects and plants, it was then possible to estimate the historical biogeography -- the movement in time and space -- of Papilionidae and Aristolochiaceae species. The researchers discovered that both groups originated in the Northern Hemisphere about 55 million years ago and subsequently spread throughout the world.

In the case of the Papilionidae, this migration has been accompanied by major changes in host plants since their emergence. The study of Papilionidae species confirmed that various host-plant shifts were generally associated with accelerated species diversification of the butterflies. In other words, more species emerged as a result of host plant change than when the host plant was retained.

Read more at Science Daily

Deepfake detectors can be defeated, computer scientists show for the first time

 Systems designed to detect deepfakes -- videos that manipulate real-life footage via artificial intelligence -- can be deceived, computer scientists showed for the first time at the WACV 2021 conference which took place online Jan. 5 to 9, 2021.

Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.

"Our work shows that attacks on deepfake detectors could be a real-world threat," said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. "More alarmingly, we demonstrate that it's possible to craft robust adversarial deepfakes in even when an adversary may not be aware of the inner workings of the machine learning model used by the detector."

In deepfakes, a subject's face is modified in order to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in videos: first tracking it and then passing the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models for identifying fake videos.

The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. "If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it," said Paarth Neekhara, the paper's other first coauthor and a UC San Diego computer science student.

Researchers created an adversarial example for every face in a video frame. While standard operations such as compressing and resizing video usually remove adversarial examples from an image, these examples are built to withstand those processes. The attack algorithm does this by estimating, over a set of input transformations, how the model ranks images as real or fake. From there, it uses this estimation to transform images in such a way that the adversarial image remains effective even after compression and decompression.

The modified version of the face is then inserted into the video frame, and the process is repeated for every frame in the video to create a deepfake video. The attack can also be applied to detectors that operate on entire video frames as opposed to just face crops.

The team declined to release their code so it wouldn't be used by hostile parties.

High success rate

Researchers tested their attacks in two scenarios: one where the attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model; and one where attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake. In the first scenario, the attack's success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos. This is the first work to demonstrate successful attacks on state-of-the-art deepfake detectors.

"To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses,"? the researchers write. "We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector."

Read more at Science Daily

Genes for face shape identified

 Genes that determine the shape of a person's facial profile have been discovered by a UCL-led research team.

The researchers identified 32 gene regions that influenced facial features such as nose, lip, jaw, and brow shape, nine of which were entirely new discoveries while the others validated genes with prior limited evidence.

The analysis of data from more than 6,000 volunteers across Latin America was published today in Science Advances.

The international research team, led from UCL, Aix-Marseille University and The Open University, found that one of the genes appears to have been inherited from the Denisovans, an extinct group of ancient humans who lived tens of thousands of years ago.

The team found that the gene, TBX15, which contributes to lip shape, was linked with genetic data found in the Denisovan people, providing a clue to the gene's origin. The Denisovans lived in central Asia, and other studies suggest they interbred with modern humans, as some of their DNA lives on in Pacific Islanders and Indigenous people of the Americas.

Co-corresponding author Dr Kaustubh Adhikari (UCL Genetics, Evolution & Environment and The Open University) said: "The face shape genes we found may have been the product of evolution as ancient humans evolved to adapt to their environments. Possibly, the version of the gene determining lip shape that was present in the Denisovans could have helped in body fat distribution to make them better suited to the cold climates of Central Asia, and was passed on to modern humans when the two groups met and interbred."

Co-first author Dr Pierre Faux (Aix-Marseille University) said: "To our knowledge this is the first time that a version of a gene inherited from ancient humans is associated with a facial feature in modern humans. In this case, it was only possible because we moved beyond Eurocentric research; modern-day Europeans do not carry any DNA from the Denisovans, but Native Americans do."

Co-first author Betty Bonfante (Aix-Marseille University) added: "It is one of only a few studies looking for genes affecting the face in a non-European population, and the first one to focus on the profile only."

Researchers have only been able to analyse complex genetic data from thousands of people at once over the last two decades, since the mapping of the human genome enabled the use of genome-wide association studies to find correlations between traits and genes. This study compared genetic information from the study participants with characteristics of their face shape, quantified with 59 measurements (distances, angles and ratios between set points) from photos of the participants' faces in profile.

Co-corresponding author Professor Andres Ruiz-Linares (Fudan University, UCL Genetics, Evolution & Environment, and Aix-Marseille University) said: "Research like this can provide basic biomedical insights and help us understand how humans evolved."

The findings of this research could help understand the developmental processes that determine facial features, which will help researchers studying genetic disorders that lead to facial abnormalities.

Read more at Science Daily

What's driving 'brain fog' in people with COVID-19

 One of the dozens of unusual symptoms that have emerged in COVID-19 patients is a condition that's informally called "COVID brain" or "brain fog." It's characterized by confusion, headaches, and loss of short-term memory. In severe cases, it can lead to psychosis and even seizures. It usually emerges weeks after someone first becomes sick with COVID-19.

In the February 8, 2021, issue of the journal Cancer Cell, a multidisciplinary team from Memorial Sloan Kettering reports an underlying cause of COVID brain: the presence of inflammatory molecules in the liquid surrounding the brain and spinal cord (called the cerebrospinal fluid). The findings suggest that anti-inflammatory drugs, such as steroids, may be useful for treating the condition, but more research is needed.

"We were initially approached by our colleagues in critical care medicine who had observed severe delirium in many patients who were hospitalized with COVID-19," says Jessica Wilcox, the Chief Fellow in neuro-oncology at MSK and one of the first authors of the new study. "That meeting turned into a tremendous collaboration between neurology, critical care, microbiology, and neuroradiology to learn what was going on and to see how we could better help our patients."

Recognizing a Familiar Symptom

The medical term for COVID brain is encephalopathy. Members of MSK's Department of Neurology felt well-poised to study it, Dr. Wilcox says, because they are already used to treating the condition in other systemic inflammatory syndromes. It is a side effect in patients who are receiving a type of immunotherapy called chimeric antigen receptor (CAR) T cell therapy, a treatment for blood cancer. When CAR T cell therapy is given, it causes immune cells to release molecules called cytokines, which help the body to kill the cancer. But cytokines can seep into the area around the brain and cause inflammation.

When the MSK team first began studying COVID brain, though, they didn't know that cytokines were the cause. They first suspected that the virus itself was having an effect on the brain. The study in the Cancer Cell paper focused on 18 patients who were hospitalized at MSK with COVID-19 and were experiencing severe neurologic problems. The patients were given a full neurology workup, including brain scans like MRIs and CTs and electroencephalogram (EEG) monitoring, to try to find the cause of their delirium. When nothing was found in the scans that would explain their condition, the researchers thought the answer might lie in the cerebrospinal fluid.

MSK's microbiology team devised a test to detect the COVID-19 virus in the fluid. Thirteen of the 18 patients had spinal taps to look for the virus, but it was not found. At that point, the rest of the fluid was taken to the lab of MSK physician-scientist Adrienne Boire for further study.

Using Science to Ask Clinical Questions

Jan Remsik, a research fellow in Dr. Boire's lab in the Human Oncology and Pathogenesis Program and the paper's other first author, led the analysis of the fluid. "We found that these patients had persistent inflammation and high levels of cytokines in their cerebrospinal fluid, which explained the symptoms they were having," Dr. Remsik says. He adds that some smaller case studies with only a few patients had reported similar findings, but this study is the largest one so far to look at this effect.

"We used to think that the nervous system was an immune-privileged organ, meaning that it didn't have any kind of relationship at all with the immune system," Dr. Boire says. "But the more we look, the more we find connections between the two." One focus of Dr. Boire's lab is studying how immune cells are able to cross the blood-brain barrier and enter this space, an area of research that's also important for learning how cancer cells are able to spread from other parts of the body to the brain.

"One thing that was really unique about Jan's approach is that he was able to do a really broad molecular screen to learn what was going on," Dr. Boire adds. "He took the tools that we use in cancer biology and applied them to COVID-19."

The inflammatory markers found in the COVID-19 patients were similar, but not identical, to those seen in people who have received CAR T cell therapy. And as with CAR T cell therapy, the neurologic effects are sometimes delayed. The initial inflammatory response with CAR T cell treatment is very similar to the reaction called cytokine storm that's often reported in people with COVID-19, Dr. Wilcox explains. With both COVID-19 and CAR T cell therapy, the neurologic effects come days or weeks later. In CAR T cell patients, neurologic symptoms are treated with steroids, but doctors don't yet know the role of anti-inflammatory treatments for people with neurologic symptoms of COVID-19. "Many of them are already getting steroids, and it's possible they may be benefitting," Dr. Wilcox says.

"This kind of research speaks to the cooperation across the departments at MSK and the interdisciplinary work that we're able to do," Dr. Boire concludes. "We saw people getting sick, and we were able to use our observations to ask big clinical questions and then take these questions into the lab to answer them."

Dr. Boire is an inventor on a patent related to modulating the permeability of the blood-brain barrier and is an unpaid member of the scientific advisory board of EVREN Technologies.

Read more at Science Daily

Feb 8, 2021

Uncovering how some corals resist bleaching

 Coral reefs are beautiful and diverse ecosystems that power the economies of many coastal communities. They're also facing threats that are driving their decline, including the planet's warming waters.

This threat hit extreme levels in 2015, when high temperatures were turning corals white around the globe. Kaneohe Bay in Hawaii was hit hard; nearly half of its corals bleached.

Hidden in the aftermath of this extreme event, however, were biochemical clues as to why some corals bleached while others were resistant, information that could help reefs better weather warming waters in the future. These clues have now been uncovered by researchers at Michigan State University and the University of Hawaii at Manoa.

"It was kind of horrifying," said coral biologist Crawford Drury, who witnessed 2015's bleaching event from Florida before joining UH Manoa's Hawaii Institute for Marine Biology, or HIMB. "It's disheartening to watch, but I try to think of it as an opportunity."

How this disturbing event became an opportunity is now clear thanks to a Feb. 8 report in Nature Ecology & Evolution that showcases HIMB's stewardship and MSU's biochemical expertise.

The researchers discovered chemical signatures in the corals' biology, or biomarkers, that are present in organisms that were most resistant to the bleaching. This previously hidden insight could help researchers and conservationists better restore and protect reefs around the world.

"Usually, we think of biomarkers as signatures of disease, but this could be a signature of health," said MSU's Robert Quinn, an assistant professor in the Department of Biochemistry and Molecular Biology. "This could help us restore reefs with the most resistant stock."

Corals are symbiotic communities where coral animal cells build homes for algae that provide them energy and create their colors. When corals bleach, however, the algae are lost and leave behind skeletons that are susceptible to disease and death.

This symbiosis also plays a role in a coral's resistance and resilience to bleaching, which HIMB was in a unique position to investigate -- literally. The institute sits right next to the reef, enabling experiments in real time.

"The reef is about 100 feet away," Drury said. "I could be there in 30 seconds."

During the 2015 bleaching event, researchers in the Gates Coral Lab at HIMB had tagged individual corals to keep tabs on them. Because most of the corals recovered, the team could follow them over time.

"We think about it as a biological library," said Drury, the principal investigator with the Gates Coral Lab. "It was set up by researchers in our lab who knew it would be very valuable."

Following the bleaching, the team compared and contrasted coral samples in the wild, noting how the organisms responded and recovered, making some surprising observations along the way. For example, neighboring corals could behave completely differently in response to high temperatures. One coral could bleach completely while its neighbor maintained a healthy golden hue.

To understand why, Drury and HIMB postdoctoral researcher Ty Roach, the lead author of the study, sent samples to Quinn at MSU. Here, Quinn and his team could thoroughly analyze the biochemicals of corals collected from this biological library using a method called metabolomics.

"I'm known more for my medical work," said Quinn, who studies the biochemistry of health and disease in humans. "But I've always loved ocean science. My background is in marine microbiology."

If the coral samples are the books in the library, Quinn's lab used sophisticated equipment to reveal the biochemical language within. In particular, his team used tools known as mass spectrometers to understand what set resistant corals apart from susceptible ones.

"The corals are completely different in their chemistry, but you can't tell until you run the mass spec," Quinn said. "These mass specs are some of the most advanced technology on the planet."

Quinn's team found that corals that were resistant to bleaching and those that were susceptible hosted two different communities of algae. The distinguishing feature between these algal populations was found in their cells, in compounds known as lipids.

The researchers' metabolomic analysis detected two different lipid formulations. Bleaching-resistant corals featured algae that have what are known as saturated lipids. Susceptible corals had more unsaturated lipids.

"This is not unlike the difference between oil and margarine, the latter having more saturated fat, making it solid at room temperature," Quinn said.

This discovery poses all sorts of new questions for researchers: How do the corals get these different algae? Is this difference unique to Hawaiian corals or can it be found elsewhere? How can researchers promote the growth and proliferation of resilient corals in a warming world?

"Mass specs are such incredible machines and reveal intricate details of the chemistry involved. The biology is really the hard part." Quinn said. "We're working on new grants. There are so many avenues to explore."

This initial project was funded by the Paul G. Allen Family Foundation.

"This collaboration has been a great opportunity to ask and answer questions," Drury said. "Hopefully, we're just getting started."

In the meantime, having this chemical information is promising for coral conservation. When conservationists reseed corals to help restore reefs, they can potentially select more resilient specimens.

"We can use natural resilience to better understand, support and manage coral reefs under climate change," Drury said.

Read more at Science Daily

Study of supergiant star Betelgeuse unveils the cause of its pulsations

Betelgeuse is normally one of the brightest, most recognizable stars of the winter sky, marking the left shoulder of the constellation Orion. But lately, it has been behaving strangely: an unprecedentedly large drop in its brightness was observed in early 2020, which prompted speculation that Betelgeuse may be about to explode.

To find out more, an international team of scientists, including Ken'ichi Nomoto at the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), conducted a rigorous examination of Betelgeuse. They concluded that the star is in the early core helium-burning phase (which is more than 100,000 years before an explosion happens) and has smaller mass and radius -- and is closer to Earth -- than previously thought. They also showed that smaller brightness variations of Betelgeuse have been driven by stellar pulsations, and suggested that the recent large dimming event involved a dust cloud.

The research team is led by Dr. Meridith Joyce from the Australian National University (ANU), who was an invited speaker at Kavli IPMU in January 2020, and includes Dr. Shing-Chi Leung, a former Kavli IPMU project researcher and a current postdoctoral scholar at the California Institute of Technology, and Dr. Chiaki Kobayashi, an associate professor at the University of Hertfordshire, who has been an affiliate member of Kavli IPMU.

The team analyzed the brightness variation of Betelgeuse by using evolutionary, hydrodynamic and seismic modelling. They achieved a clearer idea than before that Betelgeuse is currently burning helium in its core. They also showed that stellar pulsations driven by the so-called kappa-mechanism cause the star to continuously brighten and fade, with two periods of 185 (±13.5) days and approximately 400 days. But the large dip in brightness in early 2020 is unprecedented, and is likely due to a dust cloud in front of Betelgeuse, as seen in the image.

Their analysis reported a present-day mass of 16.5 to 19 solar masses -- slightly lower than the most recent estimates. The study also revealed how big Betelgeuse is, as well as its distance from Earth. The star's actual size has been a bit of a mystery: earlier studies, for instance, suggested it could be bigger than the orbit of Jupiter. However, the team's results showed Betelgeuse only extends out to two-thirds of that, with a radius 750 times the radius of the sun. Once the physical size of the star is known, its distance from Earth can be determined. Thus far, the team's results show it is a mere 530 light years from us, or 25 percent closer than previously thought.
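The size-to-distance step in the paragraph above is a small-angle calculation: distance = physical radius / angular radius. A back-of-the-envelope sketch, assuming an apparent angular radius of about 21 milliarcseconds (a commonly quoted value for Betelgeuse that is not stated in the article):

```python
import math

R_SUN_M = 6.957e8       # solar radius in metres
LY_M = 9.4607e15        # one light year in metres
MAS_TO_RAD = math.radians(1 / 3600 / 1000)  # milliarcseconds to radians

radius_m = 750 * R_SUN_M              # physical radius from the study
angular_radius_rad = 21 * MAS_TO_RAD  # ASSUMED apparent angular radius

# Small-angle approximation: d = R / theta
distance_ly = radius_m / angular_radius_rad / LY_M
print(f"estimated distance: {distance_ly:.0f} light years")
```

With that assumed angular size the estimate lands near 540 light years, in the same ballpark as the article's 530.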

Read more at Science Daily

Brain changed by caffeine in utero

New research finds caffeine consumed during pregnancy can change important brain pathways that could lead to behavioral problems later in life. Researchers in the Del Monte Institute for Neuroscience at the University of Rochester Medical Center (URMC) analyzed thousands of brain scans of nine- and ten-year-olds, revealing changes in the brain structure of children who were exposed to caffeine in utero.

"These are sort of small effects and it's not causing horrendous psychiatric conditions, but it is causing minimal but noticeable behavioral issues that should make us consider long term effects of caffeine intake during pregnancy," said John Foxe, Ph.D., director of the Del Monte Institute for Neuroscience, and principal investigator of the Adolescent Brain Cognitive Development or ABCD Study at the University of Rochester. "I suppose the outcome of this study will be a recommendation that any caffeine during pregnancy is probably not such a good idea."

Elevated behavioral issues, attention difficulties, and hyperactivity are all symptoms that researchers observed in these children. "What makes this unique is that we have a biological pathway that looks different when you consume caffeine through pregnancy," said Zachary Christensen, an M.D./Ph.D. candidate in the Medical Science Training Program and first author on the paper published in the journal Neuropharmacology. "Previous studies have shown that children perform differently on IQ tests, or they have different psychopathology, but that could also be related to demographics, so it's hard to parse that out until you have something like a biomarker. This gives us a place to start future research to try to learn exactly when the change is occurring in the brain."

Investigators analyzed brain scans of more than 9,000 nine- and ten-year-old participants in the ABCD study. They found clear changes in how the white matter tracts -- which form connections between brain regions -- were organized in children whose mothers reported they consumed caffeine during pregnancy.

URMC is one of 21 sites across the country collecting data for the ABCD study, the largest long-term study of brain development and child health. The study is funded by the National Institutes of Health. Ed Freedman, Ph.D., is the principal investigator of the ABCD study in Rochester and a co-author of the study.

"It is important to point out this is a retrospective study," said Foxe. "We are relying on mothers to remember how much caffeine they took in while they were pregnant."

Previous studies have found caffeine can have a negative effect on pregnancy. It is also known that a fetus does not have the enzyme necessary to break down caffeine when it crosses the placenta. This new study reveals that caffeine could also leave a lasting impact on neurodevelopment.

The researchers point out that it is unclear if the impact of the caffeine on the fetal brain varies from one trimester to the next, or when during gestation these structural changes occur.

Read more at Science Daily

Rare blast's remains discovered in Milky Way's center

 Astronomers may have found our galaxy's first example of an unusual kind of stellar explosion. This discovery, made with NASA's Chandra X-ray Observatory, adds to the understanding of how some stars shatter and seed the universe with elements critical for life on Earth.

This intriguing object, located near the center of the Milky Way, is a supernova remnant called Sagittarius A East, or Sgr A East for short. Based on Chandra data, astronomers previously classified the object as the remains of a massive star that exploded as a supernova, one of many kinds of exploded stars that scientists have catalogued.

Using longer Chandra observations, a team of astronomers has now instead concluded that the object is left over from a different type of supernova. It is the explosion of a white dwarf, a shrunken stellar ember from a fuel-depleted star like our Sun. When a white dwarf pulls too much material from a companion star or merges with another white dwarf, the white dwarf is destroyed, accompanied by a stunning flash of light.

Astronomers prize these "Type Ia supernovae" because most of them give off almost the same amount of light every time, no matter where they are located. This allows scientists to use them to accurately measure distances across space and study the expansion of the universe.

Data from Chandra have revealed that Sgr A East, however, did not come from an ordinary Type Ia. Instead, it appears to belong to a special group of supernovae that produce different relative amounts of elements than traditional Type Ias do, and have less powerful explosions. This subset is referred to as "Type Iax," a potentially important member of the supernova family.

"While we've found Type Iax supernovae in other galaxies, we haven't identified evidence for one in the Milky Way until now," said Ping Zhou of Nanjing University in China, who led the new study while at the University of Amsterdam. "This discovery is important for getting a handle on the myriad ways white dwarfs explode."

The explosions of white dwarfs are one of the most important sources in the universe of elements like iron, nickel, and chromium. The only places that scientists know these elements can be created are inside the nuclear furnaces of stars or in the explosions that end their lives.

"This result shows us the diversity of types and causes of white dwarf explosions, and the different ways that they make these essential elements," said co-author Shing-Chi Leung of Caltech in Pasadena, California. "If we're right about the identity of this supernova's remains, it would be the nearest known example to Earth."

Astronomers are still debating the cause of Type Iax supernova explosions, but the leading theory is that they involve thermonuclear reactions that travel much more slowly through the star than in Type Ia supernovae. This relatively slow walk of the blast leads to weaker explosions and, hence, different amounts of elements produced in the explosion. It is also possible that part of the white dwarf is left behind.

Sgr A East is located very close to Sagittarius A*, the supermassive black hole in the center of our Milky Way galaxy, and likely intersects with the disk of material surrounding the black hole. The team was able to use Chandra observations targeting the supermassive black hole and the region around it for a total of about 35 days to study Sgr A East and find the unusual pattern of elements in the X-ray data. The Chandra results agree with computer models predicting a white dwarf that has undergone slow-moving nuclear reactions, making it a strong candidate for a Type Iax supernova remnant.

"This supernova remnant is in the background of many Chandra images of our galaxy's supermassive black hole taken over the last 20 years," said Zhiyuan Li, also of Nanjing University. "We finally may have worked out what this object is and how it came to be."

In other galaxies, scientists observe that Type Iax supernovae occur at a rate that is about one third that of Type Ia supernovae. In the Milky Way, there have been three confirmed Type Ia supernova remnants and two candidates that are younger than 2,000 years, corresponding to an age when remnants are still relatively bright before fading later. If Sgr A East is younger than 2,000 years and resulted from a Type Iax supernova, this study suggests that our galaxy is in line with the relative numbers of Type Iax supernovae seen in other galaxies.
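The population arithmetic behind this consistency argument is simple enough to sketch from the numbers quoted above: at one third the Type Ia rate, three to five young Type Ia remnants imply roughly one to two young Type Iax remnants.

```python
# Type Iax supernovae occur at about one third the Type Ia rate
# (rate quoted in the article for other galaxies).
IAX_TO_IA_RATIO = 1 / 3

confirmed_ia = 3   # young (<2,000 yr) Type Ia remnants in the Milky Way
candidate_ia = 2   # plus two candidates of similar age

low = confirmed_ia * IAX_TO_IA_RATIO
high = (confirmed_ia + candidate_ia) * IAX_TO_IA_RATIO
print(f"expected young Type Iax remnants: {low:.1f} to {high:.1f}")
```

A single young Type Iax remnant, which is what Sgr A East would be, sits comfortably in that range.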

Previous studies had suggested that Sgr A East was the remnant of a massive star's collapse, while also noting that a normal Type Ia supernova could not be ruled out. The latest study, based on this deep Chandra data, argues against both the massive-star and the normal Type Ia interpretations.

These results have been published today in The Astrophysical Journal, and a preprint is available online. The other co-authors of the paper are Ken'ichi Nomoto of The University of Tokyo in Japan, Jacco Vink of the University of Amsterdam in The Netherlands, and Yang Chen, also of Nanjing University.

Read more at Science Daily

Feb 7, 2021

Discoveries at the edge of the periodic table: First ever measurements of einsteinium

Since element 99 -- einsteinium -- was discovered in 1952 at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) from the debris of the first hydrogen bomb, scientists have performed very few experiments with it because it is so hard to create and is exceptionally radioactive. A team of Berkeley Lab chemists has overcome these obstacles to report the first study characterizing some of its properties, opening the door to a better understanding of the remaining transuranic elements of the actinide series.

Published in the journal Nature, the study, "Structural and Spectroscopic Characterization of an Einsteinium Complex," was co-led by Berkeley Lab scientist Rebecca Abergel and Los Alamos National Laboratory scientist Stosh Kozimor, and included scientists from the two laboratories, UC Berkeley, and Georgetown University, several of whom are graduate students and postdoctoral fellows. With less than 250 nanograms of the element, the team measured the first-ever einsteinium bond distance, a basic property of an element's interactions with other atoms and molecules.

"There's not much known about einsteinium," said Abergel, who leads Berkeley Lab's Heavy Element Chemistry group and is an assistant professor in UC Berkeley's Nuclear Engineering department. "It's a remarkable achievement that we were able to work with this small amount of material and do inorganic chemistry. It's significant because the more we understand about its chemical behavior, the more we can apply this understanding for the development of new materials or new technologies, not necessarily just with einsteinium, but with the rest of the actinides too. And we can establish trends in the periodic table."

Short-lived and hard to make

Abergel and her team used experimental facilities not available decades ago when einsteinium was first discovered -- the Molecular Foundry at Berkeley Lab and the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC National Accelerator Laboratory, both DOE Office of Science user facilities -- to conduct luminescence spectroscopy and X-ray absorption spectroscopy experiments.

But first, getting the sample in a usable form was almost half the battle. "This whole paper is a long series of unfortunate events," she said wryly.

The material was made at Oak Ridge National Laboratory's High Flux Isotope Reactor, one of only a few places in the world that is capable of making einsteinium, which involves bombarding curium targets with neutrons to trigger a long chain of nuclear reactions. The first problem they encountered was that the sample was contaminated with a significant amount of californium, as making pure einsteinium in a usable quantity is extraordinarily challenging.

So they had to scrap their original plan to use X-ray crystallography -- which is considered the gold standard for obtaining structural information on highly radioactive molecules but requires a pure sample of metal -- and instead came up with a new way to make samples and leverage element-specific research techniques. Researchers at Los Alamos provided critical assistance in this step by designing a sample holder uniquely suited to the challenges intrinsic to einsteinium.

Then, contending with radioactive decay was another challenge. The Berkeley Lab team conducted their experiments with einsteinium-254, one of the more stable isotopes of the element. It has a half-life of 276 days, which is the time for half of the material to decay. Although the team was able to conduct many of the experiments before the coronavirus pandemic, they had plans for follow-up experiments that got interrupted thanks to pandemic-related shutdowns. By the time they were able to get back into their lab last summer, most of the sample was gone.
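The 276-day half-life quoted above translates directly into how quickly the sample vanishes: the fraction remaining after t days is (1/2)^(t/276), so after a year only about 40 percent is left.

```python
# Exponential decay of einsteinium-254, using the half-life stated
# in the article (276 days).
HALF_LIFE_DAYS = 276

def remaining(t_days):
    """Fraction of the original Es-254 sample left after t_days."""
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

print(f"after one year: {remaining(365):.0%} of the sample remains")  # ~40%
```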

Bond distance and beyond

Still, the researchers were able to measure a bond distance with einsteinium and also discovered some physical chemistry behavior that was different from what would be expected from the actinide series, which are the elements on the bottom row of the periodic table.

"Determining the bond distance may not sound interesting, but it's the first thing you would want to know about how a metal binds to other molecules. What kind of chemical interaction is this element going to have with other atoms and molecules?" Abergel said.

Once scientists have this picture of the atomic arrangement of a molecule that incorporates einsteinium, they can try to find interesting chemical properties and improve understanding of periodic trends. "By getting this piece of data, we gain a better, broader understanding of how the whole actinide series behaves. And in that series, we have elements or isotopes that are useful for nuclear power production or radiopharmaceuticals," she said.

Tantalizingly, this research also offers the possibility of exploring what is beyond the edge of the periodic table, and possibly discovering a new element. "We're really starting to understand a little better what happens toward the end of the periodic table, and the next thing is, you could also envision an einsteinium target for discovering new elements," Abergel said. "Similar to the latest elements that were discovered in the past 10 years, like tennessine, which used a berkelium target, if you were to be able to isolate enough pure einsteinium to make a target, you could start looking for other elements and get closer to the (theorized) island of stability," where nuclear physicists have predicted isotopes may have half-lives of minutes or even days, instead of the microsecond or less half-lives that are common in the superheavy elements.

Read more at Science Daily

Climate change may have driven the emergence of SARS-CoV-2

 Global greenhouse gas emissions over the last century have made southern China a hotspot for bat-borne coronaviruses, by driving growth of forest habitat favoured by bats.

A new study published today in the journal Science of the Total Environment provides the first evidence of a mechanism by which climate change could have played a direct role in the emergence of SARS-CoV-2, the virus that caused the COVID-19 pandemic.

The study has revealed large-scale changes in the type of vegetation in the southern Chinese Yunnan province, and adjacent regions in Myanmar and Laos, over the last century. Climatic changes including increases in temperature, sunlight, and atmospheric carbon dioxide -- which affect the growth of plants and trees -- have changed natural habitats from tropical shrubland to tropical savannah and deciduous woodland. This created a suitable environment for many bat species that predominantly live in forests.

The number of coronaviruses in an area is closely linked to the number of different bat species present. The study found that an additional 40 bat species have moved into the southern Chinese Yunnan province in the past century, harbouring around 100 more types of bat-borne coronavirus. This 'global hotspot' is the region where genetic data suggests SARS-CoV-2 may have arisen.

"Climate change over the last century has made the habitat in the southern Chinese Yunnan province suitable for more bat species," said Dr Robert Beyer, a researcher in the University of Cambridge's Department of Zoology and first author of the study, who has recently taken up a European research fellowship at the Potsdam Institute for Climate Impact Research, Germany.

He added: "Understanding how the global distribution of bat species has shifted as a result of climate change may be an important step in reconstructing the origin of the COVID-19 outbreak."

To get their results, the researchers created a map of the world's vegetation as it was a century ago, using records of temperature, precipitation, and cloud cover. Then they used information on the vegetation requirements of the world's bat species to work out the global distribution of each species in the early 1900s. Comparing this to current distributions allowed them to see how bat 'species richness', the number of different species, has changed across the globe over the last century due to climate change.
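The richness comparison described above can be sketched as a per-cell count over estimated species ranges: richness is the number of species whose range covers a given map cell, and the change is today's count minus the count a century ago. The ranges below are toy placeholders, not the study's reconstructed distributions.

```python
# Toy species ranges: each species maps to the set of grid cells it occupies.
past_ranges = {
    "species_a": {(0, 0), (0, 1)},
    "species_b": {(1, 1)},
}
present_ranges = {
    "species_a": {(0, 1), (1, 1)},   # range has shifted
    "species_b": {(1, 1)},
    "species_c": {(1, 1)},           # newly arrived species
}

def richness(ranges, cell):
    """Number of species whose range includes the given cell."""
    return sum(cell in cells for cells in ranges.values())

cell = (1, 1)
change = richness(present_ranges, cell) - richness(past_ranges, cell)
print(f"richness change at {cell}: {change:+d}")
```

In the study, the same comparison is made for every cell of a global vegetation-derived map, which is how the ~40-species increase in southern Yunnan was identified.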

"As climate change altered habitats, species left some areas and moved into others -- taking their viruses with them. This not only altered the regions where viruses are present, but most likely allowed for new interactions between animals and viruses, causing more harmful viruses to be transmitted or evolve," said Beyer.

The world's bat population carries around 3,000 different types of coronavirus, with each bat species harbouring an average of 2.7 coronaviruses -- most without showing symptoms. An increase in the number of bat species in a particular region, driven by climate change, may increase the likelihood that a coronavirus harmful to humans is present, transmitted, or evolves there.
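The article's figures fit together arithmetically: roughly 40 additional species at an average of 2.7 coronaviruses each accounts for the "around 100 more types" quoted earlier.

```python
# Back-of-the-envelope check of the numbers quoted in the article.
added_species = 40          # bat species that moved into southern Yunnan
viruses_per_species = 2.7   # average coronaviruses carried per bat species

added_viruses = added_species * viruses_per_species
print(f"~{added_viruses:.0f} additional bat-borne coronaviruses")  # ~108
```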

Most coronaviruses carried by bats cannot jump into humans. But several coronaviruses known to infect humans are very likely to have originated in bats, including three that can cause human fatalities: Middle East Respiratory Syndrome (MERS) CoV, and Severe Acute Respiratory Syndrome (SARS) CoV-1 and CoV-2.

The region identified by the study as a hotspot for a climate-driven increase in bat species richness is also home to pangolins, which are suggested to have acted as intermediate hosts to SARS-CoV-2. The virus is likely to have jumped from bats to these animals, which were then sold at a wildlife market in Wuhan -- where the initial human outbreak occurred.

The researchers echo calls from previous studies that urge policy-makers to acknowledge the role of climate change in outbreaks of viral diseases, and to address climate change as part of COVID-19 economic recovery programmes.

"The COVID-19 pandemic has caused tremendous social and economic damage. Governments must seize the opportunity to reduce health risks from infectious diseases by taking decisive action to mitigate climate change," said Professor Andrea Manica in the University of Cambridge's Department of Zoology, who was involved in the study.

"The fact that climate change can accelerate the transmission of wildlife pathogens to humans should be an urgent wake-up call to reduce global emissions," added Professor Camilo Mora at the University of Hawai'i at Manoa, who initiated the project.

The researchers emphasised the need to limit the expansion of urban areas, farmland, and hunting grounds into natural habitat to reduce contact between humans and disease-carrying animals.

Read more at Science Daily