May 15, 2021

Charting the expansion history of the universe with supernovae

An international research team analyzed a database of more than 1000 supernova explosions and found that models for the expansion of the Universe best match the data when a new time-dependent variation is introduced. If proven correct with future, higher-quality data from the Subaru Telescope and other observatories, these results could indicate still unknown physics working on the cosmic scale.

Edwin Hubble's observations over 90 years ago showing the expansion of the Universe remain a cornerstone of modern astrophysics. But when it comes to calculating how fast the Universe was expanding at different times in its history, scientists have had difficulty getting theoretical models to match observations.

To solve this problem, a team led by Maria Dainotti (Assistant Professor at the National Astronomical Observatory of Japan and the Graduate University for Advanced Studies, SOKENDAI in Japan and an affiliated scientist at the Space Science Institute in the U.S.A.) analyzed a catalog of 1048 supernovae which exploded at different times in the history of the Universe. The team found that the theoretical models can be made to match the observations if one of the constants used in the equations, appropriately called the Hubble constant, is allowed to vary with time.
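
The press release does not specify the functional form the team used; one simple illustrative choice is a power law, H0(z) = H0/(1+z)^alpha, folded into the standard distance-modulus integral. A minimal sketch under that assumption (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def hubble_rate(z, h0=70.0, omega_m=0.3, alpha=0.0):
    """Flat Lambda-CDM expansion rate, with an illustrative power-law
    redshift dependence of the 'constant': H0(z) = h0 / (1 + z)**alpha.
    alpha = 0 recovers the standard, truly constant case."""
    h0_eff = h0 / (1.0 + z) ** alpha
    return h0_eff * np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def distance_modulus(z, **cosmo):
    """Distance modulus mu(z) = 5 log10(d_L / 10 pc) for one supernova."""
    d_c, _ = quad(lambda zp: C_KM_S / hubble_rate(zp, **cosmo), 0.0, z)
    d_l = (1.0 + z) * d_c              # luminosity distance in Mpc
    return 5.0 * np.log10(d_l) + 25.0  # +25 converts Mpc to 10 pc units

# A fit would compare mu(z) against the catalog's measured moduli while
# varying h0, omega_m and alpha; here we just show the effect at z = 1:
print(distance_modulus(1.0, alpha=0.0))   # standard constant H0
print(distance_modulus(1.0, alpha=0.02))  # mildly evolving H0
```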

There are several possible explanations for this apparent change in the Hubble constant. A likely but boring possibility is that observational biases exist in the data sample. To help correct for potential biases, astronomers are using Hyper Suprime-Cam on the Subaru Telescope to observe fainter supernovae over a wide area. Data from this instrument will increase the sample of observed supernovae in the early Universe and reduce the uncertainty in the data.

But if the current results hold up under further investigation, if the Hubble constant is in fact changing, that opens the question of what is driving the change. Answering that question could require a new, or at least modified, version of astrophysics.

From Science Daily

Hidden processes at work in the hearts of large stars revealed

Astronomers commonly refer to massive stars as the chemical factories of the Universe. They generally end their lives in spectacular supernovae, events that forge many of the elements on the periodic table. How elemental nuclei mix within these enormous stars has a major impact on our understanding of their evolution prior to their explosion. It also represents the largest uncertainty for scientists studying their structure and evolution.

A team of astronomers led by May Gade Pedersen, a postdoctoral scholar at UC Santa Barbara's Kavli Institute for Theoretical Physics, has now measured the internal mixing within an ensemble of these stars using observations of waves from their deep interiors. While scientists have used this technique before, this paper marks the first time this has been accomplished for such a large group of stars at once. The results, published in Nature Astronomy, show that the internal mixing is very diverse, with no clear dependence on a star's mass or age.

Stars spend the majority of their lives fusing hydrogen into helium deep in their cores. However, the fusion in particularly massive stars is so concentrated at the center that it leads to a turbulent convective core similar to a pot of boiling water. Convection, along with other processes like rotation, effectively removes helium ash from the core and replaces it with hydrogen from the envelope. This enables the stars to live much longer than otherwise predicted.

Astronomers believe this mixing arises from various physical phenomena, like internal rotation and internal seismic waves in the plasma excited by the convecting core. However, the theory has remained largely unconstrained by observations as it occurs so deep within the star. That said, there is an indirect method of peering into stars: asteroseismology, the study and interpretation of stellar oscillations. The technique has parallels to how seismologists use earthquakes to probe the interior of the Earth.

"The study of stellar oscillations challenges our understanding of stellar structure and evolution," Pedersen said. "They allow us to directly probe the stellar interiors and make comparisons to the predictions from our stellar models."

Pedersen and her collaborators from KU Leuven, the University of Hasselt, and the University of Newcastle have been able to derive the internal mixing for an ensemble of such stars using asteroseismology. This is the first time such a feat has been achieved, and it was only possible thanks to a new sample of 26 slowly pulsating B-type stars with identified stellar oscillations from NASA's Kepler mission.

Slowly pulsating B-type stars are between three and eight times more massive than the Sun. They expand and contract on time scales of the order of 12 hours to 5 days, and can change in brightness by up to 5%. Their oscillation modes are particularly sensitive to the conditions near the core, Pedersen explained.

"The internal mixing inside stars has now been measured observationally and turns out to be diverse in our sample, with some stars having almost no mixing while others reveal levels a million times higher," Pedersen said. The diversity turns out to be unrelated to the mass or age of the star. Rather, it's primarily influenced by the internal rotation, though that is not the only factor at play.

"These asteroseismic results finally allow astronomers to improve the theory of internal mixing of massive stars, which has so far remained uncalibrated by observations coming straight from their deep interiors," she added.

The precision with which astronomers can measure stellar oscillations depends directly on how long a star is observed. Increasing the time span from one night to one year results in a thousand-fold increase in the measured precision of oscillation frequencies.
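
This is essentially the Rayleigh criterion: the smallest resolvable frequency spacing scales as the inverse of the observing span. A quick arithmetic sketch, assuming roughly eight usable hours in a single night:

```python
# Rayleigh criterion: the smallest resolvable frequency spacing of a time
# series is roughly 1 / T, where T is the total observing span.

HOURS_PER_NIGHT = 8.0            # assumed usable hours in one night
HOURS_PER_YEAR = 365.25 * 24.0

df_night = 1.0 / (HOURS_PER_NIGHT * 3600.0)  # Hz, after one night
df_year = 1.0 / (HOURS_PER_YEAR * 3600.0)    # Hz, after one year

print(f"one night: df ~ {df_night:.2e} Hz")
print(f"one year:  df ~ {df_year:.2e} Hz")
print(f"gain: ~{df_night / df_year:.0f}x")   # ~1100, i.e. 'thousand-fold'
```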

"May and her collaborators have really shown the value of asteroseismic observations as probes of the deep interiors of stars in a new and profound way," said KITP Director Lars Bildsten, the Gluck Professor of Theoretical Physics. "I am excited to see what she finds next."

The best data currently available for this comes from the Kepler space mission, which observed the same patch of the sky for four continuous years. The slowly pulsating B-type stars were the highest mass pulsating stars that the telescope observed. While most of these are slightly too small to go supernova, they do share the same internal structure as the more massive stellar chemical factories. Pedersen hopes insights gleaned from studying the B-type stars will shed light on the inner workings of their higher mass, O-type counterparts.

She plans to use data from NASA's Transiting Exoplanet Survey Satellite (TESS) to study groups of oscillating high-mass stars in OB associations. These groups comprise 10 to more than 100 massive stars between 3 and 120 solar masses. Stars in OB associations are born from the same molecular cloud and share similar ages, she explained. The large sample of stars, and constraint from their common ages, provides exciting new opportunities to study the internal mixing properties of high-mass stars.

In addition to unveiling the processes hidden within stellar interiors, research on stellar oscillations can also provide information on other properties of the stars.

Read more at Science Daily

Solar wind from the center of the Earth

High-precision noble gas analyses indicate that solar wind particles from our primordial Sun were encased in the Earth's core over 4.5 billion years ago. Researchers from the Institute of Earth Sciences at Heidelberg University have concluded that the particles made their way into the overlying rock mantle over millions of years. The scientists found solar noble gases in an iron meteorite they studied. Because of their chemical composition, such meteorites are often used as natural models for the Earth's metallic core.

The rare class of iron meteorites makes up only five percent of all known meteorite finds on Earth. Most are fragments from inside larger asteroids that formed metallic cores in the first one to two million years of our Solar System. The Washington County iron meteorite now being studied at the Klaus Tschira Laboratory for Cosmochemistry at the Institute of Earth Sciences was found nearly 100 years ago. Its name comes from the location in Colorado (USA) where it was discovered. It resembles a metal discus, is six cm thick, and weighs approx. 5.7 kilograms, according to Prof. Dr Mario Trieloff, head of the Geo- and Cosmochemistry research group.

The researchers were finally able to definitively prove the presence of a solar component in the iron meteorite. Using a noble gas mass spectrometer, they determined that the samples from the Washington County meteorite contain noble gases whose isotopic ratios of helium and neon are typical for the solar wind. According to Dr Manfred Vogt, a member of the Trieloff team, "the measurements had to be extraordinarily accurate and precise to differentiate the solar signatures from the dominant cosmogenic noble gases and atmospheric contamination." The team postulates that solar wind particles in the primordial Solar System were trapped by the precursor materials of the Washington County parent asteroid. The noble gases captured along with the particles were dissolved into the liquid metal from which the asteroid's core formed.

The results of their measurements allowed the Heidelberg researchers to draw a conclusion by analogy that the core of the planet Earth might also contain such noble gas components. Yet another scientific observation supports this assumption. Prof. Trieloff's research group has long been measuring solar noble gas isotopes of helium and neon in igneous rock of oceanic islands like Hawaii and Réunion. These magmatites derive from a special form of volcanism sourced by mantle plumes rising from thousands of kilometres deep in the Earth's mantle. Their particularly high solar gas content makes them fundamentally different from the shallow mantle as represented by volcanic activity of submarine mid-ocean mountain ridges. "We always wondered why such different gas signatures could exist at all in a slowly albeit constantly convecting mantle," states the Heidelberg researcher.

Read more at Science Daily

New research optimizes body's own immune system to fight cancer

A groundbreaking study led by engineering and medical researchers at the University of Minnesota Twin Cities shows how engineered immune cells used in new cancer therapies can overcome physical barriers to allow a patient's own immune system to fight tumors. The research could improve cancer therapies in the future for millions of people worldwide.

The research is published in Nature Communications, a peer-reviewed, open access, scientific journal published by Nature Research.

Unlike treatments that rely on chemicals or radiation, immunotherapy is a type of cancer treatment that helps the patient's immune system fight cancer. T cells are a type of white blood cell that are of key importance to the immune system. Cytotoxic T cells are like soldiers who search out and destroy the targeted invader cells.

While there has been success in using immunotherapy for some types of cancer in the blood or blood-producing organs, a T cell's job is much more difficult in solid tumors.

"The tumor is sort of like an obstacle course, and the T cell has to run the gauntlet to reach the cancer cells," said Paolo Provenzano, the senior author of the study and a biomedical engineering associate professor in the University of Minnesota College of Science and Engineering. "These T cells get into tumors, but they just can't move around well, and they can't go where they need to go before they run out of gas and are exhausted."

In this first-of-its-kind study, the researchers are working to engineer the T cells and develop engineering design criteria to mechanically optimize the cells or make them more "fit" to overcome the barriers. If these immune cells can recognize and get to the cancer cells, then they can destroy the tumor.

In a fibrous mass of a tumor, the stiffness of the tumor causes immune cells to slow down about two-fold -- almost like they are running in quicksand.

"This study is our first publication where we have identified some structural and signaling elements where we can tune these T cells to make them more effective cancer fighters," said Provenzano, a researcher in the University of Minnesota Masonic Cancer Center. "Every 'obstacle course' within a tumor is slightly different, but there are some similarities. After engineering these immune cells, we found that they moved through the tumor almost twice as fast no matter what obstacles were in their way."

To engineer cytotoxic T cells, the authors used advanced gene editing technologies (also called genome editing) to change the DNA of the T cells so they are better able to overcome the tumor's barriers. The ultimate goal is to slow down the cancer cells and speed up the engineered immune cells. The researchers are working to create cells that are good at overcoming different kinds of barriers. When these cells are mixed together, the goal is for groups of immune cells to overcome all the different types of barriers to reach the cancer cells.

Provenzano said the next steps are to continue studying the mechanical properties of the cells to better understand how the immune cells and cancer cells interact. The researchers are currently studying engineered immune cells in rodents and in the future are planning clinical trials in humans.

While initial research has been focused on pancreatic cancer, Provenzano said the techniques they are developing could be used on many types of cancers.

"Using a cell engineering approach to fight cancer is a relatively new field," Provenzano said. "It allows for a very personalized approach with applications for a wide array of cancers. We feel we are expanding a new line of research to look at how our own bodies can fight cancer. This could have a big impact in the future."

In addition to Provenzano, the study's authors included current and former University of Minnesota Department of Biomedical Engineering researchers Erdem D. Tabdanov (co-author), Nelson J. Rodríguez-Merced (co-author), Vikram V. Puram, Mackenzie K. Callaway, and Ethan A. Ensminger; University of Minnesota Masonic Cancer Center and Medical School Department of Pediatrics researchers Emily J. Pomeroy, Kenta Yamamoto, Walker S. Lahr, Beau R. Webber, Branden S. Moriarity; National Institute of Biomedical Imaging and Bioengineering researcher Alexander X. Cartagena-Rivera; and National Heart, Lung, and Blood Institute researcher Alexander S. Zhovmer, who is now at the Center for Biologic Evaluation and Research.

Read more at Science Daily

May 13, 2021

New ebolavirus vaccine design seeks to drive stronger antibody defense

Scientists at Scripps Research have unveiled a new Ebola virus vaccine design, which they say has several advantages over standard vaccine approaches for Ebola and related viruses that continue to threaten global health.

In the new design, described in a paper in Nature Communications, copies of the Ebola virus outer spike protein, known as the glycoprotein, are tethered to the surface of a spherical carrier particle. The resulting structure resembles the spherical appearance of common RNA viruses that infect humans -- and is starkly different from the snake-like shape of the Ebola virus.

The scientists say the design is intended to stimulate a better protective immune response than standard vaccine approaches, which often expose the immune system to individual glycoproteins rather than realistic-looking virus particles.

In designing the vaccine, the researchers also modified the outer spike protein to be more stable than the normal, "wild-type" version found in actual Ebola virus. In tests in mice and rabbits, they showed that this stabilized version elicited virus-neutralizing antibodies more strongly than the wild-type glycoprotein used in prior Ebola vaccine approaches.

"Here, we did a step-by-step investigation of glycoprotein stability and how that affects the vaccine's ability to elicit antibodies," says Jiang Zhu, PhD, associate professor in the Department of Integrative Structural and Computational Biology at Scripps Research and inventor of the vaccine. "In the end, we were able to develop a really promising vaccine design."

Continued viral threat

Ebola virus is endemic in various African bat species and can jump to humans, causing outbreaks of hemorrhagic fever with high mortality rates. The largest known outbreak occurred in West Africa during 2013-2016, killing more than 11,000 people.

About two decades ago, Canadian researchers developed a vaccine against Zaire ebolavirus, more commonly known as Ebola virus. The vaccine, which was later licensed to a major pharma company and is called rVSV-ZEBOV, uses a live virus -- vesicular stomatitis virus -- which has been modified to include the gene for the Ebola virus glycoprotein.

When injected, the rVSV-ZEBOV vaccine infects cells and produces copies of the glycoprotein, eliciting an immune response to protect against future exposure to Ebola virus. Tests in Africa amid the aforementioned outbreak suggested it worked well, and it was approved by the Food and Drug Administration in late 2019. However, those tests lacked placebo groups and other standard features of typical large-scale phase-III trials. Thus, questions remain about its true efficacy.

In developing their new ebolavirus vaccine design, Zhu and his team focused on the relative instability of the glycoprotein structure as a potential factor in vaccine effectiveness. They investigated the molecular sources of this instability in detail, and eventually came up with a set of modifications that greatly stabilize the glycoprotein. In mice and rabbits, their modified glycoprotein elicited a more potent neutralizing antibody response against two different ebolaviruses -- the Makona strain of Ebola virus and the Uganda strain of Bundibugyo ebolavirus -- than the wild-type glycoprotein did.

The team's design also included special protein segments that self-assemble tightly into a ball-shaped "nanoparticle" that supports multiple glycoproteins on its surface. This nanoparticle-based structure presents the glycoproteins to the immune system in a way that resembles common human viruses, whose spherical particles the body has learned to recognize.

"Think of our nanoparticle as your sport vehicle, with a roof rack that carries a mountain bike and a trunk where you stow your clothes, gears and food," Zhu explains. "The only difference here is that the Ebola virus spike is your mountain bike, and the locking domains and T-cell epitopes are your stuff in the trunk. We call that a multilayered design."

A new approach

This nanoparticle design is distinctively different from other nanoparticle platforms. Zhu explains that in his team's design, the genetic codes of the optimized glycoprotein, the nanoparticle-forming unit, the locking domain and the T-cell epitope are all contained in a single piece of DNA. In cells, this DNA generates a single protein chain that can self-assemble, forming the right structure and associating with other identical chains to create a virus-like protein ball with multiple layers.

"The idea is that the all-in-one design simplifies the manufacturing process and drives the vaccine cost lower," Zhu says.

His team already has used the nanoparticle platform to create a COVID-19 vaccine candidate, which has shown in animal models that it can induce a powerful antibody response to both SARS-CoV-1 and SARS-CoV-2. It has also been shown to be effective against variants.

For Ebola virus, the nanoparticle-based vaccines showed far better results in mouse and rabbit virus-neutralization tests than tests that used only glycoproteins to stimulate an immune response. Inoculating animals with the Ebola wild-type glycoprotein, which tends to fall apart, led to signs suggesting a vaccine phenomenon known as antibody-dependent enhancement -- in which a vaccine elicits not only virus-neutralizing antibodies, but also antibodies that paradoxically increase the virus's ability to infect cells. The researchers found that their best nanoparticle-based designs only minimally elicited these bad antibodies.

"There are a lot of things in the Ebola virus vaccine field that still need to be examined carefully, but in this study, we ended up with two nanoparticle-based designs that seem very suitable for further optimization and testing," Zhu says.

He says the vaccine approach can be extended to other members of the same virus family, such as Marburg virus, which is also a major threat. Ebolaviruses and marburgviruses both belong to a group of viruses, known as filoviruses, that have a bizarre thread-like shape when seen under a microscope.

Read more at Science Daily

Organic meat less likely to be contaminated with multidrug-resistant bacteria

Meat that is certified organic by the U.S. Department of Agriculture is less likely to be contaminated with bacteria that can sicken people, including dangerous, multidrug-resistant organisms, compared to conventionally produced meat, according to a study from researchers at the Johns Hopkins Bloomberg School of Public Health.

The findings highlight the risk to consumers of contracting foodborne illness -- contaminated animal products and produce sicken tens of millions of people in the U.S. each year -- and the prevalence of multidrug-resistant organisms that, when they lead to illness, can complicate treatment.

The researchers found that, compared to conventionally processed meats, organic-certified meats were 56 percent less likely to be contaminated with multidrug-resistant bacteria. The study was based on nationwide testing of meats from 2012 to 2017 as part of the U.S. National Antimicrobial Resistance Monitoring System (NARMS).

In order for meat to be certified organic by the USDA, animals can never have been administered antibiotics or hormones, and animal feed and forage such as grass and hay must be 100 percent organic. A longstanding concern about antibiotic use in livestock and livestock feed is the increased prevalence of antibiotic-resistant pathogens. To monitor this trend, in 1996 the federal government developed NARMS to track antibiotic resistance in bacteria isolated from retail meats, farmed animals, and patients with foodborne illness in the U.S.

For their study, the Bloomberg School research team analyzed U.S. Food and Drug Administration-NARMS data from randomly sampled chicken breast, ground beef, ground turkey, and pork for any contamination and for contamination by multidrug-resistant organisms. The analysis covered four types of bacteria: Salmonella, Campylobacter, Enterococcus, and Escherichia coli.

The study covered a total of 39,348 meat samples, of which 1,422 were found to be contaminated with at least one multidrug-resistant organism. The rate of contamination was 4 percent in the conventionally produced meat samples and just under 1 percent in those that were produced organically.
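
As a quick sanity check on the headline figures (a sketch using only the numbers quoted above; the article does not give per-category sample counts, so only the overall rate can be recomputed):

```python
total_samples = 39_348
mdr_contaminated = 1_422  # samples with at least one multidrug-resistant organism

print(f"overall MDR rate: {mdr_contaminated / total_samples:.1%}")  # ~3.6%

# The article reports roughly 4% contamination for conventionally produced
# samples and just under 1% for organic ones. The '56 percent less likely'
# figure quoted earlier is the study's adjusted estimate, so it need not
# equal a raw ratio of those two percentages.
```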

The study was published May 12 in Environmental Health Perspectives.

"The presence of pathogenic bacteria is worrisome in and of itself, considering the possible increased risk of contracting foodborne illness," says senior author Meghan Davis, DVM, PhD, associate professor in the Department of Environmental Health and Engineering at the Bloomberg School. "If infections turn out to be multidrug resistant, they can be more deadly and more costly to treat."

The analysis also suggested that the type of processing facility may influence the likelihood of meat contamination. Meat processors fall into three categories: exclusively organic, exclusively conventional, or those that handle both organic and conventional meats -- so-called "split" processors. The study found that among conventional meats, those processed at facilities that exclusively handled conventional meats were contaminated with bacteria one-third of the time, while those handled at facilities that processed both conventional and organic meats were contaminated one-quarter of the time. The prevalence of multidrug-resistant bacteria was roughly the same in these two meat processor categories.

"The required disinfection of equipment between processing batches of organic and conventional meats may explain our findings of reduced bacterial contamination on products from facilities that process both types of meats," says Davis.

Read more at Science Daily

How smartphones can help detect ecological change

Leipzig/Jena/Ilmenau. Mobile apps like Flora Incognita that allow automated identification of wild plants can not only identify plant species, but also uncover large-scale ecological patterns. These patterns are surprisingly similar to the ones derived from long-term inventory data of the German flora, even though they have been acquired over much shorter time periods and are influenced by user behaviour. This opens up new perspectives for rapid detection of biodiversity changes. These are the key results of a study led by a team of researchers from Central Germany, which has recently been published in Ecography.

With the help of Artificial Intelligence, plant species today can be classified with high accuracy. Smartphone applications leverage this technology to enable users to easily identify plant species in the field, giving laypersons access to biodiversity at their fingertips. Against the backdrop of climate change, habitat loss and land-use change, these applications may serve another use: by gathering information on the locations of identified plant species, valuable datasets are created, potentially providing researchers with information on changing environmental conditions.

But is this information reliable -- as reliable as the information provided by data collected over long time periods? A team of researchers from the German Centre for Integrative Biodiversity Research (iDiv), the Remote Sensing Centre for Earth System Research (RSC4Earth) of Leipzig University (UL) and Helmholtz Centre for Environmental Research (UFZ), the Max Planck Institute for Biogeochemistry (MPI-BGC) and Technical University Ilmenau wanted to find an answer to this question. The researchers analysed data collected with the mobile app Flora Incognita between 2018 and 2019 in Germany and compared it to the FlorKart database of the German Federal Agency for Nature Conservation (BfN). This database contains long-term inventory data collected by over 5,000 floristic experts over a period of more than 70 years.

Mobile app uncovers macroecological patterns in Germany

The researchers report that the Flora Incognita data, collected over only two years, allowed them to uncover macroecological patterns in Germany similar to those derived from long-term inventory data of German flora. The data therefore also reflected the effects of several environmental drivers on the distribution of different plant species.

However, directly comparing the two datasets revealed major differences between the Flora Incognita data and the long-term inventory data in regions with a low human population density. "Of course, how much data is collected in a region strongly depends on the number of smartphone users in that region," said last author Dr. Jana Wäldchen from MPI-BGC, one of the developers of the mobile app. Deviations in the data were therefore more pronounced in rural areas, except for well-known tourist destinations such as the Zugspitze, Germany's highest mountain, or Amrum, an island on the North Sea coast.

User behaviour also influences which plant species are recorded by the mobile app. "The plant observations carried out with the app reflect what users see and what they are interested in," said Jana Wäldchen. Common and conspicuous species were recorded more often than rare and inconspicuous species. Nonetheless, the large quantity of plant observations still allows a reconstruction of familiar biogeographical patterns. For their study, the researchers had access to more than 900,000 data entries created within the first two years after the app had been launched.

Automated species recognition bears great potential

The study shows the potential of this kind of data collection for biodiversity and environmental research, which could soon be integrated into strategies for long-term inventories. "We are convinced that automated species recognition bears much greater potential than previously thought and that it can contribute to a rapid detection of biodiversity changes," said first author Miguel Mahecha, professor at UL and iDiv Member. In the future, a growing number of users of apps like Flora Incognita could help detect and analyse ecosystem changes worldwide in real time.

The Flora Incognita mobile app was developed jointly by the research groups of Dr. Jana Wäldchen at MPI-BGC and the group of Professor Patrick Mäder at TU Ilmenau. It is the first plant identification app in Germany using deep neural networks (deep learning) in this context. Fed with thousands of plant images that have been identified by experts, it can already identify over 4,800 plant species.
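
The article does not describe Flora Incognita's internals, but the inference step of any deep-learning identifier of this kind looks broadly like the sketch below; the model file, label list, and image path are hypothetical placeholders, not the app's actual artifacts:

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical artifacts: a trained CNN exported with TorchScript and a
# text file mapping class indices to species names.
model = torch.jit.load("plant_classifier.pt").eval()
species = open("species_labels.txt").read().splitlines()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("unknown_plant.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.argmax().item()
print(f"best guess: {species[top]} ({probs[top].item():.1%} confidence)")
```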

Read more at Science Daily

Study shows how our brains sync hearing with vision

Every high-school physics student learns that sound and light travel at very different speeds. If the brain did not account for this difference, it would be much harder for us to tell where sounds came from, and how they are related to what we see.

Instead, the brain allows us to make better sense of our world by playing tricks, so that a visual stimulus and a sound created at the same time are perceived as synchronous, even though they reach the brain and are processed by neural circuits at different speeds.

One of the brain's tricks is temporal recalibration: altering our sense of time to synchronize our joint perception of sound and vision. A new study finds that recalibration depends on brain signals constantly adapting to our environment to sample, order and associate competing sensory inputs.

Scientists at The Neuro (Montreal Neurological Institute-Hospital) of McGill University recruited volunteers to view short flashes of light paired with sounds with a variety of delays and asked them to report whether they thought both happened at the same time. The participants performed this task inside a magnetoencephalography (MEG) machine, which recorded and imaged their brain waves with millisecond precision. The audio-visual pairs of stimuli changed each time, with sounds and visual objects presented closer or farther apart in time, and with random orders of presentation.

The researchers found that the volunteers' perception of simultaneity between the audio and visual stimuli in a pair was strongly affected by the perceived simultaneity of the stimulus pair before it. For example, if a sound and a visual stimulus presented milliseconds apart are perceived as asynchronous, one is much more likely to report the next audio-visual stimulus pair as synchronous, even when it is not. This form of active temporal recalibration is one of the tools used by the brain to avoid a distorted or disconnected perception of reality, and to help establish causal relations between the images and sounds we perceive, despite different physical velocities and neural processing speeds.

The MEG signals revealed that this brain feat was enabled by a unique interaction between fast and slow brain waves in auditory and visual brain regions. Slower brain rhythms pace the temporal fluctuations of excitability in brain circuits. The higher the excitability, the more easily an external input is registered and processed by receiving neural networks.

Based on this, the researchers propose a new model for understanding recalibration, whereby faster oscillations riding on top of slower fluctuations create discrete and ordered time slots to register the order of sensory inputs. For example, when an audio signal reaches the first available time slot in the auditory cortex and a visual input does the same in the visual cortex, the pair is perceived as simultaneous. For this to happen, the brain needs to position the visual time slots a bit later than the auditory ones to account for the slower physiological transduction of visual signals. The researchers found that this relative delay between neural auditory and visual time slots is a dynamic process that constantly adapts to each participant's recent exposure to audiovisual perception.
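
The proposed scheme can be caricatured in a few lines of code. Every number below (slot width, transduction delays) is an illustrative assumption chosen to make the idea concrete, not a value measured in the study:

```python
# Toy model of the proposed 'time slot' scheme. All numbers are
# illustrative assumptions, not measurements from the study.

SLOT_MS = 40.0        # width of one registration slot (one fast cycle)
AUDIO_LAG_MS = 10.0   # assumed auditory transduction delay to cortex
VISUAL_LAG_MS = 60.0  # assumed (slower) visual transduction delay

def slot(onset_ms, lag_ms, grid_offset_ms):
    """Slot an input lands in: cortical arrival time measured against a
    slot grid that is shifted by grid_offset_ms for that modality."""
    return int((onset_ms + lag_ms - grid_offset_ms) // SLOT_MS)

def perceived_simultaneous(audio_onset_ms, visual_onset_ms):
    # The brain positions the visual slot grid later than the auditory one
    # by the difference in transduction delays, so physically simultaneous
    # events land in matching slots despite arriving at different times.
    a = slot(audio_onset_ms, AUDIO_LAG_MS, grid_offset_ms=0.0)
    v = slot(visual_onset_ms, VISUAL_LAG_MS,
             grid_offset_ms=VISUAL_LAG_MS - AUDIO_LAG_MS)
    return a == v

print(perceived_simultaneous(100.0, 100.0))  # True: same physical onset
print(perceived_simultaneous(100.0, 160.0))  # False: visual event 60 ms late
```

In the study's full model the grid offset is not fixed but drifts with recent experience, which is what produces the recalibration effect described above.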

Their data confirmed the new dynamic integration model by showing how these subtle tens-of-millisecond delays of fast brain oscillations can be measured in every individual and explain their respective judgments of perceived simultaneity.

In autism and speech disorders, the processing of the senses, especially hearing, is altered. In schizophrenia as well, patients can be affected by perceived distortions of sensory inputs. The neurophysiological mechanisms of temporal recalibration described in this study may be altered in these disorders, and their discovery may reveal new research goals to improve these deficits.

Read more at Science Daily

What does your voice say about you?

Everyone has at some point been charmed by the sound of a person's voice: but can we believe our ears? What can a voice really reveal about our character? Now an international research team led by the University of Göttingen has shown that people seem to express at least some aspects of their personality with their voice. The researchers discovered that a lower-pitched voice is associated with individuals who are more dominant, extroverted and higher in sociosexuality (more interested in casual sex). The findings were true for women as well as for men. The results were published in the Journal of Research in Personality.

The researchers analysed data from over 2,000 participants and included information from four different countries. Participants filled in questionnaires about themselves to measure personality and provided recordings of their voice so that the pitch could be measured using a computer programme. This is the first time that an objective digital measure of voice pitch has been used in a study of this kind, rather than subjective ratings of how "high" or "deep" a voice might sound. The researchers measured "sociosexuality" by collecting responses about sexual behaviour, attitude and desire. They also collected data to provide ratings of dominance and other character traits such as neuroticism, extraversion, openness to experience, agreeableness and conscientiousness. The number of participants helps to confirm the robustness of the findings: the study is the largest of its kind to date.
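
The article does not name the programme used; as a generic illustration of how voice pitch can be measured objectively, here is a minimal autocorrelation-based fundamental-frequency sketch (the file name is a placeholder, and this is not the study's actual tool chain):

```python
import numpy as np
from scipy.io import wavfile

def estimate_pitch_hz(samples, rate, fmin=60.0, fmax=400.0):
    """Crude F0 estimate: find the strongest periodicity (autocorrelation
    peak) within a plausible range of human voice pitch."""
    x = samples.astype(float)[:rate]       # analyze the first second only
    x -= x.mean()                          # remove DC offset
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    lo, hi = int(rate / fmax), int(rate / fmin)        # plausible lag range
    lag = lo + np.argmax(ac[lo:hi])        # lag of strongest periodicity
    return rate / lag

rate, samples = wavfile.read("voice_sample.wav")  # hypothetical recording
if samples.ndim > 1:                              # mix stereo to mono
    samples = samples.mean(axis=1)
print(f"estimated pitch: {estimate_pitch_hz(samples, rate):.1f} Hz")
```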

The researchers found that people with lower-pitched voices were more dominant, extroverted and higher in sociosexuality (e.g., more interested in sex outside a relationship). However, the relationship between voice pitch and other personality traits (such as agreeableness, neuroticism, conscientiousness or openness) seems less clear. It is possible that these traits are not expressed in the pitch of voices. The researchers found no difference in these patterns between men and women.

"People's voices can make a huge and immediate impression on us," explains Dr Julia Stern, at the University of Göttingen's Biological Personality Psychology Group. "Even if we just hear someone's voice without any visual clues -- for instance on the phone -- we know pretty soon whether we're talking to a man, a woman, a child or an older person. We can pick up on whether the person sounds interested, friendly, sad, nervous, or whether they have an attractive voice. We also start to make assumptions about trust and dominance." This led Stern to question whether these assumptions were justified. "The first step was to investigate whether voices are, indeed, related to people's personality. And our results suggest that people do seem to express some aspects of their personality with their voice."

Read more at Science Daily

May 11, 2021

Volcanoes on Mars could be active, raising possibility Mars was recently habitable

Evidence of recent volcanic activity on Mars shows that eruptions could have taken place in the past 50,000 years, according to a new study by researchers at the University of Arizona's Lunar and Planetary Laboratory and the Planetary Science Institute.

Most volcanism on the Red Planet occurred between 3 and 4 billion years ago, with smaller eruptions in isolated locations continuing perhaps as recently as 3 million years ago. But, until now, there was no evidence to indicate Mars could still be volcanically active.

Using data from satellites orbiting Mars, researchers discovered a previously unknown volcanic deposit. They detail their findings in the paper "Evidence for geologically recent explosive volcanism in Elysium Planitia, Mars," published in the journal Icarus.

"This may be the youngest volcanic deposit yet documented on Mars," said lead study author David Horvath, who did the research as a postdoctoral researcher at UArizona and is now a research scientist at the Planetary Science Institute. "If we were to compress Mars' geologic history into a single day, this would have occurred in the very last second."

The volcanic eruption produced an 8-mile-wide, smooth, dark deposit surrounding a 20-mile-long volcanic fissure.

"When we first noticed this deposit, we knew it was something special," said study co-author Jeff Andrews-Hanna, an associate professor at the UArizona Lunar and Planetary Laboratory and the senior author on the study. "The deposit was unlike anything else found in the region, or indeed on all of Mars, and more closely resembled features created by older volcanic eruptions on the Moon and Mercury."

Further investigation showed that the properties, composition and distribution of material match what would be expected for a pyroclastic eruption -- an explosive eruption of magma driven by expanding gases, not unlike the opening of a shaken can of soda.

The majority of volcanism in the Elysium Planitia region and elsewhere on Mars consists of lava flowing across the surface, similar to recent eruptions in Iceland being studied by co-author Christopher Hamilton, a UArizona associate professor of lunar and planetary sciences. Although there are numerous examples of explosive volcanism on Mars, they occurred long ago. However, this deposit appears to be different.

"This feature overlies the surrounding lava flows and appears to be a relatively fresh and thin deposit of ash and rock, representing a different style of eruption than previously identified pyroclastic features," Horvath said. "This eruption could have spewed ash as high as 6 miles into Mars' atmosphere. It is possible that these sorts of deposits were more common but have been eroded or buried."

The site of the recent eruption is about 1,000 miles (1,600 kilometers) from NASA's InSight lander, which has been studying seismic activity on Mars since 2018. Two Marsquakes, the Martian equivalent of earthquakes, were found to originate in the region around the Cerberus Fossae, and recent work has suggested the possibility that these could be due to the movement of magma deep underground.

"The young age of this deposit absolutely raises the possibility that there could still be volcanic activity on Mars, and it is intriguing that recent Marsquakes detected by the InSight mission are sourced from the Cerberus Fossae," Horvath said. In fact, the team of researchers predicted this to be a likely location for Marsquakes several months before NASA's InSight lander touched down on Mars.

A volcanic deposit such as this one also raises the possibility for habitable conditions below the surface of Mars in recent history, Horvath said.

"The interaction of ascending magma and the icy substrate of this region could have provided favorable conditions for microbial life fairly recently and raises the possibility of extant life in this region," he said.

Similar volcanic fissures in this region were the source of enormous floods, perhaps as recently as 20 million years ago, as groundwater erupted out onto the surface.

Andrews-Hanna's research group continues to investigate the causes of the eruption. Pranabendu Moitra, a research scientist in the UArizona Department of Geosciences, has been probing the mechanism behind the eruption.

An expert in similar explosive eruptions on Earth, Moitra developed models to look at the possible cause of the Martian eruption. In a forthcoming paper in the journal Earth and Planetary Science Letters, he suggests that the explosion either could have been a result of gases already present in the Martian magma, or it could have happened when the magma came into contact with Martian permafrost.

"The ice melts to water, mixes with the magma and vaporizes, forcing a violent explosion of the mixture," Moitra said. "When water mixes with magma, it's like pouring gasoline on a fire."

He also points out that the youngest volcanic eruption on Mars happened only 6 miles (10 kilometers) from the youngest large-impact crater on the planet -- a 6-mile-wide crater named Zunil.

"The ages of the eruption and the impact are indistinguishable, which raises the possibility, however speculative, that the impact actually triggered the volcanic eruption," Moitra said.

Several studies have found evidence that large quakes on Earth can cause magma stored beneath the surface to erupt. The impact that formed the Zunil crater on Mars would have shaken the Red Planet just like an earthquake, Moitra explained.

While the more dramatic giant volcanoes elsewhere on Mars -- such as Olympus Mons, the tallest mountain in the solar system -- tell a story of the planet's ancient dynamics, the current hotspot of Martian activity seems to be in the relatively featureless plains of the planet's Elysium region.

Andrews-Hanna said it's remarkable that one region hosts the epicenters of present-day marsquakes, the most recent floods of water, the most recent lava flows, and now an even more recent explosive volcanic eruption.

"This may be the most recent volcanic eruption on Mars," he said, "but I think we can rest assured that it won't be the last."

Read more at Science Daily

How planets form controls elements essential for life

The prospects for life on a given planet depend not only on where it forms but also how, according to Rice University scientists.

Planets like Earth that orbit within a solar system's Goldilocks zone, with conditions supporting liquid water and a rich atmosphere, are more likely to harbor life. As it turns out, how that planet came together also determines whether it captured and retained certain volatile elements and compounds, including nitrogen, carbon and water, that give rise to life.

In a study published in Nature Geoscience, Rice graduate student and lead author Damanveer Grewal and Professor Rajdeep Dasgupta show that the competition between the time it takes for material to accrete into a protoplanet and the time the protoplanet takes to separate into its distinct layers -- a metallic core, a shell of silicate mantle and an atmospheric envelope, in a process called planetary differentiation -- is critical in determining what volatile elements the rocky planet retains.

Using nitrogen as a proxy for volatiles, the researchers showed that most of the nitrogen escapes into the atmosphere of protoplanets during differentiation. This nitrogen is subsequently lost to space as the protoplanet either cools down or collides with other protoplanets or cosmic bodies during the next stage of its growth.

This process depletes nitrogen in the atmosphere and mantle of rocky planets, but if the metallic core retains enough, it could still be a significant source of nitrogen during the formation of Earth-like planets.

Dasgupta's high-pressure lab at Rice captured protoplanetary differentiation in action to show the affinity of nitrogen toward metallic cores.

"We simulated high pressure-temperature conditions by subjecting a mixture of nitrogen-bearing metal and silicate powders to nearly 30,000 times the atmospheric pressure and heating them beyond their melting points," Grewal said. "Small metallic blobs embedded in the silicate glasses of the recovered samples were the respective analogs of protoplanetary cores and mantles."

Using this experimental data, the researchers modeled the thermodynamic relationships to show how nitrogen distributes between the atmosphere, molten silicate and core.

"We realized that fractionation of nitrogen between all these reservoirs is very sensitive to the size of the body," Grewal said. "Using this idea, we could calculate how nitrogen would have separated between different reservoirs of protoplanetary bodies through time to finally build a habitable planet like Earth."

Their theory suggests that feedstock materials for Earth grew quickly to around moon- and Mars-sized planetary embryos before they completed the process of differentiating into the familiar metal-silicate-gas vapor arrangement.

In general, they estimate the embryos formed within 1-2 million years of the beginning of the solar system, far sooner than the time it took for them to completely differentiate. If the rate of differentiation was faster than the rate of accretion for these embryos, the rocky planets forming from them could not have accreted enough nitrogen, and likely other volatiles, critical to developing conditions that support life.

"Our calculations show that forming an Earth-size planet via planetary embryos that grew extremely quickly before undergoing metal-silicate differentiation sets a unique pathway to satisfy Earth's nitrogen budget," said Dasgupta, the principal investigator of CLEVER Planets, a NASA-funded collaborative project exploring how life-essential elements might have come together on rocky planets in our solar system or on distant, rocky exoplanets.

"This work shows there's much greater affinity of nitrogen toward core-forming metallic liquid than previously thought," he said.

The study follows earlier works, one showing how the impact by a moon-forming body could have given Earth much of its volatile content, and another suggesting that the planet gained more of its nitrogen from local sources in the solar system than once believed.

In the latter study, Grewal said, "We showed that protoplanets growing in both inner and outer regions of the solar system accreted nitrogen, and Earth sourced its nitrogen by accreting protoplanets from both of these regions. However, it was unknown as to how the nitrogen budget of Earth was established."

Read more at Science Daily

The Aqueduct of Constantinople: Managing the longest water channel of the ancient world

Aqueducts are very impressive examples of the art of construction in the Roman Empire. Even today, they still provide us with new insights into aesthetic, practical, and technical aspects of construction and use. Scientists at Johannes Gutenberg University Mainz (JGU) investigated the longest aqueduct of the time, the 426-kilometer-long Aqueduct of Valens supplying Constantinople, and revealed new insights into how this structure was maintained over the centuries. It appears that the channels had been cleaned of carbonate deposits just a few decades before the site was abandoned.

The late Roman aqueduct provided water for the population of Constantinople

The Roman Empire was ahead of its time in many ways, with a strong commitment to building infrastructure for its citizens which we still find fascinating today. This includes architecturally inspiring temples, theaters, and amphitheaters, but also a dense road network and impressive harbors and mines. "However, the most ground-breaking technical achievement of the Roman Empire lies in its water management, particularly its long-distance aqueducts that delivered water to cities, baths, and mines," said Dr. Gül Sürmelihindi from the Geoarchaeology group at Mainz University. Aqueducts were not a Roman invention, but in Roman hands these long-distance aqueducts developed further and spread extensively throughout one of the largest empires in history.

Almost every city in the Roman Empire had an ample supply of fresh running water, in some cases with an even larger volume than is available today. "These aqueducts are mostly known for their impressive bridges, such as the Pont du Gard in southern France, which are still standing today after two millennia. But they are most impressive because of the way problems in their construction were solved, which would be daunting even for modern engineers," said JGU Professor Cees Passchier. More than 2,000 long-distance Roman aqueducts are known to date, and many more are awaiting discovery. The study undertaken by Dr. Gül Sürmelihindi and her research team focuses on the most spectacular late-Roman aqueduct, the water supply lines of Constantinople, now Istanbul in present-day Turkey.

Carbonate deposits provide insights into Byzantine water management

In AD 324, the Roman Emperor Constantine the Great made Constantinople the new capital of the Roman Empire. Although the city lies at the geopolitically important crossroads of land routes and seaways, fresh water supply was a problem. A new aqueduct was therefore built to supply Constantinople from springs 60 kilometers to the west. As the city grew, this system was expanded in the 5th century to springs as far as 120 kilometers from the city in a straight line. This gave the aqueduct a total length of at least 426 kilometers, making it the longest of the ancient world. The aqueduct consisted of vaulted masonry channels large enough to walk through, built of stone and concrete, 90 large bridges, and many tunnels up to 5 kilometers long.

Sürmelihindi and her team studied carbonate deposits from this aqueduct, i.e., the limescale that formed in the running water, which can be used to obtain important information about water management and the palaeoenvironment at that time. The researchers found that the entire aqueduct system only contained thin carbonate deposits, representing about 27 years of use. From the annals of the city, however, it is known that the aqueduct system worked for more than 700 years, until at least the 12th century. "This means the entire aqueduct must have been maintained and cleaned of deposits during the Byzantine Empire, even shortly before it ceased working," explained Sürmelihindi. Carbonate deposits can block the entire water supply and have to be removed from time to time.

Double construction over 50 kilometers was likely built for maintenance

Although the aqueduct is late Roman in origin, the carbonate found in the channel is from the Byzantine Middle Ages. This made the researchers think about possible cleaning and maintenance strategies -- because cleaning and repairing a channel of 426 kilometers implies that it cannot be used for weeks or months, while the city population depends on its water supply. They then found that 50 kilometers of the central part of the water system is constructed double, with one aqueduct channel above the other, crossing on two-story bridges. "It is very likely that this system was set up to allow for cleaning and maintenance operations," said Passchier. "It would have been a costly but practical solution."

Read more at Science Daily

Sex cells in parasites are doing their own thing

Researchers at the University of Bristol have discovered how microbes responsible for human African sleeping sickness produce sex cells.

In these single-celled parasites, known as trypanosomes, each reproductive cell splits off in turn from the parental germline cell, which is responsible for passing on genes. Conventional germline cells divide twice to produce all four sex cells -- or gametes -- simultaneously. In humans, four sperm cells are produced from a single germline cell. So, these strange parasite cells are doing their own thing rather than sticking to the biology rulebook.

Trypanosome cell biology has already revealed several curious features. They have two unique intracellular structures -- the kinetoplast, a network of circular DNA, and the glycosome, a membrane-enclosed organelle that contains the glycolytic enzymes. They don't follow the central dogma that DNA is faithfully transcribed into RNA, but will go back and edit some of the RNA transcripts after they've been made.

Professor Wendy Gibson of the University of Bristol's School of Biological Sciences led the study. She said: "We've got used to trypanosomes doing things their own way, but of course what we think of as normal cell biology is based on very few so-called model organisms like yeast and mice. There's a whole world of weird and wonderful single-celled organisms -- protozoa -- out there that we don't know much about! Trypanosomes have got more attention because they're such important pathogens -- both of humans and their livestock."

Biologists think that sexual reproduction evolved very early on, after the first complex cells appeared a couple of billion years ago. The sex cells are produced by a special form of cell division called meiosis that reduces the number of chromosomes by half, so that gametes have only one complete set of chromosomes instead of two. The chromosome sets from two gametes combine during sexual reproduction, producing new combinations of genes in the offspring. In the case of disease-causing microbes like the trypanosome, sex can potentially lead to a lot of harmful genes being combined in one strain. Thus, research on sexual reproduction helps scientists understand how new strains of disease-causing microbes arise and how characteristics such as drug resistance get spread between different strains.

From Science Daily

New material to treat wounds can protect against resistant bacteria

Researchers at Chalmers University of Technology, Sweden, have developed a new material that prevents infections in wounds -- a specially designed hydrogel that works against all types of bacteria, including antibiotic-resistant ones. The new material offers great hope for combating a growing global problem.

The World Health Organization describes antibiotic-resistant bacteria as one of the greatest threats to global health. To deal with the problem, there needs to be a shift in the way we use antibiotics, and new, sustainable medical technologies must be developed.

"After testing our new hydrogel on different types of bacteria, we observed a high level of effectiveness, including against those which have become resistant to antibiotics," says Martin Andersson, research leader for the study and Professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.

Research and development of the material has been ongoing for many years at Martin Andersson's group at Chalmers, growing in scope along the way, with a particular focus on the possibilities for wound care. Now, the important results are published as a scientific article in the journal ACS Biomaterials Science & Engineering.

The main purpose of the studies so far has been to explore new medical technology solutions to help reduce the use of systemic antibiotics. Resistant bacteria cause what are referred to as hospital-acquired infections -- life-threatening conditions that are increasing in incidence worldwide.

Mimicking the natural immune system

The active substance in the new bactericidal material consists of antimicrobial peptides, small proteins which are found naturally in our immune system.

"With these types of peptides, there is a very low risk for bacteria to develop resistance against them, since they only affect the outermost membrane of the bacteria. That is perhaps the foremost reason why they are so interesting to work with," says Martin Andersson.

Researchers have long tried to find ways to use these peptides in medical devices, but so far without much success. The problem is that they break down quickly when they come into contact with bodily fluids such as blood. The current study describes how the researchers managed to overcome the problem through the development of a nanostructured hydrogel, into which the peptides are permanently bound, creating a protective environment.

"The material is very promising. It is harmless to the body's own cells and gentle on the skin. In our measurements, the protective effect of the hydrogel on the antimicrobial peptides is clear -- the peptides degrade much slower when they are bound to it," says Edvin Blomstrand, doctoral student at the Department of Chemistry and Chemical Engineering at Chalmers, and one of the main authors of the article.

"We expected good results, but we were really positively surprised at quite how effective the material has proven," adds Martin Andersson.

According to the researchers, this new material is the first medical device to make successful use of antimicrobial peptides in a clinically and commercially viable manner. There are many varied and promising opportunities for clinical application.

Startup company Amferia takes the research from lab to market

In recent years, foundational research into the antimicrobial peptide hydrogel has run in parallel with commercial development of the innovation through the spin-off company Amferia AB.

The company was founded in 2018 by Martin Andersson together with Saba Atefyekta and Anand Kumar Rajasekharan, who both defended their dissertations at Chalmers' Department of Chemistry and Chemical Engineering.

The material and the idea, which is currently being developed as an antibacterial wound patch, have generated interest around the world, attracting significant investment and receiving several awards. The company is working intensively to get the material to market so that it can benefit wider society.

Before the new material can benefit hospitals and patients, clinical studies are needed, which are ongoing. A CE marking of the material is expected to be completed in 2022. Furthermore, the wound patch version of the new material is undergoing trials in veterinary care, for treating pets. The company Amferia AB is already collaborating with a number of veterinary clinics around Europe where the hydrogel is now being tested.

"Amferia has recently entered into a strategic partnership with Sweden's largest distributor of premium medical & diagnostic devices to jointly launch these wound care products for the Swedish veterinary market during 2021" says Martin Andersson.

More about antimicrobial peptides and the new material

The beneficial properties of antimicrobial peptides have been known for some decades, and thousands of different varieties occurring in the natural immune systems of humans, animals and plants have been discovered. Researchers have long tried to mimic and use their natural function to prevent and treat infections without having to use traditional antibiotics. However, because the peptides are broken down as soon as they come in contact with blood or other body fluids, successful clinical usage has proved elusive. The researchers knew that smart new solutions were needed to protect the peptide from degradation. The new material in the study has been shown to work very well, allowing the peptides to be applied directly to wounds and injuries on the body, with the effect of both preventing and treating infection. The material is also non-toxic, so it can be used directly on the skin. The potential of this new material can also be seen in the flexibility that it offers for different types of products.

Read more at Science Daily

May 10, 2021

In the emptiness of space, Voyager I detects plasma 'hum'

Voyager 1 -- one of two sibling NASA spacecraft launched 44 years ago and now the most distant human-made object in space -- still works and zooms toward infinity.

The craft has long since zipped past the edge of the solar system through the heliopause -- the solar system's border with interstellar space -- into the interstellar medium. Now, its instruments have detected the constant drone of interstellar gas (plasma waves), according to Cornell University-led research published in Nature Astronomy.

Examining data slowly sent back from more than 14 billion miles away, Stella Koch Ocker, a Cornell doctoral student in astronomy, has uncovered the emission. "It's very faint and monotone, because it is in a narrow frequency bandwidth," Ocker said. "We're detecting the faint, persistent hum of interstellar gas."

This work allows scientists to understand how the interstellar medium interacts with the solar wind, Ocker said, and how the protective bubble of the solar system's heliosphere is shaped and modified by the interstellar environment.

Launched in September 1977, the Voyager 1 spacecraft flew by Jupiter in 1979 and then Saturn in late 1980. Travelling at about 38,000 mph, Voyager 1 crossed the heliopause in August 2012.

After entering interstellar space, the spacecraft's Plasma Wave System detected perturbations in the gas. But, in between those eruptions -- caused by our own roiling sun -- researchers have uncovered a steady, persistent signature produced by the tenuous near-vacuum of space.

"The interstellar medium is like a quiet or gentle rain," said senior author James Cordes, the George Feldstein Professor of Astronomy. "In the case of a solar outburst, it's like detecting a lightning burst in a thunderstorm and then it's back to a gentle rain."

Ocker believes there is more low-level activity in the interstellar gas than scientists had previously thought, which allows researchers to track the spatial distribution of plasma -- that is, when it's not being perturbed by solar flares.

Cornell research scientist Shami Chatterjee explained how continuous tracking of the density of interstellar space is important. "We've never had a chance to evaluate it. Now we know we don't need a fortuitous event related to the sun to measure interstellar plasma," Chatterjee said. "Regardless of what the sun is doing, Voyager is sending back detail. The craft is saying, 'Here's the density I'm swimming through right now. And here it is now. And here it is now. And here it is now.' Voyager is quite distant and will be doing this continuously."

Voyager 1 left Earth carrying a Golden Record created by a committee chaired by the late Cornell professor Carl Sagan, as well as mid-1970s technology. Sending a signal to Earth takes 22 watts, according to NASA's Jet Propulsion Laboratory. The craft has almost 70 kilobytes of computer memory and -- at the beginning of the mission -- a data rate of 21 kilobits per second.
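
For a sense of scale, a quick back-of-the-envelope calculation -- a sketch using the "more than 14 billion miles" figure quoted above -- shows how long each 22-watt transmission travels before reaching Earth:

    # One-way light travel time for a Voyager 1 transmission,
    # assuming the ~14 billion mile distance quoted above.
    MILES_TO_METERS = 1609.344
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    distance_m = 14e9 * MILES_TO_METERS           # ~2.25e13 meters
    travel_time_s = distance_m / SPEED_OF_LIGHT   # ~75,000 seconds
    print(f"One-way signal time: {travel_time_s / 3600:.1f} hours")  # ~20.9 hours

In other words, each of the density readings Ocker analyzed spends roughly 21 hours in transit before anyone on Earth sees it.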

Read more at Science Daily

New vaccine blocks COVID-19 and variants, plus other coronaviruses

A potential new vaccine developed by members of the Duke Human Vaccine Institute has proven effective in protecting monkeys and mice from a variety of coronavirus infections -- including SARS-CoV-2 as well as the original SARS-CoV-1 and related bat coronaviruses that could potentially cause the next pandemic.

The new vaccine, called a pan-coronavirus vaccine, triggers neutralizing antibodies via a nanoparticle. The nanoparticle is composed of the part of the coronavirus that allows it to bind to the body's cell receptors, formulated with a chemical booster called an adjuvant. Its success in primates makes the results highly relevant to humans.

The findings appear Monday, May 10, in the journal Nature.

"We began this work last spring with the understanding that, like all viruses, mutations would occur in the SARS-CoV-2 virus, which causes COVID-19," said senior author Barton F. Haynes, M.D., director of the Duke Human Vaccine Institute (DHVI). "The mRNA vaccines were already under development, so we were looking for ways to sustain their efficacy once those variants appeared.

"This approach not only provided protection against SARS-CoV-2, but the antibodies induced by the vaccine also neutralized variants of concern that originated in the United Kingdom, South Africa and Brazil," Haynes said. "And the induced antibodies reacted with quite a large panel of coronaviruses."

Haynes and colleagues, including lead author Kevin Saunders, Ph.D., director of research at DHVI, built on earlier studies involving SARS, the respiratory illness caused by a coronavirus called SARS-CoV-1. They found that a person who had been infected with SARS developed antibodies capable of neutralizing multiple coronaviruses, suggesting that a pan-coronavirus vaccine might be possible.

The Achilles heel of the coronaviruses is their receptor-binding domain, located on the spike that links the viruses to receptors in human cells. While this binding site enables the virus to enter the body and cause infection, it can also be targeted by antibodies.

The research team identified one particular receptor-binding domain site that is present on SARS-CoV-2, its circulating variants and SARS-related bat viruses that makes them highly vulnerable to cross-neutralizing antibodies.

The team then designed a nanoparticle displaying this vulnerable spot. The nanoparticle is combined with a small molecule adjuvant -- specifically, the toll-like receptor 7 and 8 agonist called 3M-052, formulated with Alum, which was developed by 3M and the Infectious Disease Research Institute. The adjuvant boosts the body's immune response.

In tests in monkeys, the nanoparticle vaccine blocked 100% of COVID-19 infections. The new vaccine also elicited significantly higher levels of neutralizing antibodies in the animals than current vaccine platforms or natural infection in humans.

"Basically what we've done is take multiple copies of a small part of the coronavirus to make the body's immune system respond to it in a heightened way," Saunders said. "We found that not only did that increase the body's ability to inhibit the virus from causing infection, but it also targets this cross-reactive site of vulnerability on the spike protein more frequently. We think that's why this vaccine is effective against SARS-CoV-1, SARS-CoV-2 and at least four of its common variants, plus additional animal coronaviruses."

"There have been three coronavirus epidemics in the past 20 years, so there is a need to develop effective vaccines that can target these pathogens prior to the next pandemic," Haynes said. "This work represents a platform that could prevent, rapidly temper, or extinguish a pandemic."

Read more at Science Daily

Universal equation for explosive phenomena

Climate change, a pandemic or the coordinated activity of neurons in the brain: In all of these examples, a transition takes place at a certain point from the base state to a new state. Researchers at the Technical University of Munich (TUM) have discovered a universal mathematical structure at these so-called tipping points. It creates the basis for a better understanding of the behavior of networked systems.

It is an essential question for scientists in every field: How can we predict and influence changes in a networked system? "In biology, one example is the modelling of coordinated neuron activity," says Christian Kühn, professor of multiscale and stochastic dynamics at TUM. Models of this kind are also used in other disciplines, for example when studying the spread of diseases or climate change.

All critical changes in networked systems have one thing in common: a tipping point where the system makes a transition from a base state to a new state. This may be a smooth shift, where the system can easily return to the base state. Or it can be a sharp, difficult-to-reverse transition where the system state can change abruptly or "explosively." Transitions of this kind also occur in climate change, for example with the melting of the polar ice caps. In many cases, the transitions result from the variation of a single parameter, such as the rise in concentrations of greenhouse gases behind climate change.

Similar structures in many models

In some cases -- such as climate change -- a sharp tipping point would have extremely negative effects, while in others it would be desirable. Consequently, researchers have used mathematical models to investigate how the type of transition is influenced by the introduction of new parameters or conditions. "For example, you could vary another parameter, perhaps related to how people change their behavior in a pandemic. Or you might adjust an input in a neural system," says Kühn. "In these examples and many other cases, we have seen that we can go from a continuous to a discontinuous transition or vice versa."

Kühn and Dr. Christian Bick of Vrije Universiteit Amsterdam studied existing models from various disciplines that were created to understand certain systems. "We found it remarkable that so many mathematical structures related to the tipping point looked very similar in those models," says Bick. "By reducing the problem to the most basic possible equation, we were able to identify a universal mechanism that decides on the type of tipping point and is valid for the greatest possible number of models."
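
To make the idea concrete, here is a standard textbook normal form of the kind such reductions produce -- offered as a hedged sketch, not the specific equation from the paper. A single state variable x tips as the primary parameter \mu is varied, while a second parameter a decides the type of transition:

    \dot{x} = \mu x + a\,x^3 - x^5

For a < 0, the new state grows continuously from zero as \mu crosses the tipping point, and the system can smoothly return. For a > 0, the stable state vanishes abruptly at the tipping point and the system jumps "explosively" to a distant state, with hysteresis making the transition hard to reverse. In this picture, varying one extra parameter -- behavioral change in a pandemic, say, or an input to a neural system -- switches the transition between continuous and discontinuous.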

Read more at Science Daily

Reaching your life goals as a single-celled organism

How is it possible to move in the desired direction without a brain or nervous system? Single-celled organisms apparently manage this feat without any problems: for example, they can swim towards food with the help of small flagellar tails.

How these extremely simply built creatures manage to do this was not entirely clear until now. A research team at TU Wien (Vienna) has now been able to simulate the process on a computer: they calculated the physical interaction between a very simple model organism and its environment -- a liquid with a non-uniform chemical composition, containing unevenly distributed food sources.

The simulated organism was equipped with the ability to process information about food in its environment in a very simple way. With the help of a machine learning algorithm, the information processing of the virtual being was then modified and optimised in many evolutionary steps. The result was a computer organism that moves in its search for food in a very similar way to its biological counterparts.

Chemotaxis: Always going where the chemistry is right

"At first glance, it is surprising that such a simple model can solve such a difficult task," says Andras Zöttl, who led the research project, which was carried out in the "Theory of Soft Matter" group (led by Gerhard Kahl) at the Institute of Theoretical Physics at TU Wien. "Bacteria can use receptors to determine in which direction, for example, the oxygen or nutrient concentration is increasing, and this information then triggers a movement into the desired direction. This is called chemotaxis."

The behaviour of other, multicellular organisms can be explained by the interconnection of nerve cells. But a single-celled organism has no nerve cells -- in this case, only extremely simple processing steps are possible within the cell. Until now, it was not clear how such a low degree of complexity could be sufficient to connect simple sensory impressions -- for example from chemical sensors -- with targeted motor activity.

"To be able to explain this, you need a realistic, physical model for the movement of these unicellular organisms," says Andreas Zöttl. "We have chosen the simplest possible model that physically allows independent movement in a fluid in the first place. Our single-celled organism consists of three masses connected by simplified muscles. The question now arises: can these muscles be coordinated in such a way that the entire organism moves in the desired direction? And above all: can this process be realised in a simple way, or does it require complicated control?"

A small network of signals and commands

"Even if the unicellular organism does not have a network of nerve cells -- the logical steps that link its 'sensory impressions' with its movement can be described mathematically in a similar way to a neuronal network," says Benedikt Hartl, who used his expertise in artificial intelligence to implement the model on the computer. In the single-celled organism, too, there are logical connections between different elements of the cell. Chemical signals are triggered and ultimately lead to a certain movement of the organism.

"These elements and the way they influence each other were simulated on the computer and adjusted with a genetic algorithm: Generation after generation, the movement strategy of the virtual unicellular organisms was changed slightly," reports Maximilian Hübl, who did many of the calculations on this topic as part of his Master's thesis. Those unicellular organisms that succeeded best in directing their movement to where the desired chemicals were located were allowed to "reproduce," while the less successful variants "died out." In this way, after many generations, a control network emerged -- very similar to biological evolution -- that allows a virtual unicellular organism to convert chemical perceptions into targeted movement in an extremely simple way and with very basic circuits.

Random wobbling movement -- but with a concrete goal

"You shouldn't think of it as a highly developed animal that consciously perceives something and then runs towards it," says Andreas Zöttl. "It's more like a random wobbling movement. But one that ultimately leads in the right direction on average. And that's exactly what you observe with single-celled organisms in nature."

Read more at Science Daily

May 9, 2021

Repurposing tabletop sensors to search for dark matter

Scientists are certain that dark matter exists. Yet, after more than 50 years of searching, they still have no direct evidence for the mysterious substance.

University of Delaware's Swati Singh is among a small group of researchers across the dark matter community who have begun to wonder whether they are looking for the right type of dark matter.

"What if dark matter is much lighter than what traditional particle physics experiments are looking for?" said Singh, an assistant professor of electrical and computer engineering at UD.

Now, Singh, Jack Manley, a UD doctoral student, and collaborators at the University of Arizona and Haverford College, have proposed a new way to look for the particles that might make up dark matter by repurposing existing tabletop sensor technology. The team recently reported their approach in a paper published in Physical Review Letters.

Co-authors on the paper include Dalziel Wilson, an assistant professor of optical sciences from Arizona, Mitul Dey Chowdhury, an Arizona doctoral student, and Daniel Grin, an assistant professor of physics at Haverford College.

No ordinary matter

Singh explained that if you add up all the things that emit light, such as stars, planets and interstellar gas, it only accounts for about 15% of the matter in the Universe. The other 85% is known as dark matter. It doesn't emit light, but researchers know it exists by its gravitational effects. They also know it isn't ordinary matter, such as gas, dust, stars, planets and us.

"It could be made up of black holes, or it could be made up of something trillions of times smaller than an electron, known as ultralight dark matter" said Singh, a quantum theorist known for her pioneering efforts to push forward mechanical dark matter detection.

One possibility is that dark matter is made up of dark photons, a type of dark matter that would exert a weak oscillating force on normal matter, causing a particle to move back and forth. However, since dark matter is everywhere, it exerts that force on everything, making it hard to measure this movement.

Singh and her collaborators said they think they can overcome this obstacle by using optomechanical accelerometers as sensors to detect and amplify this oscillation.

"If the force is material dependent, by using two objects composed of different materials the amount that they are forced will be different, meaning that you would be able to measure that difference in acceleration between the two materials," said Manley, the paper's lead author.

Wilson, a quantum experimentalist and one of the UD team's collaborators, likened an optomechanical accelerometer to a miniature tuning fork. "It's a vibrating device which, due to its small size, is very sensitive to perturbations from the environment," he said.

Now, the researchers have proposed an experiment using a membrane made of silicon nitride and a fixed beryllium mirror, bouncing light between the two surfaces. If the distance between the two materials changes, the reflected light would reveal that dark photons were present, because silicon nitride and beryllium have different material properties and so respond differently to the dark-photon force.

Collaboration was a key part of developing the experiment's design, according to Manley. He and Singh (theorists) worked with Wilson and Dey Chowdhury (experimentalists) on the theoretical calculations that went into the detailed blueprint for building their proposed tabletop accelerometer sensor. Meanwhile, Grin, a cosmologist, helped shed light on the particle physics aspects of ultralight dark matter, such as why it would be ultralight, why it might couple to materials differently and how it might be produced.

As a theorist, Manley said the opportunity to learn more about how devices work and how experimentalists build things to prove the theories that he and Singh develop has deepened his expertise while simultaneously widening his exposure to possible career paths.

A growing body of work

Importantly, this latest work builds on previously published research by the collaborating teams, reported last summer in Physical Review Letters. The paper, which included contributions from former UD graduate student Russell Stump, showed that several existing and near-term laboratory-scale devices are sensitive enough to detect, or rule out, possible particles that could be ultralight dark matter.

The research reported that certain types of ultralight dark matter would connect, or couple, with normal matter in a way that would cause a periodic change in the size of atoms. While small fluctuations in the size of a single atom may be difficult to notice, the effect is amplified in an object composed of many atoms, and further amplification can be achieved if that object is an acoustic resonator. The collaboration evaluated the performance of several resonators made of diverse materials ranging from superfluid helium to single-crystalline sapphire, and found these sensors can be used to detect that dark matter-induced strain signal.
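
The amplification step is standard driven-oscillator physics. As a hedged sketch, with symbols introduced here rather than taken from the paper: a periodic fractional size change (strain) h(t) = h_0 cos(\omega t) acting on a resonator of length L produces a displacement of order h_0 L away from resonance, but on resonance the response grows to roughly

    x_{peak} \approx Q \, h_0 \, L

where Q is the resonator's quality factor. Acoustic resonators made from materials like single-crystal sapphire can reach Q values of a million or more, and that resonant gain is what lifts an otherwise immeasurably small atomic-size oscillation toward detectability.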

Both projects were supported in part through Singh's funding from the National Science Foundation to explore emerging ideas around using state-of-the-art quantum devices to detect astrophysical phenomena with tabletop technologies that are smaller and less expensive than other methods.

Together, Singh said, these papers extend the body of work on what is known about possible ways to detect dark matter and suggest the possibility of a new generation of table-top experiments.

Singh and Manley are working with other experimental groups, too, to develop additional tabletop sensors to look for such dark matter or other weak astrophysical signals. They also are actively cultivating broader discussions on this topic within the dark matter and quantum sensors communities.

For example, Singh recently discussed transformational instrumentation advances in particle physics detectors at a virtual workshop organized by the Department of Energy's Coordinating Panel for Advanced Detectors (CPAD). She also presented these results at a special workshop during the American Physical Society's April meeting.

Read more at Science Daily

How we retrieve our knowledge about the world

To understand the world, we arrange individual objects, people, and events into different categories or concepts. Concepts such as 'the telephone' consist primarily of visible features, i.e. shape and color, and sounds, such as ringing. In addition, there are actions, i.e. how we use a telephone.

However, the concept of telephone does not only arise in the brain when we have a telephone in front of us. It also appears when the term is merely mentioned. If we read the word "telephone," our brain also calls up the concept of telephone. The same regions in the brain are activated that would be activated if we actually saw, heard, or used a telephone. The brain thus seems to simulate the characteristics of a telephone when its name alone is mentioned.

Until now, however, it was unclear whether the brain calls up the entire concept of a telephone in every situation, or only the individual features -- such as sounds or actions -- that the situation requires, and whether only the brain areas that process those features become active. So, when we think of a telephone, do we always think of all of its features, or only the part that is needed at the moment? Do we retrieve our sound knowledge when a phone rings, but our action knowledge when we use it?

Researchers at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have now found the answer: It depends on the situation. If, for example, the study participants thought of the sounds associated with the word "telephone," the corresponding auditory areas in the cerebral cortex were activated, which are also activated during actual hearing. When thinking about using a telephone, the somatomotor areas that underlie the involved movements came into action.

In addition to these sensory-dependent, so-called modality-specific areas, it was found that there are areas that process both sounds and actions together. One of these so-called multimodal areas is the left inferior parietal lobule (IPL). It became active when both features were requested.

The researchers also found that, in addition to features based on sensory impressions and actions, there must be other criteria by which we understand and classify terms. This became apparent when the participants were asked only to distinguish between real and invented words. Here, a region kicked in that was not active for actions or sounds: the so-called anterior temporal lobe (ATL). The ATL therefore seems to process concepts abstractly, or "amodally," completely detached from sensory impressions.

From these findings, the scientists finally developed a hierarchical model to reflect how conceptual knowledge is represented in the human brain. According to this model, information is passed on from one hierarchical level to the next and at the same time becomes more abstract with each step. On the lowest level, therefore, are the modality-specific areas that process individual sensory impressions or actions. These transmit their information to the multimodal regions such as the IPL, which process several linked perceptions simultaneously, such as sounds and actions. The amodal ATL, which represents features detached from sensory impressions, operates at the highest level. The more abstract a feature, the higher the level at which it is processed and the further it is removed from actual sensory impressions.

"We thus show that our concepts of things, people, and events are composed, on the one hand, of the sensory impressions and actions associated with them and, on the other hand, of abstract symbol-like features," explains Philipp Kuhnke, lead author of the study, which was published in the journal Cerebral Cortex. "Which features are activated depends strongly on the respective situation or task" added Kuhnke.

In a follow-up study in Cerebral Cortex, the researchers also found that modality-specific and multimodal regions work together in a situation-dependent manner when we retrieve conceptual features. The multimodal IPL interacted with auditory areas when retrieving sounds, and with somatomotor areas when retrieving actions. This showed that the interaction between modality-specific and multimodal regions determined the behavior of the study participants. The more these regions worked together, the more strongly the participants associated words with actions and sounds.

Read more at Science Daily