Sep 18, 2021

Ancient DNA rewrites early Japanese history -- modern-day populations have tripartite genetic origin

Ancient DNA extracted from human bones has rewritten early Japanese history by showing that modern-day populations in Japan have a tripartite genetic origin -- a finding that refines previously accepted views of a dual genomic ancestry.

Twelve newly sequenced ancient Japanese genomes show that modern-day populations do indeed carry the genetic signatures of early indigenous Jomon hunter-gatherer-fishers and immigrant Yayoi farmers -- but they also add a third genetic component linked to the Kofun peoples, whose culture spread in Japan between the 3rd and 7th centuries.

Rapid cultural transformations

The Japanese archipelago has been occupied by humans for at least 38,000 years, but Japan underwent rapid transformations only in the last 3,000 years: first from foraging to wet-rice farming, and then to a technologically advanced imperial state.

The previous, long-standing hypothesis suggested that mainland Japanese populations derive a dual ancestry from the indigenous Jomon hunter-gatherer-fishers, who inhabited the Japanese archipelago from around 16,000 to 3,000 years ago, and later Yayoi farmers, who migrated from the Asian continent and lived in Japan from around 900 BC to 300 AD.

But the 12 newly sequenced ancient Japanese genomes -- which came from the bones of people living in pre- and post-farming periods -- also identify a later influx of East Asian ancestry during the imperial Kofun period, which lasted from around 300 to 700 AD and which saw the emergence of political centralisation in Japan.

Shigeki Nakagome, Assistant Professor in Psychiatry in Trinity College Dublin's School of Medicine, led the research, which brought together an interdisciplinary team of researchers from Japan and Ireland. Professor Nakagome said:

"Researchers have been learning more and more about the cultures of the Jomon, Yayoi, and Kofun periods as more and more ancient artefacts show up, but before our research we knew relatively little about the genetic origins and impact of the agricultural transition and later state-formation phase."

"We now know that the ancestors derived from each of the foraging, agrarian, and state-formation phases made a significant contribution to the formation of Japanese populations today. In short, we have an entirely new tripartite model of Japanese genomic origins -- instead of the dual-ancestry model that has been held for a significant time."

Genomic insights into key Japanese transformations

In addition to the overarching discovery, the analyses also found that the Jomon maintained a small effective population size of around 1,000 over several millennia, with a deep divergence from continental populations dated to 20,000-15,000 years ago -- a period which saw Japan become more geographically insular through rising sea-levels.

The Japanese archipelago had become accessible through the Korean Peninsula at the beginning of the Last Glacial Maximum, some 28,000 years ago, enabling movement between the continent and the archipelago. And the widening of the Korea Strait 16,000 to 17,000 years ago due to rising sea-levels may have led to the subsequent isolation of the Jomon lineage from the rest of the continent. These time frames also coincide with the oldest evidence of Jomon pottery production.

"The indigenous Jomon people had their own unique lifestyle and culture within Japan for thousands of years prior to the adoption of rice farming during the subsequent Yayoi period. Our analysis clearly finds them to be a genetically distinct population with an unusually high affinity between all sampled individuals -- even those differing by thousands of years in age and excavated from sites on different islands," explained Niall Cooke, PhD Researcher at Trinity. "These results strongly suggest a prolonged period of isolation from the rest of the continent."

The spread of agriculture is often marked by population replacement, as documented in the Neolithic transition throughout most of Europe, with only minimal contributions from hunter-gatherer populations observed in many regions. However, the researchers found genetic evidence that the agricultural transition in prehistoric Japan involved the process of assimilation, rather than replacement, with almost equal genetic contributions from the indigenous Jomon and new immigrants associated with wet-rice farming.

Several lines of archaeological evidence support the introduction of new large settlements to Japan, most likely from the southern Korean peninsula, during the Yayoi-Kofun transition. And the analyses provide strong support for the genetic exchange involved in the appearance of new social, cultural, and political traits in this state-formation phase.

Read more at Science Daily

Allergies to mRNA-based COVID-19 vaccines rare, generally mild, study finds

Allergic reactions to the new mRNA-based COVID-19 vaccines are rare, typically mild and treatable, and they should not deter people from becoming vaccinated, according to research from the Stanford University School of Medicine.

The findings will be published online Sept. 17 in JAMA Network Open.

"We wanted to understand the spectrum of allergies to the new vaccines and understand what was causing them," said the study's senior author, Kari Nadeau, MD, PhD, the Naddisy Foundation Professor in Pediatric Food Allergy, Immunology, and Asthma.

The study analyzed 22 potential allergic reactions to the first 39,000 doses of Pfizer and Moderna COVID-19 vaccines given to health care providers at Stanford soon after the vaccines received emergency use authorization from the Food and Drug Administration.

Most of those in the study who developed reactions were allergic to an ingredient that helps stabilize the COVID-19 vaccines; they did not show allergies to the vaccine components that provide immunity to the SARS-CoV-2 virus. Furthermore, these allergic reactions occurred via an indirect activation of allergy pathways, which makes them easier to mitigate than many allergic responses.

"It's nice to know these reactions are manageable," said Nadeau, who directs the Sean N. Parker Center for Allergy and Asthma Research at Stanford. "Having an allergic reaction to these new vaccines is uncommon, and if it does happen, there's a way to manage it."

The study's lead author is former postdoctoral scholar Christopher Warren, PhD, now an assistant professor at Northwestern University Feinberg School of Medicine.

The research also suggests how vaccine manufacturers can reformulate the vaccines to make them less likely to trigger allergic responses, Nadeau said.

Delivery of protein-making instructions

The mRNA-based COVID-19 vaccines provide immunity via small pieces of messenger RNA that encode molecular instructions for making proteins. Because the mRNA in the vaccines is fragile, it is encased in bubbles of lipids -- fatty substances -- and sugars for stability. When the vaccine is injected into someone's arm, the mRNA can enter nearby muscle and immune cells, which then manufacture noninfectious proteins resembling those on the surface of the SARS-CoV-2 virus. The proteins trigger an immune response that allows the person's immune system to recognize and defend against the virus.

Estimated rates of severe vaccine-related anaphylaxis -- allergic reactions bad enough to require hospitalization -- are 4.7 and 2.5 cases per million doses for the Pfizer and Moderna vaccines, respectively, according to the federal Vaccine Adverse Event Reporting System. However, the federal system doesn't capture all allergic reactions to vaccines, tending to miss those that are mild or moderate.

For a more complete understanding of allergic reactions to the new vaccines -- how common they are, as well as how severe -- the research team examined the medical records of health care workers who received 38,895 doses of mRNA-based COVID-19 vaccines at Stanford Medicine between Dec. 18, 2020, and Jan. 26, 2021. The vaccinations included 31,635 doses of the Pfizer vaccine and 7,260 doses of the Moderna vaccine.

The researchers searched vaccine recipients' medical records for treatment of allergic reactions and identified which reactions were linked to the vaccines. Twenty-two recipients, 20 of them women, had possible allergic reactions, meaning specific symptoms starting within three hours of receiving the shots. The researchers looked for the following symptoms in recipients' medical records: hives; swelling of the mouth, lips, tongue or throat; shortness of breath, wheezing or chest tightness; or changes in blood pressure or loss of consciousness. Only 17 of the 22 recipients had reactions that met diagnostic criteria for an allergic reaction. Three recipients received epinephrine, usually given for stronger anaphylaxis. All 22 fully recovered.
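As a rough sketch of that screening step, the function below applies the three-hour onset window described above to a hypothetical record layout; the class names, field names and symptom labels are illustrative, not the study's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record layout -- illustrative only, not the study's schema.
@dataclass
class Symptom:
    name: str
    onset: datetime

@dataclass
class Record:
    vaccination_time: datetime
    symptoms: list

# Paraphrase of the symptom list described above.
ALLERGY_SYMPTOMS = {
    "hives", "mouth_or_throat_swelling", "shortness_of_breath",
    "wheezing", "chest_tightness", "blood_pressure_change",
    "loss_of_consciousness",
}

def possible_vaccine_reaction(record: Record) -> bool:
    """Flag a possible allergic reaction: at least one qualifying
    symptom starting within three hours of the shot."""
    window = timedelta(hours=3)
    return any(
        s.name in ALLERGY_SYMPTOMS
        and timedelta(0) <= s.onset - record.vaccination_time <= window
        for s in record.symptoms
    )

# Example: hives 90 minutes after vaccination -> flagged.
shot = datetime(2021, 1, 5, 9, 0)
rec = Record(shot, [Symptom("hives", shot + timedelta(minutes=90))])
print(possible_vaccine_reaction(rec))  # True
```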

Of the 22 recipients, 15 had physician-documented histories of prior allergic reactions, including 10 to antibiotics, nine to foods and eight to nonantibiotic medications. (Some recipients had more than one type of allergy.)

The researchers performed follow-up laboratory testing on 11 individuals to determine what type of allergic reaction they had, as well as what triggered their allergy: Was it one of the inert sugar or lipid ingredients in the bubble, or something else in the vaccine?

The study participants underwent skin-prick tests, in which a clinician injected small amounts of potential allergens -- the lipids, sugars (polyethylene glycol or polysorbates) or entire vaccine -- into the skin. Skin-prick testing detects allergic reactions mediated by a form of antibody known as immunoglobulin E, or IgE; these reactions are generally associated with the severest allergies.

None of the recipients reacted on skin-prick tests to the inert ingredients in the vaccines, and just one recipient's skin reacted to the whole COVID-19 vaccine. Follow-up blood tests showed that the vaccine recipients did not have significant levels of IgE antibodies against the vaccine ingredients.

Since the skin tests did not explain the mechanism of recipients' allergic reactions, the investigators proceeded to another type of diagnostic test. Vaccine recipients provided blood samples for tests of allergic activation of immune cells known as basophils. The blood samples from 10 of the 11 participants showed a reaction to the inert ingredient polyethylene glycol (PEG), which is used in both the Pfizer and Moderna vaccines. In addition, all 11 recipients had basophil activation in response to the whole mRNA vaccine when it was mixed with their own basophils.

All 11 subjects had high levels of IgG antibodies against PEG in their blood; IgG antibodies help activate basophils under some conditions, and this finding suggests the individuals were likely sensitive to PEG before receiving their vaccines.

"What's important is what we didn't find, as much as what we did find," Nadeau said. "It does not seem that the mRNA itself causes the allergic reactions."

In addition, the data suggest that reactions to the COVID-19 vaccines were generally not the most severe form of allergic reaction, which is good news in terms of vaccine safety, she said. Allergic reactions mediated by IgG and basophils can be managed with antihistamines, fluids, corticosteroids and close observation, meaning that many individuals who have had a reaction to their first vaccine dose can safely receive a second dose under medical supervision.

PEG is widely used as a stabilizer in household products, cosmetics and medications, with women more likely to be exposed to large quantities of the substance, possibly explaining why more vaccine allergies have been seen among women. (Repeated exposures to a substance can sometimes sensitize the immune system and provoke allergies.) Because most reactions were to PEG rather than the vaccine's active ingredients, it is likely that vaccine manufacturers can reformulate the vaccines with different stabilizers that are less likely to cause allergies, Nadeau said.

Read more at Science Daily

Sep 17, 2021

How to catch a perfect wave: Scientists take a closer look inside the perfect fluid

Scientists have reported new clues to solving a cosmic conundrum: How the quark-gluon plasma -- nature's perfect fluid -- evolved into matter.

A few millionths of a second after the Big Bang, the early universe took on a strange new state: a subatomic soup called the quark-gluon plasma.

And just 15 years ago, an international team including researchers from the Relativistic Nuclear Collisions (RNC) group at Lawrence Berkeley National Laboratory (Berkeley Lab) discovered that this quark-gluon plasma is a perfect fluid -- in which quarks and gluons, the building blocks of protons and neutrons, are so strongly coupled that they flow almost friction-free.

Scientists postulated that highly energetic jets of particles fly through the quark-gluon plasma -- a droplet the size of an atom's nucleus -- at speeds faster than the velocity of sound, and that like a fast-flying jet, emit a supersonic boom called a Mach wave. To study the properties of these jet particles, in 2014 a team led by Berkeley Lab scientists pioneered an atomic X-ray imaging technique called jet tomography. Results from those seminal studies revealed that these jets scatter and lose energy as they propagate through the quark-gluon plasma.

But where did the jet particles' journey begin within the quark-gluon plasma? A smaller Mach wave signal called the diffusion wake, scientists predicted, would tell you where to look. But while the energy loss was easy to observe, the Mach wave and accompanying diffusion wake remained elusive.

Now, in a study published recently in the journal Physical Review Letters, the Berkeley Lab scientists report new results from model simulations showing that another technique they invented called 2D jet tomography can help researchers locate the diffusion wake's ghostly signal.

"Its signal is so tiny, it's like looking for a needle in a haystack of 10,000 particles. For the first time, our simulations show one can use 2D jet tomography to pick up the tiny signals of the diffusion wake in the quark-gluon plasma," said study leader Xin-Nian Wang, a senior scientist in Berkeley Lab's Nuclear Science Division who was part of the international team that invented the 2D jet tomography technique.

To find that supersonic needle in the quark-gluon haystack, the Berkeley Lab team culled through hundreds of thousands of lead-nuclei collision events simulated at the Large Hadron Collider (LHC) at CERN, and gold-nuclei collision events at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. Some of the computer simulations for the current study were performed at Berkeley Lab's NERSC supercomputer user facility.

Wang says that their unique approach "will help you get rid of all this hay in your stack -- help you focus on this needle." The jet particles' supersonic signal has a unique shape that looks like a cone -- with a diffusion wake trailing behind, like ripples of water in the wake of a fast-moving boat. Scientists have searched for evidence of this supersonic "wakelet" because it tells you that there is a depletion of particles. Once the diffusion wake is located in the quark-gluon plasma, you can distinguish its signal from the other particles in the background.
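For readers who want the geometry behind that cone shape: a source moving faster than sound emits a Mach cone whose opening angle follows the standard textbook relation below. This is general supersonic-flow physics, not a result specific to this paper.

```latex
% Mach-cone half-angle for a source moving faster than the medium's
% sound speed c_s (textbook supersonic-flow relation):
\sin\theta_M = \frac{c_s}{v_{\mathrm{jet}}}, \qquad v_{\mathrm{jet}} > c_s
% The faster the jet relative to c_s, the narrower the cone.
```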

Read more at Science Daily

Climatically driven landscape evolution during warm periods

The "Lichtenberg" project is an experimental laboratory for landscape research: By means of a drilling campaign and with the support of the State Office for Mining, Energy and Geology (LBEG), the research team started a comprehensive investigation of the area near the village of Lichtenberg about three years ago, because the sediments offer a unique insight into the history of the Eemian.

Landscape reconstruction shows the development of a lake in the Wendland region

With the help of borehole geophysics and several seismic measurements, as well as the analysis of numerous sediment cores and pollen contents, the researchers managed to reconstruct the development of a small lake, part of a lake landscape that extends over more than 200 square kilometers in the southern Wendland. The results of a new study show how it developed: both at the beginning and towards the end of the Eemian, water levels rose sharply during phases of climatic change, driven among other things by lower evaporation under more open vegetation; at the same time, considerable soil erosion left the land surfaces relatively unstable. In contrast, during the main phase of the warm period, closed deciduous forest cover prevailed, which in turn resulted in a gradual lowering of the lake level. The dense vegetation cover of that time provided optimal protection against soil erosion and resulted in remarkable stability of the land surfaces.

"With the help of geophysics, it was possible for us to visualize the landscape not only selectively, but spatially at high resolution," says David Colin Tanner, project manager at LIAG. "Through interdisciplinary collaboration with numerous partners, we were ultimately able to reconstruct the sedimentary, vegetation and hydrological conditions in the course of the Eemian very well -- a great added value also for predicting future landscape changes."

Michael Hein, geographer at MPI-EVA, explains why: "The Eemian interglacial is characterized by similar climatic conditions to those predicted for the course of the 21st century and is therefore highly interesting for basic research. For the Eemian, we can now try to understand how landscapes respond to such climate changes under natural conditions -- without the determining influence of humans."

Evidence of most northern Neanderthal settlement in the last warm period

For the main phase of the Eemian, the researchers found evidence of Neanderthal occupation on the lake shore. According to the current state of research, this is the northernmost evidence of human ancestors during the last warm period in Europe.

Marcel Weiß, archaeologist at the MPI-EVA, adds: "On paper, we now have the northernmost site in Europe from this period, but I have no doubt that the settlement area of the Neanderthals in the Eemian extended even further north. The picture of Neanderthal settlement and migration patterns, as well as their habitat requirements, is still fundamentally incomplete," he said. "Until now, it was mostly assumed that they largely avoided dense forests during the Eemian. This now has to be revised to some extent with the new findings through landscape reconstruction."

Read more at Science Daily

Using visual information to learn voluntary behavior while blind

The visual cortex is one of the largest regions of the brain, a testament to how much information we receive through our eyes. The primary visual cortex, or V1, is the first stage of processing visual input in the brain. Without a functional V1, a person remains oblivious to an object even though their eyes receive the visual input. However, scientists disagree about whether we must be conscious of a visual input in order to learn from it. A new study in Scientific Reports by researchers at ASHBi and the University of Sheffield suggests that even if monkeys do not realize they have received a visual signal, they can still use it to change their behavior.

The loss of V1 does not mean a person cannot respond to a visual object, as demonstrated by blindsight, a condition first described about 50 years ago, in which patients do not consciously detect a visual stimulus but nevertheless localize the target by eye movements or hand reaching. In other words, in blindsight, individuals use visual information about an object but are unaware that they do. To what extent, then, is it possible to act independently of consciousness?

To test this question with regard to blindsight, ASHBi Professor Tadashi Isa, Dr. Rikako Kato and colleagues lesioned V1 in one hemisphere of two monkeys and had them perform a hidden-area search task. In this task, the lesioned monkeys were required to find a hidden area within a blank screen. When their eyes located the hidden area, a visual signal appeared informing them the area had been discovered, and the monkeys were rewarded with a drop of juice after a two-second delay.

The experiment was designed so that in some cases the visual signal would appear to the intact V1 side and in others it would appear to the lesioned side. The study shows that regardless of the side, the monkeys still identified the hidden area and received the reward. Moreover, the findings suggest that oculomotor behavior after visual cue presentation can be an indicator of confidence. When the confirmation feedback signal was shown to the intact V1 side, the monkeys were certain that they had found the hidden area, as demonstrated by the stopping of searching movements. In contrast, these searching saccadic movements continued when the confirmation signal was shown to the lesioned side.

Dr. Rikako Kato believes deeper study of the brain regions and neural pathways will go beyond understanding the human brain.

Read more at Science Daily

Cutting-edge 3D facial scans could give genetic clues to autism

New Australian research is using high-tech 3D facial scans to give us a better understanding of the genetic causes of autism.

Researchers from Edith Cowan University (ECU) used sophisticated machine learning techniques to analyse 5000 points on faces to measure facial asymmetry in parents of children on the autism spectrum.

The research team from ECU, UWA and Telethon Kids Institute have previously found children on the autism spectrum were more likely to have greater facial asymmetry than non-autistic children.

This is important because a better understanding of the facial characteristics of autistic people contributes to efforts for early identification and helps researchers understand hereditary (or genetic) causal links.

Genetic factors are known to play a major role in autism; however, there is growing evidence that environmental factors, such as hormones or maternal health, could also influence development of the condition.

In the current study researchers compared the facial asymmetry of 192 parents of autistic children to 163 adults with no known history of autism.

They found parents of children on the autism spectrum had more asymmetric faces than other adults of a similar age.

ECU School of Science Research Fellow Dr Syed Zulqarnian Gilani said the research was an important step in better understanding the genetic causes of autism.

"These findings suggest there could be a link between the genes which affect the likelihood of an individual having greater facial asymmetry and autism," he said.

"By using these cutting-edge 3D scans of faces combined with machine learning techniques we can distinguish between thousands of subtle differences in faces to determine an overall facial asymmetry score.

"When we compared those scores, we saw that faces of parents of autistic children were more likely to have higher asymmetry compared to other adults."

A new way of looking at autism

According to Dr Diana Tan, the project's lead author and a Postdoctoral Research Associate at UWA and Telethon Kids Institute, the research helps increase our understanding of autism.

"Autism is not traditionally known to be a condition with distinctive facial features, but our research has challenged this notion," she said.

"Our study provided evidence that the genetic factors leading to the development of autism may also express in physical characteristics, which leads to our understanding of the interplay between genes, physical and brain development in humans."

"We previously examined another facial marker -- facial masculinity -- that was associated with autism. The next step of this project would be to evaluate the usefulness of combining facial asymmetry and masculinity in determining the likelihood of autism diagnosis."

Read more at Science Daily

Sep 16, 2021

Milk enabled massive steppe migration

The long-distance migrations of early Bronze Age pastoralists in the Eurasian steppe have captured widespread interest. But the factors behind their remarkable spread have been heavily debated by archaeologists. Now a new study in Nature provides clues regarding a critical component of the herders' lifestyle that was likely instrumental to their success: dairying.

From the Xiongnu to the Mongols, the pastoralist populations of the Eurasian steppe have long been a source of fascination. Amongst the earliest herding groups in this region were the Yamnaya, Bronze Age pastoralists who began expanding out of the Pontic-Caspian steppe more than 5000 years ago. These Bronze Age migrations resulted in gene flow across vast areas, ultimately linking pastoralist populations in Scandinavia with groups that expanded into Siberia.

Just how and why these pastoralists travelled such extraordinary distances in the Bronze Age has remained a mystery. Now a new study led by researchers from the Max Planck Institute for the Science of Human History in Jena, Germany has revealed a critical clue and it might come as a surprise. It appears that the Bronze Age migrations coincided with a simple but important dietary shift -- the adoption of milk drinking.

The researchers drew on a humble but extraordinary source of information from the archaeological record -- they looked at ancient tartar (dental calculus) on the teeth of preserved skeletons. By carefully removing samples of the built-up calculus, and using advanced molecular methods to extract and then analyse the proteins still preserved within this resistant and protective material, the researchers were able to identify which ancient individuals likely drank milk, and which did not.

Their results surprised them. "The pattern was incredibly strong," observes study leader and palaeoproteomics specialist Dr. Shevan Wilkin, "The majority of pre-Bronze Age Eneolithic individuals we tested -- over 90% -- showed absolutely no evidence of consuming dairy. In contrast, a remarkable 94% of the Early Bronze Age individuals had clearly been milk drinkers."

The researchers realized they had uncovered a significant pattern. They then further analysed the data in order to examine what kind of milk the herders were consuming. "The differences between the milk peptides of different species are minor but critical," explains Dr. Wilkin. "They can allow us to reconstruct what species the consumed milk comes from." While most of the milk peptides pointed to species like cow, sheep and goat, which was not surprising in light of the associated archaeological remains, calculus from a couple of individuals revealed an unexpected species: horse.

"Horse domestication is a heavily debated topic in Eurasian archaeology," notes Dr. Wilkin. One site where early Central Asian milk drinking had been proposed was the 3500-year-old site of Botai in Kazakhstan. The researchers tested calculus from a couple of Botai individuals, but found no evidence of milk drinking. This fits with the idea that Przewalskii horses -- an early form of which were excavated from the site -- were not the ancestors of today's domestic horse, as shown by recent archaeogenetic study. Instead, horse domestication -- and the drinking of horse milk -- likely began about 1500 kilometers to the west in the Pontic Caspian steppe.

"Our results won't make everyone happy, but they are very clear," says Professor Nicole Boivin, senior author of the study and Director of the Department of Archaeology at the MPI Science of Human History. "We see a major transition to dairying right at the point that pastoralists began expanding eastwards." Domesticated horses likely had a role to play too. "Steppe populations were no longer just using animals for meat, but exploiting their additional properties -milking them and using them for transport, for example," states Professor Boivin.

Read more at Science Daily

Have we detected dark energy? Scientists say it’s a possibility

Dark energy, the mysterious force that causes the universe to accelerate, may have been responsible for unexpected results from the XENON1T experiment, deep below Italy's Apennine Mountains.

A new study, led by researchers at the University of Cambridge and reported in the journal Physical Review D, suggests that some unexplained results from the XENON1T experiment in Italy may have been caused by dark energy, and not the dark matter the experiment was designed to detect.

They constructed a physical model to help explain the results, which may have originated from dark energy particles produced in a region of the Sun with strong magnetic fields, although future experiments will be required to confirm this explanation. The researchers say their study could be an important step toward the direct detection of dark energy.

Everything our eyes can see in the skies and in our everyday world -- from tiny moons to massive galaxies, from ants to blue whales -- makes up less than five percent of the universe. The rest is dark. About 27% is dark matter -- the invisible force holding galaxies and the cosmic web together -- while 68% is dark energy, which causes the universe to expand at an accelerated rate.

"Despite both components being invisible, we know a lot more about dark matter, since its existence was suggested as early as the 1920s, while dark energy wasn't discovered until 1998," said Dr Sunny Vagnozzi from Cambridge's Kavli Institute for Cosmology, the paper's first author. "Large-scale experiments like XENON1T have been designed to directly detect dark matter, by searching for signs of dark matter 'hitting' ordinary matter, but dark energy is even more elusive."

To detect dark energy, scientists generally look for gravitational interactions: the way gravity pulls objects around. And on the largest scales, the gravitational effect of dark energy is repulsive, pushing things away from each other and making the Universe's expansion accelerate.

About a year ago, the XENON1T experiment reported an unexpected signal, or excess, over the expected background. "These sorts of excesses are often flukes, but once in a while they can also lead to fundamental discoveries," said Dr Luca Visinelli, a researcher at Frascati National Laboratories in Italy, a co-author of the study. "We explored a model in which this signal could be attributable to dark energy, rather than the dark matter the experiment was originally devised to detect."

At the time, the most popular explanation for the excess was axions -- hypothetical, extremely light particles -- produced in the Sun. However, this explanation does not stand up to observations, since the amount of axions that would be required to explain the XENON1T signal would drastically alter the evolution of stars much heavier than the Sun, in conflict with what we observe.

We are far from fully understanding what dark energy is, but most physical models for dark energy would lead to the existence of a so-called fifth force. There are four fundamental forces in the universe, and anything that can't be explained by one of these forces is sometimes referred to as the result of an unknown fifth force.

However, we know that Einstein's theory of gravity works extremely well in the local universe. Therefore, any fifth force associated with dark energy is unwanted and must be 'hidden' or 'screened' when it comes to small scales, and can only operate on the largest scales where Einstein's theory of gravity fails to explain the acceleration of the Universe. To hide the fifth force, many models for dark energy are equipped with so-called screening mechanisms, which dynamically hide the fifth force.

Vagnozzi and his co-authors constructed a physical model, which used a type of screening mechanism known as chameleon screening, to show that dark energy particles produced in the Sun's strong magnetic fields could explain the XENON1T excess.

"Our chameleon screening shuts down the production of dark energy particles in very dense objects, avoiding the problems faced by solar axions," said Vagnozzi. "It also allows us to decouple what happens in the local very dense Universe from what happens on the largest scales, where the density is extremely low."

The researchers used their model to show what would happen in the detector if the dark energy was produced in a particular region of the Sun, called the tachocline, where the magnetic fields are particularly strong.

"It was really surprising that this excess could in principle have been caused by dark energy rather than dark matter," said Vagnozzi. "When things click together like that, it's really special."

Their calculations suggest that experiments like XENON1T, which are designed to detect dark matter, could also be used to detect dark energy. However, the original excess still needs to be convincingly confirmed. "We first need to know that this wasn't simply a fluke," said Visinelli. "If XENON1T actually saw something, you'd expect to see a similar excess again in future experiments, but this time with a much stronger signal."

Read more at Science Daily

Astronomers solve 900-year-old cosmic mystery surrounding Chinese supernova of 1181AD

A 900-year-old cosmic mystery surrounding the origins of a famous supernova first spotted over China in 1181AD has finally been solved, according to an international team of astronomers.

New research published today (September 15, 2021) says that a faint, fast expanding cloud (or nebula), called Pa 30, surrounding one of the hottest stars in the Milky Way, known as Parker's Star, fits the profile, location and age of the historic supernova.

There have only been five bright supernovae in the Milky Way in the last millennium (starting in 1006). Of these, the Chinese supernova, also known as the 'Chinese Guest Star' of 1181AD, has remained a mystery. It was originally seen and documented by Chinese and Japanese astronomers in the 12th century, who said it was as bright as the planet Saturn and remained visible for six months. They also recorded an approximate location in the sky of the sighting, but no confirmed remnant of the explosion had ever been identified by modern astronomers. The other four supernovae are all now well known to modern day science and include the famous Crab nebula.

The source of this 12th century explosion remained a mystery until this latest discovery made by a team of international astronomers from Hong Kong, the UK, Spain, Hungary and France, including Professor Albert Zijlstra from The University of Manchester. In the new paper, the astronomers found that the Pa 30 nebula is expanding at an extreme velocity of more than 1,100 km per second (at this speed, traveling from the Earth to the Moon would take only 5 minutes). They used this velocity to derive an age of around 1,000 years, which coincides with the events of 1181AD.
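Two quick back-of-envelope checks on those figures. The Earth-Moon distance is a standard constant; the nebula radius below is a hypothetical placeholder chosen to be consistent with the quoted ~1,000-year kinematic age, not a value taken from the paper.

```python
# Sanity checks on the quoted numbers (radius is a placeholder assumption).
V_EXP = 1_100.0              # expansion velocity, km/s (from the study)
EARTH_MOON_KM = 384_400.0    # mean Earth-Moon distance, km
KM_PER_LY = 9.461e12         # kilometres in one light year
SEC_PER_YEAR = 3.156e7       # seconds in one year

minutes_to_moon = EARTH_MOON_KM / V_EXP / 60.0
print(f"Earth to Moon at {V_EXP:.0f} km/s: {minutes_to_moon:.1f} minutes")
# ~5.8 minutes, consistent with the "only 5 minutes" quoted above.

# Kinematic age: for free expansion, age ~ radius / velocity.
radius_ly = 3.7              # hypothetical radius, roughly V_EXP * 1,000 yr
age_years = radius_ly * KM_PER_LY / V_EXP / SEC_PER_YEAR
print(f"Kinematic age (R / v): {age_years:.0f} years")  # ~1,000
```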

Prof Zijlstra (Professor in Astrophysics at the University of Manchester) explains: "The historical reports place the guest star between two Chinese constellations, Chuanshe and Huagai. Parker's Star fits the position well. That means both the age and location fit with the events of 1181."

Pa 30 and Parker's Star have previously been proposed as the result of a merger of two White Dwarfs. Such events are thought to lead to a rare and relatively faint type of supernova, called a 'Type Iax supernova'.

Prof Zijlstra added: "Only around 10% of supernovae are of this type and they are not well understood. The fact that SN1181 was faint but faded very slowly fits this type. It is the only such event where we can study both the remnant nebula and the merged star, and also have a description of the explosion itself."

Read more at Science Daily

Gut microbiota influences the ability to lose weight

Gut microbiota influences the ability to lose weight in humans, according to new research. The findings were published this week in mSystems, an open-access journal of the American Society for Microbiology.

"Your gut microbiome can help or cause resistance to weight loss and this opens up the possibility to try to alter the gut microbiome to impact weight loss," said lead study author Christian Diener, Ph.D., a research scientist at the Institute for Systems Biology in Seattle, Washington.

To conduct their research, Dr. Diener and colleagues focused on a large cohort of individuals who were involved in a lifestyle intervention study. Instead of a specific diet or exercise program, this intervention involved a commercial behavioral coaching program paired with advice from a dietician and nurse coach. The researchers focused on 48 individuals who lost more than 1% of their body weight per month over a 6 to 12 month period and 57 individuals who did not lose any weight and had a stable body mass index (BMI) over the same period. The researchers relied on metagenomics, the study of genetic material recovered from stool samples, and analyzed blood metabolites, blood proteins, clinical labs, dietary questionnaires and gut bacteria in the two groups.

After controlling for age, sex and baseline BMI, the researchers identified 31 baseline stool metagenomic functional features that were associated with weight loss responses. These included complex polysaccharide and protein degradation genes, stress-response genes, respiration-related genes, cell wall synthesis genes and gut bacterial replication rates. A major finding was that the ability of the gut microbiome to break down starches was increased in people who did not lose weight. Another key finding was that genes that help bacteria grow faster, multiply, replicate and assemble cell walls were increased in people who lost more weight.
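A minimal sketch of what "controlling for age, sex and baseline BMI" means in practice: regress the weight-loss outcome on a microbiome feature together with the covariates and inspect the feature's coefficient. The data frame, column names and feature name below are synthetic stand-ins; the study's actual statistical pipeline is more involved.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (the real study used 105 participants' stool
# metagenomes); "amylase_genes" is a hypothetical feature name.
rng = np.random.default_rng(1)
n = 105
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "sex": rng.choice(["F", "M"], n),
    "baseline_bmi": rng.normal(32, 4, n),
    "amylase_genes": rng.normal(0, 1, n),
})
# Outcome loosely tied to the feature so the fit has signal to find.
logit = -0.8 * df["amylase_genes"]
df["lost_weight"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Covariate-adjusted association test for one feature.
model = smf.logit(
    "lost_weight ~ amylase_genes + age + C(sex) + baseline_bmi", data=df
).fit(disp=False)
print(model.params["amylase_genes"], model.pvalues["amylase_genes"])
```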

"Before this study, we knew the composition of bacteria in the gut were different in obese people than in people who were non-obese, but now we have seen that there are a different set of genes that are encoded in the bacteria in our gut that also responds to weight loss interventions," said Dr. Diener. "The gut microbiome is a major player in modulating whether a weight loss intervention will have success or not. The factors that dictate obesity versus nonobesity are not the same factors that dictate whether you will lose weight on a lifestyle intervention."

Read more at Science Daily

COVID-19 nasal vaccine candidate effective at preventing disease transmission, study shows

Breathe in, breathe out. That's how easy it is for SARS-CoV-2, the virus that causes COVID-19, to enter your nose. And though remarkable progress has been made in developing intramuscular vaccines against SARS-CoV-2, such as the readily available Pfizer, Moderna and Johnson & Johnson vaccines, nothing yet -- like a nasal vaccine -- has been approved to provide mucosal immunity in the nose, the first barrier against the virus before it travels down to the lungs.

But now, we're one step closer.

Navin Varadarajan, University of Houston M.D. Anderson Professor of Chemical and Biomolecular Engineering, and his colleagues are reporting in iScience the development of an intranasal subunit vaccine that provides durable local immunity against inhaled pathogens.

"Mucosal vaccination can stimulate both systemic and mucosal immunity and has the advantage of being a non-invasive procedure suitable for immunization of large populations," said Varadarajan. "However, mucosal vaccination has been hampered by the lack of efficient delivery of the antigen and the need for appropriate adjuvants that can stimulate a robust immune response without toxicity."

To solve those problems, Varadarajan collaborated with Xinli Liu, associate professor of pharmaceutics at the UH College of Pharmacy, and an expert in nanoparticle delivery. Liu's team was able to encapsulate the agonist of the stimulator of interferon genes (STING) within liposomal particles to yield the adjuvant named NanoSTING. The function of the adjuvant is to promote the body's immune response.

"NanoSTING has a small particle size around 100 nanometers which exhibits significantly different physical and chemical properties to the conventional adjuvant," said Liu.

"We used NanoSTING as the adjuvant for intranasal vaccination and single-cell RNA-sequencing to confirm the nasal-associated lymphoid tissue as an inductive site upon vaccination. Our results show that the candidate vaccine formulation is safe, produces rapid immune responses -- within seven days -- and elicits comprehensive immunity against SARS-CoV-2," said Varadarajan.

A fundamental limitation of intramuscular vaccines is that they are not designed to elicit mucosal immunity. As prior work with other respiratory pathogens like influenza has shown, sterilizing immunity to virus re-infection requires adaptive immune responses in the respiratory tract and the lung.

The nasal vaccine could also help vaccines be distributed equitably worldwide, according to the researchers. It is estimated that first-world countries have already secured multiple intramuscular doses for each citizen, while billions of people in countries like India, South Africa and Brazil with large outbreaks are currently unimmunized. These outbreaks and the accompanying viral spread are known to facilitate viral evolution, leading to decreased efficacy of all vaccines.

"Equitable distribution requires vaccines that are stable and that can be shipped easily. As we have shown, each of our components, the protein (lyophilized) and the adjuvant (NanoSTING) are stable for over 11 months and can be stored and shipped without the need for freezing," said Varadarajan.

Varadarajan is co-founder of AuraVax Therapeutics Inc., a pioneering biotech company developing novel intranasal vaccines and therapies to help patients defeat debilitating diseases, including COVID-19. The company has an exclusive license agreement with UH with respect to the intellectual property covering intranasal vaccines and STING agonist technologies. They have initiated the manufacturing process and plan to engage the FDA later this year.

Read more at Science Daily

Sep 15, 2021

Planets form in organic soups with different ingredients

Astronomers have mapped out the chemicals inside of planetary nurseries in extraordinary detail. The newly unveiled maps reveal the locations of dozens of molecules within five protoplanetary disks -- regions of dust and gas where planets form around young stars.

"These planet-forming disks are teeming with organic molecules, some which are implicated in the origins of life here on Earth," explains Karin Öberg, an astronomer at the Center for Astrophysics | Harvard & Smithsonian (CfA) who led the map-making project. "This is really exciting; the chemicals in each disk will ultimately affect the type of planets that form -- and determine whether or not the planets can host life."

A series of 20 papers detailing the project, appropriately named Molecules with ALMA at Planet-forming Scales, or MAPS, was published today in the open-access repository arXiv. The papers have also been accepted to the Astrophysical Journal Supplement as a forthcoming special edition series to showcase the high-resolution images and their implications.

Planets Form in Different Soups

The new maps of the disks reveal that the chemicals in protoplanetary disks are not located uniformly throughout each disk; instead, each disk is a different planet-forming soup, a mixed bag of molecules, or planetary ingredients. The results suggest that planet formation occurs in diverse chemical environments and that as they form, each planet may be exposed to vastly different molecules depending on its location in a disk.

"Our maps reveal it matters a great deal where in a disk a planet forms," says Öberg, the lead author of MAPS I (https://arxiv.org/abs/2109.06268), the first paper in the series. "Many of the chemicals in the disks are organic, and the distribution of these organics varies dramatically within a particular disk. Two planets can form around the same star and have very different organic inventories, and therefore predispositions to life."

CfA graduate student Charles Law led MAPS III (https://arxiv.org/abs/2109.06210), the study that mapped out the specific locations of 18 molecules -- including hydrogen cyanide, and other nitriles connected to the origins of life -- in each of the five disks. The images were taken with the Atacama Large Millimeter/submillimeter Array (ALMA) in 2018 and 2019. The vast amount of data collected required a 100-terabyte hard drive and took two years to analyze and break down into separate maps of each molecule.

The final maps of each disk surprised Law and showed that "understanding the chemistry occurring even in a single disk is much more complicated than we thought."

"Each individual disk appears quite different from the next one, with its own distinctive set of chemical substructures," Law explains. "The planets forming in these disks are going to experience very different chemical environments."

Fishing for Planetary Newborns

The MAPS project provided astronomers with opportunities to study more than just the chemical environment of disks.

"Our team used these maps to show where some of the forming planets are located within disks, enabling scientists to connect the observed chemical soups with the future compositions of specific planets," Öberg says.

The effort was led by Richard Teague, a Submillimeter Array fellow at the CfA, who used the data and imagery collected by MAPS to hunt for newborn planets.

Astronomers are confident that planets form in protoplanetary disks, but there is a catch: they can't directly see them. Dense gas and dust, which will last some three million years, shields young, developing planets from view.

"It's like trying to see a fish underwater," Teague says. "We know they're there, but we can't peer that far down. We have to look for subtle signs on the surface of the water, like ripples and waves."

In protoplanetary disks, gas and dust naturally rotate around a central star. The speed of the moving material, which astronomers can measure, should vary smoothly and predictably across the disk. But if a planet is lurking beneath the surface, Teague believes it can slightly disturb the gas traveling around it, causing a small deviation in velocity or making the spiraling gas move in an unexpected way.
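A sketch of the baseline such a search compares against: gas in a smooth disk should orbit at the Keplerian speed set by the star's mass, so a planet shows up as a localized residual between the measured velocity field and this curve. The 2-solar-mass default below is a placeholder, not a value from the papers.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_velocity(r_au, m_star_solar=2.0):
    """Orbital speed (m/s) of gas at radius r_au (in au) around a star
    of m_star_solar solar masses, assuming smooth Keplerian rotation:
    v = sqrt(G * M / r). The stellar mass default is a placeholder."""
    return np.sqrt(G * m_star_solar * M_SUN / (r_au * AU))

r = np.linspace(10.0, 200.0, 5)
print(keplerian_velocity(r) / 1e3)  # km/s; the smooth baseline curve
# With real data: residual = v_measured - keplerian_velocity(r)
# A planet appears as a localized "hiccup" in that residual.
```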

Using this tactic, Teague analyzed gas velocities in two of the five protoplanetary disks -- around the young stars HD 163296 and MWC 480. Small hiccups in velocity in certain portions of the disks revealed a young Jupiter-like planet embedded in each of the disks. The observations are detailed in MAPS XVIII (https://arxiv.org/abs/2109.06218).

As the planets grow, they will eventually "carve open gaps in the structure of the disks" so we can see them, Teague says, but the process will take thousands of years.

Teague hopes to confirm the discoveries sooner than that using the forthcoming James Webb Space Telescope. "It should have the sensitivity to pinpoint the planets," he says.

Law also hopes to confirm the results by studying more protoplanetary disks in the future.

Law says, "If we want to see if the chemical diversity observed in MAPS is typical, we're going to need to increase our sample size and map out more disks in the same way."

Read more at Science Daily

Oldest known mammal cavities discovered in 55-million-year-old fossils suggest a sweet tooth for fruit

A new U of T study has discovered the oldest known cavities in a mammal, the likely result of a diet that included eating fruit.

The cavities were discovered in fossils of Microsyops latidens, a pointy-snouted animal no bigger than a racoon that was part of a group of mammals known as stem primates. It walked the earth for about 500,000 years before going extinct around 54 million years ago.

"These fossils were sitting around for 54 million years and a lot can happen in that time," says Keegan Selig, lead author of the study who recently completed his PhD student in Professor Mary Silcox's lab at U of T Scarborough.

"I think most people assumed these holes were some kind of damage that happened over time, but they always occurred in the same part of the tooth and consistently had this smooth, rounded curve to them."

Very few fossils of M. latidens' body have been found, but a large sample of fossilized teeth have been unearthed over the years in Wyoming's Southern Bighorn Basin. While they were first dug up in the 1970s and have been studied extensively since, Selig is the first to identify the little holes in their teeth as being cavities.

Cavities form when bacteria in the mouth turn foods containing carbohydrates into acids. These acids erode tooth enamel (the hard protective coating on the tooth) before eating away at dentin, the softer part of the tooth beneath the enamel. This decay slowly develops into tiny holes.

For the research, published in the journal Scientific Reports, Selig looked at the fossilized teeth of a thousand individuals under a microscope and was able to identify cavities in 77 of them. To verify the results, he also did micro-CT scans (a type of X-ray that looks inside an object without having to break it apart) on some of the fossils.

As for what caused the cavities, Selig says the likely culprit was the animal's fruit-rich diet. While primates would have been eating fruit for quite some time before M. latidens, fruit became more abundant around 65 million years ago for a variety of reasons, and primates would have started eating more of it.

An interesting discovery was that of the fossil teeth studied, seven per cent from the oldest group contained cavities, while 17 per cent of the more recent group did. This suggests a shift in their diet over time toward more fruit or other sugar-rich foods.

"Eating fruit is considered one of the hallmarks of what makes early primates unique," says Selig, whose research looks on reconstructing the diets of fossil mammals.

He adds that M. latidens would naturally want to eat fruit since it's full of sugar and contains a lot of energy. "If you're a little primate scurrying around in the trees, you would want to eat food with a high energy value. They also likely weren't concerned about getting cavities."

The study, which received funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), not only includes the largest and earliest known sample of cavities in an extinct mammal, but also offers clues about how the diet of M. latidens changed over time, along with a framework to help researchers look for cavities in the fossils of other extinct mammals.

Selig says identifying cavities in fossils can tell us a lot about the biology of these animals. It can help figure out what they were eating and how they evolved over time based on their diet. For example, while evolutionary changes in the structure of a jaw or teeth suggest broader changes in diet over time, cavities also offer a window into what that specific animal was eating in their lifetime.

Read more at Science Daily

Prehistoric humans rarely mated with their cousins

The researchers re-analyzed previously published DNA data from ancient humans who lived during the last 45,000 years to find out how closely related their parents were. The results were surprising: ancient humans rarely chose their cousins as mates. In a global dataset of 1,785 individuals, only 54 -- about three percent -- show the typical signs of their parents being cousins. Those 54 did not cluster in space or time, showing that cousin matings were sporadic events in the studied ancient populations. Notably, even for hunter-gatherers who lived more than 10,000 years ago, unions between cousins were the exception.

To analyze such a large dataset, the researchers developed a new computational tool to screen ancient DNA for parental relatedness. It detects long stretches of DNA that are identical in the two DNA copies, one inherited from the mother and one from the father. The closer the parents are related, the longer and more abundant such identical segments are. For modern DNA data, computational methods can identify these stretches with ease. However, the quality of DNA from bones that are thousands of years old is, in most cases, too low to apply these methods. Thus, the new method fills the gaps in the ancient genomes by leveraging modern high-quality DNA data. "By applying this new technique we could screen more than ten times as many ancient genomes than previously possible," says Harald Ringbauer from MPI-EVA, the lead researcher of the study.
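The core of such a screen can be illustrated in a few lines: scan along a chromosome for long runs where the maternal and paternal copies carry the same allele. This toy version assumes clean, complete genotype calls; the real tool's contribution is precisely in handling sparse, error-prone ancient data by imputing from modern reference panels.

```python
def identical_segments(genotypes, min_len=50):
    """Toy scan for long runs where the two DNA copies agree.
    genotypes: list of (maternal_allele, paternal_allele) tuples along
    a chromosome; min_len: minimum run length (in markers) to report.
    Longer and more abundant runs indicate more closely related parents."""
    runs, start = [], None
    for i, (a, b) in enumerate(genotypes):
        if a == b:                      # both copies identical here
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))
    return runs

# Example: a long homozygous run from marker 100 to 220.
geno = [(0, 1)] * 100 + [(1, 1)] * 120 + [(0, 1)] * 80
print(identical_segments(geno))  # [(100, 220)]
```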

Beyond identifying matings of close kin, the new method also allowed the researchers to study background relatedness. Such relatedness originates from the typically many unknown distant relationships within small populations. As a key result, the researchers found a substantial demographic impact of the technological innovation of agriculture. This was always followed by a marked decay in background parental relatedness, indicative of increasing population sizes. By analyzing time transects of more than a dozen geographic regions across the globe, the researchers expanded upon previous evidence that population sizes increased in societies practicing farming compared to hunter-gatherer subsistence strategies.

The new method to screen ancient DNA for parental relatedness gives researchers a versatile new tool. Looking forward, the field of ancient DNA is quickly developing, with more and more ancient genomes being produced every year. By elucidating mating choices as well as the dynamics of past population sizes, the new method will allow researchers to shed more light on the lives of our ancestors.

From Science Daily

Creative exploration followed by exploitation can lead to a career’s greatest hits

Before developing his famed "drip technique," abstract artist Jackson Pollock dabbled in drawing, print making and surrealist paintings of humans, animals and nature.

According to a new study from the Kellogg School of Management at Northwestern University, this period of exploration, followed by exploitation of his new drip technique, set Pollock up for a "hot streak," or a burst of high-impact works clustered together in close succession. In Pollock's case, this was the three-year period from 1947 to 1950, during which he created all the drippy, splattered masterpieces he is still famous for today.

By using artificial intelligence to mine big data related to artists, film directors and scientists, the Northwestern researchers discovered this pattern is not uncommon but, instead, a magical formula. Hot streaks, they found, directly result from years of exploration (studying diverse styles or topics) immediately followed by years of exploitation (focusing on a narrow area to develop deep expertise).

The research will be published on Sept. 13 in the journal Nature Communications.

With this new understanding about what triggers a hot streak, institutions can intentionally create environments that support and facilitate hot streaks in order to help their members thrive.

"Neither exploration nor exploitation alone in isolation is associated with a hot streak. It's the sequence of them together," said Dashun Wang, who led the study. "Although exploration is considered a risk because it might not lead anywhere, it increases the likelihood of stumbling upon a great idea. By contrast, exploitation is typically viewed as a conservative strategy. If you exploit the same type of work over and over for a long period of time, it might stifle creativity. But, interestingly, exploration followed by exploitation appears to show consistent associations with the onset of hot streaks."

Wang is a professor of management and organizations at the Kellogg School and of industrial engineering and management sciences in Northwestern's McCormick School of Engineering. He also is director of the Center for Science of Science and Innovation and a core member of the Northwestern Institute on Complex Systems.

Inspired by Van Gogh

In 2018, Wang and his colleagues published a paper in Nature, characterizing hot streaks in artistic, cultural and scientific careers. After establishing that these hot streaks do occur, Wang was motivated to discover what triggers them. He found a clue while visiting the Van Gogh Museum in Amsterdam.

Van Gogh experienced an artistic breakthrough from 1888-1890, during which he painted his most famous works, including The Starry Night, Sunflowers and Bedroom in Arles. Before that, however, his work was less impressionistic and more realistic. He also tended to use somber earth tones rather than the bright, sweeping colors for which he is best known today.

"If you look at his production before 1888, it was all over the place," Wang said. "It was full of still-life paintings, pencil drawings and portraits that are much different in character from the work he created during his hot streak."

Mining data from artists, scientists, filmmakers

In the new study, Wang's team developed computational methods using deep-learning algorithms and network science and then applied these methods to large-scale datasets tracing the career outputs of artists, film directors and scientists.

For artists, Wang's team used algorithms for image recognition to mine data from 800,000 visual arts images collected from museums and galleries, which cover the career histories of 2,128 artists, including Pollock and Van Gogh. For film directors, the team collected data sets from the Internet Movie Database (IMDb), which included 79,000 films by 4,337 directors. For scientists, the team analyzed the career histories of 20,040 scientists by combining publication and citation datasets from the Web of Science and Google Scholar.

Wang and his collaborators quantified a hot streak within each career based on the impact of works produced, measured by auction price, IMDb ratings and academic paper citations. Then, they correlated the timing of hot streaks with the creative trajectories of each individual. Looking at careers four years before and after the hot streak, the researchers examined how each individual's work changed around the beginning of a hot streak.
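
The study's actual pipeline rests on deep-learning classifiers and large datasets, but the core bookkeeping can be illustrated simply. The sketch below, with hypothetical inputs, locates the highest-impact window of a career and scores how exploratory the preceding work was:

```python
# Illustrative sketch only -- not the study's method. Each work is a
# (topic_label, impact_score) pair; a "hot streak" is taken to be the
# contiguous window with the highest mean impact, and exploration is scored
# as the Shannon entropy of topic labels (high entropy = diverse topics).
import math
from collections import Counter

def hot_streak_start(impacts, window=10):
    """Index where the highest-mean-impact window of a career begins."""
    means = [sum(impacts[i:i + window]) / window
             for i in range(len(impacts) - window + 1)]
    return max(range(len(means)), key=means.__getitem__)

def topic_entropy(topics):
    """Shannon entropy of topic labels: high = exploration, low = exploitation."""
    counts = Counter(topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# The paper's finding, in these terms: careers where entropy is high just
# before the streak and low during it show the strongest effect.
```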

Combination of creative experimentation, implementation is 'powerful'

The team found that when an episode of exploration was not followed by exploitation, the chance for a hot streak was significantly reduced. Similarly, exploitation alone -- that was not preceded by exploration -- also did not guarantee a hot streak. But when exploration was closely followed by exploitation, the researchers noted the probability of a hot streak consistently and significantly increased.

"We were able to identify among the first regularities underlying the onset of hot streaks, which appears universal across diverse creative domains," Wang said. "Our findings suggest that creative strategies that balance experimentation with implementation may be especially powerful."

"This knowledge can help individuals and organizations understand the different types of activities to engage in -- such as exploring new domains or exploiting existing knowledge and competencies -- and the optimal sequence to use in order to achieve the most significant impact," added study co-author Jillian Chown, an assistant professor of management and organizations at Kellogg School.

Read more at Science Daily

New findings on ambient UVB radiation, vitamin D, and protection against severe COVID-19

New research from Trinity College Dublin and the University of Edinburgh has examined the association between vitamin D and COVID-19, and found that ambient ultraviolet B (UVB) radiation (which is key for vitamin D production in the skin) at an individual's place of residence in the weeks before COVID-19 infection was strongly protective against severe disease and death. The paper has been published in the journal Scientific Reports.

Previous studies have linked vitamin D deficiency with an increased susceptibility to viral and bacterial respiratory infections. Similarly, several observational studies found a strong correlation between vitamin D deficiency and COVID-19, but these effects could be confounded -- in fact a result of other factors, such as obesity, older age or chronic illness, that are also linked with low vitamin D.

To overcome this, researchers calculated a "genetically predicted" vitamin D level -- one that is not confounded by other demographic, health and lifestyle factors -- by using information from over one hundred genes that determine vitamin D status.

Mendelian randomisation is an analytical approach that enables researchers to investigate whether vitamin D and COVID-19 might be causally linked, using genetic data. A few earlier studies attempted this but failed to show a causal link. This could be because UVB radiation from sunshine -- the most important source of vitamin D for the majority of people -- was ignored.
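
For readers curious about the mechanics: in its simplest form, Mendelian randomisation estimates a causal effect as the ratio of two observed associations (the Wald ratio), and a "genetically predicted" vitamin D level is an effect-size-weighted sum of a person's vitamin D-related alleles. A minimal sketch under those textbook assumptions, not the study's actual code:

```python
# Minimal Mendelian randomisation sketch (textbook Wald ratio), assuming a
# genetic variant that affects the outcome only through the exposure.

def wald_ratio(beta_gene_outcome, beta_gene_exposure):
    """Causal effect of exposure on outcome, from one genetic variant."""
    return beta_gene_outcome / beta_gene_exposure

def genetic_score(allele_counts, effect_sizes):
    """Genetically predicted exposure: effect-weighted sum of allele counts."""
    return sum(a * w for a, w in zip(allele_counts, effect_sizes))

# With many variants, per-variant Wald ratios are usually combined by
# inverse-variance weighting; the study's point is that a weak score
# (genes explain little of vitamin D) makes such estimates imprecise.
```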

Researchers, for the first time, looked jointly at genetically predicted and UVB-predicted vitamin D levels. Almost half a million individuals in the UK took part in the study, and ambient UVB radiation before COVID-19 infection was assessed individually for each participant. When comparing the two variables, researchers found that the correlation with measured vitamin D concentration in the circulation was three times stronger for the UVB-predicted level than for the genetically predicted one.

Researchers found that ambient UVB radiation at an individual's place of residence preceding COVID-19 infection was strongly and inversely associated with hospitalisation and death. This suggests that vitamin D may protect against severe COVID-19 disease and death. Additionally, while the results from the Mendelian randomisation analysis weren't conclusive, some indication of a potential causal effect was noted. Because the genetically predicted vitamin D level used in the Mendelian randomisation analysis is only weakly linked to actual vitamin D status, it is possible that the number of cases in the current study was too small to convincingly determine a causal effect; future, larger studies might provide the answer.

Professor Lina Zgaga, Associate Professor in Epidemiology, School of Medicine, Trinity College and senior researcher on the study said:

"Our study adds further evidence that vitamin D might protect against severe COVID-19 infection. Conducting a properly designed COVID-19 randomised controlled trial of vitamin D supplementation is critical. Until then, given that vitamin D supplements are safe and cheap, it is definitely advisable to take supplements and protect against vitamin D deficiency, particularly with winter on the horizon."

Professor Evropi Theodoratou, Professor of Cancer Epidemiology and Global Health, University of Edinburgh and senior researcher on the study said:

"Given the lack of highly effective therapies against COVID-19, we think it is important to remain open-minded to emerging results from rigorously conducted studies of vitamin D."

Read more at Science Daily

Sep 14, 2021

Largest virtual universe free for anyone to explore

Forget about online games that promise you a "whole world" to explore. An international team of researchers has generated an entire virtual UNIVERSE, and made it freely available on the cloud to everyone.

Uchuu (meaning "Outer Space" in Japanese) is the largest and most realistic simulation of the Universe to date. The Uchuu simulation consists of 2.1 trillion particles in a computational cube an unprecedented 9.63 billion light-years to a side. For comparison, that's about three-quarters the distance between Earth and the most distant observed galaxies. Uchuu will allow us to study the evolution of the Universe on a level of both size and detail inconceivable until now.

Uchuu focuses on the large-scale structure of the Universe: mysterious halos of dark matter which control not only the formation of galaxies, but also the fate of the entire Universe itself. The scale of these structures ranges from the largest galaxy clusters down to the smallest galaxies. Individual stars and planets aren't resolved, so don't expect to find any alien civilizations in Uchuu. But one way that Uchuu wins big in comparison to other virtual worlds is the time domain; Uchuu simulates the evolution of matter over almost the entire 13.8 billion year history of the Universe from the Big Bang to the present. That is over 30 times longer than the time since animal life first crawled out of the seas on Earth.

Julia F. Ereza, a Ph.D. student at IAA-CSIC who uses Uchuu to study the large-scale structure of the Universe explains the importance of the time domain, "Uchuu is like a time machine: we can go forward, backward and stop in time, we can 'zoom in' on a single galaxy or 'zoom out' to visualize a whole cluster, we can see what is really happening at every instant and in every place of the Universe from its earliest days to the present, being an essential tool to study the Cosmos."

An international team of researchers from Japan, Spain, U.S.A., Argentina, Australia, Chile, France, and Italy created Uchuu using ATERUI II, the world's most powerful supercomputer dedicated to astronomy. Even with all this power, it still took a year to produce Uchuu. Tomoaki Ishiyama, an associate professor at Chiba University who developed the code used to generate Uchuu, explains, "To produce Uchuu we have used ... all 40,200 processors (CPU cores) available exclusively for 48 hours each month. Twenty million supercomputer hours were consumed, and 3 Petabytes of data were generated, the equivalent of 894,784,853 pictures from a 12-megapixel cell phone."
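
Those figures hang together arithmetically, as a quick back-of-envelope check shows (the per-photo file size is our inference, not a number given in the release):

```python
# Back-of-envelope check of the quoted Uchuu numbers.
core_hours_per_month = 40_200 * 48           # all CPU cores, 48 hours/month
months = 20_000_000 / core_hours_per_month
print(f"{months:.1f} months of runtime")     # ~10.4 months, i.e. about a year

bytes_total = 3e15                           # 3 petabytes of raw output
photos = 894_784_853
print(f"{bytes_total / photos / 1e6:.2f} MB per photo")
# ~3.35 MB each -- a plausible size for one 12-megapixel JPEG (our inference).
```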

Before you start worrying about download time, the research team used high-performance computational techniques to compress information on the formation and evolution of dark matter haloes in the Uchuu simulation into a 100-terabyte catalog. This catalog is now available to everyone on the cloud in an easy to use format thanks to the computational infrastructure skun6 located at the Instituto de Astrofísica de Andalucía (IAA-CSIC), the RedIRIS group, and the Galician Supercomputing Center (CESGA). Future data releases will include catalogues of virtual galaxies and gravitational lensing maps.

Read more at Science Daily

Personality matters, even for squirrels

Humans acknowledge that personality goes a long way, at least for our species. But scientists have been more hesitant to ascribe personality -- defined as consistent behavior over time -- to other animals.

A study from the University of California, Davis is the first to document personality in golden-mantled ground squirrels, which are common across the western U.S. and parts of Canada. The study, published in the journal Animal Behaviour, found the squirrels show personality for four main traits: boldness, aggressiveness, activity level and sociability.

While the golden-mantled ground squirrel is under no conservation threat, the findings suggest that understanding how an animal's personality influences use of space is important for wildlife conservation.

'Individuals matter'

To anyone watching them chitter and skitter, stop and then scurry, the fact that ground squirrels have personalities may not seem surprising. But the scientific field of animal personality is relatively young, as is the recognition that there are ecological consequences of animal personality. For instance, bolder, more aggressive squirrels may find more food or defend a larger territory, but their risky behavior may also make them vulnerable to predation or accidents.

"This adds to the small but growing number of studies showing that individuals matter," said lead author Jaclyn Aliperti, who conducted the study while earning her Ph.D. in ecology at UC Davis. "Accounting for personality in wildlife management may be especially important when predicting wildlife responses to new conditions, such as changes or destruction of habitat due to human activity."

Personality tests

Scientists have been studying golden-mantled ground squirrels at the Rocky Mountain Biological Laboratory in Gothic, Colorado, for decades. It was established as a long-term study site more than 30 years ago by Aliperti's advisor, Dirk Van Vuren, a professor in the UC Davis Department of Wildlife, Fish and Conservation Biology.

Aliperti drew from this powerful data set for her study, while also initiating a series of experiments there over the course of three summers to observe and quantify the squirrels' personalities.

She notes that while there is no Myers-Briggs test for animals, there are standardized approaches to quantifying animal personality. She observed and recorded squirrel responses to four tests, described below; a sketch of how such repeated trials are turned into a personality score follows the list.

  • Novel environment: Squirrels were placed in an enclosed box with gridded lines and holes.
  • Mirror: Squirrels were presented with their mirror image, which they do not recognize as their own.
  • Flight initiative: Squirrels were approached slowly in the wild to see how long they waited before running away.
  • Behavior-in-trap: Squirrels were caught, unharmed, in a simple trap and their behavior briefly observed.
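
The article doesn't detail the statistics, but in animal-personality research "consistent behavior over time" is usually quantified as repeatability: the share of behavioral variance that lies between individuals rather than within them across repeated trials. A minimal sketch with hypothetical scores:

```python
# Sketch of repeatability, the standard animal-personality statistic. This is
# the simple variance-components version; published studies typically use
# mixed models. Squirrel IDs and scores below are hypothetical.

def repeatability(trials):
    """trials: dict of individual id -> list of scores from repeated tests."""
    scores = [s for v in trials.values() for s in v]
    grand_mean = sum(scores) / len(scores)
    means = {k: sum(v) / len(v) for k, v in trials.items()}
    # Between-individual variance: spread of each squirrel's average score.
    var_between = sum((m - grand_mean) ** 2 for m in means.values()) / len(means)
    # Within-individual variance: spread of trials around each squirrel's average.
    devs = [(s - means[k]) ** 2 for k, v in trials.items() for s in v]
    var_within = sum(devs) / len(devs)
    return var_between / (var_between + var_within)

# repeatability({"sq1": [3, 4, 3], "sq2": [8, 7, 9]}) -> ~0.92: individuals
# differ far more from each other than from themselves, i.e. personality.
```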


The social squirrel's advantage

Overall, the study found that bolder squirrels had larger core areas where they concentrated their activity. Bold, active squirrels moved faster. Also, squirrels that were bolder, more aggressive and more active had greater access to perches, such as rocks. Perch access is important because it can provide a better vantage point for seeing and evading predators. Interestingly, perch access was also associated with sociability.

Golden-mantled ground squirrels are considered an asocial species. They are relatively small, giving them little opportunity to form the tighter social bonds common in larger ground squirrels, which typically spend more time in family units while reaching maturity. However, the study said that "within this asocial species, individuals that tend to be relatively more social seem to have an advantage."

In such cases, being more social could save an individual's life. Such personality differences can influence a squirrel's ability to survive and reproduce, which could scale up to the population or community level.

Squirrels of Davis

UC Davis is home to many squirrels, which have become an honorary mascot of sorts on campus.

"The squirrels of UC Davis are something else," said Aliperti.

She means it literally. They are tree squirrels and very different from the ground squirrels Aliperti studied. Yet she says her work has changed how she views the squirrels of Davis.

"I view them more as individuals," Aliperti said. "I view them as, 'Who are you? Where are you going? What are up to?' versus on a species level."

Noticing such individuality brings a more personal angle to viewing wildlife.

"Animal personality is a hard science, but if it makes you relate to animals more, maybe people will be more interested in conserving them," said Aliperti.

Read more at Science Daily

Major branches in the tree of language reconstructed

The diversity of human languages can be likened to branches on a tree. If you're reading this in English, you're on a branch that traces back to a common ancestor with Scots, which traces back to a more distant ancestor that split off into German and Dutch. Moving further in, there's the European branch that gave rise to Germanic; Celtic; Albanian; the Slavic languages; the Romance languages like Italian and Spanish; Armenian; Baltic; and Hellenic Greek. Before this branch, some 5,000 years back in human history, there's Indo-European -- a major proto-language that split into the European branch on one side and, on the other, the Indo-Iranian ancestor of modern Persian, Nepali, Bengali, Hindi, and many more.

One of the defining goals of historical linguistics is to map the ancestry of modern languages as far back as it will go -- perhaps, some linguists hope, to a single common ancestor that would constitute the trunk of the metaphorical tree. But while many thrilling connections have been suggested based on systemic comparisons of data from most of the world's languages, much of the work, which goes back as early as the 1800s, has been prone to error. Linguists are still debating over the internal structure of such well-established families as Indo-European, and over the very existence of chronologically deeper and larger families.

To test which branches hold up under the weight of scrutiny, a team of researchers associated with the Evolution of Human Languages program is using a novel technique to comb through the data and to reconstruct major branches in the linguistic tree. In two recent papers, they examine the ~5,000-year-old Indo-European family, which has been well studied, and a more tenuous, older branch known as the Altaic macrofamily, which is thought to connect the linguistic ancestors of such distant languages as Turkish, Mongolian, Korean, and Japanese.

"The deeper you want to go back in time, the less you can rely on classic methods of language comparison to find meaningful correlates," says co-author George Starostin, an Santa Fe Institute external professor based at the Higher School of Economics in Moscow. He explains that one of the major challenges when comparing across languages is distinguishing between words that have similar sounds and meanings because they might descend from a common ancestor, from those that are similar because their cultures borrowed terms from each other in the more recent past.

"We have to get to the deepest layer of language to identify its ancestry because the outer layers, they are contaminated. They get easily corrupted by replacements and borrowings," he says.

To tap into the core layers of language, Starostin's team starts with an established list of core, universal concepts from the human experience. It includes meanings like "rock," "fire," "cloud," "two," "hand," and "human," amongst 110 total concepts. Working from this list, the researchers then use classic methods of linguistic reconstruction to come up with a number of word shapes which they then match with specific meanings from the list. The approach, dubbed "onomasiological reconstruction," notably differs from traditional approaches to comparative linguistics because it focuses on finding which words were used to express a given meaning in the proto-language, rather than on reconstructing phonetic shapes of those words and associating them with a vague cloud of meanings.
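
The published reconstructions are far more rigorous, but the flavour of such deep comparisons can be sketched. One classic device in this school of comparative linguistics is to reduce each word to coarse consonant classes, so that regular sound change doesn't hide a match. A toy illustration with a made-up class table and lowercase transcriptions:

```python
# Toy sketch of consonant-class matching across core-vocabulary lists.
# The class table is illustrative, not the researchers' own.

CLASSES = {
    "p": "P", "b": "P", "f": "P", "v": "P",   # labials
    "t": "T", "d": "T",                        # dentals
    "k": "K", "g": "K", "x": "K",              # velars
    "s": "S", "z": "S",                        # sibilants
    "m": "M", "n": "N", "r": "R", "l": "R",
}

def skeleton(word, length=2):
    """First few consonant classes of a word, ignoring vowels."""
    return "".join(CLASSES[c] for c in word if c in CLASSES)[:length]

def likely_cognates(list_a, list_b):
    """Concepts whose words share a consonant-class skeleton in both lists."""
    return [concept for concept in list_a
            if concept in list_b
            and skeleton(list_a[concept]) == skeleton(list_b[concept])]

# likely_cognates({"two": "duo"}, {"two": "twai"}) -> ["two"]: both reduce
# to the dental class "T", surviving the d/t sound shift.
```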

Their latest re-classification of the Indo-European family, which applies the onomasiological principle and was published in the journal Linguistics, confirmed well-documented genealogies in the literature. Similar research on the Eurasian Altaic language group, whose proto-language dates back an estimated 8,000 years, confirmed a positive signal of a relationship between most major branches of Altaic -- Turkic, Mongolic, Tungusic, and Japanese. However, it failed to reproduce a previously published relationship between Korean and the other languages in the Altaic grouping. This could either mean that the new criteria were too strict or (less likely) that previous groupings were incorrect.

As the researchers test and reconstruct the branches of human language, one of the ultimate goals is to understand the evolutionary paths languages follow over generations, much like evolutionary biologists do for living organisms.

Read more at Science Daily

Scientists claim that overeating is not the primary cause of obesity

Statistics from the Centers for Disease Control and Prevention (CDC) show that obesity affects more than 40% of American adults, placing them at higher risk for heart disease, stroke, type 2 diabetes, and certain types of cancer. The USDA's Dietary Guidelines for Americans 2020 -- 2025 further tell us that losing weight "requires adults to reduce the number of calories they get from foods and beverages and increase the amount expended through physical activity."

This approach to weight management is based on the century-old energy balance model, which states that weight gain is caused by consuming more energy than we expend. In today's world, surrounded by highly palatable, heavily marketed, cheap processed foods, it's easy for people to eat more calories than they need, an imbalance that is further exacerbated by today's sedentary lifestyles. By this thinking, overeating, coupled with insufficient physical activity, is driving the obesity epidemic. Yet despite decades of public health messaging exhorting people to eat less and exercise more, rates of obesity and obesity-related diseases have steadily risen.

The authors of "The Carbohydrate-Insulin Model: A Physiological Perspective on the Obesity Pandemic," a perspective published in The American Journal of Clinical Nutrition, point to fundamental flaws in the energy balance model, arguing that an alternate model, the carbohydrate-insulin model, better explains obesity and weight gain. Moreover, the carbohydrate-insulin model points the way to more effective, long-lasting weight management strategies.

According to lead author Dr. David Ludwig, Endocrinologist at Boston Children's Hospital and Professor at Harvard Medical School, the energy balance model doesn't help us understand the biological causes of weight gain: "During a growth spurt, for instance, adolescents may increase food intake by 1,000 calories a day. But does their overeating cause the growth spurt or does the growth spurt cause the adolescent to get hungry and overeat?"

In contrast to the energy balance model, the carbohydrate-insulin model makes a bold claim: overeating isn't the main cause of obesity. Instead, the carbohydrate-insulin model lays much of the blame for the current obesity epidemic on modern dietary patterns characterized by excessive consumption of foods with a high glycemic load: in particular, processed, rapidly digestible carbohydrates. These foods cause hormonal responses that fundamentally change our metabolism, driving fat storage, weight gain, and obesity.

When we eat highly processed carbohydrates, the body increases insulin secretion and suppresses glucagon secretion. This, in turn, signals fat cells to store more calories, leaving fewer calories available to fuel muscles and other metabolically active tissues. The brain perceives that the body isn't getting enough energy, which, in turn, leads to feelings of hunger. In addition, metabolism may slow down in the body's attempt to conserve fuel. Thus, we tend to remain hungry, even as we continue to gain excess fat.

To understand the obesity epidemic, we need to consider not only how much we're eating, but also how the foods we eat affect our hormones and metabolism. With its assertion that all calories are alike to the body, the energy balance model misses this critical piece of the puzzle.

While the carbohydrate-insulin model is not new -- its origins date to the early 1900s -- The American Journal of Clinical Nutrition perspective is the most comprehensive formulation of this model to date, authored by a team of 17 internationally recognized scientists, clinical researchers, and public health experts. Collectively, they have summarized the growing body of evidence in support of the carbohydrate-insulin model. Moreover, the authors have identified a series of testable hypotheses that distinguish the two models to guide future research.

Adoption of the carbohydrate-insulin model over the energy-balance model has radical implications for weight management and obesity treatment. Rather than urge people to eat less, a strategy which usually doesn't work in the long run, the carbohydrate-insulin model suggests another path that focuses more on what we eat. According to Dr. Ludwig, "reducing consumption of the rapidly digestible carbohydrates that flooded the food supply during the low-fat diet era lessens the underlying drive to store body fat. As a result, people may lose weight with less hunger and struggle."

Read more at Science Daily

Sep 13, 2021

Astronomers spot the same supernova three times — and predict a fourth sighting in 16 years

An enormous amount of gravity from a cluster of distant galaxies causes space to curve so much that light passing through it is bent and sent our way from numerous directions. This "gravitational lensing" effect has allowed University of Copenhagen astronomers to observe the same exploding star in three different places in the heavens. They predict that a fourth image of the same explosion will appear in the sky by 2037. The study, which has just been published in the journal Nature Astronomy, provides a unique opportunity to explore not just the supernova itself, but the expansion of our universe.

One of the most fascinating aspects of Einstein's famed theory of relativity is that gravity is no longer described as a force, but as a "curvature" of space itself. The curvature of space caused by heavy objects does not just cause planets to spin around stars, but can also bend the orbit of light beams.

The heaviest of all structures in the universe -- galaxy clusters made up of hundreds or thousands of galaxies -- can bend light from distant galaxies behind them so much that they appear to be in a completely different place than they actually are.

But that's not all: light can take several paths around a galaxy cluster, making it possible for us to get lucky and make two or more sightings of the same galaxy in different places in the sky using a powerful telescope.

Supernova déjà-vu

Some routes around a galaxy cluster are longer than others, and therefore take more time. Light that travels where the gravity is stronger is also slowed down further; yet another astonishing consequence of relativity. Together, these effects stagger the amount of time the light needs to reach us, and thereby the moments at which the different images appear.
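
For readers who want the quantitative version, the arrival-time difference between lensed images is conventionally written as the sum of a geometric term (the longer path) and a gravitational term (the Shapiro delay). This is textbook lensing theory, not an equation quoted from the paper:

```latex
% Standard gravitational-lensing time delay (textbook form, not from the paper):
\Delta t(\boldsymbol{\theta}) =
  \frac{1 + z_{\mathrm{l}}}{c}\,
  \frac{D_{\mathrm{l}} D_{\mathrm{s}}}{D_{\mathrm{ls}}}
  \left[ \tfrac{1}{2}\,(\boldsymbol{\theta} - \boldsymbol{\beta})^{2}
         - \psi(\boldsymbol{\theta}) \right]
```

Here z_l is the lens redshift, the D's are angular diameter distances to the lens, to the source, and between the two, theta and beta are the image and source positions, and psi is the lensing potential. Because those distances scale inversely with the Hubble constant, measuring a delay like SN-Requiem's roughly 21 years directly constrains the expansion rate -- the point taken up later in this article.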

This wondrous effect has allowed a team of astronomers at the Cosmic Dawn Center -- a basic research center run by the Niels Bohr Institute at the University of Copenhagen and DTU Space at the Technical University of Denmark -- along with their international partners, to observe a single galaxy in no less than four different places in the sky.

The observations were made using the infrared wavelength range of the Hubble Space Telescope.

By analyzing the Hubble data, researchers noted three bright light sources in a background galaxy that were evident in a previous set of observations from 2016, which disappeared when Hubble revisited the area in 2019. These three sources turned out to be several images of a single star whose life ended in a colossal explosion known as a supernova.

"A single star exploded 10 billion years ago, long before our own sun was formed. The flash of light from that explosion has just reached us," explains Associate Professor Gabriel Brammer of the Cosmic Dawn Center, who led the study with Professor Steven Rodney of the University of South Carolina.

The supernova, nicknamed "SN-Requiem," can be seen in three of the four "mirrored images" of the galaxy. Each image presents a different view of the explosive supernova's development. In the fourth image, it has not yet exploded. But, by examining how galaxies are distributed within the galaxy cluster and how these images are distorted by curved space, it is actually possible to calculate how "delayed" these images are.

This has allowed astronomers to make a remarkable prediction:

"The fourth image of the galaxy is roughly 21 years behind, which should allow us to see the supernova explode one more time, sometime around 2037," explains Gabriel Brammer.

Can teach us more about the universe

Should we get to witness the SN-Requiem explosion again in 2037, it will not only confirm our understanding of gravity, but also help to shed light on another cosmological riddle that has emerged in the last few years, namely the expansion of our universe.

We know that the universe is expanding, and different methods allow us to measure how fast. The problem is that the various measurement methods do not all produce the same result, even when measurement uncertainties are taken into account. Could our observational techniques be flawed, or -- more interestingly -- will we need to revise our understanding of fundamental physics and cosmology?

"Understanding the structure of the universe is going to be a top priority for the main earth-based observatories and international space organizations over the next decade.Studies planned for the future will cover much of the sky and are expected to reveal dozens or even hundreds of rare gravitational lenses with supernovae like SN Requiem," Brammer elaborates:

"Accurate measurements of delays from such sources provide unique and reliable determinations of cosmic expansion and can even help reveal the properties of dark matter and dark energy."

Read more at Science Daily

A recent reversal in the response of western Greenland’s ice caps to climate change

Greenland may be best known for its enormous continental-scale ice sheet that soars up to 3,000 meters above sea level, whose rapid melting is a leading contributor to global sea level rise. But surrounding this massive ice sheet, which covers 79% of the world's largest island, is Greenland's rugged coastline dotted with ice-capped mountain peaks. These peripheral glaciers and ice caps are now also undergoing severe melting due to anthropogenic (human-caused) warming. However, climate warming and the loss of these ice caps may not have always gone hand-in-hand.

New collaborative research from the Woods Hole Oceanographic Institution and five partner institutions (University of Arizona, University of Washington, Pennsylvania State University, Desert Research Institute and University of Bergen), published today in Nature Geoscience, reveals that during past periods glaciers and ice caps in coastal west Greenland experienced climate conditions much different from those in the interior of Greenland. Over the past 2,000 years, these ice caps endured periods of warming during which they grew larger rather than shrinking.

This novel study breaks down the climate history recorded in a core taken from an ice cap on Greenland's western coast. According to the study's researchers, while ice core drilling has been ongoing in Greenland since the mid-20th century, coastal ice core studies remain extremely limited, and these new findings provide a new perspective on climate change compared with what scientists previously understood from ice cores taken from the interior of the Greenland ice sheet alone.

"Glaciers and ice caps are unique high-resolution repositories of Earth's climate history, and ice core analysis allows scientists to examine how environmental changes -- like shifts in precipitation patterns and global warming -- affect rates of snowfall, melting, and in turn influence ice cap growth and retreat," said Sarah Das, Associate Scientist of Geology and Geophysics at WHOI. "Looking at differences in climate change recorded across several ice core records allows us to compare and contrast the climate history and ice response across different regions of the Arctic." However, during the course of this study, it also became clear that many of these coastal ice caps are now melting so substantially that these incredible archives are in great peril of disappearing forever.

Due to the challenging nature of studying and accessing these ice caps, this team was the first to do such work, centering their study, which began in 2015, around a core collected from the Nuussuaq Peninsula in Greenland. This single core offers insight into how coastal climate conditions and ice cap changes covaried during the last 2,000 years, thanks to the changes in chemical composition and the amount of snowfall archived year after year in the core. Through their analysis, investigators found that during periods of past warming, ice caps were growing rather than melting, contradicting what we see in the present day.

"Currently, we know Greenland's ice caps are melting due to warming, further contributing to sea level rise. But, we have yet to explore how these ice caps have changed in the past due to changes in climate," said Matthew Osman, postdoctoral research associate at the University of Arizona and a 2019 graduate of the MIT-WHOI Joint program. "The findings of this study were a surprise because we see that there is an ongoing shift in the fundamental response of these ice caps to climate: today, they're disappearing, but in the past, within small degrees of warming, they actually tended to grow."

According to Das and Osman, this phenomenon happens because of a "tug-of-war" between what causes an ice cap to grow (increased precipitation) or recede (increased melting) during periods of warming. Today, scientists observe melting rates that are outpacing the rate of annual snowfall atop ice caps. However, in past centuries these ice caps would expand due to increased levels of precipitation brought about by warmer temperatures. The difference between the past and present is the severity of modern anthropogenic warming.

The team gathered this data by drilling through an ice cap on top of one of the higher peaks of the Nuussuaq Peninsula. The entire core, about 140 meters in length, took about a week to retrieve. They then brought the meter-long pieces of core to the National Science Foundation Ice Core Facility in Denver, Colorado, where they were stored at -20 degrees Celsius. The core pieces were then analyzed layer by layer for melt features and trace chemistry at the Desert Research Institute in Reno, Nevada. By looking at different properties of the core's chemical content, such as parts per billion of lead and sulfur, investigators were able to accurately date the core by combining these measurements with a model of past glacier flow.
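
The release names a model of past glacier flow without giving it; the classic first-pass age-depth relation for a thinning ice column, the Nye model, conveys the idea. A minimal sketch with illustrative parameters chosen to roughly match this core:

```python
# Minimal sketch of the Nye age-depth model: annual layers thin linearly
# toward the bed as ice flows outward, so age grows logarithmically with
# depth. Parameters are illustrative, not the study's calibrated values.
import math

def nye_age(depth_m, ice_thickness_m=140.0, accumulation_m_per_yr=0.3):
    """Age in years at a given depth in a steady-state, thinning ice column."""
    if not 0 <= depth_m < ice_thickness_m:
        raise ValueError("depth must lie within the ice column")
    return (ice_thickness_m / accumulation_m_per_yr) * math.log(
        ice_thickness_m / (ice_thickness_m - depth_m))

# Example: nye_age(138.0) -> ~1,980 years, in line with the ~2,000-year
# record described above; dated chemistry horizons anchor the real curve.
```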

"These model estimates of ice cap flow, coupled with the actual ages that we have from this high precision chemistry, help us outline changes in ice cap growth over time. This method provides a new way of understanding past ice cap changes and how that is correlated with climate," said Das. "Because we're collecting a climate record from the coast, we're able to document for the first time that there were these large shifts in temperature, snowfall and melt over the last 2,000 years, showing much more variability than is observed in records from the interior of Greenland," Das added.

Read more at Science Daily

Transforming marine biodiversity discovery and monitoring

A new system for sampling fragments of DNA from marine organisms drifting in the ocean is set to create new opportunities for research on biodiversity and ways of supporting conservation activities.

Over the past decade biodiversity researchers have increasingly used DNA sequences extracted from environmental samples such as soil, marine and fresh water, and even air -- termed environmental DNA (eDNA) -- to identify the organisms present in a huge range of habitats.

Sequencing these tiny traces of DNA has proved to be a powerful technique for detecting elusive species that may only rarely be observed directly, or at early life stages when they are difficult to identify, revolutionising biodiversity discovery and monitoring.
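
The detection step itself is known as metabarcoding: short sequenced fragments are matched against a reference library of species "barcodes." A toy sketch with invented sequences; real pipelines use fuzzy matching against curated databases rather than exact lookups:

```python
# Toy sketch of the core eDNA metabarcoding step: assign each sequenced
# fragment ("read") to a species via a reference barcode table. All sequences
# here are made up for illustration.
from collections import Counter

REFERENCE = {
    "ACGTTGCA": "Engraulis encrasicolus (anchovy)",
    "TTGACCGT": "Sardina pilchardus (sardine)",
    "GGCATACT": "Physeter macrocephalus (sperm whale)",
}

def assign_reads(reads):
    """Count detections per species; unmatched reads are tallied separately."""
    return dict(Counter(REFERENCE.get(read, "unassigned") for read in reads))

# assign_reads(["ACGTTGCA", "ACGTTGCA", "GGCATACT", "NNNNNNNN"])
# -> {"Engraulis encrasicolus (anchovy)": 2,
#     "Physeter macrocephalus (sperm whale)": 1, "unassigned": 1}
```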

Researchers from the University of Leeds and University of Milano-Bicocca in Italy have developed an innovative new approach for collecting marine eDNA samples which promises to open up biodiversity monitoring of remote offshore ocean locations.

The team has developed a novel system for easy sampling that can be deployed from ocean-going ferries and other commercial vessels such as container ships, allowing the possibility of using the global commercial shipping fleet to help monitor marine biodiversity.

Although DNA sequencing is becoming more cost-effective every year, the biggest challenge is often collecting samples over the large geographic areas needed to scale up these new monitoring techniques to a global reach.

Sampling marine eDNA far from land usually depends on access to dedicated research vessels, which are complex and expensive to operate. These logistical constraints limit the geographic scope and frequency of surveys, impeding the expansion of large scale eDNA surveys.

The new system does not require complex equipment deployed from a ship; water is collected from the engine cooling system with simple apparatus and can be carried out by non-specialists. Since commercial vessels regularly cross remote corners of most of the world's oceans, they could provide almost limitless opportunities for sample collection to contribute to biodiversity monitoring programmes.

The team collaborated with the company Corsica-Sardinia Ferries, which supports a long-term visual survey programme for cetaceans run by ISPRA (the Italian Institute for Environmental Protection and Research; also a partner in the current study), to test the system on their route between Livorno in Tuscany and Golfo Aranci in Sardinia.

The results showed the ferry-collected samples had traces of DNA from all parts of the vertebrate ecosystem, ranging from small prey fish at the base of the food chain, such as anchovies and sardines, through small and larger predatory fish such as tuna and swordfish, all the way to dolphins, and ocean giants including fin and sperm whales.

Dr Simon Goodman, from the School of Biology at the University of Leeds, is co-lead author of the report, published in Frontiers in Marine Science.

He said: "When we first started to dig into the sequencing results I was astounded as to how well it had captured the structure of the vertebrate ecosystem.

"It's a really exciting result and highlights the power that eDNA has for revealing fine scale ecological variation."

One of the study leads, Dr Elena Valsecchi from the Department of Environmental and Earth Sciences, the University of Milano-Bicocca, said: "This innovative methodology applied to environmental DNA allows us to make a sort of CAT (computerized axial tomography) scan of the sea.

"Next we will be scanning multiple ferry routes in the Mediterranean in order to produce a high-resolution "image" on the state of biodiversity in our seas."

Overall, eDNA from 100 unique vertebrate species was detected, with species composition proving to be a good match for that known from the Mediterranean from conventional survey techniques.

In addition, the team detected fine-scale variation in species occurrence related to environmental factors; for example, the relative abundance of sequences for anchovies and sardines correlated with the different water temperatures the two species are known to prefer for spawning.

Read more at Science Daily

Acoustic illusions

When listening to music, we don't just hear the notes produced by the instruments, we are also immersed in its echoes from our surroundings. Sound waves bounce back off the walls and objects around us, forming a characteristic sound effect -- a specific acoustic field. This explains why the same piece of music sounds very different when played in an old church or a modern concrete building.

Architects have long been capitalising on this fact when building, say, concert halls. However, the principle can also be transferred to other applications: objects hidden underground can be visualised by measuring how sound waves from a known source are reflected.

Active and passive manipulation

Some scientists want to go one step further and systematically manipulate the acoustic field to achieve an effect that shouldn't exist per se, given the real-life situation. For instance, they are attempting to create an illusory audio experience that tricks the listener into believing they are in a concrete building or an old church. Alternatively, objects can be made invisible by manipulating the acoustic field in such a way that the listener no longer perceives them.

Usually, the desired illusion relies on using passive methods that involve structuring the surfaces with the help of what are known as metamaterials. One way of hiding an object acoustically is to coat its surface and stop it from reflecting any sound waves. However, this approach is inflexible and usually works only within a limited frequency range, making it unsuitable for many applications.

Active methods seek to achieve the illusion by superimposing another layer of sound waves. In other words, by adding a second signal to the initial acoustic field. However, until now the scope for using this approach has also been limited, as it works only if the initial field can be predicted with some certainty.

Real-time illusion

Now the group headed by Johan Robertsson, Professor of Applied Geophysics at ETH Zurich, has worked with scientists from the University of Edinburgh to develop a new concept that significantly improves the active illusion. Led by Theodor Becker, a postdoc in Robertsson's group, and Dirk-Jan van Manen, the senior scientist who was instrumental in designing the experiments, the researchers have managed to augment the initial field in real time, as they report in the latest issue of the journal Science Advances. As a result, they can make objects disappear and they can mimic non-existent ones.

To achieve the special acoustic effects, the researchers installed a large test facility for the project in the Centre for Immersive Wave Experimentation at the Switzerland Innovation Park Zurich in Dübendorf. Specifically, this facility allows them to mask the existence of an object measuring roughly 12 centimetres or simulate an imaginary object of equal size.

The target object is enclosed in an outer ring of microphones as control sensors and an inner ring of loudspeakers as control sources. The control sensors register which external acoustic signals reach the object from the initial field. Based on these measurements, a computer then calculates which secondary sounds the control sources must produce to achieve the desired augmentation of the initial field.

Sophisticated technology

To mask the object, the control sources emit a signal that completely obliterates the sound waves reflected off the object. By contrast, to simulate an object (also known as holography), the control sources augment the initial acoustic field as if sound waves were bouncing off an object at the centre of the two rings.

For this augmentation to work, the data measured by the control sensors must be transformed instantaneously into instructions for the control sources. To control the system, the researchers therefore use field-programmable gate arrays (FPGAs) with an extremely short response time.
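
Conceptually, the cloaking computation is active cancellation: from the measured incident field, predict what the object would scatter and drive the loudspeakers with its negation. The schematic sketch below uses a hypothetical known impulse response; the real system identifies such transfer functions experimentally and runs them on FPGAs for low latency:

```python
# Schematic sketch of active acoustic cloaking, not the team's implementation.
# The object's scattering is modelled as convolution with a known impulse
# response (hypothetical here); the control sources emit the negated
# prediction so that scattered field and anti-field cancel.

def convolve(signal, impulse_response):
    """Discrete convolution: the field the object would scatter."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def cancellation_signal(measured_incident, scattering_ir):
    """Anti-signal for the control sources: minus the predicted scattering."""
    return [-x for x in convolve(measured_incident, scattering_ir)]

# Holography (simulating an object) reuses the same machinery with the sign
# flipped: add the scattered field of a virtual object to the real field.
```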

"Our facility allows us to manipulate the acoustic field over a frequency range of more than three and a half octaves," Robertsson says. The maximum frequency for cloaking is 8,700 Hz and 5,900 Hz for simulating. To date, the researchers have been able to manipulate the acoustic field on a surface in two dimensions. As a next step, they want to increase the process to three dimensions and extend its functional range. The system currently augments airborne sound waves. However, Robertsson explains, the new process could also produce acoustic illusions under water. He envisages a vast array of potential uses in different fields, such as sensor technology, architecture and communications, as well as in the education sector.

Read more at Science Daily