Sep 5, 2020

Quantum leap for speed limit bounds

 Nature's speed limits aren't posted on road signs, but Rice University physicists have discovered a new way to deduce them that is better -- infinitely better, in some cases -- than previous methods.

"The big question is, 'How fast can anything -- information, mass, energy -- move in nature?'" said Kaden Hazzard, a theoretical quantum physicist at Rice. "It turns out that if somebody hands you a material, it is incredibly difficult, in general, to answer the question."

In a study published today in the American Physical Society journal PRX Quantum, Hazzard and Rice graduate student Zhiyuan Wang describe a new method for calculating the upper bound of speed limits in quantum matter.

"At a fundamental level, these bounds are much better than what was previously available," said Hazzard, an assistant professor of physics and astronomy and member of the Rice Center for Quantum Materials. "This method frequently produces bounds that are 10 times more accurate, and it's not unusual for them to be 100 times more accurate. In some cases, the improvement is so dramatic that we find finite speed limits where previous approaches predicted infinite ones."

Nature's ultimate speed limit is the speed of light, but in nearly all matter around us, the speed of energy and information is much slower. Frequently, it is impossible to describe this speed without accounting for the large role of quantum effects.

In the 1970s, physicists proved that information must move much slower than the speed of light in quantum materials, and though they could not compute an exact solution for the speeds, physicists Elliott Lieb and Derek Robinson pioneered mathematical methods for calculating the upper bounds of those speeds.

"The idea is that even if I can't tell you the exact top speed, can I tell you that the top speed must be less than a particular value," Hazzard said. "If I can give a 100% guarantee that the real value is less than that upper bound, that can be extremely useful."

Hazzard said physicists have long known that some of the bounds produced by the Lieb-Robinson method are "ridiculously imprecise."

"It might say that information must move less than 100 miles per hour in a material when the real speed was measured at 0.01 miles per hour," he said. "It's not wrong, but it's not very helpful."

The more accurate bounds described in the PRX Quantum paper were calculated by a method Wang created.

"We invented a new graphical tool that lets us account for the microscopic interactions in the material instead of relying only on cruder properties such as its lattice structure," Wang said.

Hazzard said Wang, a third-year graduate student, has an incredible talent for synthesizing mathematical relationships and recasting them in new terms.

"When I check his calculations, I can go step by step, churn through the calculations and see that they're valid," Hazzard said. "But to actually figure out how to get from point A to point B, what set of steps to take when there's an infinite variety of things you could try at each step, the creativity is just amazing to me."

The Wang-Hazzard method can be applied to any material made of particles moving in a discrete lattice. That includes oft-studied quantum materials like high-temperature superconductors, topological materials, heavy fermions and others. In each of these, the behavior of the materials arises from interactions of billions upon billions of particles, whose complexity is beyond direct calculation.

Hazzard said he expects the new method to be used in several ways.

"Besides the fundamental nature of this, it could be useful for understanding the performance of quantum computers, in particular in understanding how long they take to solve important problems in materials and chemistry," he said.

Hazzard said he is certain the method will also be used to develop numerical algorithms because Wang has shown it can put rigorous bounds on the errors produced by oft-used numerical techniques that approximate the behavior of large systems.

A popular technique physicists have used for more than 60 years is to approximate a large system by a small one that can be simulated by a computer.

"We draw a small box around a finite chunk, simulate that and hope that's enough to approximate the gigantic system," Hazzard said. "But there has not been a rigorous way of bounding the errors in these approximations."

The Wang-Hazzard method of calculating bounds could lead to just that.

"There is an intrinsic relationship between the error of a numerical algorithm and the speed of information propagation," Wang explained, using the sound of his voice and the walls in his room to illustrate the link.

"The finite chunk has edges, just as my room has walls. When I speak, the sound will get reflected by the wall and echo back to me. In an infinite system, there is no edge, so there is no echo."

In numerical algorithms, errors are the mathematical equivalent of echoes. They reverberate from the edges of the finite box, and the reflection undermines the algorithms' ability to simulate the infinite case. The faster information moves through the finite system, the shorter the time the algorithm faithfully represents the infinite. Hazzard said he, Wang and others in his research group are using their method to craft numerical algorithms with guaranteed error bars.
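
To make the echo analogy slightly more concrete, a generic Lieb-Robinson-style estimate (our schematic, not the group's actual error bound) says the error at the centre of a simulated box whose edge lies a distance L away grows roughly as

```latex
\epsilon(t) \;\lesssim\; C\, e^{-\left(L - v\,t\right)/\xi}
```

so the finite simulation is guaranteed to be faithful only up to times of order L/v. A tighter bound on the speed v therefore directly lengthens the window over which the error bars can be guaranteed.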

 Read more at Science Daily

Red hot meat: The wrong recipe for heart disease

 From MasterChef to MKR, the world's best chefs have taught us how to barbeque, grill and panfry a steak to perfection. But while the experts may be seeking that extra flavour, new research from the University of South Australia suggests high-heat caramelization could be bad for our health.

Conducted in partnership with Gyeongsang National University, the study found that consuming red and processed meat raised levels of a protein compound that may increase the risk of heart disease, stroke, and complications in diabetes.

UniSA researcher Dr Permal Deo says the research provides important dietary insights for people at risk of such degenerative diseases.

"When red meat is seared at high temperatures, such as grilling, roasting or frying, it creates compounds called advanced glycation end products -- or AGEs ¬- which when consumed, can accumulate in your body and interfere with normal cell functions," Dr Deo says.

"Consumption of high-AGE foods can increase our total daily AGE intake by 25 per cent, with higher levels contributing to vascular and myocardial stiffening, inflammation and oxidative stress -- all signs of degenerative disease."

Published in Nutrients, the study tested the impacts of two diets -- one high in red meat and processed grains, and the other high in whole grains, dairy, nuts and legumes, and white meat cooked by steaming, boiling, stewing and poaching.

It found that the diet high in red meat significantly increased AGE levels in the blood, suggesting it may contribute to disease progression.

Largely preventable, cardiovascular disease (CVD) is the number one cause of death globally. In Australia, it represents one in five of all deaths.

Co-researcher UniSA's Professor Peter Clifton says while there are still questions about how dietary AGEs are linked to chronic disease, this research shows that eating red meat will alter AGE levels.

"The message is pretty clear: if we want to reduce heart disease risk, we need to cut back on how much red meat we eat or be more considered about how we cook it.

"Frying, grilling and searing may be the preferred cooking methods of top chefs, but this might not be the best choice for people looking to cut their risk of disease.

"If you want to reduce your risk of excess AGEs, then slow cooked meals could be a better option for long-term health."

From Science Daily

Sep 4, 2020

Peculiar planetary system architecture around three Orion stars explained

 The discovery that our galaxy is teeming with exoplanets has also revealed the vast diversity of planetary systems out there and raised questions about the processes that shaped them. New work published in Science by an international team including Carnegie's Jaehan Bae could explain the architecture of multi-star systems in which planets are separated by wide gaps and do not orbit on the same plane as their host star's equator.

"In our Solar System, the eight planets and many other minor objects orbit in a flat plane around the Sun; but in some distant systems, planets orbit on an incline -- sometimes a very steep one," Bae explained. "Understanding the origins of extremely oblique orbital angles such as these could help reveal details about the planetary formation process."

Stars are born in nurseries of gas and dust called molecular clouds -- often forming in small groups of two or three. These young stars are surrounded by rotating disks of leftover material, which accretes to form baby planets. The disk's structure will determine the distribution of the planets that form from it, but much about this process remains unknown.

Led by University of Exeter's Stefan Kraus, the team found the first direct evidence confirming the theoretical prediction that gravitational interactions between the members of multi-star systems can warp or break their disks, resulting in misaligned rings surrounding the stellar hosts.

Over a period of 11 years, the researchers made observations of the GW Orionis triple-star system, located just over 1,300 light-years away in the Orion constellation. Their work was accomplished using the European Southern Observatory's Very Large Telescope and the Atacama Large Millimeter/submillimeter Array -- a radio telescope made up of 66 antennas.

"Our images reveal an extreme case where the disk is not flat at all, but is warped and has a misaligned ring that has broken away from the disk," Kraus said.

Their findings were tested by simulations, which demonstrated that the observed disorder in the orbits of the three stars could have caused the disk to fracture into the distinct rings.

"We predict that many planets on oblique, wide-separation orbits will be discovered in future planet imaging campaigns," said co-author Alexander Kreplin, also of the University of Exeter.

Bae concluded: "This system is a great example of how theory and observing can inform each other. I'm excited to see what we learn about this system and others like it with additional study."

Read more at Science Daily

Splitting water molecules for a renewable energy future

 The future economy based on renewable and sustainable energy sources might utilize battery-powered cars, large-scale solar and wind farms, and energy reserves stored in batteries and chemical fuels. Although there are examples of sustainable energy sources in use already, scientific and engineering breakthroughs will determine the timeline for widespread adoption.

One proposed paradigm for shifting away from fossil fuels is the hydrogen economy, in which hydrogen gas powers society's electrical needs. To mass produce hydrogen gas, some scientists are studying the process of splitting water -- two hydrogen atoms and one oxygen atom -- which would result in hydrogen fuel and breathable oxygen gas.

Feng Lin, an assistant professor of chemistry in the Virginia Tech College of Science, is focusing on energy storage and conversion research. This work is part of a new study published in the journal Nature Catalysis that addresses a key, fundamental barrier in the electrochemical water-splitting process: the Lin Lab demonstrates a new technique to reassemble, revivify, and reuse a catalyst that allows for energy-efficient water splitting. Chunguang Kuai, a former graduate student of Lin's, is first author of the study; his co-authors include Lin and chemistry graduate students Zhengrui Xu, Anyang Hu, and Zhijie Yang.

The core idea of this study goes back to a subject in general chemistry classes: catalysts. These substances increase the rate of a reaction without being consumed in the chemical process. One way a catalyst increases the reaction rate is by decreasing the amount of energy needed for the reaction to commence.

Water may seem simple as a molecule made up of just three atoms, but the process of splitting it is quite difficult -- and Lin's lab has done so. Moving even one electron from a stable atom can be energy-intensive, and this reaction requires the transfer of four electrons to produce a single molecule of oxygen gas.

"In an electrochemical cell, the four-electron transfer process will make the reaction quite sluggish, and we need to have a higher electrochemical level to make it happen," Lin said. "With a higher energy needed to split water, the long-term efficiency and catalyst stability become key challenges."

In order to meet that high energy requirement, the Lin Lab introduces a common catalyst called mixed nickel iron hydroxide (MNF) to lower the threshold. Water splitting reactions with MNF work well, but due to the high reactivity of MNF, it has a short lifespan and the catalytic performance decreases quickly.

Lin and his team discovered a new technique that allows for periodic reassembly of the catalyst to MNF's original state, thus allowing the process of splitting water to continue. (The team used fresh water in their experiments, but Lin suggests salt water -- the most abundant form of water on Earth -- could work as well.)

MNF has a long history with energy studies. When Thomas Edison tinkered with batteries more than a century ago, he also used the same nickel and iron elements in nickel hydroxide-based batteries. Edison observed the formation of oxygen gas in his nickel hydroxide experiments, which is bad for a battery, but in the case of splitting water, production of oxygen gas is the goal.

"Scientists have realized for a long time that the addition of iron into the nickel hydroxide lattice is the key for the reactivity enhancement of water splitting." Kuai said. "But under the catalytic conditions, the structure of the pre-designed MNF is highly dynamic due to the highly corrosive environment of the electrolytic solution."

During Lin's experiments, MNF degrades from a solid form into metal ions in the electrolytic solution -- a key limitation to this process. But Lin's team observed that when the electrochemical cell flips from the high, electrocatalytic potential to a low, reducing potential, just for a period of two minutes, the dissolved metal ions reassemble into the ideal MNF catalyst. This occurs due to a reversal of the pH gradient within the interface between the catalyst and the electrolytic solution.

"During the low potential for two minutes, we demonstrated we not only get nickel and iron ions deposited back into the electrode, but mixing them very well together and creating highly active catalytic sites," Lin said. "This is truly exciting, because we rebuild the catalytic materials at the atomic length scale within a few nano-meter electrochemical interface."

Another reason that the reformation works so well is that the Lin Lab synthesized novel MNF as thin sheets that are easier to reassemble than a bulk material.

Validating findings through X-rays

To corroborate these findings, Lin's team conducted synchrotron X-ray measurements at the Advanced Photon Source of Argonne National Laboratory and at Stanford Synchrotron Radiation Lightsource of SLAC National Accelerator Laboratory. These measurements use the same basic premise as the common hospital X-ray but on a much larger scale.

"We wanted to observe what had happened during this entire process," Kuai said. "We can use X-ray imaging to literally see the dissolution and redeposition of these metal irons to provide a fundamental picture of the chemical reactions."

Synchrotron facilities require a massive loop, similar in size to the Drillfield at Virginia Tech, that can perform X-ray spectroscopy and imaging at high speeds. This provides Lin with large amounts of data under the catalytic operating conditions. The study also provides insights into a range of other important electrochemical energy sciences, such as nitrogen reduction, carbon dioxide reduction, and zinc-air batteries.

"Beyond imaging, numerous X-ray spectroscopic measurements have allowed us to study how individual metal ions come together and form clusters with different chemical compositions," Lin said. "This has really opened the door for probing electrochemical reactions in real chemical reaction environments."

Read more at Science Daily

True size of prehistoric mega-shark finally revealed

 

To date only the length of the legendary giant shark Megalodon had been estimated. But now, a new study led by the University of Bristol and Swansea University has revealed the size of the rest of its body, including fins that are as large as an adult human.

There is a grim fascination in determining the size of the largest sharks, but this can be difficult for fossil forms where teeth are often all that remain.

Today, the most fearsome living shark is the Great White, at over six metres (20 feet) long, which bites with a force of two tonnes.

Its fossil relative, the big tooth shark Megalodon, star of Hollywood movies, lived from 23 to around three million years ago, was over twice the length of a Great White and had a bite force of more than ten tonnes.

The fossils of the Megalodon are mostly huge triangular cutting teeth bigger than a human hand.

Jack Cooper, who has just completed the MSc in Palaeobiology at the University of Bristol's School of Earth Sciences, and colleagues from Bristol and Swansea used a number of mathematical methods to pin down the size and proportions of this monster, by making close comparisons to a diversity of living relatives with ecological and physiological similarities to Megalodon.

The project was supervised by shark expert Dr Catalina Pimiento from Swansea University and Professor Mike Benton, a palaeontologist at Bristol. Dr Humberto Ferrón of Bristol also collaborated.

Their findings are published today in the journal Scientific Reports.

Jack Cooper said: "I have always been mad about sharks. As an undergraduate, I have worked and dived with Great whites in South Africa -- protected by a steel cage of course. It's that sense of danger, but also that sharks are such beautiful and well-adapted animals, that makes them so attractive to study.

"Megalodon was actually the very animal that inspired me to pursue palaeontology in the first place at just six years old, so I was over the moon to get a chance to study it.

"This was my dream project. But to study the whole animal is difficult considering that all we really have are lots of isolated teeth."

Previously the fossil shark, known formally as Otodus megalodon, was only compared with the Great White. Jack and his colleagues, for the first time, expanded this analysis to include five modern sharks.

Dr Pimiento said: "Megalodon is not a direct ancestor of the Great White but is equally related to other macropredatory sharks such as the Makos, Salmon shark and Porbeagle shark, as well as the Great white. We pooled detailed measurements of all five to make predictions about Megalodon."

Professor Benton added: "Before we could do anything, we had to test whether these five modern sharks changed proportions as they grew up. If, for example, they had been like humans, where babies have big heads and short legs, we would have had some difficulties in projecting the adult proportions for such a huge extinct shark.

"But we were surprised, and relieved, to discover that in fact that the babies of all these modern predatory sharks start out as little adults, and they don't change in proportion as they get larger."

Jack Cooper said: "This means we could simply take the growth curves of the five modern forms and project the overall shape as they get larger and larger -- right up to a body length of 16 metres."

The results suggest that a 16-metre-long Otodus megalodon likely had a head around 4.65 metres long, a dorsal fin approximately 1.62 metres tall and a tail around 3.85 metres high.

This means an adult human could stand on the back of this shark and would be about the same height as the dorsal fin.
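
Because the proportions stay fixed through growth, the projection itself is simple isometric scaling. The sketch below uses invented proportions for a hypothetical 5-metre modern analogue, chosen only to illustrate the arithmetic; the study pooled real measurements from five species:

```python
# Hypothetical illustration of isometric scaling up to a 16 m Otodus megalodon.
# The modern-shark part sizes below are invented for the example.

MODERN_LENGTH_M = 5.0
MODERN_PARTS_M = {
    "head": 1.45,
    "dorsal_fin": 0.50,
    "tail_height": 1.20,
}

def project(target_length_m):
    """Scale each body part by the ratio of total body lengths (isometry)."""
    scale = target_length_m / MODERN_LENGTH_M
    return {part: round(size * scale, 2) for part, size in MODERN_PARTS_M.items()}

print(project(16.0))  # {'head': 4.64, 'dorsal_fin': 1.6, 'tail_height': 3.84}
```

With these made-up proportions the projected values land near the published figures, but the real analysis combined growth curves from all five modern species.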

Read more at Science Daily

Has Earth's oxygen rusted the Moon for billions of years?

To the surprise of many planetary scientists, the oxidized iron mineral hematite has been discovered at high latitudes on the Moon, according to a study published today in Science Advances led by Shuai Li, assistant researcher at the Hawai'i Institute of Geophysics and Planetology (HIGP) in the UH Mānoa School of Ocean and Earth Science and Technology (SOEST).

Iron is highly reactive with oxygen -- forming reddish rust commonly seen on Earth. The lunar surface and interior, however, are virtually devoid of oxygen, so pristine metallic iron is prevalent on the Moon and highly oxidized iron has not been confirmed in samples returned from the Apollo missions. In addition, hydrogen in solar wind blasts the lunar surface, which acts in opposition to oxidation. So, the presence of highly oxidized iron-bearing minerals, such as hematite, on the Moon is an unexpected discovery.

"Our hypothesis is that lunar hematite is formed through oxidation of lunar surface iron by the oxygen from the Earth's upper atmosphere that has been continuously blown to the lunar surface by solar wind when the Moon is in Earth's magnetotail during the past several billion years," said Li.

To make this discovery, Li, HIGP professor Paul Lucey and co-authors from NASA's Jet Propulsion Laboratory (JPL) and elsewhere analyzed the hyperspectral reflectance data acquired by the Moon Mineralogy Mapper (M3) designed by NASA JPL onboard India's Chandrayaan-1 mission.

This new research was inspired by Li's previous discovery of water ice in the Moon's polar regions in 2018.

"When I examined the M3 data at the polar regions, I found some spectral features and patterns are different from those we see at the lower latitudes or the Apollo samples," said Li. "I was curious whether it is possible that there are water-rock reactions on the Moon. After months investigation, I figured out I was seeing the signature of hematite."

The team found that the locations where hematite is present are strongly correlated with the water content at high latitudes that Li and others found previously, and are more concentrated on the lunar nearside, which always faces the Earth.

"More hematite on the lunar nearside suggested that it may be related to Earth," said Li. "This reminded me a discovery by the Japanese Kaguya mission that oxygen from the Earth's upper atmosphere can be blown to the lunar surface by solar wind when the Moon is in the Earth's magnetotail. So, Earth's atmospheric oxygen could be the major oxidant to produce hematite. Water and interplanetary dust impact may also have played critical roles"

"Interestingly, hematite is not absolutely absent from the far-side of the Moon where Earth's oxygen may have never reached, although much fewer exposures were seen," said Li. "The tiny amount of water (< ~0.1 wt.%) observed at lunar high latitudes may have been substantially involved in the hematite formation process on the lunar far-side, which has important implications for interpreting the observed hematite on some water poor S-type asteroids."

"This discovery will reshape our knowledge about the Moon's polar regions," said Li. "Earth may have played an important role on the evolution of the Moon's surface."

Read more at Science Daily

Sep 3, 2020

New mathematical method shows how climate change led to fall of ancient civilization

 A Rochester Institute of Technology researcher developed a mathematical method that shows climate change likely caused the rise and fall of an ancient civilization. In an article recently featured in the journal Chaos: An Interdisciplinary Journal of Nonlinear Science, Nishant Malik, assistant professor in RIT's School of Mathematical Sciences, outlined the new technique he developed and showed how shifting monsoon patterns led to the demise of the Indus Valley Civilization, a Bronze Age civilization contemporary to Mesopotamia and ancient Egypt.

Malik developed a method to study paleoclimate time series, sets of data that tell us about past climates using indirect observations. For example, by measuring the presence of a particular isotope in stalagmites from a cave in South Asia, scientists were able to develop a record of monsoon rainfall in the region for the past 5,700 years. But as Malik notes, studying paleoclimate time series poses several problems that make it challenging to analyze them with mathematical tools typically used to understand climate.

"Usually the data we get when analyzing paleoclimate is a short time series with noise and uncertainty in it," said Malik. "As far as mathematics and climate is concerned, the tool we use very often in understanding climate and weather is dynamical systems. But dynamical systems theory is harder to apply to paleoclimate data. This new method can find transitions in the most challenging time series, including paleoclimate, which are short, have some amount of uncertainty and have noise in them."

There are several theories about why the Indus Valley Civilization declined -- including invasion by nomadic Indo-Aryans and earthquakes -- but climate change appears to be the most likely scenario. Until Malik applied his hybrid approach -- rooted in dynamical systems but also drawing on methods from machine learning and information theory -- there was no mathematical proof. His analysis showed there was a major shift in monsoon patterns just before the dawn of this civilization and that the pattern reversed course right before it declined, indicating it was in fact climate change that caused the fall.

Malik said he hopes the method will allow scientists to develop more automated ways of finding transitions in paleoclimate data and lead to additional important historical discoveries. The full text of the study is published in Chaos: An Interdisciplinary Journal of Nonlinear Science.

From Science Daily

An unexpected origin story for a lopsided black hole merger

 A lopsided merger of two black holes may have an oddball origin story, according to a new study by researchers at MIT and elsewhere.

The merger was first detected on April 12, 2019 as a gravitational wave that arrived at the detectors of both LIGO (the Laser Interferometer Gravitational-wave Observatory), and its Italian counterpart, Virgo. Scientists labeled the signal as GW190412 and determined that it emanated from a clash between two David-and-Goliath black holes, one three times more massive than the other. The signal marked the first detection of a merger between two black holes of very different sizes.

Now the new study, published today in the journal Physical Review Letters, shows that this lopsided merger may have originated through a very different process compared to how most mergers, or binaries, are thought to form.

It's likely that the more massive of the two black holes was itself a product of a prior merger between two parent black holes. The Goliath that spun out of that first collision may have then ricocheted around a densely packed "nuclear cluster" before merging with the second, smaller black hole -- a raucous event that sent gravitational waves rippling across space.

GW190412 may then be a second generation, or "hierarchical" merger, standing apart from other first-generation mergers that LIGO and Virgo have so far detected.

"This event is an oddball the universe has thrown at us -- it was something we didn't see coming," says study coauthor Salvatore Vitale, an assistant professor of physics at MIT and a LIGO member. "But nothing happens just once in the universe. And something like this, though rare, we will see again, and we'll be able to say more about the universe."

Vitale's coauthors are Davide Gerosa of the University of Birmingham and Emanuele Berti of Johns Hopkins University.

A struggle to explain

There are two main ways in which black hole mergers are thought to form. The first is known as a common envelope process, where two neighboring stars, after billions of years, explode to form two neighboring black holes that eventually share a common envelope, or disk of gas. After another few billion years, the black holes spiral in and merge.

"You can think of this like a couple being together all their lives," Vitale says. "This process is suspected to happen in the disc of galaxies like our own."

The other common path by which black hole mergers form is via dynamical interactions. Imagine, in place of a monogamous environment, a galactic rave, where thousands of black holes are crammed into a small, dense region of the universe. When two black holes start to partner up, a third may knock the couple apart in a dynamical interaction that can repeat many times over, before a pair of black holes finally merges.

In both the common envelope process and the dynamical interaction scenario, the merging black holes should have roughly the same mass, unlike the lopsided mass ratio of GW190412. They should also have relatively little spin, whereas GW190412 has a surprisingly high spin.

"The bottom line is, both these scenarios, which people traditionally think are ideal nurseries for black hole binaries in the universe, struggle to explain the mass ratio and spin of this event," Vitale says.

Black hole tracker

In their new paper, the researchers used two models to show that it is very unlikely that GW190412 came from either a common envelope process or a dynamical interaction.

They first modeled the evolution of a typical galaxy using STAR TRACK, a simulation that tracks galaxies over billions of years, starting with the coalescing of gas and proceeding to the way stars take shape and explode, and then collapse into black holes that eventually merge. The second model simulates random, dynamical encounters in globular clusters -- dense concentrations of stars around most galaxies.

The team ran both simulations multiple times, tuning the parameters and studying the properties of the black hole mergers that emerged. For those mergers that formed through a common envelope process, a merger like GW190412 was very rare, cropping up only after a few million events. Dynamical interactions were slightly more likely to produce such an event, after a few thousand mergers.

However, GW190412 was detected by LIGO and Virgo after only 50 other detections, suggesting that it likely arose through some other process.

"No matter what we do, we cannot easily produce this event in these more common formation channels," Vitale says.

The process of hierarchical merging may better explain GW190412's lopsided mass ratio and its high spin. If one black hole was a product of a previous pairing of two parent black holes of similar mass, it would itself be more massive than either parent, and later significantly overshadow its first-generation partner, creating a high mass ratio in the final merger.

A hierarchical process could also generate a merger with a high spin: The parent black holes, in their chaotic merging, would spin up the resulting black hole, which would then carry this spin into its own ultimate collision.

"You do the math, and it turns out the leftover black hole would have a spin which is very close to the total spin of this merger," Vitale explains.

No escape

If GW190412 indeed formed through hierarchical merging, Vitale says the event could also shed light on the environment in which it formed. The team found that if the larger of the two black holes formed from a previous collision, that collision likely generated a huge amount of energy that not only spun out a new black hole, but kicked it across some distance.

"If it's kicked too hard, it would just leave the cluster and go into the empty interstellar medium, and not be able to merge again," Vitale says.

If the object was able to merge again (in this case, to produce GW190412), it would mean the kick that it received was not enough to escape the stellar cluster in which it formed. If GW190412 indeed is a product of hierarchical merging, the team calculated that it would have occurred in an environment with an escape velocity higher than 150 kilometers per second. For perspective, the escape velocity of most globular clusters is about 50 kilometers per second.
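
For a sense of scale, the escape velocity follows from the usual Newtonian formula; the cluster mass and radius plugged in below are illustrative round numbers of our own choosing, not values from the paper:

```latex
v_{\rm esc} = \sqrt{\frac{2GM}{R}}
\;\approx\; 160\ \mathrm{km\,s^{-1}}
\left(\frac{M}{3\times 10^{6}\,M_\odot}\right)^{1/2}
\left(\frac{R}{1\ \mathrm{pc}}\right)^{-1/2}
```

A nuclear cluster packing a few million solar masses within a parsec therefore clears the 150-kilometer-per-second threshold, while a typical globular cluster falls well short.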

This means that whatever environment GW190412 arose from had an immense gravitational pull, and the team believes that such an environment could have been either the disk of gas around a supermassive black hole, or a "nuclear cluster" -- an incredibly dense region of the universe, packed with tens of millions of stars.

Read more at Science Daily

Obesity may alter immune system response to COVID-19

 Obesity may cause a hyperactive immune system response to COVID-19 infection that makes it difficult to fight off the virus, according to a new manuscript published in the Endocrine Society's journal, Endocrinology.

Obesity not only leads to problems like heart disease and diabetes, but also influences the immune system in many ways. Obesity causes a chronic, low grade activation of some parts of the immune system. When someone with this preexisting condition is faced with an infection, this could lead to hyper-activation of the immune system, but in a detrimental way that does not fight infection.

"The COVID-19 pandemic has made us aware of the complex interactions of obesity with infectious diseases, and the gaps in our understanding of how chronic health conditions affect our immune responses to acute infection," said the study's corresponding author, Durga Singer, M.D., of the University of Michigan in Ann Arbor, Mich. "Recent evidence has highlighted how one part of the immune system, the macrophage, may be a culprit in driving severe COVID-19 disease. Our manuscript focuses on what is already known about the interaction of obesity, macrophages and other infections like influenza. These findings highlight the importance of understanding how obesity might interact with new drugs or vaccines that are developed for COVID-19."

In this review, the authors describe the impact of obesity on the immune system. They discuss the irregular immune responses caused by obesity that drive organ injury in severe COVID-19 infection and impair a person's ability to fight the virus.

The other authors of the study are Gabrielle P. Huizinga and Benjamin H. Singer of the University of Michigan.

The manuscript received funding from the National Institute of Diabetes and Digestive and Kidney Diseases and the National Institute of Neurological Disorders and Stroke.

From Science Daily

Study reveals lactose tolerance happened quickly in Europe

 The ability for humans to digest milk as adults has altered our dietary habits and societies for centuries. But when and how that ability -- known as lactase persistence or lactose tolerance -- occurred and became established is up for debate. By testing the genetic material from the bones of people who died during a Bronze Age battle around 1,200 BC, an international team of scientists including Krishna Veeramah, PhD, of Stony Brook University, suggest that lactase persistence spread throughout Central Europe in only a few thousand years, an extremely fast transformation compared to most evolutionary changes seen in humans. Their findings are published in Current Biology.

Despite the prominence of milk drinking in Europe and North America today, approximately two-thirds of the world's population remains lactose intolerant. Generally, no mammal digests milk as an adult, which is why, for example, people should not give milk to adult cats or dogs. However, a subset of humans have a genetic mutation that enables the enzyme lactase to digest the lactose sugar found in milk throughout an individual's lifetime. Many of these people are from Central or Northern Europe.

The battle occurred on the banks of the Tollense, a river in present-day Germany, and is the most significant that we know about from Bronze Age Europe, probably involving about 4,000 warriors, almost a quarter of whom died during the fighting. Despite the remains being more than three thousand years old, the researchers were able to sequence DNA from some of the bone fragments recovered from the battle site.

Veeramah, Associate Professor in the Department of Ecology and Evolution in the College of Arts and Sciences, led part of the research that involved analyzing how the overall genetic ancestry of the battlefield population compared to other modern and ancient populations, and then compared the frequency of the lactase-persistent allele to other modern and ancient populations, particularly medieval European populations.

The research team, led by Joachim Burger and colleagues at Johannes Gutenberg University Mainz (JGU), found that despite the battle occurring more than 4,000 years after the introduction of agriculture in Europe -- which in part would have involved the consumption of dairy from early cattle, goats and sheep domesticates -- only one in eight of the warriors had a genetic variant that enabled them to break down lactose.

"When we look at other European genetic data from the early Medieval period less than 2,000 years later, we find that more than 60 percent of individuals had the ability to drink milk as adults, close to what we observe in modern Central European countries, which ranges from 70 to 90 percent" said Veeramah. "This is actually an incredibly fast rate of change for the gene that controls milk digestion. It appears that by simply possessing this one genetic change, past European individuals with the ability to digest lactose had a six percent greater chance of producing children than those who could not. This is the strongest evidence we have for positive natural selection in humans."

Joachim Burger of JGU, lead author on the study, added that there is still no definitive answer to the question: Why did being able to digest the sugar in milk after infancy provide such a big evolutionary advantage?

Read more at Science Daily

Sep 2, 2020

Researchers predict location of novel candidate for mysterious dark energy

 Astronomers have known for two decades that the expansion of the universe is accelerating, but the physics of this expansion remains a mystery. Now, a team of researchers at the University of Hawaiʻi at Mānoa have made a novel prediction -- the dark energy responsible for this accelerating growth comes from a vast sea of compact objects spread throughout the voids between galaxies. This conclusion is part of a new study published in The Astrophysical Journal.

In the mid-1960s, physicists first suggested that stellar collapse should not form true black holes, but should instead form Generic Objects of Dark Energy (GEODEs). Unlike black holes, GEODEs do not 'break' Einstein's equations with singularities. Instead, a spinning layer surrounds a core of dark energy. Viewed from the outside, GEODEs and black holes appear mostly the same, even when the "sounds" of their collisions are measured by gravitational wave observatories.

Because GEODEs mimic black holes, it was assumed they moved through space the same way as black holes. "This becomes a problem if you want to explain the accelerating expansion of the universe," said UH Mānoa Department of Physics and Astronomy research fellow Kevin Croker, lead author of the study. "Even though we proved last year that GEODEs, in principle, could provide the necessary dark energy, you need lots of old and massive GEODEs. If they moved like black holes, staying close to visible matter, galaxies like our own Milky Way would have been disrupted."

Croker collaborated with UH Mānoa Department of Physics and Astronomy graduate student Jack Runburg, and Duncan Farrah, a faculty member at the UH Institute for Astronomy and the Physics and Astronomy department, to investigate how GEODEs move through space. The researchers found that the spinning layer around each GEODE determines how they move relative to each other. If their outer layers spin slowly, GEODEs clump more rapidly than black holes. This is because GEODEs gain mass from the growth of the universe itself. For GEODEs with layers that spin near the speed of light, however, the gain in mass becomes dominated by a different effect and the GEODEs begin to repel each other. "The dependence on spin was really quite unexpected," said Farrah. "If confirmed by observation, it would be an entirely new class of phenomenon."

The team solved Einstein's equations under the assumption that many of the oldest stars, which were born when the universe was less than 2 percent of its current age, formed GEODEs when they died. As these ancient GEODEs fed on other stars and abundant interstellar gas, they began to spin very rapidly. Once spinning quickly enough, the GEODEs' mutual repulsion caused most of them to 'socially distance' into regions that would eventually become the empty voids between present-day galaxies.

This study supports the position that GEODEs can solve the dark energy problem while remaining in harmony with different observations across vast distances. GEODEs stay away from present-day galaxies, so they do not disrupt delicate star pairs counted within the Milky Way. The number of ancient GEODEs required to solve the dark energy problem is consistent with the number of ancient stars. GEODEs do not disrupt the measured distribution of galaxies in space because they separate away from luminous matter before it forms present-day galaxies. Finally, GEODEs do not directly affect the gentle ripples in the afterglow of the Big Bang, because they are born from dead stars hundreds of millions of years after the release of this cosmic background radiation.

The researchers were cautiously optimistic about their results. "It was thought that, without a direct detection of something different than a Kerr [Black Hole] signature from LIGO-Virgo [gravitational wave observatories], you'd never be able to tell that GEODEs existed," said Farrah. Croker added, "but now that we have a clearer understanding of how Einstein's equations link big and small, we've been able to make contact with data from many communities, and a coherent picture is beginning to form."

Read more at Science Daily

Zooming in on dark matter

 Cosmologists have zoomed in on the smallest clumps of dark matter in a virtual universe -- which could help us to find the real thing in space.

An international team of researchers, including Durham University, UK, used supercomputers in Europe and China to focus on a typical region of a computer-generated universe.

The zoom they were able to achieve is the equivalent of being able to see a flea on the surface of the Moon.

This allowed them to make detailed pictures and analyses of hundreds of virtual dark matter clumps (or haloes) from the very largest to the tiniest.

Dark matter particles can collide with dark matter anti-particles near the centre of haloes where, according to some theories, they are converted into a burst of energetic gamma-ray radiation.

Their findings, published in the journal Nature, could mean that these very small haloes could be identified in future observations by the radiation they are thought to give out.

Co-author Professor Carlos Frenk, Ogden Professor of Fundamental Physics at the Institute for Computational Cosmology, at Durham University, UK, said: "By zooming in on these relatively tiny dark matter haloes we can calculate the amount of radiation expected to come from different sized haloes.

"Most of this radiation would be emitted by dark matter haloes too small to contain stars and future gamma-ray observatories might be able to detect these emissions, making these small objects individually or collectively 'visible'.

"This would confirm the hypothesised nature of the dark matter, which may not be entirely dark after all."

Most of the matter in the universe is dark (apart from the gamma radiation it may emit in exceptional circumstances) and completely different in nature from the matter that makes up stars, planets and people.

The universe is made of approximately 27 per cent dark matter with the rest largely consisting of the equally mysterious dark energy. Normal matter, such as planets and stars, makes up a relatively small five per cent of the universe.

Galaxies formed and grew when gas cooled and condensed at the centre of enormous clumps of this dark matter -- so-called dark matter haloes.

Astronomers can infer the structure of large dark matter haloes from the properties of the galaxies and gas within them.

The biggest haloes contain huge collections of hundreds of bright galaxies, called galaxy clusters, weighing 1,000 trillion times more than our Sun.

However, scientists have no direct information about smaller dark matter haloes that are too tiny to contain a galaxy. These can only be studied by simulating the evolution of the Universe in a large supercomputer.

The smallest are thought to have the same mass as the Earth according to current popular scientific theories about dark matter that underlie the new research.

The simulations were carried out using the Cosmology Machine supercomputer, part of the DiRAC High-Performance Computing facility in Durham, funded by the Science and Technology Facilities Council (STFC), and computers at the Chinese Academy of Sciences.

By zooming in on the virtual universe in such microscopic detail, the researchers were able to study the structure of dark matter haloes ranging in mass from that of the Earth to that of a big galaxy cluster.

Surprisingly, they found that haloes of all sizes have a very similar internal structure and are extremely dense at the centre, becoming increasingly spread out, with smaller clumps orbiting in their outer regions.

The researchers said that without a scale for reference, it was almost impossible to tell an image of a dark matter halo of a massive galaxy from one of a halo with a mass a fraction of the Sun's.

Co-author Professor Simon White, of the Max Planck Institute of Astrophysics, Germany, said: "We expect that small dark matter haloes would be extremely numerous, containing a substantial fraction of all the dark matter in the universe, but they would remain mostly dark throughout cosmic history because stars and galaxies grow only in haloes more than a million times as massive as the Sun.

"Our research sheds light on these small haloes as we seek to learn more about what dark matter is and the role it plays in the evolution of the universe."

Read more at Science Daily

Keeping the beat: It's all in your brain

 How do people coordinate their actions with the sounds they hear? This basic ability, which allows people to cross the street safely while hearing oncoming traffic, dance to new music or perform team events such as rowing, has puzzled cognitive neuroscientists for years. A new study led by researchers at McGill University is shining a light on how auditory perception and motor processes work together.

Keeping the beat -- it takes more than just moving or listening well

In a recent paper in the Journal of Cognitive Neuroscience, the researchers, led by Caroline Palmer, a professor in McGill's Department of Psychology, were able to identify neural markers of musicians' beat perception. Surprisingly, these markers did not correspond to the musicians' ability to either hear or produce a beat -- only to their ability to synchronize with it.

"The authors, as performing musicians, are familiar with musical situations in which one performer is not correctly aligned in time with fellow performers -- so we were interested in exploring how musician's brains respond to rhythms. It could be that some people are better musicians because they listen differently or it could be that they move their bodies differently," explains Palmer, the Canada Research Chair in Cognitive Neuroscience of Performance, and the senior author on the paper.

"We found that the answer was a match between the pulsing or oscillations in the brain rhythms and the pulsing of the musical rhythm -- it's not just listening or movement. It's a linking of the brain rhythm to the auditory rhythm."

Super-synchronizers -- an exception or a learnable skill?

The researchers used electroencephalography (EEG), which involves placing electrodes on the scalp to detect electrical activity in the brain, to measure brain activity as participants in the experiment, all of them experienced musicians, synchronized their tapping with a range of musical rhythms they were hearing. By doing so, they were able to identify neural markers of musicians' beat perception that corresponded to their ability to synchronize well.
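
One standard way to quantify how well a brain rhythm locks onto a musical pulse is a phase-locking value computed from the two signals; the sketch below is a generic illustration of that idea, not the specific analysis reported in the paper:

```python
# Generic phase-locking value (PLV) between a simulated "EEG" rhythm and a beat.
# Sampling rate, frequencies and noise level are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 250                               # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)           # ten seconds of data
beat = np.sin(2 * np.pi * 2.0 * t)     # a 2 Hz musical pulse
eeg = np.sin(2 * np.pi * 2.0 * t + 0.3) + 0.5 * np.random.randn(t.size)

phase_diff = np.angle(hilbert(eeg)) - np.angle(hilbert(beat))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))  # 1 = perfect locking, 0 = none
print(f"phase-locking value: {plv:.2f}")
```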

"We were surprised that even highly trained musicians sometimes showed reduced ability to synchronize with complex rhythms, and that this was reflected in their EEGs," said co-first authors Brian Mathias and Anna Zamm, both PhD students in the Palmer lab. "Most musicians are good synchronizers; nonetheless, this signal was sensitive enough to distinguish the "good" from the "better" or "super-synchronizers," as we sometimes call them."

It's not clear whether anyone can become a super-synchronizer, but according to Palmer, the lead researcher, it may be possible to improve one's ability to synchronize.

"The range of musicians we sampled suggests that the answer would be yes. And the fact that only 2-3 % of the population are 'beat deaf' is also encouraging. Practice definitely improves your ability and improves the alignment of the brain rhythms with the musical rhythms. But whether everyone is going to be as good as a drummer is not clear."

From Science Daily

Inflammation linked to Alzheimer's disease development

 Alzheimer's disease is a neurodegenerative condition that is characterized by the buildup of clumps of beta-amyloid protein in the brain. Exactly what causes these clumps, known as plaques, and what role they play in disease progression is an active area of research important for developing prevention and treatment strategies.

Recent studies have found that beta-amyloid has antiviral and antimicrobial properties, suggesting a possible link between the immune response against infections and the development of Alzheimer's disease.

Chemical biologists at the Sloan Kettering Institute have now discovered clear evidence of this link: A protein called IFITM3 that is involved in the immune response to pathogens also plays a key role in the accumulation of beta-amyloid in plaques.

"We've known that the immune system plays a role in Alzheimer's disease -- for example, it helps to clean up beta-amyloid plaques in the brain," says Yue-Ming Li, a chemical biologist at SKI. "But this is the first direct evidence that immune response contributes to the production of beta-amyloid plaques -- the defining feature of Alzheimer's disease."

In a paper published September 2 in Nature, Dr. Li and his team show that IFITM3 alters the activity of an enzyme called gamma-secretase, which chops up precursor proteins into the fragments of beta-amyloid that make up plaques.

They found that removing IFITM3 decreased the activity of the gamma-secretase enzyme and, as a result, reduced the number of amyloid plaques that formed in a mouse model of the disease.

Mounting Evidence for a New Hypothesis

Neuroinflammation, or inflammation in the brain, has emerged as an important line of inquiry in Alzheimer's disease research. Markers of inflammation, such as certain immune molecules called cytokines, are boosted in Alzheimer's disease mouse models and in the brains of people with Alzheimer's disease. Dr. Li's study is the first to provide a direct link between this inflammation and plaque development -- by way of IFITM3.

Scientists know that the production of IFITM3 starts in response to activation of the immune system by invading viruses and bacteria. These observations, combined with the new findings from Dr. Li's lab that IFITM3 directly contributes to plaque formation, suggest that viral and bacterial infections could increase the risk of Alzheimer's disease development. Indeed, Dr. Li and his colleagues found that the level of IFITM3 in human brain samples correlated with levels of certain viral infections as well as with gamma-secretase activity and beta-amyloid production.

Age is the number one risk factor for Alzheimer's, and the levels of both inflammatory markers and IFITM3 increased with advancing age in mice, the researchers found.

They also discovered that IFITM3 is increased in a subset of late onset Alzheimer's patients, meaning that IFITM3 could potentially be used as a biomarker to identify a subset of patients who might benefit from therapies targeted against IFITM3.

Read more at Science Daily

A 'bang' in LIGO and Virgo detectors signals most massive gravitational-wave source yet

 For all its vast emptiness, the universe is humming with activity in the form of gravitational waves. Produced by extreme astrophysical phenomena, these reverberations ripple forth and shake the fabric of space-time, like the clang of a cosmic bell.

Now researchers have detected a signal from what may be the most massive black hole merger yet observed in gravitational waves. The product of the merger is the first clear detection of an "intermediate-mass" black hole, with a mass between 100 and 1,000 times that of the sun.

They detected the signal, which they have labeled GW190521, on May 21, 2019, with the National Science Foundation's Laser Interferometer Gravitational-wave Observatory (LIGO), a pair of identical, 4-kilometer-long interferometers in the United States; and Virgo, a 3-kilometer-long detector in Italy.

The signal, resembling about four short wiggles, is extremely brief in duration, lasting less than one-tenth of a second. From what the researchers can tell, GW190521 was generated by a source that is roughly 5 gigaparsecs away, when the universe was about half its age, making it one of the most distant gravitational-wave sources detected so far.

As for what produced this signal, based on a powerful suite of state-of-the-art computational and modeling tools, scientists think that GW190521 was most likely generated by a binary black hole merger with unusual properties.

Almost every confirmed gravitational-wave signal to date has been from a binary merger, either between two black holes or two neutron stars. This newest merger appears to be the most massive yet, involving two inspiraling black holes with masses about 85 and 66 times the mass of the sun.

The LIGO-Virgo team has also measured each black hole's spin and discovered that as the black holes were circling ever closer together, they could have been spinning about their own axes, at angles that were out of alignment with the axis of their orbit. The black holes' misaligned spins likely caused their orbits to wobble, or "precess," as the two Goliaths spiraled toward each other.

The new signal likely represents the instant that the two black holes merged. The merger created an even more massive black hole, of about 142 solar masses, and released an enormous amount of energy, equivalent to around 8 solar masses, spread across the universe in the form of gravitational waves.
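
The energy figure is straightforward mass-energy bookkeeping (the component, final and radiated masses quoted are independent estimates, so they need not add up exactly):

```latex
E_{\rm rad} = \left(m_1 + m_2 - M_{\rm final}\right) c^2
\;\approx\; 8\,M_\odot c^2
\;\approx\; 1.4\times 10^{48}\ \mathrm{J}
```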

"This doesn't look much like a chirp, which is what we typically detect," says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO's first detection of gravitational waves in 2015. "This is more like something that goes 'bang,' and it's the most massive signal LIGO and Virgo have seen."

The international team of scientists, who make up the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration, have reported their findings in two papers published today. One, appearing in Physical Review Letters, details the discovery, and the other, in The Astrophysical Journal Letters, discusses the signal's physical properties and astrophysical implications.

"LIGO once again surprises us not just with the detection of black holes in sizes that are difficult to explain, but doing it using techniques that were not designed specifically for stellar mergers," says Pedro Marronetti, program director for gravitational physics at the National Science Foundation. "This is of tremendous importance since it showcases the instrument's ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected."

In the mass gap

The uniquely large masses of the two inspiraling black holes, as well as the final black hole, raise a slew of questions regarding their formation.

All of the black holes observed to date fit into one of two categories: stellar-mass black holes, which measure from a few solar masses up to tens of solar masses and are thought to form when massive stars die; or supermassive black holes, such as the one at the center of the Milky Way galaxy, which range from hundreds of thousands to billions of times the mass of our sun.

However, the final 142-solar-mass black hole produced by the GW190521 merger lies within an intermediate mass range between stellar-mass and supermassive black holes -- the first of its kind ever detected.

The two progenitor black holes that produced the final black hole also seem to be unique in their size. They're so massive that scientists suspect one or both of them may not have formed from a collapsing star, as most stellar-mass black holes do.

According to the physics of stellar evolution, outward pressure from the photons and gas in a star's core supports it against the force of gravity pushing inward, so that the star is stable, like the sun. After the core of a massive star fuses nuclei as heavy as iron, it can no longer produce enough pressure to support the outer layers. When this outward pressure is less than gravity, the star collapses under its own weight in an explosion called a core-collapse supernova, which can leave behind a black hole.

This process can explain how stars as massive as 130 solar masses can produce black holes that are up to 65 solar masses. But for heavier stars, a phenomenon known as "pair instability" is thought to kick in. When the core's photons become extremely energetic, they can morph into an electron and antielectron pair. These pairs generate less pressure than photons, causing the star to become unstable against gravitational collapse, and the resulting explosion is strong enough to leave nothing behind. Even more massive stars, above 200 solar masses, would eventually collapse directly into a black hole of at least 120 solar masses. A collapsing star, then, should not be able to produce a black hole between approximately 65 and 120 solar masses -- a range that is known as the "pair instability mass gap."

But now, the heavier of the two black holes that produced the GW190521 signal, at 85 solar masses, is the first so far detected within the pair instability mass gap.

"The fact that we're seeing a black hole in this mass gap will make a lot of astrophysicists scratch their heads and try to figure out how these black holes were made," says Christensen, who is the director of the Artemis Laboratory at the Nice Observatory in France.

One possibility, which the researchers consider in their second paper, is of a hierarchical merger, in which the two progenitor black holes themselves may have formed from the merging of two smaller black holes, before migrating together and eventually merging.

"This event opens more questions than it provides answers," says LIGO member Alan Weinstein, professor of physics at Caltech. "From the perspective of discovery and physics, it's a very exciting thing."

"Something unexpected"

There are many remaining questions regarding GW190521.

As LIGO and Virgo detectors listen for gravitational waves passing through Earth, automated searches comb through the incoming data for interesting signals. These searches can use two different methods: algorithms that pick out specific wave patterns in the data that may have been produced by compact binary systems; and more general "burst" searches, which essentially look for anything out of the ordinary.

LIGO member Salvatore Vitale, assistant professor of physics at MIT, likens compact binary searches to "passing a comb through data, that will catch things in a certain spacing," in contrast to burst searches that are more of a "catch-all" approach.

In the case of GW190521, it was a burst search that picked up the signal slightly more clearly, leaving open a very small chance that the gravitational waves arose from something other than a binary merger.

"The bar for asserting we've discovered something new is very high," Weinstein says. "So we typically apply Occam's razor: The simpler solution is the better one, which in this case is a binary black hole."

But what if something entirely new produced these gravitational waves? It's a tantalizing prospect, and in their paper the scientists briefly consider other sources in the universe that might have produced the signal they detected. For instance, perhaps the gravitational waves were emitted by a collapsing star in our galaxy. The signal could also be from a cosmic string produced just after the universe inflated in its earliest moments -- although neither of these exotic possibilities matches the data as well as a binary merger.

Read more at Science Daily

Sep 1, 2020

How to weigh a dinosaur

 How do you weigh a long-extinct dinosaur? A couple of ways, as it turns out, neither of which involves actual weighing -- but according to a new study, the different approaches still yield strikingly similar results.

New research published September 1 in the journal Biological Reviews involved a review of dinosaur body mass estimation techniques carried out over more than a century.

The findings should give us some confidence that we are building an accurate picture of these prehistoric animals, says study leader Dr. Nicolás Campione -- particularly for the more massive dinosaurs that have no correlates in the modern world.

"Body size, in particular body mass, determines almost at all aspects of an animal's life, including their diet, reproduction, and locomotion," said Dr. Campione, a member of the University of New England's Palaeoscience Research Centre.

"If we know that we have a good estimate of a dinosaur's body mass, then we have a firm foundation from which to study and understand their life retrospectively."

Estimating the mass of a dinosaur like the emblematic Tyrannosaurus rex is no small feat -- it is a creature that took its last breath some 66 million years ago and, for the most part, only its bones remain today. It is a challenge that has taxed the ingenuity of palaeobiologists for more than a century. Scientific estimates of the mass of the biggest land predator of all time have differed substantially, ranging from about three tonnes to over 18 tonnes.

The research team led by Dr. Campione compiled and reviewed an extensive database of dinosaur body mass estimates reaching back to 1905, to assess whether different approaches for calculating dinosaur mass were clarifying or complicating the science.

Although a range of different methods for estimating body mass have been tried over the years, they all come down to two fundamental approaches. Scientists have either measured and scaled bones in living animals, such as the circumference of the arm (humerus) and leg (femur) bones, and compared them to dinosaurs; or they have calculated the volume of three-dimensional reconstructions that approximate what the animal may have looked like in real life. Debate over which method is 'better' has raged in the literature.
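
As an illustration of the first, bone-scaling approach, body mass is typically estimated from an allometric regression of mass against limb-bone circumference measured in living animals. The sketch below shows only the form of that calculation; the coefficients and the example measurements are placeholders invented for illustration, not the values fitted in the study, so the printed number has no scientific meaning.

```python
import math

# Hypothetical regression "fit" to living land animals of known body mass:
#   log10(mass in kg) = SLOPE * log10(humerus + femur circumference in mm) + INTERCEPT
# These coefficients are placeholders, not the study's values.
SLOPE = 2.75
INTERCEPT = -4.53

def estimate_mass_kg(humerus_circ_mm: float, femur_circ_mm: float) -> float:
    """Scale a fossil's combined limb-bone circumference to a body-mass estimate."""
    combined = humerus_circ_mm + femur_circ_mm
    return 10 ** (SLOPE * math.log10(combined) + INTERCEPT)

# Made-up limb circumferences, purely illustrative.
mass = estimate_mass_kg(humerus_circ_mm=534, femur_circ_mm=580)
print(f"estimated body mass: ~{mass / 1000:.1f} tonnes")
```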

The researchers found that once scaling and reconstruction methods are compared en masse, most estimates agree. Apparent differences are the exception, not the rule.

"In fact, the two approaches are more complementary than antagonistic," Dr. Campione said.

The bone scaling method, which relies on relationships obtained directly from living animals of known body mass, provides a measure of accuracy, but often of low precision; whereas reconstructions that consider the whole skeleton provide precision, but of unknown accuracy. This is because reconstructions depend on our own subjective ideas about what extinct animals looked like, which have changed appreciably over time.

"There will always be uncertainty around our understanding of long-extinct animals, and their weight is always going to be a source of it," said Dr. David Evans, Temerty Chair of Vertebrate Palaeontology at the Royal Ontario Museum in Toronto, senior author on the new paper. "Our new study suggests we are getting better at weighing dinosaurs, and it paves the way for more realistic dinosaur body mass estimation in the future."

The researchers recommend that future work seeking to estimate the sizes of Mesozoic dinosaurs, and other extinct animals, should better integrate the scaling and reconstruction approaches to reap their benefits.

Drs. Campione and Evans suggest that an adult T. rex would have weighed approximately seven tonnes -- an estimate that is consistent across reconstruction and limb bone scaling approaches alike. But the research emphasizes the limitations of such single values and the importance of incorporating uncertainty in mass estimates, not least because dinosaurs, like humans, did not come in one neat package. Such uncertainties suggest a minimum average weight of five tonnes and a maximum average weight of 10 tonnes for the 'king' of dinosaurs.

Read more at Science Daily

Being a selfish jerk doesn't get you ahead: Study

 The evidence is in: Nice guys and gals don't finish last, and being a selfish jerk doesn't get you ahead.

That's the clear conclusion from research that tracked disagreeable people from college or graduate school to where they landed in their careers about 14 years later.

"I was surprised by the consistency of the findings. No matter the individual or the context, disagreeableness did not give people an advantage in the competition for power -- even in more cutthroat, 'dog-eat-dog' organizational cultures," said Berkeley Haas Prof. Cameron Anderson, who co-authored the study with Berkeley Psychology Prof. Oliver P. John, doctoral student Daron L. Sharps, and Assoc. Prof. Christopher J. Soto of Colby College.

The paper was published August 31 in the Proceedings of the National Academy of Sciences.

The researchers conducted two studies of people who had completed personality assessments as undergraduates or MBA students at three universities. They surveyed the same people more than a decade later, asking about their power and rank in their workplaces, as well as the culture of their organizations. They also asked their co-workers to rate the study participants' rank and workplace behavior. Across the board, they found those with selfish, deceitful, and aggressive personality traits were not more likely to have attained power than those who were generous, trustworthy, and generally nice.

That's not to say that jerks don't reach positions of power. It's just that they didn't get ahead faster than others, and being a jerk simply didn't help, Anderson said. That's because any power boost they get from being intimidating is offset by their poor interpersonal relationships, the researchers found. In contrast, the researchers found that extroverts were the most likely to have advanced in their organizations, based on their sociability, energy, and assertiveness -- backing up prior research.

"The bad news here is that organizations do place disagreeable individuals in charge just as often as agreeable people," Anderson said. "In other words, they allow jerks to gain power at the same rate as anyone else, even though jerks in power can do serious damage to the organization."

The age-old question of whether being aggressively Machiavellian helps people get ahead has long interested Anderson, who studies social status. It's a critical question for managers, because ample research has shown that jerks in positions of power are abusive, prioritize their own self-interest, create corrupt cultures, and ultimately cause their organizations to fail. They also serve as toxic role models for society at large.

For example, people who read former Apple CEO Steve Jobs' biography might think, "Maybe if I become an even bigger asshole I'll be successful like Steve," the authors note in their paper. "My advice to managers would be to pay attention to agreeableness as an important qualification for positions of power and leadership," Anderson said. "Prior research is clear: agreeable people in power produce better outcomes."

While there's clearly no shortage of jerks in power, there's been little empirical research to settle the question of whether being disagreeable actually helped them get there, or is simply incidental to their success. Anderson and his co-authors set out to create a research design that would clear up the debate.

What defines a jerk? The participants had all completed the Big Five Inventory (BFI), an assessment based on general consensus among psychologists of the five fundamental personality dimensions: openness to experience, conscientiousness, extraversion, neuroticism, and agreeableness. It was developed by Anderson's co-author John, who directs the Berkeley Personality Lab. In addition, some of the participants also completed a second personality assessment, the NEO Personality Inventory-Revised (NEO PI-R).

"Disagreeableness is a relatively stable aspect of personality that involves the tendency to behave in quarrelsome, cold, callous, and selfish ways," the researchers explained. ." ..Disagreeable people tend to be hostile and abusive to others, deceive and manipulate others for their own gain, and ignore others' concerns or welfare."

In the first study, which involved 457 participants, the researchers found no relationship between power and disagreeableness, no matter whether the person had scored high or low on those traits. That was true regardless of gender, race or ethnicity, industry, or the cultural norms in the organization.

The second study went deeper, looking at the four main ways people attain power: through dominant-aggressive behavior, or using fear and intimidation; political behavior, or building alliances with influential people; communal behavior, or helping others; and competent behavior, or being good at one's job. They also asked the subjects' co-workers to rate their place in the hierarchy, as well as their workplace behavior (interestingly, the co-workers' ratings largely matched the subjects' self-assessments).

This allowed the researchers to better understand why disagreeable people do not get ahead faster than others. Even though jerks tend to engage in dominant behavior, their lack of communal behavior cancels out any advantage their aggressiveness gives them, they concluded.

Read more at Science Daily

Nature conservation policy rarely changes people's behavior

 It is a well-known problem: too rarely do nature conservation initiatives, recommendations or strategies announced by politicians lead to people really changing their everyday behaviour. A German-Israeli research team led by the Helmholtz Centre for Environmental Research (UFZ) and the German Centre for Integrative Biodiversity Research (iDiv) has investigated the reasons for this. The measures proposed by politicians do not sufficiently exploit the range of possible behavioural interventions and too rarely specify the actual target groups, the team writes in the journal Conservation Biology.

The protection of pollinating insects is a major issue in international nature conservation policy. Spurred by scientific findings on the steep population losses of insect groups such as bees and butterflies, which, for example, affect pollination services in agriculture, Europe is putting insect protection at the forefront of environmental policy. Many governments in Europe have presented national strategies to ensure that pollinator populations are maintained. A team of researchers from UFZ, iDiv and Technion -- Israel Institute of Technology analysed the eight available national strategy papers for protecting pollinators in terms of behavioural change interventions. The result: "Nature conservation policies to preserve pollinators are often too ineffective in this respect and change little in people's behaviour," says first author and environmental psychologist Dr. Melissa Marselle, who is conducting research at the UFZ and iDiv on the impact of biodiversity on human health.

The scientists coded around 610 behavioural measures in the strategy papers. Using the "Behaviour Change Wheel" theory, which originates from health psychology and integrates 19 different behavioural models, they categorized the measures for pollinator conservation into nine different types of interventions -- that is, approaches that could change people's behaviour. According to this, the largest share of the behavioural measures for pollinator conservation (23 percent) can be assigned to the interventions of education and awareness raising, followed by structural measures such as planting hedges, sowing flower strips in fields or creating green spaces in the city (19 percent). Only around four percent of the behavioural measures can be summarized under the intervention of modelling, for example peer-to-peer learning or the use of best-practice examples from farmers who work in exemplary fashion. Other rarely mentioned behavioural interventions for pollinator conservation were incentive systems for farmers or municipalities (three percent) and statutory regulations (two percent). Interventions that create a financial cost to discourage a certain behaviour, such as additional taxes on the use of pesticides, did not appear in any of the policy papers for pollinator conservation.

"This shows that national biodiversity strategies focus primarily on educational and structural measures and neglect other effective instruments," says Melissa Marselle. "Educational measures to impart knowledge and to create understanding are important. But relying on education alone is not very effective if you really want to change environmental behaviour. It would be more effective to link it to a wider range of other measures." For example, clearly identifying supply chains and producer principles on labels can encourage many people to buy an organic or pollinator-friendly products -- even at a higher price. Stronger financial incentives for farmers who operate sustainably would also be effective, and the certification of sustainable buildings could be linked to the use of pollinator-friendly plants as flower beds. Taxes and additional costs for consumers also ensure rapid changes in behaviour: In the UK, for example, a compulsory levy on the purchase of plastic bags has led to a decline in their use.

A further shortcoming of the strategy papers is that in 41 percent of the behavioural measures for pollinator conservation, the target groups whose behaviour needs to change were not named or specified. The objectives are often well described, but mostly revolve around the question of how certain actions change the environment. It often remains unclear at whom the actions are directed and who should implement them: the public, farmers or local authorities? It could be more effective to first consider, with the help of behavioural researchers, what the different actors can do, and then, building on that, to design measures to achieve certain goals.

Read more at Science Daily

Face shield or face mask to stop the spread of COVID-19?

 If the United States Centers for Disease Control and Prevention (CDC) guidelines aren't enough to convince you that face shields alone shouldn't be used to stop the spread of COVID-19, then maybe a new visualization study will.

To increase public awareness about the effectiveness of face shields alone as well as face masks with exhalation valves, researchers from Florida Atlantic University's College of Engineering and Computer Science used qualitative visualizations to test how face shields and masks with valves perform in impeding the spread of aerosol-sized droplets. Widespread public use of these alternatives to regular masks could potentially have an adverse effect on mitigation efforts.

For the study, just published in the journal Physics of Fluids, researchers employed flow visualization in a laboratory setting, using a laser light sheet and a mixture of distilled water and glycerin to generate the synthetic fog that made up the content of a cough-jet. They visualized droplets expelled from a mannequin's mouth while simulating coughing and sneezing. By fitting the mannequin with a plastic face shield or an N95-rated face mask with a valve, they were able to map out the paths of droplets and demonstrate how each performed.

Results of the study show that although face shields block the initial forward motion of the jet, the expelled droplets move around the visor with relative ease and spread out over a large area depending on light ambient disturbances. Visualizations for the face mask equipped with an exhalation port indicate that a large number of droplets pass through the exhale valve unfiltered, which significantly reduces its effectiveness as a means of source control.

"From this latest study, we were able to observe that face shields are able to block the initial forward motion of the exhaled jet, however, aerosolized droplets expelled with the jet are able to move around the visor with relative ease," said Manhar Dhanak, Ph.D., department chair, professor, and director of SeaTech, who co-authored the paper with Siddhartha Verma, Ph.D., lead author and an assistant professor; and John Frankenfeld, a technical professional, all within FAU's Department of Ocean and Mechanical Engineering. "Over time, these droplets can disperse over a wide area in both lateral and longitudinal directions, albeit with decreasing droplet concentration."

To demonstrate the performance of the face shield, researchers used a horizontal laser sheet in addition to a vertical laser sheet revealing how the droplets cross the horizontal plane. Not only did the researchers observe forward spread of the droplets, they found that droplets also spread in the reverse direction. Notably, face shields impede forward motion of the exhaled droplets to some extent, and masks with valves do so to an even lesser extent. However, once released into the environment, the aerosol-sized droplets get dispersed widely depending on light ambient disturbances.

Like the N95-rated face mask used in this study, other types of masks, including certain commercially available cloth-based masks, come equipped with one or two exhalation ports located on the sides of the mask. With the valved N95 mask used in this study, a small amount of exhaled droplets escaped through the gap between the top of the mask and the bridge of the nose. More importantly, the exhalation port significantly reduced the mask's effectiveness as a means of source control, as a large number of droplets passed through the valve unfiltered and unhindered.

"There is an increasing trend of people substituting regular cloth or surgical masks with clear plastic face shields as well as using masks that are equipped with exhalation valves," said Verma. "A driving factor for this increased adoption is better comfort compared to regular masks. However, face shields have noticeable gaps along the bottom and the sides, and masks with exhalation ports include a one-way valve which restricts airflow when breathing in, but allows free outflow of air. The inhaled air gets filtered through the mask material, but the exhaled breath passes through the valve unfiltered."

The researchers say that the key takeaway from this latest study illustrates that face shields and masks with exhale valves may not be as effective as regular face masks in restricting the spread of aerosolized droplets. Despite the increased comfort that these alternatives offer, they say it may be preferable to use well-constructed, high quality cloth or surgical masks that are of a plain design, instead of face shields and masks equipped with exhale valves. Widespread public adoption of the alternatives, in lieu of regular masks, could potentially have an adverse effect on ongoing mitigation efforts against COVID-19.

Read more at Science Daily

Aug 31, 2020

Sea level rise from ice sheets tracks worst-case climate change scenario

 Ice sheets in Greenland and Antarctica, whose melting rates are rapidly increasing, have raised the global sea level by 1.8cm since the 1990s, and are matching the Intergovernmental Panel on Climate Change's worst-case climate warming scenarios.

According to a new study from the University of Leeds and the Danish Meteorological Institute, if these rates continue, the ice sheets are expected to raise sea levels by a further 17cm and expose an additional 16 million people to annual coastal flooding by the end of the century.

Since the ice sheets were first monitored by satellite in the 1990s, melting from Antarctica has pushed global sea levels up by 7.2mm, while Greenland has contributed 10.6mm. And the latest measurements show that the world's oceans are now rising by 4mm each year.
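
Those figures can be checked with one line of arithmetic; the sketch below simply adds the two quoted contributions, using only the numbers in this article rather than the underlying IMBIE data.

```python
antarctica_mm = 7.2    # sea level contribution from Antarctica since the 1990s
greenland_mm = 10.6    # sea level contribution from Greenland since the 1990s

total_mm = antarctica_mm + greenland_mm
print(f"combined ice-sheet contribution: {total_mm:.1f} mm (~{total_mm / 10:.1f} cm)")
```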

"Although we anticipated the ice sheets would lose increasing amounts of ice in response to the warming of the oceans and atmosphere, the rate at which they are melting has accelerated faster than we could have imagined," said Dr Tom Slater, lead author of the study and climate researcher at the Centre for Polar Observation and Modelling at the University of Leeds.

"The melting is overtaking the climate models we use to guide us, and we are in danger of being unprepared for the risks posed by sea level rise."

The results are published today in a study in the journal Nature Climate Change. It compares the latest results from satellite surveys from the Ice Sheet Mass Balance Intercomparison Exercise (IMBIE) with calculations from climate models. The authors warn that the ice sheets are losing ice at a rate predicted by the worst-case climate warming scenarios in the last large IPCC report.

Dr Anna Hogg, study co-author and climate researcher in the School of Earth and Environment at Leeds, said: "If ice sheet losses continue to track our worst-case climate warming scenarios we should expect an additional 17cm of sea level rise from the ice sheets alone. That's enough to double the frequency of storm-surge flooding in many of the world's largest coastal cities."

So far, global sea levels have increased for the most part through a mechanism called thermal expansion, meaning that the volume of seawater expands as it gets warmer. But in the last five years, ice melt from the ice sheets and mountain glaciers has overtaken thermal expansion as the main cause of rising sea levels.

Read more at Science Daily

Study finds missing link in the evolutionary history of carbon-fixing protein Rubisco

 A team led by researchers at the University of California, Davis, has discovered a missing link in the evolution of photosynthesis and carbon fixation. Dating back more than 2.4 billion years, a newly discovered form of the plant enzyme rubisco could give new insight into plant evolution and breeding.

Rubisco is the most abundant enzyme on the planet. Present in plants, cyanobacteria (also known as blue-green algae) and other photosynthetic organisms, it's central to the process of carbon fixation and is one of Earth's oldest carbon-fixing enzymes.

"It's the primary driver for producing food, so it can take CO2 from the atmosphere and fix that into sugar for plants and other photosynthetic organisms to use. It's the primary driving enzyme for feeding carbon into life that way," said Doug Banda, a postdoctoral scholar in the lab of Patrick Shih, assistant professor of plant biology in the UC Davis College of Biological Sciences.

Form I rubisco evolved over 2.4 billion years ago before the Great Oxygenation Event, when cyanobacteria transformed the Earth's atmosphere by producing oxygen through photosynthesis. Rubisco's ties to this ancient event make it important to scientists studying the evolution of life.

In a study appearing Aug. 31 in Nature Plants, Banda and researchers from UC Davis, UC Berkeley and the Lawrence Berkeley National Laboratory report the discovery of a previously unknown relative of form I rubisco, one that they suspect diverged from form I rubisco prior to the evolution of cyanobacteria.

The new version, called form I-prime rubisco, was found through genome sequencing of environmental samples and synthesized in the lab. Form I-prime rubisco gives researchers new insights into the structural evolution of form I rubisco, potentially providing clues as to how this enzyme changed the planet.

An invisible world

Form I rubisco is responsible for the vast majority of carbon fixation on Earth. But other forms of rubisco exist in bacteria and in the group of microorganisms called Archaea. These rubisco variants come in different shapes and sizes, and some even lack small subunits. Yet they still function.

"Something intrinsic to understanding how form I rubisco evolved is knowing how the small subunit evolved," said Shih. "It's the only form of rubisco, that we know of, that makes this kind of octameric assembly of large subunits."

Study co-author Professor Jill Banfield, of UC Berkeley's earth and planetary sciences department, uncovered the new rubisco variant after performing metagenomic analyses on groundwater samples. Metagenomic analyses allow researchers to examine genes and genetic sequences from the environment without culturing microorganisms.

"We know almost nothing about what sort of microbial life exists in the world around us, and so the vast majority of diversity has been invisible," said Banfield. "The sequences that we handed to Patrick's lab actually come from organisms that were not represented in any databases."

Banda and Shih successfully expressed form I-prime rubisco in the lab using E. coli and studied its molecular structure.

Form I rubisco is built from eight core large molecular subunits with eight small subunits perched on top and bottom. Each piece of the structure is important to photosynthesis and carbon fixation. Like form I rubisco, form I-prime rubisco is built from eight large subunits. However, it does not possess the small subunits previously thought essential.

"The discovery of an octameric rubisco that forms without small subunits allows us to ask evolutionary questions about what life would've looked like without the functionality imparted by small subunits," said Banda. "Specifically, we found that form I-prime enzymes had to evolve fortified interactions in the absence of small subunits, which enabled structural stability in a time when Earth's atmosphere was rapidly changing."

According to the researchers, form I-prime rubisco represents a missing link in evolutionary history. Since form I rubisco converts inorganic carbon into plant biomass, further research on its structure and functionality could lead to innovations in agriculture production.

Read more at Science Daily

Can a black hole fire up the cold heart of the Phoenix Galaxy Cluster?

 Radio astronomers have detected jets of hot gas blasted out by a black hole in the galaxy at the heart of the Phoenix Galaxy Cluster, located 5.9 billion light-years away in the constellation Phoenix. This is an important result for understanding the coevolution of galaxies, gas, and black holes in galaxy clusters.

Galaxies are not distributed randomly in space. Through mutual gravitational attraction, galaxies gather together to form collections known as clusters. The space between galaxies is not entirely empty. There is very dilute gas throughout a cluster which can be detected by X-ray observations.

If this intra-cluster gas cooled, it would condense under its own gravity to form stars at the center of the cluster. However, cooled gas and stars are not usually observed in the hearts of nearby clusters, indicating that some mechanism must be heating the intra-cluster gas and preventing star formation. One potential candidate for the heat source is jets of high-speed gas accelerated by a super-massive black hole in the central galaxy.

The Phoenix Cluster is unusual in that it does show signs of dense cooled gas and massive star formation around the central galaxy. This raises the question, "does the central galaxy have black hole jets as well?"

A team led by Takaya Akahori at the National Astronomical Observatory of Japan used the Australia Telescope Compact Array (ATCA) to search for black hole jets in the Phoenix Galaxy Cluster with the highest resolution to date. They detected matching structures extending out from opposite sides of the central galaxy. Comparison with observations of the region from the Chandra X-ray Observatory archive shows that the structures detected by ATCA correspond to cavities of less dense gas, indicating that they are a pair of bipolar jets emitted by a black hole in the galaxy. The team has therefore discovered the first example in the distant Universe in which intra-cluster gas cooling and black hole jets coexist.

Further details of the galaxy and jets could be elucidated through higher-resolution observations with next generation observational facilities, such as the Square Kilometre Array scheduled to start observations in the late 2020s.

From Science Daily

Astrophysics: A direct view of star/disk interactions

 A team including researchers from the Institute for Astrophysics of the University of Cologne has for the first time directly observed the columns of matter that build up newborn stars. The observations were made in the system of the young star TW Hydrae, located approximately 163 light-years from Earth. This result was obtained with the Very Large Telescope Interferometer (VLTI) and its GRAVITY instrument of the European Southern Observatory (ESO) in Chile. The article 'A measure of the size of the magnetospheric accretion region in TW Hydrae' has been published in a recent issue of Nature.

The formation of stars in the Galaxy involves processes in which primordial matter such as the gas and dust present in giant molecular clouds is rapidly aggregated via gravity to form a protostar. This 'accretion' of gas occurs through the disk that forms around the newborn star and represents the major mechanism supplying material to the growing central baby star. These so-called protoplanetary disks are one of the key ingredients for explaining the formation of the very diverse exoplanets that are now routinely discovered orbiting our closest stellar neighbours.

Based on theoretical and observational evidence, many scenarios have been hypothesized to describe the mechanism of interaction between the star and its parent circumstellar disk, such as the funnelling and accretion of hot gas onto the central star along the local magnetic field. But until now, this could never be directly observed and proven with any telescope. The main reason is that the level of detail in the image -- astronomers speak of angular resolution -- needed to observe what happens very close to the star was simply out of reach. For comparison, detecting these events would be like discerning a one-cubic-meter box on the surface of the Moon. With a conventional telescope, this is not possible. However, with an interferometer like the VLTI in Chile and its GRAVITY instrument, which delivers unprecedented angular resolution in the infrared, such a precise observation has now become possible. An interferometer collects and combines the light from several telescopes a few hundred meters apart, providing the same level of resolution as a hypothetical giant telescope of comparable diameter.
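
The scale of that comparison is easy to reproduce. The sketch below uses assumed, typical values (a near-infrared wavelength of about 2.2 micrometers and a VLTI baseline of order 100 meters, neither taken from the paper) to compare the interferometer's diffraction limit with the angle subtended by a one-meter box on the Moon.

```python
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000   # radians -> milliarcseconds

wavelength_m = 2.2e-6      # assumed near-infrared (K band) observing wavelength
baseline_m = 130.0         # assumed maximum VLTI baseline, of order 100 m

moon_distance_m = 3.84e8   # mean Earth-Moon distance
box_size_m = 1.0           # the "one-cubic-meter box" of the analogy

theta_vlti = wavelength_m / baseline_m * RAD_TO_MAS     # ~ lambda / B diffraction limit
theta_box = box_size_m / moon_distance_m * RAD_TO_MAS   # angle subtended by the box

print(f"VLTI diffraction limit: ~{theta_vlti:.1f} milliarcsec")
print(f"1 m box on the Moon:    ~{theta_box:.1f} milliarcsec")
# Both land in the milliarcsecond regime, far beyond a single conventional telescope.
```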

With the contribution of members of Cologne's Institute for Astrophysics, astrophysicists from several European institutions exploited the GRAVITY instrument at the VLTI to probe the closest regions around the young solar analog TW Hydrae, which is thought to be the most representative example of what our Sun may have looked like at the time of its formation, some 4.6 billion years ago. By measuring very precisely the typical angular size of the innermost gaseous regions -- using a particular infrared atomic transition of the hot hydrogen gas -- the scientists were able to directly prove that the hot gas emission was indeed the result of magnetospheric accretion taking place very close to the stellar surface. 'This is an important milestone in our attempt to confirm the mechanisms at work in the field of star formation,' said Professor Lucas Labadie, co-author of the paper. 'We now want to extend such exploration to other young stars of different nature to understand how the evolution of the circumstellar disk, the birthplace of planets, goes.'

Read more at Science Daily

How antibiotics interact

 It is usually difficult to predict how well drugs will work when they are combined. Sometimes, two antibiotics increase their effect and inhibit the growth of bacteria more efficiently than expected. In other cases, the combined effect is weaker. Since there are many different ways of combining drugs -- such as antibiotics -- it is important to be able to predict the effect of these drug combinations. A new study has found that it is often possible to predict the outcome of combining certain antibiotics by quantitatively characterizing how the individual antibiotics work. That is the result of a joint study by Professor Tobias Bollenbach at the University of Cologne with Professor Gasper Tkacik and the doctoral researcher Bor Kavcic at the Institute of Science and Technology Austria. The paper 'Mechanisms of drug interactions between translation-inhibiting antibiotics' has been published in Nature Communications.

'We wanted to find out how antibiotics that inhibit protein synthesis in bacteria work when combined with each other, and predict these effects as far as possible, using mathematical models,' Bollenbach explained. As head of the research group 'Biological Physics and Systems Biology' at the University of Cologne, he explores how cells respond to drug combinations and other signals.

Bacterial ribosomes translate the genetic sequence of genes, delivered as messenger RNA, step by step into the amino acid sequence of proteins (translation). Many antibiotics target this process and inhibit translation. Different antibiotics specifically block different steps of the translation cycle. The scientists found that the interactions between the antibiotics are often caused by bottlenecks in the translation cycle. For example, antibiotics that inhibit the beginning and the middle of the translation cycle have much weaker effects when combined.

In order to clarify the underlying mechanisms of drug interactions, the scientists created artificial translation bottlenecks that genetically mimic the effect of specific antibiotics. If such a bottleneck is located in the middle of the translation cycle, a traffic jam of ribosomes forms, which dissolves upon introducing another bottleneck at the beginning of the translation cycle. Using a combination of theoretical models from statistical physics and experiments, the scientists showed that this effect explains the drug interaction between antibiotics that block these translation steps.
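
The traffic-jam picture lends itself to a toy simulation. The sketch below is a minimal exclusion-process model inspired by this description, not the statistical-physics model used in the paper: ribosomes hop along a lattice of codons and cannot overtake one another, a slow site in the middle mimics an elongation-blocking antibiotic, and throttling initiation mimics a second antibiotic acting at the start of the cycle. All rates, the lattice length, and the helper `simulate` are illustrative.

```python
import random

def simulate(length=100, steps=1_000_000, init_rate=1.0, exit_rate=1.0,
             slow_site=None, slow_rate=0.1, seed=0):
    """Average occupancy per codon for a toy ribosome exclusion process."""
    rng = random.Random(seed)
    lattice = [0] * length                  # 1 = codon occupied by a ribosome
    occupancy = [0.0] * length
    samples = 0
    for step in range(steps):
        site = rng.randrange(-1, length)    # -1 means "attempt initiation"
        if site == -1:
            if lattice[0] == 0 and rng.random() < init_rate:
                lattice[0] = 1              # a new ribosome loads onto the first codon
        elif lattice[site] == 1:
            rate = slow_rate if site == slow_site else 1.0
            if site == length - 1:
                if rng.random() < exit_rate:
                    lattice[site] = 0       # termination: ribosome leaves the lattice
            elif lattice[site + 1] == 0 and rng.random() < rate:
                lattice[site], lattice[site + 1] = 0, 1   # hop to the next codon
        if step > steps // 2 and step % 100 == 0:
            samples += 1                    # sample the density after a burn-in period
            for i, occ in enumerate(lattice):
                occupancy[i] += occ
    return [o / samples for o in occupancy]

# A slow site mid-lattice (an "elongation bottleneck") causes a ribosome traffic jam...
jammed = simulate(slow_site=50)
# ...which largely dissolves when initiation is also throttled (a second bottleneck
# at the start of the cycle, mimicking an initiation-inhibiting antibiotic).
relieved = simulate(slow_site=50, init_rate=0.02)

print("mean occupancy upstream of the slow site:")
print(f"  elongation bottleneck only:    {sum(jammed[:50]) / 50:.2f}")
print(f"  plus an initiation bottleneck: {sum(relieved[:50]) / 50:.2f}")
```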

Tobias Bollenbach concluded: 'A quantitative understanding of the effect of individual antibiotics allows us to predict the effect of antibiotic combinations without having to test all possible combinations by trial and error. This finding is important because the same approach can be applied to other drugs, enabling the development of new, particularly effective drug combinations in the long term.'

From Science Daily