Apr 24, 2021

The wave beneath their wings

It's a common sight: pelicans gliding along the waves, right by the shore. These birds make this kind of surfing look effortless, but the physics that gives them their boost is far from simple.

Researchers at the University of California San Diego have developed a theoretical model that describes how the ocean, the wind and birds in flight interact, presented in a recent paper in Movement Ecology.

UC San Diego mechanical engineering Ph.D. student Ian Stokes and adviser Professor Drew Lucas, of UC San Diego's Department of Mechanical and Aerospace Engineering and Scripps Institution of Oceanography, found that pelicans can completely offset the energy they expend in flight by exploiting wind updrafts generated by waves through what is known as wave-slope soaring. In short, by practicing this behavior, seabirds take advantage of winds generated by breaking waves to stay aloft.
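The energy balance behind wave-slope soaring can be sketched in a few lines: a gliding bird fully offsets its cost of flight when the updraft's vertical wind speed matches or exceeds its still-air sink rate. This is a minimal illustration of the general soaring principle, not the authors' model, and the numbers are placeholders rather than values from the study.

```python
# Illustrative energy-balance sketch (not the authors' model): a gliding
# bird fully offsets its cost of flight when the vertical wind speed of
# the wave-generated updraft meets or exceeds its still-air sink rate.

def can_soar(updraft_speed_ms, sink_rate_ms):
    """Return True if the updraft fully offsets the bird's sink rate."""
    return updraft_speed_ms >= sink_rate_ms

# Hypothetical values for illustration only.
pelican_sink_rate = 0.5  # m/s, assumed still-air sink rate
wave_updraft = 0.6       # m/s, assumed updraft over a breaking wave

print(can_soar(wave_updraft, pelican_sink_rate))  # True: flight cost offset
```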

The model could be used to develop better algorithms to control drones that need to fly over water for long periods of time, the researchers said. Potential uses do not stop there.

"There's a community of biologists and ornithologists that studies the metabolic cost of flight in birds that can use this and see how their research connects to our estimates from theory. Likewise, our model generates a basic prediction for the winds generated by passing swell, which is important to physicists that study how the ocean and atmosphere interact in order to improve weather forecasting," Stokes said.

"This is an interesting project because it shows how the waves are actually moving the air around, making wind. If you're a savvy bird, you can optimize how you move to track waves and to take advantage of these updrafts. Since seabirds travel long distances to find food, the benefits may be significant," Lucas said.

Stokes and Lucas are, of course, not the first scientists to study the physics of the atmosphere that pelicans and other birds are hardwired to intuit so they can conserve energy for other activities. For centuries, humans have been inspired by the sight of birds harnessing the power and patterns of the winds for soaring flight.

That's how it started with Stokes, who is now in the second year of his Ph.D. at UC San Diego. As a UC Santa Barbara undergraduate, Stokes, a surfer and windsurfer in his off hours, needed a project for his senior physics class and thought of the birds that would accompany him on the waves. When he looked closer, he appreciated the connection between their flight dynamics and the study of environmental fluid dynamics, a specialty of scientists at UC San Diego. The project ultimately turned into a master's thesis with Lucas, drawing inspiration from oceanographers at Scripps who seek to understand the interactions between the ocean and atmosphere.

Wave-slope soaring is just one of the many behaviors in seabirds that take advantage of the energy in their environment. By tapping into these predictable patterns, the birds are able to forage, travel, and find mates more effectively.

Read more at Science Daily

Genetic effects of Chernobyl radiation

In two landmark studies, researchers have used cutting-edge genomic tools to investigate the potential health effects of exposure to ionizing radiation, a known carcinogen, from the 1986 accident at the Chernobyl nuclear power plant in northern Ukraine. One study found no evidence that radiation exposure to parents resulted in new genetic changes being passed from parent to child. The second study documented the genetic changes in the tumors of people who developed thyroid cancer after being exposed as children or fetuses to the radiation released by the accident.

The findings, published around the 35th anniversary of the disaster, are from international teams of investigators led by researchers at the National Cancer Institute (NCI), part of the National Institutes of Health. The studies were published online in Science on April 22.

"Scientific questions about the effects of radiation on human health have been investigated since the atomic bombings of Hiroshima and Nagasaki and have been raised again by Chernobyl and by the nuclear accident that followed the tsunami in Fukushima, Japan," said Stephen J. Chanock, M.D., director of NCI's Division of Cancer Epidemiology and Genetics (DCEG). "In recent years, advances in DNA sequencing technology have enabled us to begin to address some of the important questions, in part through comprehensive genomic analyses carried out in well-designed epidemiological studies."

The Chernobyl accident exposed millions of people in the surrounding region to radioactive contaminants. Studies have provided much of today's knowledge about cancers caused by radiation exposures from nuclear power plant accidents. The new research builds on this foundation using next-generation DNA sequencing and other genomic characterization tools to analyze biospecimens from people in Ukraine who were affected by the disaster.

The first study investigated the long-standing question of whether radiation exposure results in genetic changes that can be passed from parent to offspring, as has been suggested by some studies in animals. To answer this question, Dr. Chanock and his colleagues analyzed the complete genomes of 130 people born between 1987 and 2002 and their 105 mother-father pairs.

One or both of the parents had been workers who helped clean up from the accident or had been evacuated because they lived in close proximity to the accident site. Each parent was evaluated for protracted exposure to ionizing radiation, which may have occurred through the consumption of contaminated milk (that is, milk from cows that grazed on pastures that had been contaminated by radioactive fallout). The mothers and fathers experienced a range of radiation doses.

The researchers analyzed the genomes of adult children for an increase in a particular type of inherited genetic change known as de novo mutations. De novo mutations are genetic changes that arise randomly in a person's gametes (sperm and eggs) and can be transmitted to their offspring but are not observed in the parents.
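The trio comparison at the heart of this analysis can be sketched as a set operation: a de novo candidate is a variant observed in the child but in neither parent. Real pipelines add genotype-quality and sequencing-error filters; the variant sets below are hypothetical.

```python
# Minimal sketch of calling de novo mutations in a parent-child trio:
# variants present in the child's genome but absent from both parents.
# Real analyses use genotype likelihoods and error filters; this
# set-based version is for illustration only.

def de_novo_variants(child, mother, father):
    """Return the child's variants seen in neither parent."""
    return child - (mother | father)

# Hypothetical variant sets: (chromosome, position, alternate allele).
mother = {("chr1", 1000, "A"), ("chr2", 500, "T")}
father = {("chr1", 1000, "A"), ("chr3", 42, "G")}
child  = {("chr1", 1000, "A"), ("chr3", 42, "G"), ("chr7", 77, "C")}

print(de_novo_variants(child, mother, father))  # {("chr7", 77, "C")}
```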

For the range of radiation exposures experienced by the parents in the study, there was no evidence from the whole-genome sequencing data of an increase in the number or types of de novo mutations in their children born between 46 weeks and 15 years after the accident. The number of de novo mutations observed in these children was highly similar to that of the general population with comparable characteristics. As a result, the findings suggest that the ionizing radiation exposure from the accident had a minimal, if any, impact on the health of the subsequent generation.

"We view these results as very reassuring for people who were living in Fukushima at the time of the accident in 2011," said Dr. Chanock. "The radiation doses in Japan are known to have been lower than those recorded at Chernobyl."

In the second study, researchers used next-generation sequencing to profile the genetic changes in thyroid cancers that developed in 359 people exposed as children or in utero to ionizing radiation from radioactive iodine (I-131) released by the Chernobyl nuclear accident and in 81 unexposed individuals born more than nine months after the accident. Increased risk of thyroid cancer has been one of the most important adverse health effects observed after the accident.

The energy from ionizing radiation breaks the chemical bonds in DNA, resulting in a number of different types of damage. The new study highlights the importance of a particular kind of DNA damage that involves breaks in both DNA strands in the thyroid tumors. The association between DNA double-strand breaks and radiation exposure was stronger for children exposed at younger ages.

Next, the researchers identified the candidate "drivers" of the cancer in each tumor -- the key genes in which alterations enabled the cancers to grow and survive. They identified the drivers in more than 95% of the tumors. Nearly all the alterations involved genes in the same signaling pathway, called the mitogen-activated protein kinase (MAPK) pathway, including the genes BRAF, RAS, and RET.

The set of affected genes is similar to what has been reported in previous studies of thyroid cancer. However, the researchers observed a shift in the distribution of the types of mutations in the genes. Specifically, in the Chernobyl study, thyroid cancers that occurred in people exposed to higher radiation doses as children were more likely to result from gene fusions (when both strands of DNA are broken and then the wrong pieces are joined back together), whereas those in unexposed people or those exposed to low levels of radiation were more likely to result from point mutations (single base-pair changes in a key part of a gene).

The results suggest that DNA double-strand breaks may be an early genetic change following exposure to radiation in the environment that subsequently enables the growth of thyroid cancers. Their findings provide a foundation for further studies of radiation-induced cancers, particularly those that involve differences in risk as a function of both dose and age, the researchers added.

"An exciting aspect of this research was the opportunity to link the genomic characteristics of the tumor with information about the radiation dose -- the risk factor that potentially caused the cancer," said Lindsay M. Morton, Ph.D., deputy chief of the Radiation Epidemiology Branch in DCEG, who led the study.

"The Cancer Genome Atlas set the standard for how to comprehensively profile tumor characteristics," Dr. Morton continued. "We extended that approach to complete the first large genomic landscape study in which the potential carcinogenic exposure was well-characterized, enabling us to investigate the relationship between specific tumor characteristics and radiation dose."

She noted that the study was made possible by the creation of the Chernobyl Tissue Bank about two decades ago -- long before the technology had been developed to conduct the kind of genomic and molecular studies that are common today.

Read more at Science Daily

Apr 23, 2021

Mars has right ingredients for present-day microbial life beneath its surface, study finds

As NASA's Perseverance rover begins its search for ancient life on the surface of Mars, a new study suggests that the Martian subsurface might be a good place to look for possible present-day life on the Red Planet.

The study, published in the journal Astrobiology, looked at the chemical composition of Martian meteorites -- rocks blasted off the surface of Mars that eventually landed on Earth. The analysis determined that those rocks, if in consistent contact with water, would produce the chemical energy needed to support microbial communities similar to those that survive in the unlit depths of the Earth. Because these meteorites may be representative of vast swaths of the Martian crust, the findings suggest that much of the Mars subsurface could be habitable.

"The big implication here for subsurface exploration science is that wherever you have groundwater on Mars, there's a good chance that you have enough chemical energy to support subsurface microbial life," said Jesse Tarnas, a postdoctoral researcher at NASA's Jet Propulsion Laboratory who led the study while completing his Ph.D. at Brown University. "We don't know whether life ever got started beneath the surface of Mars, but if it did, we think there would be ample energy there to sustain it right up to today."

In recent decades, scientists have discovered that Earth's depths are home to a vast biome that exists largely separated from the world above. Lacking sunlight, these creatures survive using the byproducts of chemical reactions produced when rocks come into contact with water.

One of those reactions is radiolysis, which occurs when radioactive elements within rocks react with water trapped in pore and fracture space. The reaction breaks water molecules into their constituent elements, hydrogen and oxygen. The liberated hydrogen is dissolved in the remaining groundwater, while minerals like pyrite (fool's gold) soak up free oxygen to form sulfate minerals. Microbes can ingest the dissolved hydrogen as fuel and use the oxygen preserved in the sulfates to "burn" that fuel.

In places like Canada's Kidd Creek Mine, these "sulfate-reducing" microbes have been found living more than a mile underground, in water that hasn't seen the light of day in more than a billion years. Tarnas has been working with a team co-led by Brown University professor Jack Mustard and Professor Barbara Sherwood Lollar of the University of Toronto to better understand these underground systems, with an eye toward looking for similar habitats on Mars and elsewhere in the solar system. The project, called Earth 4-D: Subsurface Science and Exploration, is supported by the Canadian Institute for Advanced Research.

For this new study, the researchers wanted to see if the ingredients for radiolysis-driven habitats could exist on Mars. They drew on data from NASA's Curiosity rover and other orbiting spacecraft, as well as compositional data from a suite of Martian meteorites, which are representative of different parts of the planet's crust.

The researchers were looking for the ingredients for radiolysis: radioactive elements like thorium, uranium and potassium; sulfide minerals that could be converted to sulfate; and rock units with adequate pore space to trap water. The study found that in several different types of Martian meteorites, all the ingredients are present in adequate abundances to support Earth-like habitats. This was particularly true for regolith breccias -- meteorites sourced from crustal rocks more than 3.6 billion years old -- which were found to have the highest potential for life support. Unlike Earth, Mars lacks a plate tectonics system that constantly recycles crustal rocks. So these ancient terrains remain largely undisturbed.
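The three-ingredient screen described above can be sketched as a simple check: a rock unit is a candidate radiolysis-driven habitat if it contains radioactive elements, sulfide minerals, and enough pore space to hold water. The porosity threshold below is an assumed placeholder, not a value from the study.

```python
# Hedged sketch of a radiolysis habitability screen: all three
# ingredients must be present. The porosity threshold is an assumed
# placeholder, not a value from the study.

def radiolysis_candidate(has_radionuclides, has_sulfides, porosity):
    """True if all three ingredients for a radiolytic habitat are present."""
    MIN_POROSITY = 0.05  # assumed fractional pore space needed to trap water
    return has_radionuclides and has_sulfides and porosity >= MIN_POROSITY

# A regolith-breccia-like sample (illustrative values).
print(radiolysis_candidate(True, True, 0.10))  # True
```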

The researchers say the findings help make the case for an exploration program that looks for signs of present-day life in the Martian subsurface. Prior research has found evidence of an active groundwater system on Mars in the past, the researchers say, and there's reason to believe that groundwater exists today. One recent study, for example, raised the possibility of an underground lake lurking under the planet's southern ice cap. This new research suggests that wherever there's groundwater, there's energy for life.

Tarnas and Mustard say that while there are certainly technical challenges involved in subsurface exploration, they aren't as insurmountable as people may think. A drilling operation wouldn't require "a Texas-sized oil rig," Mustard said, and recent advances in small drill probes could soon put the Martian depths within reach.

Read more at Science Daily

More belly weight increases danger of heart disease even if BMI does not indicate obesity

People with abdominal obesity and excess fat around the body's mid-section and organs have an increased risk of heart disease even if their body mass index (BMI) measurement is within a healthy weight range, according to a new Scientific Statement from the American Heart Association published today in the Association's flagship journal, Circulation.

"This scientific statement provides the most recent research and information on the relationship between obesity and obesity treatment in coronary heart disease, heart failure and arrhythmias," said Tiffany M. Powell-Wiley, M.D., M.P.H., FAHA, chair of the writing committee and a Stadtman Tenure-Track Investigator and chief of the Social Determinants of Obesity and Cardiovascular Risk Laboratory in the Division of Intramural Research at the National Heart, Lung, and Blood Institute at the National Institutes of Health in Bethesda, Maryland. "The timing of this information is important because the obesity epidemic contributes significantly to the global burden of cardiovascular disease and numerous chronic health conditions that also impact heart disease."

A greater understanding of obesity and its impact on cardiovascular health highlights abdominal obesity, sometimes referred to as visceral adipose tissue, or VAT, as a cardiovascular disease risk marker. VAT is commonly determined by waist circumference, the ratio of waist circumference to height (taking body size into account) or waist-to-hip ratio, which has been shown to predict cardiovascular death independent of BMI.

Experts recommend both abdominal measurement and BMI be assessed during regular health care visits because a high waist circumference or waist-to-hip ratio, even in healthy weight individuals, could mean an increased risk of heart disease. Abdominal obesity is also linked to fat accumulation around the liver that often leads to non-alcoholic fatty liver disease, which adds to cardiovascular disease risk.

"Studies that have examined the relationship between abdominal fat and cardiovascular outcomes confirm that visceral fat is a clear health hazard," said Powell-Wiley.

The risk-inducing power of abdominal obesity is so strong that in people who are overweight or have obesity based on BMI, low levels of fat tissue around their midsection and organs could still indicate lower cardiovascular disease risks. This concept, referred to as "metabolically healthy obesity," seems to differ depending on race/ethnicity and sex.

Worldwide, around 3 billion people are overweight (BMI = 25 to 29.9 kg/m2) or have obesity (BMI ≥30 kg/m2). Obesity is a complex disease related to many factors, including biologic, psychological, environmental and societal aspects, all of which may contribute to a person's risk for obesity. Obesity is associated with greater risk of coronary artery disease and death due to cardiovascular disease and contributes to many cardiovascular risk factors and other health conditions, including dyslipidemia (high cholesterol), type 2 diabetes, high blood pressure and sleep disorders.
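The BMI bands quoted in the statement translate directly into a small classifier. The Class I band (30 to 34.9 kg/m2) comes from the obesity-paradox discussion later in the article; the "Class II or higher" label for BMI ≥35 is an assumed extension, since the statement does not name that band here.

```python
# BMI bands as quoted in the statement: overweight is BMI 25 to 29.9,
# obesity is BMI >= 30, Class I obesity is BMI 30 to 34.9 kg/m^2.
# The "Class II or higher" label for BMI >= 35 is an assumed extension.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 25:
        return "not overweight"
    if value < 30:
        return "overweight"
    if value < 35:
        return "Class I obesity"
    return "obesity (Class II or higher)"

print(round(bmi(85, 1.75), 1))      # 27.8
print(bmi_category(bmi(85, 1.75)))  # overweight
```

As the statement stresses, a BMI in the healthy range does not rule out cardiovascular risk; waist circumference and waist-to-hip ratio capture abdominal fat that BMI misses.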

For this statement, experts evaluated research on managing and treating obesity, particularly abdominal obesity. The writing group reports that reducing calories can reduce abdominal fat, and the most beneficial physical activity to reduce abdominal obesity is aerobic exercise. Their analysis found that meeting the current recommendations of 150 min/week of physical activity may be sufficient to reduce abdominal fat, with no additional loss from longer activity times. Exercise or a combination of dietary change and physical activity has been shown in some instances to reduce abdominal obesity even without weight loss.

Lifestyle changes and subsequent weight loss improve blood sugar, blood pressure, triglyceride and cholesterol levels -- a cluster of factors referred to as metabolic syndrome -- and reduce inflammation, improve blood vessel function and treat non-alcoholic fatty liver disease. However, studies of lifestyle change programs have not shown a reduction in coronary artery disease events (such as heart attack or chest pain).

In contrast, bariatric surgery for weight loss treatment is associated with a reduction in coronary artery disease risk compared to non-surgical weight loss. This difference may be attributed to the larger amount of weight loss and the resultant changes in metabolism that are typical after bariatric surgery.

"Additional work is needed to identify effective interventions for patients with obesity that improve cardiovascular disease outcomes and reduce cardiovascular disease mortality, as is seen with bariatric surgery," said Powell-Wiley.

The statement also addresses the "obesity paradox," which is sometimes observed in research, particularly in populations that have overweight or have Class I obesity (BMI = 30 to 34.9 kg/m2). The paradox suggests that even though overweight and obesity are strong risk factors for the development of cardiovascular disease, they are not always a risk factor for negative cardiovascular outcomes. The writing group notes that people with overweight or obesity are often screened earlier for cardiovascular disease than people with healthy weight, thus resulting in earlier diagnoses and treatment.

"The underlying mechanisms for the obesity paradox remain unclear," said Powell-Wiley. "Despite the existence of the paradox for short-term cardiovascular disease outcomes, the data show that patients with overweight or obesity suffer from cardiovascular disease events at an earlier age, live with cardiovascular disease for more of their lives and have a shorter average lifespan than patients with normal weight."

In reviewing the effects of obesity on a common heart rhythm disorder, the writing group reports there is now "convincing data" that obesity may cause atrial fibrillation, a quivering or irregular heartbeat. Estimates suggest obesity may account for one-fifth of all atrial fibrillation cases and 60% of recently documented increases in people with atrial fibrillation. Research has demonstrated people with atrial fibrillation who had intense weight loss experienced a significant reduction in cumulative time spent in atrial fibrillation.

"The research provides strong evidence that weight management be included as an essential aspect of managing atrial fibrillation, in addition to the standard treatments to control heart rate, rhythm and clotting risk," said Powell-Wiley.

The statement identifies areas of future research, including a call for further study of lifestyle interventions that may be most effective in decreasing visceral adiposity and improving cardiovascular outcomes. Powell-Wiley said, "It's important to understand how nutrition can be personalized based on genetics or other markers for cardiovascular disease risk."

Read more at Science Daily

Ankle exoskeleton enables faster walking

Being unable to walk quickly can be frustrating and problematic, but it is a common issue, especially as people age. Noting the pervasiveness of slower-than-desired walking, engineers at Stanford University have tested how well a prototype exoskeleton system they have developed -- which attaches around the shin and into a running shoe -- increased the self-selected walking speed of people in an experimental setting.

The exoskeleton is externally powered by motors and controlled by an algorithm. When the researchers optimized it for speed, participants walked, on average, 42 percent faster than when they were wearing normal shoes and no exoskeleton. The results of this study were published April 20 in IEEE Transactions on Neural Systems and Rehabilitation Engineering.

"We were hoping that we could increase walking speed with exoskeleton assistance, but we were really surprised to find such a large improvement," said Steve Collins, associate professor of mechanical engineering at Stanford and senior author of the paper. "Forty percent is huge."

For this initial set of experiments, the participants were young, healthy adults. Given their impressive results, the researchers plan to run future tests with older adults and to look at other ways the exoskeleton design can be improved. They also hope to eventually create an exoskeleton that can work outside the lab, though that goal is still a ways off.

"My research mission is to understand the science of biomechanics and motor control behind human locomotion and apply that to enhance the physical performance of humans in daily life," said Seungmoon Song, a postdoctoral fellow in mechanical engineering and lead author of the paper. "I think exoskeletons are very promising tools that could achieve that enhancement in physical quality of life."

Walking in the loop

The ankle exoskeleton system tested in this research is an experimental emulator that serves as a testbed for trying out different designs. It has a frame that fastens around the upper shin and into an integrated running shoe that the participant wears. It is attached to large motors that sit beside the walking surface and pull a tether that runs up the length of the back of the exoskeleton. Controlled by an algorithm, the tether tugs the wearer's heel upward, helping them point their toe down as they push off the ground.

For this study, the researchers had 10 participants walk with five different modes of operation. They walked in normal shoes without the exoskeleton, with the exoskeleton turned off and with the exoskeleton turned on with three different modes: optimized for speed, optimized for energy use, and a placebo mode adjusted to make them walk more slowly. In all of the tests, participants walked on a treadmill that adapts to their speed.

The mode that was optimized for speed -- which resulted in the 42 percent increase in walking pace -- was created through a human-in-the-loop process. An algorithm repeatedly adjusted the exoskeleton settings while the user walked, with the goal of improving the user's speed with each adjustment. Finding the speed-optimized mode of operation took about 150 rounds of adjustment and two hours per person.
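The human-in-the-loop process can be illustrated with a toy hill-climbing loop: perturb an assistance setting, measure the resulting walking speed, and keep changes that help. The objective function below is a stand-in for a participant on a self-paced treadmill, and the "ideal" setting of 0.7 is an arbitrary assumption, not a value from the study.

```python
# Toy human-in-the-loop optimizer in the spirit of the process above:
# perturb an assistance setting, "measure" walking speed, and keep the
# change if speed improves. measure_speed is a stand-in for a human
# participant; the optimum at 0.7 is an arbitrary assumption. In the
# study, this took about 150 rounds over two hours per person.

import random

def measure_speed(assistance):
    # Placeholder objective: speed (m/s) peaks at an assumed ideal setting.
    return 1.3 * (1 - (assistance - 0.7) ** 2)

def optimize(rounds=150, step=0.05, seed=0):
    rng = random.Random(seed)
    best, best_speed = 0.5, measure_speed(0.5)
    for _ in range(rounds):
        trial = best + rng.uniform(-step, step)
        speed = measure_speed(trial)
        if speed > best_speed:
            best, best_speed = trial, speed
    return best, best_speed

setting, speed = optimize()
print(round(setting, 2))  # settles near the assumed optimum of 0.7
```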

In addition to greatly increasing walking speed, the speed-optimized mode also reduced energy use, by about 2 percent per meter traveled. However, that result varied widely from person to person, which is somewhat expected, given that it was not an intentional feature of that exoskeleton mode.

"The study was designed to specifically answer the scientific question about increasing walking speed," Song said. "We didn't care too much about the other performance measures, like comfort or energy. However, seven out of 10 participants not only walked faster but consumed less energy, which really shows how much potential exoskeletons have for helping people in an efficient way."

The settings that were optimized specifically for energy use were borrowed from a previous experiment. In the current study, this mode decreased energy use more than the speed-optimized settings but did not increase speed as much. As intended, the placebo mode both slowed down participants and boosted their energy use.

Better, faster, stronger

Now that the researchers have attained such significant speed assistance, they plan to focus future versions of the ankle exoskeleton emulator on reducing energy use consistently across users, while also being more comfortable.

In considering older adults specifically, Collins and his lab wonder whether future designs could reduce pain caused by weight on joints or improve balance. They plan to conduct similar walking tests with older adults and hope those provide encouraging results as well.

Read more at Science Daily

Creativity and community: How modern humans overcame the Neanderthals

A new study is the first to identify genes for creativity in Homo sapiens that distinguish modern humans from chimpanzees and Neanderthals. The research identified 267 genes that are found only in modern humans and likely play an important role in the evolution of the behavioral characteristics that set apart Homo sapiens, including creativity, self-awareness, cooperativeness, and healthy longevity. The study, led by an international and interdisciplinary team of researchers from the American Museum of Natural History and Washington University among other institutions, is published today in the journal Molecular Psychiatry.

"One of the most fundamental questions about human nature is what sparked the explosive emergence of creativity in modern humans in the period just before and after their widespread dispersal from Africa and the related extinction of Neanderthals and other human relatives," said study co-author Ian Tattersall, curator emeritus in the American Museum of Natural History's Division of Anthropology. "Major controversies persist about the basis for human creativity in art and science, as well as about potential differences in cognition, language, and personality that distinguish modern humans from extinct hominids. This new study is the result of a truly pathbreaking use of genomic methodologies to enlighten us about the mechanisms underpinning our uniqueness."

Modern humans demonstrate remarkable creativity compared to their closest living relatives, the great apes (chimpanzees, gorillas, and orangutans and their immediate ancestors), including innovativeness, flexibility, depth of planning, and related cognitive abilities for symbolism and self-awareness that also enable spontaneous generation of narrative art and language. But the genetic basis for the emergence of creativity in modern humans remains a mystery, even after the recovery of full-genome data for both chimpanzees and our extinct close relatives the Neanderthals.

"It has been difficult to identify the genes that led to the emergence of human creativity before now because of the large number of changes in the human genome after it diverged from the common ancestor of humans and chimpanzees around 10 million years ago, as well as uncertainty about the functions of those changes," said Robert Cloninger, a psychiatrist and geneticist at Washington University in St. Louis, and the lead author of the study. "Therefore, we began our research by first identifying the way the genes that influence modern human personality are organized into coordinated systems of learning that have allowed us to adapt flexibly and creatively to changing life conditions."

The team led by Cloninger had previously identified 972 genes that regulate gene expression for human personality, which comprises three nearly separate networks for learning and memory. One, for regulating emotional reactivity -- emotional drives, habit learning, social attachment, conflict resolution -- emerged in monkeys and apes about 40 million years ago. The second, which regulates intentional self-control -- self-directedness and cooperation for mutual benefit -- emerged a little less than 2 million years ago. A third one, for creative self-awareness, emerged about 100,000 years ago.

In the latest study, the researchers discovered that 267 genes from this larger group are found only in modern humans and not in chimpanzees or Neanderthals. These uniquely human genes code for the self-awareness brain network and also regulate processes that allow Homo sapiens to be creative in narrative art and science, to be more prosocial, and to live longer lives through greater resistance to aging, injury, and illness than the now-extinct hominids they replaced.

Genes regulating emotional reactivity were nearly the same in humans, Neanderthals, and chimps. And Neanderthals were about midway between chimps and Homo sapiens in their genes for self-control and self-awareness.

"We found that the adaptability and well-being of Neanderthals was about 60 to 70 percent of that of Homo sapiens, which means that the difference in fitness between them was large," Cloninger said. "After the more creative, sociable, and physically resilient Homo sapiens migrated out of Africa between 65,000 and 55,000 years ago, they displaced Neanderthals and other hominids, who all became extinct soon after 40,000 years ago."

The genes that distinguish modern humans from Neanderthals and chimpanzees are nearly all regulatory genes made of RNA, not protein-coding genes made of DNA.

"The protein-coding genes of Homo sapiens, Neanderthals, and chimps are nearly all the same, and what distinguishes these species is the regulation of the expression of their protein-coding genes by the genes found only in humans," said co-author Igor Zwir, a computer scientist at Washington University School of Medicine and the University of Granada. "We found that the regulatory genes unique to modern humans were constituents of clusters together with particular protein-coding genes that are overexpressed in the human brain network for self-awareness. The self-awareness network is essential to the physical, mental, and social well-being of humans because it provides the insight to regulate our habits in accord with our goals and values."

The researchers determined that the genes unique to modern humans were selected because of advantages tied to greater creativity, prosocial behavior, and healthy longevity. Living longer, healthier lives and being more prosocial and altruistic allowed Homo sapiens to support their children, grandchildren, and others in their communities throughout their lives in diverse and sometimes harsh conditions. And being more innovative than other hominids allowed humans to adapt more flexibly to unpredictable climatic fluctuations.

"In the bigger picture, this study helps us understand how we can effectively respond to the challenges that modern humans currently face," Tattersall said. "Our behavior is not fixed or determined by our genes. Indeed, human creativity, prosociality, and healthy longevity emerged in the context of the need to adjust rapidly to harsh and diverse conditions and to communicate in large social groups."

Read more at Science Daily

Apr 22, 2021

ALMA discovers rotating infant galaxy with help of natural cosmic telescope

Using the Atacama Large Millimeter/submillimeter Array (ALMA), astronomers found a rotating baby galaxy 1/100th the size of the Milky Way at a time when the Universe was only seven percent of its present age. With the help of the gravitational lensing effect, the team was able to explore, for the first time, the nature of the small and dark "normal galaxies" that represent the main population of the first galaxies, greatly advancing our understanding of the initial phase of galaxy evolution.

"Many of the galaxies that existed in the early Universe were so small that their brightness is well below the limit of the current largest telescopes on Earth and in space, making it difficult to study their properties and internal structure," says Nicolas Laporte, a Kavli Senior Fellow at the University of Cambridge. "However, the light coming from the galaxy named RXCJ0600-z6 was highly magnified by gravitational lensing, making it an ideal target for studying the properties and structure of a typical baby galaxy."

Gravitational lensing is a natural phenomenon in which light emitted from a distant object is bent by the gravity of a massive body such as a galaxy or a galaxy cluster located in the foreground. The name "gravitational lensing" is derived from the fact that the gravity of the massive object acts like a lens. When we look through a gravitational lens, the light of distant objects is intensified and their shapes are stretched. In other words, it is a "natural telescope" floating in space.

The ALMA Lensing Cluster Survey (ALCS) team used ALMA to search for a large number of galaxies in the early Universe that are magnified by gravitational lensing. By combining the power of ALMA with the help of these natural telescopes, the researchers were able to uncover and study fainter galaxies.

Why is it crucial to explore the faintest galaxies in the early Universe? Theory and simulations predict that the majority of galaxies formed a few hundred million years after the Big Bang are small, and thus faint. Although several galaxies in the early Universe have been observed previously, limited telescope capabilities meant that those studied were restricted to the most massive, and therefore least representative, objects. The only way to understand the standard formation of the first galaxies, and obtain a complete picture of galaxy formation, is to focus on the fainter and more numerous galaxies.

The ALCS team performed a large-scale observation program that took 95 hours, which is a very long time for ALMA observations, to observe the central regions of 33 galaxy clusters that could cause gravitational lensing. One of these clusters, called RXCJ0600-2007, is located in the direction of the constellation of Lepus, and has a mass 1000 trillion times that of the Sun. The team discovered a single distant galaxy that is being affected by the gravitational lens created by this natural telescope. ALMA detected the light from carbon ions and stardust in the galaxy and, together with data taken with the Gemini telescope, determined that the galaxy is seen as it was about 900 million years after the Big Bang (12.9 billion years ago). Further analysis of these data suggested that a part of this source is seen 160 times brighter than it is intrinsically.
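The "160 times brighter" figure corresponds to the standard lensing magnification relation -- a textbook relation, not a formula quoted from the paper -- in which the observed flux is the intrinsic flux scaled by the magnification factor $\mu$:

```latex
S_{\mathrm{obs}} = \mu\, S_{\mathrm{intr}}, \qquad \mu \approx 160
\quad\Longrightarrow\quad
S_{\mathrm{intr}} = \frac{S_{\mathrm{obs}}}{\mu}
```

Recovering the galaxy's intrinsic brightness therefore amounts to dividing the observed flux by $\mu$, which is what "undoing" the lensing effect means in practice.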

By precisely measuring the mass distribution of the cluster of galaxies, it is possible to "undo" the gravitational lensing effect and restore the original appearance of the magnified object. By combining data from the Hubble Space Telescope and the European Southern Observatory's Very Large Telescope with a theoretical model, the team succeeded in reconstructing the actual shape of the distant galaxy RXCJ0600-z6. The total mass of this galaxy is about 2 to 3 billion times that of the Sun, roughly 1/100th that of our own Milky Way Galaxy.

What astonished the team is that RXCJ0600-z6 is rotating. Traditionally, gas in young galaxies was thought to have random, chaotic motion. Only recently has ALMA discovered several rotating young galaxies that challenge the traditional theoretical framework, but these were several orders of magnitude brighter (larger) than RXCJ0600-z6.

"Our study demonstrates, for the first time, that we can directly measure the internal motion of such faint (less massive) galaxies in the early Universe and compare it with the theoretical predictions," says Kotaro Kohno, a professor at the University of Tokyo and the leader of the ALCS team.

Read more at Science Daily

To design truly compostable plastic, scientists take cues from nature

Despite our efforts to sort and recycle, less than 9% of plastic gets recycled in the U.S., and most ends up in landfill or the environment.

Biodegradable plastic bags and containers could help, but if they're not properly sorted, they can contaminate otherwise recyclable #1 and #2 plastics. What's worse, most biodegradable plastics take months to break down, and when they finally do, they form microplastics -- tiny bits of plastic that can end up in oceans and animals' bodies -- including our own.

Now, as reported in the journal Nature, scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have designed an enzyme-activated compostable plastic that could diminish microplastics pollution, and holds great promise for plastics upcycling. The material can be broken down to its building blocks -- small individual molecules called monomers -- and then reformed into a new compostable plastic product.

"In the wild, enzymes are what nature uses to break things down -- and even when we die, enzymes cause our bodies to decompose naturally. So for this study, we asked ourselves, 'How can enzymes biodegrade plastic so it's part of nature?'" said senior author Ting Xu, who holds titles of faculty senior scientist in Berkeley Lab's Materials Sciences Division, and professor of chemistry and materials science and engineering at UC Berkeley.

At Berkeley Lab, Xu -- who for nearly 15 years has dedicated her career to the development of functional polymer materials inspired by nature -- is leading an interdisciplinary team of scientists and engineers from universities and national labs around the country to tackle the mounting landfill problem posed by both single-use and so-called biodegradable plastics.

Most biodegradable plastics in use today are made of polylactic acid (PLA), a vegetable-based plastic material blended with cornstarch. There is also polycaprolactone (PCL), a biodegradable polyester that is widely used for biomedical applications such as tissue engineering.

But the problem with conventional biodegradable plastics is that they're indistinguishable from single-use plastics such as plastic film -- so a good chunk of these materials ends up in landfills. And even if a biodegradable plastic container gets deposited at an organic waste facility, it can't break down as fast as the lunch salad it once contained, so it ends up contaminating organic waste, said co-author Corinne Scown, a staff scientist and deputy director for the Research, Energy Analysis & Environmental Impacts Division in Berkeley Lab's Energy Technologies Area.

Another problem with biodegradable plastics is that they aren't as strong as regular plastic -- that's why you can't carry heavy items in a standard green compost bag. The tradeoff is that biodegradable plastics can break down over time -- but still, Xu said, they only break down into microplastics, which are still plastic, just a lot smaller.

So Xu and her team decided to take a different approach -- by "nanoconfining" enzymes into plastics.

Putting enzymes to work


Because enzymes are part of living systems, the trick would be carving out a safe place in the plastic for enzymes to lie dormant until they're called to action.

In a series of experiments, Xu and co-authors embedded trace amounts of the commercial enzymes Burkholderia cepacia lipase (BC-lipase) and proteinase K within the PLA and PCL plastic materials. The scientists also added an enzyme protectant called four-monomer random heteropolymer, or RHP, to help disperse the enzymes a few nanometers (billionths of a meter) apart.

In a stunning result, the scientists discovered that ordinary household tap water or standard soil composts converted the enzyme-embedded plastic material into its small-molecule building blocks called monomers, and eliminated microplastics in just a few days or weeks.

They also learned that BC-lipase is something of a finicky "eater." Before a lipase can convert a polymer chain into monomers, it must first catch the end of a polymer chain. By controlling when the lipase finds the chain end, it is possible to ensure the materials don't degrade until being triggered by hot water or compost soil, Xu explained.

In addition, they found that this strategy only works when BC-lipase is nanodispersed -- in this case, just 0.02 percent by weight in the PCL block -- rather than randomly tossed in and blended.

"Nanodispersion puts each enzyme molecule to work -- nothing goes to waste," Xu said.

And that matters when factoring in costs. Industrial enzymes can cost around $10 per kilogram, but because the amount of enzyme required is so low, this new approach would add only a few cents to the production cost of a kilogram of resin -- and the material has a shelf life of more than seven months, Scown added.
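A back-of-the-envelope sketch of that cost claim; the ~$10/kg enzyme price and the 0.02 wt% loading are figures from the article, while treating that loading as the total enzyme mass fraction of the resin is an assumption made here for illustration:

```python
# Estimate the enzyme cost added per kilogram of compostable resin.
enzyme_price_per_kg = 10.00  # USD per kg of industrial enzyme (article figure)
enzyme_loading = 0.0002      # 0.02 percent by weight (article figure for BC-lipase in PCL)

added_cost = enzyme_price_per_kg * enzyme_loading  # USD added per kg of resin
print(f"Added enzyme cost: ${added_cost:.4f} per kg of resin")
```

The enzyme itself contributes only a fraction of a cent per kilogram; with the RHP protectant and processing overhead included, the "few cents" figure quoted above is consistent.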

The proof is in the compost

X-ray scattering studies performed at Berkeley Lab's Advanced Light Source characterized the nanodispersion of enzymes in the PCL and PLA plastic materials.

Interfacial-tension experiments conducted by co-author Tom Russell revealed in real time how the size and shape of droplets changed as the plastic material decomposed into distinct molecules. The lab results also differentiated between enzyme and RHP molecules.

"The interfacial test gives you information about how the degradation is proceeding," he said. "But the proof is in the composting -- Ting and her team successfully recovered plastic monomers from biodegradable plastic simply by using RHPs, water, and compost soil."

Russell is a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab's Materials Sciences Division.

Developing a very affordable and easily compostable plastic film could incentivize produce manufacturers to package fresh fruits and vegetables with compostable plastic instead of single-use plastic wrap -- and as a result, save organic waste facilities the extra expense of investing in expensive plastic-depackaging machines when they want to accept food waste for anaerobic digestion or composting, Scown said.

Since their approach could potentially work well with both hard, rigid plastics and soft, flexible plastics, Xu would like to broaden their study to polyolefins, a ubiquitous family of plastics commonly used to manufacture toys and electronic parts.

The team's truly compostable plastic could be on the shelves soon. They recently filed a patent application through UC Berkeley's patent office. And co-author Aaron Hall, who was a Ph.D. student in materials science and engineering at UC Berkeley at the time of the study, founded UC Berkeley startup Intropic Materials to further develop the new technology. He was recently selected to participate in Cyclotron Road, an entrepreneurial fellowship program in partnership with Activate.

Read more at Science Daily

Mechanical engineers develop new high-performance artificial muscle technology

In the field of robotics, researchers are continually looking for the fastest, strongest, most efficient and lowest-cost ways to actuate, or enable, robots to make the movements needed to carry out their intended functions.

The quest for new and better actuation technologies and 'soft' robotics is often based on principles of biomimetics, in which machine components are designed to mimic the movement of human muscles -- and ideally, to outperform them. Despite the performance of actuators like electric motors and hydraulic pistons, their rigid form limits how they can be deployed. As robots transition to more biological forms and as people ask for more biomimetic prostheses, actuators need to evolve.

Associate professor (and alum) Michael Shafer and professor Heidi Feigenbaum of Northern Arizona University's Department of Mechanical Engineering, along with graduate student researcher Diego Higueras-Ruiz, published a paper in Science Robotics presenting a new, high-performance artificial muscle technology they developed in NAU's Dynamic Active Systems Laboratory. The paper, titled "Cavatappi artificial muscles from drawing, twisting, and coiling polymer tubes," details how the new technology enables more human-like motion due to its flexibility and adaptability, but outperforms human skeletal muscle in several metrics.

"We call these new linear actuators cavatappi artificial muscles based on their resemblance to the Italian pasta," Shafer said.

Because of their coiled, or helical, structure, the actuators can generate more power, making them an ideal technology for bioengineering and robotics applications. In the team's initial work, they demonstrated that cavatappi artificial muscles exhibit specific work and power metrics ten and five times higher than those of human skeletal muscle, respectively, and as they continue development, they expect to produce even higher levels of performance.
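To put those ratios in rough absolute terms, here is a minimal arithmetic sketch; the human-muscle baselines (~40 J/kg specific work, ~50 W/kg specific power) are typical literature values assumed for illustration, and only the ten-fold and five-fold ratios come from the article:

```python
# Baselines for human skeletal muscle -- assumed typical values, not figures from the paper.
human_specific_work_j_per_kg = 40.0
human_specific_power_w_per_kg = 50.0

# Ratios reported for cavatappi artificial muscles relative to human muscle.
work_ratio, power_ratio = 10, 5

cavatappi_work = work_ratio * human_specific_work_j_per_kg     # J/kg
cavatappi_power = power_ratio * human_specific_power_w_per_kg  # W/kg
print(f"~{cavatappi_work:.0f} J/kg specific work, ~{cavatappi_power:.0f} W/kg specific power")
```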

"The cavatappi artificial muscles are based on twisted polymer actuators (TPAs), which were pretty revolutionary when they first came out because they were powerful, lightweight and cheap. But they were very inefficient and slow to actuate because you had to heat and cool them. Additionally, their efficiency is only about two percent," Shafer said. "For the cavatappi, we get around this by using pressurized fluid to actuate, so we think these devices are far more likely to be adopted. These devices respond about as fast as we can pump the fluid. The big advantage is their efficiency. We have demonstrated contractile efficiency of up to about 45 percent, which is a very high number in the field of soft actuation."

The engineers think this technology could be used in soft robotics applications, conventional robotic actuators (for example, for walking robots), or even potentially in assistive technologies like exoskeletons or prostheses.

"We expect that future work will include the use of cavatappi artificial muscles in many applications due to their simplicity, low-cost, lightweight, flexibility, efficiency and strain energy recovery properties, among other benefits," Shafer said.

Technology available for licensing and partnering opportunities

Working with the NAU Innovations team, the inventors have taken steps to protect their intellectual property. The technology has entered the protection and early commercialization stage and is available for licensing and partnering opportunities. For more information, please contact NAU Innovations.

Read more at Science Daily

Astronomers release new all-sky map of Milky Way's outer reaches

Astronomers using data from NASA and ESA (European Space Agency) telescopes have released a new all-sky map of the outermost region of our galaxy. Known as the galactic halo, this area lies outside the swirling spiral arms that form the Milky Way's recognizable central disk and is sparsely populated with stars. Though the halo may appear mostly empty, it is also predicted to contain a massive reservoir of dark matter, a mysterious and invisible substance thought to make up the bulk of all the mass in the universe.

The data for the new map comes from ESA's Gaia mission and NASA's Near Earth Object Wide Field Infrared Survey Explorer, or NEOWISE, which operated from 2009 to 2013 under the moniker WISE. The study makes use of data collected by the spacecraft between 2009 and 2018.

The new map reveals how a small galaxy called the Large Magellanic Cloud (LMC) -- so named because it is the larger of two dwarf galaxies orbiting the Milky Way -- has sailed through the Milky Way's galactic halo like a ship through water, its gravity creating a wake in the stars behind it. The LMC is located about 160,000 light-years from Earth and is less than one-quarter the mass of the Milky Way.

Though the inner portions of the halo have been mapped with a high level of accuracy, this is the first map to provide a similar picture of the halo's outer regions, where the wake is found -- about 200,000 light-years to 325,000 light-years from the galactic center. Previous studies have hinted at the wake's existence, but the all-sky map confirms its presence and offers a detailed view of its shape, size, and location.

This disturbance in the halo also provides astronomers with an opportunity to study something they can't observe directly: dark matter. While it doesn't emit, reflect, or absorb light, the gravitational influence of dark matter has been observed across the universe. It is thought to create a scaffolding on which galaxies are built, such that without it, galaxies would fly apart as they spin. Dark matter is estimated to be five times more common in the universe than all the matter that emits and/or interacts with light, from stars to planets to gas clouds.

Although there are multiple theories about the nature of dark matter, all of them indicate that it should be present in the Milky Way's halo. If that's the case, then as the LMC sails through this region, it should leave a wake in the dark matter as well. The wake observed in the new star map is thought to be the outline of this dark matter wake; the stars are like leaves on the surface of this invisible ocean, their position shifting with the dark matter.

The interaction between the dark matter and the Large Magellanic Cloud has big implications for our galaxy. As the LMC orbits the Milky Way, the dark matter's gravity drags on the LMC and slows it down. This will cause the dwarf galaxy's orbit to get smaller and smaller, until the galaxy finally collides with the Milky Way in about 2 billion years. These types of mergers might be a key driver in the growth of massive galaxies across the universe. In fact, astronomers think the Milky Way merged with another small galaxy about 10 billion years ago.

"This robbing of a smaller galaxy's energy is not only why the LMC is merging with the Milky Way, but also why all galaxy mergers happen," said Rohan Naidu, a doctoral student in astronomy at Harvard University and a co-author of the new paper. "The wake in our map is a really neat confirmation that our basic picture for how galaxies merge is on point!"

A Rare Opportunity

The authors of the paper also think the new map -- along with additional data and theoretical analyses -- may provide a test for different theories about the nature of dark matter, such as whether it consists of particles, like regular matter, and what the properties of those particles are.

"You can imagine that the wake behind a boat will be different if the boat is sailing through water or through honey," said Charlie Conroy, a professor at Harvard University and an astronomer at the Center for Astrophysics | Harvard & Smithsonian, who coauthored the study. "In this case, the properties of the wake are determined by which dark matter theory we apply."

Conroy led the team that mapped the positions of over 1,300 stars in the halo. The challenge arose in trying to measure the exact distance from Earth to a large portion of those stars: It's often impossible to figure out whether a star is faint and close by or bright and far away. The team used data from ESA's Gaia mission, which provides the location of many stars in the sky but cannot measure distances to the stars in the Milky Way's outer regions.

After identifying stars most likely located in the halo (because they were not obviously inside our galaxy or the LMC), the team looked for stars belonging to a class of giant stars with a specific light "signature" detectable by NEOWISE. Knowing the basic properties of the selected stars enabled the team to figure out their distance from Earth and create the new map. It charts a region starting about 200,000 light-years from the Milky Way's center, or about where the LMC's wake was predicted to begin, and extends about 125,000 light-years beyond that.
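The distance estimate for such "standard candle" stars can be sketched with the textbook distance-modulus relation; the magnitude values below are made-up illustrative numbers, not data from the study:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

LY_PER_PARSEC = 3.2616  # light-years per parsec

# Hypothetical halo giant: apparent magnitude 16, absolute magnitude -3 (assumed values).
d_pc = distance_parsecs(16.0, -3.0)
print(f"{d_pc:,.0f} pc (~{d_pc * LY_PER_PARSEC:,.0f} light-years)")
```

With those assumed magnitudes the star lands roughly 200,000 light-years out -- the part of the halo where the wake begins.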

Conroy and his colleagues were inspired to hunt for LMC's wake after learning about a team of astrophysicists at the University of Arizona in Tucson that makes computer models predicting what dark matter in the galactic halo should look like. The two groups worked together on the new study.

One model by the Arizona team, included in the new study, predicted the general structure and specific location of the star wake revealed in the new map. Once the data had confirmed that the model was correct, the team could confirm what other investigations have also hinted at: that the LMC is likely on its first orbit around the Milky Way. If the smaller galaxy had already made multiple orbits, the shape and location of the wake would be significantly different from what has been observed. Astronomers think the LMC formed in the same environment as the Milky Way and another nearby galaxy, M31, and that it is close to completing a long first orbit around our galaxy (about 13 billion years). Its next orbit will be much shorter due to its interaction with the Milky Way.

"Confirming our theoretical prediction with observational data tells us that our understanding of the interaction between these two galaxies, including the dark matter, is on the right track," said University of Arizona doctoral student in astronomy Nicolás Garavito-Camargo, who led work on the model used in the paper.

The new map also provides astronomers with a rare opportunity to test the properties of the dark matter (the notional water or honey) in our own galaxy. In the new study, Garavito-Camargo and colleagues used a popular dark matter theory called cold dark matter that fits the observed star map relatively well. Now the University of Arizona team is running simulations that use different dark matter theories to see which one best matches the wake observed in the stars.

"It's a really special set of circumstances that came together to create this scenario that lets us test our dark matter theories," said Gurtina Besla, a co-author of the study and an associate professor at the University of Arizona. "But we can only realize that test with the combination of this new map and the dark matter simulations that we built."

Read more at Science Daily

Apr 21, 2021

Energy unleashed by submarine volcanoes could power a continent

Volcanic eruptions deep in our oceans are capable of extremely powerful releases of energy, at a rate high enough to power the whole of the United States, according to research published today.

Eruptions from deep-sea volcanoes were long thought to be relatively uninteresting compared with those on land. While terrestrial volcanoes often produce spectacular eruptions, dispersing volcanic ash into the environment, it was thought that deep marine eruptions produced only slow-moving lava flows.

But data gathered by remotely operated vehicles deep in the North East Pacific, and analysed by scientists at the University of Leeds, has revealed a link between the way ash is dispersed during submarine eruptions and the creation of large and powerful columns of heated water rising from the ocean floor, known as megaplumes.

These megaplumes contain hot, chemical-rich water and act in the same way as the atmospheric plumes seen from land-based volcanoes, spreading first upwards and then outwards, carrying volcanic ash with them. The size of megaplumes is immense, with volumes of water equivalent to forty million Olympic-sized swimming pools. They have been detected above various submarine volcanoes, but their origin has remained unknown. The results of this new research show that they form rapidly during the eruption of lava.
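As a quick scale check on that comparison (the nominal 2,500-cubic-metre Olympic pool, 50 m x 25 m x 2 m, is a common convention assumed here, not a figure from the study):

```python
olympic_pool_m3 = 50 * 25 * 2  # nominal Olympic pool: 2,500 cubic metres (assumed convention)
n_pools = 40_000_000           # "forty million" pools, per the article

megaplume_m3 = olympic_pool_m3 * n_pools
print(f"{megaplume_m3:.1e} m^3 = {megaplume_m3 / 1e9:.0f} cubic km")
```

That works out to on the order of 100 cubic kilometres of heated water per megaplume.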

The research was carried out by Sam Pegler, from the School of Mathematics, and David Ferguson, from the School of Earth and Environment, and is published today in the journal Nature Communications.

Together they developed a mathematical model showing how ash from these submarine eruptions spreads several kilometres from the volcano. They used the ash pattern deposited by a historic submarine eruption to reconstruct its dynamics. This showed that the rate of energy release required to carry ash to the observed distances is extremely high -- equivalent to the power used by the whole of the USA.

David Ferguson said: "The majority of Earth's volcanic activity occurs underwater, mostly at depths of several kilometres in the deep ocean but, in contrast to terrestrial volcanoes, even detecting that an eruption has occurred on the seafloor is extremely challenging. Consequently, there remains much for scientists to learn about submarine volcanism and its effects on the marine environment."

The research shows that submarine eruptions cause megaplumes to form but the release of energy is so rapid that it cannot be supplied from the erupted molten lava alone. Instead, the research concludes that submarine volcanic eruptions lead to the rapid emptying of reservoirs of hot fluids within the earth's crust. As the magma forces its way upwards towards the seafloor, it drives this hot fluid with it.

Sam Pegler added: "Our work provides evidence that megaplumes are directly linked to the eruption of lava and are responsible for transporting volcanic ash in the deep ocean. It also shows that plumes must have formed in a matter of hours, creating an immense rate of energy release."

David Ferguson added: "Observing a submarine eruption in person remains extremely difficult, but the development of instruments based on the seafloor means data can be streamed live as the activity occurs."

Read more at Science Daily

Body mass index, age can affect your risk for neck pain

With roughly 80% of jobs being sedentary, often requiring several hours of sitting stooped in front of a computer screen, neck pain is a growing occupational hazard. Smartphones and other devices have also caused people to bend their necks for prolonged periods. But is bad posture solely to blame?

In a recent study, researchers at Texas A&M University have found that while poor neck and head postures are indeed the primary determinants of neck pain, body mass index, age and the time of the day also influence the neck's ability to perform sustained or repeated movements.

"Neck pain is one of the leading and fastest-growing causes of disability in the world," said Xudong Zhang, professor in the Wm Michael Barnes '64 Department of Industrial and Systems Engineering. "Our study has pointed to a combination of work and personal factors that strongly influence the strength and endurance of the neck over time. More importantly, since these factors have been identified, they can then be modified so that the neck is in better health and pain is avoided or deterred."

The results of the study are published online in the journal Human Factors, a flagship journal in the field of human factors and ergonomics.

According to the Global Burden of Disease Study by the Institute for Health Metrics and Evaluation, neck pain is ranked as the fourth leading cause of global disability. Neck pain has been attributed largely to lifestyle, particularly when people spend long durations of time with their necks bent forward. However, Zhang said a systematic, quantitative study has been lacking on how personal factors, such as sex, weight, age and work-related habits, affect neck strength and endurance.

For their experiments, Zhang and his team recruited 20 adult men and 20 adult women with no previous neck-related issues to perform controlled head-neck exertions in a laboratory setting. Instead of asking the participants to hold a specific neck posture for a long time, similar to what might happen at a workplace, they performed "sustained-till-exhaustion" head-neck exertions.

"In the laboratory, conducting experiments where subjects do long tasks with their neck can take several hours of data collection, which is not very practical for the experimenters and, of course, the participants in our study," said Zhang. "To solve this problem, our experiments were strategically designed to mimic workplace neck strains but in a shorter period of time."

In these exercises, subjects were seated and asked to put on an augmented helmet that allowed them to exert a measurable force with the neck. The researchers then asked them to either keep their necks straight or hold them tilted forward or backward. In each posture, a force was applied to the head and neck on an adjustable frame, either at the subject's maximum capacity or at half of it. Before testing, the researchers noted each subject's age, body mass index and the time of day.

When Zhang and his team analyzed their data, they found that, as expected, work-related factors like head/neck posture play a very important role in determining both neck strength and endurance. But they also observed that while there was no significant difference between male and female subjects in neck endurance, body mass index was a significant predictor of it. Also, to their surprise, the time of day affected the neck's ability to sustain an exertion without fatigue.

"It is intuitive to think that over the course of the day our necks get more tired since we use them more," Zhang said. "But roughly half of our participants were tested in the morning and the remaining in the afternoon. Also, some of the participants had day jobs and some worked the night shift. Despite this, we consistently found the time-of-day effect on neck endurance."

The researchers said their database of neck strength and endurance is also necessary for building advanced musculoskeletal biomechanical models of the neck, which can then be used to, for example, tease apart specific neck muscles that are more vulnerable to injury.

"Looking ahead, we might have the data to begin evaluating if patients recovering from neck injuries are ready to return to work based on whether their neck strength and endurance are within the norm," Zhang said. "Also, engineers and designers could utilize our data to make wearable devices, like helmets, that are more ergonomic and less stressful on the neck."

Other contributors to this work include Suman Chowdhury from Texas Tech University, and Yu Zhou, Bocheng Wan and Curran Reddy from the industrial and systems engineering department.

Read more at Science Daily

Astronauts' mental health risks tested in the Antarctic

Astronauts who spend extended time in space face stressors such as isolation, confinement, lack of privacy, altered light-dark cycles, monotony and separation from family. Interestingly, so do people who work at international research stations in Antarctica, where the extreme environment is characterized by numerous stressors that mirror those present during long-duration space exploration.

To better understand the psychological hurdles faced by astronauts, University of Houston professor of psychology Candice Alfano and her team developed the Mental Health Checklist (MHCL), a self-reporting instrument for detecting mental health changes in isolated, confined, extreme (ICE) environments. The team used the MHCL to study psychological changes at two Antarctic stations. The findings are published in Acta Astronautica.

"We observed significant changes in psychological functioning, but patterns of change for specific aspects of mental health differed. The most marked alterations were observed for positive emotions such that we saw continuous declines from the start to the end of the mission, without evidence of a 'bounce-back effect' as participants were preparing to return home," reports Alfano. "Previous research both in space and in polar environments has focused almost exclusively on negative emotional states including anxiety and depressive symptoms. But positive emotions such as satisfaction, enthusiasm and awe are essential features for thriving in high-pressure settings."

Negative emotions also increased across the study, but changes were more variable and predicted by physical complaints. Collectively, these results might suggest that while changes in negative emotions are shaped by an interaction of individual, interpersonal and situational factors, declines in positive emotions are a more universal experience in ICE environments. "Interventions and countermeasures aimed at enhancing positive emotions may, therefore, be critical in reducing psychological risk in extreme settings," said Alfano.

At coastal and inland Antarctic stations, Alfano and her team tracked mental health symptoms across a nine-month period, including the harshest winter months, using the MHCL. A monthly assessment battery also examined changes in physical complaints, biomarkers of stress such as cortisol, and the use of different emotion regulation strategies for increasing or decreasing certain emotions.

Study results also revealed that participants tended to use fewer effective strategies for regulating (i.e., increasing) their positive emotions as their time at the stations increased.

Read more at Science Daily

Shift-work causes negative impacts on health, affects men and women differently

Shift-work and irregular work schedules can cause several health-related issues and affect our defence against infection, according to new research from the University of Waterloo.

These health-related issues occur because the body's natural clock, called the circadian clock, can be disrupted by inconsistent changes in the sleep-wake schedule and feeding patterns often caused by shift work. To study this, researchers at Waterloo developed a mathematical model to look at how a disruption in the circadian clock affects the immune system in fighting off illness.

"Because our immune system is affected by the circadian clock, our ability to mount an immune response changes during the day," said Anita Layton, professor of Applied Mathematics, Computer Science, Pharmacy and Biology at Waterloo. "How likely are you to fight off an infection that occurs in the morning versus midday? The answer depends on whether you are a man or a woman, and whether you are among the quarter of the modern-day labour force that has an irregular work schedule."

The researchers created new computational models, separately for men and women, which simulate the interplay between the circadian clock and the immune system. The model is composed of the core clock genes, their related proteins, and the regulatory mechanism of pro- and anti-inflammatory mediators. By adjusting the clock, the models can simulate male and female shift-workers.

These computer simulations indicate that the immune response varies with the time of infection. The simulations suggest that the time before we go to bed is the "worst" time to get an infection: that is the period of the day when our body is least prepared to produce the pro- and anti-inflammatory mediators needed during an infection. Just as importantly, an individual's sex affects the severity of the infection.
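As a toy illustration of this idea (not the Waterloo group's actual model, whose clock-gene equations are not reproduced here), one can gate a simple measure of inflammatory-response capacity with a sinusoidal circadian signal and compare hypothetical infection times; the peak hour and baseline are invented for the sketch:

```python
import math

def clock_phase(t_hours, peak_hour=6.0):
    """Toy circadian signal in [0, 1], peaking at `peak_hour` (an assumption)."""
    return 0.5 * (1 + math.cos(2 * math.pi * (t_hours - peak_hour) / 24))

def mediator_capacity(t_hours, baseline=0.2):
    """Assumed capacity to produce pro-/anti-inflammatory mediators:
    a constant floor plus a clock-modulated component."""
    return baseline + (1 - baseline) * clock_phase(t_hours)

# Compare hypothetical infection times: the late-evening value comes out
# lowest, mirroring the "worst time is before bed" result in the text.
for label, t in [("morning (8h)", 8), ("midday (12h)", 12), ("before bed (23h)", 23)]:
    print(f"{label}: relative response capacity = {mediator_capacity(t):.2f}")
```

Shifting `peak_hour` plays the role of an altered sleep-wake schedule: it moves the window of weakest response, which is the qualitative effect the model attributes to shift work.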

"Shift work likely affects men and women differently," said Stéphanie Abo, a PhD candidate in Waterloo's Department of Applied Mathematics. "Compared to females, the immune system in males is more prone to overactivation, which can increase their chances of sepsis following an ill-timed infection."

The study, Modeling the circadian regulation of the immune system: sexually dimorphic effects of shift work, authored by Waterloo's Faculty of Mathematics' Layton and Abo, was recently published in the journal PLoS Computational Biology.

From Science Daily

Apr 20, 2021

Flushing a public toilet? Don't linger, because aerosolized droplets do

Flushing a toilet can generate large quantities of microbe-containing aerosols, depending on the design, water pressure or flushing power of the toilet. A variety of pathogens are usually found in stagnant water as well as in urine, feces and vomit. When dispersed widely through aerosolization, these pathogens can transmit diseases such as Ebola, norovirus (a cause of violent food poisoning) and COVID-19, which is caused by SARS-CoV-2.

Respiratory droplets are the most prominent source of transmission for COVID-19; however, alternative routes may exist, given the discovery of small numbers of viable viruses in urine and stool samples. Public restrooms are of particular concern for COVID-19 transmission because they are relatively confined, experience heavy foot traffic and may not have adequate ventilation.

A team of scientists from Florida Atlantic University's College of Engineering and Computer Science once again put the physics of fluids to the test, investigating the droplets generated by flushing a toilet and a urinal in a public restroom under normal ventilation conditions. To measure the droplets, they used a particle counter placed at various heights beside the toilet and urinal to capture the size and number of droplets generated upon flushing.

Results of the study, published in the journal Physics of Fluids, demonstrate how public restrooms could serve as hotbeds for airborne disease transmission, especially if they do not have adequate ventilation or if toilets do not have a lid or cover. Most public restrooms in the United States are not equipped with toilet seat lids, and urinals are not covered.

For the study, researchers obtained data from three different scenarios: toilet flushing, covered toilet flushing and urinal flushing. They examined the data to determine the increase in aerosol concentration, the behavior of droplets of different sizes, how high the droplets rose, and the impact of covering the toilet. Ambient aerosol levels were measured before and after conducting the experiments.

"After about three hours of tests involving more than 100 flushes, we found a substantial increase in the measured aerosol levels in the ambient environment with the total number of droplets generated in each flushing test ranging up to the tens of thousands," said Siddhartha Verma, Ph.D., co-author and an assistant professor in FAU's Department of Ocean and Mechanical Engineering. "Both the toilet and urinal generated large quantities of droplets smaller than 3 micrometers in size, posing a significant transmission risk if they contain infectious microorganisms. Due to their small size, these droplets can remain suspended for a long time."

The droplets were detected at heights of up to 5 feet for 20 seconds or longer after initiating the flush. Researchers detected a smaller number of droplets in the air when the toilet was flushed with a closed lid, although not by much, suggesting that aerosolized droplets escaped through small gaps between the cover and the seat.

"The significant accumulation of flush-generated aerosolized droplets over time suggests that the ventilation system was not effective in removing them from the enclosed space even though there was no perceptible lack of airflow within the restroom," said Masoud Jahandar Lashaki, Ph.D., co-author and an assistant professor in FAU's Department of Civil, Environmental and Geomatics Engineering. "Over the long-term, these aerosols could rise up with updrafts created by the ventilation system or by people moving around in the restroom."

There was a 69.5 percent increase in measured levels for particles sized 0.3 to 0.5 micrometers, a 209 percent increase for particles sized 0.5 to 1 micrometers, and a 50 percent increase for particles sized 1 to 3 micrometers. Apart from the smallest aerosols, comparatively larger aerosols also pose a risk in poorly ventilated areas even though they experience stronger gravitational settling. They often undergo rapid evaporation in the ambient environment and the resulting decreases in size and mass, or the eventual formation of droplet nuclei, can allow microbes to remain suspended for several hours.

"The study suggests that incorporation of adequate ventilation in the design and operation of public spaces would help prevent aerosol accumulation in high occupancy areas such as public restrooms," said Manhar Dhanak, Ph.D., co-author, chair of FAU's Department of Ocean and Mechanical Engineering, and professor and director of SeaTech. "The good news is that it may not always be necessary to overhaul the entire system, since most buildings are designed to certain codes. It might just be a matter of redirecting the airflow based on the restroom's layout."

During the 300-second sampling, the toilet and urinal were flushed manually five different times at the 30-, 90-, 150-, 210-, and 270-second mark, with the flushing handle held down for five consecutive seconds. The restroom was deep cleaned and closed 24 hours prior to conducting the experiments, with the ventilation system operating normally. The temperature and relative humidity within the restroom were 21 degrees Celsius (69.8 degrees Fahrenheit) and 52 percent, respectively.

"Aerosolized droplets play a central role in the transmission of various infectious diseases including COVID-19, and this latest research by our team of scientists provides additional evidence to support the risk of infection transmission in confined and poorly ventilated spaces," said Stella Batalama, Ph.D., dean of the College of Engineering and Computer Science.

Read more at Science Daily

Study reveals roadmap of muscle decline with age

Scientists have produced a comprehensive roadmap of muscle aging in mice that could be used to find treatments that prevent decline in muscle mobility and function, according to a report published today in eLife.

The study reveals which molecules in the muscle are most significantly altered at different life stages, and shows that a molecule called Klotho, when administered to mice in old, but not very old, age, was able to improve muscle strength.

Age-related loss of skeletal muscle mass and function -- called sarcopenia -- is associated with loss of mobility and increased risk of falls. Yet, although scientists know how sarcopenia affects the appearance and behaviour of muscle tissues, the underlying molecular mechanisms for sarcopenia remain poorly understood. Current treatments for sarcopenia largely involve prescribing physical activity or dietary modifications, and these have shown moderate success.

"Although there are no proven treatments for sarcopenia yet, there are some pharmaceutical treatments entering clinical trials. Interestingly, many of these act on mechanisms that also involve a protein called Klotho," says co-first author Zachary Clemens, Doctoral Student at the Department of Environmental and Occupational Health, University of Pittsburgh, Pennsylvania, US. "Evidence suggests that Klotho levels gradually decline with age, and so we wanted to test whether supplementation with Klotho may attenuate the development of sarcopenia."

The team first characterised and compared changes in the structure, function and gene activity in skeletal muscle across the lifespan in mice. They grouped mice into four age categories -- young, middle-aged, old and oldest-old -- and looked at muscle weight, type of muscle fibers, whether the muscles had accumulated fat, and skeletal muscle function. Although old mice displayed mild sarcopenia, the common clinical features of sarcopenia were only present in the oldest-old mice.

Next, they looked at changes in muscle gene activity and found a progressive disruption in genes known to be associated with the hallmarks of aging from the young to the oldest-old mice.

"To date, most studies in skeletal muscle have focused on the identification of specific pathways that are associated with sarcopenia to identify a molecular mechanism linked to the condition," explains co-first author Sruthi Sivakumar, Doctoral Student at the Department of Bioengineering, University of Pittsburgh. "We employed an integrative approach, where we created a network by converting gene expression levels to protein-protein interactions, and then we studied how this interaction network changed over time."

From this network, the team determined the 'network entropy' of the muscle cells as a means to estimate the loss of molecular order within the system over time. They found the greatest difference in order between the young and old age groups (at which point the network reached maximal entropy), with little difference between the old and oldest-old mice. Additionally, when they looked at human muscle gene data from different age groups, they saw that entropy reached its lowest level in the fourth decade of life, after which it climbed. This was of interest to the team because the fourth decade of life is when sarcopenia often starts to develop.
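One simple way to put a number on 'molecular order' is the Shannon entropy of a network's degree distribution; the paper's precise entropy measure is not described here, so this sketch, using two made-up toy gene networks, is only illustrative of the general idea:

```python
from collections import Counter
import math

def degree_entropy(edges):
    """Shannon entropy (bits) of a network's degree distribution -- one
    simple proxy for 'network entropy'; the study's actual measure may differ."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    counts = Counter(degree.values())        # how many nodes have each degree
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical interaction networks: a regular hub-and-spoke pattern
# (uniform degrees) versus a more irregular, disordered wiring.
ordered = [("hub", f"g{i}") for i in range(1, 7)]
disordered = [("g1", "g2"), ("g2", "g3"), ("g2", "g4"),
              ("g5", "g6"), ("g5", "g1"), ("g6", "g3")]
print(degree_entropy(ordered), degree_entropy(disordered))
```

The irregular network has the more varied degree distribution and therefore the higher entropy, which is the sense in which rising entropy tracks a loss of order over time.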

Next, they looked at whether administering Klotho to mice would have beneficial effects on the muscle healing after injury. They found that applying Klotho after muscle injury reduced scarring and increased structures associated with force production in the animals. Injured mice that received Klotho also had better muscle function -- such as muscle twitch and force production -- and their whole-body endurance improved two-fold.

Finally, the team looked at whether giving the mice Klotho could reverse age-related declines in muscle quality and function. They found that Klotho administration led to some improvements in the old mice: force production was improved by 17% and endurance when supporting whole body weight was 60% greater compared to mice without treatment. But this was only seen in the old mice, and not in the oldest-old animals. Further investigation showed that Klotho affected genes associated with the hallmarks of aging in all age groups, but that the oldest-old mice showed a dysregulated gene response.

Read more at Science Daily

Chickens and pigs with integrated genetic scissors

Researchers at the TUM have demonstrated a way to efficiently study molecular mechanisms of disease resistance or biomedical issues in farm animals. They can now introduce specific gene mutations into a desired organ, or even correct existing genes, without creating new animal models for each target gene. This reduces the number of animals required for research.

CRISPR/Cas9 enables desired gene manipulations

CRISPR/Cas9 is a tool to rewrite DNA information. Genes can be inactivated or specifically modified using this method. The CRISPR/Cas9 system consists of two components.

The gRNA (guide RNA) is a short sequence that binds specifically to the DNA segment of the gene that is to be modified. The Cas9 nuclease, the actual "gene scissors," binds to the gRNA and cuts the respective section of the target DNA. This cut activates repair mechanisms that can inactivate gene functions or incorporate specific mutations.
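The targeting logic described above can be sketched in a few lines: scan a DNA string for a 20-nt protospacer matching the gRNA followed by an NGG PAM, and report the cut site about 3 bp upstream of the PAM, which is standard SpCas9 behaviour. The sequences here are invented, and for simplicity only the sense strand is scanned:

```python
import re

def find_cas9_cut_sites(dna, grna):
    """Locate protospacers matching `grna` immediately followed by an NGG
    PAM, and return the index of the blunt cut site ~3 bp 5' of the PAM
    (between positions 17 and 18 of the 20-nt protospacer)."""
    sites = []
    for m in re.finditer(grna + "[ACGT]GG", dna):
        pam_start = m.start() + len(grna)
        sites.append(pam_start - 3)  # cut position, counted from the start of `dna`
    return sites

target = "TTACGGCGATTACACGGTAC"             # hypothetical 20-nt protospacer
genome = "AAAA" + target + "TGG" + "CCCC"   # protospacer followed by a TGG PAM
print(find_cas9_cut_sites(genome, target))
```

In the Cas9-carrying animals described here, this sequence-matching step is all that the delivered guide RNA has to specify: the nuclease itself is already present in every cell.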

Healthy chickens and pigs with integrated gene scissors

"The generated animals provide the gene scissors, the Cas9 protein, right along with them. So all we have to do is to introduce the guide RNAs to get animals which have specific genetic characteristics," explains Benjamin Schusser, Professor of Reproductive Biotechnology at the TUM. "The initial generation of these animals took about three years. Cas9 can now be used at all stages of animal development, since every cell in the body permanently possesses the Cas9 protein. We have been successfully able to utilize this technique in chicken embryos as well as in living pigs."

The healthy chickens and pigs produced by the researchers thus possess the Cas9 nuclease in all organs studied. This is particularly useful in biomedical and agricultural research.

Analytical tool to fight viral or cancer diseases

Pigs are used as disease models for humans because their anatomy and physiology are much more similar to those of humans than are those of mice (currently a common disease model). Thus, a modified pig may help to better understand the mechanism of carcinogenesis in humans. Potential new treatments for humans can also be tested in animal models.

"Due to the presence of Cas9 in the cells the processes are significantly accelerated and simplified," says Angelika Schnieke, Professor of Livestock Biotechnology at the TUM. "Cas9-equipped animals make it possible, for example, to specifically inactivate tumor-relevant genes and to simulate cancer development."

Cas9 pigs and chickens enable researchers to test which genes might be involved in the formation of traits, such as disease resistance, directly in the animal. "The mechanism of the CRISPR/Cas9 system may also be useful for combating infections using DNA viruses. Initial cell culture experiments showed that this already works for the avian herpes virus," says Prof. Schusser.

Read more at Science Daily

NASA's Ingenuity Mars Helicopter succeeds in historic first flight

On Monday, NASA's Ingenuity Mars Helicopter became the first aircraft in history to make a powered, controlled flight on another planet. The Ingenuity team at the agency's Jet Propulsion Laboratory in Southern California confirmed the flight succeeded after receiving data from the helicopter via NASA's Perseverance Mars rover at 6:46 a.m. EDT (3:46 a.m. PDT).

"Ingenuity is the latest in a long and storied tradition of NASA projects achieving a space exploration goal once thought impossible," said acting NASA Administrator Steve Jurczyk. "The X-15 was a pathfinder for the space shuttle. Mars Pathfinder and its Sojourner rover did the same for three generations of Mars rovers. We don't know exactly where Ingenuity will lead us, but today's results indicate the sky -- at least on Mars -- may not be the limit."

The solar-powered helicopter first became airborne at 3:34 a.m. EDT (12:34 a.m. PDT) -- 12:33 Local Mean Solar Time (Mars time) -- a time the Ingenuity team determined would have optimal energy and flight conditions. Altimeter data indicate Ingenuity climbed to its prescribed maximum altitude of 10 feet (3 meters) and maintained a stable hover for 30 seconds. It then descended, touching back down on the surface of Mars after logging a total of 39.1 seconds of flight. Additional details on the test are expected in upcoming downlinks.

Ingenuity's initial flight demonstration was autonomous -- piloted by onboard guidance, navigation, and control systems running algorithms developed by the team at JPL. Because data must be sent to and returned from the Red Planet over hundreds of millions of miles using orbiting satellites and NASA's Deep Space Network, Ingenuity cannot be flown with a joystick, and its flight was not observable from Earth in real time.

NASA Associate Administrator for Science Thomas Zurbuchen announced the name for the Martian airfield on which the flight took place.

"Now, 117 years after the Wright brothers succeeded in making the first flight on our planet, NASA's Ingenuity helicopter has succeeded in performing this amazing feat on another world," Zurbuchen said. "While these two iconic moments in aviation history may be separated by time and 173 million miles of space, they now will forever be linked. As an homage to the two innovative bicycle makers from Dayton, this first of many airfields on other worlds will now be known as Wright Brothers Field, in recognition of the ingenuity and innovation that continue to propel exploration."

Ingenuity's chief pilot, Håvard Grip, announced that the International Civil Aviation Organization (ICAO) -- the United Nations' civil aviation agency -- presented NASA and the Federal Aviation Administration with official ICAO designator IGY, call-sign INGENUITY.

These details will be included officially in the next edition of ICAO's publication Designators for Aircraft Operating Agencies, Aeronautical Authorities and Services. The location of the flight has also been given the ceremonial location designation JZRO for Jezero Crater.

As one of NASA's technology demonstration projects, the 19.3-inch-tall (49-centimeter-tall) Ingenuity Mars Helicopter contains no science instruments inside its tissue-box-size fuselage. Instead, the 4-pound (1.8-kg) rotorcraft is intended to demonstrate whether future exploration of the Red Planet could include an aerial perspective.

This first flight was full of unknowns. The Red Planet has a significantly lower gravity -- one-third that of Earth's -- and an extremely thin atmosphere with only 1% the pressure at the surface compared to our planet. This means there are relatively few air molecules with which Ingenuity's two 4-foot-wide (1.2-meter-wide) rotor blades can interact to achieve flight. The helicopter contains unique components, as well as off-the-shelf commercial parts -- many from the smartphone industry -- that were tested in deep space for the first time with this mission.

"The Mars Helicopter project has gone from 'blue sky' feasibility study to workable engineering concept to achieving the first flight on another world in a little over six years," said Michael Watkins, director of JPL. "That this project has achieved such a historic first is testimony to the innovation and doggedness of our team here at JPL, as well as at NASA's Langley and Ames Research Centers, and our industry partners. It's a shining example of the kind of technology push that thrives at JPL and fits well with NASA's exploration goals."

Parked about 211 feet (64.3 meters) away at Van Zyl Overlook during Ingenuity's historic first flight, the Perseverance rover not only acted as a communications relay between the helicopter and Earth, but also chronicled the flight operations with its cameras. The pictures from the rover's Mastcam-Z and Navcam imagers will provide additional data on the helicopter's flight.

"We have been thinking for so long about having our Wright brothers moment on Mars, and here it is," said MiMi Aung, project manager of the Ingenuity Mars Helicopter at JPL. "We will take a moment to celebrate our success and then take a cue from Orville and Wilbur regarding what to do next. History shows they got back to work -- to learn as much as they could about their new aircraft -- and so will we."

Perseverance touched down with Ingenuity attached to its belly on Feb. 18. Deployed to the surface of Jezero Crater on April 3, Ingenuity is currently on the 16th sol, or Martian day, of its 30-sol (31-Earth day) flight test window. Over the next three sols, the helicopter team will receive and analyze all data and imagery from the test and formulate a plan for the second experimental test flight, scheduled for no earlier than April 22. If the helicopter survives the second flight test, the Ingenuity team will consider how best to expand the flight profile.

Read more at Science Daily

Apr 19, 2021

How many T. rexes were there? Billions

How many Tyrannosaurus rexes roamed North America during the Cretaceous period?

That's a question Charles Marshall pestered his paleontologist colleagues with for years until he finally teamed up with his students to find an answer.

What the team found, to be published this week in the journal Science, is that about 20,000 adult T. rexes probably lived at any one time, give or take a factor of 10, which is in the ballpark of what most of his colleagues guessed.

What few paleontologists, including himself, had fully grasped, he said, is that this means some 2.5 billion lived and died over the approximately 2 1/2 million years the dinosaur walked the Earth.

Until now, no one has been able to compute population numbers for long-extinct animals, and George Gaylord Simpson, one of the most influential paleontologists of the last century, felt that it couldn't be done.

Marshall, director of the University of California Museum of Paleontology, the Philip Sandford Boone Chair in Paleontology and a UC Berkeley professor of integrative biology and of earth and planetary science, was also surprised that such a calculation was possible.

"The project just started off as a lark, in a way," he said. "When I hold a fossil in my hand, I can't help wondering at the improbability that this very beast was alive millions of years ago, and here I am holding part of its skeleton -- it seems so improbable. The question just kept popping into my head, 'Just how improbable is it? Is it one in a thousand, one in a million, one in a billion?' And then I began to realize that maybe we can actually estimate how many were alive, and thus, that I could answer that question."

Marshall is quick to point out that the uncertainties in the estimates are large. While the population of T. rexes was most likely 20,000 adults at any given time, the 95% confidence range -- the population range within which there's a 95% chance that the real number lies -- is from 1,300 to 328,000 individuals. Thus, the total number of individuals that existed over the lifetime of the species could have been anywhere from 140 million to 42 billion.

"As Simpson observed, it is very hard to make quantitative estimates with the fossil record," he said. "In our study, we focused on developing robust constraints on the variables we needed to make our calculations, rather than on making best estimates, per se."

He and his team then used Monte Carlo computer simulation to determine how the uncertainties in the data translated into uncertainties in the results.

The greatest uncertainty in these numbers, Marshall said, centers around questions about the exact nature of the dinosaur's ecology, including how warm-blooded T. rex was. The study relies on data published by John Damuth of UC Santa Barbara that relates body mass to population density for living animals, a relationship known as Damuth's Law. While the relationship is strong, he said, ecological differences result in large variations in population densities for animals with the same physiology and ecological niche. For example, jaguars and hyenas are about the same size, but hyenas are found in their habitat at a density 50 times greater than the density of jaguars in their habitat.

"Our calculations depend on this relationship for living animals between their body mass and their population density, but the uncertainty in the relationship spans about two orders of magnitude," Marshall said. "Surprisingly, then, the uncertainty in our estimates is dominated by this ecological variability and not from the uncertainty in the paleontological data we used."
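A minimal Monte Carlo sketch of this kind of uncertainty propagation can make the point concrete. This is not the study's actual code: the density is drawn from an assumed lognormal whose spread spans roughly two orders of magnitude, centred on the article's figure of about one adult per 100 square kilometers, and the other inputs are the article's rounded values:

```python
import random
import math

random.seed(1)

RANGE_KM2 = 2.3e6       # total geographic range (article's estimate)
DURATION_YR = 2.5e6     # how long the species existed
GENERATION_YR = 19.0    # average generation length

def sample_density():
    """Adults per km^2 from a lognormal centred near 1 per 100 km^2, with a
    log10 spread chosen so the 95% interval spans ~2 orders of magnitude --
    a stand-in for the Damuth's-Law scatter described in the text."""
    return 10 ** random.gauss(math.log10(1 / 100), 0.5)

totals = []
for _ in range(100_000):
    standing = sample_density() * RANGE_KM2          # adults alive at once
    totals.append(standing * DURATION_YR / GENERATION_YR)

totals.sort()
lo, mid, hi = totals[2_500], totals[50_000], totals[97_500]
print(f"median ~{mid:.2e}; 95% interval ~{lo:.2e} to {hi:.2e}")
```

The two-orders-of-magnitude spread in density alone produces a confidence interval spanning a factor of roughly a hundred in the total, which is why the published range runs from hundreds of millions to tens of billions.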

As part of the calculations, Marshall chose to treat T. rex as a predator with energy requirements halfway between those of a lion and a Komodo dragon, the largest lizard on Earth.

The issue of T. rex's place in the ecosystem led Marshall and his team to ignore juvenile T. rexes, which are underrepresented in the fossil record and may, in fact, have lived apart from adults and pursued different prey. As T. rex crossed into maturity, its jaws became stronger by an order of magnitude, enabling it to crush bone. This suggests that juveniles and adults ate different prey and were almost like different predator species.

This possibility is supported by a recent study, led by evolutionary biologist Felicia Smith of the University of New Mexico, which hypothesized that the absence of medium-size predators alongside the massive predatory T. rex during the late Cretaceous was because juvenile T. rex filled that ecological niche.

What the fossils tell us

The UC Berkeley scientists mined the scientific literature and the expertise of colleagues for data they used to estimate that the likely age at sexual maturity of a T. rex was 15.5 years; its maximum lifespan was probably into its late 20s; and its average body mass as an adult -- its so-called ecological body mass -- was about 5,200 kilograms, or 5.2 tons. They also used data on how quickly T. rexes grew over their life span: They had a growth spurt around sexual maturity and could grow to weigh about 7,000 kilograms, or 7 tons.

From these estimates, they also calculated that each generation lasted about 19 years, and that the average population density was about one dinosaur for every 100 square kilometers.

Then, estimating that the total geographic range of T. rex was about 2.3 million square kilometers, and that the species survived for roughly 2 1/2 million years, they calculated a standing population size of 20,000. Over a total of about 127,000 generations that the species lived, that translates to about 2.5 billion individuals overall.
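Those figures can be checked with simple back-of-envelope arithmetic using the article's rounded inputs; the small mismatches with the published numbers reflect that rounding:

```python
# All inputs are the article's rounded values, so results are
# order-of-magnitude checks rather than exact reproductions.
range_km2 = 2.3e6        # total geographic range
density = 1 / 100        # one adult per 100 square kilometers
duration_yr = 2.5e6      # species lifespan
generation_yr = 19       # average generation length

standing = density * range_km2              # adults alive at any one time
generations = duration_yr / generation_yr   # generations over the species' history
total = standing * generations              # adults ever, roughly

print(f"standing population ~{standing:,.0f}")   # ~23,000 (article: ~20,000)
print(f"generations ~{generations:,.0f}")        # ~131,600 (article: ~127,000)
print(f"total individuals ~{total:.2e}")         # ~3e9 (article: ~2.5 billion)
```

Dividing the article's 2.5 billion total by the 32 well-preserved museum specimens likewise recovers its "about one in 80 million" fossil-recovery figure.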

With such a large number of post-juvenile dinosaurs over the history of the species, not to mention the juveniles that were presumably more numerous, where did all those bones go? What proportion of these individuals have been discovered by paleontologists? To date, fewer than 100 T. rex individuals have been found, many represented by a single fossilized bone.

"There are about 32 relatively well-preserved, post-juvenile T. rexes in public museums today," he said. "Of all the post-juvenile adults that ever lived, this means we have about one in 80 million of them."

"If we restrict our analysis of the fossil recovery rate to where T. rex fossils are most common, a portion of the famous Hell Creek Formation in Montana, we estimate we have recovered about one in 16,000 of the T. rexes that lived in that region over the time interval that the rocks were deposited," he added. "We were surprised by this number; this fossil record has a much higher representation of the living than I first guessed. It could be as good as one in 1,000, if hardly any lived there, or it could be as low as one in a quarter million, given the uncertainties in the estimated population densities of the beast."

Marshall expects his colleagues will quibble with many, if not most, of the numbers, but he believes that his calculational framework for estimating extinct populations will stand and be useful for estimating populations of other fossilized creatures.

"In some ways, this has been a paleontological exercise in how much we can know, and how we go about knowing it," he said. "It's surprising how much we actually know about these dinosaurs and, from that, how much more we can compute. Our knowledge of T. rex has expanded so greatly in the past few decades thanks to more fossils, more ways of analyzing them and better ways of integrating information over the multiple fossils known."

The framework, which the researchers have made available as computer code, also lays the foundation for estimating how many species paleontologists might have missed when excavating for fossils, he said.

Read more at Science Daily

Human land-use and climate change will have significant impact on animal genetic diversity

Over the last 200 years, researchers have worked towards understanding the global distribution of species and ecosystems. But until now, even basic knowledge of the global geography of genetic diversity has been limited.

That now changes with a recent paper from the Globe Institute. Professor David Nogues Bravo and his team have spent the last eight years combining data from scientific gene banks with scenarios of future climate and land-use change. The result is the first-ever global assessment of how these changes will impact the genetic diversity of mammals, for example when tropical forests are converted to agricultural land.

'Our study identifies both genetically poor and highly diverse areas severely exposed to global change, paving the way to better estimates of vulnerability to global change, such as rising temperatures as well as land-use changes. It could help countries find out how much of the genetic diversity in their own country may be exposed to different global change impacts, while also establishing priorities and conservation policies', says David Nogues Bravo.

For example, Northern Scandinavia will be heavily impacted by climate change but not so much by land-use change, whereas the tropical areas of the world will suffer from both climate change and land-use change. However, David Nogues Bravo underlines that it is difficult to compare areas.

'The genetic diversity in Scandinavia is always going to be lower than in the tropics, but that doesn't mean that the overall diversity there is not important. If we lose populations and species such as the polar bear, it's just one species, but it will impact the total stability of ecosystems. However, the largest threat to genetic diversity will be in the tropical areas, which currently harbor the largest diversity of the bricks of life, genes. These regions include ecosystems like mangroves, jungles and grasslands', says David Nogues Bravo.

Putting it all together

The researchers have looked into gene banks with mitochondrial data from mammals. Mitochondria also regulate metabolism, and by examining how mitochondrial DNA has changed over time, researchers can also unveil changes in genetic diversity.

'The mitochondrial diversity is a broad estimate of adaptive capacity. We also used to think that mitochondrial DNA was a neutral marker, when it is in fact under selection. That means that some selection may relate to the physiological limits of a species in relation to climate, which makes it a very useful tool for researchers to track how global change impacts the genetic diversity in a specific area', explains David Nogues Bravo.

For many samples, no geographical information was available. The researchers used artificial intelligence to assign geographical locations, and then they built models predicting how much genetic diversity exists in places without data.

Then the researchers analyzed maps of genetic diversity, future climate change and future land-use change, to reveal how and where global change will impact mammals.

Interest from a United Nations agency

The research has attracted the attention of the Secretariat of the United Nations Convention on Biological Diversity. David Nogues Bravo hopes that the assessment map could become an important tool for high-level summits among countries to help define policies for biodiversity protection.

'We are only now starting to have the tools, data and knowledge to understand how genetic diversity changes across the globe. A decade from now, we will also be able to know how much of that genetic diversity has been lost since the Industrial Revolution for thousands of species, and we will be in a stronger position to bring in effective measures to protect it', he says.

In the coming years, he hopes that scientists will map the global genetic diversity of many other forms of life, including plants, fungi and animals across the lands, rivers and oceans.

'There have been attempts to map the genetic diversity for amphibians, birds and reptiles, but we don't have maps for plants, insects or fungi. And whereas there are around 5,000 mammal species, there are many more insect or fungi species, maybe millions. We don't even know how many, yet. So it will take longer, but it will come in the next decade', he says.

Read more at Science Daily