Aug 21, 2021

Seeing both the forest and the trees: Trans-scale scope shows big picture of tiny targets

Scientists from the Transdimensional Life Imaging Division of the Institute for Open and Transdisciplinary Research Initiatives (OTRI) at Osaka University created an optical imaging system that can capture an unprecedented number of cells in a single image. By combining an ultra-high pixel camera and a huge lens, the team was able to easily observe exceedingly rare, "one-in-a-million" situations. This work provides a valuable new tool for the simultaneous observation of centimeter-scale dynamics of multicellular populations with micrometer resolution to see the functions of individual cells.

In biology, scientists are often interested in the outliers of a population, such as cells with a rare function that may appear in fewer than one in a million individuals. These experiments have been hampered by the inherent tradeoff with microscopes between seeing cells at a sufficient spatial resolution while still maintaining a large enough field of view to capture unusual specimens. Scientists often spend several minutes moving slides in search of just the right cells to study.

Now, a team of scientists led by Osaka University has devised a system that can produce an image containing up to a million cells at once. "Conventional biological microscopes can observe at most 1,000 cells, with a field of view limited to a few millimeters. Our setup uses machine vision powered by a high-pixel camera with a macro lens," first author Taro Ichimura says.

The team built the optical imaging system with a 120-megapixel camera and a telecentric macro lens. This provided a much larger field of view than conventional microscopes, up to about one and a half by one centimeter, while still resolving individual cells and the interactions between them that characterize the population. The team termed the imaging technology "trans-scale scope," signifying that it can be applied to imaging from the micrometer scale to the centimeter scale.

"As a technological singularity for a powerful cell measurement, our trans-scale scope system AMATERAS is expected to contribute to a wide range of applications, from basic research for understanding the operating mechanism of multicellular systems, to medical applications such as the quality control of artificial cell sheets," senior author Takeharu Nagai says.
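
A quick back-of-the-envelope check shows why this combination works: spreading 120 megapixels over a roughly 1.5 cm field of view gives a sampling interval of about a micrometer at the specimen. The sketch below assumes a 13,000 x 9,000-pixel sensor layout, which is an illustrative guess rather than the actual sensor geometry.

```python
# Back-of-envelope sampling calculation for a wide-field imaging setup.
# Assumptions (not from the article): a 120-megapixel sensor laid out as
# roughly 13,000 x 9,000 pixels, mapped onto a 15 mm x 10 mm field of view.

SENSOR_PIXELS_X = 13_000         # assumed horizontal pixel count
SENSOR_PIXELS_Y = 9_000          # assumed vertical pixel count
FIELD_OF_VIEW_MM = (15.0, 10.0)  # ~1.5 cm x 1.0 cm field of view (from the article)

pixel_size_um_x = FIELD_OF_VIEW_MM[0] * 1000 / SENSOR_PIXELS_X
pixel_size_um_y = FIELD_OF_VIEW_MM[1] * 1000 / SENSOR_PIXELS_Y

print(f"Sampling at the specimen: ~{pixel_size_um_x:.1f} x {pixel_size_um_y:.1f} um per pixel")
# ~1.2 x 1.1 um per pixel -- several pixels across a typical 10-um cell, which is
# why individual cells remain resolvable over a centimeter-scale field of view.
```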

Read more at Science Daily

We can expect more emissions from oil refineries in the near-term future, analysis finds

A global inventory has revealed that CO2 emissions from oil refineries were 1.3 Gigatonnes (Gt) in 2018 and could be as large as 16.5 Gt from 2020 to 2030. Based on the results, the researchers recommend distinct mitigation strategies for refineries in different regions and age groups. The findings appear August 20 in the journal One Earth.

"This study provides a detailed picture of oil refining capacity and CO2 emissions worldwide," says Dabo Guan of Tsinghua University. "Understanding the past and future development trends of the oil refining industry is crucial for guiding regional and global emissions reduction."

Climate change is one of the most fundamental challenges facing humanity today, and continuous expansion of fossil-fuel-based energy infrastructure may be one of the key obstacles in achieving the Paris Agreement goals. The oil refining industry plays a crucial role in both the energy supply chain and climate change. The petroleum oil refining industry is the third-largest stationary emitter of greenhouse gases in the world, contributing 6% of all industrial greenhouse gas emissions. In particular, CO2 accounts for approximately 98% of greenhouse gases emitted by petroleum refineries.

In the new study, Guan and his collaborators developed a publicly available global inventory of CO2 emissions from 1,056 oil refineries from 2000 to 2018. CO2 emissions of the refinery industry were about 1.3 Gt in 2018. If all existing and proposed refineries operate as usual, without the adoption of any low-carbon measures, they could emit up to 16.5 Gt of CO2 from 2020 to 2030. Based on the findings, the authors recommend mitigation strategies, such as improving refinery efficiency and upgrading heavy oil-processing technologies, which could potentially reduce global cumulative emissions by 10% from 2020 to 2030. The inventory will be updated and improved in the future as more and better data become available.
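
The headline figures can be sanity-checked with simple arithmetic. The sketch below uses a flat-rate baseline purely as an illustrative assumption; the study's own 16.5 Gt estimate also accounts for proposed new refinery capacity.

```python
# Rough cumulative-emissions arithmetic using the figures quoted above.
# The flat 1.3 Gt/yr baseline is an illustrative assumption, not the study's model
# (the study also counts proposed new refineries, hence its higher 16.5 Gt figure).

annual_2018_gt = 1.3                       # CO2 from refineries in 2018 (Gt)
years = 11                                 # 2020 through 2030 inclusive
flat_baseline_gt = annual_2018_gt * years  # ~14.3 Gt if emissions simply stayed flat

study_bau_gt = 16.5                        # study's business-as-usual cumulative estimate
mitigated_gt = study_bau_gt * (1 - 0.10)   # ~10% reduction from efficiency upgrades

print(f"Flat-rate baseline 2020-2030: {flat_baseline_gt:.1f} Gt")
print(f"Study BAU estimate: {study_bau_gt:.1f} Gt -> with mitigation: {mitigated_gt:.2f} Gt "
      f"({study_bau_gt - mitigated_gt:.2f} Gt avoided)")
```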

The study also showed that the average output of global oil refineries gradually increased from 2000 to 2018, in terms of barrels per day. But the results varied by refinery age group. Specifically, the average capacity of young refineries, which are mainly distributed in Asia-Pacific and the Middle East, increased significantly from 2000 to 2018, while the average capacity of refineries older than 19 years remained stable. "Given the greater committed emissions brought about by the long remaining operating time of young refineries, there is an urgent need for these refineries to adopt low-carbon technologies to reduce their CO2 emissions," Guan says. "As for middle-aged and old refineries, improving operational efficiency, eliminating the backward capacity, and speeding up the upgrading of refining configuration are the key means to balance growing demand and reducing CO2 emissions."

Read more at Science Daily

Aug 20, 2021

Evolution now accepted by majority of Americans

The level of public acceptance of evolution in the United States is now solidly above the halfway mark, according to a new study based on a series of national public opinion surveys conducted over the last 35 years.

"From 1985 to 2010, there was a statistical dead heat between acceptance and rejection of evolution," said lead researcher Jon D. Miller of the Institute for Social Research at the University of Michigan. "But acceptance then surged, becoming the majority position in 2016."

Examining data over 35 years, the study consistently identified aspects of education -- civic science literacy, taking college courses in science and having a college degree -- as the strongest factors leading to the acceptance of evolution.

"Almost twice as many Americans held a college degree in 2018 as in 1988," said co-author Mark Ackerman, a researcher at Michigan Engineering, the U-M School of Information and Michigan Medicine. "It's hard to earn a college degree without acquiring at least a little respect for the success of science."

The researchers analyzed a collection of biennial surveys from the National Science Board, several national surveys funded by units of the National Science Foundation, and a series focused on adult civic literacy funded by NASA. Beginning in 1985, these national samples of U.S. adults were asked to agree or disagree with this statement: "Human beings, as we know them today, developed from earlier species of animals."

The series of surveys showed that Americans were evenly divided on the question of evolution from 1985 to 2007. According to a 2005 study of the acceptance of evolution in 34 developed nations, led by Miller, only Turkey, at 27%, scored lower than the United States. But over the last decade, until 2019, the percentage of American adults who agreed with this statement increased from 40% to 54%.

The current study consistently identified religious fundamentalism as the strongest factor leading to the rejection of evolution. While their numbers declined slightly in the last decade, approximately 30% of Americans continue to be religious fundamentalists as defined in the study. But even those who scored highest on the scale of religious fundamentalism shifted toward acceptance of evolution, rising from 8% in 1988 to 32% in 2019.

Miller predicted that religious fundamentalism would continue to impede the public acceptance of evolution.

"Such beliefs are not only tenacious but also, increasingly, politicized," he said, citing a widening gap between Republican and Democratic acceptance of evolution.

As of 2019, 34% of conservative Republicans accepted evolution compared to 83% of liberal Democrats.

Read more at Science Daily

Study reveals existing drugs that kill SARS-CoV2 in cells

Since the beginning of the pandemic, researchers worldwide have been looking for ways to treat COVID-19. And while the COVID-19 vaccines represent the best measure to prevent the disease, therapies for those who do get infected remain in short supply. A new groundbreaking study from U-M reveals several drug contenders already in use for other purposes -- including one dietary supplement -- that have been shown to block or reduce SARS-CoV2 infection in cells.

The study, published recently in the Proceedings of the National Academy of Sciences, uses artificial intelligence-powered image analysis of human cell lines during infection with the novel coronavirus. The cells were treated with more than 1,400 individual FDA-approved drugs and compounds, either before or after viral infection, and screened, resulting in 17 potential hits. Ten of those hits were newly recognized, with seven identified in previous drug repurposing studies, including remdesivir, which is one of the few FDA-approved therapies for COVID-19 in hospitalized patients.
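
The article does not detail how hits were called, so the sketch below is only a generic illustration of plate-based screening logic: score each compound-treated well against infected, vehicle-treated control wells and flag strong outliers with a robust Z-score. The compound names and scores are hypothetical, and this is not the paper's AI image-analysis pipeline.

```python
# Minimal sketch of hit calling in a cell-based antiviral screen (illustrative only).
# Each compound gets an infection score (e.g., fraction of infected cells per well),
# compared against vehicle-treated, infected control wells.

import statistics

def robust_z(value, controls):
    """Robust Z-score of a well relative to control wells (median/MAD based)."""
    med = statistics.median(controls)
    mad = statistics.median(abs(c - med) for c in controls) or 1e-9
    return (value - med) / (1.4826 * mad)

# Hypothetical infection scores: controls cluster near 0.60 infected cells per well.
control_scores = [0.58, 0.62, 0.61, 0.59, 0.63, 0.60]
compound_scores = {"compound_A": 0.12, "compound_B": 0.57, "compound_C": 0.05}

hits = {}
for name, score in compound_scores.items():
    z = robust_z(score, control_scores)
    if z < -3.0:              # infection reduced far beyond control variability
        hits[name] = round(z, 1)
print(hits)                   # candidate antiviral hits for follow-up validation
```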

"Traditionally, the drug development process takes a decade -- and we just don't have a decade," said Jonathan Sexton, Ph.D., Assistant Professor of Internal Medicine at the U-M Medical School and one of the senior authors on the paper. "The therapies we discovered are well positioned for phase 2 clinical trials because their safety has already been established."

The team validated the 17 candidate compounds in several types of cells, including stem-cell derived human lung cells in an effort to mimic SARS-CoV2 infection of the respiratory tract. Nine showed anti-viral activity at reasonable doses, including lactoferrin, a protein found in human breastmilk that is also available over the counter as a dietary supplement derived from cow's milk.

"We found lactoferrin had remarkable efficacy for preventing infection, working better than anything else we observed," Sexton said. He adds that early data suggest this efficacy extends even to newer variants of SARS-CoV2, including the highly transmissible Delta variant.

The team is soon launching clinical trials of the compound to examine its ability to reduce viral loads and inflammation in patients with SARS-CoV2 infection.

The trials are adding to the list of ongoing studies of promising repurposed drugs. Sexton noted that over the course of the pandemic, other drug repurposing studies have identified different compounds with potential efficacy against SARS-CoV2. "The results seem to be dependent on what cell system is used," he said.

"But there is an emerging consensus around a subset of drugs and those are the ones that have the highest priority for clinical translation. We fully expect that the majority of these won't work in human beings, but we anticipate there are some that will."

A surprising finding about certain drugs and COVID

Remarkably, the U-M study also identified a class of compounds called MEK-inhibitors, typically prescribed to treat cancer, that appear to worsen SARS-CoV2 infection. The finding sheds light on how the virus spreads among cells.

"People going in for chemotherapy are at risk already due to a lowered immune response. We need to investigate whether some of these drugs worsen disease progression," said Sexton.

The next step, he noted, is to use electronic health records to see whether patients on these drugs have worse COVID-19 outcomes.

The work is one of the first major discoveries to come out of the new U-M Center for Drug Repurposing (CDR), which was established in November 2019, just as the pandemic began. The Michigan Institute for Clinical & Health Research (MICHR), with partners across campus, launched the Center with the goal of finding potential therapeutics for the thousands of human diseases for which there is no treatment.

"Repurposing existing therapeutic interventions in the clinical setting has many advantages that result in significantly less time from discovery to clinical use, including documented safety profiles, reduced regulatory burden, and substantial cost savings," said George A. Mashour, MD, PhD, co-director of MICHR and founder/executive sponsor of the CDR.

Read more at Science Daily

Novel AI blood testing technology can ID lung cancers with high accuracy

A novel artificial intelligence blood testing technology developed by researchers at the Johns Hopkins Kimmel Cancer Center was found to detect over 90% of lung cancers in samples from nearly 800 individuals with and without cancer.

The test approach, called DELFI (DNA evaluation of fragments for early interception), spots unique patterns in the fragmentation of DNA shed from cancer cells circulating in the bloodstream. Applying this technology to blood samples taken from 796 individuals in Denmark, the Netherlands and the U.S., investigators found that the DELFI approach accurately distinguished between patients with and without lung cancer.

When combined with analysis of clinical risk factors and a protein biomarker, and followed by computed tomography imaging, DELFI helped detect 94% of patients with cancer across stages and subtypes. This included 91% of patients with earlier or less invasive stage I/II cancers and 96% of patients with more advanced stage III/IV cancers. These results will be published in the August 20 issue of the journal Nature Communications.

Lung cancer is the most common cause of cancer death, claiming almost 2 million lives worldwide each year. However, fewer than 6% of Americans at risk for lung cancers undergo recommended low-dose computed tomography screening, despite projections that tens of thousands of deaths could be avoided, and even fewer are screened worldwide, explains senior study author Victor E. Velculescu, M.D., Ph.D., professor of oncology and co-director of the Cancer Genetics and Epigenetics Program at the Johns Hopkins Kimmel Cancer Center. This is due to a variety of reasons, including concerns about potential harm from investigating false positive imaging results, radiation exposure, and worries about complications from invasive procedures. "It is clear that there is an urgent, unmet clinical need for development of alternative, noninvasive approaches to improve cancer screening for high-risk individuals and, ultimately, the general population," says lead author Dimitrios Mathios, a postdoctoral fellow at the Johns Hopkins Kimmel Cancer Center. "We believe that a blood test, or 'liquid biopsy,' for lung cancer could be a good way to enhance screening efforts, because it would be easy to do, broadly accessible and cost-effective."

The DELFI technology uses a blood test to indirectly measure the way DNA is packaged inside the nucleus of a cell by studying the size and amount of cell-free DNA present in the circulation from different regions across the genome. Healthy cells package DNA like a well-organized suitcase, in which different regions of the genome are placed carefully in various compartments. The nuclei of cancer cells, by contrast, are like more disorganized suitcases, with items from across the genome thrown in haphazardly. When cancer cells die, they release DNA in a chaotic manner into the bloodstream. DELFI helps identify the presence of cancer using machine learning, a type of artificial intelligence, to examine millions of cell-free DNA fragments for abnormal patterns, including the size and amount of DNA in different genomic regions. This approach provides a view of cell-free DNA referred to as the "fragmentome." The DELFI approach only requires low-coverage sequencing of the genome, enabling this technology to be cost-effective in a screening setting, the researchers say.
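
As a rough illustration of the fragmentome idea, the sketch below treats per-region short-to-long fragment ratios as features and trains a standard classifier on simulated profiles. It is a toy stand-in, not the actual DELFI pipeline; the feature definitions, bin count, simulated noise levels, and model choice are all assumptions.

```python
# Minimal sketch of a fragmentation-profile ("fragmentome") classifier.
# Illustrative only: the real DELFI workflow (alignment, GC correction, feature set,
# model choice) is more involved. Requires numpy and scikit-learn.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bins = 200, 100   # e.g., 100 large genomic bins per subject (assumed)

# Feature per bin: ratio of short (~100-150 bp) to long (~150-220 bp) cfDNA fragments.
# Simulated data: "cancer" profiles vary more across the genome than "healthy" ones.
healthy = rng.normal(loc=1.0, scale=0.05, size=(n_samples // 2, n_bins))
cancer = rng.normal(loc=1.0, scale=0.20, size=(n_samples // 2, n_bins))
X = np.vstack([healthy, cancer])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on simulated fragmentome features: {scores.mean():.2f}")
```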

For the study, investigators from Johns Hopkins, working with researchers in Denmark and the Netherlands, first performed genome sequencing of cell-free DNA in blood samples from 365 individuals participating in a seven-year Danish study called LUCAS. The majority of participants were at high risk for lung cancer and had smoking-related symptoms such as cough or difficulty breathing. The DELFI approach found that patients who were later determined to have cancer had widespread variation in their fragmentome profiles, while patients found not to have cancer had consistent fragmentome profiles. Subsequently, researchers validated the DELFI technology using a different population of 385 individuals without cancer and 46 individuals with cancer. Overall, the approach detected over 90% of patients with lung cancer, including those with early and advanced stages, and with different subtypes. "DNA fragmentation patterns provide a remarkable fingerprint for early detection of cancer that we believe could be the basis of a widely available liquid biopsy test for patients with lung cancer," says author Rob Scharpf, Ph.D., associate professor of oncology at the Johns Hopkins Kimmel Cancer Center.

Read more at Science Daily

A parent’s genes can influence a child’s educational success, inherited or not

A child's educational success depends on the genes that they haven't inherited from their parents, as well as the genes they have, according to a new study led by UCL researchers.

Funded by the Nuffield Foundation, the study confirms that genes a person inherits directly are most likely to contribute to their achievements in education. But parent genes that aren't directly inherited, yet have still shaped parents' own education levels and subsequently influenced the lifestyle and family environment they provide for their children, are also important and can affect how well a person does at school and beyond.

The study, a systematic review and meta-analysis of prior evidence of genetic impacts on educational outcomes, is published today in the American Journal of Human Genetics.

Children resemble their parents because of nature (the genes they inherit) and nurture (the environment they grow up in). But nature and nurture effects are intertwined.

Mothers and fathers each pass on half of their genes to their children, and although the other half of their genes are not passed on, they continue to influence the parents' traits and ultimately influence the traits in their children. For example, parents with a higher genetic propensity for learning may have a greater interest in activities such as reading that, in turn, nurture learning in their offspring.

This concept -- when parents' genes influence outcomes for their offspring by shaping the environment that they provide for them -- is called genetic nurture. It describes how parents' genes indirectly shape their children's characteristics.

For the current paper, researchers reviewed and analysed 12 studies in several countries and used a method called polygenic scoring to study the influence of millions of genetic variants on educational attainment in nearly 40,000 parent and child pairs.

The researchers found that genetic nurture had about half as much impact on education success as genetic inheritance.

Genetic nurture effects captured by polygenic scores in the studies explained at least 1.28% of the variance in educational outcomes, while direct genetic effects explained at least 2.89%. The researchers note that these figures are underestimates, because polygenic scores capture only a fraction of the heritability of educational outcomes; the true genetic effects could be several times larger, but direct genetic effects would probably still be roughly double those of genetic nurture.
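
To make the decomposition concrete, the toy simulation below generates a child (transmitted) polygenic score and a parental non-transmitted score, with effect sizes chosen to roughly match the reported variance fractions, then recovers both effects by regression. It illustrates the logic of separating the two pathways; it is not the meta-analytic method used in the paper.

```python
# Toy simulation of separating direct genetic effects from genetic nurture.
# Illustrative only: real studies meta-analyse trio designs; here we just simulate
# a transmitted (child) polygenic score and the parents' non-transmitted score.

import numpy as np

rng = np.random.default_rng(1)
n = 40_000
child_pgs = rng.normal(size=n)            # transmitted score (direct genetic path)
nontransmitted_pgs = rng.normal(size=n)   # parents' non-transmitted score (nurture path)

b_direct = np.sqrt(0.0289)                # ~2.89% of variance (from the study)
b_nurture = np.sqrt(0.0128)               # ~1.28% of variance (from the study)
noise = rng.normal(scale=np.sqrt(1 - b_direct**2 - b_nurture**2), size=n)
education = b_direct * child_pgs + b_nurture * nontransmitted_pgs + noise

# Recover the two effects by multiple regression (least squares).
X = np.column_stack([np.ones(n), child_pgs, nontransmitted_pgs])
coef, *_ = np.linalg.lstsq(X, education, rcond=None)
print(f"Estimated direct effect: {coef[1]:.3f}, genetic nurture effect: {coef[2]:.3f}")
# On the variance scale the nurture effect is roughly half the direct effect,
# echoing the pattern reported in the meta-analysis.
```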

Lead researcher Dr Jean-Baptiste Pingault (UCL Psychology & Language Sciences) said: "We discovered genetic nurture has a significant effect on a child's educational achievement. The effects were mainly down to their parents' education and how it influences the environment they provide. We also found that fathers and mothers had similar genetic nurture effects, suggesting both parents are equally important in shaping and fostering an environment favourable for a child's learning.

"This study illustrates how complex the relationship between genes and the environment is. Although our study uses genetic methods, it provides strong evidence that, as well as genetics, the environment really matters when we talk about education.

"Two aspects are complementary here. First, some of it depends on the genetic lottery, so parents do not have complete control and not everything is down to what they do. That said, what parents do and their choices do seem to matter. Our findings show that socioeconomic status and parental education are probably key.

"It is really important to understand how educational attainment (years of education, highest degree obtained) and achievement (scores and grades achieved) are passed on through families, and how this knowledge could help us break cycles of disadvantage across generations."

First author of the paper, Dr Biyao Wang (UCL Psychology & Language Sciences) said: "It is too early yet to say whether the most important is what happens within the family (such as parents reading to their children) or outside the family (such as parents choosing the best school and activities). Next we hope to work out which pathways genetic nurture operates through, if it changes during different stages of development, and identify what aspects of the environment are most important. This will be key to designing new interventions to encourage and support all children to succeed."

Read more at Science Daily

Aug 19, 2021

Fast changes between the solar seasons resolved by new sun clock

Violent activity on our Sun leads to some of the most extreme space weather events on Earth, impacting systems such as satellites, communications systems, power distribution and aviation. The roughly 11-year cycle of solar activity has three 'seasons', each of which affects the space weather felt at Earth differently: (i) solar maximum, when the sun is active and disordered, space weather is stormy and events are irregular; (ii) the declining phase, when the sun and solar wind become ordered and space weather is more moderate; and (iii) solar minimum, when activity is quiet.

In a new study led by the University of Warwick and published in The Astrophysical Journal, scientists found that the change from solar maximum to the declining phase is fast, happening within a few (27 day) solar rotations. They also showed that the declining phase is twice as long in even-numbered solar cycles as it is in odd-numbered cycles.

No two solar cycles are the same in amplitude or duration. To study the solar seasons, the scientists built a sun clock from the daily sunspot number record available since 1818, which maps the irregular solar cycles onto a regular clock. The magnetic polarity of the sun reverses after each roughly 11-year solar cycle, giving a roughly 22-year magnetic cycle (named after George Ellery Hale), and to explore this a 22-year clock was also constructed. The effect on space weather at Earth can be tracked using the longest continuous records of geomagnetic activity, spanning the past 150 years, and once the clock is constructed it can be used to study multiple observations of seasonal solar activity that affect the Earth.
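
A minimal sketch of the clock idea: take the start date of each solar cycle and map elapsed time within the cycle onto a uniform phase, so cycles of different lengths line up. The cycle start dates below are stand-in values, and the published construction from the daily sunspot record is considerably more careful than this.

```python
# Minimal sketch of a "sun clock": map irregular solar cycles onto a uniform phase.
# Illustrative only -- the cycle start epochs here are hypothetical stand-ins.

import numpy as np

cycle_starts = np.array([1996.6, 2008.9, 2019.9])  # stand-in cycle-minimum epochs (decimal years)

def sun_clock_phase(t, starts):
    """Return phase in [0, 2*pi) for time t, normalising each cycle to one clock 'tick'."""
    i = np.searchsorted(starts, t, side="right") - 1
    if i < 0 or i >= len(starts) - 1:
        raise ValueError("t lies outside the tabulated cycles")
    frac = (t - starts[i]) / (starts[i + 1] - starts[i])  # fraction of the way through the cycle
    return 2 * np.pi * frac

print(f"Clock phase in 2003.0: {sun_clock_phase(2003.0, cycle_starts):.2f} rad")
print(f"Clock phase in 2014.5: {sun_clock_phase(2014.5, cycle_starts):.2f} rad")
```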

With the greater detail afforded by the sun clock, the scientists could see that the switch from solar maximum to the declining phase is fast, occurring within a few (27 day) solar rotations. There was also a clear difference in the duration of the declining phase when the sun's magnetic polarity is 'up' compared to 'down': in even-numbered cycles it is around twice as long as in odd-numbered cycles. As we are about to enter cycle 25, the scientists anticipate that the next declining phase will be short.

Lead author Professor Sandra Chapman of the University of Warwick Department of Physics said: "By combining well known methods in a new way, our clock resolves changes in the Sun's climate to within a few solar rotations. Then you find the changes between some phases can be really sharp.

"If you know you've had a long cycle, you know the next one's going to be short, we can estimate how long it's going to last. Knowing the timing of the climate seasons helps to plan for space weather. Operationally it is useful to know when conditions will be active or quiet, for satellites, power grids, communications."

The results also provide a clue to understanding how the Sun reverses polarity after every cycle.

Read more at Science Daily

Attractiveness pays off at work — but there’s a trick to level the playing field

Beautiful people are more likely to get hired, receive better performance evaluations and get paid more -- but it's not just because of their good looks, according to new research from the University at Buffalo School of Management.

The study, forthcoming in Personnel Psychology, was recently published online. It found that while a "beauty premium" exists across professions, it's partially because attractive people develop distinct traits as a result of how the world responds to their attractiveness. They build a greater sense of power and have more opportunities to improve nonverbal communication skills throughout their lives.

"We wanted to examine whether there's an overall bias toward beauty on the job, or if attractive people excel professionally because they're more effective communicators," says Min-Hsuan Tu, PhD, assistant professor of organization and human resources in the UB School of Management. "What we found was that while good looking people have a greater sense of power and are better nonverbal communicators, their less-attractive peers can level the playing field during the hiring process by adopting a powerful posture."

The researchers conducted two studies that evaluated 300 elevator pitches of participants in a mock job search. In the first study, managers judged the good-looking people to be more hirable because of their more effective nonverbal presence.

In the second study, the researchers asked certain participants to strike a 'power pose' by standing with their feet shoulder-width apart, hands on hips, chest out and chin up during their pitch. With this technique, the less attractive people were able to match the level of nonverbal presence that their more attractive counterparts displayed naturally.

"By adopting the physical postures associated with feelings of power and confidence, less attractive people can minimize behavioral differences in the job search," says Tu. "But power posing is not the only solution -- anything that can make you feel more powerful, like doing a confidence self-talk, visualizing yourself succeeding, or reflecting on past accomplishments before a social evaluation situation can also help."

From Science Daily

Key mental abilities can actually improve during aging

It's long been believed that advancing age leads to broad declines in our mental abilities. Now new research from Georgetown University Medical Center offers surprisingly good news by countering this view.

The findings, published August 19, 2021, in Nature Human Behaviour, show that two key brain functions, which allow us to attend to new information and to focus on what's important in a given situation, can in fact improve in older individuals. These functions underlie critical aspects of cognition such as memory, decision making, and self-control, and even navigation, math, language, and reading.

"These results are amazing, and have important consequences for how we should view aging," says the study's senior investigator, Michael T. Ullman, PhD, a professor in the Department of Neuroscience, and Director of Georgetown's Brain and Language Lab.

"People have widely assumed that attention and executive functions decline with age, despite intriguing hints from some smaller-scale studies that raised questions about these assumptions," he says. "But the results from our large study indicate that critical elements of these abilities actually improve during aging, likely because we simply practice these skills throughout our life."

"This is all the more important because of the rapidly aging population, both in the US and around the world," Ullman says. He adds that with further research, it may be possible to deliberately improve these skills as protection against brain decline in healthy aging and disorders.

The research team, which includes first author João Veríssimo, PhD, an assistant professor at the University of Lisbon, Portugal, looked at three separate components of attention and executive function in a group of 702 participants aged 58 to 98. They focused on these ages since this is when cognition often changes the most during aging.

The components they studied are the brain networks involved in alerting, orienting, and executive inhibition. Each has different characteristics and relies on different brain areas and different neurochemicals and genes. Therefore, Ullman and Veríssimo reasoned, the networks may also show different aging patterns.

Alerting is characterized by a state of enhanced vigilance and preparedness in order to respond to incoming information. Orienting involves shifting brain resources to a particular location in space. The executive network inhibits distracting or conflicting information, allowing us to focus on what's important.

"We use all three processes constantly," Veríssimo explains. "For example, when you are driving a car, alerting is your increased preparedness when you approach an intersection. Orienting occurs when you shift your attention to an unexpected movement, such as a pedestrian. And executive function allows you to inhibit distractions such as birds or billboards so you can stay focused on driving."

The study found that only alerting abilities declined with age. In contrast, both orienting and executive inhibition actually improved.

The researchers hypothesize that because orienting and inhibition are simply skills that allow people to selectively attend to objects, these skills can improve with lifelong practice. The gains from this practice can be large enough to outweigh the underlying neural declines, Ullman and Veríssimo suggest. In contrast, they believe that alerting declines because this basic state of vigilance and preparedness cannot improve with practice.

"Because of the relatively large number of participants, and because we ruled out numerous alternative explanations, the findings should be reliable and so may apply quite broadly," Veríssimo says. Moreover, he explains that "because orienting and inhibitory skills underlie numerous behaviors, the results have wide-ranging implications."

Read more at Science Daily

Researchers uncover the biology and treatment behind a rare autoinflammatory disease

The absence of a protein that activates the body's antiviral defenses can cause a rare rheumatoid-like autoinflammatory condition that is treatable with an FDA-approved class of drugs known as TNF (tumor necrosis factor) inhibitors, a global research team led by Mount Sinai has found.

The condition, which is now termed TBK1 deficiency, was previously known to scientists but its name, cause and treatment were identified for the first time in the study published August 6 in Cell.

The scientists reported that the absence of the protein, known as TBK1 (TANK-binding kinase 1), renders cells vulnerable to a jarring form of programmed cell death in response to TNF, but that this genetic defect can be effectively and quickly addressed by therapeutic agents that block the source of the inflammation.

Homozygous mutations in TBK1, which occur when copies of the aberrant gene encoding the protein are passed on by both parents, are extremely rare. Based on past studies in mouse models and human cell cultures, researchers assumed that these mutations would leave individuals susceptible to a wide range of viral infections. They found instead that none of the four people in the cohort they studied, ages 7 to 32, showed signs of inadequate antiviral immunity. Rather, they all suffered from a systemic autoinflammatory condition that resulted from a dysregulated response to TNF, an important protein involved in controlling inflammation and cell death.

"TBK1 signaling activates the body's antiviral mechanisms to fight off infection and block different stages of viral replication, as well as control TNF-mediated inflammation," says lead author Justin Taft, PhD, an investigator in the Department of Microbiology, the Precision Immunology Institute, and the Center for Inborn Errors of Immunity at the Icahn School of Medicine at Mount Sinai. "But if a mutation prevents expression of the TBK1 gene or disrupts its function, then cells become overly sensitive to TNF. And that can trigger a disproportionate amount of cell death, which sets off a violent cascade of debris from dying cells that inflames surrounding tissue and fuels the inflammation."

By treating TBK1-deficient individuals with anti-TNF therapeutics, the Mount Sinai-led team confirmed its suspicions about the underlying biology of the genetically driven condition. "We have essentially defined a new disease and its associated mechanisms of autoinflammation, which previously were managed with steroid treatments, non-steroidal anti-inflammatory drugs, or other non-specific therapeutics clinicians deemed worth trying," says Dusan Bogunovic, PhD, Director of the Center for Inborn Errors of Immunity; Associate Professor of Oncological Sciences, Microbiology, and Pediatrics; member of the Precision Immunology Institute and The Mindich Child Health and Development Institute; and senior author of the study. "We were able to target the condition directly and effectively with TNF inhibitors once we knew the causative factors of the inflammation. And the clinical improvement was quick and substantial."

For Dr. Taft, the work of the global team of researchers underscores the power of the increasingly vital field of personalized medicine. "We started with four individuals who were known to have a homozygous TBK1 mutation, and did extensive lab work to determine how the defect could induce this autoinflammatory reaction," says Dr. Taft. "And from the genetics we uncovered not only a plausible mechanism of the disease and new information around TBK1 biology, but identified a disease-specific treatment that significantly improves the autoinflammatory condition."

Read more at Science Daily

Aug 18, 2021

New simulation shows how galaxies feed their supermassive black holes

Galaxies' spiral arms are responsible for scooping up gas to feed to their central supermassive black holes, according to a new high-powered simulation.

Started at Northwestern University, the simulation is the first to show, in great detail, how gas flows across the universe all the way down to the center of a supermassive black hole. While other simulations have modeled black hole growth, this is the first single computer simulation powerful enough to comprehensively account for the numerous forces and factors that play into the evolution of supermassive black holes.

The simulation also gives rare insight into the mysterious nature of quasars, which are incredibly luminous, fast-growing black holes. Some of the brightest objects in the universe, quasars often even outshine entire galaxies.

"The light we observe from distant quasars is powered as gas falls into supermassive black holes and gets heated up in the process," said Northwestern's Claude-André Faucher-Giguère, one of the study's senior authors. "Our simulations show that galaxy structures, such as spiral arms, use gravitational forces to 'put the brakes on' gas that would otherwise orbit galaxy centers forever. This breaking mechanism enables the gas to instead fall into black holes and the gravitational brakes, or torques, are strong enough to explain the quasars that we observe."

The research was published today (Aug. 17) in The Astrophysical Journal.

Faucher-Giguère is an associate professor of physics and astronomy at Northwestern's Weinberg College of Arts and Sciences and a member of the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). Daniel Anglés-Alcázar, an assistant professor at the University of Connecticut and former CIERA fellow in Faucher-Giguère's group, is the paper's first author.

Equivalent to the mass of millions or even billions of suns, supermassive black holes can swallow 10 times the mass of a sun in just one year. But while some supermassive black holes enjoy a continuous supply of food, others go dormant for millions of years, only to reawaken abruptly with a serendipitous influx of gas. The details about how gas flows across the universe to feed supermassive black holes have remained a long-standing question.

To address this mystery, the research team developed the new simulation, which incorporates many of the key physical processes -- including the expansion of the universe and the galactic environment on large scales, gravity, gas hydrodynamics and feedback from massive stars -- into one model.

"Powerful events such as supernovae inject a lot of energy into the surrounding medium, and this influences how the galaxy evolves," Anglés-Alcázar said. "So we need to incorporate all of these details and physical processes to capture an accurate picture."

Building on previous work from the FIRE ("Feedback In Realistic Environments") project, the new technology greatly increases model resolution and allows for following the gas as it flows across the galaxy with more than 1,000 times better resolution than previously possible.
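
The article does not spell out how the extra resolution is achieved, but a generic way to boost resolution locally in particle-based simulations is to split gas elements near the region of interest into lighter children while conserving mass. The toy sketch below illustrates that general idea only; it is not the refinement scheme used in the paper.

```python
# Toy illustration of local refinement in a particle-based gas simulation:
# split gas particles near a chosen center into lower-mass children so that
# the region of interest is resolved with many more, lighter elements.
# Generic sketch, not the paper's actual technique.

import numpy as np

rng = np.random.default_rng(42)

def refine_near_center(positions, masses, center, radius, n_children=8, jitter=0.01):
    """Split every particle within `radius` of `center` into `n_children` lighter ones."""
    new_pos, new_mass = [], []
    for p, m in zip(positions, masses):
        if np.linalg.norm(p - center) < radius:
            offsets = rng.normal(scale=jitter, size=(n_children, 3))
            new_pos.extend(p + offsets)                     # children scattered around the parent
            new_mass.extend([m / n_children] * n_children)  # total mass is conserved
        else:
            new_pos.append(p)
            new_mass.append(m)
    return np.array(new_pos), np.array(new_mass)

pos = rng.uniform(-1, 1, size=(1000, 3))   # toy gas distribution in a unit box
mass = np.full(1000, 1.0)
pos2, mass2 = refine_near_center(pos, mass, center=np.zeros(3), radius=0.2)
print(f"Particles: {len(pos)} -> {len(pos2)}; total mass {mass.sum():.1f} -> {mass2.sum():.1f}")
```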

"Other models can tell you a lot of details about what's happening very close to the black hole, but they don't contain information about what the rest of the galaxy is doing or even less about what the environment around the galaxy is doing," Anglés-Alcázar said. "It turns out, it is very important to connect all these processes at the same time."

"The very existence of supermassive black holes is quite amazing, yet there is no consensus on how they formed," Faucher-Giguère said. "The reason supermassive black holes are so difficult to explain is that forming them requires cramming a huge amount of matter into a tiny space. How does the universe manage to do that? Until now, theorists developed explanations relying on patching together different ideas for how matter in galaxies gets crammed into the innermost one millionth of a galaxy's size."

Read more at Science Daily

Leaky sewers are likely responsible for large amounts of medications in streams

Pharmaceutical compounds can harm the environment. However, in waterways that don't receive treated wastewater, these pollutants aren't expected to be present. Now, researchers reporting in ACS' Environmental Science & Technology have found that amounts of some medications carried by a stream in Baltimore were substantial, despite generally low concentrations over the course of a year. Because wastewater plants don't impact this stream, the high loads are likely coming from leaking sewer pipes, they say.

Thousands of medications are approved for human use in the U.S., and many of them are harmful to microorganisms, algae and insects when they make their way into lakes and streams through wastewater. The concentrations of pharmaceutical compounds are usually used to determine their impact on organisms living in streams and rivers. However, contaminant concentrations may change quickly from one day to the next, and so singular snapshots do not correctly illustrate their cumulative effects on aquatic life. Instead, load -- the mass of a pollutant that passes through a stream or river over time -- better represents the risks to downstream environments, where the contaminants end up. While loads are used in regulations for traditional pollutants, such as nutrients, they have not been considered for pharmaceuticals. So, Megan Fork and colleagues wanted to get an idea of the yearly load of medicines transported by an urban stream in Baltimore.

The researchers tested water from an urban stream draining into Baltimore's Inner Harbor in Maryland on a weekly basis for a year. At the outflow point, they found 16 pharmaceutical compounds whose presence and amount varied considerably from week to week, ranging from concentrations of parts per trillion to parts per billion. Trimethoprim -- an antibiotic -- was found most regularly, but acetaminophen -- a common pain reliever -- was at the highest concentrations. The team used their weekly measurements to estimate annual loads of pharmaceuticals, calculating that the equivalent of 30,000 doses of antidepressants, 1,700 doses of antibiotics and 30,000 tablets of acetaminophen entered the Inner Harbor through the stream. Interestingly, this watershed did not receive wastewater treatment plant effluent, so it's likely these compounds are coming from leaky sewer pipes. Improvements to aging infrastructure could reduce this source of harmful compounds to urban streams and other waterways, the researchers say.
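
The load bookkeeping behind those estimates is straightforward: load is concentration times streamflow, accumulated over time. The sketch below shows the arithmetic with hypothetical weekly samples; the study's actual concentrations and flow data are not reproduced here.

```python
# Minimal sketch of estimating an annual pollutant load from weekly grab samples:
# load = concentration x streamflow, summed over time. All numbers are hypothetical
# placeholders, not the study's measurements.

weekly_samples = [
    # (concentration in ng/L, mean discharge in L/s) for each sampled week
    (120.0, 250.0),
    (40.0, 900.0),   # dilution at high flow, but much more water moving past
    (300.0, 150.0),
    # ... one entry per week of the year
]

SECONDS_PER_WEEK = 7 * 24 * 3600

# Each week's load in grams: (ng/L) * (L/s) * s = ng, then / 1e9 to convert to grams.
weekly_loads_g = [(c * q * SECONDS_PER_WEEK) / 1e9 for c, q in weekly_samples]
annual_load_g = sum(weekly_loads_g)   # with 52 entries this becomes the annual estimate

print(f"Estimated load over the sampled weeks: {annual_load_g:.1f} g")
# Dividing a load by a typical dose (e.g., 500 mg of acetaminophen per tablet)
# converts it into the "doses per year" figures quoted above.
```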

From Science Daily

Lonely flies, like many humans, eat more and sleep less

COVID-19 lockdowns scrambled sleep schedules and stretched waistlines. One culprit may be social isolation itself. Scientists have found that lone fruit flies quarantined in test tubes sleep too little and eat too much after only about one week of social isolation, according to a new study published in Nature. The findings, which describe how chronic separation from the group leads to changes in gene expression, neural activity, and behavior in flies, provide one of the first robust animal models for studying the body's biological reaction to loneliness.

"Flies are wired to have a specific response to social isolation," says Michael W. Young, the Richard and Jeanne Fisher Professor and head of the Laboratory of Genetics at Rockefeller. "We found that loneliness has pathological consequences, connected to changes in a small group of neurons, and we've begun to understand what those neurons are doing."

The science of loneliness

Drosophila are social creatures. The fruit flies forage and feed in groups, serenade one another through complex mating rituals, tussle in miniature boxing matches. And then they conk out: flies sleep 16 hours each day, split between a languorous midday nap and a full night's rest.

So when Wanhe Li, a research associate in Young's lab, began investigating the biological underpinnings of chronic social isolation, she turned to the gregarious and well-studied fruit fly. "Over and over again, Drosophila have put us on the right track," says Young. "Evolution packed a great deal of complexity into these insects long ago and, when we dig into their systems, we often find the rudiments of something that is also manifest in mammals and humans."

"When we have no roadmap, the fruit fly becomes our roadmap," Li adds.

For the study, Wanhe Li first compared how flies fare under various lockdown conditions. After seven days, flies housed together in groups of varying sizes produced no anomalous behaviors. Even two flies cut off from the crowd were content with one another. But when a single fly was entirely isolated, the lonely insect began eating more and sleeping less.

Further investigation revealed that a group of genes linked to starvation were expressed differently in the brains of lonely flies -- a tempting genetic basis for the observed connection between isolation and overeating.

Li then found that a small group of brain cells known as P2 neurons were involved in the observed changes to sleep and feeding behavior. Shutting down the P2 neurons of chronically-isolated flies suppressed overeating and restored sleep; boosting P2 in flies isolated from the group for only one day caused them to eat and sleep as if they had been alone for a full week.

"We managed to trick the fly into thinking that it had been chronically isolated," says Wanhe Li. "The P2 neurons seem to be linked to the perception of the duration of social isolation, or the intensiveness of loneliness, like a timer counting down how long the fly has been alone."

The Young lab painstakingly confirmed these observations. They engineered insomniac flies, to make sure that lack of sleep alone did not cause overeating (it didn't). They tested group-reared flies to find out whether manipulating P2 neurons would cause overeating and sleep loss in socialized flies (it doesn't). Ultimately, they concluded that only a perfect storm of both P2 neuron activity and social isolation will cause flies to begin losing sleep and overeating.

Explaining the "Quarantine 15"

Scientists have observed that many social animals -- from fruit flies to humans -- eat more and sleep less when isolated. The reason for this is unclear. One possibility, Young says, is that social isolation signals a degree of uncertainty about the future. Preparation for tough times may include being alert and awake as often as possible and eating whenever food is available.

This study can hardly confirm that humans in COVID-19 lockdowns ate more and slept less due to the same biological mechanisms that keep lonely flies hungry and sleep deprived. But now that Li and Young have identified the neurons and genes responding to chronic isolation in fruit flies, future researchers can search for corresponding connections between loneliness, overeating, and insomnia in laboratory animals and, eventually, humans.

Read more at Science Daily

Genetic histories and social organization in Neolithic and Bronze Age Croatia

Present-day Croatia was an important crossroads for migrating peoples along the Danubian corridor and the Adriatic coast, linking east and west. "While this region is important for understanding population and cultural transitions in Europe, limited availability of human remains means that in-depth knowledge about the genetic ancestry and social complexity of prehistoric populations here remains sparse," says first author Suzanne Freilich, a researcher at the Max Planck Institute for the Science of Human History and the University of Vienna.

To this aim, an international team of researchers set out to fill the gap. They studied two archaeological sites in eastern Croatia -- one containing predominantly Middle Neolithic burials from within the settlement site, the other a Middle Bronze Age necropolis containing cremations and inhumations -- and sequenced whole genomes of 28 individuals from these two sites. The researchers' goal was to understand both the genetic ancestry as well as social organisation within each community -- in particular, to study local residency patterns, kinship relations and to learn more about the varied burial rites observed.

Middle Neolithic settlement at Popova zemlja

Dated to around 4,700-4,300 BCE, the Middle Neolithic settlement at Beli-Manastir Popova zemlja belongs to the Sopot culture. Many children, especially girls, were buried here, in particular along the walls of pit houses. "One question was whether individuals buried in the same buildings were biologically related to each other," says Suzanne Freilich.

"We found that individuals with different burial rites did not differ in their genetic ancestry, which was similar to Early Neolithic people. We also found a high degree of haplotype diversity and, despite the size of the site, no very closely related individuals," Freilich adds. This suggests that this community was part of a large, mainly exogamous population where people marry outside their kin group. Interestingly, however, the researchers also identified a few cases of endogamous mating practices, including two individuals who would have been the children of first cousins or equivalent, something rarely found in the ancient DNA record.

Middle Bronze Age necropolis at Jagodnjak-Krčevine

The second site the researchers studied was the Middle Bronze Age necropolis of Jagodnjak-Krčevine that belongs to the Transdanubian Encrusted Pottery Culture and dates to around 1,800-1,600 BCE. "This site contains burials that are broadly contemporaneous with some individuals from the Dalmatian coast, and we wanted to find out whether individuals from these different ecoregions carried similar ancestry," says Stephan Schiffels.

The researchers found that the people from Jagodnjak actually carried very distinct ancestry due to the presence of significantly more western European hunter-gatherer-related ancestry. This ancestry profile is present in a small number of other studied genomes from further north in the Carpathian Basin. These new genetic results support archaeological evidence that suggests a shared population history for these groups as well as the presence of trade and exchange networks.

"We also found that all male individuals at the site had identical Y chromosome haplotypes," says Freilich. "We identified two male first degree relatives, second degree and more distantly related males, while the one woman in our sample was unrelated. This points to a patrilocal social organisation where women leave their own home to join their husband's home." Contrary to the Middle Neolithic site at Popova zemlja, biological kinship was a factor for selection to be buried at this site. In addition the authors found evidence of rich infant graves that suggests they likely inherited their status or wealth from their families.

Filling the gap in the archaeogenetic record

This study helps to fill the gap in the archaeogenetic record for this region, characterising the diverse genetic ancestries and social organisations that were present in Neolithic and Bronze Age eastern Croatia. It highlights the heterogeneous population histories of broadly contemporaneous coastal and inland Bronze Age groups, and connections with communities further north in the Carpathian Basin. Furthermore, it sheds light on the subject of Neolithic intramural burials -- burials within a settlement -- that has been debated among archaeologists and anthropologists for some time. The authors show that at the site of Popova zemlja, this burial rite was not associated with biological kinship, but more likely represented age and sex selection related to Neolithic community belief systems.

Read more at Science Daily

Aug 17, 2021

Cracking a mystery of massive black holes and quasars with supercomputer simulations

At the center of galaxies, like our own Milky Way, lie massive black holes surrounded by spinning gas. Some shine brightly, with a continuous supply of fuel, while others go dormant for millions of years, only to reawaken with a serendipitous influx of gas. It remains largely a mystery how gas flows across the universe to feed these massive black holes.

UConn Assistant Professor of Physics Daniel Anglés-Alcázar, lead author on a paper published today in The Astrophysical Journal, addresses some of the questions surrounding these massive and enigmatic features of the universe by using new, high-powered simulations.

"Supermassive black holes play a key role in galaxy evolution and we are trying to understand how they grow at the centers of galaxies," says Anglés-Alcázar. "This is very important not just because black holes are very interesting objects on their own, as sources of gravitational waves and all sorts of interesting stuff, but also because we need to understand what the central black holes are doing if we want to understand how galaxies evolve."

Anglés-Alcázar, who is also an Associate Research Scientist at the Flatiron Institute Center for Computational Astrophysics, says a challenge in answering these questions has been creating models powerful enough to account for the numerous forces and factors that play into the process. Previous works have looked either at very large scales or the very smallest of scales, "but it has been a challenge to study the full range of scales connected simultaneously."

Galaxy formation, Anglés-Alcázar says, starts with a halo of dark matter that dominates the mass and gravitational potential in the area and begins pulling in gas from its surroundings. Stars form from the dense gas, but some of it must reach the center of the galaxy to feed the black hole. How does all that gas get there? For some black holes, this involves huge quantities of gas, the equivalent of ten times the mass of the sun or more swallowed in just one year, says Anglés-Alcázar.

"When supermassive black holes are growing very fast, we refer to them as quasars," he says. "They can have a mass well into one billion times the mass of the sun and can outshine everything else in the galaxy. How quasars look depends on how much gas they add per unit of time. How do we manage to get so much gas down to the center of the galaxy and close enough that the black hole can grab it and grow from there?"

The new simulations provide key insights into the nature of quasars, showing that strong gravitational forces from stars can twist and destabilize the gas across scales, and drive sufficient gas influx to power a luminous quasar at the epoch of peak galaxy activity.

In visualizing this series of events, it is easy to see the complexities of modeling them, and Anglés-Alcázar says it is necessary to account for the myriad components influencing black hole evolution.

"Our simulations incorporate many of the key physical processes, for example, the hydrodynamics of gas and how it evolves under the influence of pressure forces, gravity, and feedback from massive stars. Powerful events such as supernovae inject a lot of energy into the surrounding medium and this influences how the galaxy evolves, so we need to incorporate all of these details and physical processes to capture an accurate picture."

Building on previous work from the FIRE ("Feedback In Realistic Environments") project, Anglés-Alcázar explains that the new technique outlined in the paper greatly increases model resolution, allowing the gas to be followed as it flows across the galaxy with more than a thousand times better resolution than previously possible:

"Other models can tell you a lot of details about what's happening very close to the black hole, but they don't contain information about what the rest of the galaxy is doing, or even less, what the environment around the galaxy is doing. It turns out, it is very important to connect all of these processes at the same time, this is where this new study comes in."

The computing power required is similarly massive, Anglés-Alcázar says, with hundreds of central processing units (CPUs) running in parallel and a total computational cost that can easily run to millions of CPU hours.

"This is the first time that we have been able to create a simulation that can capture the full range of scales in a single model and where we can watch how gas is flowing from very large scales all the way down to the very center of the massive galaxy that we are focusing on."

For future studies of large statistical populations of galaxies and massive black holes, we need to understand the full picture and the dominant physical mechanisms for as many different conditions as possible, says Anglés-Alcázar.

Read more at Science Daily

Development and evolution of dolphin, whale blowholes

Modern cetaceans -- which include dolphins, whales and porpoises -- are well adapted for aquatic life. They have blubber to insulate and fins to propel and steer. Today's cetaceans also sport a unique type of nasal passage: It rises at an angle relative to the roof of the mouth -- or palate -- and exits at the top of the head as a blowhole.

This is an apt adaptation for an air-breathing animal at home in the water. Yet as embryos, the cetacean nasal passage starts out in a position more typical of mammals: parallel to the palate and exiting at the tip of the snout, or rostrum. Cetacean experts have long puzzled over how the nasal passage switches during embryonic and fetal development from a palate-parallel pathway to an angled orientation terminating in a blowhole.

"The shift in orientation and position of the nasal passage in cetaceans is a developmental process that's unlike any other mammal," said Rachel Roston, a postdoctoral researcher at the University of Washington School of Dentistry. "It's an interesting question to see what parts remain connected, what parts shift orientation and how might they work together through a developmental process to bring about this change."

New research by Roston and V. Louise Roth, a professor of biology at Duke University, is shedding light on this process. By measuring anatomical details of embryos and fetuses of pantropical spotted dolphins, they determined the key anatomical changes that flip the orientation of the nasal passage up. Their findings, published July 19 in the Journal of Anatomy, are an integrative model for this developmental transition for cetaceans.

"We discovered that there are three phases of growth, primarily in the head, that can explain how the nasal passage shifts in orientation and position," said lead author Roston, who began this study as a doctoral student at Duke.

The three phases of growth are:
 

  1. Initially parallel, the roof of the mouth and the nasal passage become separated as the area between them grows into a triangular shape. This phase begins during embryonic development after the face starts forming, which, for the pantropical spotted dolphin, is in the first two months after fertilization.
  2. The snout grows longer at an angle to the nasal passage, further separating the nostrils from the tip of the snout. This phase begins later in fetal development and may continue even after birth.
  3. The skull folds backward, and the head and body become more aligned. This rotates the nasal passage up so that it becomes nearly vertical relative to the body axis. This phase begins in late embryonic development and continues through fetal development.


"While the nose moves to the top of the head, many of the important angular changes are actually in the bottom, or base, of the skull. That's not necessarily something you'd expect to find!" said Roston.

The three phases of growth do not unfold in a step-by-step process, but instead overlap with each other temporally, Roston said. They represent distinct developmental transformations that, put together, shift the nasal passage to the top of the head.

Roston and Roth developed this model using anatomical data obtained from photographs and CT scans of 21 embryonic and fetal pantropical spotted dolphin specimens held by the Smithsonian Institution's National Museum of Natural History and the Natural History Museum of Los Angeles County. The specimens represented a wide range of embryonic and fetal development.

For comparison, they obtained data from eight fin whale fetuses, also at the National Museum of Natural History, and found significant differences between the two species. In fin whales, the folding occurs in a region at the back of the skull, near where the skull joins the vertebral column. In the pantropical spotted dolphin, the folding is centered near the middle of the skull.

The model Roston and Roth developed could inform how scientists view cetacean evolution. These creatures began to evolve from a four-legged, land-dwelling mammalian ancestor, which had a nasal passage parallel to the palate, more than 50 million years ago. As cetaceans evolved, the blowhole gradually migrated from the tip of the snout to the back of the snout, and then gradually up to the top of the skull.

In addition, the two species represent different branches of the cetacean family tree that diverged more than 30 million years ago. Dolphins -- along with porpoises, orcas, sperm whales and beaked whales -- are odontocetes, commonly known as toothed whales. Fin whales are from a group called the baleen whales, named for their distinct feeding apparatus.

"I'm struck by two interesting discoveries that emerged from this work," said Roth. "Although they both develop blowholes, there are key differences between a baleen and a toothed whale in how they reorient their nasal passages during development. Moreover, surprisingly, accompanying the processes of developing upwardly oriented nostrils there are profound changes within the braincase."

In the future, examining more species from both lineages could indicate whether all baleen and toothed whales differ in this manner, Roston said.

"This model gives us a hypothesis for the developmental steps that had to occur to make that anatomical transformation happen, and will serve as a point of comparison for additional studies of growth and development in whales, dolphins and porpoises," said Roston.

Read more at Science Daily

New theory of life’s multiple origins

The history of life on Earth has often been likened to a four-billion-year-old torch relay. One flame, lit at the beginning of the chain, continues to pass on life in the same form all the way down. But what if life is better understood by analogy with the eye, a convergent organ that evolved multiple times from independent origins? What if life evolved not just once, but multiple times independently?

In a new paper, published in the Journal of Molecular Evolution, Santa Fe Institute researchers Chris Kempes and David Krakauer argue that in order to recognize life's full range of forms, we must develop a new theoretical frame.

In their three-layered frame, Kempes and Krakauer call for researchers to consider, first, the full space of materials in which life could be possible; second, the constraints that limit the universe of possible life; and, third, the optimization processes that drive adaptation. In general, the framework considers life as adaptive information and adopts the analogy of computation to capture the processes central to life.

Several significant possibilities emerge when we consider life within the new framework. First, life originates multiple times -- some apparent adaptations are actually "a new form of life, not just an adaptation," explains Krakauer -- and it takes a far broader range of forms than conventional definitions allow.

Culture, computation, and forests are all forms of life in this frame. As Kempes explains, "human culture lives on the material of minds, much like multicellular organisms live on the material of single-celled organisms."

When researchers focus on the life traits of single organisms, they often neglect the extent to which organisms' lives depend upon entire ecosystems as their fundamental material, and also ignore the ways that a life system may be more or less living. Within the Kempes-Krakauer framework, by contrast, another implication appears: life becomes a continuum rather than a binary phenomenon. In this vein, the authors point to a variety of recent efforts that quantitatively place life on a spectrum.

By taking a broader view of life's principles, Kempes and Krakauer hope to generate more fertile theories for studying life. With clearer principles for finding life forms, and a new range of possible life forms that emerges from new principles, we'll not only clarify what life is, explains Krakauer, we'll also be better equipped "to build devices to find life," to create it in labs, and to recognize to what degree the life we see is living.

From Science Daily

Building bonds between males leads to more offspring for chimpanzees

If you're a male chimp looking for love -- or offspring -- it pays to make friends with other males.

A study led by the University of Michigan, in collaboration with Arizona State and Duke universities, examined why male chimpanzees form close relationships with each other, and found that male chimpanzees that build strong bonds with the alpha male of the group, or with a large network of other males, are more successful at siring offspring. The results are published in the journal iScience.

"One big question that biologists have had for a long time is why you see so many friendly behaviors such as cooperation and alliance in animals," said lead study author and U-M postdoctoral researcher Joseph Feldblum. "One would expect to see these social bonds -- or strong, friendly social relationships -- only if they provide some sort of fitness benefit to the individuals. Males wouldn't spend all this time grooming other males and forgoing trying to find females or food unless you get some kind of benefit from it."

One benefit would be the opportunity to sire more offspring, but no previous studies have looked at the link between social relationships and reproductive success in chimpanzees. Much of the research in this area has been done in female primates, who are primarily concerned with accessing resources in order to reproduce quickly. For males, the biggest task is getting reproductive access to females, says Feldblum, also an assistant professor in the U-M Department of Anthropology and member of the Michigan Society of Fellows.

"Chimps cooperate frequently, and often in these very dramatic ways: You see things like grooming, all kinds of complex alliance formation and group territorial defense," Feldblum said. "The question is: What do males get out of it and how?"

It turns out, they get babies.

One function of these social bonds, the researchers found, is to help males gain access to mating opportunities they wouldn't otherwise be able to get without help from their friends. To examine the link between sociality and paternity success, the researchers examined behavioral and genetic data from a population of chimpanzees living in western Tanzania. The group is part of the ongoing study of chimpanzees in Gombe Stream National Park, begun by Jane Goodall in 1960.

The researchers began by constructing a base model that captures the effects of male age, dominance rank and genetic relatedness to the mother on male siring success. They first used the model to look at 56 siring events with known paternity between 1980 and 2014. Then, they tested whether adding measures of male social bonds to the models improved their ability to predict which male would sire a given offspring.
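The paper's exact statistical specification isn't given here, but the general approach -- fit a base model, then test whether adding social-bond predictors improves it -- can be sketched roughly as below. Everything in this example (the synthetic data, the plain logistic regression, the effect sizes, the likelihood-ratio comparison) is an assumption for illustration, not the study's actual model or dataset.

```python
# Illustrative sketch only: synthetic data and ordinary logistic regression,
# not the study's model. It shows the idea of comparing a base model
# (age, rank, relatedness) with one that adds a social-bond measure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 600  # hypothetical candidate-male observations across siring events

df = pd.DataFrame({
    "age": rng.normal(20, 5, n),
    "rank": rng.uniform(0, 1, n),            # dominance rank score
    "relatedness": rng.uniform(0, 0.25, n),  # relatedness to the mother
    "strong_bonds": rng.poisson(2, n),       # number of strong association ties
})
# Simulate siring so that rank and bonds both matter (assumed effect sizes).
logit = -2 + 2.5 * df["rank"] + 0.3 * df["strong_bonds"] - 4 * df["relatedness"]
df["sired"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base = sm.Logit(df["sired"], sm.add_constant(df[["age", "rank", "relatedness"]])).fit(disp=0)
full = sm.Logit(df["sired"], sm.add_constant(df[["age", "rank", "relatedness", "strong_bonds"]])).fit(disp=0)

# Likelihood-ratio test: does adding the social-bond term improve the fit?
lr_stat = 2 * (full.llf - base.llf)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
print(f"AIC base = {base.aic:.1f}, AIC full = {full.aic:.1f}")
```

In the study itself, improvement was judged by whether the richer models better predicted which male sired a given offspring; here a likelihood-ratio test and AIC stand in for that comparison.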

They found that males with more strong association ties -- males with the highest number of social bonds with other males -- had a higher likelihood of siring offspring. In fact, two or more strong association ties meant a male chimpanzee was more than 50% more likely to sire a given offspring, after accounting for the chimp's age, relation to the mother and dominance rank score.

Next, the researchers wanted to understand how a chimp's relationship to the alpha male underpinned male reproductive success. To do this, they examined the role of strong bonds with the alpha male, looking at 45 siring events by non-alpha males. They generated the same base model, this time comparing the model with models that included several measures of bond strength with the alpha male, among other measures.

The model that fit best included what is called the composite sociality index, which includes grooming and association with the alpha male. It showed that subordinate males with strong bonds with the alpha male, as well as those with many strong association ties, were more likely to sire a given offspring.

"Sucking up to the boss is nothing new," said co-author Anne Pusey of Duke University. "We show that it's always paid off."

But the researchers also found that two factors -- a strong bond with the alpha male and many strong association ties -- both independently contributed to reproductive success.

In animal behavior, coalition formation is when two or more individuals jointly direct aggression toward a third or another group of individuals. According to previous research, individuals that are more central in the network of coalitions tend to rise in rank and sire more offspring.

In the current work, the researchers showed that males who form stronger ties are also more likely to form coalitions, and they hypothesize that this larger alliance network helps males gain mating opportunities. They also found that forming many strong bonds helps chimps rise in rank within the group; those that made it to the alpha position were also more likely to sire offspring.

A clearer idea of the benefits of social relationships in chimpanzees provides clues about the evolution of friendship in humans.

"Together with bonobos, chimpanzees are our closest living relatives, and help us to identify which features of human social life are unique. This study suggests that strong bonds among males have deep evolutionary roots and provided the foundation for the more complex relationships that we see in humans," said senior author Ian Gilby, a researcher at ASU. "This research also highlights the value of long-term studies like these, which are essential for understanding the biology of a species that lives for many decades and is slow to reproduce."

Feldblum says more research is needed to tease out how coalition formations and these social bonds lead to siring success.

"Is it that if your ally is nearby, you're more likely to mate with an estrus female, or does having your allies around you protect you from harassment from other males?" Feldblum said. "Or because your ally will support you if a conflict erupts, your stress levels are lower and you can devote more energy to mating efforts? This last step we still don't know."

Gilby is an associate professor at the School of Human Evolution and Social Change at ASU, and a research affiliate in the Institute of Human Origins, which curates the data used in this study. Pusey, professor emerita at Duke, has spent the last 30 years of her career assembling, organizing and digitizing this unique dataset.

Read more at Science Daily

Mutated enzyme weakens connection between brain cells that help control movement

In one type of a rare, inherited genetic disorder that affects control of body movement, scientists have found that a mutation in an enzyme impairs communication between neurons, undermining what should be the inherent ability to pick up our pace when we need to run, rather than walk, across the street.

The disorder is spinocerebellar ataxia, or SCA, a neurodegenerative condition resulting from different genetic mutations whose debilitating bottom line can include ataxia -- loss of control of body movement -- and atrophy of the cerebellum, a small part of the brain jam-packed with neurons, which coordinates movement and balance, says Dr. Ferenc Deak, neuroscientist at the Medical College of Georgia at Augusta University.

The enzyme is ELOVL4, which produces very long chain fatty acids, and its mutation is known to cause SCA type 34. Animal models with this SCA type show problems with motor control by two months of age, and scientists from MCG and the University of Oklahoma Health Sciences Center wanted to know precisely why.

"We found a dramatically diminished synaptic response. The information was to go faster, go faster and they never really got the message," Deak, co-corresponding author of the study in the journal Molecular Neurobiology, says of these communication connections between neurons. "They were transmitting the signal, but when they had to adjust their synaptic connection to coordinate the different movement, that did not happen in the mutant knock-in rat," he says of the SCA34 model generated using the gene editing technique CRISPR cas9.

Despite the different gene mutations that are causative in SCA, a common bottom line appears to be altered output of the cerebellum and an impact on Purkinje cells, big brain cells in the cerebellum that can receive about 100 times the input of typical neurons. These big cells are also exclusively inhibitory, shutting down signals that would interfere with something like a muscle being activated. Loss of these key cells is clear in many forms of SCA, Deak says.

Much like an air traffic controller at a busy airport, these big brain cells obviously monitor a lot of different input simultaneously, and they are the only neuron sending out messages from that part of the brain.

Purkinje cells get a lot of their input from granule cells, one of the smallest neurons in the brain but largest in number. Both cell types express a lot of ELOVL4 and also depend on the enzyme, Deak says. ELOVL4 was known to be important to the communication between these and other cells, but why remained elusive.

The new studies found that the ELOVL4 mutation significantly reduced the ability of the synapses carrying messages to and from Purkinje cells to strengthen their signaling, a capacity that is essential here for coordinating movement -- for example, speeding up your pace when needed or moving your hands about on command.

Their findings point to the essential role of ELOVL4 in motor function and synaptic plasticity, Deak says.

They also suggest that patients with SCA34 have an impairment and asynchrony in the communication between key neurons in the cerebellum well before their brain shows clear signs of degeneration.

Deak notes that over time, the impaired responses between these constantly communicating cells may lead to the degeneration of the cerebellum often found in patients when they first go to their doctor with complaints about problems with walking, talking and other movement.

But in their model of SCA34, the structure of the cerebellum looked normal up to six months of age, even though the animal models clearly had the expected motor deficits, the scientists report.

They found that the synapses were also intact and functioning at a basic level that enables the rats to, for example, walk normally, but in these knock-in rats the usual plasticity, or flexibility, was lacking. Rather, synapses in the mutants could not increase their signaling to make that transition.

ELOVL4 can make both saturated and unsaturated very long chain fatty acids -- dubbed "long" because of the large number of carbon atoms they contain -- depending on which tissue the enzyme is in. In the cerebellum, it enables Purkinje and granule cells to make saturated very long chain fatty acids, which were known to be important to synaptic function, Deak says. But exactly how they are important was an unknown.

The scientists think the weakened synaptic responsiveness they found is a quantity problem: the mutated enzyme makes about 70% of the usual amount of very long chain fatty acids, which appears to be the threshold for gait problems. If the cells produced none, it would result in excessive seizures and death as Deak has seen in other models.

Their current research includes finding ways to deliver more saturated very long chain fatty acids to the brain. The scientists have a patent pending on one way to make this maneuver, a task made tougher by the fact that saturated very long chain fatty acids have the consistency of candle wax, Deak says, and the rats don't even digest them -- they simply pass through.

Very long chain fatty acids are essential to life but their exact roles are mostly elusive, the scientists say.

"What we know from our work is that they are a very important component for certain cell membranes," Deak says, like the membranes of some excitatory and inhibitory neurons as well as skin cells. In fact, the scientists have shown that when ELOVL4 is missing in the skin, body fluids seep through the skin, our largest natural barrier. In generating other ELOVL4 mutant mice models they had to overexpress ELOVL4 specifically in the skin to enable survival, Deak said.

Deak's research has shown that these saturated very long chain fatty acids also like to accumulate and strengthen vesicles, tiny traveling compartments that can move about inside cells, so they are better able to get to their destination before they fuse with a cell membrane. Fusing is necessary for neurotransmission -- one of the things vesicles in the brain transport is chemical messengers called neurotransmitters -- but unregulated fusion is bad.

The scientists documented the impact of such unregulated fusion when they found that mice with two mutant copies of ELOVL4 died of seizures. While they were making these findings in the laboratory, there were reports out of Saudi Arabia about children with the same mutations and issues, he says. In fact, it was Deak's research interest in seizures that prompted his pursuit of a better understanding of the roles of very long chain fatty acids. He suspects they have a role in his other interest as well: the aging brain and Alzheimer's.

A team led by Dr. Martin-Paul Agbaga created the "knock-in" rat model of the human condition SCA34, which has been identified in one French-Canadian family and three Japanese families. In these individuals, skin problems can surface anywhere from shortly after birth to adolescence, and typically progressive movement problems start to appear in their 30s.

Agbaga, co-corresponding author of the new paper and one of Deak's longtime collaborators, is a vision scientist and cell biologist at the University of Oklahoma Health Sciences Center Department of Ophthalmology and Dean McGee Eye Institute.

Dr. Robert E. Anderson, professor of vision research at the Dean McGee Eye Institute, founder of the ELOVL4 research group in Oklahoma, and a worldwide leader of research on lipid pathophysiology in the retina, is a coauthor on the paper. Deak came to MCG from the University of Oklahoma Health Sciences Center last year.

Read more at Science Daily

Aug 16, 2021

Nearby star-forming region yields clues to the formation of our solar system

A region of active star formation in the constellation Ophiuchus is giving astronomers new insights into the conditions in which our own solar system was born. In particular, a new study of the Ophiuchus star-forming complex shows how our solar system may have become enriched with short-lived radioactive elements.

Evidence of this enrichment process has been around since the 1970s, when scientists studying certain mineral inclusions in meteorites concluded that they were pristine remnants of the infant solar system and contained the decay products of short-lived radionuclides. These radioactive elements could have been blown onto the nascent solar system by a nearby exploding star (a supernova) or by the strong stellar winds from a type of massive star known as a Wolf-Rayet star.

The authors of the new study, published August 16 in Nature Astronomy, used multi-wavelength observations of the Ophiuchus star-forming region, including spectacular new infrared data, to reveal interactions between the clouds of star-forming gas and radionuclides produced in a nearby cluster of young stars. Their findings indicate that supernovas in the star cluster are the most likely source of short-lived radionuclides in the star-forming clouds.

"Our solar system was most likely formed in a giant molecular cloud together with a young stellar cluster, and one or more supernova events from some massive stars in this cluster contaminated the gas which turned into the sun and its planetary system," said coauthor Douglas N. C. Lin, professor emeritus of astronomy and astrophysics at UC Santa Cruz. "Although this scenario has been suggested in the past, the strength of this paper is to use multi-wavelength observations and a sophisticated statistical analysis to deduce a quantitative measurement of the model's likelihood."

First author John Forbes at the Flatiron Institute's Center for Computational Astrophysics said data from space-based gamma-ray telescopes enable the detection of gamma rays emitted by the short-lived radionuclide aluminum-26. "These are challenging observations. We can only convincingly detect it in two star-forming regions, and the best data are from the Ophiuchus complex," he said.

The Ophiuchus cloud complex contains many dense protostellar cores in various stages of star formation and protoplanetary disk development, representing the earliest stages in the formation of a planetary system. By combining imaging data in wavelengths ranging from millimeters to gamma rays, the researchers were able to visualize a flow of aluminum-26 from the nearby star cluster toward the Ophiuchus star-forming region.

"The enrichment process we're seeing in Ophiuchus is consistent with what happened during the formation of the solar system 5 billion years ago," Forbes said. "Once we saw this nice example of how the process might happen, we set about trying to model the nearby star cluster that produced the radionuclides we see today in gamma rays."

Forbes developed a model that accounts for every massive star that could have existed in this region, including its mass, age, and probability of exploding as a supernova, and incorporates the potential yields of aluminum-26 from stellar winds and supernovas. The model enabled him to determine the probabilities of different scenarios for the production of the aluminum-26 observed today.
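The article doesn't describe the model's internals, but the flavor of this kind of population-level probabilistic accounting can be illustrated with a toy Monte Carlo: sample a hypothetical cluster of massive stars, decide which have already exploded, assign rough aluminum-26 contributions from winds versus supernovas, and count how often supernovas dominate. All the numbers below (cluster size, lifetimes, yields, the mass-function slope) are placeholders for illustration, not values from the paper, and the sketch makes no attempt to reproduce the reported probabilities.

```python
# Toy Monte Carlo in the spirit of this kind of population analysis -- not
# Forbes's actual model. Yields, lifetimes, and the IMF slope are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20_000

def sample_masses(n, m_min=8.0, m_max=120.0, alpha=2.35):
    """Draw massive-star masses (solar masses) from a Salpeter-like power law."""
    u = rng.uniform(size=n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

sn_dominated = 0
for _ in range(n_trials):
    n_stars = rng.poisson(40)                  # hypothetical cluster size
    if n_stars == 0:
        continue
    masses = sample_masses(n_stars)
    ages = rng.uniform(0.0, 10.0, n_stars)     # Myr since formation (assumed)
    lifetimes = 30.0 * (masses / 8.0) ** -0.8  # crude lifetime scaling (assumed)
    exploded = ages > lifetimes                # stars past their lifetime have gone supernova
    # Placeholder Al-26 contributions: winds scale steeply with mass,
    # each supernova contributes a fixed lump sum.
    wind_yield = np.sum(1e-5 * (masses / 40.0) ** 3 * np.minimum(ages / lifetimes, 1.0))
    sn_yield = np.sum(exploded) * 5e-5
    if sn_yield > wind_yield:
        sn_dominated += 1

print(f"Fraction of realizations where supernovae dominate: {sn_dominated / n_trials:.2f}")
```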

"We now have enough information to say that there is a 59 percent chance it is due to supernovas and a 68 percent chance that it's from multiple sources and not just one supernova," Forbes said.

This type of statistical analysis assigns probabilities to scenarios that astronomers have been debating for the past 50 years, Lin noted. "This is the new direction for astronomy, to quantify the likelihood," he said.

The new findings also show that the amount of short-lived radionuclides incorporated into newly forming star systems can vary widely. "Many new star systems will be born with aluminum-26 abundances in line with our solar system, but the variation is huge -- several orders of magnitude," Forbes said. "This matters for the early evolution of planetary systems, since aluminum-26 is the main early heating source. More aluminum-26 probably means drier planets."

The infrared data, which enabled the team to peer through dusty clouds into the heart of the star-forming complex, was obtained by coauthor João Alves at the University of Vienna as part of the European Southern Observatory's VISION survey of nearby stellar nurseries using the VISTA telescope in Chile.

"There is nothing special about Ophiuchus as a star formation region," Alves said. "It is just a typical configuration of gas and young massive stars, so our results should be representative of the enrichment of short-lived radioactive elements in star and planet formation across the Milky Way."

Read more at Science Daily

Cities are making mammals bigger

A new study shows urbanization is causing many mammal species to grow bigger, possibly because of readily available food in places packed with people.

The finding runs counter to many scientists' hypothesis that cities would trigger mammals to get smaller over time. Buildings and roads trap and re-emit more heat than green landscapes, causing cities to have higher temperatures than their surroundings, a phenomenon known as the urban heat island effect. Animals in warmer climates tend to be smaller than members of the same species in colder environments, a classic biological principle called Bergmann's Rule.

But Florida Museum of Natural History researchers discovered an unexpected pattern when they analyzed nearly 140,500 measurements of body length and mass from more than 100 North American mammal species collected over 80 years: City-dwelling mammals are both longer and heftier than their rural counterparts.

"In theory, animals in cities should be getting smaller because of these heat island effects, but we didn't find evidence for this happening in mammals," said study lead author Maggie Hantak, a Florida Museum postdoctoral researcher. "This paper is a good argument for why we can't assume Bergmann's Rule or climate alone is important in determining the size of animals."

Hantak and her collaborators created a model that examined how climate and the density of people living in a given area -- a proxy for urbanization -- influence the size of mammals. As temperatures dropped, both body length and mass increased in most mammal species studied, evidence of Bergmann's Rule at work, but the trend was stronger in areas with more people.
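As a rough illustration of the kind of model described -- body size regressed on climate and an urbanization proxy -- here is a minimal sketch on synthetic data. The variable names, effect sizes, and the simple ordinary-least-squares form are assumptions for illustration; the study's actual analysis covered more than 100 species and far more structure.

```python
# Minimal sketch of the kind of model described above, on synthetic data --
# not the authors' analysis. Variable names and effect sizes are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "temperature": rng.normal(12, 8, n),        # mean annual temperature, deg C
    "log_pop_density": rng.normal(3, 2, n),     # log human population density (urbanization proxy)
})
# Simulate body mass so that it falls with temperature (Bergmann's Rule)
# and rises with urbanization, as the study reports.
df["log_body_mass"] = (5.0
                       - 0.02 * df["temperature"]
                       + 0.03 * df["log_pop_density"]
                       + rng.normal(0, 0.3, n))

model = smf.ols("log_body_mass ~ temperature + log_pop_density", data=df).fit()
print(model.params)  # negative temperature slope, positive urbanization slope
```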

Surprisingly, mammals in cities generally grew larger regardless of temperature, suggesting urbanization rivals or exceeds climate in driving mammal body size, said Robert Guralnick, Florida Museum curator of biodiversity informatics.

"That wasn't what we expected to find at all," he said. "But urbanization represents this new disturbance of the natural landscape that didn't exist thousands of years ago. It's important to recognize that it's having a huge impact."

About a decade ago, scientists began to raise the alarm that warmer temperatures brought by climate change are causing many animal species to grow smaller over time. While many of the consequences of shifting body size are unknown, researchers cautioned that smaller animals may have smaller or fewer offspring, creating a feedback loop, and shrinking prey could also put pressure on meat-eaters to find more resources.

Guralnick and Hantak said they hope their findings will lead more researchers to add urbanization to their analyses of changing body size.

"When we think about what's going to happen to mammalian body size over the next 100 years, a lot of people frame that as global warming causing animals to get smaller," Guralnick said. "What if that isn't the biggest effect? What if it's that urbanization is going to lead to fatter mammals?"

Not all animals respond to human-induced environmental changes in the same way, Hantak added. The researchers also investigated how the effects of climate and urbanization may be tempered or amplified by the behavior and habits of certain species.

They found animals that use hibernation or torpor, a temporary way of slowing metabolic rate and dropping body temperature, shrank more dramatically in response to increases in temperature than animals without these traits. The finding could have important implications for conservation efforts, Hantak said.

"We thought species that use torpor or hibernation would be able to hide from the effects of unfavorable temperatures, but it seems they're actually more sensitive," she said.

While cities radically transform the landscape, they provide animals with new opportunities as well as threats, Guralnick said. The abundance of food, water and shelter and relative lack of predators in cities may help certain species succeed in comparison with their neighbors in rural areas. The results of the 2020 U.S. Census show that almost all human population growth over the past decade has occurred in the nation's metro areas. As urbanization ramps up, animals could be divided into "winners and losers," and mammal distributions may shift, he said.

"Animals that like living in urban environments could have a selective advantage while other species may lose out because of the continued fragmentation of landscapes," Guralnick said. "This is relevant to how we think about managing suburban and urban areas and our wildlands in 100 years."

While bigger is often better biologically, the long-term consequences to urban mammals of eating a diet of human food waste have yet to be determined, Hantak said.

"When you change size, it could change your whole lifestyle," she said.

Hantak and her collaborators were able to conduct the study thanks to thousands of measurements collected by natural historians in the field and museums. The research team used information from three databases: VertNet, the National Science Foundation's National Ecological Observatory Network (NEON) and the North American Census of Small Mammals (NASCM). Cumulatively, this data offers a broadscale view of how increasing urbanization is impacting mammals with very different life histories, from wolves, bobcats and deer to bats, shrews and rodents, Guralnick said.

"Museum collections have the power to tell us stories about the natural world," he said. "Because we have these collections, we can ask questions about what mammals looked like before humans dominated the landscape. Digitizing specimen data unlocks these resources so that everyone can make discoveries about our planet."

Read more at Science Daily

Pollinators: First global risk index for species declines and effects on humanity

Disappearing habitats and use of pesticides are driving the loss of pollinator species around the world, posing a threat to "ecosystem services" that provide food and wellbeing to many millions -- particularly in the Global South -- as well as billions of dollars in crop productivity.

This is according to an international panel of experts, led by the University of Cambridge, who used available evidence to create the first planetary risk index of the causes and effects of dramatic pollinator declines in six global regions.

The bees, butterflies, wasps, beetles, bats, flies and hummingbirds that distribute pollen, vital for the reproduction of over 75% of food crops and flowering plants -- including coffee, rapeseed and most fruits -- are visibly diminishing the world over, yet little is known of the consequences for human populations.

"What happens to pollinators could have huge knock-on effects for humanity," said Dr Lynn Dicks from Cambridge's Department of Zoology. "These small creatures play central roles in the world's ecosystems, including many that humans and other animals rely on for nutrition. If they go, we may be in serious trouble."

Dicks assembled a 20-strong team of scientists and indigenous representatives to attempt an initial evaluation of the drivers and risks for pollinator declines worldwide. The research is published today in Nature Ecology & Evolution.

The top three global causes of pollinator loss are habitat destruction, followed by land management -- primarily the grazing, fertilizers and crop monoculture of farming -- and then widespread pesticide use, according to the study. The effect of climate change comes in at number four, although data are limited.

Perhaps the biggest direct risk to humans across all regions is "crop pollination deficit": falls in quantity and quality of food and biofuel crops. Experts ranked the risk of crop yield "instability" as serious or high across two-thirds of the planet -- from Africa to Latin America -- where many rely directly on pollinated crops through small-holder farming.

"Crops dependent on pollinators fluctuate more in yield than, for example, cereals," said Dicks. "Increasingly unusual climatic phenomena, such as extreme rainfall and temperature, are already affecting crops. Pollinator loss adds further instability -- it's the last thing people need."

A major 2016 report to which Lynn Dicks contributed suggested there has been up to a 300% increase in pollinator-dependent food production over the past half century, with an annual market value that may be as much as US$577 billion.

Reduced species diversity was seen as a high-ranking global risk to humans, one that threatens not only food security but also a loss of "aesthetic and cultural value." These species have been emblems of nature for millennia, the experts argue, and too little consideration is given to how their declines affect human wellbeing.

"Pollinators have been sources of inspiration for art, music, literature and technology since the dawn of human history," said Dicks. "All the major world religions have sacred passages about bees. When tragedy struck Manchester in 2017, people reached for bees as a symbol of community strength."

"Pollinators are often the most immediate representatives of the natural world in our daily lives. These are the creatures that captivate us early in life. We notice and feel their loss. Where are the clouds of butterflies in the late summer garden, or the myriad moths fluttering in through open windows at night?"

"We are in the midst of a species extinction crisis, but for many people that is intangible. Perhaps pollinators are the bellwether of mass extinction," said Dicks.

Loss of access to "managed pollinators" such as industrial beehives was ranked as a high risk to North American society, where they boost crops including apples and almonds, and have suffered serious declines from disease and 'colony collapse disorder'.

The impact of pollinator decline on wild plants and fruits was viewed as a serious risk in Africa, Asia-Pacific and Latin America -- regions with many low-income countries where rural populations rely on wild-growing foods.

In fact, Latin America was viewed as the region with most to lose. Insect-pollinated crops such as cashew, soybean, coffee and cocoa are essential to regional food supply and international trade right across the continent. It is also home to large indigenous populations reliant on pollinated plants, with pollinator species such as hummingbirds embedded in oral culture and history.

Asia Pacific was another global region where pollinator decline was perceived to pose serious risks to human well-being. China and India are increasingly reliant on fruit and vegetable crops that need pollinators, some of which now require people to pollinate by hand.

The researchers caution that not enough is known about the state of pollinator populations in the Global South, as evidence of decline is still primarily from wealthy regions such as Europe (where at least 37% of bee and 31% of butterfly species are in decline). Pollination deficits and biodiversity loss were seen as the biggest risks to Europeans, with potential to affect crops ranging from strawberries to oilseed rape.

Dr Tom Breeze, co-author and Ecological Economics Research Fellow at the University of Reading, said: "This study highlights just how much we still don't know about pollinator decline and the impacts this has on human societies, particularly in parts of the developing world."

Read more at Science Daily

Having a good listener improves your brain health

Supportive social interactions in adulthood are important for your ability to stave off cognitive decline despite brain aging or neuropathological changes such as those present in Alzheimer's disease, a new study finds.

In the study, published August 16 in JAMA Network Open, researchers observed that simply having someone available most or all of the time whom you can count on to listen when you need to talk is associated with greater cognitive resilience. Cognitive resilience is a measure of the brain's ability to function better than would be expected given the amount of aging- or disease-related change it has undergone, and many neurologists believe it can be boosted by engaging in mentally stimulating activities, physical exercise, and positive social interactions.

"We think of cognitive resilience as a buffer to the effects of brain aging and disease," says lead researcher Joel Salinas, MD, the Lulu P. and David J. Levidow Assistant Professor of Neurology at NYU Grossman School of Medicine and member of the Department of Neurology's Center for Cognitive Neurology. "This study adds to growing evidence that people can take steps, either for themselves or the people they care about most, to increase the odds they'll slow down cognitive aging or prevent the development of symptoms of Alzheimer's disease -- something that is all the more important given that we still don't have a cure for the disease."

An estimated 5 million Americans are living with Alzheimer's disease, a progressive condition that affects mostly those over 65 and interferes with memory, language, decision-making, and the ability to live independently. Salinas says that while the disease usually affects an older population, the results of this study indicate that people younger than 65 would benefit from taking stock of their social support. For every unit of decline in brain volume, individuals in their 40s and 50s with low listener availability had a cognitive age that was four years older than those with high listener availability.

"These four years can be incredibly precious. Too often we think about how to protect our brain health when we're much older, after we've already lost a lot of time decades before to build and sustain brain-healthy habits," says Salinas. "But today, right now, you can ask yourself if you truly have someone available to listen to you in a supportive way, and ask your loved ones the same. Taking that simple action sets the process in motion for you to ultimately have better odds of long-term brain health and the best quality of life you can have."

Salinas also recommends that physicians consider adding this question to the standard social history portion of a patient interview: asking patients whether they have access to someone they can count on to listen to them when they need to talk. "Loneliness is one of the many symptoms of depression, and has other health implications for patients," says Salinas. "These kinds of questions about a person's social relationships and feelings of loneliness can tell you a lot about a patient's broader social circumstances, their future health, and how they're really doing outside of the clinic."

How the Study Was Conducted

Researchers used one of the longest running and most closely monitored community-based cohorts in the U.S., the Framingham Heart Study (FHS), as the source of their study's 2,171 participants, with an average age of 63. FHS participants self-reported information on the availability of supportive social interactions including listening, good advice, love and affection, sufficient contact with people they're close with, and emotional support.

Study participants' cognitive resilience was measured as the relative effect of total cerebral brain volume on global cognition, using MRI scans and neuropsychological assessments taken as part of the FHS. Lower brain volumes tend to be associated with lower cognitive function, and in this study the researchers examined how individual forms of social support modified the relationship between cerebral volume and cognitive performance.
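Conceptually, this is a moderation analysis: does social support change the slope relating brain volume to cognition? A minimal sketch of that idea, on simulated data rather than the Framingham data and without the study's covariates, might look like the following; the interaction term is what carries the "resilience" signal.

```python
# Rough sketch of a moderation (interaction) analysis like the one described,
# using synthetic data -- not the FHS data or the study's exact specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2171  # matches the reported sample size, but all values below are simulated
df = pd.DataFrame({
    "brain_volume": rng.normal(0, 1, n),        # standardized total cerebral volume
    "listener_avail": rng.binomial(1, 0.6, n),  # 1 = listener available most/all of the time
})
# Simulate cognition so that lower volume hurts less when a listener is available.
df["cognition"] = (0.4 * df["brain_volume"]
                   - 0.25 * df["brain_volume"] * df["listener_avail"]
                   + 0.1 * df["listener_avail"]
                   + rng.normal(0, 0.5, n))

fit = smf.ols("cognition ~ brain_volume * listener_avail", data=df).fit()
# The brain_volume:listener_avail coefficient captures the "buffering" effect:
# how much listener availability weakens the volume-cognition relationship.
print(fit.params)
```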

Individuals with greater availability of one specific form of social support showed higher cognitive function relative to their total cerebral volume. That key form of support, listener availability, was strongly associated with greater cognitive resilience.

Researchers note that further study of individual social interactions may improve understanding of the biological mechanisms that link psychosocial factors to brain health. "While there is still a lot that we don't understand about the specific biological pathways between psychosocial factors like listener availability and brain health, this study gives clues about concrete, biological reasons why we should all seek good listeners and become better listeners ourselves," says Salinas.

Read more at Science Daily