Jan 28, 2021

Eyes reveal life history of fish

 If you look deep into the eyes of a fish, it will tell you its life story.

Scientists from the University of California, Davis, demonstrate that they can use stable isotopic analysis of the eye lenses of freshwater fish -- including threatened and endangered salmon -- to reveal a fish's life history and what it ate along the way.

They conducted their study, published today in the journal Methods in Ecology and Evolution, through field-based experiments in California's Central Valley. The study carries implications for managing floodplains, fish and natural resources; prioritizing habitat restoration efforts; and understanding how landscape disturbances impact fish.

The technique had previously been used in marine environments, but this is its first use for freshwater fish, many of which are threatened or endangered in California. Lead author Miranda Bell Tilcock, an assistant specialist with the UC Davis Center for Watershed Sciences, helped pioneer the technique for freshwater fish.

"Even the nerdiest fish biologists say, 'You can do what with fish eyes?'" said co-author and team co-lead Rachel Johnson, a research fisheries biologist with NOAA Fisheries' Southwest Fisheries Science Center and associate with the UC Davis Center for Watershed Sciences. "This is an exciting new tool we can use to measure the value of different habitats and focus conservation work."

THE EYES HAVE IT

Much like tree rings, fish eyeballs are archival. The lenses grow in layers throughout a fish's life, recording as chemical signatures the habitats used while each layer was forming and locking in the dietary value of what the fish ate in each habitat.

"It's like a little diet journal the fish keeps for us, which is really nice," Tilcock said.

To uncover that history, researchers perform what Tilcock said is "like peeling the world's tiniest onion." With fine-tipped forceps, they remove layer after layer, revealing a veritable Russian nesting doll of eye lenses. At the end is a tiny ball, like what you'd find in a silica packet, that can shatter like glass. This is the core, where the fish's eyes first began to develop.

Relative to other archival tissue, fish eyeballs are especially rich in protein. The isotopic values in the food webs bind to protein in the eye, leaving tell-tale geochemical fingerprints that isotopic analysis can uncover.

HABITAT IN THE EYES OF THE BEHOLDER

The first field-based experiments using the technique for freshwater fish took place on the Yolo Bypass of California's Central Valley. Here, fall-run juvenile Chinook salmon grew in three distinct food webs: river, floodplain and hatchery.

Scientists then conducted stable isotope analyses on the eye lenses of an adult salmon to reveal its diet history from birth to death. Stable isotopes are forms of atoms that don't decay into other elements and are incorporated into a fish's tissue through its diet. They can be used to trace origins, food webs and migratory patterns of species.

Taking the premise of "you are what you eat," the study's authors looked at the chemical crumbs of carbon, nitrogen and sulfur values in the eye lenses to determine which food webs and habitats the fish used at various life stages.
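In practice, each peeled lens layer yields a set of isotope values that can be compared against the isotopic signatures of the candidate food webs. The sketch below is a toy illustration of that matching step, using invented end-member values and a simple nearest-signature rule; it is not the statistical approach used in the study.

    # Toy illustration: assign each eye-lens layer to a habitat by comparing its
    # carbon/nitrogen/sulfur values to hypothetical habitat "end-member" signatures.
    # All numbers are invented, not measurements from the study.

    HABITAT_SIGNATURES = {                 # (d13C, d15N, d34S) in per mil
        "river":      (-28.0,  9.0,  4.0),
        "floodplain": (-32.0, 11.0, -2.0), # e.g. rice-field decomposition shifts S and C
        "hatchery":   (-22.0, 13.0,  8.0),
    }

    def classify_layer(layer):
        """Return the habitat whose signature is closest (Euclidean) to the layer's values."""
        def distance(signature):
            return sum((a - b) ** 2 for a, b in zip(layer, signature)) ** 0.5
        return min(HABITAT_SIGNATURES, key=lambda name: distance(HABITAT_SIGNATURES[name]))

    # Layers listed from the lens core (early life) outward (capture); invented values.
    lens_layers = [(-22.5, 12.8, 7.5), (-31.5, 11.2, -1.5), (-27.8, 9.3, 3.8)]
    print([classify_layer(layer) for layer in lens_layers])
    # expected output: ['hatchery', 'floodplain', 'river']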

They found that fish on the floodplain grew quickly and appeared to grow additional laminae, or layers of lenses, during the 39-day study compared to fish reared in the river or hatchery. Also, the Yolo Bypass is home to rice fields, which decompose to add unique sulfur and carbon values -- a strong clue for researchers tracing which habitats fish use.

"This tool is not just unique to salmon in the Central Valley," Tilcock said. "There are many migratory species all over the world that need freshwater habitat. If you can isolate their habitat and value for diet, you can quantify it for long-term success."

For example, co-author and team co-leader Carson Jeffres, field and lab director at UC Davis' Center for Watershed Sciences, used the technique recently on fish in Brazil to look at changes in the food web there following a dam's construction.

EYES AND EARS WORK TOGETHER

Tilcock, Johnson and Jeffres are part of an "Eyes and Ears" project at UC Davis funded by the California Department of Fish and Wildlife. The project studies fish life history through eye lenses and otoliths, which are found within a fish's ears.

"You use the otolith to trace the river or hatchery where a fish was born based on the unique geology and water chemistry of the tributaries in the San Francisco Bay watershed," Johnson said. "Then you have the eye lens, which tells you where it's eating to help identify floodplain habitats."

"They really work together to present a fuller picture of how salmon move and what they eat as they use different mosaics of habitats across the landscape over their lifetime" said Jeffres. "Now we have the tool we have been looking for to link juvenile floodplain benefits across the salmon life cycle to adulthood. It's the holy grail of measuring restoration success."

Read more at Science Daily

Mira's last journey: Exploring the dark universe

 A team of physicists and computer scientists from the U.S. Department of Energy's (DOE) Argonne National Laboratory performed one of the five largest cosmological simulations ever. Data from the simulation will inform sky maps to aid leading large-scale cosmological experiments.

The simulation, called the Last Journey, follows the distribution of mass across the universe over time -- in other words, how gravity causes a mysterious invisible substance called "dark matter" to clump together to form larger-scale structures called halos, within which galaxies form and evolve.

The scientists performed the simulation on Argonne's supercomputer Mira. The same team of scientists ran a previous cosmological simulation called the Outer Rim in 2013, just days after Mira turned on. After running simulations on the machine throughout its seven-year lifetime, the team marked Mira's retirement with the Last Journey simulation.

The Last Journey demonstrates how far observational and computational technology has come in just seven years, and it will contribute data and insight to experiments such as the Stage-4 ground-based cosmic microwave background experiment (CMB-S4), the Legacy Survey of Space and Time (carried out by the Rubin Observatory in Chile), the Dark Energy Spectroscopic Instrument and two NASA missions, the Roman Space Telescope and SPHEREx.

"We worked with a tremendous volume of the universe, and we were interested in large-scale structures, like regions of thousands or millions of galaxies, but we also considered dynamics at smaller scales," said Katrin Heitmann, deputy division director for Argonne's High Energy Physics (HEP) division.

The code that constructed the cosmos

The six-month span for the Last Journey simulation and major analysis tasks presented unique challenges for software development and workflow. The team adapted some of the same code used for the 2013 Outer Rim simulation with some significant updates to make efficient use of Mira, an IBM Blue Gene/Q system that was housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

Specifically, the scientists used the Hardware/Hybrid Accelerated Cosmology Code (HACC) and its analysis framework, CosmoTools, to enable incremental extraction of relevant information at the same time as the simulation was running.

"Running the full machine is challenging because reading the massive amount of data produced by the simulation is computationally expensive, so you have to do a lot of analysis on the fly," said Heitmann. "That's daunting, because if you make a mistake with analysis settings, you don't have time to redo it."

The team took an integrated approach to carrying out the workflow during the simulation. HACC would run the simulation forward in time, determining the effect of gravity on matter during large portions of the history of the universe. Once HACC determined the positions of trillions of computational particles representing the overall distribution of matter, CosmoTools would step in to record relevant information -- such as finding the billions of halos that host galaxies -- to use for analysis during post-processing.

"When we know where the particles are at a certain point in time, we characterize the structures that have formed by using CosmoTools and store a subset of data to make further use down the line," said Adrian Pope, physicist and core HACC and CosmoTools developer in Argonne's Computational Science (CPS) division. "If we find a dense clump of particles, that indicates the location of a dark matter halo, and galaxies can form inside these dark matter halos."

The scientists repeated this interwoven process -- where HACC moves particles and CosmoTools analyzes and records specific data -- until the end of the simulation. The team then used features of CosmoTools to determine which clumps of particles were likely to host galaxies. For reference, around 100 to 1,000 particles represent single galaxies in the simulation.
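The interwoven pattern described here -- advance the particles, periodically analyze them in place, and store only compact products -- can be sketched in a few lines. The toy code below uses trivial stand-ins (a random jitter for the gravity step and a grid-count "halo finder"); it is meant only to show the workflow shape, not the real HACC or CosmoTools algorithms.

    # Toy sketch of the interleaved "simulate, analyze in situ, store a subset" pattern.
    import random
    from collections import Counter

    def advance(particles, dt=0.01):
        """Stand-in for the gravity step: jitter particle positions slightly."""
        return [(x + random.uniform(-dt, dt), y + random.uniform(-dt, dt))
                for x, y in particles]

    def find_clumps(particles, cell=0.1, min_count=5):
        """Stand-in halo finder: grid cells holding many particles count as 'halos'."""
        counts = Counter((int(x / cell), int(y / cell)) for x, y in particles)
        return {cell_id: n for cell_id, n in counts.items() if n >= min_count}

    random.seed(0)
    particles = [(random.random(), random.random()) for _ in range(2000)]
    stored = []                                  # compact products kept for post-processing
    for step in range(100):
        particles = advance(particles)           # the simulation moves particles forward
        if step % 10 == 0:                       # periodic in-situ analysis
            halos = find_clumps(particles)
            stored.append({"step": step, "n_halos": len(halos)})
    print(stored[:3])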

"We would move particles, do analysis, move particles, do analysis," said Pope. "At the end, we would go back through the subsets of data that we had carefully chosen to store and run additional analysis to gain more insight into the dynamics of structure formation, such as which halos merged together and which ended up orbiting each other."

Using the optimized workflow with HACC and CosmoTools, the team ran the simulation in half the expected time.

Community contribution

The Last Journey simulation will provide data necessary for other major cosmological experiments to use when comparing observations or drawing conclusions about a host of topics. These insights could shed light on topics ranging from cosmological mysteries, such as the role of dark matter and dark energy in the evolution of the universe, to the astrophysics of galaxy formation across the universe.

"This huge data set they are building will feed into many different efforts," said Katherine Riley, director of science at the ALCF. "In the end, that's our primary mission -- to help high-impact science get done. When you're able to not only do something cool, but to feed an entire community, that's a huge contribution that will have an impact for many years."

The team's simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing models and the development of new ones, impacting both ongoing and upcoming cosmological surveys.

"We are not trying to match any specific structures in the actual universe," said Pope. "Rather, we are making statistically equivalent structures, meaning that if we looked through our data, we could find locations where galaxies the size of the Milky Way would live. But we can also use a simulated universe as a comparison tool to find tensions between our current theoretical understanding of cosmology and what we've observed."

Looking to exascale

"Thinking back to when we ran the Outer Rim simulation, you can really see how far these scientific applications have come," said Heitmann, who performed Outer Rim in 2013 with the HACC team and Salman Habib, CPS division director and Argonne Distinguished Fellow. "It was awesome to run something substantially bigger and more complex that will bring so much to the community."

As Argonne works towards the arrival of Aurora, the ALCF's upcoming exascale supercomputer, the scientists are preparing for even more extensive cosmological simulations. Exascale computing systems will be able to perform a billion billion calculations per second -- 50 times faster than many of the most powerful supercomputers operating today.

"We've learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time," said Pope. "When preparing for simulations on exascale machines and a new decade of progress, we are refining our code and analysis tools, and we get to ask ourselves what we weren't doing because of the limitations we have had until now."

The Last Journey was a gravity-only simulation, meaning it did not consider interactions such as gas dynamics and the physics of star formation. Gravity is the major player in large-scale cosmology, but the scientists hope to incorporate other physics in future simulations to observe the differences they make in how matter moves and distributes itself through the universe over time.

Read more at Science Daily

Making wheat and peanuts less allergenic

 The United States Department of Agriculture identifies a group of "big eight" foods that cause 90% of food allergies. Among these foods are wheat and peanuts.

Sachin Rustgi, a member of the Crop Science Society of America, studies how we can use breeding to develop less allergenic varieties of these foods. Rustgi recently presented his research at the virtual 2020 ASA-CSSA-SSSA Annual Meeting.

Allergic reactions caused by wheat and peanuts can be prevented by avoiding these foods, of course. "While that sounds simple, it is difficult in practice," says Rustgi.

Avoiding wheat and peanuts means losing out on healthy food options. These two foods are nutritional powerhouses.

Wheat is a great source of energy, fiber, and vitamins. Peanuts provide proteins, good fats, vitamins and minerals.

"People with food allergies can try hard to avoid the foods, but accidental exposure to an allergen is also possible," says Rustgi. Allergen exposure can lead to hospitalization, especially for people with peanut allergies.

"For others, avoiding wheat and peanuts is not easy due to geographical, cultural, or economic reasons," explains Rustgi.

Rustgi and his colleagues are using plant breeding and genetic engineering to develop less allergenic varieties of wheat and peanuts. Their goal is to increase food options for people with allergies.

For wheat, researchers focus on a group of proteins called gluten.

The gluten in bread flour makes dough elastic. Gluten also contributes to the chewy texture of bread.

But gluten can cause an immune reaction in individuals with celiac disease. In addition, others experience non-celiac gluten sensitivity, leading to a variety of adverse symptoms.

Researchers have been trying to breed varieties of wheat with lower gluten content. The challenge, in part, lies in the complicated nature of gluten genetics. The information needed to make gluten is embedded in the DNA in wheat cells.

But gluten isn't a single protein -- it's a group of many different proteins. The instructions cells need to make the individual gluten proteins are contained within different genes.

In wheat, these gluten genes are distributed all over a cell's DNA. Since so many portions of the DNA play a role in creating gluten, it is difficult for plant breeders to breed wheat varieties with lower gluten levels.

"When we started this research, a major question was whether it would be possible to work on a characteristic controlled by so many genes," says Rustgi.

For peanuts, the situation is similar. Peanuts contain 16 different proteins recognized as allergens.

"Not all peanut proteins are equally allergenic," says Rustgi. Four proteins trigger an allergic reaction in more than half of peanut sensitive individuals.

Like the gluten genes in wheat, the peanut allergen genes are spread throughout the peanut DNA.

"Affecting this many targets is not an easy task, even with current technology," says Rustgi.

Rustgi and the research team are testing many varieties of wheat and peanuts to find ones that are naturally less allergenic than others.

These low-allergenic varieties can be bred with crop varieties that have desirable traits, such as high yields or pest resistance. The goal is to develop low-allergenic wheat that can be grown commercially.

In addition to traditional breeding efforts, Rustgi is also using genetic engineering to reduce allergenic proteins in wheat and peanuts.

For example, a technology called CRISPR allows scientists to make very precise changes to a cell's DNA.

Rustgi is using CRISPR to target gluten genes in wheat. Recent improvements in CRISPR technology allow researchers to target many genes at once.

Genes targeted by CRISPR are changed or mutated. This means that cells can no longer 'read' these genes to make the specific proteins.

"Disrupting the gluten genes in wheat could yield wheat with significantly lower levels of gluten. A similar approach would work in peanuts," says Rustgi.

Other approaches include understanding how gluten production is regulated in wheat cells. As it turns out, one protein serves as a 'master regulator' for many gluten genes.

That's important because disrupting this master regulator could lead to reduced amounts of gluten in wheat. Targeting a single gene is much easier than trying to disrupt the many gluten genes individually.

"Wheat and peanuts are the major sources of proteins to many, especially those living in resource-deprived conditions," says Rustgi. "Finding affordable ways to make wheat and peanuts available for all is very important."

Developing wheat and peanuts with reduced allergen levels is a key step toward this goal.

Read more at Science Daily

Detecting trace amounts of multiple classes of antibiotics in foods

 Widespread use of antibiotics in human healthcare and livestock husbandry has led to trace amounts of the drugs ending up in food products. Long-term consumption could cause health problems, but it's been difficult to analyze more than a few antibiotics at a time because they have different chemical properties. Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have developed a method to simultaneously measure 77 antibiotics in a variety of foods.

Antibiotics can be present at trace amounts in meat, eggs and milk if the animals aren't withdrawn from the drugs for a sufficient period of time before the products are collected. Also, antibiotics can accumulate in cereals, vegetables and fruits from manure fertilizer or treated wastewater applied to crops. Consuming these foods over a long period of time could lead to increased antibiotic resistance of bacterial pathogens or to an imbalance in the gut microbiome. However, most previous monitoring methods for antibiotics in foods have been limited to a few compounds at a time, usually within a single class of antibiotics with similar structures and chemical properties. Other methods have analyzed multiple antibiotics in only a single food type, such as eggs or milk. Yujie Ben and colleagues wanted to develop a time- and cost-effective method that could detect a wide range of antibiotics in different types of foods.

The researchers added trace amounts of 81 antibiotics from seven categories to vegetable samples and tested 20 different methods for extracting the drugs from the food. Only one extraction process, which involved treating freeze-dried, homogenized food samples with an acidified acetonitrile solution and a mixture of magnesium sulfate and sodium acetate, allowed the researchers to isolate 77 of the antibiotics. After establishing that their method was sensitive and accurate with spiked antibiotics in several foods, the team applied it to store-bought samples of wheat flour, mutton, eggs, milk, cabbage and bananas, detecting a total of 10 antibiotics. One of them, roxithromycin, was detected at trace amounts in all six food types. The new method should help with understanding, monitoring and regulating antibiotic levels in foods, the researchers say.
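The sensitivity and accuracy checks described above boil down to spike-recovery arithmetic: recovery (%) equals the measured concentration divided by the spiked concentration, multiplied by 100. A minimal sketch with invented numbers follows; the 70-120% acceptance window is a commonly used rule of thumb assumed here, not a figure from the paper.

    # Minimal sketch of a spike-recovery check. Antibiotic names and all numbers
    # below are illustrative only, not values from the study.
    spiked = {"roxithromycin": 10.0, "tetracycline": 10.0, "sulfamethoxazole": 10.0}    # ug/kg added
    measured = {"roxithromycin": 9.2, "tetracycline": 8.7, "sulfamethoxazole": 10.4}    # ug/kg found

    for drug, added in spiked.items():
        recovery = 100.0 * measured[drug] / added
        flag = "ok" if 70 <= recovery <= 120 else "re-check"   # assumed acceptance window
        print(f"{drug}: {recovery:.0f}% recovery ({flag})")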

From Science Daily

Jan 27, 2021

Simulating 800,000 years of California earthquake history to pinpoint risks

 Massive earthquakes are, fortunately, rare events. But that scarcity of information blinds us in some ways to their risks, especially when it comes to determining the risk for a specific location or structure.

"We haven't observed most of the possible events that could cause large damage," explained Kevin Milner, a computer scientist and seismology researcher at the Southern California Earthquake Center (SCEC) at the University of Southern California. "Using Southern California as an example, we haven't had a truly big earthquake since 1857 -- that was the last time the southern San Andreas broke into a massive magnitude 7.9 earthquake. A San Andreas earthquake could impact a much larger area than the 1994 Northridge earthquake, and other large earthquakes can occur too. That's what we're worried about."

The traditional way of getting around this lack of data involves digging trenches to learn more about past ruptures, collating information from lots of earthquakes all around the world and creating a statistical model of hazard, or using supercomputers to simulate a specific earthquake in a specific place with a high degree of fidelity.

However, a new framework for predicting the likelihood and impact of earthquakes over an entire region, developed by a team of researchers associated with SCEC over the past decade, has found a middle ground and perhaps a better way to ascertain risk.

A new study led by Milner and Bruce Shaw of Columbia University, published in the Bulletin of the Seismological Society of America in January 2021, presents results from a prototype Rate-State earthquake simulator, or RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. Their results compare well with historical earthquakes and the results of other methods, and display a realistic distribution of earthquake probabilities.

According to the developers, the new approach improves the ability to pinpoint how large an earthquake might occur at a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

"For the first time, we have a whole pipeline from start to finish where earthquake occurrence and ground-motion simulation are physics-based," Milner said. "It can simulate up to 100,000s of years on a really complicated fault system."

Applying massive computer power to big problems

RSQSim transforms mathematical representations of the geophysical forces at play in earthquakes -- the standard model of how ruptures nucleate and propagate -- into algorithms, and then solves them on some of the most powerful supercomputers on the planet. The computationally-intensive research was enabled over several years by government-sponsored supercomputers at the Texas Advanced Computing Center, including Frontera -- the most powerful system at any university in the world -- Blue Waters at the National Center for Supercomputing Applications, and Summit at the Oak Ridge Leadership Computing Facility.
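As its name indicates, RSQSim is built on rate-and-state friction. For orientation, the textbook Dieterich-Ruina form of that law is shown below, together with the commonly used "aging" law for the state variable; RSQSim makes its own quasi-static approximations, so this is the general constitutive relation rather than the simulator's exact implementation.

    \tau = \sigma_n \left[ \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c} \right],
    \qquad
    \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}

Here \tau is the shear stress on the fault, \sigma_n the normal stress, V the slip rate, \theta a state variable tracking the fault contact history, and \mu_0, a, b, D_c laboratory-derived friction parameters.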

"One way we might be able to do better in predicting risk is through physics-based modeling, by harnessing the power of systems like Frontera to run simulations," said Milner. "Instead of an empirical statistical distribution, we simulate the occurrence of earthquakes and the propagation of its waves."

"We've made a lot of progress on Frontera in determining what kind of earthquakes we can expect, on which fault, and how often," said Christine Goulet, Executive Director for Applied Science at SCEC, also involved in the work. "We don't prescribe or tell the code when the earthquakes are going to happen. We launch a simulation of hundreds of thousands of years, and just let the code transfer the stress from one fault to another."

The simulations began with the geological topography of California and simulated over 800,000 virtual years how stresses form and dissipate as tectonic forces act on the Earth. From these simulations, the framework generated a catalog -- a record that an earthquake occurred at a certain place with a certain magnitude and attributes at a given time. The catalog that the SCEC team produced on Frontera and Blue Waters was among the largest ever made, Goulet said. The outputs of RSQSim were then fed into CyberShake, which again used computer models of geophysics to predict how much shaking (in terms of ground acceleration, velocity and duration) would occur as a result of each quake.
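Conceptually, the pipeline is a two-stage loop: take each event in the synthetic catalog, estimate the shaking it produces at sites of interest, and accumulate those estimates into hazard statistics. The sketch below is only schematic -- the toy attenuation function stands in for CyberShake's physics-based wave-propagation modelling, and the catalog entries are invented.

    # Schematic two-stage pipeline: a synthetic earthquake catalog feeds a ground-motion stage.
    import math

    catalog = [  # (simulated year, magnitude, distance in km from a hypothetical site)
        (12_345, 7.1, 40.0),
        (203_678, 6.4, 12.0),
        (611_002, 7.8, 85.0),
    ]

    def toy_shaking(magnitude, distance_km):
        """Placeholder ground-motion estimate (arbitrary units): larger for big, nearby quakes."""
        return math.exp(magnitude) / (distance_km + 10.0) ** 2

    shaking = [toy_shaking(m, d) for _, m, d in catalog]
    # A hazard curve would then be built from how often shaking exceeds each level
    # over the simulated ~800,000 years.
    print(max(shaking))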

"The framework outputs a full slip-time history: where a rupture occurs and how it grew," Milner explained. "We found it produces realistic ground motions, which tells us that the physics implemented in the model is working as intended." They have more work planned for validation of the results, which is critical before acceptance for design applications.

The researchers found that the RSQSim framework produces rich, variable earthquakes overall -- a sign it is producing reasonable results -- while also generating repeatable source and path effects.

"For lots of sites, the shaking hazard goes down, relative to state-of-practice estimates" Milner said. "But for a couple of sites that have special configurations of nearby faults or local geological features, like near San Bernardino, the hazard went up. We are working to better understand these results and to define approaches to verify them."

The work is helping to determine the probability of an earthquake occurring along any of California's hundreds of earthquake-producing faults, the scale of earthquake that could be expected, and how it may trigger other quakes.

Support for the project comes from the U.S. Geological Survey (USGS), National Science Foundation (NSF), and the W.M. Keck Foundation. Frontera is NSF's leadership-class national resource. Compute time on Frontera was provided through a Large-Scale Community Partnership (LSCP) award to SCEC that allows hundreds of U.S. scholars access to the machine to study many aspects of earthquake science. LSCP awards provide extended allocations of up to three years to support long-lived research efforts. SCEC -- which was founded in 1991 and has computed on TACC systems for over a decade -- is a premier example of such an effort.

The creation of the catalog required eight days of continuous computing on Frontera and used more than 3,500 processors in parallel. Simulating the ground shaking at 10 sites across California required a comparable amount of computing on Summit, the second fastest supercomputer in the world.

"Adoption by the broader community will be understandably slow," said Milner. "Because such results will impact safety, it is part of our due diligence to make sure these results are technically defensible by the broader community," added Goulet. But research results such as these are important in order to move beyond generalized building codes that in some cases may be inadequately representing the risk a region face while in other cases being too conservative.

Read more at Science Daily

Purported phosphine on Venus more likely to be ordinary sulfur dioxide

 In September, a team led by astronomers in the United Kingdom announced that they had detected the chemical phosphine in the thick clouds of Venus. The team's reported detection, based on observations by two Earth-based radio telescopes, surprised many Venus experts. Earth's atmosphere contains small amounts of phosphine, which may be produced by life. Phosphine on Venus generated buzz that the planet, often succinctly touted as a "hellscape," could somehow harbor life within its acidic clouds.

Since that initial claim, other science teams have cast doubt on the reliability of the phosphine detection. Now, a team led by researchers at the University of Washington has used a robust model of the conditions within the atmosphere of Venus to revisit and comprehensively reinterpret the radio telescope observations underlying the initial phosphine claim. As they report in a paper accepted to the Astrophysical Journal and posted Jan. 25 to the preprint site arXiv, the U.K.-led group likely wasn't detecting phosphine at all.

"Instead of phosphine in the clouds of Venus, the data are consistent with an alternative hypothesis: They were detecting sulfur dioxide," said co-author Victoria Meadows, a UW professor of astronomy. "Sulfur dioxide is the third-most-common chemical compound in Venus' atmosphere, and it is not considered a sign of life."

The team behind the new study also includes scientists at NASA's Caltech-based Jet Propulsion Laboratory, the NASA Goddard Space Flight Center, the Georgia Institute of Technology, the NASA Ames Research Center and the University of California, Riverside.

The UW-led team shows that sulfur dioxide, at levels plausible for Venus, can not only explain the observations but is also more consistent with what astronomers know of the planet's atmosphere and its punishing chemical environment, which includes clouds of sulfuric acid. In addition, the researchers show that the initial signal originated not in the planet's cloud layer, but far above it, in an upper layer of Venus' atmosphere where phosphine molecules would be destroyed within seconds. This lends more support to the hypothesis that sulfur dioxide produced the signal.

Both the purported phosphine signal and this new interpretation of the data center on radio astronomy. Every chemical compound absorbs unique wavelengths of the electromagnetic spectrum, which includes radio waves, X-rays and visible light. Astronomers use radio waves, light and other emissions from planets to learn about their chemical composition, among other properties.

In 2017 using the James Clerk Maxwell Telescope, or JCMT, the U.K.-led team discovered a feature in the radio emissions from Venus at 266.94 gigahertz. Both phosphine and sulfur dioxide absorb radio waves near that frequency. To differentiate between the two, in 2019 the same team obtained follow-up observations of Venus using the Atacama Large Millimeter/submillimeter Array, or ALMA. Their analysis of ALMA observations at frequencies where only sulfur dioxide absorbs led the team to conclude that sulfur dioxide levels in Venus were too low to account for the signal at 266.94 gigahertz, and that it must instead be coming from phosphine.

In this new study by the UW-led group, the researchers started by modeling conditions within Venus' atmosphere, and using that as a basis to comprehensively interpret the features that were seen -- and not seen -- in the JCMT and ALMA datasets.

"This is what's known as a radiative transfer model, and it incorporates data from several decades' worth of observations of Venus from multiple sources, including observatories here on Earth and spacecraft missions like Venus Express," said lead author Andrew Lincowski, a researcher with the UW Department of Astronomy.

The team used that model to simulate signals from phosphine and sulfur dioxide for different levels of Venus' atmosphere, and how those signals would be picked up by the JCMT and ALMA in their 2017 and 2019 configurations. Based on the shape of the 266.94-gigahertz signal picked up by the JCMT, the absorption was not coming from Venus' cloud layer, the team reports. Instead, most of the observed signal originated some 50 or more miles above the surface, in Venus' mesosphere. At that altitude, harsh chemicals and ultraviolet radiation would shred phosphine molecules within seconds.
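The altitude argument rests on line shape: deep in the atmosphere, pressure broadening makes an absorption line wide, while a line formed in the thin mesosphere is narrow. The toy sketch below contrasts two Lorentzian profiles at the contested frequency with purely illustrative widths; it is not the radiative transfer model used in the study.

    # Toy illustration of why line shape constrains altitude: pressure broadening makes
    # a line formed near the cloud deck wide, while a mesospheric line stays narrow.
    # Widths and depths below are illustrative, not fitted values from the study.

    def lorentzian(freq_ghz, center_ghz, half_width_ghz, depth):
        """Simple Lorentzian absorption profile (fractional dip in the continuum)."""
        return depth * half_width_ghz**2 / ((freq_ghz - center_ghz)**2 + half_width_ghz**2)

    CENTER = 266.94                       # GHz, the contested feature
    freqs = [CENTER + 0.002 * i for i in range(-50, 51)]
    cloud_level = [lorentzian(f, CENTER, 0.05, 0.01) for f in freqs]   # broad (high pressure)
    mesosphere  = [lorentzian(f, CENTER, 0.005, 0.01) for f in freqs]  # narrow (low pressure)

    # Comparing an observed narrow dip against shapes like these is, in spirit, how the
    # modelers located the absorbing layer high above the clouds.
    print(f"cloud-level dip at +0.02 GHz offset: {cloud_level[60]:.4f}")
    print(f"mesospheric dip at +0.02 GHz offset: {mesosphere[60]:.5f}")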

"Phosphine in the mesosphere is even more fragile than phosphine in Venus' clouds," said Meadows. "If the JCMT signal were from phosphine in the mesosphere, then to account for the strength of the signal and the compound's sub-second lifetime at that altitude, phosphine would have to be delivered to the mesosphere at about 100 times the rate that oxygen is pumped into Earth's atmosphere by photosynthesis."

The researchers also discovered that the ALMA data likely significantly underestimated the amount of sulfur dioxide in Venus' atmosphere, an observation that the U.K.-led team had used to assert that the bulk of the 266.94-gigahertz signal was from phosphine.

"The antenna configuration of ALMA at the time of the 2019 observations has an undesirable side effect: The signals from gases that can be found nearly everywhere in Venus' atmosphere -- like sulfur dioxide -- give off weaker signals than gases distributed over a smaller scale," said co-author Alex Akins, a researcher at the Jet Propulsion Laboratory.

This phenomenon, known as spectral line dilution, would not have affected the JCMT observations, leading to an underestimate of how much sulfur dioxide was being seen by JCMT.

"They inferred a low detection of sulfur dioxide because of that artificially weak signal from ALMA," said Lincowski. "But our modeling suggests that the line-diluted ALMA data would have still been consistent with typical or even large amounts of Venus sulfur dioxide, which could fully explain the observed JCMT signal."

"When this new discovery was announced, the reported low sulfur dioxide abundance was at odds with what we already know about Venus and its clouds," said Meadows. "Our new work provides a complete framework that shows how typical amounts of sulfur dioxide in the Venus mesosphere can explain both the signal detections, and non-detections, in the JCMT and ALMA data, without the need for phosphine."

Read more at Science Daily

How heavy is dark matter? Scientists radically narrow the potential mass range for the first time

 Scientists have calculated the mass range for Dark Matter -- and it's tighter than the science world thought.

Their findings -- due to be published in Physics Letters B in March -- radically narrow the range of potential masses for Dark Matter particles, and help to focus the search for future Dark Matter-hunters. The University of Sussex researchers used the established fact that gravity acts on Dark Matter just as it acts on the visible universe to work out the lower and upper limits of Dark Matter's mass.

The results show that Dark Matter cannot be either 'ultra-light' or 'super-heavy', as some have theorised, unless an as-yet undiscovered force also acts upon it.

The team used the assumption that the only force acting on Dark Matter is gravity, and calculated that Dark Matter particles must have a mass between 10⁻³ eV and 10⁷ eV. That's a much tighter range than the 10⁻²⁴ eV to 10¹⁹ GeV spectrum which is generally theorised.

What makes the discovery even more significant is that if it turns out that the mass of Dark Matter is outside of the range predicted by the Sussex team, then it will also prove that an additional force -- as well as gravity -- acts on Dark Matter.

Professor Xavier Calmet from the School of Mathematical and Physical Sciences at the University of Sussex, said:

"This is the first time that anyone has thought to use what we know about quantum gravity as a way to calculate the mass range for Dark Matter. We were surprised when we realised no-one had done it before -- as were the fellow scientists reviewing our paper.

"What we've done shows that Dark Matter cannot be either 'ultra-light' or 'super-heavy' as some theorise -- unless there is an as-yet unknown additional force acting on it. This piece of research helps physicists in two ways: it focuses the search area for Dark Matter, and it will potentially also help reveal whether or not there is a mysterious unknown additional force in the universe."

Folkert Kuipers, a PhD student working with Professor Calmet, at the University of Sussex, said:

"As a PhD student, it's great to be able to work on research as exciting and impactful as this. Our findings are very good news for experimentalists as it will help them to get closer to discovering the true nature of Dark Matter."

The visible universe -- such as ourselves, the planets and stars -- accounts for 25 per cent of all mass in the universe. The remaining 75 per cent is comprised of Dark Matter.

Read more at Science Daily

Pace of prehistoric human innovation could be revealed by 'linguistic thermometer'

 Multi-disciplinary researchers at The University of Manchester have helped develop a powerful physics-based tool to map the pace of language development and human innovation over thousands of years -- even stretching into pre-history before records were kept.

Tobias Galla, a professor in theoretical physics, and Dr Ricardo Bermúdez-Otero, a specialist in historical linguistics, from The University of Manchester, have come together as part of an international team to share their diverse expertise to develop the new model, revealed in a paper entitled 'Geospatial distributions reflect temperatures of linguistic features' authored by Henri Kauhanen, Deepthi Gopal, Tobias Galla and Ricardo Bermúdez-Otero, and published by the journal Science Advances.

Professor Galla has applied statistical physics -- usually used to map atoms or nanoparticles -- to help build a mathematically-based model that responds to the evolutionary dynamics of language. Essentially, the forces that drive language change can operate across thousands of years and leave a measurable "geospatial signature," determining how languages of different types are distributed over the surface of the Earth.

Dr Bermúdez-Otero explained: "In our model each language has a collection of properties or features and some of those features are what we describe as 'hot' or 'cold'.

"So, if a language puts the object before the verb, then it is relatively likely to get stuck with that order for a long period of time -- so that's a 'cold' feature. In contrast, markers like the English article 'the' come and go a lot faster: they may be here in one historical period, and be gone in the next. In that sense, definite articles are 'hot' features.

"The striking thing is that languages with 'cold' properties tend to form big clumps, whereas languages with 'hot' properties tend to be more scattered geographically."

This method therefore works like a thermometer, enabling researchers to retrospectively tell whether one linguistic property is more prone to change in historical time than another. This modelling could also provide a similar benchmark for the pace of change in other social behaviours or practices over time and space.
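A toy spatial simulation, in the spirit of the statistical-physics model described above (and not the authors' actual model), shows why slowly changing 'cold' features end up in larger geographic clumps than quickly changing 'hot' ones.

    # Toy model: each site copies a random neighbour, but also flips spontaneously at a
    # feature-specific rate. "Cold" (rarely flipping) features form larger clumps.
    import random

    def simulate(flip_rate, size=40, steps=200_000, seed=1):
        rng = random.Random(seed)
        grid = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
        for _ in range(steps):
            i, j = rng.randrange(size), rng.randrange(size)
            if rng.random() < flip_rate:                   # spontaneous innovation
                grid[i][j] = 1 - grid[i][j]
            else:                                          # copy a random neighbour
                di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
                grid[i][j] = grid[(i + di) % size][(j + dj) % size]
        return grid

    def same_neighbour_fraction(grid):
        """Crude clumpiness measure: fraction of horizontally adjacent equal pairs."""
        size = len(grid)
        pairs = [grid[i][j] == grid[i][(j + 1) % size] for i in range(size) for j in range(size)]
        return sum(pairs) / len(pairs)

    print("cold feature clumpiness:", same_neighbour_fraction(simulate(flip_rate=0.001)))
    print("hot  feature clumpiness:", same_neighbour_fraction(simulate(flip_rate=0.2)))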

"For example, suppose that you have a map showing the spatial distribution of some variable cultural practice for which you don't have any historical records -- this could be be anything, like different rules on marriage or on the inheritance of possessions," added Dr Bermúdez-Otero.

"Our method could, in principle, be used to ascertain whether one practice changes in the course of historical time faster than another, ie whether people are more innovative in one area than in another, just by looking at how the present-day variation is distributed in space."

The source data for the linguistic modelling comes from present-day languages and the team relied on The World Atlas of Language Structures (WALS). This records information of 2,676 contemporary languages.

Professor Galla explained: "We were interested in emergent phenomena, such as how large-scale effects -- for example, patterns in the distribution of language features -- arise from relatively simple interactions. This is a common theme in complex systems research."

Read more at Science Daily

Jan 26, 2021

From fins to limbs

 When tetrapods (four-limbed vertebrates) began to move from water to land roughly 390 million years ago, it set in motion the rise of lizards, birds, mammals, and all land animals that exist today, including humans and some aquatic vertebrates such as whales and dolphins.

The earliest tetrapods originated from their fish ancestors in the Devonian period and are more than twice as old as the oldest dinosaur fossils. They resembled a cross between a giant salamander and a crocodile and were about 1-2 meters long, had gills, webbed feet and tail fins, and were still heavily tied to water. Their short arms and legs had up to eight digits on each hand and foot and they were probably ambush predators, lurking in shallow water waiting for prey to come near.

Scientists know how the fins of fish transformed into the limbs of tetrapods, but controversies remain about where and how the earliest tetrapods used their limbs. And, while many hypotheses have been proposed, very few studies have rigorously tested them using the fossil record.

In a paper published January 22 in Science Advances, an international team of researchers examined three-dimensional digital models of the bones, joints, and muscles of the fins and limbs of two extinct early tetrapods and a closely related fossil fish to reveal how the function of the forelimb changed as fins evolved into limbs. The research, led by Julia Molnar, Assistant Professor at New York Institute of Technology College of Osteopathic Medicine, and Stephanie Pierce, Thomas D. Cabot Associate Professor of Organismic and Evolutionary Biology at Harvard University, discovered three distinct functional stages in the transition from fins to limbs, and showed that these early tetrapods had a very distinct pattern of muscle leverage that didn't look like that of a fish fin or modern tetrapod limbs.

To reconstruct how limbs of the earliest known tetrapods functioned, Molnar, Pierce and co-authors John Hutchinson (Royal Veterinary College), Rui Diogo (Howard University), and Jennifer Clack (University of Cambridge) first needed to figure out what muscles were present in the fossil animals. This was a challenging task, as muscles are not preserved in fossils, and the muscles of modern fish fins are completely different from those of tetrapod limbs. The team spent several years trying to answer the question of how exactly the few simple muscles of a fin became the dozens of muscles that perform all sorts of functions in a tetrapod limb.

"Determining what muscles were present in a 360-million-year-old fossil took many years of work just to get to the point where we could begin to build very complicated musculoskeletal models," said Pierce. "We needed to know how many muscles were present in the fossil animals and where they attached to on the bones so we could test how they functioned."

They built three-dimensional musculoskeletal models of the pectoral fin in Eusthenopteron (a fish closely related to tetrapods that lived during the Late Devonian period, about 385 million years ago) and the forelimbs of two early tetrapods, Acanthostega (365 million years old, living towards the end of the Late Devonian period) and Pederpes (348-347 million years old, living during the early Carboniferous period). For comparison, they also built similar models of the pectoral fins of living fishes (coelacanth, lungfish) and forelimbs of living tetrapods (salamander, lizard).

To determine how the fins and limbs worked, the researchers used computational software originally developed to study human locomotion. This technique had been used recently to study locomotion in the ancestors of humans and also dinosaurs like T. rex, but never in something as old as an early tetrapod.

Manipulating the models in the software, the team were able to measure two functional traits: the joint's maximum range of motion and the muscles' ability to move the fin or limb joints. The two measurements would reveal trade-offs in the locomotor system and allow the researchers to test hypotheses of function in extinct animals.
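The second trait -- a muscle's ability to move a joint -- comes down to leverage: torque equals muscle force times moment arm, where the moment arm is the perpendicular distance from the joint centre to the muscle's line of action. A minimal 2D sketch with made-up coordinates (not values from the fossil models) follows.

    # 2D illustration of muscle leverage: moment arm = perpendicular distance from the
    # joint centre to the muscle's line of action; torque = force x moment arm.
    # Coordinates and force are hypothetical, not taken from the fossil models.

    def moment_arm(joint, origin, insertion):
        """Perpendicular distance from the joint centre to the line through origin and insertion."""
        (jx, jy), (ox, oy), (ix, iy) = joint, origin, insertion
        dx, dy = ix - ox, iy - oy                      # direction of the muscle path
        cross = dx * (jy - oy) - dy * (jx - ox)        # 2D cross product
        return abs(cross) / (dx**2 + dy**2) ** 0.5

    joint = (0.0, 0.0)            # e.g. shoulder centre (cm)
    origin = (-1.0, 3.0)          # hypothetical muscle origin on the girdle (cm)
    insertion = (4.0, 1.0)        # hypothetical insertion on the humerus (cm)
    arm_cm = moment_arm(joint, origin, insertion)
    print(f"moment arm: {arm_cm:.2f} cm; torque from a 10 N muscle force: {10 * arm_cm / 100:.2f} N*m")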

The team found the forelimbs of all terrestrial tetrapods passed through three distinct functional stages: a "benthic fish" stage that resembled modern lungfish, an "early tetrapod" stage unlike any extinct animal, and a "crown tetrapod" stage with characteristics of both lizards and salamanders.

"The fin from Eusthenopteron had a pattern that was reminiscent of the lungfish, which is one of the closest living relatives of tetrapods," said Pierce. "But the early tetrapod limbs showed more similarities to each other than either fish or modern tetrapods."

"That was perhaps the most surprising," said Molnar. "I thought Pederpes, and possibly Acanthostega, would fall pretty well within the range of modern tetrapods. But they formed their own distinct cluster that didn't look like a modern tetrapod limb or a fish fin. They were not smack dab in the middle but had their own collection of characteristics that probably reflected their unique environment and behaviors."

The results showed that early tetrapod limbs were more adapted for propulsion than for weight bearing. In the water, animals use their limbs for propulsion to move themselves forward or backward, letting the water support their body weight. Moving on land, however, requires the animal to act against gravity and push downward with its limbs to support its body mass.

This doesn't mean that early tetrapods were incapable of moving on land, but rather they didn't move like a modern-day living tetrapod. Their means of locomotion was probably unique to these animals that were still very much tied to the water, but were also venturing onto land, where there were many opportunities for vertebrate animals but little competition or fear from predators.

"These results are exciting as they independently support a study I published last year using completely different fossils and methods," said Pierce. "That study, which focused on the upper arm bone, indicated that early tetrapods had some capacity for land movement but that they may not have been very good at it."

The researchers are closer to reconstructing the evolution of terrestrial locomotion, but more work is needed. They plan to next model the hind limb to investigate how all four limbs worked together. It has been suggested that early tetrapods were using their forelimbs for propulsion, but modern tetrapods get most of their propulsive power from the hind limb.

Read more at Science Daily

Climate change in antiquity: Mass emigration due to water scarcity

 The absence of monsoon rains at the source of the Nile was the cause of migrations and the demise of entire settlements in the late Roman province of Egypt. This demographic development has now been compared with environmental data for the first time by Sabine Huebner, professor of ancient history at the University of Basel -- revealing evidence of climate change and its consequences.

The oasis-like Faiyum region, roughly 130 km south-west of Cairo, was the breadbasket of the Roman Empire. Yet at the end of the third century CE, numerous formerly thriving settlements there declined and were ultimately abandoned by their inhabitants. Previous excavations and contemporary papyri have shown that problems with field irrigation were the cause. Attempts by local farmers to adapt to the dryness and desertification of the farmland -- for example, by changing their agricultural practices -- are also documented.

Volcanic eruption and monsoon rains

Basel professor of ancient history Sabine R. Huebner has now shown in the US journal Studies in Late Antiquity that changing environmental conditions were behind this development. Existing climate data indicates that the monsoon rains at the headwaters of the Nile in the Ethiopian Highlands suddenly and permanently weakened. The result was lower high-water levels of the river in summer. Evidence supporting this has been found in geological sediment from the Nile Delta, Faiyum and the Ethiopian Highlands, which provides long-term climate data on the monsoons and the water level of the Nile.

A powerful tropical volcanic eruption around 266 CE, which in the following year brought a below-average flood of the Nile, presumably also played a role. Major eruptions are known from sulfuric acid deposits in ice cores from Greenland and Antarctica, and can be dated to within three years. Particles hurled up into the stratosphere lead to a cooling of the climate, disrupting the local monsoon system.

New insights into climate, environment, and society

In the third century CE, the entire Roman Empire was hit by crises that are relatively well documented in the province of Egypt by more than 26,000 preserved papyri (documents written on sheets of papyrus). In the Faiyum region, these include records of inhabitants who switched to growing vines instead of grain or to sheep farming due to the scarcity of water. Others accused their neighbors of water theft or turned to the Roman authorities for tax relief. These and other adaptive strategies of the population delayed the death of their villages for several decades.

Read more at Science Daily

New galaxy sheds light on how stars form

 A lot is known about galaxies. We know, for instance, that the stars within them are shaped from a blend of old star dust and molecules suspended in gas. What remains a mystery, however, is the process that leads to these simple elements being pulled together to form a new star.

But now an international team of scientists, including astrophysicists from the University of Bath in the UK and the National Astronomical Observatory (OAN) in Madrid, Spain, has taken a significant step towards understanding how a galaxy's gaseous content becomes organised into a new generation of stars.

Their findings have important implications for our understanding of how stars formed during the early days of the universe, when galaxy collisions were frequent and dramatic, and star and galaxy formation occurred more actively than it does now.

For this study, the researchers used the Chile-based Atacama Large Millimeter Array (ALMA) -- a network of radio telescopes combined to form one mega telescope -- to observe a type of galaxy called a tidal dwarf galaxy (TDG). TDGs emerge from the debris of two older galaxies colliding with great force. They are actively star-forming systems and pristine environments for scientists trying to piece together the early days of other galaxies, including our own -- the Milky Way (thought to be 13.6 billion years old).

"The little galaxy we've been studying was born in a violent, gas-rich galactic collision and offers us a unique laboratory to study the physics of star formation in extreme environments," said co-author Professor Carole Mundell, head of Astrophysics at the University of Bath.

From their observations, the researchers learnt that a TDG's molecular clouds are similar to those found in the Milky Way, both in terms of size and content. This suggests there is a universal star-formation process at play throughout the universe.

Unexpectedly, however, the TDG in the study (labelled TDG J1023+1952) also displayed a profusion of dispersed gas. In the Milky Way, clouds of gas are by far the most prominent star-forming factories.

"The fact that molecular gas appears in both cloud form and as diffuse gas was a surprise," said Professor Mundell.

Dr Miguel Querejeta from the OAN in Spain and lead author of the study added: "ALMA's observations were made with great precision so we can say with confidence that the contribution of diffuse gas is much higher in the tidal dwarf galaxy we studied than typically found in normal galaxies."

He added: "This most likely means most of the molecular gas in this tidal dwarf galaxy is not involved in forming stars, which questions popular assumptions about star formation."

Because of the vast distance that separates Earth from TDG J1023+1952 (around 50 million light years), individual clouds of molecular gas appear as tiny regions in the sky. However, ALMA has the power to distinguish the smallest details.

"We have managed to identify clouds with an apparent size as small as observing a coin placed several kilometres away from us," said Professor Mundell, adding: "It's remarkable that we can now study stars and the gas clouds from which they are formed in a violent extragalactic collision with the same detail that we can study those forming in the calm environment of our own Milky Way."

Read more at Science Daily

Women influenced coevolution of dogs and humans

 Man's best friend might actually belong to a woman.

In a cross-cultural analysis, Washington State University researchers found several factors may have played a role in building the mutually beneficial relationship between humans and dogs, including temperature, hunting and, surprisingly, gender.

"We found that dogs' relationships with women might have had a greater impact on the dog-human bond than relationships with men," said Jaime Chambers, a WSU anthropology Ph.D. student and first author on the paper published in the Journal of Ethnobiology. "Humans were more likely to regard dogs as a type of person if the dogs had a special relationship with women. They were more likely to be included in family life, treated as subjects of affection and generally, people had greater regard for them."

While dogs are the oldest, most widespread domesticated animal, very few anthropologic studies have directly focused on the human relationship with canines. Yet when the WSU researchers searched the extensive collection of ethnographic documents in the Human Relations Area Files database, they found thousands of mentions of dogs.

Ultimately, they located data from more than 844 ethnographers writing on 144 traditional, subsistence-level societies from all over the globe. Looking at these cultures can provide insight into how the dog-human relationship developed, Chambers said.

"Our modern society is like a blip in the timeline of human history," she said. "The truth is that human-dog relationships have not looked like they do in Western industrialized societies for most of human history, and looking at traditional societies can offer a wider vision."

The researchers noted specific instances that showed dogs' utility, or usefulness, to humans, and humans' utility to dogs as well as the "personhood" of dogs -- when canines were treated like people, such as being given names, allowed to sleep in the same beds or mourned when they died.

A pattern emerged that showed when women were more involved with dogs, the humans' utility to dogs went up, as did the dogs' personhood.
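A hypothetical illustration of the kind of coding behind such a pattern: each society gets binary indicators for women's involvement and for the personhood markers mentioned above, and the groups are then compared. All data below are invented, not drawn from the Human Relations Area Files.

    # Invented example of cross-cultural coding: societies x binary indicators.
    societies = [
        {"name": "A", "women_involved": 1, "dogs_named": 1, "dogs_mourned": 1, "sleep_indoors": 1},
        {"name": "B", "women_involved": 0, "dogs_named": 0, "dogs_mourned": 0, "sleep_indoors": 1},
        {"name": "C", "women_involved": 1, "dogs_named": 1, "dogs_mourned": 0, "sleep_indoors": 1},
        {"name": "D", "women_involved": 0, "dogs_named": 1, "dogs_mourned": 0, "sleep_indoors": 0},
    ]

    def personhood_score(s):
        """Count how many personhood markers a society's dogs receive."""
        return s["dogs_named"] + s["dogs_mourned"] + s["sleep_indoors"]

    for flag in (1, 0):
        group = [personhood_score(s) for s in societies if s["women_involved"] == flag]
        label = "with" if flag else "without"
        print(f"mean personhood {label} women's involvement: {sum(group) / len(group):.2f}")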

Another prevalent trend involved the environment: the warmer the overall climate, the less useful dogs tended to be to humans.

"Relative to humans, dogs are really not particularly energy efficient," said Robert Quinlan, WSU anthropology professor and corresponding author on the paper. "Their body temperature is higher than humans, and just a bit of exercise can make them overheat on a hot day. We saw this trend that they had less utility to humans in warmer environments."

Quinlan noted there were some exceptions to this with a few dog-loving cultures in the tropics, but it was a fairly consistent trend.

Hunting also seemed to strengthen the dog-human connection. In cultures that hunted with dogs, the dogs were more valued by their human partners: they were higher in the measures of dogs' utility to humans and in personhood. Those values declined, however, when food production increased, whether through growing crops or keeping livestock. This finding seemed to go against the commonly held perception of herding dogs working in concert with humans, but Quinlan noted that in many cultures, herding dogs often work alone, whereas hunting requires a more intense cooperation.

This study adds evidence to the evolutionary theory that dogs and humans chose each other, rather than the older theory that humans intentionally sought out wolf pups to raise on their own. Either way, there have been clear benefits for the dogs, Chambers said.

Read more at Science Daily

Street trees close to the home may reduce the risk of depression

 Depression, especially in urban areas, is on the rise. Mental health outcomes are influenced by, among other things, the type of environment where one lives. Previous studies show that urban greenspace benefits people experiencing mental ill health, but most of these studies used self-reported measures, which makes it difficult to compare results and to generalise conclusions about the effects of urban greenspace on mental health.

An interdisciplinary research team of UFZ, iDiv and Leipzig University tried to address this issue by using an objective indicator: prescriptions of antidepressants. To find out whether a specific type of 'everyday' green space -- street trees dotting the neighbourhood sidewalks -- could positively influence mental health, they focused on how the number and type of street trees, and their proximity to the home, correlated with the number of antidepressants prescribed.

The researchers analysed data from almost 10,000 inhabitants of Leipzig, a mid-size city in Germany, who took part in the LIFE-Adult health study running at the University of Leipzig Medical Faculty. Combining that with data on the number and species of street trees throughout the city, the researchers were able to identify the association between antidepressant prescriptions and the number of street trees at different distances from people's homes. Results were controlled for other factors known to be associated with depression, such as employment, gender, age, and body weight.
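The kind of adjusted analysis described here can be sketched as a logistic regression of prescription status on tree counts within distance bands plus the listed covariates. The example below uses simulated data and the statsmodels package (assumed to be available); every variable name is a placeholder rather than the study's actual coding.

    # Sketch of an adjusted analysis: logistic regression of antidepressant prescription
    # on street-tree counts within distance bands, plus covariates. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 5000
    df = pd.DataFrame({
        "trees_100m": rng.poisson(8, n),          # street trees within 100 m of home
        "trees_300m": rng.poisson(40, n),         # street trees within 100-300 m
        "age": rng.integers(18, 80, n),
        "female": rng.integers(0, 2, n),
        "unemployed": rng.binomial(1, 0.08, n),
    })
    # Simulate an outcome in which nearby trees lower the odds of a prescription.
    logit = -2.0 - 0.05 * df["trees_100m"] + 0.8 * df["unemployed"] + 0.01 * (df["age"] - 50)
    df["prescribed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = smf.logit("prescribed ~ trees_100m + trees_300m + age + female + unemployed", df).fit()
    print(model.params["trees_100m"])             # negative here by construction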

A higher number of trees immediately around the home (less than 100 meters away) was associated with a reduced risk of being prescribed antidepressant medication. This association was especially strong for deprived groups. As these social groups are at the greatest risk of being prescribed antidepressants in Germany, street trees in cities can serve as a nature-based solution for good mental health, the researchers write. At the same time, street trees may also help narrow the 'gap' in health inequality between economically different social groups. No association between tree species and depression, however, could be shown in this study.

"Our finding suggests that street trees -- a small scale, publicly accessible form of urban greenspace -- can help close the gap in health inequalities between economically different social groups," says lead author of the study Dr Melissa Marselle. "This is good news because street trees are relatively easy to achieve and their number can be increased without much planning effort." As an environmental psychologist, she conducted the research at UFZ and iDiv and is now based at the De Montford University of Leicester, UK. Marselle hopes that the research "should prompt local councils to plant street trees to urban areas as a way to improve mental health and reduce social inequalities. Street trees should be planted equally in residential areas to ensure those who are socially disadvantaged have equal access to receive its health benefits."

"Importantly, most planning guidance for urban greenspace is often based on purposeful visits for recreation," adds Dr Diana Bowler (iDiv, FSU, UFZ), data analyst in the team. "Our study shows that everyday nature close to home -- the biodiversity you see out of the window or when walking or driving to work, school or shopping -- is important for mental health." This finding is especially now in times of the COVID-19 lock-downs, Bowler adds.

And it's not only human health which could benefit. "We propose that adding street trees in residential urban areas is a nature-based solution that may not only promote mental health, but can also contribute to climate change mitigation and biodiversity conservation," says senior author Prof Aletta Bonn, who leads the department of ecosystem services at UFZ, iDiv and Friedrich-Schiller-University Jena. "To create these synergy effects, you don't even need large-scale expensive parks: more trees along the streets will do the trick. And that's a relatively inexpensive measure."

Read more at Science Daily

Jan 25, 2021

Dinosaur embryo find helps crack baby tyrannosaur mystery

 They are among the largest predators ever to walk the Earth, but experts have discovered that some baby tyrannosaurs were only the size of a Border Collie dog when they took their first steps.

The first-known fossils of tyrannosaur embryos have shed light on the early development of the colossal animals, which could grow to 40 feet in length and weigh eight tonnes.

A team of palaeontologists, led by a University of Edinburgh researcher, made the discovery by examining the fossilised remains of a tiny jaw bone and claw unearthed in Canada and the US.

Producing 3D scans of the delicate fragments revealed that they belonged to baby tyrannosaurs -- cousins of T. rex -- which, based on the size of the fossils, were around three feet long when they hatched.

The team's findings suggest that tyrannosaur eggs -- the remains of which have never been found -- were around 17 inches long. This could aid efforts to recognise such eggs in the future and gain greater insights into the nesting habits of tyrannosaurs, researchers say.

The analysis also revealed that the three-centimetre-long jaw bone possesses distinctive tyrannosaur features, including a pronounced chin, indicating that these physical traits were present before the animals hatched.

Little is known about the earliest developmental stages of tyrannosaurs -- which lived more than 70 million years ago -- even though they are among the most studied dinosaur families. Most tyrannosaur fossils previously studied have been of adult or older juvenile animals.

The study, published in the Canadian Journal of Earth Sciences, was supported by the Royal Society, Natural Sciences and Engineering Research Council of Canada, and National Science Foundation. It also involved researchers from the Universities of Alberta and Calgary, Canada, and Montana State and Chapman Universities, US.

Read more at Science Daily

When galaxies collide

 It was previously thought that collisions between galaxies would necessarily add to the activity of the massive black holes at their centers. However, researchers have performed the most accurate simulations of a range of collision scenarios and have found that some collisions can reduce the activity of their central black holes. The reason is that certain head-on collisions may in fact clear the galactic nuclei of the matter which would otherwise fuel the black holes contained within.

When you think about gargantuan phenomena such as the collision of galaxies, it might be tempting to imagine them as some sort of cosmic cataclysm, with stars crashing and exploding, and destruction on an epic scale. In reality, a collision is closer to a pair of clouds combining, usually with the larger one absorbing the smaller one. It is unlikely that any stars within them would actually collide. That said, when galaxies collide, the consequences can be enormous.

Galaxies collide in different ways. Sometimes a small galaxy will collide with the outer part of a larger one and either pass through or merge, in either case exchanging a lot of stars along the way. But galaxies can also collide head-on, where the smaller of the two will be torn apart by overpowering tidal forces of the larger one. It's in this scenario that something very interesting can happen within the galactic nucleus.

"At the heart of most galaxies lies a massive black hole, or MBH," said Research Associate Yohei Miki from the University of Tokyo. "For as long as astronomers have explored galactic collisions, it has been assumed that a collision would always provide fuel for an MBH in the form of matter within the nucleus. And that this fuel would feed the MBH, significantly increasing its activity, which we would see as ultraviolet and X-ray light amongst other things. However, we now have good reason to believe that this sequence of events is not inevitable and that in fact the exact opposite might sometimes be true."

It seems logical that a galactic collision would only increase the activity of an MBH, but Miki and his team were curious to test this notion. They constructed highly detailed models of galactic collision scenarios and ran them on supercomputers. The team was pleased to see that in some circumstances, an incoming small galaxy might actually strip away the matter surrounding the MBH of the larger one. This would reduce instead of increase its activity.

"We computed the dynamic evolution of the gaseous matter which surrounds the MBH in a torus, or donut, shape," said Miki. "If the incoming galaxy accelerated this torus above a certain threshold determined by properties of the MBH, then the matter would be ejected and the MBH would be starved. These events can last in the region of a million years, though we are still unsure about how long the suppression of MBH activity may last."

Read more at Science Daily

Puzzling six-exoplanet system with rhythmic movement challenges theories of how planets form

 Using a combination of telescopes, including the Very Large Telescope of the European Southern Observatory (ESO's VLT), astronomers have revealed a system consisting of six exoplanets, five of which are locked in a rare rhythm around their central star. The researchers believe the system could provide important clues about how planets, including those in the Solar System, form and evolve.

The first time the team observed TOI-178, a star some 200 light-years away in the constellation of Sculptor, they thought they had spotted two planets going around it in the same orbit. However, a closer look revealed something entirely different. "Through further observations we realised that there were not two planets orbiting the star at roughly the same distance from it, but rather multiple planets in a very special configuration," says Adrien Leleu from the Université de Genève and the University of Bern, Switzerland, who led a new study of the system published today in Astronomy & Astrophysics.

The new research has revealed that the system boasts six exoplanets and that all but the one closest to the star are locked in a rhythmic dance as they move in their orbits. In other words, they are in resonance. This means that there are patterns that repeat themselves as the planets go around the star, with some planets aligning every few orbits. A similar resonance is observed in the orbits of three of Jupiter's moons: Io, Europa and Ganymede. Io, the closest of the three to Jupiter, completes four full orbits around Jupiter for every orbit that Ganymede, the furthest away, makes, and two full orbits for every orbit Europa makes.

The five outer exoplanets of the TOI-178 system follow a much more complex chain of resonance, one of the longest yet discovered in a system of planets. While the three Jupiter moons are in a 4:2:1 resonance, the five outer planets in the TOI-178 system follow an 18:9:6:4:3 chain: while the second planet from the star (the first in the resonance chain) completes 18 orbits, the third planet from the star (second in the chain) completes 9 orbits, and so on. In fact, the scientists initially found only five planets in the system, but by following this resonant rhythm they calculated where in its orbit an additional planet would be when they next had a window to observe the system.
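
To see what an 18:9:6:4:3 chain implies in practice, the short sketch below works out the orbital periods such a chain would fix, starting from an assumed (not measured) period for the innermost resonant planet; the period ratios it prints are the pairwise resonances 2:1, 3:2, 3:2 and 4:3.

```python
# Back-of-the-envelope sketch of what an 18:9:6:4:3 resonance chain implies for
# orbital periods. The base period is a placeholder, not the measured TOI-178 value.
orbit_counts = [18, 9, 6, 4, 3]   # orbits each resonant planet completes over the same time span
base_period_days = 3.0            # hypothetical period of the innermost resonant planet

# Over a fixed span, the orbital period is inversely proportional to the number of
# orbits completed: P_k = P_base * (n_base / n_k).
periods = [base_period_days * orbit_counts[0] / n for n in orbit_counts]
print([round(p, 2) for p in periods])      # [3.0, 6.0, 9.0, 13.5, 18.0]

# Ratios of neighbouring periods reveal the pairwise resonances: 2:1, 3:2, 3:2, 4:3.
ratios = [round(periods[i + 1] / periods[i], 3) for i in range(len(periods) - 1)]
print(ratios)                               # [2.0, 1.5, 1.5, 1.333]
```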

More than just an orbital curiosity, this dance of resonant planets provides clues about the system's past. "The orbits in this system are very well ordered, which tells us that this system has evolved quite gently since its birth," explains co-author Yann Alibert from the University of Bern. If the system had been significantly disturbed earlier in its life, for example by a giant impact, this fragile configuration of orbits would not have survived.

Disorder in the rhythmic system

But even if the arrangement of the orbits is neat and well-ordered, the densities of the planets "are much more disorderly," says Nathan Hara from the Université de Genève, Switzerland, who was also involved in the study. "It appears there is a planet as dense as the Earth right next to a very fluffy planet with half the density of Neptune, followed by a planet with the density of Neptune. It is not what we are used to." In our Solar System, for example, the planets are neatly arranged, with the rocky, denser planets closer to the central star and the fluffy, low-density gas planets farther out.

"This contrast between the rhythmic harmony of the orbital motion and the disorderly densities certainly challenges our understanding of the formation and evolution of planetary systems," says Leleu.

Combining techniques

To investigate the system's unusual architecture, the team used data from the European Space Agency's CHEOPS satellite, alongside the ground-based ESPRESSO instrument on ESO's VLT and the NGTS and SPECULOOS telescopes, both sited at ESO's Paranal Observatory in Chile. Since exoplanets are extremely tricky to spot directly with telescopes, astronomers must instead rely on indirect techniques to detect them. The main methods used are transit photometry -- observing the light emitted by the central star, which dims as an exoplanet passes in front of it when viewed from Earth -- and radial velocities -- searching the star's light spectrum for the small wobbles caused by the exoplanets as they move in their orbits. The team used both methods to observe the system: CHEOPS, NGTS and SPECULOOS for transits, and ESPRESSO for radial velocities.
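
As a rough, back-of-the-envelope illustration of the transit idea only (not the team's actual analysis), the sketch below uses the standard approximation that the fractional dip in starlight is about the planet-to-star area ratio; all radii and durations in it are hypothetical.

```python
# Rough illustration of the transit method: the fractional dip in a star's light
# is approximately (R_planet / R_star)^2. All values are hypothetical, and real
# complications (limb darkening, noise, grazing geometry) are ignored.
import numpy as np

R_SUN_KM, R_EARTH_KM = 696_340.0, 6_371.0

r_star = 0.65 * R_SUN_KM      # hypothetical host-star radius
r_planet = 2.5 * R_EARTH_KM   # hypothetical mini-Neptune radius

depth = (r_planet / r_star) ** 2
print(f"expected transit depth ~ {depth * 1e6:.0f} parts per million")

# Toy box-shaped light curve around a single transit.
t = np.linspace(-0.1, 0.1, 401)        # time from mid-transit, in days
half_duration = 0.03                   # hypothetical transit half-duration, in days
flux = np.where(np.abs(t) < half_duration, 1.0 - depth, 1.0)
```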

By combining the two techniques, astronomers were able to gather key information about the system and its planets, which orbit their central star much closer and much faster than the Earth orbits the Sun. The fastest (the innermost planet) completes an orbit in just a couple of days, while the slowest takes about ten times longer. The six planets have sizes ranging from about one to about three times the size of Earth, while their masses are 1.5 to 30 times the mass of Earth. Some of the planets are rocky, but larger than Earth -- these planets are known as Super-Earths. Others are gas planets, like the outer planets in our Solar System, but they are much smaller -- these are nicknamed Mini-Neptunes.

Read more at Science Daily

Hair aging differs by race, ethnicity

 While aging is an unavoidable biological process with many influencing factors that results in visible changes to the hair, there is limited literature examining the characteristics of hair aging across races and ethnicities. Now a new study describes the unique characteristics of hair aging among different ethnicities, which the authors hope will aid a culturally sensitive approach when making recommendations to prevent hair damage over a person's lifetime.

Among the findings: hair-graying onset varies with race, with the average age for Caucasians being mid-30s, that for Asians being late 30s, and that for Africans being mid-40s. Caucasians and Asians typically experience damage to the distal hair shaft, while African-Americans see damage occurring closer to the hair root. Postmenopausal changes include decreased anagen (active or growing) hairs in the frontal scalp, lower growth rates and smaller hair diameters.

Similar to skin, hair aging comprises both intrinsic aging, which includes the natural physiological changes that occur with time, and extrinsic aging, or changes associated with environmental exposures and physical stress caused by daily grooming.

"Despite a similar chemical composition, the structural properties of hair vary between different ethnicities and, consequently, the aging of hair differs as well. As the population ages and becomes more diverse, it is of greater necessity to understand the hair aging process in different types of hair," says corresponding author Neelam Vashi, MD, associate professor of dermatology at Boston University School of Medicine and director of the Boston University Cosmetic and Laser Center at Boston Medical Center.

The researchers conducted a literature review of 69 publications to establish what is known about changes in hair structure over time, focusing on differences in hair aging according to ethnic background. They collected information on hair structure, aging characteristics and responses to extrinsic damage, together with differences between races and ethnicities.

Read more at Science Daily

Jan 24, 2021

Immune system mounts a lasting defense after recovery from COVID-19, researchers find

 

As the number of people who have fought off SARS-CoV-2 climbs ever higher, a critical question has grown in importance: How long will their immunity to the novel coronavirus last? A new Rockefeller study offers an encouraging answer, suggesting that those who recover from COVID-19 are protected against the virus for at least six months, and likely much longer.

The findings, published in Nature, provide the strongest evidence yet that the immune system "remembers" the virus and, remarkably, continues to improve the quality of antibodies even after the infection has waned. Antibodies produced months after the infection showed increased ability to block SARS-CoV-2, as well as its mutated versions such as the South African variant.

The researchers found that these improved antibodies are produced by immune cells that have kept evolving, apparently due to a continued exposure to the remnants of the virus hidden in the gut tissue.

Based on these findings, researchers suspect that when the recovered patient next encounters the virus, the response would be both faster and more effective, preventing re-infection.

"This is really exciting news. The type of immune response we see here could potentially provide protection for quite some time, by enabling the body to mount a rapid and effective response to the virus upon re-exposure," says Michel C. Nussenzweig, the Zanvil A. Cohn and Ralph M. Steinman Professor and head of the Laboratory of Molecular Immunology, whose team has been tracking and characterizing antibody response in Covid-19 patients since the early days of the pandemic in New York.

Long-lasting memory

Antibodies, which the body creates in response to infection, linger in the blood plasma for several weeks or months, but their levels significantly drop with time. The immune system has a more efficient way of dealing with pathogens: instead of producing antibodies all the time, it creates memory B cells that recognize the pathogen, and can quickly unleash a new round of antibodies when they encounter it a second time.

But how well this memory works depends on the pathogen. To understand the case with SARS-CoV-2, Nussenzweig and his colleagues studied the antibody responses of 87 individuals at two time points: one month after infection, and again six months later. As expected, they found that although antibodies were still detectable at the six-month point, their numbers had markedly decreased. Lab experiments showed that the ability of the participants' plasma samples to neutralize the virus was reduced five-fold.

In contrast, the patients' memory B cells, specifically those that produce antibodies against SARS-CoV-2, did not decline in number, and even slightly increased in some cases. "The overall numbers of memory B cells that produced antibodies attacking the Achilles' heel of the virus, known as the receptor-binding domain, stayed the same," says Christian Gaebler, a physician and immunologist in Nussenzweig's lab. "That's good news because those are the ones that you need if you encounter the virus again."

Viral stowaways

A closer look at the memory B cells revealed something surprising: these cells had gone through numerous rounds of mutation even after the infection resolved, and as a result the antibodies they produced were much more effective than the originals. Subsequent lab experiments showed that this new set of antibodies was better able to latch on tightly to the virus and could recognize even mutated versions of it.

"We were surprised to see the memory B cells had kept evolving during this time," Nussenzweig says. "That often happens in chronic infections, like HIV or herpes, where the virus lingers in the body. But we weren't expecting to see it with SARS-CoV-2, which is thought to leave the body after infection has resolved."

SARS-CoV-2 replicates in certain cells in the lungs, upper throat, and small intestine, and residual viral particles hiding within these tissues could be driving the evolution of memory cells. To look into this hypothesis, the researchers have teamed up with Saurabh Mehandru, a former Rockefeller scientist and currently a physician at Mount Sinai Hospital, who has been examining biopsies of intestinal tissue from people who had recovered from COVID-19 on average three months earlier.

In seven of the 14 individuals studied, tests showed the presence of SARS-CoV-2's genetic material and its proteins in the cells that line the intestines. The researchers don't know whether these viral left-overs are still infectious or are simply the remains of dead viruses.

Read more at Science Daily

Saturn's tilt caused by its moons, researchers say

 

Two scientists from CNRS and Sorbonne University working at the Institute of Celestial Mechanics and Ephemeris Calculation (Paris Observatory -- PSL/CNRS) have just shown that the influence of Saturn's satellites can explain the tilt of the rotation axis of the gas giant. Their work, published on 18 January 2021 in the journal Nature Astronomy, also predicts that the tilt will increase even further over the next few billion years.

Rather like David versus Goliath, it appears that Saturn's tilt may in fact be caused by its moons. This is the conclusion of recent work carried out by scientists from the CNRS, Sorbonne University and the University of Pisa, which shows that the current tilt of Saturn's rotation axis is caused by the migration of its satellites, and especially by that of its largest moon, Titan.

Recent observations have shown that Titan and the other moons are gradually moving away from Saturn much faster than astronomers had previously estimated. By incorporating this increased migration rate into their calculations, the researchers concluded that this process affects the inclination of Saturn's rotation axis: as its satellites move further away, the planet tilts more and more.

The decisive event that tilted Saturn is thought to have occurred relatively recently. For over three billion years after its formation, Saturn's rotation axis remained only slightly tilted. It was only roughly a billion years ago that the gradual motion of its satellites triggered a resonance phenomenon that continues today: Saturn's axis interacted with the path of the planet Neptune and gradually tilted until it reached the inclination of 27° observed today.

These findings call into question previous scenarios. Astronomers were already in agreement about the existence of this resonance. However, they believed that it had occurred very early on, over four billion years ago, due to a change in Neptune's orbit. Since that time, Saturn's axis was thought to have been stable. In fact, Saturn's axis is still tilting, and what we see today is merely a transitional stage in this shift. Over the next few billion years, the inclination of Saturn's axis could more than double.

Read more at Science Daily

Butterfly wing clap explains mystery of flight

 

The fluttery flight of butterflies has so far been somewhat of a mystery to researchers, given their unusually large and broad wings relative to their body size. Now researchers at Lund University in Sweden have studied the aerodynamics of butterflies in a wind tunnel. The results suggest that butterflies use a highly effective clap technique, thereby making use of their unique wings. This helps them take off rapidly when escaping predators.

The study explains the benefits of both the wing shape and the flexibility of their wings.

The Lund researchers studied the wingbeats of freely flying butterflies during take-off in a wind tunnel. During the upward stroke, the wings cup, creating an air-filled pocket between them. When the wings then collide, the air is forced out, resulting in a backward jet that propels the butterflies forward. The downward wingbeat has another function: it keeps the butterflies in the air so they do not fall to the ground.

The clap of the wings was described by researchers almost 50 years ago, but it is only in this study that the theory has been tested on real butterflies in free flight. Until now, the common perception has been that butterfly wings are aerodynamically inefficient; the researchers suggest that the opposite is actually true.

"That the wings are cupped when butterflies clap them together, makes the wing stroke much more effective. It is an elegant mechanism that is far more advanced than we imagined, and it is fascinating. The butterflies benefit from the technique when they have to take off quickly to escape from predators," says biology researcher Per Henningsson, who studied the butterflies' aerodynamics together with colleague Christoffer Johansson.

"The shape and flexibility of butterfly wings could inspire improved performance and flight technology in small drones," he continues.

In addition to studying the butterflies in a wind tunnel, the researchers designed mechanical wings that mimic real ones. The shape and flexibility of the mechanical wings as they are cupped and folded confirm the efficiency.

Read more at Science Daily

Much of Earth's nitrogen was locally sourced

 

Where did Earth's nitrogen come from? Rice University scientists show one primordial source of the indispensable building block for life was close to home.

The isotopic signatures of nitrogen in iron meteorites reveal that Earth likely gathered its nitrogen not only from the region beyond Jupiter's orbit but also from the dust in the inner protoplanetary disk.

Nitrogen is a volatile element that, like carbon, hydrogen and oxygen, makes life on Earth possible. Knowing its source offers clues to not only how rocky planets formed in the inner part of our solar system but also the dynamics of far-flung protoplanetary disks.

The study by Rice graduate student and lead author Damanveer Grewal, Rice faculty member Rajdeep Dasgupta and geochemist Bernard Marty at the University of Lorraine, France, appears in Nature Astronomy.

Their work helps settle a prolonged debate over the origin of life-essential volatile elements in Earth and other rocky bodies in the solar system.

"Researchers have always thought that the inner part of the solar system, within Jupiter's orbit, was too hot for nitrogen and other volatile elements to condense as solids, meaning that volatile elements in the inner disk were in the gas phase," Grewal said.

Because the seeds of present-day rocky planets, also known as protoplanets, grew in the inner disk by accreting locally sourced dust, he said it appeared they did not contain nitrogen or other volatiles, necessitating their delivery from the outer solar system. An earlier study by the team suggested much of this volatile-rich material came to Earth via the collision that formed the moon.

But new evidence clearly shows only some of the planet's nitrogen came from beyond Jupiter.

In recent years, scientists have analyzed nonvolatile elements in meteorites, including iron meteorites that occasionally fall to Earth, to show dust in the inner and outer solar system had completely different isotopic compositions.

"This idea of separate reservoirs had only been developed for nonvolatile elements," Grewal said. "We wanted to see if this is true for volatile elements as well. If so, it can be used to determine which reservoir the volatiles in present-day rocky planets came from."

Iron meteorites are remnants of the cores of protoplanets that formed at the same time as the seeds of present-day rocky planets, making them the wild card the authors used to test their hypothesis.

The researchers found a distinct nitrogen isotopic signature in the dust that bathed the inner protoplanets within about 300,000 years of the formation of the solar system. All iron meteorites from the inner disk contained a lower concentration of the nitrogen-15 isotope, while those from the outer disk were rich in nitrogen-15.

This suggests that within the first few million years, the protoplanetary disk divided into two reservoirs, the outer rich in the nitrogen-15 isotope and the inner rich in nitrogen-14.
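
Isotope researchers usually express such differences in the standard "delta" notation: the per-mil deviation of a sample's 15N/14N ratio from that of Earth's atmosphere. The sketch below illustrates that bookkeeping only; the sample ratios and the sign-based inner/outer split are invented for illustration and are not the paper's measurements.

```python
# Sketch of the standard "delta" notation for comparing nitrogen isotope
# compositions: delta-15N is the per-mil deviation of a sample's 15N/14N ratio
# from that of atmospheric N2. The sample ratios below are invented purely to
# illustrate the two-reservoir idea; they are not data from the paper, and the
# sign-based split is illustrative only.
ATMOSPHERIC_15N_14N = 0.003676   # approximate 15N/14N of air N2, the usual reference standard

def delta_15n(ratio_sample: float) -> float:
    """Return delta-15N in per mil relative to atmospheric N2."""
    return (ratio_sample / ATMOSPHERIC_15N_14N - 1.0) * 1000.0

samples = {
    "hypothetical inner-disk iron meteorite": 0.00355,
    "hypothetical outer-disk iron meteorite": 0.00395,
}

for name, ratio in samples.items():
    d = delta_15n(ratio)
    reservoir = "15N-poor (inner-disk-like)" if d < 0 else "15N-rich (outer-disk-like)"
    print(f"{name}: delta-15N = {d:+.0f} per mil -> {reservoir}")
```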

"Our work completely changes the current narrative," Grewal said. "We show that the volatile elements were present in the inner disk dust, probably in the form of refractory organics, from the very beginning. This means that contrary to current understanding, the seeds of the present-day rocky planets -- including Earth -- were not volatile-free."

Dasgupta said the finding is significant to those who study the potential habitability of exoplanets, a topic of great interest to him as principal investigator of CLEVER Planets, a NASA-funded collaborative project exploring how life-essential elements might come together on distant exoplanets.

"At least for our own planet, we now know the entire nitrogen budget does not come only from outer solar system materials," said Dasgupta, Rice's Maurice Ewing Professor of Earth, Environmental and Planetary Sciences.

"Even if other protoplanetary disks don't have the kind of giant planet migration resulting in the infiltration of volatile-rich materials from the outer zones, their inner rocky planets closer to the star could still acquire volatiles from their neighboring zones," he said.

Read more at Science Daily