Archaeologists working in two Italian caves have discovered some of the earliest known examples of ancient humans using an adhesive on their stone tools -- an important technological advance called "hafting."
The new study, which included CU Boulder's Paola Villa, shows that Neanderthals living in Europe from about 55 to 40 thousand years ago traveled away from their caves to collect resin from pine trees. They then used that sticky substance to glue stone tools to handles made out of wood or bone.
The findings add to a growing body of evidence that suggests that these cousins of Homo sapiens were more clever than some have made them out to be.
"We continue to find evidence that the Neanderthals were not inferior primitives but were quite capable of doing things that have traditionally only been attributed to modern humans," said Villa, corresponding author of the new study and an adjoint curator at the CU Museum of Natural History.
That insight, she added, came from a chance discovery from Grotta del Fossellone and Grotta di Sant'Agostino, a pair of caves near the beaches of what is now Italy's west coast.
Those caves were home to Neanderthals who lived in Europe during the Middle Paleolithic period, thousands of years before Homo sapiens set foot on the continent. Archaeologists have uncovered more than 1,000 stone tools from the two sites, including pieces of flint that measured not much more than an inch or two from end to end.
In a recent study of the tools, Villa and her colleagues noticed a strange residue on just a handful of the flints -- bits of what appeared to be organic material.
"Sometimes that material is just inorganic sediment, and sometimes it's the traces of the adhesive used to keep the tool in its socket," Villa said.
To find out, study lead author Ilaria Degano at the University of Pisa conducted a chemical analysis of 10 flints using a technique called gas chromatography/mass spectrometry. The tests showed that the stone tools had been coated with resin from local pine trees. In one case, that resin had also been mixed with beeswax.
Villa explained that the Italian Neanderthals didn't just resort to their bare hands to use stone tools. In at least some cases, they also attached those tools to handles to give them better purchase as they sharpened wooden spears or performed other tasks like butchering or scraping leather.
"You need stone tools to cut branches off of trees and make them into a point," Villa said.
The find isn't the oldest known example of hafting by Neanderthals in Europe -- two flakes discovered in the Campitello Quarry in central Italy predate it. But it does suggest that this technique was more common than previously believed.
The existence of hafting also provides more evidence that Neanderthals, like their modern human relatives, were able to build a fire whenever they wanted one, Villa said -- something that scientists have long debated. She said that pine resin dries when exposed to air. As a result, Neanderthals needed to warm it over a small fire to make an effective glue.
"This is one of several proofs that strongly indicate that Neanderthals were capable of making fire whenever they needed it," Villa said.
In other words, enjoying the glow of a warm campfire isn't just for Homo sapiens.
Read more at Science Daily
Jun 28, 2019
A new normal: Study explains universal pattern in fossil record
Throughout life's history on Earth, biological diversity has gone through ebbs and flows -- periods of rapid evolution and of dramatic extinctions. We know this, at least in part, through the fossil record of marine invertebrates left behind since the Cambrian period. Remarkably, extreme events of diversification and extinction happen more frequently than a typical Gaussian distribution would predict. Instead of the usual bell-shaped curve, the fossil record shows a fat-tailed distribution, with extreme outlier events occurring with higher-than-expected probability.
While scientists have long known about this unusual pattern in the fossil record, they have struggled to explain it. Many random processes that play out over long times with large sample sizes -- from the processes that produce school grades to those that determine height in a population -- converge on the common Gaussian distribution. "It's a very reasonable default expectation," says Santa Fe Institute Omidyar Fellow Andy Rominger. So why doesn't the fossil record display this common pattern?
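The gap between a Gaussian and a fat-tailed distribution shows up most clearly in the probability of extreme events. As a rough illustration (not taken from the study), compare the chance of a 4-sigma event under a standard normal with the chance under a Laplace distribution of the same variance, a simple fat-tailed stand-in:

```python
import math

def gaussian_tail(x):
    """P(X > x) for a standard normal (zero mean, unit variance)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def laplace_tail(x):
    """P(X > x) for a zero-mean Laplace with unit variance (scale b = 1/sqrt(2))."""
    b = 1 / math.sqrt(2)
    return 0.5 * math.exp(-x / b)

# A 4-sigma "extreme event" is dozens of times more likely under the
# fat-tailed distribution than under the Gaussian.
print(gaussian_tail(4.0))  # ~3e-5
print(laplace_tail(4.0))   # ~2e-3
```

With matched variances, the two curves look similar near the center; it is only far out in the tails, where mass extinctions and radiations live, that the fat-tailed model assigns dramatically more probability.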
In a new paper published in Science Advances, Rominger and colleagues Miguel Fuentes (San Sebastián University, Chile) and Pablo Marquet (Pontifical Catholic University of Chile) have taken a new approach to tackling this question. Instead of trying to only describe fluctuations in biodiversity across all types of organisms, they also look at fluctuations within clades, or groups of organisms that share a common ancestral lineage.
"Within a lineage of closely related organisms, there should be a conserved evolutionary dynamic. Between different lineages, that dynamic can change," says Rominger. That is, within clades, related organisms tend to find an effective adaptive strategy and never stray too far. But between these clade-specific fitness peaks are valleys of metaphorically uninhabited space. "It turns out, just invoking that simple idea, with some very simple mathematics, described the patterns in the fossil record very well."
These simple mathematics are tools that Fuentes, in 2009, used to describe another system with an unusual fat-tailed distribution: the stock market. By using superstatistics -- an approach from thermodynamics to describe turbulent flow -- Fuentes could accurately describe the hard-to-predict dramatic crashes and explosions in value.
"In biology, we see these crashes and explosions too, in terms of biodiversity," says Rominger. "We wondered if Fuentes' elegant approach could also describe the evolutionary dynamics we see in the fossil record."
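Superstatistics generates fat tails by superposing ordinary Gaussians whose variance itself fluctuates between regimes. A minimal sketch of that mechanism (an illustration of the general idea, not the paper's fitted model): mix draws from two normal distributions with different spreads and check that the result has kurtosis above the Gaussian value of 3, the signature of a fat tail:

```python
import random

random.seed(42)

def sample_mixture(n):
    """A Gaussian whose scale jumps between two regimes -- a toy
    superstatistical process. Each draw picks its 'volatility' first."""
    return [random.gauss(0, random.choice([1.0, 3.0])) for _ in range(n)]

def kurtosis(xs):
    """Sample kurtosis; equals 3 for a Gaussian, greater than 3 for fat tails."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2

data = sample_mixture(100_000)
# Theory for this 50/50 mixture: E[X^4]/Var^2 = 123/25 = 4.92, well above 3.
print(kurtosis(data))
```

Even though every individual draw is Gaussian, letting the variance wander produces exactly the excess of extreme outcomes seen in market crashes and, per the paper's argument, in the fossil record.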
Read more at Science Daily
People's motivations bias how they gather information
A new study by Filip Gesiarz, Donal Cahill and Tali Sharot of University College London, U.K., published in PLOS Computational Biology, suggests that people stop gathering evidence earlier when the data supports their desired conclusion than when it supports the conclusion they wish were false.
Previous studies had already provided some clues that people gather less information before reaching desirable beliefs. For example, people are more likely to seek a second medical opinion when the first diagnosis is grave. However, certain design limitations of those studies prevented a definitive conclusion, and the reasons behind this bias were previously unknown. By fitting people's behavior to a mathematical model, Gesiarz and colleagues were able to identify those reasons.
"Our research suggests that people start with an assumption that their favored conclusion is more likely true and weight each piece of evidence supporting it more than evidence opposing it. Because of that, people will find no need to gather additional information that could have revealed their conclusion to be false. They will stop the investigation as soon as the jury tilts in their favor," said Gesiarz.
In this new study, 84 volunteers played an online categorization game in which they could gather as much evidence as they wanted to help them make judgements, and were paid according to how accurate they were. In addition, if the evidence pointed to one category they would get bonus points, and if it pointed to another they would lose points. So while there was reason to wish the evidence pointed a particular way, the only way for volunteers to maximize rewards was to give accurate responses. Despite this, the researchers found that volunteers stopped gathering data earlier when it supported the conclusion they wished were true than when it supported the undesirable conclusion.
"Today, a limitless amount of information is available at the click of a mouse," Sharot says. "However, because people are likely to conduct less thorough searches when the first few hits provide desirable information, this wealth of data will not necessarily translate to more accurate beliefs."
Next, the authors hope to determine what factors make certain individuals more likely to have a bias in how they gather information than others. For instance, they are curious whether children might show the same bias revealed in this study, or whether people with depression, which is associated with motivation problems, have different data-gathering patterns.
From Science Daily
Some corals can survive in acidified ocean conditions, but have lower density skeletons
The study took advantage of the unusual seawater chemistry found naturally at sites along the Caribbean coastline of Mexico's Yucatan Peninsula, where water discharging from submarine springs has lower pH than the surrounding seawater, with reduced availability of the carbonate ions corals need to build their calcium carbonate skeletons.
In a two-year field experiment, the international team of researchers transplanted genetically identical fragments of three species of corals to a site affected by the springs and to a nearby control site not influenced by the springs, and then monitored the survival, growth rates, and other physiological traits of the transplants. They reported their findings in a paper published June 26 in Proceedings of the Royal Society B.
"The good news is the corals can survive and deposit calcium carbonate, but the density of their skeletons is reduced, which means the framework of the reef would be less robust and might be more susceptible to storm damage and bioerosion," said Adina Paytan, a research professor at UCSC's Institute of Marine Sciences and corresponding author of the paper.
Of the three species tested, the one that performed best in the low-pH conditions was Siderastrea siderea, commonly known as massive starlet coral, a slow-growing species that forms large dome-shaped structures. Another slow-growing dome-shaped species, Porites astreoides (mustard hill coral), did almost as well, although its survival rate was 20 percent lower. Both of these species outperformed the fast-growing branching coral Porites porites (finger coral).
Coauthor Donald Potts, professor of ecology and evolutionary biology at UC Santa Cruz, said the transplanted species are all widespread throughout the Caribbean. "The slow-growing, dome-shaped corals tend to be more tolerant of extreme conditions, and they are important in building up the permanent structure of the reef," he said. "We found that they have the potential for persistence in acidified conditions."
Corals will have to cope with more than ocean acidification, however. The increasing carbon dioxide level in the atmosphere is also driving climate change, resulting in warmer ocean temperatures and rising sea levels. Unusually warm temperatures can disrupt the symbiosis between coral polyps and the algae that live in them, leading to coral bleaching. And rapidly rising sea levels could leave slow-growing corals at depths where they would die from insufficient sunlight.
Nevertheless, Potts noted that several species of Caribbean corals have long fossil records showing that they have persisted through major changes in Earth's history. "These are species with a history of survival and tolerance," he said.
He added that both S. siderea and P. astreoides had higher chlorophyll concentrations at the low-pH site, indicating that their algal symbionts were responding positively and potentially increasing the energy resources available to the corals for resisting stress.
Both of the slow-growing species that did well under acidified conditions have internal fertilization and brood their larvae, so that their offspring have the potential to settle immediately in the same area, Potts said. "This means there is potential for local genetic adaptation over successive generations to changing environmental conditions," he said.
The authors also noted that the differences among coral species in survival and calcification under acidified conditions could be useful information for reef restoration efforts and perhaps even for efforts to genetically modify corals to give them greater stress tolerance.
Paytan said she remains "cautiously optimistic," despite the many threats facing coral reefs worldwide.
"These corals are more robust than we thought," she said. "They have the potential to persist with ocean acidification, but it costs them energy to cope with it, so we have to do all we can to reduce other stressors, such as nutrient pollution and sedimentation."
Read more at Science Daily
Jun 27, 2019
Some extinct crocs were vegetarians
"The most interesting thing we discovered was how frequently it seems extinct crocodyliforms ate plants," said Keegan Melstrom, a doctoral student at the University of Utah. "Our study indicates that complexly-shaped teeth, which we infer to indicate herbivory, appear in the extinct relatives of crocodiles at least three times and maybe as many as six."
All living crocodylians possess a similar general body shape and ecology to match their lifestyle as semiaquatic generalist carnivores, which includes relatively simple, conical teeth. It was clear from the start of the study that extinct species showed a different pattern, including species with many specializations not seen today. One such specialization is a feature known as heterodonty: regionalized differences in tooth size or shape.
"Carnivores possess simple teeth whereas herbivores have much more complex teeth," Melstrom explained. "Omnivores, organisms that eat both plant and animal material, fall somewhere in between. Part of my earlier research showed that this pattern holds in living reptiles that have teeth, such as crocodylians and lizards. So these results told us that the basic pattern between diet and teeth is found in both mammals and reptiles, despite very different tooth shapes, and is applicable to extinct reptiles."
To infer what those extinct crocodyliforms most likely ate, Melstrom and his graduate advisor, chief curator Randall Irmis, compared the tooth complexity of extinct crocodyliforms to those of living animals using a method originally developed for use in living mammals. Overall, they measured 146 teeth from 16 different species of extinct crocodyliforms.
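A common way to quantify tooth complexity in such comparative work is orientation patch counting: the tooth's digitized surface is divided into contiguous regions ("patches") that face the same compass direction, and more patches indicates a more complex, herbivore-like tooth. Below is a deliberately simplified toy version of that idea on a small grid height map (my sketch of the general technique, not the authors' actual pipeline or parameters):

```python
import math

def opc(height):
    """Count connected patches of similar slope direction on a height
    grid -- a toy version of orientation patch count (OPC)."""
    n = len(height)

    def orientation(i, j):
        # Central-difference gradient at an interior cell.
        di = height[i + 1][j] - height[i - 1][j]
        dj = height[i][j + 1] - height[i][j - 1]
        if di == 0 and dj == 0:
            return -1  # flat cell
        # Bin the gradient direction into 8 compass sectors.
        return int(((math.atan2(di, dj) + math.pi) / (2 * math.pi)) * 8) % 8

    bins = {(i, j): orientation(i, j) for i in range(1, n - 1) for j in range(1, n - 1)}
    seen, patches = set(), 0
    for cell in bins:
        if cell in seen:
            continue
        patches += 1  # flood-fill one patch of equal orientation
        stack = [cell]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen:
                continue
            seen.add((i, j))
            for nb in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
                if nb in bins and nb not in seen and bins[nb] == bins[(i, j)]:
                    stack.append(nb)
    return patches

N = 8
flat = [[0] * N for _ in range(N)]  # featureless surface: a single patch
pyramid = [[min(i, j, N - 1 - i, N - 1 - j) for j in range(N)] for i in range(N)]
print(opc(flat), opc(pyramid))  # the pyramid's distinct faces yield more patches
```

A simple shearing blade scores low on such a measure, while a ridged grinding surface scores high, which is why the count tracks the carnivore-to-herbivore gradient.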
Using a combination of quantitative dental measurements and other morphological features, the researchers reconstructed the diets of those extinct crocodyliforms. The results show that those animals had a wider range of dental complexities and presumed dietary ecologies than had been appreciated previously.
Plant-eating crocodyliforms appeared early in the evolutionary history of the group, the researchers conclude, shortly after the end-Triassic mass extinction, and persisted until the end-Cretaceous mass extinction that killed off all dinosaurs except birds. Their analysis suggests that herbivory arose independently a minimum of three times, and possibly six times, in Mesozoic crocodyliforms.
"Our work demonstrates that extinct crocodyliforms had an incredibly varied diet," Melstrom said. "Some were similar to living crocodylians and were primarily carnivorous, others were omnivores and still others likely specialized in plants. The herbivores lived on different continents at different times, some alongside mammals and mammal relatives, and others did not. This suggests that an herbivorous crocodyliform was successful in a variety of environments!"
Melstrom says they are continuing to reconstruct the diets of extinct crocodyliforms, including in fossilized species that are missing teeth. He also wants to understand why the extinct relatives of crocodiles diversified so radically after one mass extinction but not another, and whether dietary ecology could have played a role.
Read more at Science Daily
Honeybees infect wild bumblebees through shared flowers
Several of the viruses associated with bumblebees' trouble are moving from managed bees in apiaries to nearby populations of wild bumblebees -- "and we show this spillover is likely occurring through flowers that both kinds of bees share," says Samantha Alger, a scientist at the University of Vermont who led the new research.
"Many wild pollinators are in trouble and this finding could help us protect bumblebees," she says. "This has implications for how we manage domestic bees and where we locate them."
The first-of-its-kind study was published June 26 in the journal PLOS ONE.
Virus Hunters
Around the globe, the importance of wild pollinators has been gaining attention as diseases and declines in managed honeybees threaten key crops. Less well understood is that many of the threats to honeybees (Apis mellifera) -- including land degradation, certain pesticides, and diseases -- also threaten native bees, such as the rusty patched bumblebee, recently listed under the Endangered Species Act; it has declined by nearly 90% but was once an excellent pollinator of cranberries, plums, apples and other agricultural plants.
The research team -- three scientists from the University of Vermont and one from the University of Florida -- explored 19 sites across Vermont. They discovered that two well-known RNA viruses found in honeybees -- deformed wing virus and black queen cell virus -- were present at higher levels in bumblebees collected less than 300 meters from commercial beehives. The scientists also discovered that active infections of the deformed wing virus were higher near these commercial apiaries, but no deformed wing virus was found in the bumblebees they collected where foraging honeybees and apiaries were absent.
Most impressively, the team detected viruses on 19% of the flowers they sampled from sites near apiaries. "I thought this was going to be like looking for a needle in a haystack. What are the chances that you're going to pick a flower and find a bee virus on it?" says Alger. "Finding this many was surprising." In contrast, the scientists didn't detect any bee viruses on flowers sampled more than one kilometer from commercial beehives.
The UVM scientists -- including Alger and co-author Alex Burnham, a doctoral student -- and other bee experts have for some years suspected that RNA viruses might move from honeybees to bumblebees through shared flowers. But -- with the exception of one small study in a single apiary -- the degree to which these viruses can be "horizontally transmitted," the scientists write, with flowers as the bridge, has not been examined until now.
Taken together, these results strongly suggest that "viruses in managed honeybees are spilling over to wild bumblebee populations and that flowers are an important route," says Alison Brody, a professor in UVM's Department of Biology, and senior author on the new PLOS study. "Careful monitoring and treating of diseased honeybee colonies could protect wild bees from these viruses as well as other pathogens or parasites."
Just Like Chicken?
Alger -- an expert beekeeper and researcher in UVM's Department of Plant & Soil Science and Gund Institute for Environment -- is deeply concerned about the long-distance transport of large numbers of honeybees for commercial pollination. "Big operators put hives on flatbed trucks and move them to California to pollinate almonds and then onto Texas for another crop," she says -- carrying their diseases wherever they go. And between bouts of work on monoculture farm fields, commercial bees are often taken to more pristine natural habitats "to rest and recover, where there is diverse, better forage," says Alger.
"This research suggests that we might want to keep apiaries outside of areas where there are vulnerable pollinator species, like the rusty patched bumblebees," Alger says, "especially because we have so much more to learn about what these viruses are actually doing to bumblebees."
Read more at Science Daily
Thunderbolts of lightning, gamma rays exciting
In the city of Kanazawa, Ishikawa Prefecture, in central Japan, Wada and colleagues work with local schools and businesses to install radiation monitors onto buildings. These radiation monitors are not there due to some worry about local radiation levels, though. They form a network, the purpose of which is to detect radiation coming from the sky. It may surprise some, but it's been known for around 30 years that thunderstorms can bring with them gamma-ray activity.
"Forever, people have seen lightning and heard thunder. These were the ways we could experience this power of nature," said Wada. "With the discovery of electromagnetism, scientists learned to see lightning with radio receivers. But now we can observe lightning in gamma rays -- ionizing radiation. It's like having four eyes to study the phenomena."
There are two known kinds of gamma-ray phenomena associated with thunderclouds: gamma-ray glows, weak emissions which last about a minute, and short-lived terrestrial gamma-ray flashes (TGFs), which occur as lightning strikes and are much more intense than gamma-ray glows. Both occur in regions of thunderclouds sandwiched between layers of varying charge. The charged regions accelerate electrons to near the speed of light. At these speeds, referred to as relativistic, electrons that stray very close to the nuclei of nitrogen atoms in the air slow down a little and emit a telltale gamma ray. This is called bremsstrahlung radiation.
"During a winter thunderstorm in Kanazawa, our monitors detected a simultaneous TGF and lightning strike. This is fairly common, but interestingly we also saw a gamma-ray glow in the same area at the same time," continued Wada. "Furthermore, the glow abruptly disappeared when the lightning struck. We can say conclusively the events are intimately connected and this is the first time this connection has been observed."
The mechanism underlying lightning discharge is highly sought after and this research may offer previously unknown insights. Wada and team intend to further their investigation to explore the possibility that gamma-ray glows don't just precede lightning strikes, but may in fact cause them. Radiation levels of the gamma-ray flashes are quite low, approximately a tenth the level one may receive from a typical medical X-ray.
"Our finding marks a milestone in lightning research and we will soon double our number of radiation sensors from 23 to about 40 or 50. With more sensors, we could greatly improve predictive models," explained Wada. "It's hard to say right now, but with sufficient sensor data, we may be able to predict lightning strikes within about 10 minutes of them happening and within around 2 kilometers of where they happen. I'm excited to be part of this ongoing research."
Further investigations will likely still take place in Kanazawa as the area has rare and ideal meteorological conditions for this kind of work. Most radiation observations in storms come from airborne or mountain-based stations as thunderclouds are generally very high up. But winter storms in Kanazawa bring thunderclouds surprisingly close to the ground, ideal for study with the low-cost portable monitors developed by the research team.
The researchers created these unique portable radiation monitors in part with technology derived from space-based satellite observatories designed for astrophysics experiments. This is appropriate as the data from this kind of research could be useful for those who research astrophysics and in particular solar physics in the context of particle acceleration. But there is a more down-to-earth offshoot as well.
"Paleontologists who study life from the last 50,000 years or so use a technique called carbon-14 dating to determine the age of a sample. The technique relies on knowledge of the levels of two kinds of carbon, carbon-12 and carbon-14," said Wada. "It's commonly thought carbon-14 is created by cosmic rays at a roughly constant rate, hence the predictive power of the technique. But there's a suggestion thunderstorms may alter the ratio of carbon-12 to carbon-14, which may slightly change the accuracy of or calibration required for carbon-14 dating to work."
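The dating technique Wada mentions rests on radioactive decay with a known half-life (about 5,730 years for carbon-14), so the measured fraction of carbon-14 remaining converts directly to an age. A quick sketch of the arithmetic, including how a shifted initial carbon-14 level would move every inferred age by a constant offset:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def c14_age(fraction_remaining):
    """Age in years from the remaining fraction of carbon-14, assuming the
    sample started at the standard baseline carbon-14/carbon-12 ratio."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)

print(c14_age(0.5))   # ~5730 years: one half-life
print(c14_age(0.25))  # ~11460 years: two half-lives

# If thunderstorms raised a sample's initial carbon-14 level by 1%, every
# inferred age would shift by a constant offset of roughly 80 years:
offset = HALF_LIFE_C14 / math.log(2) * math.log(1.01)
print(round(offset))  # ~82
```

The offset scales with the logarithm of the ratio change, which is why even a small perturbation to the assumed starting ratio translates into a systematic, decades-scale calibration error.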
Read more at Science Daily
ALMA pinpoints the formation site of planet around nearest young star
The young star TW Hydrae, located 194 light-years away in the constellation Hydra, is the closest star around which planets may be forming. Its surrounding dust disk is the best target for studying the process of planet formation.
Previous ALMA observations revealed that the disk is composed of concentric rings. Now, new higher-sensitivity ALMA observations have revealed a previously unknown small clump in the planet-forming disk. The clump is elongated along the direction of the disk's rotation, with a width approximately equal to the distance between the Sun and the Earth, and a length of about four and a half times that.
"The true nature of the clump is still not clear," says Takashi Tsukagoshi at the National Astronomical Observatory of Japan and the lead author of the research paper. "It could be a 'circumplanetary' disk feeding a Neptune-sized infant planet. Or it might be that swirling gas is raking up the dust particles."
Planets form in disks of gas and dust around young stars. Micrometer-sized dust particles stick together to grow to larger grains, rocks, and finally a planet. Theoretical studies predict that an infant planet is surrounded by a 'circumplanetary' disk, a small structure within the larger dust disk around the star. The planet collects material through this circumplanetary disk. It is important to find such a circumplanetary disk to understand the final stage of planet growth.
Cold dust and gas in the disks around young stars are difficult to see in visible light, but they emit radio waves. With its high sensitivity and resolution for such radio waves, ALMA is one of the most powerful instruments to study the genesis of planets.
However, the brightness and elongated shape of the structure revealed by ALMA don't exactly match theoretical predictions for circumplanetary disks. It might instead be one of the gas vortices that are also expected to form here and there around a young star. Finding only a single dust clump at this time also runs contrary to theoretical studies. So the research team could not reach a definitive answer on the nature of the dusty clump.
Read more at Science Daily
Jun 26, 2019
Cyanide compounds discovered in meteorites may hold clues to the origin of life
Cyanide and carbon monoxide are both deadly poisons to humans, but compounds containing iron, cyanide, and carbon monoxide discovered in carbon-rich meteorites by a team of scientists at Boise State University and NASA may have helped power life on early Earth. The extraterrestrial compounds found in meteorites resemble the active site of hydrogenases, which are enzymes that provide energy to bacteria and archaea by breaking down hydrogen gas (H2). Their results suggest that these compounds were also present on early Earth, before life began, during a period of time when Earth was constantly bombarded by meteorites and the atmosphere was likely more hydrogen-rich.
"When most people think of cyanide, they think of spy movies -- a guy swallowing a pill, foaming at the mouth and dying, but cyanide was probably an essential compound for building molecules necessary for life," explained Dr. Karen Smith, senior research scientist at Boise State University, Boise, Idaho. Cyanide, a carbon atom bound to a nitrogen atom, is thought to be crucial for the origin of life, as it is involved in the non-biological synthesis of organic compounds like amino acids and nucleobases, which are the building blocks of proteins and nucleic acids used by all known forms of life.
Smith is lead author of a paper on this research published June 25 in Nature Communications. Smith, along with Boise State assistant professor Mike Callahan, a co-author on the paper, developed new analytical methods to extract and measure ancient traces of cyanide in meteorites. They found that the meteorites containing cyanide belong to a group of carbon-rich meteorites called CM chondrites. Other types of meteorites tested, including a Martian meteorite, contained no cyanide.
"Data collected by NASA's OSIRIS-REx spacecraft of asteroid Bennu indicate that it is related to CM chondrites," said co-author Jason Dworkin of NASA's Goddard Space Flight Center in Greenbelt, Maryland. "OSIRIS-REx will deliver a sample from Bennu to study on Earth in 2023. We will search for these very compounds to try to connect Bennu to known meteorites and to understand the potential delivery of prebiotic compounds such as cyanide, which may have helped start life on the early Earth or other bodies in the solar system."
Cyanide has been found in meteorites before. However, in the new work, Smith and Callahan were surprised to discover that cyanide, along with carbon monoxide (CO), was binding with iron to form stable compounds in the meteorites. They identified two different iron cyano-carbonyl complexes in the meteorites using high-resolution liquid chromatography-mass spectrometry. "One of the most interesting observations from our study is that these iron cyano-carbonyl complexes resemble portions of the active sites of hydrogenases, which have a very distinct structure," Callahan said.
Hydrogenases are present in almost all modern bacteria and archaea and are widely believed to be ancient in origin. Hydrogenases are large proteins, but the active site -- the region where chemical reactions take place -- happens to be a much smaller metal-organic compound contained within the protein, according to Callahan. It is this compound that resembles the cyanide-bearing compounds the team discovered in meteorites.
An enduring mystery regarding the origin of life is how biology could have arisen from non-biological chemical processes. The similarities between the active sites in hydrogenase enzymes and the cyanide compounds the team found in meteorites suggest that non-biological processes in the parent asteroids of meteorites and on ancient Earth could have made molecules useful to emerging life.
Read more at Science Daily
Algorithm designed to map universe, solve mysteries
Cornell University researchers have developed an algorithm designed to visualize models of the universe in order to solve some of physics' greatest mysteries.
The algorithm was developed by applying scientific principles used to create models for understanding cell biology and physics to the challenges of cosmology and big data.
"Science works because things behave much more simply than they have any right to," said professor of physics James Sethna. "Very complicated things end up doing rather simple collective behavior."
Sethna is the senior author of "Visualizing Probabilistic Models With Intensive Principal Component Analysis," published in the Proceedings of the National Academy of Sciences.
The algorithm, designed by first author Katherine Quinn, allows researchers to image a large set of probabilities to look for patterns or other information that might be useful, and provides them with better intuition for understanding complex models and data.
"A person can't just sit down and do it," Quinn said. "We need better algorithms that can extract what we're interested in, without being told what to look for. We can't just say, 'Look for interesting universes.' This algorithm is a way of untangling information in a way that can reveal the interesting structure of the data."
Further complicating the researchers' task was the fact that the data consists of ranges of probabilities, rather than raw images or numbers.
Their solution takes advantage of different properties of probability distributions to visualize a collection of things that could happen. In addition to cosmology, their model has applications to machine learning and statistical physics, which also work in terms of predictions.
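The idea of mapping an ensemble of probability distributions into a low-dimensional picture can be sketched in code. This is not the authors' InPCA implementation; it is a simplified stand-in that embeds distributions using pairwise Hellinger distances and classical multidimensional scaling, with a toy binomial "model ensemble" invented for the example.

```python
# Simplified sketch (NOT the published InPCA algorithm): embed a family of
# probability distributions in 2-D so that similar distributions land nearby.
import numpy as np
from math import comb

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def embed_2d(dists):
    """Classical MDS: turn a pairwise distance matrix into 2-D coordinates."""
    n = dists.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dists ** 2) @ J          # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:2]       # two largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Toy "model ensemble": binomial distributions with different biases,
# standing in for the predicted data of different candidate universes.
thetas = np.linspace(0.2, 0.8, 7)
n_trials = 10
ks = np.arange(n_trials + 1)
models = np.array([[comb(n_trials, k) * t**k * (1 - t)**(n_trials - k)
                    for k in ks] for t in thetas])

D = np.array([[hellinger(p, q) for q in models] for p in models])
coords = embed_2d(D)
# Each row of coords is one model; nearby points make similar predictions.
```

In this picture, structure in the cloud of points (clusters, curves, hierarchies) reveals how the model family behaves, which is the kind of "map of possible universes" the study describes.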
To test the algorithm, the researchers used data from the European Space Agency's Planck satellite, and studied it with co-author Michael Niemack, associate professor of physics. They applied the model to data on the cosmic microwave background -- radiation left over from the universe's early days.
The model produced a map depicting possible characteristics of different universes, of which our own universe is one point.
This new method of visualizing the qualities of our universe highlights the hierarchical structure of the dark energy and dark matter dominated model that fits the cosmic microwave background data so well. These visualizations present a promising approach for optimizing cosmological measurements in the future, Niemack said.
Read more at Science Daily
Air pollution found to affect marker of female fertility in real-life study
Ovarian reserve, a term widely adopted to reflect the number of resting follicles in the ovary and thus a marker of potential female fertility, has been found in a large-scale study to be adversely affected by high levels of air pollution.
Results from the Ovarian Reserve and Exposure to Environmental Pollutants (ORExPo study), a 'real-world data' study using hormone measurements taken from more than 1300 Italian women, are presented today at the Annual Meeting of ESHRE by first investigator Professor Antonio La Marca from the University of Modena and Reggio Emilia, Italy.
Behind the study lay emerging evidence that many environmental chemicals, as well as natural and artificial components of everyday diet, have the potential to disturb the physiological role of hormones, interfering with their biosynthesis, signaling or metabolism. The hormone in this case, anti-Müllerian hormone or AMH, is secreted by cells in the ovary and is now widely recognised as a reliable circulating marker of ovarian reserve.
'The influence of age and smoking on AMH serum levels is now largely accepted,' explains Professor La Marca, 'but a clear effect of environmental factors has not been demonstrated so far.'
The ORExPo study was in effect an analysis of all AMH measurements taken from women living in the Modena area between 2007 and 2017 and assembled in a large database. These measurements were extended to a computing data warehouse in which AMH levels were linked to patients' age and residential address. The analysis was completed with environmental data and a 'geo-localisation' estimate based on each patient's residence. The assessment of environmental exposure considered daily particulate matter (PM) and values of nitrogen dioxide (NO2), a polluting gas which gets into the air from burning fuel.
Results from the 1463 AMH measurements collected from 1318 women firstly showed -- as expected -- that serum AMH levels after the age of 25 were inversely and significantly related to the women's age. However, it was also found that AMH levels were inversely and significantly related to environmental pollutants defined as PM10, PM2.5 and NO2. This association was age-independent.
These results were determined by dividing the full dataset into quartiles reflecting PM10, PM2.5 and NO2 concentrations. The analysis found significantly lower levels of AMH in the fourth quartile than in the lowest quartiles, which, said Professor La Marca, 'again confirms that independently of age the higher the level of particulate matter and NO2, the lower the serum concentration of AMH'. The lowest concentration of AMH -- reflecting 'severe ovarian reserve reduction' -- was measured in subjects who were exposed to levels of PM10, PM2.5 and NO2 above 29.5, 22 and 26 mcg/m3 respectively. Nevertheless, these were values well below the upper limits recommended by the EU and local authorities (i.e., 40, 25 and 40 mcg/m3 respectively).
Severe ovarian reserve reduction, as reflected in a serum AMH concentration below 1 ng/ml, was significantly more frequent in the fourth quartile than in the first three quartiles for PM10 (62% vs 38%), for PM2.5, and for NO2. 'This means by our calculations,' said Professor La Marca, 'exposure to high levels of PM10, PM2.5 and NO2 increases the risk of having a severely reduced ovarian reserve by a factor between 2 and 3.'
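The quartile comparison described above can be illustrated with a short script. All numbers here are simulated for the example (the real values came from the ORExPo database); it shows the mechanics of splitting subjects at the 75th exposure percentile and comparing the frequency of AMH below 1 ng/ml in the top quartile against the rest.

```python
# Hypothetical sketch of the quartile analysis, with made-up data.
import random
random.seed(0)

# Toy data: (pm10_exposure, amh_level) pairs, simulated for illustration only.
subjects = [(random.uniform(10, 45), random.uniform(0.2, 5.0))
            for _ in range(1000)]

exposures = sorted(pm for pm, _ in subjects)
q3 = exposures[int(0.75 * len(exposures))]     # 75th-percentile exposure cutoff

top = [amh for pm, amh in subjects if pm >= q3]    # fourth (highest) quartile
rest = [amh for pm, amh in subjects if pm < q3]    # first three quartiles

def severe_rate(amhs, threshold=1.0):
    """Fraction of subjects with AMH below the 'severe reduction' threshold."""
    return sum(a < threshold for a in amhs) / len(amhs)

risk_ratio = severe_rate(top) / severe_rate(rest)
# With exposure-independent simulated data the ratio hovers near 1;
# in the study, the top quartile carried roughly 2-3x the risk.
```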
Read more at Science Daily
Scientists closer to unraveling mechanisms of speech processing in the brain
In the 1860s, French physician Paul Broca published his findings that the brain's speech production center was located in the left hemisphere. Though scientists have largely accepted since then that the left half of the brain dominates language processing, the reasons behind this lateralization have remained unclear.
"The lateralization of language processing in the auditory cortical areas of the brain has been known for over 150 years, but the function, neural mechanisms, and development of this hemispheric specialization are still unknown," said Hysell V. Oviedo, a biology professor with The Graduate Center, CUNY and the City College of New York.
A new study from Oviedo's lab, published in Nature Communications, makes headway into this mystery. Using the mouse as a model system, the researchers observed different specializations between the left and right auditory processing centers of the brain, and found differences in their wiring diagrams that may explain their distinct speech processing functions.
In addition to answering long-standing questions in neuroscience and language processing, the results of Oviedo's study could someday lead to a better understanding of certain mental health problems. Autism spectrum disorder has been linked to a failure of lateralized language processing to develop between the two halves of the brain. And abnormal lateralization is a risk factor for auditory hallucinations in schizophrenia.
One common feature of mouse vocalizations is syllables with downward jumps in pitch. The left auditory cortex in the mouse showed greater activation in response to these tone sequences, whereas the right auditory cortex appeared to be more of a generalist, responding to any tone sequence. Specializations to detect specific tone sequences prevalent in vocalizations could underlie the left auditory center's dominance in processing the content or meaning of speech, while the right auditory center's generalist scheme could underlie its dominance in processing the intonation or prosody of speech.
Notably, the specialized differences between the left and right sides are not innate. Rather, Oviedo says, the differences between their circuitry depend on the acoustic environment in which the mouse was raised.
"Our discovery of the differences in the wiring diagram provides the opportunity to study the molecular phenotypes that shape the development of vocalization processing and how it goes awry in neurodevelopmental communication disorders," Oviedo said.
Read more at Science Daily
Jun 25, 2019
How trees affect the weather
Nature, said Ralph Waldo Emerson, is no spendthrift. Unfortunately, he was wrong.
New research led by University of Utah biologists William Anderegg, Anna Trugman and David Bowling finds that some plants and trees are prolific spendthrifts in drought conditions -- "spending" precious soil water to cool themselves and, in the process, making droughts more intense. The findings are published in the Proceedings of the National Academy of Sciences.
"We show that the actual physiology of the plants matters," Anderegg says. "How trees take up, transport and evaporate water can influence societally important extreme events, like severe droughts, that can affect people and cities."
Functional traits
Anderegg studies how tree traits affect how well forests can handle hot and dry conditions. Some plants and trees, he's found, possess an internal plumbing system that slows down the movement of water, helping the plants to minimize water loss when it's hot and dry. But other plants have a system more suited for transporting large quantities of water vapor into the air -- larger openings on leaves, more capacity to move water within the organism. Anderegg's past work has looked at how those traits determine how well trees and forests can weather droughts. But this study asks a different question: How do those traits affect the drought itself?
"We've known for a long time that plants can affect the atmosphere and can affect weather," Anderegg says. Plants and forests draw water out of the soil and exhale it into the atmosphere, affecting the balance of water and heat at our planet's surface, which fundamentally controls the weather. In some cases, like in the Amazon rainforest, all of that water vapor can jumpstart precipitation. Even deforestation can affect downwind weather by leaving regions drier than before.
Anderegg and his colleagues used information from 40 sites around the world, ranging from Canada to Australia. At each site, instruments collected data on the flows of heat, water and carbon in and out of the air, as well as what tree species were prevalent around the instrumentation. Comparing that data with a database of tree traits allowed the researchers to draw conclusions about which traits were correlated with droughts becoming more intense.
Two traits stuck out: maximum leaf gas exchange rate and water transport. The first trait is the rate at which leaves can pump water vapor into the air. The second describes how much water the tree can move to the leaves. The results showed that in cool regions, plants and trees slowed down their water use in response to declining soil moisture. But in hot climates, some plants and trees with high water transport and leaf gas exchange rates cranked up the AC, so to speak, when the soil got dry, losing more and more water in an effort to carry out photosynthesis and stay cool while depleting the soil moisture that was left.
"You end up getting to these conditions that are hotter and drier much faster with those plants than with other plants," Anderegg says.
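The kind of cross-site comparison described above boils down to correlating a plant trait against how fast soil moisture declines during a dry spell. A minimal sketch follows; the trait values and drawdown rates are invented for illustration (the study used flux-tower measurements from 40 real sites).

```python
# Illustrative correlation of a "spendthrift" trait with soil drying speed.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-site values: trait = maximum leaf gas-exchange rate,
# drawdown = soil-moisture loss rate during drought. Numbers are made up.
gas_exchange = [120, 150, 180, 200, 240, 260, 300, 320]
drawdown     = [1.1, 1.3, 1.6, 1.8, 2.3, 2.4, 2.9, 3.1]

r = pearson(gas_exchange, drawdown)
# A strongly positive r would mean water-intensive traits go hand in hand
# with faster soil drying, as the study found in hot climates.
```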
More drought to come
It's true that hot and dry regions tend to have more plants and trees that are adapted to dry conditions. But regardless of climate, some species with water-intensive traits, such as oaks in a Mediterranean climate, can still exacerbate a drought.
Anderegg says that understanding the relationship between a tree's traits and drought conditions helps climate scientists and local leaders to plan for future drought effects on communities.
"Failing to account for this key physiology of plants would give us less accurate predictions for what climate change is going to mean for drought in a lot of regions," he says.
Read more at Science Daily
How icy outer solar system satellites may have formed
Using sophisticated computer simulations and observations, a team led by researchers from the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology has shown how the so-called trans-Neptunian Objects (or TNOs) may have formed. TNOs, which include the dwarf planet Pluto, are a group of icy and rocky small bodies -- smaller than planets but larger than comets -- that orbit the Solar System beyond the planet Neptune. TNOs likely formed at the same time as the Solar System, and understanding their origin could provide important clues as to how the entire Solar System originated.
Like many solar system bodies, including the Earth, TNOs often have their own satellites, which likely formed early on from collisions among the building blocks of the Solar System. Understanding the origin of TNOs along with their satellites may help understand the origin and early evolution of the entire Solar System. The properties of TNOs and their satellites -- for example, their orbital properties, composition and rotation rates -- provide a number of clues for understanding their formation. These properties may reflect their formation and collisional history, which in turn may be related to how the orbits of the giant planets Jupiter, Saturn, Neptune, and Uranus changed over time since the Solar System formed.
The New Horizons spacecraft flew by Pluto, the most famous TNO, in 2015. Since then, Pluto and its satellite Charon have attracted a lot of attention from planetary scientists, and many new small satellites around other large TNOs have been found. In fact, all TNOs larger than 1000 km in diameter are now known to have satellite systems. Interestingly, the estimated mass ratios of these satellites to their host systems range from 1/10 to 1/1000, encompassing the Moon-to-Earth mass ratio (~1/80). This may be significant because Earth's Moon and Charon are both thought to have formed from giant impacts.
To study the formation and evolution of TNO satellite systems, the research team performed more than 400 giant impact simulations and tidal evolution calculations. "This is really hard work," says the study's senior author, Professor Hidenori Genda from the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology. Other Tokyo Tech team members included Sota Arakawa and Ryuki Hyodo.
The Tokyo Tech study found that the size and orbit of the satellite systems of large TNOs are best explained if they formed from impacts of molten progenitors. They also found that TNOs that are big enough can retain internal heat and remain molten for a span of only a few million years, especially if their internal heat source is short-lived radioactive isotopes such as aluminum-26, which has also been implicated in the internal heating of the parent bodies of meteorites. Since these progenitors would need to have a high short-lived radionuclide content in order to be molten, these results suggest that TNO-satellite systems formed before the outward migration of the outer planets, including Neptune, or in the first ~700 million years of Solar System history.
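The "only a few million years" window follows directly from radioactive decay arithmetic: aluminum-26 has a half-life of roughly 0.72 million years, so its heating power fades quickly. A back-of-the-envelope check:

```python
# How fast does aluminum-26 heating die away? Simple exponential decay.
from math import exp, log

HALF_LIFE_MYR = 0.72                 # Al-26 half-life, millions of years
decay_const = log(2) / HALF_LIFE_MYR

def fraction_remaining(t_myr):
    """Fraction of the initial Al-26 (and its heating power) left after t Myr."""
    return exp(-decay_const * t_myr)

# After ~5 Myr, well under 1% of the original Al-26 remains, so a progenitor
# heated mainly by Al-26 can stay molten only briefly.
assert fraction_remaining(5) < 0.01
```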
Read more at Science Daily
Damage to the ozone layer and climate change forming feedback loop
Kelp forest with a school of fish.
"What we're seeing is that ozone changes have shifted temperature and precipitation patterns in the southern hemisphere, and that's altering where the algae in the ocean are, which is altering where the fish are, and where the walruses and seals are, so we're seeing many changes in the food web," said Kevin Rose, a researcher at Rensselaer Polytechnic Institute who serves on the panel and is a co-author of the review article.
The 1987 Montreal Protocol on Substances that Deplete the Ozone Layer -- the first multilateral environmental agreement to be ratified by all member nations of the United Nations -- was designed to protect Earth's main filter for solar ultraviolet radiation by phasing out production of harmful human-made substances, such as the chlorofluorocarbon class of refrigerants. The treaty has largely been considered a success, with global mean total ozone projected to recover to pre-1980 levels by the middle of the 21st century. Earlier this year, however, researchers reported detecting new emissions of ozone-depleting substances emanating from East Asia, which could threaten ozone recovery.
While ozone depletion has long been known to increase harmful UV radiation at the Earth's surface, its effect on climate has only recently become evident. The report points to the Southern Hemisphere, where a hole in the ozone layer above Antarctica has pushed the Antarctic Oscillation -- the north-south movement of a wind belt that circles the Southern Hemisphere -- further south than it has been in roughly a thousand years. The movement of the Antarctic Oscillation is in turn directly contributing to climate change in the Southern Hemisphere.
As climate zones have shifted southward, rainfall patterns, sea-surface temperatures, and ocean currents across large areas of the southern hemisphere have also shifted, impacting terrestrial and aquatic ecosystems. The effects can be seen in Australia, New Zealand, Antarctica, South America, Africa, and the Southern Ocean.
In the oceans, for example, some areas have become cooler and more productive, while other areas have become warmer and less productive.
Warmer oceans are linked to declines in Tasmanian kelp beds and Brazilian coral reefs, and in the ecosystems that rely on them. Cooler waters have benefitted some populations of penguins, seabirds, and seals, which profit from larger populations of krill and fish. One study reported that female albatrosses may have become a kilogram heavier in certain areas because of the more productive cooler waters linked to ozone depletion.
Rose also pointed to subtler feedback loops between climate and UV radiation described in the report. For example, higher concentrations of carbon dioxide have led to more acidic oceans, which reduces the thickness of calcified shells, rendering shellfish more vulnerable to UV radiation. Even humans, he said, are likely to wear lighter clothes in a warmer atmosphere, making themselves more susceptible to damaging UV rays.
The report found that climate change may also be affecting the ozone layer and slowing its recovery.
"Greenhouse gas emissions trap more heat in the lower atmosphere which leads to a cooling of the upper atmosphere. Those colder temperatures in the upper atmosphere are slowing the recovery of the ozone layer," Rose said.
As one of three scientific panels to support the Montreal Protocol, the Environmental Effects Assessment Panel focused in particular on the effects of UV radiation, climate change, and ozone depletion. Thirty-nine researchers contributed to the article, which is titled "Ozone depletion, ultraviolet radiation, climate change and prospects for a sustainable future." Rose, an aquatic ecologist, serves on the aquatic ecosystems working group, which is one of seven working groups that are part of the panel.
Read more at Science Daily
How octopus arms make decisions
Octopus.
A new model being presented in Bellevue, Washington State, is the first attempt at a comprehensive representation of information flow between the octopus's suckers, arms and brain, based on previous research in octopus neuroscience and behavior, and new video observations conducted in the lab.
The new research supports previous findings that octopus' suckers can initiate action in response to information they acquire from their environment, coordinating with neighboring suckers along the arm. The arms then process sensory and motor information, and muster collective action in the peripheral nervous system, without waiting on commands from the brain.
The result is a bottom-up, or arm-up, decision mechanism rather than the brain-down mechanism typical of vertebrates, like humans, according to Dominic Sivitilli, a graduate student in behavioral neuroscience and astrobiology at the University of Washington in Seattle who will present the new research Wednesday at the 2019 Astrobiology Science Conference (AbSciCon 2019).
The researchers ultimately want to use their model to understand how decisions made locally in the arms fit into the context of complex behaviors like hunting, which also require direction from the brain.
"One of the big picture questions we have is just how a distributed nervous system would work, especially when it's trying to do something complicated, like move through fluid and find food on a complex ocean floor. There are a lot of open questions about how these nodes in the nervous system are connected to each other," said David Gire, a neuroscientist at the University of Washington and Sivitilli's advisor for the project.
Long an inspiration for science-fictional, tentacled aliens from outer space, the octopus may be as alien an intelligence as we can meet on Earth, Sivitilli said. He believes understanding how the octopus perceives its world is as close as we can come to preparing to meet intelligent life beyond our planet.
"It's an alternative model for intelligence," Sivitilli said. "It gives us an understanding as to the diversity of cognition in the world, and perhaps the universe."
The octopus exhibits many behaviors similar to those of vertebrates, like humans, but its nervous system architecture is fundamentally different, because it evolved independently after vertebrates and invertebrates parted evolutionary ways, more than 500 million years ago.
Vertebrates arranged their central nervous system in a cord up the backbone, leading to highly centralized processing in the brain. Cephalopods, like the octopus, evolved multiple concentrations of neurons called ganglia, arranged in a distributed network throughout the body. Some of these ganglia grew more dominant, evolving into a brain, but the underlying distributed architecture persists in the octopus's arms, and throughout its body.
"The octopus' arms have a neural ring that bypasses the brain, and so the arms can send information to each other without the brain being aware of it," Sivitilli said. "So while the brain isn't quite sure where the arms are in space, the arms know where each other are and this allows the arms to coordinate during actions like crawling locomotion."
Of the octopus' 500 million neurons, more than 350 million are in its eight arms. The arms need all that processing power to manage incoming sensory information, to move and to keep track of their position in space. Processing information in the arms allows the octopus to think and react faster, like parallel processors in computers.
Sivitilli works with the largest octopus in the world, the Giant Pacific octopus, as well as the smaller East Pacific red, or ruby, octopus. Both species are native to Puget Sound off Seattle's coast and the Salish Sea, and have learning and problem-solving capabilities analogous to those studied in crows, parrots and primates.
To entertain the octopuses and study their movements, Sivitilli and his colleagues gave the octopuses interesting, new objects to investigate, like cinder blocks, textured rocks, Legos and elaborate mazes with food inside. His research group is looking for patterns that reveal how the octopus' nervous system delegates among the arms as the animal approaches a task or reacts to new stimuli, looking for clues to which movements are directed by the brain and which are managed from the arms.
Sivitilli employed a camera and a computer program to observe the octopus as it explored objects in its tank and looked for food. The program quantifies movements of the arms, tracking how the arms work together in synchrony, suggesting direction from the brain, or asynchronously, suggesting independent decision-making in each appendage.
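The tracking software itself is not described in detail, but the synchrony measure it relies on can be sketched. A hypothetical version, assuming each arm's movement is reduced to a speed time series: score pairs of arms with a Pearson correlation, where values near 1 suggest centrally coordinated motion and values near 0 suggest independent, arm-level control.

```python
import numpy as np

# Hypothetical sketch (not the lab's actual pipeline): quantify synchrony
# between arms' movement traces with a Pearson correlation.
rng = np.random.default_rng(0)

t = np.linspace(0, 10, 500)
shared_drive = np.sin(2 * np.pi * 0.5 * t)           # brain-driven component
arm_a = shared_drive + 0.3 * rng.standard_normal(t.size)
arm_b = shared_drive + 0.3 * rng.standard_normal(t.size)
arm_c = rng.standard_normal(t.size)                   # independently moving arm

def synchrony(x, y):
    """Pearson correlation between two movement traces."""
    return float(np.corrcoef(x, y)[0, 1])

print(f"arm A vs arm B: {synchrony(arm_a, arm_b):.2f}")  # high: shared drive
print(f"arm A vs arm C: {synchrony(arm_a, arm_c):.2f}")  # near zero: independent
```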
Read more at Science Daily
Jun 24, 2019
How to bend waves to arrive at the right place
In free space, the light wave of a laser beam propagates on a perfectly straight line. Under certain circumstances, however, the behavior of a wave can be much more complicated. In the presence of a disordered, irregular environment, a very strange phenomenon occurs: an incoming wave splits into several paths, branching in a complicated way and reaching some places with high intensity while avoiding others almost completely.
This kind of "branched flow" was first observed in 2001. Scientists at TU Wien (Vienna) have now developed a method to exploit this effect. The core idea of this new approach is to send a wave signal exclusively along one single pre-selected branch, such that the wave is hardly noticeable anywhere else. The results have now been published in the journal PNAS.
From Quantum Particles to Tsunamis
"Originally, this effect was discovered when studying electrons moving as quantum waves through tiny microstructures," says Prof. Stefan Rotter from the Institute of Theoretical Physics at TU Wien. "Such structures, however, are never perfect and they always come with certain imperfections; and surprisingly, these imperfections cause the electron wave to split up into branches -- an effect which is called branched flow."
Soon it turned out that this wave phenomenon does not only occur in quantum physics. In principle it can occur with all types of waves and on completely different length scales. If, for example, a laser beam is sent onto the surface of a soap bubble, it splits into several partial beams. Tsunami waves in the ocean behave the same way: they do not spread regularly across the ocean, but instead travel in a complicated, branched pattern that depends on the random shape of the corrugated ocean sea bed. As a result, it can happen that a distant island is hit very hard by a tsunami, while the neighboring island is only reached by much weaker wave fronts.
"We wanted to know whether these waves can be manipulated in such a way that they only travel along one single selected branch, instead of propagating along a whole branched network of paths in completely different directions," says Andre Brandstötter (TU Wien), first author of the publication. "And as it turns out, it is indeed possible to target individual branches in a controlled way."
Analyze and Adapt
The new procedure takes only two steps: First, the wave is allowed to branch out on all possible paths as usual. At one of the locations that are reached with high intensity, the wave is measured in detail. The method developed at TU Wien can then be used to calculate how the wave has to be shaped at the origin, so that in the second step it can be sent along one selected path, while avoiding all other paths.
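The two-step idea of "measure the response, then shape the input" has a simple toy analogue (this is an illustration, not the authors' actual algorithm): model the disordered medium as a random transmission matrix, read off the row that couples inputs to one chosen output channel, and send in the phase conjugate of that row so the wave concentrates on that channel.

```python
import numpy as np

# Toy analogue of the two-step wave-shaping procedure.
# The disordered medium is modeled as a random complex transmission matrix T.
rng = np.random.default_rng(1)
n = 64  # number of input/output channels

T = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

target = 10  # index of the output channel (the "selected branch")

# Step 1: characterize how the medium couples inputs to the target output.
row = T[target, :]

# Step 2: shape the input as the normalized phase conjugate of that row,
# so the contributions add up in phase at the target channel only.
wave_in = row.conj() / np.linalg.norm(row)
wave_out = T @ wave_in

intensity = np.abs(wave_out) ** 2
print(f"fraction of power on target channel: {intensity[target] / intensity.sum():.2f}")
```

With 64 channels, an unshaped input would put only about 1/64 of the power on any one channel; the phase-conjugate input makes the target channel by far the brightest.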
"We used numerical simulations to show how to find a wave that behaves exactly the way we want it to. This approach can be applied using a variety of different methods," says Stefan Rotter. "You can implement it with light waves that are adjusted with special mirror systems or with sound waves that you generate with a system of coupled loudspeakers. Sonar waves in the ocean would also be a possible field of application. In any case, the necessary technologies are already available."
With this new method, all these different types of waves could be sent out along a single trajectory pre-selected from a complex network of paths. "This trajectory doesn't even have to be straight," explains Andre Brandstötter. "Many of the possible paths are curved -- the irregularities of the surroundings act like a set of lenses by which the wave is focused and deflected again and again."
Read more at Science Daily
Visible light from 2D lead halide perovskites explained
Researchers drew attention three years ago when they reported that a two-dimensional perovskite -- a material with a specific crystal structure -- composed of cesium, lead and bromine emitted a strong green light. Crystals that produce light on the green spectrum are desirable because green light, while valuable in itself, can also be relatively easily converted to other forms that emit blue or red light, making it especially important for optical applications ranging from light-emitting devices to sensitive diagnostic tools.
But there was no agreement about how the crystal, CsPb2Br5, produced the green photoluminescence. Several theories emerged, without a definitive answer.
Now, however, researchers from the United States, Mexico and China, led by an electrical engineer from the University of Houston, have reported in the journal Advanced Materials they have used sophisticated optical and high-pressure diamond anvil cell techniques to determine not only the mechanism for the light emission but also how to replicate it.
They initially synthesized CsPb2Br5 from a related material known as CsPbBr3 and found that the root cause of the light emission is a small overgrowth of nanocrystals, composed of that original material, growing along the edge of the CsPb2Br5 crystals. While CsPbBr3, the base crystal, is three-dimensional and appears green under ultraviolet light, the new material, CsPb2Br5, has a layered structure and is optically inactive.
"Now that the mechanism for emitting this light is understood, it can be replicated," said Jiming Bao, associate professor of electrical and computer engineering at UH and corresponding author on the paper. "Both crystals have the same chemical composition, much like diamond versus graphite, but they have very different optical and electronic properties. People will be able to integrate the two materials to make better devices."
Potential applications range from solar cells to LED lighting and other electronic devices.
Bao began working on the problem in 2016, a project that ultimately involved 19 researchers from UH and institutions in China and Mexico. At the time, there were two schools of scientific thought on the light emission from the cesium crystal: that it emitted green light due to a defect, mainly a lack of bromine, rather than the material itself, or that a variation had unintentionally been introduced, resulting in the emission.
His group started with the synthesis of a clean sample by dropping CsPbBr3 powder in water, resulting in sharper-edged crystals. The sharper edges emitted a stronger green light, Bao said.
The researchers then used an optical microscope to study the individual crystals of the compound, which Bao said allowed them to determine that although the compound is transparent, "something was going on at the edge, resulting in the photoluminescence."
They relied on Raman spectroscopy -- an optical technique that uses information about how light interacts with a material to determine the material's lattice properties -- to identify nanocrystals of the original source material, CsPbBr3, along the edges of the crystal as the source of the light.
Bao said CsPbBr3 is too unstable to use on its own, but the stability of the converted form isn't hampered by the small amount of the original crystal.
Read more at Science Daily
The solution to antibiotic resistance could be in your kitchen sponge
Researchers from the New York Institute of Technology (NYIT) have discovered bacteriophages, viruses that infect bacteria, living in their kitchen sponges. As the threat of antibiotic resistance increases, bacteriophages, or phages for short, may prove useful in fighting bacteria that cannot be killed by antibiotics alone. The research is presented at ASM Microbe, the annual meeting of the American Society for Microbiology.
A kitchen sponge is exposed to all kinds of different microbes, which form a vast microbiome of bacteria. Phages are the most abundant biological particles on the planet and are typically found wherever bacteria reside. With this understanding, kitchen sponges seemed a likely place to find them.
Students in a research class isolated bacteria from their own used kitchen sponges and then used the bacteria as bait to find phages that could attack them. Two students successfully discovered phages that infect bacteria living in their kitchen sponges. "Our study illustrates the value in searching any microbial environment that could harbor potentially useful phages," said Brianna Weiss, a Life Sciences student at New York Institute of Technology.
The researchers decided to "swap" these two phages and see if they could cross-infect the other person's isolated bacteria. The swapped phages did indeed kill the other's bacteria. "This led us to wonder if the bacteria strains were coincidentally the same, even though they came from two different sponges," said Weiss.
The researchers compared the DNA of both isolated strains of bacteria and discovered that they were both members of the Enterobacteriaceae family, a group of rod-shaped microbes commonly found in feces, some of which cause infections in hospital settings. Although the strains are closely related, biochemical testing revealed chemical variations between them.
"These differences are important in understanding the range of bacteria that a phage can infect, which is also key to determining its ability to treat specific antibiotic-resistant infections," said Weiss. "Continuing our work, we hope to isolate and characterize more phages that can infect bacteria from a variety of microbial ecosystems, where some of these phages might be used to treat antibiotic-resistant bacterial infections."
Read more at Science Daily
Helping the body's ability to grow bone
For the first time, scientists have been able to study how well synthetic bone grafts stand up to the rigors and 'strains' of life, and how quickly they help bone re-grow and repair.
Researchers led by Dr Gianluca Tozzi, at the University of Portsmouth, are the first to examine the strains between bone and graft from animal models in 3D and in microscopic detail.
Dr Tozzi hopes this window on to living bone grafts will help scientists find ways to improve the body's ability to regrow its own bone, and give surgeons a better chance of predicting the success of a synthetic graft.
He said: "Every three seconds a person breaks a bone due to increased bone fragility. Fragile bones break easily and are also more difficult to repair, particularly when the defect area is extended. It's vital we understand what is happening where bone meets graft so we can better engineer sophisticated replacement materials.
"Bones are very complex biological tissues and a synthetic bone substitute needs to have specific requirements to allow blood supply and encourage new bone growth.
"In this sense, the new generation of synthetic grafts have the potential to be resorbed by the body in time, allowing gradual bone regeneration in the defect site. However, biomaterials that degrade too quickly don't allow enough time for the new bone to grow, and grafts that degrade too slowly can cause mechanical instability to the implantation site. It's important to get it right."
Millions of people a year in the UK are given a bone graft. They're commonly used in the spine, hip, knee and ankle. Their role is to bridge gaps in a broken bone that are too large for the bone to close on its own. They're also used in dental implants, to help teeth attach to the jawbone.
Some grafts can be made using a fragment of the patient's own bone or bone from other sources, but this is more invasive and can cause adverse reactions. It's therefore becoming increasingly common for grafts to be made of synthetic materials, including glass, ceramics and even, in very small joints, plaster of Paris.
Dr Tozzi and colleagues have been using synchrotron X-ray computed tomography (SR-XCT) at the Diamond Light Source and in lab-based systems at the Zeiss Global Centre at the University of Portsmouth to better understand the performance of graft materials and their ability to promote bone healing.
In a recently published study in ACS Biomaterials Science & Engineering, they examined the micromechanics and microdamage evolution of four different bone-biomaterial systems combining high-resolution synchrotron tomography, in situ mechanics and digital volume correlation.
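Digital volume correlation, the last of the techniques named above, can be illustrated with a toy version (this is not the study's pipeline): recover the rigid displacement between a reference volume and a "deformed" volume by locating the peak of their FFT-based cross-correlation.

```python
import numpy as np

# Toy digital volume correlation (DVC): find the voxel shift between two
# 3D volumes from the peak of their circular cross-correlation.
rng = np.random.default_rng(2)

ref = rng.standard_normal((32, 32, 32))       # stands in for a CT subvolume
shift = (3, -2, 5)                            # known displacement in voxels
moved = np.roll(ref, shift, axis=(0, 1, 2))   # "deformed" volume

# Cross-correlation via FFT; the peak location gives the displacement.
corr = np.fft.ifftn(np.fft.fftn(moved) * np.conj(np.fft.fftn(ref))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Wrap indices beyond half the box size back to negative shifts.
recovered = tuple(int(p - s) if p > s // 2 else int(p)
                  for p, s in zip(peak, corr.shape))
print("recovered shift:", recovered)  # (3, -2, 5)
```

Real DVC does this per subvolume and with subvoxel interpolation, building a full displacement field from which strains are computed; the FFT correlation step above is the core idea.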
Dr Tozzi said: "It's essential we are able to look at the interface between bone and graft and judge their load-bearing capability in order to understand both biological integration and structural integrity of the intervention.
Read more at Science Daily
Researchers led by Dr Gianluca Tozzi, at the University of Portsmouth, are the first to examine the strains between bone and graft from animal models in 3D and in microscopic detail.
Dr Tozzi hopes this window on to living bone grafts will help scientists find ways to improve the body's ability to regrow its own bone, and more chance surgeons can predict the success of a synthetic graft.
He said: "Every three seconds a person breaks a bone due to increased bone fragility. Fragile bones break easily and are also more difficult to repair, particularly when the defect area is extended. It's vital we understand what is happening where bone meets graft so we can better engineer sophisticated replacement materials.
"Bones are very complex biological tissues and a synthetic bone substitute needs to have specific requirements to allow blood supply and encourage new bone growth.
"In this sense, the new generation of synthetic grafts have the potential to be resorbed by the body in time, allowing gradual bone regeneration in the defect site. However, biomaterials that degrade too quickly don't allow enough time for the new bone to grow, and grafts that degrade too slowly can cause mechanical instability to the implantation site. It's important to get it right."
Millions of people a year in the UK are given a bone graft. They're commonly used in the spine, hip, knee and ankle. Their role is to bridge gaps in a broken bone that are too large for the bone to close on its own. They're also used in dental implants, to help teeth attach to the jawbone.
Some grafts can be made using a fragment of the patient's own bone or other sources, but this is more invasive and can cause adverse reactions. Therefore, it's becoming increasingly common for grafts to be made from synthetic materials, including glass, ceramics and even, in very small joints, plaster of Paris.
Dr Tozzi and colleagues have been using synchrotron X-ray computed tomography (SR-XCT) at the Diamond Light Source, as well as lab-based systems at the Zeiss Global Centre at the University of Portsmouth, to better understand the performance of graft materials and their ability to promote bone healing.
In a recently published study in ACS Biomaterials Science & Engineering, they examined the micromechanics and microdamage evolution of four different bone-biomaterial systems by combining high-resolution synchrotron tomography, in situ mechanics and digital volume correlation.
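Digital volume correlation, the technique mentioned above, estimates how material inside a 3D image has moved between two tomography scans (for example, before and after loading) by matching subvolumes of the reference scan against the deformed scan. The study's own pipeline is not described here; the following is only a minimal, self-contained sketch of the core matching step, using 3D phase correlation on synthetic NumPy data to recover a known voxel shift:

```python
import numpy as np

def subvolume_displacement(ref, deformed):
    """Estimate the integer-voxel shift between two 3D subvolumes
    via phase correlation (normalised cross-power spectrum)."""
    F_ref = np.fft.fftn(ref)
    F_def = np.fft.fftn(deformed)
    cross = F_def * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12  # keep phase only
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic test: a random volume shifted by a known amount
rng = np.random.default_rng(0)
ref = rng.random((32, 32, 32))
deformed = np.roll(ref, shift=(2, 0, 1), axis=(0, 1, 2))
print(subvolume_displacement(ref, deformed))  # → (2, 0, 1)
```

In practice, DVC software repeats this matching over a grid of subvolumes and differentiates the resulting displacement field to obtain the strains between bone and graft; real implementations also handle sub-voxel shifts and local deformation, which this sketch omits.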
Dr Tozzi said: "It's essential we are able to look at the interface between bone and graft and judge their load-bearing capability in order to understand both biological integration and structural integrity of the intervention.
Read more at Science Daily