The clinical presentation of Covid-19 varies from patient to patient, and understanding individual genetic susceptibility to the disease is therefore vital to prognosis, prevention, and the development of new treatments. For the first time, Italian scientists have been able to identify the genetic and molecular basis of this susceptibility to infection, as well as of the likelihood of contracting a more severe form of the disease. The research will be presented at the 53rd annual conference of the European Society of Human Genetics, being held entirely online due to the Covid-19 pandemic, today [Saturday].
Professor Alessandra Renieri, Director of the Medical Genetics Unit at the University Hospital of Siena, Italy, will describe her team's GEN-COVID project to collect genomic samples from Covid patients across the whole of Italy in order to try to identify the genetic bases of the high level of clinical variability they showed. Using whole exome sequencing (WES) to study the first data from 130 Covid patients from Siena and other Tuscan institutions, they were able to uncover a number of common susceptibility genes that were linked to a favourable or unfavourable outcome of infection. "We believe that variations in these genes may determine disease progression," says Prof Renieri. "To our knowledge, this is the first report on the results of WES in Covid-19."
Searching for genes shared by affected patients, compared against a control group, yielded statistically significant results for only a few genes. So the researchers decided to treat each patient as an independent case, following the example of autism spectrum disorder. "In this way we were able to identify for each patient an average of three pathogenic (disease-causing) mutations involved in susceptibility to Covid infection," says Prof Renieri. "This result was not unexpected, since we already knew from studies of twins that Covid-19 has a strong genetic basis."
Although presentation of Covid is different in each individual, this does not rule out the possibility of the same treatment being effective in many cases. "The model we are proposing includes common genes and our results point to some of them. For example, ACE2 remains one of the major targets. All our Covid patients have an intact ACE2 protein, and the biological pathway involving this gene remains a major focus for drug development," says Prof Renieri. ACE2 is an enzyme attached to the outer surface of several organs, including the lungs, that lowers blood pressure. It serves as an entry point for some coronaviruses, including SARS-CoV-2, the virus that causes Covid-19.
These results will have significant implications for health and healthcare policy. Understanding the genetic profile of patients may allow the repurposing of existing medicines for specific therapeutic approaches against Covid-19 as well as speeding the development of new antiviral drugs. Being able to identify patients susceptible to severe pneumonia and their responsiveness to specific drugs will allow rapid public health treatment interventions. And future research will be aided, too, by the development of a Covid Biobank accessible to academic and industry partners.
The researchers will now analyze a further 2,000 samples from other Italian regions, specifically from the 35 Italian hospitals belonging to the GEN-COVID project.
"Our data, although preliminary, are promising, and now we plan to validate them in a wider population," says Prof Renieri. "Going beyond our specific results, the outcome of our study underlines the need for a new method to fully assess the basis of one of the more complex genetic traits, with an environmental causation (the virus), but a high rate of heritability. We need to develop new mathematical models using artificial intelligence in order to be able to understand the complexity of this trait, which is derived from a combination of common and rare genetic factors.
"We have developed this approach in collaboration with the Siena Artificial Intelligence Lab, and now intend to compare it with classical genome-wide association studies in the context of the Covid-19 Host Genetics Initiative, which brings together the human genetics community to generate, share, and analyse data to learn the genetic determinants of COVID-19 susceptibility, severity, and outcomes. As a research community, we need to do everything we can to help public health interventions move forward at this time."
Read more at Science Daily
Jun 6, 2020
Human activity threatens vertebrate evolutionary history
[Image: Aye-aye]
Research for the study was led by Dr. Rikki Gumbs of the EDGE of Existence Programme at the Zoological Society of London and Imperial College London and Dr. James Rosindell of Imperial College London in collaboration with Prof. Shai Meiri of the School of Zoology at Tel Aviv University's George S. Wise Faculty of Life Sciences and Steinhardt Museum of Natural History and other colleagues. The study was published in Nature Communications on May 26.
"Being 'evolutionarily distinct' means that you have no close living relatives," explains Prof. Meiri, who generated and interpreted the reptile-related data for the study. "In other words, you are alone on your branch of the evolutionary tree of life. Aardvarks, crocodiles, and kiwis were all separated from their closest evolutionary relatives tens of millions of years ago and bear a unique evolutionary history.
"The new research will provide a clear understanding of how best to protect nature given the current threats to specific locations and endangered species."
The researchers developed two new metrics that combine phylogenetic diversity and the extent of human pressure across the spatial distribution of species -- one metric valuing regions and another prioritizing species. They evaluated these metrics for reptiles, which have been largely neglected in previous studies, and contrasted these results with equivalent calculations for all terrestrial vertebrate groups. The researchers found that regions under high human pressure coincided with those containing irreplaceable reptilian diversity.
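The study's exact formulation is not reproduced here, but the general shape of such a species-level metric, weighting a species' unique evolutionary history by the human pressure across its range, can be sketched as follows. The species names and numbers below are purely illustrative, not values from the paper:

```python
# Toy prioritization sketch (not the paper's actual metric): score each
# species by its evolutionary distinctness weighted by mean human pressure
# over its range. All values here are made up for illustration.
species = {
    # name: (evolutionary distinctness in Myr, mean human-pressure score 0-1)
    "aye-aye":   (40.0, 0.6),
    "shoebill":  (35.0, 0.5),
    "brown_rat": (2.0, 0.9),
}

def priority(distinctness, pressure):
    # High distinctness under high pressure means more irreplaceable
    # evolutionary history is at risk, so the species ranks higher.
    return distinctness * pressure

ranked = sorted(species, key=lambda name: priority(*species[name]), reverse=True)
print(ranked)
```

Under this toy weighting, the highly distinct aye-aye outranks the recently diverged rat even though the rat's range is under heavier pressure, which is the intuition behind combining the two quantities.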
"Our analyses reveal the incomprehensible scale of the losses we face if we don't work harder to save global biodiversity," says Dr. Gumbs, the lead author on the paper. "To put some of the numbers into perspective, reptiles alone stand to lose at least 13 billion years of unique evolutionary history, roughly the same number of years as have passed since the beginning of the entire universe."
Using extinction-risk data for around 25,000 species, the researchers found at least 50 billion years of evolutionary heritage to be under threat, as well as a large number of potentially threatened species for which we lack adequate extinction risk data. This suggests that the calculation underestimates the number of species that may be affected.
According to the study's calculations, the Caribbean, the Western Ghats of India, and large parts of Southeast Asia -- regions that are home to the most unique evolutionary history -- are facing unprecedented levels of human-related devastation.
"This new study highlights which species should be prioritized for conservation, based on their evolutionary uniqueness and the intense human impact on environments where they are thought to dwell," Prof. Meiri says.
According to the research, the greatest losses of evolutionary history will be driven by the extinction of entire groups of closely-related species, such as pangolins and tapirs, and by the loss of highly evolutionarily distinct species, such as the ancient Chinese crocodile lizard (Shinisaurus crocodilurus); the Shoebill (Balaeniceps rex), a gigantic bird that stalks the wetlands of Africa; and the Aye-aye (Daubentonia madagascariensis), a nocturnal lemur with large yellow eyes and long spindly fingers.
The study highlights several unusual species as urgent conservation priorities, including the punk-haired Mary River turtle (Elusor macrurus), the Purple frog (Nasikabatrachus sahyadrensis), and the Numbat (Myrmecobius fasciatus). It also highlights many lesser-known species, about which scientists currently understand little, as priorities for further research. Adequate extinction risk data is currently lacking for more than half of the priority lizards and snakes identified.
"These are some of the most incredible and overlooked animals on Planet Earth," says Dr. Gumbs. "From legless lizards and tiny blind snakes to pink worm-like amphibians called caecilians, we know precious little about these fascinating creatures, many of which may be sliding silently toward extinction."
The study also identifies regions where concentrations of irreplaceable diversity are currently under little to no human pressure, particularly across the Amazon rainforest, the highlands of Borneo, and parts of southern Africa.
Read more at Science Daily
Jun 4, 2020
New test of dark energy and expansion from cosmic structures
[Image: Starry night sky]
The study uses a new method based on a combination of cosmic voids -- large expanding bubbles of space containing very few galaxies -- and the faint imprint of sound waves in the very early Universe, known as baryon acoustic oscillations (BAO), that can be seen in the distribution of galaxies. This provides a precise ruler to measure the direct effects of dark energy driving the accelerated expansion of the Universe.
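As a hedged sketch of how the BAO "ruler" is used in general (these are the standard relations, not the paper's specific void-based analysis): the sound-horizon scale $r_d$ imprinted in the early Universe subtends a known angle across the line of sight and a known redshift interval along it,

```latex
\theta_{\mathrm{BAO}}(z) = \frac{r_d}{D_M(z)},
\qquad
\Delta z_{\mathrm{BAO}}(z) = \frac{r_d \, H(z)}{c},
```

where $D_M(z)$ is the comoving angular diameter distance and $H(z)$ the expansion rate at redshift $z$. Measuring the BAO feature at many redshifts therefore constrains $D_M(z)$ and $H(z)$, and through them the dark energy driving the acceleration.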
This new method gives much more precise results than the technique based on the observation of exploding massive stars, or supernovae, which has long been the standard method for measuring the direct effects of dark energy.
The research was led by the University of Portsmouth, and is published in Physical Review Letters.
The study makes use of data from over a million galaxies and quasars gathered over more than a decade of operations by the Sloan Digital Sky Survey.
The results confirm the model of a cosmological constant dark energy and spatially flat Universe to unprecedented accuracy, and strongly disfavour recent suggestions of positive spatial curvature inferred from measurements of the cosmic microwave background (CMB) by the Planck satellite.
Lead author Dr Seshadri Nadathur, research fellow at the University's Institute of Cosmology and Gravitation (ICG), said: "This result shows the power of galaxy surveys to pin down the amount of dark energy and how it evolved over the last billion years. We're making really precise measurements now and the data is going to get even better with new surveys coming online very soon."
Dr Florian Beutler, a senior research fellow at the ICG, who was also involved in the work, said that the study also reported a new precise measurement of the Hubble constant, the value of which has recently been the subject of intense debate among astronomers.
He said: "We see tentative evidence that data from relatively nearby voids and BAO favour the high Hubble rate seen from other low-redshift methods, but including data from more distant quasar absorption lines brings it in better agreement with the value inferred from Planck CMB data."
From Science Daily
Discovery of ancient super-eruptions indicates the Yellowstone hotspot may be waning
[Image: Grand Prismatic Spring in Yellowstone National Park]
Now, in a study published in Geology, researchers have announced the discovery of two newly identified super-eruptions associated with the Yellowstone hotspot track, including what they believe was the volcanic province's largest and most cataclysmic event. The results indicate the hotspot, which today fuels the famous geysers, mudpots, and fumaroles in Yellowstone National Park, may be waning in intensity.
The team used a combination of techniques, including bulk chemistry, magnetic data, and radio-isotopic dates, to correlate volcanic deposits scattered across tens of thousands of square kilometers. "We discovered that deposits previously believed to belong to multiple, smaller eruptions were in fact colossal sheets of volcanic material from two previously unknown super-eruptions at about 9.0 and 8.7 million years ago," says Thomas Knott, a volcanologist at the University of Leicester and the paper's lead author.
"The younger of the two, the Grey's Landing super-eruption, is now the largest recorded event of the entire Snake-River-Yellowstone volcanic province," says Knott. Based on the most recent collations of super-eruption sizes, he adds, "It is one of the top five eruptions of all time."
The team, which also includes researchers from the British Geological Survey and the University of California, Santa Cruz, estimates the Grey's Landing super-eruption was 30% larger than the previous record-holder (the well-known Huckleberry Ridge Tuff) and had devastating local and global effects. "The Grey's Landing eruption enamelled an area the size of New Jersey in searing-hot volcanic glass that instantly sterilized the land surface," says Knott. Anything located within this region, he says, would have been buried and most likely vaporized during the eruption. "Particulates would have choked the stratosphere," adds Knott, "raining fine ash over the entire United States and gradually encompassing the globe."
Both of the newly discovered super-eruptions occurred during the Miocene, the interval of geologic time spanning 23-5.3 million years ago. "These two new eruptions bring the total number of recorded Miocene super-eruptions at the Yellowstone-Snake River volcanic province to six," says Knott. This means that the recurrence rate of Yellowstone hotspot super-eruptions during the Miocene was, on average, once every 500,000 years.
By comparison, Knott says, two super-eruptions have -- so far -- taken place in what is now Yellowstone National Park during the past three million years. "It therefore seems that the Yellowstone hotspot has experienced a three-fold decrease in its capacity to produce super-eruption events," says Knott. "This is a very significant decline."
These findings, says Knott, have little bearing on assessing the risk of another super-eruption occurring today in Yellowstone. "We have demonstrated that the recurrence rate of Yellowstone super-eruptions appears to be once every 1.5 million years," he says. "The last super-eruption there was 630,000 years ago, suggesting we may have up to 900,000 years before another eruption of this scale occurs." But this estimate, Knott hastens to add, is far from exact, and he emphasizes that continuous monitoring in the region, which is being conducted by the U.S. Geological Survey, "is a must" and that warnings of any uptick in activity would be issued well in advance.
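The recurrence arithmetic quoted above can be checked directly. This is only a back-of-the-envelope sketch using the figures given in the text, not the study's statistical treatment:

```python
# Back-of-the-envelope check of the recurrence figures quoted in the text.
MYR = 1_000_000

# Miocene: six super-eruptions, on average one every 500,000 years.
miocene_interval = 0.5 * MYR

# Past three million years: two super-eruptions in the Yellowstone area,
# i.e. roughly one every 1.5 million years.
modern_interval = 3 * MYR / 2

decline = modern_interval / miocene_interval  # the quoted "three-fold decrease"

# Last super-eruption was 630,000 years ago; naive time remaining before the
# next event at the modern recurrence rate ("up to 900,000 years").
remaining = modern_interval - 630_000

print(decline, remaining)
```

As Knott stresses, this kind of average-rate estimate is far from exact; the point of the sketch is only to show where the quoted numbers come from.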
This study, which builds on decades of contributions by many other researchers, grew out of a larger project investigating the productivity of major continental volcanic provinces. Those with super-eruptions are the result of colossal degrees of crustal melting over prolonged periods of time, says Knott, and therefore have a profound impact on the structure and composition of Earth's crust in the regions where they occur.
Read more at Science Daily
Researchers document the first use of maize in Mesoamerica
[Image: Corn varieties]
Almost any grocery store is filled with products made from corn, also known as maize, in every aisle: fresh corn, canned corn, corn cereal, taco shells, tortilla chips, popcorn, corn sweeteners in hundreds of products, corn fillers in pet food, in soaps and cosmetics, and the list goes on.
Maize is perhaps the most important plant ever domesticated by people, topping 1 billion tonnes produced in 2019, double that of rice, according to University of New Mexico Anthropology professor Keith Prufer, Principal Investigator of a team that just released new research that sheds light on when people started eating maize.
Recently published research from his team in the journal Science Advances reveals new information about when the now-ubiquitous maize became a key part of people's diets. Until now, little was known about when humans living in the tropics of Central America first started eating corn. But the "unparalleled" discovery of remarkably well-preserved ancient human skeletons in Central American rock shelters has revealed when corn became a key part of people's diet in the Americas.
"Today, much of the popularity of maize has to do with its high carbohydrate and protein value in animal feed, and its sugar content, which makes it the preferred ingredient of many processed foods, including sugary drinks. Traditionally it has also been used as a fermented drink in Mesoamerica. Given its humble beginnings 9,000 years ago in Mexico, understanding how it came to be the most dominant plant in the world benefits from deciphering what attracted people to this crop to begin with. Our paper is the first direct measure of the adoption of maize as a dietary staple in humans," Prufer observed.
Prufer said the international team of researchers led by UNM and University of California, Santa Barbara is investigating the earliest humans in Central America and how they adapted over time to new and changing environments, and how those changes have affected human life histories and societies.
"One of the key issues for understanding these changes from an evolutionary perspective is to know what the change from hunting and gathering pathways to the development of agriculture looked like, and the pace and tempo of innovative new subsistence strategies. Food production and agriculture were among the most important cultural innovations in human history.
"Farming allowed us to live in larger groups, in the same location, and to develop permanent villages around food production. These changes ultimately led in the Maya area to the development of the Classic Period city states of the Maya between 3,000 and 1,000 years ago. However, until this study, we did not know when early Mesoamericans first became farmers, or how quickly they accepted the new cultigen maize as a staple of their diet. Certainly, they were very successful in their previous foraging, hunting, and horticultural pursuits before farming, so it is of considerable interest to understand the timing and underlying processes," he said.
Radiocarbon dating of the skeletal samples shows the transition from pre-maize hunter-gatherer diets, where people consumed wild plants and animals, to the introduction of and increasing reliance on corn. Maize made up less than 30 percent of people's diets in the area by 4,700 years ago, rising to 70 percent 700 years later.
Maize was domesticated from teosinte, a wild grass growing in the lower reaches of the Balsas River Valley of Central Mexico, around 9,000 years ago. There is evidence maize was first cultivated in the Maya lowlands around 6,500 years ago, at about the same time that it appears along the Pacific coast of Mexico. But there is no evidence that maize was a staple grain at that time.
The first use of corn may have been for an early form of liquor.
"We hypothesize that maize stalk juice just may have been the original use of early domesticated maize plants, at a time when the cobs and seeds were essentially too small to be of much dietary significance. Humans are really good at fermenting sugary liquids into alcoholic drinks. This changed as human selection of corn plants with larger and larger seeds coincided with genetic changes in the plants themselves, leading eventually to larger cobs, with more and larger seeds in more seed rows," Prufer explained.
To determine the presence of maize in the diet of the ancient individuals, Prufer and his colleagues measured the carbon isotopes in the bones and teeth of 52 skeletons. The study involved the remains of male and female adults and children, providing a holistic sample of the population. The oldest remains date from between 9,600 and 8,600 years ago, and the record continues to about 1,000 years ago.
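The reason carbon isotopes reveal maize consumption is that maize is a C4 plant, which leaves a distinctly heavier carbon-isotope (δ13C) signature in bone than the C3 wild plants of the earlier forager diet. A common way such measurements are converted into a dietary maize fraction is a two-endmember linear mixing model; the sketch below illustrates the idea, with endmember values that are rough illustrative assumptions, not the study's calibration:

```python
def maize_fraction(d13c, c3_end=-21.5, c4_end=-7.5):
    """Fraction of C4 (maize-derived) carbon implied by a measured delta-13C
    value, under simple linear mixing between C3 and C4 dietary endmembers.
    Endmember values here are illustrative assumptions, not study values."""
    f = (d13c - c3_end) / (c4_end - c3_end)
    return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

# A measurement midway between the endmembers implies a ~50% maize diet.
print(maize_fraction(-14.5))
```

Real palaeodiet studies add corrections (for example, offsets between diet and bone collagen and the contribution of animal protein), so this linear sketch is only the core intuition.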
The analysis shows the oldest remains were people who ate wild plants, palms, fruits and nuts found in tropical forests and savannahs, along with meat from hunting terrestrial animals.
By 4,700 years ago, diets had become more diverse, with some individuals showing the first consumption of maize. The isotopic signature of two young nursing infants shows that their mothers were consuming substantial amounts of maize. The results show an increasing consumption of maize over the next millennium as the population transitioned to sedentary farming.
Prufer noted, "We can directly observe in isotopes of bone how maize became a staple grain in the early populations we are studying. We know that people had been experimenting with the wild ancestor of maize, teosinte, and with the earliest maize for thousands of years, but it does not appear to have been a staple grain until about 4000 BP. After that, people never stopped eating corn, leading it to become perhaps the most important food crop in the Americas, and then in the world."
Excavations were directed by Prufer along with an international team of archaeologists, biologists, ecologists and geologists. Numerous UNM graduate and undergraduate students took part in the field research, as did collaborators from the protected area's co-management team, the Belizean NGO Ya'axche' Conservation Trust.
Conditions weren't easy for the excavation teams, Prufer noted: "We did five years of fieldwork in two very remote rock shelter sites in the Bladen Nature Reserve in the Maya Mountains of Belize, a vast wilderness area that is a two-day walk from the nearest road. To work in this area we had to camp with no electricity, running water, or even cell service for a month at a time each year."
Analysis was conducted at Penn State University, UNM Center for Stable Isotopes, UCSB, and Exeter University in the UK. Prufer was the project director along with his colleague Doug Kennett from UCSB. The project was funded by the Alphawood Foundation and the National Science Foundation. The study was conducted by researchers from UNM, UCSB, Pennsylvania State University, University of Exeter, The US Army Central Identification Laboratory, University of Mississippi, Northern Arizona University, and the Ya'axche Conservation Trust in Belize.
Read more at Science Daily
Synthetic red blood cells mimic natural ones, and have new abilities
[Image: Illustration of red blood cells]
Red blood cells (RBCs) take up oxygen from the lungs and deliver it to the body's tissues. These disk-shaped cells contain millions of molecules of hemoglobin -- an iron-containing protein that binds oxygen. RBCs are highly flexible, which allows them to squeeze through tiny capillaries and then bounce back to their former shape. The cells also contain proteins on their surface that allow them to circulate through blood vessels for a long time without being gobbled up by immune cells. Wei Zhu, C. Jeffrey Brinker and colleagues wanted to make artificial RBCs that had similar properties to natural ones, but that could also perform new jobs such as therapeutic drug delivery, magnetic targeting and toxin detection.
The researchers made the synthetic cells by first coating donated human RBCs with a thin layer of silica. They layered positively and negatively charged polymers over the silica-RBCs, and then etched away the silica, producing flexible replicas. Finally, the team coated the surface of the replicas with natural RBC membranes. The artificial cells were similar in size, shape, charge and surface proteins to natural cells, and they could squeeze through model capillaries without losing their shape. In mice, the synthetic RBCs lasted for more than 48 hours, with no observable toxicity. The researchers loaded the artificial cells with either hemoglobin, an anticancer drug, a toxin sensor or magnetic nanoparticles to demonstrate that they could carry cargoes. The team also showed that the new RBCs could act as decoys for a bacterial toxin. Future studies will explore the potential of the artificial cells in medical applications, such as cancer therapy and toxin biosensing, the researchers say.
The authors acknowledge funding from the Air Force Office of Scientific Research, the Laboratory Directed Research & Development Program at Sandia National Laboratories, the Department of Energy Office of Science, the National Institutes of Health and the National Natural Science Foundation of China.
From Science Daily
Jun 3, 2020
Scientists discover what an armored dinosaur ate for its last meal
More than 110 million years ago, a lumbering 1,300-kilogram, armour-plated dinosaur ate its last meal, died, and was washed out to sea in what is now northern Alberta. This ancient beast then sank onto its thorny back, churning up mud in the seabed that entombed it -- until its fossilized body was discovered in a mine near Fort McMurray in 2011.
Since then, researchers at the Royal Tyrrell Museum of Palaeontology in Drumheller, Alta., Brandon University, and the University of Saskatchewan (USask) have been working to unlock the extremely well-preserved nodosaur's many secrets -- including what this large armoured dinosaur (a type of ankylosaur) actually ate for its last meal.
"The finding of the actual preserved stomach contents from a dinosaur is extraordinarily rare, and this stomach recovered from the mummified nodosaur by the museum team is by far the best-preserved dinosaur stomach ever found to date," said USask geologist Jim Basinger, a member of the team that analyzed the dinosaur's stomach contents, a distinct mass about the size of a soccer ball.
"When people see this stunning fossil and are told that we know what its last meal was because its stomach was so well preserved inside the skeleton, it will almost bring the beast back to life for them, providing a glimpse of how the animal actually carried out its daily activities, where it lived, and what its preferred food was."
There has been plenty of speculation about what dinosaurs ate, but very little is actually known. In a just-published article in Royal Society Open Science, the team led by Royal Tyrrell Museum palaeontologist Caleb Brown and Brandon University biologist David Greenwood provides detailed and definitive evidence of the diet of large, plant-eating dinosaurs -- something that has not been known conclusively for any herbivorous dinosaur until now.
"This new study changes what we know about the diet of large herbivorous dinosaurs," said Brown. "Our findings are also remarkable for what they can tell us about the animal's interaction with its environment, details we don't usually get just from the dinosaur skeleton."
Previous studies had shown evidence of seeds and twigs in the gut, but these studies offered no information as to the kinds of plants that had been eaten. While tooth and jaw shape, plant availability and digestibility have fuelled considerable speculation, the specific plants herbivorous dinosaurs consumed have remained largely a mystery.
So what was the last meal of Borealopelta markmitchelli (which means "northern shield" and recognizes Mark Mitchell, the museum technician who spent more than five years carefully exposing the skin and bones of the dinosaur from the fossilized marine rock)?
"The last meal of our dinosaur was mostly fern leaves -- 88 per cent chewed leaf material and seven per cent stems and twigs," said Greenwood, who is also a USask adjunct professor.
"When we examined thin sections of the stomach contents under a microscope, we were shocked to see beautifully preserved and concentrated plant material. In marine rocks we almost never see such superb preservation of leaves, including the microscopic, spore-producing sporangia of ferns."
Team members Basinger, Greenwood and Brandon University graduate student Jessica Kalyniuk compared the stomach contents with food plants known to be available from the study of fossil leaves from the same period in the region. They found that the dinosaur was a picky eater, choosing to eat particular ferns (leptosporangiate, the largest group of ferns today) over others, and not eating many cycad and conifer leaves common to the Early Cretaceous landscape.
Specifically, the team identified 48 palynomorphs (microfossils like pollen and spores) including moss or liverwort, 26 clubmosses and ferns, 13 gymnosperms (mostly conifers), and two angiosperms (flowering plants).
"Also, there is considerable charcoal in the stomach from burnt plant fragments, indicating that the animal was browsing in a recently burned area and was taking advantage of a recent fire and the flush of ferns that frequently emerges on a burned landscape," said Greenwood.
"This adaptation to a fire ecology is new information. Like large herbivores alive today such as moose and deer, and elephants in Africa, these nodosaurs by their feeding would have shaped the vegetation on the landscape, possibly maintaining more open areas by their grazing."
The team also found gastroliths, or gizzard stones, which are swallowed by some animals, including herbivorous dinosaurs and modern birds such as geese, to aid digestion.
"We also know that based on how well-preserved both the plant fragments and animal itself are, the animal's death and burial must have followed shortly after the last meal," said Brown. "Plants give us a much better idea of season than animals, and they indicate that the last meal and the animal's death and burial all happened in the late spring to mid-summer."
"Taken together, these findings enable us to make inferences about the ecology of the animal, including how selective it was in choosing which plants to eat and how it may have exploited forest fire regrowth. It will also assist in understanding of dinosaur digestion and physiology."
Borealopelta markmitchelli, discovered during mining operations at the Suncor Millennium open pit mine north of Fort McMurray, has been on display at the Royal Tyrrell Museum since 2017. The main chunk of the stomach mass is on display with the skeleton.
Other members of the team include museum scientists Donald Henderson and Dennis Braman, and Brandon University research associate and USask alumna Cathy Greenwood.
Research continues on Borealopelta markmitchelli -- the best fossil of a nodosaur ever found -- to learn more about its environment and behaviour while it was alive. Student Kalyniuk is currently expanding her work on fossil plants of this age to better understand the composition of the forests in which it lived. Many of the fossils she will examine are in Basinger's collections at USask.
Read more at Science Daily
Since then, researchers at the Royal Tyrrell Museum of Palaeontology in Drumheller, Alta., Brandon University, and the University of Saskatchewan (USask) have been working to unlock the extremely well-preserved nodosaur's many secrets -- including what this large armoured dinosaur (a type of ankylosaur) actually ate for its last meal.
"The finding of the actual preserved stomach contents from a dinosaur is extraordinarily rare, and this stomach recovered from the mummified nodosaur by the museum team is by far the best-preserved dinosaur stomach ever found to date," said USask geologist Jim Basinger, a member of the team that analyzed the dinosaur's stomach contents, a distinct mass about the size of a soccer ball.
"When people see this stunning fossil and are told that we know what its last meal was because its stomach was so well preserved inside the skeleton, it will almost bring the beast back to life for them, providing a glimpse of how the animal actually carried out its daily activities, where it lived, and what its preferred food was."
There has been lots of speculation about what dinosaurs ate, but very little known. In a just-published article in Royal Society Open Science, the team led by Royal Tyrrell Museum palaeontologist Caleb Brown and Brandon University biologist David Greenwood provides detailed and definitive evidence of the diet of large, plant-eating dinosaurs -- something that has not been known conclusively for any herbivorous dinosaur until now.
"This new study changes what we know about the diet of large herbivorous dinosaurs," said Brown. "Our findings are also remarkable for what they can tell us about the animal's interaction with its environment, details we don't usually get just from the dinosaur skeleton."
Previous studies had shown evidence of seeds and twigs in the gut, but these studies offered no information as to the kinds of plants that had been eaten. While tooth and jaw shape, plant availability and digestibility have fuelled considerable speculation, the specific plants herbivorous dinosaurs consumed have been largely a mystery.
So what was the last meal of Borealopelta markmitchelli (which means "northern shield" and recognizes Mark Mitchell, the museum technician who spent more than five years carefully exposing the skin and bones of the dinosaur from the fossilized marine rock)?
"The last meal of our dinosaur was mostly fern leaves -- 88 per cent chewed leaf material and seven per cent stems and twigs," said Greenwood, who is also a USask adjunct professor.
"When we examined thin sections of the stomach contents under a microscope, we were shocked to see beautifully preserved and concentrated plant material. In marine rocks we almost never see such superb preservation of leaves, including the microscopic, spore-producing sporangia of ferns."
Team members Basinger, Greenwood and Brandon University graduate student Jessica Kalyniuk compared the stomach contents with food plants known to be available from the study of fossil leaves from the same period in the region. They found that the dinosaur was a picky eater, choosing to eat particular ferns (leptosporangiate, the largest group of ferns today) over others, and not eating many cycad and conifer leaves common to the Early Cretaceous landscape.
Specifically, the team identified 48 palynomorphs (microfossils like pollen and spores) including moss or liverwort, 26 clubmosses and ferns, 13 gymnosperms (mostly conifers), and two angiosperms (flowering plants).
"Also, there is considerable charcoal in the stomach from burnt plant fragments, indicating that the animal was browsing in a recently burned area and was taking advantage of a recent fire and the flush of ferns that frequently emerges on a burned landscape," said Greenwood.
"This adaptation to a fire ecology is new information. Like large herbivores alive today such as moose and deer, and elephants in Africa, these nodosaurs by their feeding would have shaped the vegetation on the landscape, possibly maintaining more open areas by their grazing."
The team also found gastroliths, or gizzard stones, which animals such as herbivorous dinosaurs and modern birds like geese swallow to aid digestion.
"We also know that based on how well-preserved both the plant fragments and animal itself are, the animal's death and burial must have followed shortly after the last meal," said Brown. "Plants give us a much better idea of season than animals, and they indicate that the last meal and the animal's death and burial all happened in the late spring to mid-summer."
"Taken together, these findings enable us to make inferences about the ecology of the animal, including how selective it was in choosing which plants to eat and how it may have exploited forest fire regrowth. It will also assist in understanding of dinosaur digestion and physiology."
Borealopelta markmitchelli, discovered during mining operations at the Suncor Millennium open pit mine north of Fort McMurray, has been on display at the Royal Tyrrell Museum since 2017. The main chunk of the stomach mass is on display with the skeleton.
Other members of the team include museum scientists Donald Henderson and Dennis Braman, and Brandon University research associate and USask alumna Cathy Greenwood.
Research continues on Borealopelta markmitchelli -- the best fossil of a nodosaur ever found -- to learn more about its environment and behaviour while it was alive. Student Kalyniuk is currently expanding her work on fossil plants of this age to better understand the composition of the forests in which it lived. Many of the fossils she will examine are in Basinger's collections at USask.
Read more at Science Daily
Rivers help lock carbon from fires into oceans for thousands of years
The extent to which rivers transport burned carbon to oceans -- where it can be stored for tens of millennia -- is revealed in new research led by the University of East Anglia (UEA).
The study, published today in Nature Communications, calculates how much burned carbon is being flushed out by rivers and locked up in the oceans.
Oceans store a surprising amount of carbon from burned vegetation, for example as a result of wildfires and managed burning. The research team describe it as a natural -- if unexpected -- quirk of the Earth system.
The international interdisciplinary team, including collaborators from the Universities of Exeter, Swansea, Zurich, Oldenburg and Florida International, studied the amount of dissolved carbon flowing through 78 rivers on every continent except Antarctica.
Lead researcher Dr Matthew Jones, of the Tyndall Centre for Climate Change Research at UEA, said: "Fires leave behind carbon-rich materials, like charcoal and ash, which break down very slowly in soils. We care about this burned carbon because it is essentially 'locked out' of the atmosphere for the distant future -- it breaks down to greenhouse gases extremely slowly in comparison to most unburned carbon.
"We know that this burned carbon takes about 10 times longer to break down in the oceans than on land. Rivers are the conveyor belts that shift carbon from the land to the oceans, so they determine how long it takes for burned carbon to break down. So, we set out to estimate how much burned carbon reaches the oceans via rivers."
Based on a large dataset of 409 observations from 78 rivers around the world, the researchers analysed how the burned fraction of dissolved carbon in rivers varies at different latitudes and in different ecosystems. They then upscaled their findings to estimate that 18 million tonnes of dissolved burned carbon are transported annually by rivers. When combined with the burned carbon that is exported with sediments, the estimate rises to 43 million tonnes of burned carbon per year.
Dr Jones said: "We found that a surprising amount -- around 12 per cent -- of all carbon flowing through rivers comes from burned vegetation.
"While fires emit two billion tonnes of carbon each year, they also leave behind around 250 million tonnes of carbon as burned residues, like charcoal and ash. Around half of the carbon in these residues is in the particularly long-lived form of 'black carbon', and we show that about one-third of all black carbon reaches the oceans."
"This is a good thing because that carbon gets locked up and stored for very long periods -- it takes tens of millennia for black carbon to degrade to carbon dioxide in the oceans. By comparison, only about one per cent of carbon taken up by land plants ends up in the ocean.
"With wildfires anticipated to increase in the future because of climate change, we can expect more burned carbon to be flushed out by rivers and locked up in the oceans.
"It's a natural quirk of the Earth system -- a moderating 'negative feedback' of the warming climate that could trap some extra carbon in a more fire-prone world."
Read more at Science Daily
First optical measurements of Milky Way's Fermi Bubbles probe their origin
Using the Wisconsin H-Alpha Mapper telescope, astronomers have for the first time measured the Fermi Bubbles in the visible light spectrum. The Fermi Bubbles are two enormous outflows of high-energy gas that emanate from the Milky Way, and the finding refines our understanding of the properties of these mysterious blobs.
The research team from the University of Wisconsin-Madison, UW-Whitewater and Embry-Riddle Aeronautical University measured the emission of light from hydrogen and nitrogen in the Fermi Bubbles at the same position as recent ultraviolet absorption measurements made by the Hubble Telescope.
"We combined those two measurements of emission and absorption to estimate the density, pressure and temperature of the ionized gas, and that lets us better understand where this gas is coming from," says Dhanesh Krishnarao, lead author of the new study and an astronomy graduate student at UW-Madison.
The researchers announced their findings June 3 at the 236th meeting of the American Astronomical Society, which, in response to the COVID-19 pandemic, was held virtually for the first time in the society's history, dating back to 1899.
Extending 25,000 light years both above and below the center of the Milky Way, the Fermi Bubbles were discovered in 2010 by the Fermi Gamma-ray Space Telescope. These faint but highly energetic outflows of gas are racing away from the center of the Milky Way at millions of miles per hour. But while the phenomenon is thought to date back several million years, the events that produced the bubbles remain a mystery.
Now, with new measurements of the density and pressure of the ionized gas, researchers can test models of the Fermi Bubbles against observations.
"The other significant thing is that we now have the possibility of measuring the density and pressure and the velocity structure in many locations," with the all-sky WHAM telescope, says Bob Benjamin, a professor of astronomy at UW-Whitewater and co-author of the study. "We can do an extensive mapping effort across the Fermi Bubbles above and below the plane of the galaxy to see if the models that people have developed are holding up. Because, unlike the ultraviolet data, we're not limited to just specific lines of sight."
Matt Haffner, professor of physics and astronomy at Embry-Riddle Aeronautical University and a co-author of the report, says the work demonstrates the usefulness of the WHAM telescope, developed at UW-Madison, for telling us more about the workings of the Milky Way. The central region of our home galaxy has long been difficult to study because gas blocks our view, but WHAM has provided new opportunities to gather the kind of information we have for distant galaxies.
Read more at Science Daily
Astronomers capture a pulsar 'powering up'
The research, led by PhD candidate Adelle Goodwin from the Monash School of Physics and Astronomy, will be featured at an American Astronomical Society meeting this week before it is published in Monthly Notices of the Royal Astronomical Society. Goodwin leads a team of international researchers, including her supervisor, Monash University Associate Professor Duncan Galloway, and Dr David Russell from New York University Abu Dhabi.
The scientists observed an 'accreting' neutron star as it entered an outburst phase in an international collaborative effort involving five groups of researchers, seven telescopes (five on the ground, two in space), and 15 collaborators.
It is the first time such an event has been observed in this detail -- in multiple frequencies, including high-sensitivity measurements in both optical and X-ray.
The physics behind this 'switching on' process has eluded physicists for decades, partly because there are very few comprehensive observations of the phenomenon.
The researchers caught one of these accreting neutron star systems in the act of entering outburst, revealing that it took 12 days for material to swirl inwards and collide with the neutron star, substantially longer than the two to three days most theories suggest.
"These observations allow us to study the structure of the accretion disk, and determine how quickly and easily material can move inwards to the neutron star," Adelle said.
"Using multiple telescopes that are sensitive to light in different energies we were able to trace that the initial activity happened near the companion star, in the outer edges of the accretion disk, and it took 12 days for the disk to be brought into the hot state and for material to spiral inward to the neutron star, and X-rays to be produced," she said.
In an 'accreting' neutron star system, a pulsar (a dense remnant of an old star) strips material away from a nearby star, forming an accretion disk of material spiralling in towards the pulsar, where it releases extraordinary amounts of energy -- about the total energy output of the sun in 10 years, over the period of a few short weeks.
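To put that energy figure in perspective, a rough conversion into an average luminosity (assuming "a few short weeks" means about three weeks; the duration and the physical constants are assumptions, not from the article):

```python
# Rough average luminosity implied by "the Sun's total output over 10 years,
# released over a few short weeks" (here assumed to be three weeks).

L_SUN = 3.828e26        # solar luminosity, watts
YEAR = 3.156e7          # seconds in a year
WEEK = 6.048e5          # seconds in a week

total_energy = L_SUN * 10 * YEAR        # joules released in the outburst
avg_power = total_energy / (3 * WEEK)   # spread over ~3 weeks
print(avg_power / L_SUN)                # roughly 170x the Sun's luminosity
```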
The pulsar observed is SAX J1808.4−3658, which rotates at a rapid 400 times per second and is located 11,000 light-years away in the constellation Sagittarius.
"This work enables us to shed some light on the physics of accreting neutron star systems, and to understand how these explosive outbursts are triggered in the first place, which has puzzled astronomers for a long time," said New York University Abu Dhabi researcher, Dr David Russell, one of the study's co-authors.
Accretion disks are usually made of hydrogen, but this particular object has a disk that is made up of 50% helium, more helium than most disks. The scientists think that this excess helium may be slowing down the heating of the disk because helium 'burns' at a higher temperature, causing the 'powering up' to take 12 days.
Read more at Science Daily
Jun 2, 2020
On the hunt for megafauna in North America
Research from Curtin University has found that prehistoric climate change does not explain the extinction of megafauna in North America at the end of the last Ice Age.
The research, published today in Nature Communications, analysed ancient DNA from bone fragments and soil found inside Hall's Cave, located in central Texas. The researchers discovered important genetic clues to the past biodiversity in North America and provided new insights into the causes of animal extinctions during the Ice Age.
The research was an international collaboration between Curtin University, University of Texas-Austin, Texas A&M University and Stafford Research Labs.
Lead researcher Mr Frederik Seersholm, Forrest Foundation Scholar and PhD candidate from Curtin's School of Molecular and Life Sciences, said the analysis tracks how biodiversity in Texas changed as temperatures dropped, and then recovered around 13,000 years ago.
"At the end of the last ice-age, Earth experienced drastic climate changes that significantly altered plant and animal biodiversity. In North America these changes coincided with the arrival of humans," Mr Seersholm said.
"When we combined our new data with existing fossil studies in the region, we obtained a detailed picture of the biodiversity turnover against the backdrop of both human predation and pre-historic climate changes.
"Our findings show that while plant diversity recovered as the climate warmed, large animal diversity did not recover.
"Of the large-bodied animals, known as megafauna, identified at the cave, nine became extinct and five disappeared permanently from the region.
"In contrast, small animals which are not believed to have been hunted intensely by humans, adapted well to the changing climate by migrating. Hence, the data suggests a factor other than climate may have contributed to the extinction of the large mammals."
While the research team acknowledges it is difficult to assess the exact impact of human hunting on the megafauna, they believe there is now sufficient evidence to suggest our ancestors were the main driver of the disappearance of ice age species such as the mammoth and sabre-toothed cat.
Mr Seersholm said the findings demonstrate how much information is stored in seemingly insignificant bone fragments.
"The study builds on years of research at Hall's cave, which have helped shape our understanding of the North American megafauna since the first analyses were conducted in the 1990s," Mr Seersholm said.
"By combining new genetic methods with classic stratigraphy and vertebrate palaeontology, our research adds to this story.
Read more at Science Daily
Large simulation finds new origin of supermassive black holes
Computer simulations conducted by astrophysicists at Tohoku University in Japan have revealed a new theory for the origin of supermassive black holes. In this theory, the precursors of supermassive black holes grow by swallowing up not only interstellar gas but also smaller stars. This helps to explain the large number of supermassive black holes observed today.
Almost every galaxy in the modern Universe has a supermassive black hole at its center. Their masses can sometimes reach up to 10 billion times the mass of the Sun. However, their origin is still one of the great mysteries of astronomy. A popular theory is the direct collapse model, where primordial clouds of interstellar gas collapse under self-gravity to form supermassive stars which then evolve into supermassive black holes. But previous studies have shown that direct collapse only works with pristine gas consisting of only hydrogen and helium. Heavier elements such as carbon and oxygen change the gas dynamics, causing the collapsing gas to fragment into many smaller clouds which form small stars of their own, rather than a few supermassive stars. Direct collapse from pristine gas alone can't explain the large number of supermassive black holes seen today.
Sunmyon Chon, a postdoctoral fellow at the Japan Society for the Promotion of Science and Tohoku University, and his team used the National Astronomical Observatory of Japan's supercomputer "ATERUI II" to perform long-term 3D high-resolution simulations to test the possibility that supermassive stars could form even in heavy-element-enriched gas. Star formation in gas clouds including heavy elements has been difficult to simulate because of the computational cost of simulating the violent splitting of the gas, but advances in computing power, specifically the high calculation speed of "ATERUI II" commissioned in 2018, allowed the team to overcome this challenge. These new simulations make it possible to study the formation of stars from gas clouds in more detail.
Contrary to previous predictions, the research team found that supermassive stars can still form from heavy-element-enriched gas clouds. As expected, the gas cloud breaks up violently and many smaller stars form. However, there is a strong gas flow towards the center of the cloud; the smaller stars are dragged by this flow and are swallowed up by the massive stars in the center. The simulations resulted in the formation of a massive star 10,000 times more massive than the Sun. "This is the first time that we have shown the formation of such a large black hole precursor in clouds enriched in heavy elements. We believe that the giant star thus formed will continue to grow and evolve into a giant black hole," says Chon.
Read more at Science Daily
From dark to light in a flash: Smart film lets windows switch autonomously
Researchers have developed a new easy-to-use smart optical film technology that allows smart window devices to autonomously switch between transparent and opaque states in response to the surrounding light conditions.
The proposed 3D hybrid nanocomposite film with a highly periodic network structure has empirically demonstrated its high speed and performance, enabling the smart window to quantify and self-regulate its high-contrast optical transmittance. As a proof of concept, a mobile-app-enabled smart window device for Internet of Things (IoT) applications has been realized using the proposed smart optical film with successful expansion to the 3-by-3-inch scale. This energy-efficient and cost-effective technology holds great promise for future use in various applications that require active optical transmission modulation.
Flexible optical transmission modulation technologies for smart applications including privacy-protection windows, zero-energy buildings, and beam projection screens have been in the spotlight in recent years. Conventional technologies that used external stimuli such as electricity, heat, or light to modulate optical transmission had only limited applications due to their slow response speeds, unnecessary color switching, and low durability, stability, and safety.
The optical transmission modulation contrast achieved by controlling the light scattering interfaces on non-periodic 2D surface structures that often have low optical density such as cracks, wrinkles, and pillars is also generally low. In addition, since the light scattering interfaces are exposed and not subject to any passivation, they can be vulnerable to external damage and may lose optical transmission modulation functions. Furthermore, in-plane scattering interfaces that randomly exist on the surface make large-area modulation with uniformity difficult.
Inspired by these limitations, a KAIST research team led by Professor Seokwoo Jeon from the Department of Materials Science and Engineering and Professor Jung-Wuk Hong of the Civil and Environmental Engineering Department used proximity-field nanopatterning (PnP) technology that effectively produces highly periodic 3D hybrid nanostructures, and an atomic layer deposition (ALD) technique that allows the precise control of oxide deposition and the high-quality fabrication of semiconductor devices.
The team then successfully produced a large-scale smart optical film with a size of 3 by 3 inches in which ultrathin alumina nanoshells are inserted between the elastomers in a periodic 3D nanonetwork.
This "mechano-responsive" 3D hybrid nanocomposite film with a highly periodic network structure is the largest smart optical transmission modulation film reported to date. The film has been shown to achieve state-of-the-art optical transmission modulation of up to 74% at visible wavelengths, from 90% transmission in its initial state to 16% in the scattering state under strain. Its durability and stability were proven through more than 10,000 cycles of harsh mechanical deformation, including stretching, releasing, bending, and exposure to temperatures of up to 70°C. When this film was used, the transmittance of the smart window device adjusted promptly and automatically, within one second, in response to the surrounding light conditions. Through these experiments, the underlying physics of the optical scattering phenomena occurring at the heterogeneous interfaces were identified. The findings were reported in the online edition of Advanced Science on April 26. KAIST Professor Jong-Hwa Shin's group and Professor Young-Seok Shim at Silla University also collaborated on this project.
Donghwi Cho, a PhD candidate in materials science and engineering at KAIST and co-lead author of the study, said, "Our smart optical film technology can better control high-contrast optical transmittance by relatively simple operating principles and with low energy consumption and costs."
Read more at Science Daily
Scientists find a switch to flip and turn off breast cancer growth and metastasis
Researchers at Tulane University School of Medicine identified a gene that causes an aggressive form of breast cancer to rapidly grow. More importantly, they have also discovered a way to "turn it off" and inhibit cancer from occurring. The animal study results have been so compelling that the team is now working on FDA approval to begin clinical trials and has published details in the journal Scientific Reports.
The team led by Dr. Reza Izadpanah examined the role two genes, including one whose involvement in cancer was discovered by Tulane researchers, play in causing triple negative breast cancer (TNBC). TNBC is considered to be the most aggressive of breast cancers, with a much poorer prognosis for treatment and survival. Izadpanah's team specifically identified an inhibitor of the TRAF3IP2 gene, which was proven to suppress the growth and spread (metastasis) of TNBC in mouse models that closely resemble humans.
In parallel studies looking at a duo of genes -- TRAF3IP2 and Rab27a, which play roles in the secretion of substances that can cause tumor formation -- the research teams studied what happens when they were stopped from functioning. Suppressing the expression of either gene led to a decline in both tumor growth and the spread of cancer to other organs. Izadpanah says that when Rab27a was silenced, the tumor did not grow but was still spreading a small number of cancer cells to other parts of the body. However, when the TRAF3IP2 gene was turned off, they found no spread (known as "metastasis" or "micrometastasis") of the original tumor cells for a full year following the treatment. Even more beneficial, inhibiting the TRAF3IP2 gene not only stopped future tumor growth but caused existing tumors to shrink to undetectable levels.
"Our findings show that both genes play a role in breast cancer growth and metastasis," says Izadpanah. "While targeting Rab27a delays progression of tumor growth, it fails to affect the spread of tiny amounts of cancer cells, or micrometastasis. On the contrary, targeting TRAF3IP2 suppresses tumor growth and spread, and interfering with it both shrinks pre-formed tumors and prevents additional spread. This exciting discovery has revealed that TRAF3IP2 can play a role as a novel therapeutic target in breast cancer treatment."
Read more at Science Daily
Gene discovery in fruit flies 'opens new doors' for hearing loss cure in elderly
Scientists at UCL have discovered sets of regulatory genes, which are responsible for maintaining healthy hearing. The finding, made in fruit flies, could potentially lead to treatments for age-related hearing loss (ARHL) in humans.
Globally one third of people (1.23 billion people) aged over 65 experience hearing impairment, and while there are thought to be more than 150 candidate genes which may affect hearing loss, there is no unified view on how to use these to develop novel preventive or curative hearing loss therapies.
In the study, published in Scientific Reports, researchers at the UCL Ear Institute assessed the hearing ability of the common fruit fly (Drosophila melanogaster) across its life span (around 70 days*), to see if their hearing declines with age.
The fruit fly is a powerful model in biology and its ear shares many molecular similarities with the ears of humans, which make it an ideal tool for the study of human hearing loss. However, so far, no study had assessed the fruit flies' hearing across their life course.
Using advanced biomechanical, neurophysiological and behavioural techniques**, the researchers found that the antennal ears of fruit flies also display ARHL, with nearly all measures of sensitive hearing starting to decline after 50 days of age.
With this knowledge, the researchers turned their interest to the time before flies developed ARHL: they wanted to know if there were any 'age-variable' genes in the flies' Johnston's Organ (their 'inner ear'), which have kept the ears healthy for 50 days of their lives.
Using a combination of molecular biology, bioinformatics and mutant analysis, the researchers identified a new set of transcriptional regulator genes: so-called 'homeostasis genes', the genetic actuators that control the activity which keeps the ear sensitive.
For researchers, one of the principal advantages of the fruit fly model is that it allows the roles of individual genes to be tested easily, either by increasing their function (overexpression) or by silencing them (RNA interference, or RNAi). Exploiting these tools, the researchers also found that manipulating some of the homeostasis genes could prevent the flies from developing ARHL.
Lead author Professor Joerg Albert (UCL Ear Institute) said: "While many studies have been conducted into the hearing function of fruit flies, ours is the first to look at the mechanistic and molecular detail of their auditory life course.
"Our twin discoveries that fruit flies experience age-related hearing loss and that their prior auditory health is controlled by a particular set of genes, is a significant breakthrough. The fact that these genes are conserved in humans will also help to focus future clinical research in humans and thereby accelerate the discovery of novel pharmacological or gene-therapeutic strategies.
"Building on our findings from Drosophila, we have already started a follow-up drug discovery project designed to fast-track novel treatments for human ARHL."
Dr Ralph Holme, Executive Director of Research at Action on Hearing Loss, said: "We urgently need to find effective treatments able to prevent or slow the loss of hearing as we age.
"Hearing loss affects 70% of people aged over 70 years old, cutting people off from friends and family.
"Action on Hearing Loss is proud to have been able to support this exciting research that has identified genes involved in maintaining hearing.
"It not only advances our understanding of why hearing declines with age, but importantly also opens the door to the future development of treatments to prevent it."
*At 25 degrees Celsius, one day for a fruit fly is equivalent (approximately) to one year for a human.
Read more at Science Daily
Jun 1, 2020
New study provides maps, ice favorability index to companies looking to mine the moon
The 49ers who panned for gold during California's Gold Rush didn't really know where they might strike it rich. They had word of mouth and not much else to go on.
Researchers at the University of Central Florida want to give prospectors looking to mine the moon better odds of striking gold, which on the moon means rich deposits of water ice that can be turned into resources, like fuel, for space missions.
A team led by planetary scientist Kevin Cannon created an Ice Favorability Index. The geological model explains the process of ice formation at the poles of the moon and maps the terrain, including craters that may hold ice deposits. The model, published in the peer-reviewed journal Icarus, accounts for what asteroid impacts on the surface of the moon may do to deposits of ice found meters beneath the surface.
"Despite being our closest neighbor, we still don't know a lot about water on the moon, especially how much there is beneath the surface," Cannon says. "It's important for us to consider the geologic processes that have gone on to better understand where we may find ice deposits and how to best get to them with the least amount of risk."
The team was inspired by mining companies on Earth, which conduct detailed geological work, and take core samples before investing in costly extraction sites. Mining companies conduct field mappings, take core samples from the potential site and try to understand the geological reasons behind the formation of the particular mineral they are looking for in an area of interest. In essence they create a model for what a mining zone might look like before deciding to plunk down money to drill.
The team at UCF followed the same approach using data collected about the moon over the years and ran simulations in the lab. While they couldn't collect core samples, they had data from satellite observations and from the first trip to the moon.
Why Mine the Moon
In order for humans to explore the solar system and beyond, spacecraft have to be able to launch and continue on their long missions. One of the challenges is fuel. There are no gas stations in space, which means spacecraft have to carry extra fuel with them for long missions, and that fuel weighs a lot. Mining the moon could result in creating fuel, which would help ease the cost of flights since spacecraft wouldn't have to haul the extra fuel.
Water ice can be purified and processed to produce both hydrogen and oxygen for propellant, according to several previously published studies. Sometime in the future, this process could be completed on the moon, effectively producing a gas station for spacecraft. Asteroids may also provide similar resources for fuel.
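The hydrogen-oxygen yield described above follows from the basic electrolysis stoichiometry 2 H2O → 2 H2 + O2. As a rough illustration (not a figure from the study, and ignoring process losses), here is what a tonne of water ice could yield in the ideal case:

```python
# Back-of-the-envelope stoichiometry for splitting water ice into propellant.
# Reaction: 2 H2O -> 2 H2 + O2 (1 mol H2 and 0.5 mol O2 per mol of water).
M_H2O = 18.015  # molar mass, g/mol
M_H2 = 2.016
M_O2 = 31.998

def propellant_from_ice(water_kg: float) -> tuple[float, float]:
    """Hydrogen and oxygen masses (kg) from fully electrolysing water ice."""
    moles = water_kg * 1000 / M_H2O       # mol of H2O
    h2 = moles * M_H2 / 1000              # 1 mol H2 per mol H2O
    o2 = moles * 0.5 * M_O2 / 1000        # 0.5 mol O2 per mol H2O
    return h2, o2

h2, o2 = propellant_from_ice(1000)        # one tonne of ice
print(f"H2: {h2:.0f} kg, O2: {o2:.0f} kg")  # roughly 112 kg H2, 888 kg O2
```

Note that the oxidizer makes up nearly 90% of the propellant mass, which is why water-rich deposits are so attractive as an off-Earth fuel source.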
Some believe a system of these "gas stations" would be the start of the industrialization of space.
Several private companies are exploring mining techniques to employ on the moon. Both Luxembourg and the United States have adopted legislation giving citizens and corporations ownership rights over resources mined in space, including the moon, according to the study.
"The idea of mining the moon and asteroids isn't science fiction anymore," says UCF physics Professor and co-author Dan Britt. "There are teams around the world looking to find ways to make this happen and our work will help get us closer to making the idea a reality."
Read more at Science Daily
Asymmetry found in spin directions of galaxies
An analysis of more than 200,000 spiral galaxies has revealed unexpected links between spin directions of galaxies, and the structure formed by these links might suggest that the early universe could have been spinning, according to a Kansas State University study.
Lior Shamir, a K-State computational astronomer and computer scientist, presented the findings at the 236th American Astronomical Society meeting in June 2020. The findings are significant because the observations conflict with some previous assumptions about the large-scale structure of the universe.
Since the time of Edwin Hubble, astronomers have believed that the universe is expanding with no particular direction and that the galaxies in it are distributed with no particular cosmological structure. But Shamir's recent observations of geometrical patterns of more than 200,000 spiral galaxies suggest that the universe could have a defined structure and that the early universe could have been spinning. Patterns in the distribution of these galaxies suggest that spiral galaxies in different parts of the universe, separated by both space and time, are related through the directions toward which they spin, according to the study.
"Data science in astronomy has not just made astronomy research more cost-effective, but it also allows us to observe the universe in a completely different way," said Shamir, also a K-State associate professor of computer science. "The geometrical pattern exhibited by the distribution of the spiral galaxies is clear, but can only be observed when analyzing a very large number of astronomical objects."
A spiral galaxy is a unique astronomical object because its visual appearance depends on the observer's perspective. For instance, a spiral galaxy that spins clockwise when observed from Earth would seem to spin counterclockwise to an observer located on the opposite side of that galaxy. If the universe is isotropic and has no particular structure -- as previous astronomers have predicted -- the number of galaxies that spin clockwise would be roughly equal to the number of galaxies that spin counterclockwise. Shamir used data from modern telescopes to show that this is not the case.
With traditional telescopes, counting galaxies in the universe is a daunting task. But modern robotic telescopes such as the Sloan Digital Sky Survey, or SDSS, and the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, are able to image many millions of galaxies automatically as they survey the sky. Machine vision can then sort millions of galaxies by their spin direction far faster than any person or group of people.
When comparing the number of galaxies with different spin directions, the number that spin clockwise is not equal to the number that spin counterclockwise. The difference is small, just over 2%, but with such a large number of galaxies there is a probability of less than 1 in 4 billion that the asymmetry arose by chance, according to Shamir's research.
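A significance figure of this kind can be sanity-checked with a standard binomial test. The sketch below uses illustrative counts (a 2% excess in a sample of 100,000 galaxies, not the study's actual numbers) under the null hypothesis that both spin directions are equally likely:

```python
import math

def two_sided_binomial_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided p-value for k successes in n trials under H0: P(success)=p,
    using the normal approximation (justified here since n is very large)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = abs(k - mean) / sd
    return math.erfc(z / math.sqrt(2))  # two-sided Gaussian tail probability

# Illustrative counts: 51,000 clockwise vs 49,000 counterclockwise,
# i.e. a 2% difference in a sample of 100,000 galaxies.
p_value = two_sided_binomial_p(51_000, 100_000)
print(f"p = {p_value:.2e}")  # ~2.5e-10, i.e. about 1 in 4 billion
```

Even a 2% imbalance becomes overwhelmingly significant at this sample size, which is why such subtle patterns only emerge when analyzing very large numbers of galaxies.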
The patterns span over more than 4 billion light-years, but the asymmetry in that range is not uniform. The study found that the asymmetry gets higher when the galaxies are more distant from Earth, which shows that the early universe was more consistent and less chaotic than the current universe.
But the patterns do not just show that the universe is not symmetric, but also that the asymmetry changes in different parts of the universe, and the differences exhibit a unique pattern of multipoles.
"If the universe has an axis, it is not a simple single axis like a merry-go-round," Shamir said. "It is a complex alignment of multiple axes that also have a certain drift."
The concept of cosmological multipoles is not new. Previous space-based observatories -- such as the Cosmic Background Explorer, or COBE, satellite; the Wilkinson Microwave Anisotropy Probe, or WMAP mission; and the Planck observatory -- showed that the cosmic microwave background, which is electromagnetic radiation from the very early universe, also exhibits multiple poles. But the measurement of the cosmic microwave background is sensitive to foreground contamination -- such as the obstruction of the Milky Way -- and cannot show how these poles changed over time. The asymmetry between spin directions of spiral galaxies is a measurement that is not sensitive to obstruction. What can obstruct galaxies spinning in one direction in a certain field will necessarily also obstruct galaxies spinning in the opposite way.
Read more at Science Daily
Lior Shamir, a K-State computational astronomer and computer scientist, presented the findings at the 236th American Astronomical Society meeting in June 2020. The findings are significant because the observations conflict with some previous assumptions about the large-scale structure of the universe.
Since the time of Edwin Hubble, astronomers have believed that the universe is inflating with no particular direction and that the galaxies in it are distributed with no particular cosmological structure. But Shamir's recent observations of geometrical patterns of more than 200,000 spiral galaxies suggest that the universe could have a defined structure and that the early universe could have been spinning. Patterns in the distribution of these galaxies suggest that spiral galaxies in different parts of the universe, separated by both space and time, are related through the directions toward which they spin, according to the study.
"Data science in astronomy has not just made astronomy research more cost-effective, but it also allows us to observe the universe in a completely different way," said Shamir, also a K-State associate professor of computer science. "The geometrical pattern exhibited by the distribution of the spiral galaxies is clear, but can only be observed when analyzing a very large number of astronomical objects."
A spiral galaxy is a unique astronomical object because its visual appearance depends on the observer's perspective. For instance, a spiral galaxy that spins clockwise when observed from Earth, would seem to spin counterclockwise when the observer is located in the opposite side of that galaxy. If the universe is isotropic and has no particular structure -- as previous astronomers have predicted -- the number of galaxies that spin clockwise would be roughly equal to the number of galaxies that spin counterclockwise. Shamir used data from modern telescopes to show that this is not the case.
With traditional telescopes, counting galaxies in the universe is a daunting task. But modern robotic telescopes such as the Sloan Digital Sky Survey, or SDSS, and the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, are able to image many millions of galaxies automatically as they survey the sky. Machine vision can then sort millions of galaxies by their spin direction far faster than any person or group of people.
Comparing galaxies by spin direction, Shamir found that the number that spin clockwise is not equal to the number that spin counterclockwise. The difference is small, just over 2%, but given the large number of galaxies, the probability of such an asymmetry arising by chance is less than 1 in 4 billion, according to Shamir's research.
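The scale of that significance claim can be sanity-checked with a simple binomial calculation. A minimal sketch in Python, using illustrative counts rather than the study's actual tallies (roughly 200,000 galaxies with a roughly 2% excess in one spin direction):

```python
import math

def spin_asymmetry_pvalue(n_cw: int, n_ccw: int) -> float:
    """Two-sided p-value for the null hypothesis that a galaxy is equally
    likely to appear to spin clockwise or counterclockwise, using a normal
    approximation to the binomial distribution."""
    n = n_cw + n_ccw
    # Under the null, the standard deviation of (n_cw - n_ccw) is sqrt(n).
    z = abs(n_cw - n_ccw) / math.sqrt(n)
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# Illustrative counts only, not the study's actual numbers.
p = spin_asymmetry_pvalue(102_000, 98_000)
```

With counts of this size the p-value falls far below 1 in 4 billion, consistent with the article's point that a 2% excess over hundreds of thousands of galaxies is extremely unlikely to be a statistical fluke.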
The patterns span more than 4 billion light-years, but the asymmetry in that range is not uniform. The study found that the asymmetry grows stronger the more distant the galaxies are from Earth, which suggests that the early universe was more consistent and less chaotic than the current universe.
The patterns show not only that the universe is not symmetric, but also that the asymmetry changes in different parts of the universe, and that the differences exhibit a unique pattern of multipoles.
"If the universe has an axis, it is not a simple single axis like a merry-go-round," Shamir said. "It is a complex alignment of multiple axes that also have a certain drift."
The concept of cosmological multipoles is not new. Previous space-based observatories -- such as the Cosmic Background Explorer, or COBE, satellite; the Wilkinson Microwave Anisotropy Probe, or WMAP mission; and the Planck observatory -- showed that the cosmic microwave background, which is electromagnetic radiation from the very early universe, also exhibits multiple poles. But the measurement of the cosmic microwave background is sensitive to foreground contamination -- such as the obstruction of the Milky Way -- and cannot show how these poles changed over time. The asymmetry between spin directions of spiral galaxies is a measurement that is not sensitive to obstruction. What can obstruct galaxies spinning in one direction in a certain field will necessarily also obstruct galaxies spinning in the opposite way.
Read more at Science Daily
Cancer cells cause inflammation to protect themselves from viruses
Researchers at the Francis Crick Institute have uncovered how cancer cells protect themselves from viruses that are harmful to tumours but not to healthy cells. These findings could lead to improved viral treatments for the disease.
In their study, published in Nature Cell Biology, the researchers identified a mechanism that protects cancer cells from oncolytic viruses, which preferentially infect and kill cancer cells.
These viruses are sometimes used as a treatment to destroy cancer cells and stimulate an immune response against the tumour. However, they only work in a minority of patients, and the reasons why they are effective in some cases but not others are not yet fully understood.
The team examined the environment surrounding a tumour and how cancer cells interact with their neighbours, in particular, cancer-associated fibroblasts (CAFs), which researchers know play a significant role in cancer protection, growth and spread.
They found that when cancer cells are in direct contact with CAFs, this leads to inflammation that can alert the surrounding tissue, making it harder for viruses to invade and replicate within the cancer cells.
This protective inflammatory response occurs when cancer cells pass small amounts of cytoplasm, the fluid inside their cells, through to the CAFs. This triggers the fibroblasts to signal to nearby cells to release cytokines, molecules that cause inflammation.
Erik Sahai, paper author and group leader of the Tumour Cell Biology Laboratory at the Crick, says: "This process only occurs when cancer cells and fibroblasts are in direct contact with each other. In healthy tissue, this type of inflammatory response would only happen during injury, as there is usually a membrane keeping them apart.
"This is an excellent example of the way cancer hijacks our body's protective mechanisms for its own gain."
Importantly, when the researchers blocked the signalling pathway in cell cultures and in tumours grown in the laboratory, they found that the cancer cells became more sensitive to oncolytic viruses.
They hope these findings may, in the future, help to develop a treatment that could modulate the inflammation and so help oncolytic viruses to more effectively target cancer cells.
Emma Milford, co-lead author and PhD student in the Tumour Cell Biology Laboratory at the Crick, says: "If we can more fully understand how cancer cells protect themselves from oncolytic viruses and find effective ways to stop these protective mechanisms, these viruses could become a more powerful tool doctors can use to treat cancer. This research is an important, early step towards this."
Antonio Rullan, co-lead author and clinical research fellow in the Tumour Cell Biology Laboratory at the Crick adds: "These viruses prefer to target cancer cells over healthy cells, which has made them of interest for scientists over the last few decades. However, much more remains to be understood about how they interact with tumours and the immune system."
Read more at Science Daily
Monitoring environmental exposures in dogs could be early warning system for human health
Man's best friend may also be man's best bet for figuring out how environmental chemicals could impact our health. Researchers from North Carolina State University and Duke University's Nicholas School of the Environment used silicone dog tags as passive environmental samplers to collect information about everyday chemical exposures, and found that dogs could be an important sentinel species for the long-term effects of environmental chemicals.
"Silicone monitoring devices are still relatively new, but they represent an inexpensive and effective way to measure exposure to the chemicals we encounter in daily life -- from pesticides to flame retardants," says Catherine Wise, Ph.D. candidate at NC State and lead author of a paper describing the work. "And we know that many human diseases caused by environmental exposure are similar clinically and biologically to those found in dogs."
Wise and researchers from NC State and Duke recruited 30 dogs and their owners to wear silicone monitors for a five-day period in July 2018. Humans wore wristbands, while the dogs wore tags on their collars.
The researchers analyzed the wristbands and tags for exposures to chemicals within three classes of environmental toxicants that are often found in human blood and urine: pesticides, flame retardants, and phthalates, which are found in plastic food packaging and personal care products. They found high correlations between exposure levels for owners and their pets. Urinalysis also revealed the presence of organophosphate esters (found in some flame retardants) in both owners and dogs.
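The owner-pet comparison comes down to correlating paired measurements of the same chemical in the two silicone samplers. A minimal sketch of that calculation, with hypothetical exposure values standing in for the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired exposure measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired chemical levels measured in owner wristbands and
# dog collar tags -- illustrative values only, not the study's data.
owner = [12.1, 30.5, 8.2, 45.0, 22.3, 15.7]
dog = [10.8, 28.0, 9.5, 41.2, 25.1, 14.0]
r = pearson_r(owner, dog)
```

A value of r near 1 for a given chemical would indicate the kind of high owner-pet correlation the researchers reported.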
"What was remarkable about these results were the similar patterns of exposure between people and their pets," says Heather Stapleton, Ronie-Richelle Garcia-Johnson Distinguished Professor, director of the Duke Environmental Analysis Laboratory at the Nicholas School and co-author of the research. "It's quite clear that the home environment contributes strongly to our daily exposure to chemical contaminants."
However, while dogs and humans may share similar exposures, the health effects do not follow similar timelines -- a fact that could aid researchers in teasing out relationships between chemical exposure and human health. "Dogs are special when it comes to linking exposures and disease outcomes because effects that may take decades to show up in humans can occur in one to two years in a dog," Wise says.
"Humans spend incredible amounts of time with their dogs -- that's especially true right now," says Matthew Breen, Oscar J. Fletcher Distinguished Professor of Comparative Oncology Genetics at NC State and corresponding author of the paper. "If we develop ways to correlate dog disease with their exposures over time, it may give human-health professionals the opportunity to mitigate these exposures for both species. Dogs are a powerful biological sentinel species for human disease."
From Science Daily
May 31, 2020
Dinosaur-dooming asteroid struck Earth at 'deadliest possible' angle
Dinosaurs and asteroid illustration
The simulations show that the asteroid hit Earth at an angle of about 60 degrees, which maximised the amount of climate-changing gases thrust into the upper atmosphere.
Such a strike likely unleashed billions of tonnes of sulphur, blocking the sun and triggering the nuclear winter that killed the dinosaurs and 75 per cent of life on Earth 66 million years ago.
Drawn from a combination of 3D numerical impact simulations and geophysical data from the site of the impact, the new models are the first ever fully 3D simulations to reproduce the whole event -- from the initial impact to the moment the final crater, now known as Chicxulub, was formed.
The simulations were performed on the Science and Technology Facilities Council (STFC) DiRAC High Performance Computing Facility.
Lead researcher Professor Gareth Collins, of Imperial's Department of Earth Science and Engineering, said: "For the dinosaurs, the worst-case scenario is exactly what happened. The asteroid strike unleashed an incredible amount of climate-changing gases into the atmosphere, triggering a chain of events that led to the extinction of the dinosaurs. This was likely worsened by the fact that it struck at one of the deadliest possible angles.
"Our simulations provide compelling evidence that the asteroid struck at a steep angle, perhaps 60 degrees above the horizon, and approached its target from the north-east. We know that this was among the worst-case scenarios for the lethality on impact, because it put more hazardous debris into the upper atmosphere and scattered it everywhere -- the very thing that led to a nuclear winter."
The results are published today in Nature Communications.
Crater creation
The upper layers of earth around the Chicxulub crater in present-day Mexico contain high amounts of water as well as porous carbonate and evaporite rocks. When heated and disturbed by the impact, these rocks would have decomposed, flinging vast amounts of carbon dioxide, sulphur and water vapour into the atmosphere.
The sulphur would have been particularly hazardous as it rapidly forms aerosols -- tiny particles that would have blocked the sun's rays, halting photosynthesis in plants and rapidly cooling the climate. This eventually contributed to the mass extinction event that killed 75 per cent of life on Earth.
The team of researchers from Imperial, the University of Freiburg, and The University of Texas at Austin, examined the shape and subsurface structure of the crater using geophysical data to feed into the simulations that helped diagnose the impact angle and direction. Their analysis was also informed by recent results from drilling into the 200 km-wide crater, which brought up rocks containing evidence of the extreme forces generated by the impact.
Peak performance
Pivotal to diagnosing the angle and direction of impact was the relationship between the centre of the crater, the centre of the peak ring -- a ring of mountains made of heavily fractured rock inside the crater rim -- and the centre of dense uplifted mantle rocks, some 30 km beneath the crater.
At Chicxulub, these centres are aligned in a southwest-northeast direction, with the crater centre in between the peak-ring and mantle-uplift centres. The team's 3D Chicxulub crater simulations at an angle of 60 degrees reproduced these observations almost exactly.
The simulations reconstructed the crater formation in unprecedented detail, giving us more clues as to how the largest craters on Earth are formed. Previous fully 3D simulations of the Chicxulub impact covered only the early stages of impact, which include the production of a deep bowl-shaped hole in the crust known as the transient crater and the expulsion of rocks, water and sediment into the atmosphere.
These simulations are the first to continue beyond this intermediate point in the formation of the crater and reproduce the final stage of the crater's formation, in which the transient crater collapses to form the final structure. This allowed the researchers to make the first comparison between 3D Chicxulub crater simulations and the present-day structure of the crater revealed by geophysical data.
Co-author Dr Auriol Rae of the University of Freiburg said: "Despite being buried beneath nearly a kilometre of sedimentary rocks, it is remarkable that geophysical data reveals so much about the crater structure -- enough to describe the direction and angle of the impact."
The researchers say that while the study has given us important insights into the dinosaur-dooming impact, it also helps us understand how large craters on other planets form.
Co-author Dr Thomas Davison, also of Imperial's Department of Earth Science and Engineering, said: "Large craters like Chicxulub are formed in a matter of minutes, and involve a spectacular rebound of rock beneath the crater. Our findings could help advance our understanding of how this rebound can be used to diagnose details of the impacting asteroid."
Read more at Science Daily
Rarely heard narwhal vocalizations
Narwhal couple
Narwhals are difficult to study because they are notoriously shy and skittish and spend most of their time deep in the freezing Arctic Ocean. They tend to summer in glacial fjords around Greenland and Canada, but scientists often have trouble getting close enough to study them. Glacier fronts can be dangerous and hard to access, and the animals tend to swim off when approached by motorized boats.
But Inuit hunters familiar with the mysterious cetaceans can get closer to the animals without disturbing them. In July 2019, researchers accompanied several Inuit whale hunting expeditions in Northwest Greenland to study the narwhals that summer there in more detail.
Using underwater microphones attached to small boats, the researchers captured narwhal social calls and foraging sounds, getting as close as 25 meters (82 feet) to the elusive cetaceans.
The recordings help the researchers establish a baseline of the kinds of sounds that permeate the narwhals' pristine habitat. In combination with sightings, they also show that narwhals in this area get closer to glacier ice than previously thought, and that the animals do forage for food in summer, contrary to some previous findings.
"Their world is the soundscape of this glacial fjord," said Evgeny Podolskiy, a geophysicist at Hokkaido University in Sapporo, Japan, and lead author of a new study detailing the findings in AGU's Journal of Geophysical Research: Oceans. "There are many questions we can answer by listening to glacier fjords in general."
Getting close
Podolskiy and his colleagues had been working in Greenland fjords for several years, studying the sounds made by melting glaciers. Coincidentally, a population of narwhals summers in the fjords they were studying, and Podolskiy saw an opportunity to study the wily creatures.
"I realized working in the area and not paying attention to the elephant in the room -- the key endemic legendary Arctic unicorn just flowing around our glacier -- was a big mistake," he said.
The researchers tagged along on several Inuit hunting expeditions departing from the village of Qaanaaq, placing microphones underwater and recording the baseline sounds of the fjord.
They captured several types of sounds made by narwhals, including social calls, or whistles, and clicks used for echolocation, the biological sonar used by dolphins, bats, some whales and other animals to navigate and find food.
The closer narwhals get to their food, the faster they click, until the noise becomes a buzz not unlike that of a chainsaw. This terminal buzz helps the narwhals pinpoint the location of their prey.
"If you approach and target these fast fish, you better know precisely where they are; you need to gather this information more frequently," Podolskiy said.
Few studies have documented narwhals feeding in the summertime. Because the microphones picked up terminal buzz, a sound associated with finding food, the new study provides further evidence that narwhals do forage in summer.
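This click-rate signature lends itself to a simple detector: flag stretches of a recording where the inter-click interval (ICI) collapses below a threshold. The threshold and minimum run length below are illustrative assumptions, not parameters from the study:

```python
def detect_terminal_buzz(click_times, buzz_ici=0.05, min_clicks=10):
    """Return (start, end) times of runs of echolocation clicks whose
    inter-click interval drops below buzz_ici seconds, a simple proxy
    for a foraging 'terminal buzz'. Parameters are illustrative."""
    runs = []
    start = None  # index of the first click in the current fast run
    for i in range(1, len(click_times)):
        ici = click_times[i] - click_times[i - 1]
        if ici < buzz_ici:
            if start is None:
                start = i - 1
        else:
            # Run ended; keep it only if it contained enough clicks.
            if start is not None and i - start >= min_clicks:
                runs.append((click_times[start], click_times[i - 1]))
            start = None
    if start is not None and len(click_times) - start >= min_clicks:
        runs.append((click_times[start], click_times[-1]))
    return runs

# Regular clicks every 0.2 s, then a rapid buzz of clicks every 0.01 s.
clicks = [0.2 * i for i in range(5)] + [0.8 + 0.01 * k for k in range(1, 21)]
buzzes = detect_terminal_buzz(clicks)
```

Real detections would work on click times extracted from hydrophone audio, but the thresholding idea is the same: the faster the clicking, the closer the animal is to its prey.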
Surprisingly, the researchers found narwhals come roughly within 1 kilometer (half a mile) of a glacier calving front, despite the fact that these areas are some of the noisiest places in the ocean and calving icebergs can be dangerous.
Read more at Science Daily