Dec 10, 2022

Hubble detects ghostly glow surrounding our solar system

Imagine walking into a room at night, turning out all the lights and closing the shades. Yet an eerie glow comes from the walls, ceiling, and floor. The faint light is barely enough to see your hands in front of your face, but it persists. Sounds like a scene out of "Ghost Hunters"? No, for astronomers this is the real deal. But looking for something that's close to nothing is not easy.

One possible explanation is that a shell of dust envelops our solar system all the way out to Pluto, and is reflecting sunlight. Seeing airborne dust caught in sunbeams is no surprise when cleaning the house. But this must have a more exotic origin. Because the glow is so smoothly distributed, the likely source is innumerable comets -- free-flying dusty snowballs of ice. They fall in toward the Sun from all different directions, spewing out an exhaust of dust as the ices sublimate due to heat from the Sun. If real, this would be a newly discovered architectural element of the solar system. It has remained invisible until very imaginative and curious astronomers, and the power of Hubble, came along.

Aside from a tapestry of glittering stars, and the glow of the waxing and waning Moon, the nighttime sky looks inky black to the casual observer. But how dark is dark?

To find out, astronomers sorted through 200,000 archival images from NASA's Hubble Space Telescope and made tens of thousands of measurements on them, looking for any residual background glow in the sky, in an ambitious project called SKYSURF. This would be any leftover light after subtracting the glow from planets, stars, galaxies, and from dust in the plane of our solar system (called zodiacal light).

When researchers completed this inventory, they found an exceedingly tiny excess of light, equivalent to the steady glow of 10 fireflies spread across the entire sky. That's like turning out all the lights in a shuttered room and still finding an eerie glow coming from the walls, ceiling, and floor.

The researchers say that one possible explanation for this residual glow is that our inner solar system contains a tenuous sphere of dust from comets that are falling into the solar system from all directions, and that the glow is sunlight reflecting off this dust. If real, this dust shell could be a new addition to the known architecture of the solar system.

This idea is bolstered by the fact that in 2021 another team of astronomers used data from NASA's New Horizons spacecraft to also measure the sky background. New Horizons flew by Pluto in 2015, and a small Kuiper belt object in 2019, and is now heading into interstellar space. The New Horizons measurements were done at a distance of 4 billion to 5 billion miles from the Sun. This is well outside the realm of the planets and asteroids, where there is no contamination from interplanetary dust.

New Horizons detected something a bit fainter, apparently coming from a more distant source than the glow Hubble detected. The source of the background light seen by New Horizons also remains unexplained. There are numerous theories, ranging from the decay of dark matter to a huge unseen population of remote galaxies.

"If our analysis is correct there's another dust component between us and the distance where New Horizons made measurements. That means this is some kind of extra light coming from inside our solar system," said Tim Carleton, of Arizona State University (ASU).

"Because our measurement of residual light is higher than New Horizons we think it is a local phenomenon that is not from far outside the solar system. It may be a new element to the contents of the solar system that has been hypothesized but not quantitatively measured until now," said Carleton.

Hubble veteran astronomer Rogier Windhorst, also of ASU, first got the idea to assemble Hubble data to go looking for any "ghost light." "More than 95% of the photons in the images from Hubble's archive come from distances less than 3 billion miles from Earth. Since Hubble's very early days, most Hubble users have discarded these sky-photons, as they are interested in the faint discrete objects in Hubble's images such as stars and galaxies," said Windhorst. "But these sky-photons contain important information which can be extracted thanks to Hubble's unique ability to measure faint brightness levels to high precision over its three decades of lifetime."

Read more at Science Daily

How intensive agriculture turned a wild plant into a pervasive weed

New research in Science shows how the rise of modern agriculture has turned a North American native plant, common waterhemp, into a problematic agricultural weed.

An international team led by researchers at the University of British Columbia (UBC) compared 187 waterhemp samples from modern farms and neighbouring wetlands with more than 100 historical samples dating as far back as 1820 that had been stored in museums across North America. Much like the sequencing of ancient human and Neanderthal remains has resolved key mysteries about human history, studying the plant's genetic makeup over the last two centuries allowed the researchers to watch evolution in action across changing environments.

"The genetic variants that help the plant do well in modern agricultural settings have risen to high frequencies remarkably quickly since agricultural intensification in the 1960s," said first author Dr. Julia Kreiner, a postdoctoral researcher in UBC's Department of Botany.

The researchers discovered hundreds of genes across the weed's genome that aid its success on farms, with mutations in genes related to drought tolerance, rapid growth and resistance to herbicides appearing frequently. "The types of changes we're imposing in agricultural environments are so strong that they have consequences in neighbouring habitats that we'd usually think were natural," said Dr. Kreiner.

The findings could inform conservation efforts to preserve natural areas in landscapes dominated by agriculture. Reducing gene flow out of agricultural sites and choosing more isolated natural populations for protection could help limit the evolutionary influence of farms.

Common waterhemp is native to North America and was not always a problematic plant. Yet in recent years, the weed has become nearly impossible to eradicate from farms thanks to genetic adaptations including herbicide resistance.

"While waterhemp typically grows near lakes and streams, the genetic shifts that we're seeing allow the plant to survive on drier land and to grow quickly to outcompete crops," said co-author Dr. Sarah Otto, Killam University Professor at the University of British Columbia. "Waterhemp has basically evolved to become more of a weed given how strongly it's been selected to thrive alongside human agricultural activities."

Notably, five out of seven herbicide-resistant mutations found in current samples were absent from the historical samples. "Modern farms impose a strong filter determining which plant species and mutations can persist through time," said Dr. Kreiner. "In sequencing the plant's genes, herbicides stood out as one of the strongest agricultural filters determining which plants survive and which die."

Waterhemp plants carrying any of the seven herbicide-resistant mutations have produced, on average, 1.2 times as many surviving offspring per year since 1960 as plants that don't have the mutations.
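To see why a per-year advantage of that size adds up so quickly (the "remarkably quickly" rise Dr. Kreiner describes above), here is a back-of-the-envelope sketch of simple haploid selection. The starting frequency and the 62-year span are illustrative assumptions, not values from the study.

```python
# Back-of-the-envelope sketch (illustrative, not from the study): how fast a
# variant with a 1.2x annual fitness advantage rises in frequency under a
# simple haploid selection model. The starting frequency is an assumption.

def frequency_trajectory(p0=0.001, fitness_ratio=1.2, years=62):
    """Variant frequency each year, given w_resistant / w_susceptible = fitness_ratio."""
    freqs = [p0]
    p = p0
    for _ in range(years):
        p = fitness_ratio * p / (fitness_ratio * p + (1.0 - p))
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    for year, p in zip(range(1960, 2023), frequency_trajectory()):
        if year % 10 == 0:
            print(f"{year}: {p:.3f}")
```

Under these assumptions, a variant starting at 0.1% frequency passes 50% around the late 1990s and exceeds 98% by the early 2020s, which is the kind of compounding a 1.2-fold annual advantage implies.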

Herbicide-resistant mutations were also discovered in natural habitats, albeit at a lower frequency, which raises questions about the costs of these adaptations for plant life in non-agricultural settings. "In the absence of herbicide applications, being resistant can actually be costly to a plant, so the changes happening on the farms are impacting the fitness of the plant in the wild," said Dr. Kreiner.

Agricultural practices have also reshaped where particular genetic variants are found across the landscape. Over the last 60 years, a weedy southwestern variety has progressively spread eastward across North America, spreading its genes into local populations as a result of its competitive edge in agricultural contexts.

"These results highlight the enormous potential of studying historical genomes to understand plant adaptation on short timescales," says Dr. Stephen Wright, co-author and Professor in Ecology and Evolutionary Biology at the University of Toronto. "Expanding this research across scales and species will broaden our understanding of how farming and climate change are driving rapid plant evolution."

Read more at Science Daily

Dec 9, 2022

Your dog's behavior is a product of their genes

From the excitable sheep dog to the aloof Shiba Inu, and all breeds in between, dogs have unique and diverse behavioral traits. By analyzing DNA samples from over 200 dog breeds along with nearly 50,000 pet-owner surveys, researchers at the National Institutes of Health have pinpointed many of the genes associated with the behaviors of specific dog breeds. Their work appears December 8th in the journal Cell.

"The largest, most successful genetic experiment that humans have ever done is the creation of 350 dog breeds," says senior author Elaine Ostrander, founder of the Dog Genome Project at the National Human Genome Research Institute. "We needed dogs to herd, we needed them to guard, we needed them to help us hunt, and our survival was intimately dependent on that."

"Identification of the genes behind dog behavior has historically been challenging," says first author Emily Dutrow, postdoctoral fellow at the National Human Genome Research Institute. "The inherent complexity of canine population dynamics features varying degrees of selective pressure for aesthetic and morphological traits, some of which may be linked to behavioral traits, so pinpointing the genetics of canine behavior can be complicated."

Kennel clubs generally categorize dog breeds on the basis of the jobs they are best suited for. To find the genetic drivers of the behavioral tendencies that make dogs good at specific tasks, the researchers gathered whole-genome data from over 4,000 purebred, mixed-breed, and semi-feral dogs, as well as wild canids. By applying computational tools originally developed for studying single cells rather than whole organisms, Dutrow and team identified 10 major genetic lineages among hundreds of dog breeds, solely on the basis of DNA data. The researchers found that each lineage corresponded to a specific category of breeds historically used for tasks such as hunting by scent versus sight or herding versus protecting livestock, indicating that common sets of genes were responsible for behaviors among dog breeds well suited for similar tasks.

To understand the nature of these behaviors, the researchers turned to individual dog experts: pet owners. Using 46,000 behavioral assessment surveys sent to owners of purebred dogs, the researchers identified unique sets of behavioral tendencies among the 10 lineages of dogs. For example, behaviors associated with increased prey drive were associated with the terrier lineage, which contains breeds historically used for catching and killing prey.

"Having established significant behavioral tendencies correlated with the major canine lineages, we then identified genetic drivers of these behaviors by performing a genome-wide association study on the DNA samples," says Dutrow. "We were particularly interested in livestock-herding dogs, who display one of the most easily defined breed-typical behaviors, characterized by an instinctive herding drive coupled with unique motor patterns that move herds in complex ways."

The researchers' search led them to specific genes involved in brain wiring in herding dogs. They found that variants near genes involved in axon guidance, a process that shapes brain circuitry, appeared highly enriched. They also saw an enrichment for genes important for development of areas of the brain involved in social cognition and learned fear responses.

"When you get a certain input or stimulus, the degree to which that creates a reaction in different parts of the brain shapes how we behave," says Ostrander. "So, if nerves within and between brain regions don't communicate in specific ways, then the behavior doesn't happen, and this is where axon-guidance genes come in to play."

Genetic variants associated with sheep dogs are often located near genes involved in ephrin signaling, an axon-guidance process that is involved in brain development and is implicated in behavior in other species, including humans. For example, the sheep-dog-associated gene EPHA5 has also been associated with human attention-deficit hyperactivity disorder (ADHD) and anxiety-like behaviors in other mammals. These findings could help us understand the high energy requirement of sheep dogs and their hyperfocus when given a task.

"The same pathways involved in human neurodiversity are implicated in behavioral differences among dog lineages, indicating that the same genetic toolkit may be used in humans and dogs alike," says Dutrow.

Read more at Science Daily

Florida mints radiated as peninsula sank and resurfaced during ice ages

During the ice ages of the Pleistocene, the Florida peninsula regularly grew to twice its current size as glaciers expanded near the planet's poles, only to be reduced to a series of islands as melting ice returned to the sea during warm periods. All told, glaciers advanced and retreated 17 times, and according to a new study, the resulting environmental instability may have contributed to the incredible plant diversity found in Florida today.

Researchers from the Florida Museum of Natural History came to this conclusion while studying scrub mints, a unique group of plants endemic to the southeastern U.S. that radiated during the ice ages. Cyclically marooned on isolated islands as seas rose and fell, mint populations became genetically isolated and diverged over time, generating multiple new species.

Despite their long winning streak, scrub mints now face the threat of extinction head-on due to human-mediated habitat destruction and climate change. Of the 13 species endemic to Florida, eight are listed as either state or federally endangered.

"The most remarkable thing about this group is how rare they are," said lead author Andre Naranjo, who completed the study while working as a doctoral student in the University of Florida's department of biology. "One species, Conradina etonia, only grows within a 30-square mile area, and if you were to pave over that, that'd be it. The species would be gone."

Other scrub mints share a similar pattern. Lakela's mint (Dicerandra immaculata) has been reduced to just a single population, most of which is located on privately owned lands. Scrub balm (Dicerandra frutescens) is restricted to Highlands County, Florida, where it grows along an elevated ridge increasingly checkered by development. Further west, the Apalachicola rosemary (Conradina glabra) has been reduced to just ten known locations in a single region of the Florida panhandle.

The current plight of scrub mints and other groups like them tells only part of the story of how they were pushed so far to the sidelines. To get the full picture, says Naranjo, you have to take a much longer view of how these species have fared over time, one that covers several million years of their natural history.

Naranjo wanted to know where scrub mints came from, when they originated and how they ended up with their current distributions. Building on his previous work, Naranjo used a new method for reconstructing historical environments developed by co-author Ryan Folk, a former postdoctoral associate at the Florida Museum of Natural History who joined the faculty at Mississippi State University in 2019.

By inputting information about the plants' current habitats, such as temperature, precipitation and soil type, Naranjo could then trace their geographic history. The result was a detailed map that pinpointed the most suitable environments for each of the 22 species, half of which are endemic to Florida.

Nearly four million years ago, a scrub mint species growing in the Apalachicola River Basin of Florida shed a fine layer of seeds on the sandy soil below. Each no larger than a coarse grain of sand, the seeds don't often travel far, which researchers suspect is a major cause of their rarity. But they're also equipped with an opportunistic deployment mechanism that occasionally enables long-distance transport.

Scrub mint seed coats are perforated with glands that exude small amounts of viscous oil, Naranjo said. "When it rains really hard, the water forms little streams that drain the sand away from the scrub habitats. If the seeds land in these streams, their mucilaginous coating reduces friction, which helps carry them a few meters away from the parent plant."

Whether all at once or in stages, a seed or seeds from the original population in Apalachicola somehow traveled potentially hundreds of miles east, ultimately leading to the establishment of mints in the Altamaha region of Georgia. Seeds from these newly established populations may have floated down rivers and streams into peninsular Florida, where they washed ashore on the ancient Lake Wales and Atlantic Coastal Ridges.

Throughout the ice ages, the population that remained in the west radiated into the false rosemaries (genus Conradina), while those in the east gave rise to the genus Dicerandra. The groups occasionally crisscrossed in a complex migration pattern that resulted in distant relatives sharing the same environment, a family reunion on a millennial timescale.

Scrub mints are merely one example of unique Florida plants that originated in the peninsula and are now imperiled due to habitat destruction, fire suppression and competition with invasive species. The Lake Wales Ridge, where many scrub mints evolved and which functioned as an ark for plants and animals retreating from rising seas, has lost more than 85% of its natural habitat to urbanization and agriculture.

Florida is also part of the North American Coastal Plain, which was listed in 2015 as one of Earth's 36 biodiversity hotspots, defined as a region harboring at least 1,500 endemic species and which has lost 70% or more of its original vegetation.

"We need to start thinking about conservation in a broader context than just individual species," Naranjo said, emphasizing the focus instead should be shifted toward preserving entire regions and environments. "Our hope is that this research can be used as a rubric to study other endemic plants and further refine a comprehensive conservation approach for those areas most at risk of being developed."

Read more at Science Daily

Smilodon's sabre teeth

A team of researchers led by Narimane Chatar, a doctoral student at the EDDyLab of the University of Liège (Belgium), has tested the biting efficiency of Smilodon, an extinct sabre-toothed carnivore closely related to extant felines. Using high-precision 3D scans and simulation methods, the team has just revealed how these animals managed to bite despite the impressive length of their teeth.

Ancient carnivorous mammals developed a wide range of skull and tooth shapes throughout their evolution. Few, however, have matched the iconic sabre-toothed felid Smilodon. Other groups of mammals, such as the now-extinct nimravids, evolved a similar morphology, with some species bearing sabre teeth and others much shorter canines, similar to those of the lions, tigers, caracals and domestic cats we know today. This phenomenon of similar morphologies appearing in different groups of organisms is known as convergent evolution, and felids and nimravids are a striking example of it. As there are no modern equivalents of animals with such sabre-shaped teeth, the hunting method of Smilodon and similar species has remained obscure and hotly debated. It was first suggested that all sabre-toothed species hunted in the same way, regardless of the length of their canines, a hypothesis that is now controversial. So the question remained: how did these varied 'sabre-toothed cats' hunt?

"The enormous canines of the extinct sabre-toothed cat Smilodon imply that this animal had to open its jaw extremely wide, 110° according to some authors, in order to use them effectively," explains Prof. Valentin Fischer, director of the EDDyLab at ULiège. "However, the mechanical feasibility and efficiency of Smilodon and its relatives to bite at such a large angle is unknown, leaving a gap in our understanding of this very fundamental question about sabre-toothed predators." Using high-precision 3D scanners and analytical methods derived from engineering, an international team of Belgian and North American scientists has just revealed how these animals probably used their impressive weapons.

Narimane Chatar, a PhD student at the EDDyLab of the University of Liège and lead author of the study, collected a large amount of three-dimensional data. She first scanned and modelled the skulls, mandibles and muscles of numerous extinct and extant species of felids and nimravids. "Each species was analysed in several scenarios: a bite was simulated on each tooth at three different biting angles: 30°, as commonly seen in extant felids, but also larger angles (60° and 90°). In total, we carried out 1,074 bite simulations to cover all the possibilities," explains Narimane Chatar. To do this, the young researcher used the finite element method. "This is an exciting application of the finite element approach, which allows palaeontologists to modify and computationally simulate different bite angles and to subject skull models to virtual stresses without damaging the precious fossil specimens," says Prof. Jack Tseng, Professor and Curator of Palaeontology at the University of California, Berkeley, and co-author of the study. "Our comprehensive analyses provide the most detailed insight to date into the diversity and nuances of sabre tooth bite mechanics."

One of the results obtained by the team is the understanding of the distribution of stress (pressure) on the mandible during biting. This stress shows a continuum across the animals analysed, with the highest values measured in species with the shortest upper canines and the lowest stress values measured in the most extreme sabre-toothed species. The researchers also noted that stress decreased with increasing bite angle, but only in sabre-toothed species. However, the way in which these animals transmitted force to the bite point and the deformation of the mandible resulting from the bite were remarkably similar across the dataset, indicating comparable effectiveness regardless of canine length.

Read more at Science Daily

Patient's own immune cells effective as living medicine for melanoma

A patient's own immune cells, multiplied into an army of billions of immune cells in a lab, can be used as a living medicine against metastatic melanoma, an aggressive form of skin cancer, as the TIL trial has shown. The TIL trial is the world's first comparative phase 3 trial looking into the effect of T cell therapy in melanoma, and in solid tumors in general. Now that the results have come in, the Dutch National Health Care Institute will assess whether TIL therapy could become a standard treatment, meaning that it would be covered by basic health insurance. The results were published in The New England Journal of Medicine (NEJM) on December 8. The trial was headed by the Netherlands Cancer Institute in collaboration with the National Center for Cancer Immune Therapy in Copenhagen.

Powerful immunotherapy for metastatic melanoma

Medical oncologist John Haanen from the Netherlands Cancer Institute, who is leading the TIL trial, is very happy with the results: "Remember: these are patients with metastatic melanoma. Ten years ago, melanoma was so deadly that I would be seeing an entirely new patient population every year. Now I've been seeing some patients for ten years. This is largely due to the discovery of immunotherapy, which has revolutionized treatment for melanomas. But we still find that about half of people diagnosed with metastatic melanoma lose their lives within five years, so we're still not where we want to be -- not by a long shot. The TIL trial has shown that cell therapy using the patient's own immune cells is an extremely powerful immunotherapy for metastatic melanoma, and that this therapy still offers a high chance of improvement, even if other immunotherapies fail."

World's first phase 3 study of T cell therapy for melanoma

Melanoma is an aggressive form of skin cancer with a high rate of occurrence. Ten years ago, a diagnosis of metastatic melanoma would almost certainly lead to death within the same year. In early clinical trials, cell therapy using the patient's own T cells as a "living drug" showed promising results. However, a comparative phase 3 trial would be necessary to include TIL therapy in the arsenal of regular treatments, and no such trial had ever been conducted. Medical oncologist John Haanen from the Netherlands Cancer Institute decided to take on this task by initiating an international trial in 2014: the TIL trial, which compared TIL therapy to standard immunotherapy with the checkpoint inhibitor ipilimumab. The results of the TIL trial will now be presented at the annual conference of the European Society for Medical Oncology.

Metastases smaller in half of the patient group

In almost half (49%) of the patients with metastatic melanoma who received TIL therapy, the metastases had shrunk. In 20% of patients, the metastases had even disappeared completely. This also proved to be the case in patients who had already received another treatment prior to their trial participation. These percentages were significantly higher than those among the patient group receiving standard immunotherapy (ipilimumab). In the latter group, metastases had shrunk in 21% of patients and disappeared completely in 7%.

Progression-free survival after six months is 53%

The progression-free survival, which refers to the percentage of patients who do not experience disease progression after a specified time period, was 53% after six months for patients receiving TIL therapy, and 21% in the control group. At a median follow-up time of 33 months for all patients, the median progression-free survival of patients who had received TIL therapy was significantly better (7 months) than that of patients treated with ipilimumab (3 months).
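For readers unfamiliar with these endpoints, the sketch below shows how a progression-free survival curve and its median are computed, using a minimal hand-rolled Kaplan-Meier estimator. The durations and censoring flags are made-up placeholders, not data from the TIL trial.

```python
# Minimal Kaplan-Meier sketch (illustrative only; the durations below are
# made-up placeholders, not TIL-trial data) showing how median
# progression-free survival is read off the survival curve.
import numpy as np

def kaplan_meier(durations, events):
    """Return (event times, survival probabilities) for right-censored data."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=bool)
    times = np.unique(durations[events])           # distinct progression times
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)           # patients still followed at t
        d = np.sum((durations == t) & events)      # progressions exactly at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return times, np.array(surv)

def median_survival(times, surv):
    """First time the curve drops to 0.5 or below (None if it never does)."""
    below = np.where(surv <= 0.5)[0]
    return times[below[0]] if below.size else None

# Hypothetical months to progression (event=True) or censoring (event=False).
durations = [2, 3, 3, 5, 7, 8, 9, 12, 15, 20]
events    = [True, True, False, True, True, False, True, True, False, False]
t, s = kaplan_meier(durations, events)
print("median PFS (months):", median_survival(t, s))
```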

Quality of life: return to professional life

While assessing a treatment's efficacy, more clinical trials nowadays also consider the patients' quality of life. Patients treated with TIL scored better in this area than those treated with ipilimumab. This applied to their general physical and emotional functioning as well as symptoms like fatigue, pain, or insomnia. "We also looked at whether they could resume their careers and noticed that people were going back to work," says physician-scientist Maartje Rohaan, who coordinated the trial. "That's wonderful to see." The differences in quality of life between the TIL patients and the control group were still visible after 60 weeks. As an added bonus, TIL therapy is much more cost-effective than immunotherapy with ipilimumab.

Compared to checkpoint inhibitors

The trial compared TIL therapy to a different type of immunotherapy using the checkpoint inhibitor ipilimumab, a drug that reactivates the body's T cells that have been thwarted by the tumor so they can continue to kill the tumor cells. In 2014, when the TIL trial started, this was the only registered immunotherapy for patients with metastatic melanomas. Research leader Haanen: "We have to remember that this form of immunotherapy has also experienced a lot of development in recent years, with more and more research looking to find more effective treatments for metastatic melanoma, even for patients who have already received treatment without the desired effects. The results of the TIL trial are a good addition to this. We have shown that treatment using the patient's own T cells that have been multiplied outside the body can be very effective in patients with metastatic melanoma, even if previous systemic treatment failed."

Not an easy treatment

The TIL therapy itself, a one-time treatment, is not easy on the patient. All TIL patients experienced side effects to some degree, as did 96% of patients treated with ipilimumab. The side effects of TIL therapy are usually not caused by the T cells themselves, but rather by the chemotherapy pre-treatment, which is required to make room for the billions of T cells, and by the rapidly successive post-treatments with the growth factor interleukin-2, which ensures rapid growth of the T cells. This can lead to high fevers and chills. Haanen: "In the future we would like to find a way to avoid the use of high-dose interleukin-2 by developing a more precise form of the treatment, using a growth factor that causes fewer side effects."

What do these results mean for patients with metastatic melanomas?

Now that the phase III trial has concluded with positive results, the researchers want the treatment to be covered by basic health insurance, making it accessible to patients in the Netherlands. The Dutch National Health Care Institute (Zorginstituut Nederland) is currently assessing whether TIL therapy meets the requirements (in terms of science and clinical practice as well as cost-effectiveness) so it can be included as a standard treatment in the basic health insurance package.

Patients in the Netherlands can participate in the TIL trial until the end of 2022, through a referral by their practicing physician. Treatment as part of this trial will be covered by basic health insurance. Now that the TIL therapy is proven to be effective, patients will no longer be randomized, meaning that all patients automatically receive TIL therapy if they meet certain criteria.

EMA

One thing that makes T cell therapy unique is that this 'living medicine', the patient's own T cells, is produced at the Netherlands Cancer Institute itself, and not, as is often the case, at a pharmaceutical company. This is also known as 'academic pharma'. T cell therapies must be produced under extremely strict hygienic conditions. To facilitate this, the Netherlands Cancer Institute has set up a special Biotherapeutics Unit. In order to produce TIL for the European market following the results of the trial, the European Medicines Agency (EMA) must first give its approval. The way in which production could take place outside the Netherlands will also be examined.

Read more at Science Daily

Dec 8, 2022

Characterizing the earliest galaxies in the universe -- only 200 million years after the Big Bang

An international team of astrophysicists, including Prof. Rennan Barkana from the Sackler School of Physics and Astronomy at Tel Aviv University, has managed for the first time to statistically characterize the first galaxies in the Universe, which formed only 200 million years after the Big Bang. According to the groundbreaking results, the earliest galaxies were relatively small and dim. They were fainter than present-day galaxies, and likely processed only 5% or less of their gas into stars. Furthermore, the first galaxies did not emit radio waves at an intensity that was much higher than that of modern galaxies.

This new study, carried out together with the SARAS observation team, was led by the research group of Dr. Anastasia Fialkov from the University of Cambridge, England, a former PhD student of Prof. Barkana. The results of this innovative study were published in the journal Nature Astronomy.

"This is a very new field and a first-of-its-kind study," explains Prof. Barkana. "We are trying to understand the epoch of the first stars in the Universe, known as the 'cosmic dawn', about 200 million years after the Big Bang. The James Webb Space Telescope, for example, can't really see these stars. It might only detect a few particularly bright galaxies from a somewhat later period. Our goal is to probe the entire population of the first stars."

According to the standard picture, before stars began to fuse heavier elements inside their cores, our Universe was nothing but a cloud of hydrogen atoms from the Big Bang (other than some helium and a lot of dark matter). Today the Universe is also filled with hydrogen, but in the modern Universe it is mostly ionized due to radiation from stars.

"Hydrogen atoms naturally emit light at a wavelength of 21cm, which falls within the spectrum of radio waves," says Prof. Barkana. "Since stellar radiation affects the light emitted by hydrogen atoms, we use hydrogen as a detector in our search for the first stars: if we can detect the effect of stars on hydrogen, we will know when they were born, and in what types of galaxies. I was among the first theorists to develop this concept 20 years ago, and now observers are able to implement it in actual experiments. Teams of experimentalists all over the world are currently attempting to discover the 21cm signal from hydrogen in the early Universe."

One of these teams is EDGES, which uses a fairly small radio antenna that measures the average intensity on the entire sky of radio waves arriving from different periods of the cosmic dawn. In 2018, the EDGES team announced that it had found the 21cm signal from ancient hydrogen.

"There was a problem with their findings, however," says Prof. Barkana. "We could not be sure that the measured signal did indeed come from hydrogen in the early Universe. It could have been a fake signal produced by the electrical conductivity of the ground below the antenna. Therefore, we all waited for an independent measurement that would either confirm or refute these results. Last year astronomers in India carried out an experiment called SARAS, in which the antenna was made to float on a lake, a uniform surface of water that could not mimic the desired signal. According to the results of the new experiment, there was a 95% probability that EDGES did not in fact detect a real signal from the early Universe. SARAS found an upper limit for the genuine signal, implying that the signal from early hydrogen is likely significantly weaker than the one measured by EDGES. We modeled the SARAS result and worked out the implications for the first galaxies, i.e., what their properties were given the upper limit determined by SARAS. Now we can say for the first time that galaxies of certain types could not have existed at that early time."

Read more at Science Daily

NASA missions probe game-changing cosmic explosion

On Dec. 11, 2021, NASA's Neil Gehrels Swift Observatory and Fermi Gamma-ray Space Telescope detected a blast of high-energy light from the outskirts of a galaxy around 1 billion light-years away. The event has rattled scientists' understanding of gamma-ray bursts (GRBs), the most powerful events in the universe.

For the last few decades, astronomers have generally divided GRBs into two categories. Long bursts emit gamma rays for two seconds or more and originate from the formation of dense objects like black holes in the centers of massive collapsing stars. Short bursts emit gamma rays for less than two seconds and are caused by mergers of dense objects like neutron stars. Scientists sometimes observe short bursts with a following flare of visible and infrared light called a kilonova.

"This burst, named GRB 211211A, was paradigm-shifting as it is the first long-duration gamma-ray burst traced to a neutron star merger origin," said Jillian Rastinejad, a graduate student at Northwestern University in Evanston, Illinois, who led one team that studied the burst. "The high-energy burst lasted about a minute, and our follow-up observations led to the identification of a kilonova. This discovery has deep implications for how the universe's heavy elements came to be."

A classic short gamma-ray burst begins with two orbiting neutron stars, the crushed remnants of massive stars that exploded as supernovae. As the stars circle ever closer, they strip neutron-rich material from each other. They also generate gravitational waves, or ripples in space-time -- although none were detected from this event.

Eventually the neutron stars collide and merge, creating a cloud of hot debris emitting light across multiple wavelengths. Scientists hypothesize that jets of high-speed particles, launched by the merger, produce the initial gamma-ray flare before they collide with the wreckage. Heat generated by the radioactive decay of elements in the neutron-rich debris likely creates the kilonova's visible and infrared light. This decay results in the production of heavy elements like gold and platinum.

"Many years ago, Neil Gehrels, an astrophysicist and Swift's namesake, suggested that neutron star mergers could produce some long bursts," said Eleonora Troja, an astrophysicist at the University of Rome who led another team that studied the burst. "The kilonova we observed is the proof that connects mergers to these long-duration events, forcing us to rethink how black holes are formed."

Fermi and Swift detected the burst simultaneously, and Swift was able to rapidly identify its location in the constellation Boötes, enabling other facilities to quickly respond with follow-up observations. Their observations have provided the earliest look yet at the first stages of a kilonova.

Many research groups have delved into the observations collected by Swift, Fermi, the Hubble Space Telescope, and others. Some have suggested the burst's oddities could be explained by the merger of a neutron star with another massive object, like a black hole. The event was also relatively nearby, by gamma-ray burst standards, which may have allowed telescopes to catch the kilonova's fainter light. Perhaps some distant long bursts could also produce kilonovae, but we haven't been able to see them.

The light following the burst, called the afterglow emission, also exhibited unusual features. Fermi detected high-energy gamma rays starting 1.5 hours post-burst and lasting more than 2 hours. These gamma rays reached energies of up to 1 billion electron volts. (Visible light's energy measures between about 2 and 3 electron volts, for comparison.)
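Put as order-of-magnitude arithmetic, using only the figures quoted above:

```latex
\[
  \frac{E_{\gamma}}{E_{\mathrm{visible}}}
  \approx \frac{10^{9}\ \mathrm{eV}}{2\text{--}3\ \mathrm{eV}}
  \approx 3\text{--}5 \times 10^{8},
\]
```

that is, each of these afterglow gamma rays carries a few hundred million times the energy of a visible-light photon.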

"This is the first time we've seen such an excess of high-energy gamma rays in the afterglow of a merger event. Normally that emission decreases over time," said Alessio Mei, a doctoral candidate at the Gran Sasso Science Institute in L'Aquila, Italy, who led a group that studied the data. "It's possible these high-energy gamma rays come from collisions between visible light from the kilonova and electrons in particle jets. The jets could be weakening ones from the original explosion or new ones powered by the resulting black hole or magnetar."

Scientists think neutron star mergers are a major source of the universe's heavy elements. They based their estimates on the rate of short bursts thought to occur across the cosmos. Now they'll need to factor long bursts into their calculations as well.

A team led by Benjamin Gompertz, an astrophysicist at the University of Birmingham in the United Kingdom, looked at the entire high-energy light curve, or the evolution of the event's brightness over time. The scientists noted features that might provide a key for identifying similar incidents -- long bursts from mergers -- in the future, even ones that are dimmer or more distant. The more astronomers can find, the more they can refine their understanding of this new class of phenomena.

On Dec. 7, 2022, papers led by Rastinejad, Troja, and Mei were published in the scientific journal Nature, and a paper led by Gompertz was published in Nature Astronomy.

"This result underscores the importance of our missions working together and with others to provide multiwavelength follow up of these kinds of phenomenon," said Regina Caputo, Swift project scientist, at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Similar coordinated efforts have hinted that some supernovae might produce short bursts, but this event is the final nail in the coffin for the simple dichotomy we've used for years. You never know when you might find something surprising."

Read more at Science Daily

Ancient stone tools from China provide earliest evidence of rice harvesting

A new Dartmouth-led study analyzing stone tools from southern China provides the earliest evidence of rice harvesting, dating to as early as 10,000 years ago. The researchers identified two methods of harvesting rice, which helped initiate rice domestication. The results are published in PLOS ONE.

Wild rice differs from domesticated rice in that it naturally sheds its ripe seeds, shattering them to the ground as they mature, whereas the seeds of cultivated rice stay on the plant.

Harvesting rice would have required some sort of tool. By harvesting with tools, early rice cultivators were selecting for the seeds that stay on the plant, so the proportion of seeds that remain gradually increased, resulting in domestication.

"For quite a long time, one of the puzzles has been that harvesting tools have not been found in southern China from the early Neolithic period or New Stone Age (10,000 -- 7,000 Before Present) -- the time period when we know rice began to be domesticated," says lead author Jiajing Wang, an assistant professor of anthropology at Dartmouth. "However, when archaeologists were working at several early Neolithic sites in the Lower Yangtze River Valley, they found a lot of small pieces of stone, which had sharp edges that could have been used for harvesting plants."

"Our hypothesis was that maybe some of those small stone pieces were rice harvesting tools, which is what our results show."

In the Lower Yangtze River Valley, the two earliest Neolithic culture groups were the Shangshan and Kuahuqiao.

The researchers examined 52 flaked stone tools from the Shangshan and Hehuashan sites, the latter of which was occupied by Shangshan and Kuahuqiao cultures.

The stone flakes are rough in appearance and not finely made, but they have sharp edges. The flaked tools are small enough to be held in one hand, measuring approximately 1.7 inches in width and length on average.

To determine if the stone flakes were used for harvesting rice, the team conducted use-wear and phytolith residue analyses.

For the use-wear analysis, micro-scratches on the tools' surfaces were examined under a microscope to determine how the stones were used. The results showed that 30 flakes have use-wear patterns similar to those produced by harvesting siliceous (silica-rich) plants, likely including rice.

Fine striations, high polish, and rounded edges distinguished the tools that were used for cutting plants from those that were used for processing hard materials, cutting animal tissues, and scraping wood.

Through the phytolith residue analysis, the researchers analyzed the microscopic residue left on the stone flakes, known as "phytoliths," the silica skeletons of plants. They found that 28 of the tools contained rice phytoliths.

"What's interesting about rice phytoliths is that rice husk and leaves produce different kinds of phytolith, which enabled us to determine how the rice was harvested," says Wang.

The findings from the use-wear and phytolith analyses illustrated that two types of rice harvesting methods were used -- "finger-knife" and "sickle" techniques. Both methods are still used in Asia today.

The stone flakes from the early phase (10,000 -- 8,200 BP) showed that rice was largely harvested using the finger-knife method, in which the panicles at the top of the rice plant are reaped. The tools used for finger-knife harvesting had striations that were mainly perpendicular or diagonal to the edge of the stone flake, suggesting a cutting or scraping motion, and they contained phytoliths from rice seeds or husks, indicating that the rice was harvested from the top of the plant.

"A rice plant contains numerous panicles that mature at different times, so the finger-knife harvesting technique is especially useful when rice domestication was in the early stage," says Wang.

The stone flakes from the later phase (8,000 -- 7,000 BP), however, showed more evidence of sickle harvesting, in which the lower part of the plant was harvested. These tools had striations that were predominantly parallel to the tool's edge, reflecting that a slicing motion had likely been used.

"Sickle harvesting was more widely used when rice became more domesticated, and more ripe seeds stayed on the plant," says Wang. "Since you are harvesting the entire plant at the same time, the rice leaves and stems could also be used for fuel, building materials, and other purposes, making this a much more effective harvesting method."

Read more at Science Daily

Dinosaurs were on the up before asteroid downfall

The findings provide the strongest evidence yet that the dinosaurs were struck down in their prime and were not in decline at the time the asteroid hit.

Scientists have long debated why non-bird dinosaurs, including Tyrannosaurus rex and Triceratops, became extinct -- whereas mammals and other species such as turtles and crocodiles survived.

The study, led by an international team of palaeontologists and ecologists, analysed 1,600 fossil records from North America. Researchers modelled the food chains and ecological habitats of land-living and freshwater animals during the last several million years of the Cretaceous, and the first few million years of the Paleogene period, after the asteroid hit.

Paleontologists have known for some time that many small mammals lived alongside the dinosaurs. But this research reveals that these mammals were diversifying their diets, adapting to their environments and becoming more important components of ecosystems as the Cretaceous unfolded. Meanwhile, the dinosaurs were entrenched in stable niches to which they were supremely well adapted.

Mammals didn't just take advantage of the dinosaurs dying, experts say. They were creating their own advantages by diversifying -- occupying new ecological niches, evolving more varied diets and behaviours, and rapidly adapting to small shifts in climate. These behaviours probably helped them to survive, as they were better able than the dinosaurs to cope with the radical and abrupt destruction caused by the asteroid.

First author, Jorge García-Girón, Geography Research Unit, University of Oulu, Finland and Department of Biodiversity and Environmental Management, University of León, Spain, said: "Our study provides a compelling picture of the ecological structure, food webs, and niches of the last dinosaur-dominated ecosystems of the Cretaceous period and the first mammal-dominated ecosystems after the asteroid hit. This helps us to understand one of the age-old mysteries of palaeontology: why all the non-bird dinosaurs died, but birds and mammals endured."

Co-lead author, Alfio Alessandro Chiarenza, Department of Ecology and Animal Biology, University of Vigo, Spain, said: "It seems that the stable ecology of the last dinosaurs actually hindered their survival in the wake of the asteroid impact, which abruptly changed the ecological rules of the time. Conversely, some birds, mammals, crocodilians, and turtles had previously been better adapted to unstable and rapid shifts in their environments, which might have made them better able to survive when things suddenly went bad when the asteroid hit."

Senior author, Professor Steve Brusatte, Personal Chair of Palaeontology and Evolution, School of GeoSciences, University of Edinburgh, said: "Dinosaurs were going strong, with stable ecosystems, right until the asteroid suddenly killed them off. Meanwhile, mammals were diversifying their diets, ecologies and behaviours while dinosaurs were still alive. So it wasn't simply that mammals took advantage of the dinosaurs dying, but they were making their own advantages, which ecologically preadapted them to survive the extinction and move into niches left vacant by the dead dinosaurs."

Read more at Science Daily

World's simplest animals get their place in the tree of life

The group with the world's simplest animals -- tiny blob-like life forms with no organs and just a few cell types -- finally has a fleshed-out family tree built by a research group led by the American Museum of Natural History, St. Francis College, and the University of Veterinary Medicine Hannover. The study comes more than 100 years after the discovery of these ameboid animals called placozoans and represents the first -- and potentially only -- time in the 21st century that a backbone Linnaean taxonomy is constructed for an entire animal phylum. Published today in the journal Frontiers in Ecology and Evolution, the research is based on genetic makeup -- the presence and absence of genes -- rather than outward physical appearance, which is traditionally used to classify organisms.

"Placozoans look like miniscule, shape-shifting disks -- basically, they are the pancake of the animal world," said the study's co-lead author Michael Tessler, a research associate at the Museum and an assistant professor at St. Francis College. "For a taxonomist looking through a microscope, even a powerful one, there are almost no characters to compare and differentiate them. Yet, despite most of them looking almost exactly the same, we know that on the genetic level, there are very distinct lineages."

The first placozoan species was described in 1883, and Placozoa remained a "phylum of one" until DNA-based research in the last 20 years revealed that it contains multiple lineages. Most placozoans, which generally live in tropical and subtropical waters across the globe, are about the size of a grain of sand, with hair-like structures that allow them to move. "After decades of turmoil, this most exciting phylum has finally gotten the attention it deserves," said senior author Bernd Schierwater, a professor at the University of Veterinary Medicine Hannover.

"We wanted to know the relationships within this ancient group of animals and where it sits in the tree of life," said co-lead author Johannes Neumann, a recent doctoral graduate from the Museum's Richard Gilder Graduate School. "People have been speculating about that for decades, but now, by looking at differences among placozoans on the molecular level, we're able to paint a clear picture of how these animals are related to one another."

The researchers used a method called molecular morphology -- using differences in DNA sequences and other molecular characters -- to make classifications. In doing so, they established a backbone taxonomy: two new classes, four orders, three families, one genus, and one species. Their research also suggests that placozoans are most closely related to cnidarians (a group of aquatic animals including jellyfish, corals, and sea anemones) and bilaterians (animals that have a left and right side, like insects and humans).
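As a toy illustration of the general idea behind molecular morphology -- classifying taxa by which genes are present or absent rather than by visible anatomy -- the sketch below clusters a small presence/absence matrix. It is not the authors' pipeline; the taxa, gene families and values are invented.

```python
# Toy illustration (not the authors' pipeline): grouping taxa by gene
# presence/absence, one simple flavour of "molecular morphology".
# The matrix and taxon labels below are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

taxa = ["placozoan_A", "placozoan_B", "placozoan_C", "cnidarian", "bilaterian"]

# Rows = taxa, columns = gene families (1 = present, 0 = absent); made-up values.
presence_absence = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1, 1],
])

# Jaccard distance ignores shared absences, a common choice for binary characters.
distances = pdist(presence_absence.astype(bool), metric="jaccard")
tree = linkage(distances, method="average")      # UPGMA-style hierarchical clustering

# Report the leaf order of the resulting dendrogram as text instead of plotting.
print(dendrogram(tree, labels=taxa, no_plot=True)["ivl"])
```

Real analyses add model-based phylogenetics and many more characters, but the core move -- turning gene presence/absence into distances between lineages -- is the same.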

"I personally collected placozoans on six continents for almost 10 years, did lab work and bioinformatic work on them, but it took decades of effort from a great number of colleagues to finally get to this exciting first classification for this cryptic phylum," Neumann said. "This is why we call our newly described species Cladtertia collaboinventa, which means 'discovered in collaboration.'"

The authors suggest that this study could serve as a template to revisit systematics of other organisms that look very similar, such as bacteria, fungi, protists, and parasites. Tessler also is the lead author of a second paper out now in Frontiers in Ecology and Evolution that makes the case for molecular morphology in other groups of organisms that have few distinguishable visual features but are genetically diverse.

"Taxonomic blank slates are problematic. Without names, communication is hampered, and other scientific progress is slowed," said Tessler. "We suggest that the morphology of molecules, such as proteins -- which have distinctive structures -- should not be considered as anything less than traditional morphology."

Read more at Science Daily

Dec 7, 2022

Meteorites plus gamma rays could have given Earth the building blocks for life

Even as detailed images of distant galaxies from the James Webb Space Telescope show us more of the greater universe, scientists still disagree about how life began here on Earth. One hypothesis is that meteorites delivered amino acids -- life's building blocks -- to our planet. Now, researchers reporting in ACS Central Science have experimentally shown that amino acids could have formed in these early meteorites from reactions driven by gamma rays produced inside the space rocks.

Ever since Earth was a newly formed, sterile planet, meteorites have been hurtling through the atmosphere at high speeds toward its surface. If the initial space debris had included carbonaceous chondrites -- a class of meteorite whose members contain significant amounts of water and small molecules, such as amino acids -- then it could have contributed to the evolution of life on Earth. However, the source of amino acids in meteorites has been hard to pinpoint. In previous lab experiments, Yoko Kebukawa and colleagues showed that reactions between simple molecules, such as ammonia and formaldehyde, can synthesize amino acids and other macromolecules, but liquid water and heat are required. Radioactive elements, such as aluminum-26 (26Al) -- which is known to have existed in early carbonaceous chondrites -- release gamma rays, a form of high-energy radiation, when they decay. This process could have provided the heat needed to make biomolecules. So, Kebukawa and a new team wanted to see whether radiation could have contributed to the formation of amino acids in early meteorites.

The researchers dissolved formaldehyde and ammonia in water, sealed the solution in glass tubes and then irradiated the tubes with high-energy gamma rays produced from the decay of cobalt-60. They found that the production of α-amino acids, such as alanine, glycine, α-aminobutyric acid and glutamic acid, and β-amino acids, such as β-alanine and β-aminoisobutyric acid, rose in the irradiated solutions as the total gamma-ray dose increased. Based on these results and the expected gamma ray dose from the decay of 26Al in meteorites, the researchers estimated that it would have taken between 1,000 and 100,000 years to produce the amount of alanine and β-alanine found in the Murchison meteorite, which landed in Australia in 1969. This study provides evidence that gamma ray-catalyzed reactions can produce amino acids, possibly contributing to the origin of life on Earth, the researchers say.
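The year estimate follows a simple scaling: the amino acid abundance to be explained, divided by the production rate, which is the yield per unit of radiation dose times the dose rate from 26Al decay. The sketch below shows only that arithmetic; every number in it is a hypothetical placeholder, not a value from the paper.

```python
# Schematic of the scaling behind the time estimate above; all numbers are
# hypothetical placeholders, NOT values from the paper.
target_abundance = 1.0      # amount of alanine to reproduce (arbitrary units)
yield_per_dose = 1.0e-3     # amino acid produced per unit gamma-ray dose (arb. units / Gy)
dose_rate_from_al26 = 0.02  # annual dose delivered by 26Al decay (Gy / year)

years_needed = target_abundance / (yield_per_dose * dose_rate_from_al26)
print(f"~{years_needed:,.0f} years")  # the paper's analogous estimate: 1,000-100,000 years
```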

Read more at Science Daily

Jawbone may represent earliest presence of humans in Europe

For over a century, one of the earliest human fossils ever discovered in Spain has long been considered a Neanderthal. However, a new analysis from an international research team, including scientists at Binghamton University, State University of New York, dismantles this century-long interpretation, demonstrating that this fossil is not a Neanderthal; rather, it may actually represent the earliest presence of Homo sapiens ever documented in Europe.

In 1887, a fossil mandible was discovered during quarrying activities in the town of Banyoles, Spain, and has been studied by different researchers over the past century. The Banyoles fossil likely dates to between approximately 45,000-65,000 years ago, at a time when Europe was occupied by Neanderthals, and most researchers have generally linked it to this species.

"The mandible has been studied throughout the past century and was long considered to be a Neanderthal based on its age and location, and the fact that it lacks one of the diagnostic features of Homo sapiens: a chin," said Binghamton University graduate student Brian Keeling.

The new study relied on virtual techniques, including CT scanning of the original fossil. This was used to virtually reconstruct missing parts of the fossil, and then to generate a 3D model to be analyzed on the computer.

The authors studied the expressions of distinct features on the mandible from Banyoles that are different between our own species, Homo sapiens, and the Neanderthals, our closest evolutionary cousins.

The authors applied a methodology known as "three-dimensional geometric morphometrics" that analyzes the geometric properties of the bone's shape. This makes it possible to directly compare the overall shape of Banyoles to Neanderthals and H. sapiens.
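As a rough illustration of what geometric morphometrics does in practice, the sketch below superimposes two hypothetical 3D landmark configurations with a Procrustes fit, which strips out differences in position, size and orientation so that only shape remains. The landmark coordinates are invented, and this is not the authors' actual pipeline.

```python
# Minimal sketch of the general idea behind 3D geometric morphometrics,
# not the study's analysis code. Landmark coordinates are random stand-ins
# for points digitized on each mandible.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
n_landmarks = 15

# Hypothetical 3D landmark configurations (n_landmarks x 3) for two specimens
reference = rng.normal(size=(n_landmarks, 3))                           # e.g. a comparative mean shape
specimen = reference + rng.normal(scale=0.05, size=(n_landmarks, 3))    # e.g. the fossil being compared

# Procrustes superimposition removes position, scale and rotation,
# leaving only shape differences; 'disparity' is the residual shape distance.
ref_aligned, spec_aligned, disparity = procrustes(reference, specimen)
print(f"Procrustes shape distance: {disparity:.4f}")
```

In a full analysis, many specimens would be superimposed together and the resulting shape coordinates compared, for example with a principal component analysis, to see whether a fossil clusters with Neanderthals or with Homo sapiens.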

"Our results found something quite surprising -- Banyoles shared no distinct Neanderthal traits and did not overlap with Neanderthals in its overall shape," said Keeling.

While Banyoles seemed to fit better with Homo sapiens in both the expression of its individual features and its overall shape, many of these features are also shared with earlier human species, complicating an immediate assignment to Homo sapiens. In addition, Banyoles lacks a chin, one of the most characteristic features of Homo sapiens mandibles.

"We were confronted with results that were telling us Banyoles is not a Neanderthal, but the fact that it does not have a chin made us think twice about assigning it to Homo sapiens," said Rolf Quam, professor of anthropology at Binghamton University, State University of New York. "The presence of a chin has long been considered a hallmark of our own species."

Given this, reaching a scientific consensus on what species Banyoles represents is a challenge. The authors also compared Banyoles with an early Homo sapiens mandible from a site called Peştera cu Oase in Romania. Unlike Banyoles, this mandible shows a full chin along with some Neanderthal features, and an ancient DNA analysis has revealed this individual had a Neanderthal ancestor four to six generations ago. Since the Banyoles mandible shared no distinct features with Neanderthals, the researchers ruled out the possibility of mixture between Neanderthals and H. sapiens to explain its anatomy.

The authors point out that some of the earliest Homo sapiens fossils from Africa, predating Banyoles by more than 100,000 years, do show less pronounced chins than in living populations.

Thus, the scientists propose two possibilities for what the Banyoles mandible may represent: a member of a previously unknown population of Homo sapiens that coexisted with the Neanderthals, or a hybrid between a member of this Homo sapiens group and an unidentified non-Neanderthal human species. However, the only human fossils recovered from Europe that date to the time of Banyoles are Neanderthals, making the latter hypothesis less likely.

"If Banyoles is really a member of our species, this prehistoric human would represent the earliest H. sapiens ever documented in Europe," said Keeling.

Whichever species this mandible belongs to, Banyoles is clearly not a Neanderthal at a time when Neanderthals were believed to be the sole occupants of Europe.

The authors conclude that "the present situation makes Banyoles a prime candidate for ancient DNA or proteomic analyses, which may shed additional light on its taxonomic affinities."

Read more at Science Daily

Ankylosaurs battled each other as much as they fought off T. rex

Scientists from the Royal Ontario Museum (ROM), Royal BC Museum, and North Carolina Museum of Natural Sciences have found new evidence for how armoured dinosaurs used their iconic tail clubs. The exceptional fossil of the ankylosaur Zuul crurivastator has spikes along its flanks that were broken and re-healed while the dinosaur was alive -- injuries that the scientists think were caused from a strike by another Zuul's massive tail club. This suggests ankylosaurs had complex behaviour, possibly battling for social and territorial dominance or even engaging in a "rutting" season for mates. The research is published in the journal Biology Letters.

The 76-million-year-old, plant-eating dinosaur, part of the Royal Ontario Museum's vertebrate fossil collection, is named after the fictional monster 'Zuul' from the 1984 movie Ghostbusters. Initially the skull and tail had been freed from the surrounding rock, but the body was still encased in 35,000 pounds of sandstone. After years of work, the body was revealed to have preserved most of the skin and bony armour across the entire back and flanks, giving a remarkable view of what the dinosaur looked like in life. Zuul's body was covered in bony plates of different shapes and sizes, and the ones along its sides were particularly large and spiky. Interestingly, the scientists noticed that a number of spikes near the hips on both sides of the body are missing their tips, and that the bone and horny sheath had healed into a blunter shape. Because of where these injuries sit on the body, they probably weren't caused by an attacking predator like a tyrannosaur; their pattern is more consistent with some form of ritualized combat, or jousting, with tail clubs.

"I've been interested in how ankylosaurs used their tail clubs for years and this is a really exciting new piece of the puzzle," says lead author Dr. Victoria Arbour, Curator of Palaeontology at the Royal BC Museum and former NSERC postdoctoral fellow at the Royal Ontario Museum. "We know that ankylosaurs could use their tail clubs to deliver very strong blows to an opponent, but most people thought they were using their tail clubs to fight predators. Instead, ankylosaurs like Zuul may have been fighting each other."

Zuul's tail is about three metres (10 feet) long with sharp spikes running along its sides. The back half of the tail was stiff and the tip was encased in huge bony blobs, creating a formidable sledgehammer-like weapon. Zuul crurivastator means 'Zuul, the destroyer of shins', a nod to the idea that tail clubs were used to smash the legs of bipedal tyrannosaurs. The new research doesn't refute the idea that tail clubs could be used in self-defense against predators, but shows that tail clubs would also have functioned for within-species combat -- a factor that more likely drove their evolution. Today, specialized animal weapons like the antlers of deer or the horns of antelopes have usually evolved to be used mostly for fighting members of the same species during battles for mates or territory.

Years ago, Arbour had put forward the idea that ankylosaurs may have clubbed each other in the flanks, and that broken and healed ribs might provide evidence to support this idea. But ankylosaur skeletons are extremely rare, making it hard to test this hypothesis. The completely preserved back and tail of Zuul, including skin, allowed for an unusual glimpse into the lives of these incredible armoured dinosaurs.

"The fact that the skin and armour are preserved in place is like a snapshot of how Zuul looked when it was alive. And the injuries Zuul sustained during its lifetime tell us about how it may have behaved and interacted with other animals in its ancient environment," said Dr. David Evans, Temerty Chair and Curator of Vertebrate Palaeontology at the Royal Ontario Museum.

Read more at Science Daily

Built to last: The perovskite solar cells tough enough to match mighty silicon

Researchers at Oxford University and Exciton Science have demonstrated a new way to create stable perovskite solar cells, with fewer defects and the potential to finally rival silicon's durability.

By removing the solvent dimethyl sulfoxide and introducing dimethylammonium chloride as a crystallisation agent, the researchers were able to better control the intermediate phases of the perovskite crystallisation process, leading to thin films of greater quality, with fewer defects and enhanced stability.

Large groups of up to 138 sample devices were then subjected to a rigorous accelerated ageing and testing process at high temperatures and in real-world conditions.

Formamidinium-caesium perovskite solar cells created using the new synthesis process significantly outperformed the control group and demonstrated resistance to thermal, humidity and light degradation.

This is a strong step toward matching commercial silicon's stability and makes perovskite-silicon tandem devices a much more realistic candidate for becoming the dominant next-generation solar cell.

Led by Professor Henry Snaith (Oxford University) and Professor Udo Bach (Monash University), the work has been published in the journal Nature Materials.

Oxford University PhD student Philippe Holzhey, a Marie Curie Early Stage Researcher and joint first author on the work, said: "It's really important that people start shifting to realise there is no value in performance if it's not a stable performance.

"If the device lasts for a day or a week or something, there's not so much value in it. It has to last for years."

During testing, the best device operated above the T80 threshold for over 1,400 hours under simulated sunlight at 65°C. T80, a common benchmark in the field, is the time it takes for a solar cell's efficiency to fall to 80% of its initial value.

Beyond 1,600 hours of accelerated ageing, the control device fabricated using the conventional dimethyl sulfoxide approach stopped functioning, while devices fabricated with the new design retained 70% of their original efficiency.

The same degradation study was performed on a group of devices at the very high temperature of 85°C, with the new cells again outperforming the control group.

Extrapolating from the data, the researchers calculated that the new cells degrade roughly 1.7 times faster for every 10°C increase in the temperature they are exposed to, close to the roughly two-fold increase per 10°C expected of commercial silicon devices.
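The sketch below shows the arithmetic behind that kind of extrapolation, using the reported factor of roughly 1.7 per 10°C and the roughly 1,400-hour T80 figure at 65°C; the 35°C target temperature is an arbitrary assumption for illustration.

```python
# Rough sketch of the temperature-acceleration arithmetic described above. The 1.7x
# per 10 °C factor and the ~1,400 h T80 at 65 °C come from the text; the 35 °C
# target temperature is an arbitrary assumption.
aging_factor_per_10c = 1.7
t80_at_65c_hours = 1400.0

def extrapolate_t80(t80_ref_h: float, temp_ref_c: float, temp_target_c: float,
                    factor_per_10c: float) -> float:
    """Scale a T80 lifetime between temperatures, assuming the degradation rate
    changes by `factor_per_10c` for every 10 °C; lifetime scales inversely with rate."""
    rate_ratio = factor_per_10c ** ((temp_target_c - temp_ref_c) / 10.0)
    return t80_ref_h / rate_ratio

print(f"Estimated T80 at 35 °C: "
      f"{extrapolate_t80(t80_at_65c_hours, 65.0, 35.0, aging_factor_per_10c):,.0f} hours")
```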

Dr David McMeekin, the corresponding and joint first author on the paper, was an Australian Centre for Advanced Photovoltaics (ACAP) Postdoctoral Fellow at Monash University and is now a Marie Skłodowska-Curie Postdoctoral Fellow at Oxford University.

He said: "I think what separates us from other studies is that we've done a lot of accelerated aging. We've aged the cells at 65°C and 85°C under the whole light spectrum."

The number of devices used in the study is also significant, with many other perovskite research projects limited to just one or two prototypes.

"Most studies only show one curve without any standard deviation or any kind of statistical approach to determine if this design is more stable than the other," David added.

The researchers hope their work will encourage a greater focus on the intermediate phase of perovskite crystallisation as an important factor in achieving greater stability and commercial viability.

This work was supported by the Stanford Linear Accelerator Center (SLAC) and the National Renewable Energy Laboratory (NREL).

Background: About Perovskites

Artificially synthesised in laboratory conditions, semiconductor thin films made up of perovskite compounds are far cheaper to make than silicon solar cells, with greater flexibility and a tunable band gap.

They emerged unexpectedly in the last decade and have reached impressive power-conversion efficiencies of over 25%.

However, too much focus has been placed on creating the most efficient perovskite solar cell, rather than resolving the fundamental problems inhibiting the material from being used in widespread commercial applications.

Compared to silicon, perovskites can degrade rapidly in real world conditions, with exposure to heat and moisture causing damage and negatively impacting device performance.

Read more at Science Daily

Dec 6, 2022

Peekaboo! Tiny, hidden galaxy provides a peek into the past

Like someone living apart from modern conveniences, a dwarf galaxy in the local universe looks like it belongs in another time -- the early eras of galaxy evolution itself. NASA's Hubble Space Telescope has helped confirm an example of what astronomers call an "extremely metal-poor" galaxy, which has very few of the chemical elements or "metals" that stars produce and enrich their galaxies with over time. Most intriguingly, its stars indicate that it is also one of the youngest galaxies ever detected in the local universe. Despite the galaxy being nearly hidden behind the glare of a foreground star -- leading to its nickname, Peekaboo -- Hubble was able to pick out individual stars for analysis. The discovery provides the tantalizing opportunity to study a relic of the past in fine detail, like shaking hands with an ancient ancestor.

Peeking out from behind the glare of a bright foreground star, astronomers have uncovered the most extraordinary example yet of a nearby galaxy with characteristics that are more like galaxies in the distant, early universe. Only 1,200 light-years across, the tiny galaxy HIPASS J1131-31 has been nicknamed "Peekaboo" because of its emergence in the past 50-100 years from behind the fast-moving star that was obscuring astronomers' ability to detect it.

The discovery is a combined effort of telescopes on the ground and in space, including confirmation by NASA's Hubble Space Telescope. Together the research shows tantalizing evidence that the Peekaboo Galaxy is the nearest example of the galaxy formation processes that commonly took place not long after the big bang, 13.8 billion years ago.

"Uncovering the Peekaboo Galaxy is like discovering a direct window into the past, allowing us to study its extreme environment and stars at a level of detail that is inaccessible in the distant, early universe," said astronomer Gagandeep Anand of the Space Telescope Science Institute in Baltimore, Maryland, co-author of the new study on Peekaboo's intriguing properties.

Astronomers describe galaxies like Peekaboo as "extremely metal-poor" (XMP). In astronomy, "metals" refers to all elements heavier than hydrogen and helium. The very early universe was almost entirely made up of primordial hydrogen and helium, elements forged in the big bang. Heavier elements were forged by stars over the course of cosmic history, building up to the generally metal-rich universe humans find ourselves in today. Life as we know it is made from heavier element "building blocks" like carbon, oxygen, iron, and calcium.

While the universe's earliest galaxies were XMP by default, similarly metal-poor galaxies have also been found in the local universe. Peekaboo caught astronomers' attention because not only is it an XMP galaxy without a substantial older stellar population, but at only 20 million light-years from Earth it lies at no more than half the distance of previously known young XMP galaxies.

Peekaboo was first detected as a region of cold hydrogen more than 20 years ago with the Australian Parkes radio telescope Murriyang, in the HI Parkes All Sky Survey, by Professor Bärbel Koribalski, an astronomer at Australia's national science agency CSIRO and a co-author of the latest research study on Peekaboo's metallicity. Far-ultraviolet observations by NASA's space-based Galaxy Evolution Explorer mission showed it to be a compact blue dwarf galaxy.

"At first we did not realize how special this little galaxy is," Koribalski said of Peekaboo. "Now with combined data from the Hubble Space Telescope, the Southern African Large Telescope (SALT), and others, we know that the Peekaboo Galaxy is one of the most metal-poor galaxies ever detected."

NASA's Hubble Space Telescope was able to resolve about 60 stars in the tiny galaxy, almost all of which appear to be a few billion years old or younger. Measurements of Peekaboo's metallicity by SALT completed the picture. Together, these findings underline the major difference between Peekaboo and other galaxies in the local universe, which typically have ancient stars that are many billions of years old. Peekaboo's stars indicate that it is one of the youngest and least-chemically-enriched galaxies ever detected in the local universe. This is very unusual, as the local universe has had about 13 billion years of cosmic history to develop.

However, the picture is still a shallow one, Anand says, as the Hubble observations were made as part of a "snapshot" survey program called The Every Known Nearby Galaxy Survey -- an effort to get Hubble data of as many neighboring galaxies as possible. The research team plans to use Hubble and the James Webb Space Telescope to do further research on Peekaboo, to learn more about its stellar populations and their metal-makeup.

"Due to Peekaboo's proximity to us, we can conduct detailed observations, opening up possibilities of seeing an environment resembling the early universe in unprecedented detail," Anand said.

Read more at Science Daily

Bee study: Both habitat quality and biodiversity can impact bee health

Efforts to promote the future health of both wild bees and managed honeybee colonies need to consider specific habitat needs, such as the density of wildflowers.

At the same time, improving other habitat measures -- such as the amount of natural habitat surrounding croplands -- may increase bee diversity while having mixed effects on overall bee health.

Those are the key findings from a new analysis of several thousand Michigan bees from 60 species. The study looked at how the quality and quantity of bee habitat surrounding small farm fields affects the levels of common viral pathogens in bee communities.

"Future land management needs to consider that broadly improving habitat quality to benefit pollinator community diversity may not necessarily also benefit pollinator health," said University of Michigan biologist Michelle Fearon, lead author of a study published online Nov. 30 in the journal Ecology. The other authors are from U-M and the University of Washington.

"To promote pollinator health, we need to focus on improving specific habitat quality features that are linked to reducing pathogen prevalence, such as planting greater density of flowers," said Fearon, a postdoctoral fellow in the Department of Ecology and Evolutionary Biology.

Bees are indispensable pollinators, supporting both agricultural productivity and the diversity of flowering plants worldwide. But in recent decades, both native bees and managed honeybee colonies have seen population declines, which are blamed on multiple interacting factors including habitat loss, parasites and disease, and pesticide use.

As part of the work for her U-M doctoral dissertation, Fearon and her colleagues netted and trapped more than 4,900 bees at 14 winter squash farms in southeastern Michigan, where both honeybees and wild native bees pollinate the squash flowers.

The bees were analyzed for the presence of three common viral pathogens. Consistently, lower virus levels were strongly linked to greater species richness, or biodiversity, among local bee communities. The number of bee species at each farm ranged from seven to 49.

Those findings, published in February 2021 in Ecology, provided support for what ecologists call the dilution effect. This controversial hypothesis posits that increased biodiversity can decrease, or dilute, infectious disease transmission.

But an unresolved question lingered after that study was published: Was biodiversity truly responsible for the observed reductions in viral levels, or was there something about habitat quality that drove changes in both bee biodiversity and viral pathogen prevalence?

"Many studies have shown that high-biodiversity communities are ones with low rates of infectious disease. But we also know that better habitat quality often leads to greater biodiversity," said study co-author Chelsea Wood of the University of Washington, a former Michigan Fellow at U-M.

"So which factor is actually driving down disease risk: biodiversity or habitat? Do high-biodiversity communities dilute disease prevalence? Or do communities in high-quality habitat have healthier hosts, who are better at resisting infection? Our data show that some apparent 'dilution effects' could actually have nothing at all to do with biodiversity."

Previous studies have demonstrated that habitat factors can directly influence both an animal's nutritional status and the strength of its immune system, which in turn can influence its susceptibility to pathogens. For example, Eurasian red squirrels living in fragmented habitats host greater gastrointestinal parasite burdens than those living in continuous forest habitats.

To get to the root cause of their Michigan bee observations, Fearon and her co-authors generated models allowing them to rigorously disentangle the effects of habitat characteristics on patterns of pathogen prevalence.

They reexamined the previously collected bee data and added new information about local and landscape-level habitat. For the study, the researchers defined high-quality bee habitat as areas that provide sufficient quantity and diversity of floral resources (both pollen and nectar) to sustain good pollinator nutrition.

At the local level, floral richness (meaning flower species diversity) and floral density were the key indicators of high-quality habitat. At the landscape level, proportion of "natural areas" surrounding farm fields and landscape richness (meaning areas with more land cover types) were the key characteristics. Natural areas included deciduous, evergreen and mixed forest; herbaceous and woody wetland; shrubland; grass pasture; and wildflower meadow.

The researchers found that habitat can have both positive and negative impacts on pathogen levels in bee communities. This is evidence for what the authors called a habitat-disease relationship, where habitat quality has a direct impact on bee health.

In general, a higher proportion of natural area and a greater richness of land cover types were associated with increased viral prevalence, while greater floral density was associated with reduced viral prevalence.
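As a purely illustrative sketch of how habitat metrics and bee diversity can be related to infection status in a single model, the snippet below fits a logistic regression to simulated data with effects pointing in the directions just described. The variable names and data are invented, and the study's actual statistical models, which are not detailed in this summary, were likely more elaborate (for example, separating direct and indirect pathways).

```python
# Illustrative sketch only, not the study's analysis. Column names and data are
# invented; effect directions in the simulation mirror those reported above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # individual bees

df = pd.DataFrame({
    "floral_density":   rng.uniform(0, 1, n),    # local habitat quality
    "natural_area":     rng.uniform(0, 1, n),    # landscape-level habitat
    "species_richness": rng.integers(7, 50, n),  # bee species at the site
})

# Simulate infection status: more natural area raises, floral density and
# species richness lower, the odds of infection
linear_predictor = (0.5 + 1.0 * df.natural_area
                    - 1.2 * df.floral_density
                    - 0.03 * df.species_richness)
df["infected"] = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

model = smf.logit("infected ~ floral_density + natural_area + species_richness",
                  data=df).fit(disp=0)
print(model.params)
```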

"Areas with greater floral abundance could provide better pollen and nectar resources for bees to help them resist or fight off infection," said study co-author Elizabeth Tibbetts, a professor in the U-M Department of Ecology and Evolutionary Biology who was Fearon's dissertation adviser. "Additionally, greater floral abundance may reduce the effective foraging density of pollinators and result in reduced pathogen transmission."

More natural area was also associated with higher bee species diversity, which in turn contributed to reduced, or diluted, viral prevalence.

"Most importantly, we found that greater habitat quality in the surrounding landscape was a key driver of the dilution effect that we previously observed," Fearon said. "This provides evidence for a habitat-driven biodiversity-disease relationship, where habitat quality indirectly impacts bee health by altering bee species diversity.

"But different habitat-quality metrics impacted patterns of viral prevalence both positively and negatively. This means that habitat quality has the potential to decrease or increase viral prevalence in pollinators depending on the relative strengths of the habitat-disease and biodiversity-disease pathways.

"So, it is important to consider how improving specific habitat quality measures may impact bee diversity and bee health in different ways."

Read more at Science Daily

Parkinson's medication improved blood pressure in teens with Type 1 diabetes

Teens with Type 1 diabetes (T1D) who took bromocriptine, a medication used to treat Parkinson's disease and Type 2 diabetes, had lower blood pressure and less stiff arteries after one month of treatment compared to those who did not take the medicine, according to a small study published today in Hypertension, an American Heart Association journal.

High blood pressure and stiff arteries contribute to the development of heart disease. People with T1D, a lifelong, chronic condition in which the pancreas doesn't produce enough insulin to control blood sugar levels, have a higher risk of developing heart disease than those without the condition. Those diagnosed with T1D as children have even higher risks for heart disease than people diagnosed in adulthood. Therefore, researchers are interested in ways to slow down the onset of vascular disease in children with T1D.

"We know that abnormalities in the large vessels around the heart, the aorta and its primary branches, begin to develop in early childhood in people with Type 1 diabetes," said lead study author Michal Schäfer, Ph.D., a researcher and fourth-year medical student at the University of Colorado School of Medicine in Aurora, Colorado. "We found that bromocriptine has the potential to slow down the development of those abnormalities and decrease the risk for cardiovascular disease in this population."

The multidisciplinary team conducted this study to examine the impact of bromocriptine on blood pressure and aortic stiffness compared with a placebo in adolescents with Type 1 diabetes. Bromocriptine is in a class of medications called dopamine receptor agonists. It increases levels of dopamine, a chemical in the brain, which leads to an increase in the body's responsiveness to insulin, called insulin sensitivity. Bromocriptine has been FDA-approved since 2009 to treat adults with Type 2 diabetes due to its effect on insulin sensitivity.

The study included 34 participants (13 male, 21 female) ages 12 to 21 years who had been diagnosed with Type 1 diabetes for at least a year, and their HbA1c (glycosylated hemoglobin -- a measure of blood glucose) was 12% or less. An HbA1c level of 6.5% or higher indicates diabetes. They were randomly divided into two groups of 17, with one group receiving bromocriptine quick-release therapy and the other receiving a placebo once daily. The study was conducted in two phases. Participants took the first treatment or placebo for 4 weeks in phase 1, then had no treatment for a 4-week "wash-out" period, followed by phase 2 with 4 weeks on the opposite treatment. In this "crossover" design, each participant served as their own control for comparison.

Blood pressure and aortic stiffness were measured at the start of the study and at the end of each phase. Aortic stiffness was determined by assessing the large arteries with cardiovascular magnetic resonance imaging (MRI) and a measurement of the velocity of the blood pressure pulse called pulse wave velocity.
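For readers unfamiliar with these metrics, the sketch below spells out the standard definitions that MRI-based stiffness assessments rest on: pulse wave velocity is the distance the pressure pulse travels divided by its transit time, and distensibility is the relative change in aortic cross-sectional area per unit of pulse pressure. The input values are hypothetical placeholders, not study data.

```python
# Sketch of the standard definitions behind aortic stiffness metrics,
# not the study's analysis code. All input values are hypothetical placeholders.

def pulse_wave_velocity(path_length_m: float, transit_time_s: float) -> float:
    """PWV = distance the pressure pulse travels / time it takes (m/s).
    A stiffer aorta transmits the pulse faster, so higher PWV means more stiffness."""
    return path_length_m / transit_time_s

def distensibility(area_max_mm2: float, area_min_mm2: float,
                   pulse_pressure_mmhg: float) -> float:
    """Relative change in aortic cross-sectional area per unit of pulse pressure
    (1/mmHg). Higher values indicate a more elastic vessel."""
    return (area_max_mm2 - area_min_mm2) / (area_min_mm2 * pulse_pressure_mmhg)

# Hypothetical example values
print(f"PWV: {pulse_wave_velocity(0.15, 0.03):.1f} m/s")
print(f"Distensibility: {distensibility(720.0, 640.0, 40.0):.4f} 1/mmHg")
```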

The study found:

  • Compared to placebo, blood pressure was significantly decreased with bromocriptine. On average, bromocriptine therapy resulted in a systolic blood pressure decrease of 5 mm Hg and a diastolic blood pressure decrease of 2 mm Hg at the end of 4 weeks of treatment.
  • Aortic stiffness was also reduced with bromocriptine therapy. The improvement was most pronounced in the ascending aorta, with a lowered pulse wave velocity of about 0.4 meters/second and an 8% increase in distensibility, or elasticity. In the thoraco-abdominal aorta, bromocriptine was associated with a lowered pulse wave velocity of about 0.2 meters/second and a 5% increase in distensibility.

"A stiff aorta predisposes a patient to other health issues, such as organ dysfunction or atherosclerosis and higher stress or strain on cardiac muscle," Schäfer said. "We were able to take it a notch further and show, using more sophisticated metrics, that these central large arteries are impaired, and impairment among adolescents and young adults with Type 1 diabetes may be decelerated with this drug."

Read more at Science Daily

Feline genetics help pinpoint first-ever domestication of cats

Nearly 10,000 years ago, humans settling in the Fertile Crescent, the areas of the Middle East surrounding the Tigris and Euphrates rivers, made the first switch from hunter-gatherers to farmers. They developed close bonds with the rodent-eating cats that conveniently served as ancient pest-control in society's first civilizations.

A new study at the University of Missouri found this lifestyle transition for humans was the catalyst that sparked the world's first domestication of cats, and as humans began to travel the world, they brought their new feline friends along with them.

Leslie A. Lyons, a feline geneticist and Gilbreath-McLorn endowed professor of comparative medicine in the MU College of Veterinary Medicine, collected and analyzed DNA from cats in and around the Fertile Crescent area, as well as throughout Europe, Asia and Africa, comparing nearly 200 different genetic markers.

"One of the DNA main markers we studied were microsatellites, which mutate very quickly and give us clues about recent cat populations and breed developments over the past few hundred years," Lyons said. "Another key DNA marker we examined were single nucleotide polymorphisms, which are single-based changes all throughout the genome that give us clues about their ancient history several thousands of years ago. By studying and comparing both markers, we can start to piece together the evolutionary story of cats."

Lyons added that while horses and cattle underwent separate domestication events in different parts of the world at various times, her analysis of feline genetics strongly supports the theory that cats were likely domesticated only once, in the Fertile Crescent, before migrating with humans all over the world. As feline genes have been passed down through generations, the genetic makeup of cats in western Europe, for example, has become far different from that of cats in southeast Asia, a pattern known as 'isolation by distance.'
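The idea behind 'isolation by distance' can be illustrated with a toy calculation: as the geographic distance between populations grows, so does their genetic distance, which shows up as a positive correlation between the two pairwise distance matrices. The data below are invented, and a real analysis would use a permutation-based Mantel test rather than a bare correlation.

```python
# Illustration of the 'isolation by distance' pattern, with invented data:
# genetic distance between populations tends to increase with geographic distance.
import numpy as np

rng = np.random.default_rng(42)
n_pops = 12

# Hypothetical population coordinates (arbitrary units) and pairwise geographic distances
coords = rng.uniform(0, 100, size=(n_pops, 2))
geo = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Simulated genetic distances that grow with geographic distance, plus noise
gen = 0.002 * geo + rng.normal(scale=0.02, size=geo.shape)
gen = (gen + gen.T) / 2       # keep the matrix symmetric
np.fill_diagonal(gen, 0.0)

# Correlate the upper triangles of the two distance matrices
iu = np.triu_indices(n_pops, k=1)
r = np.corrcoef(geo[iu], gen[iu])[0, 1]
print(f"Correlation between geographic and genetic distance: {r:.2f}")
```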

"We can actually refer to cats as semi-domesticated, because if we turned them loose into the wild, they would likely still hunt vermin and be able to survive and mate on their own due to their natural behaviors," Lyons said. "Unlike dogs and other domesticated animals, we haven't really changed the behaviors of cats that much during the domestication process, so cats once again prove to be a special animal."

Lyons, who has researched feline genetics for more than 30 years, said studies like this also support her broader research goal of using cats as a biomedical model to study genetic diseases that impact both cats and people, such as polycystic kidney disease, blindness and dwarfism.

"Comparative genetics and precision medicine play key roles in the 'One Health' concept, which means anything we can do to study the causes of genetic diseases in cats or how to treat their ailments can be useful for one day treating humans with the same diseases," Lyons said. "I am building genetic tools, genetic resources that ultimately help improve cat health. When building these tools, it is important to get a representative sample and understand the genetic diversity of cats worldwide so that our genetic toolbox can be useful to help cats all over the globe, not just in one specific region."

Throughout her career, Lyons has worked with cat breeders and research collaborators to develop comprehensive feline DNA databases that the scientific community can benefit from, including cat genome sequencing from felines all around the world. In a 2021 study, Lyons and colleagues found that the cat's genomic structure is more similar to that of humans than that of nearly any other non-primate mammal.

"Our efforts have helped stop the migration and passing-down of inherited genetic diseases around the world, and one example is polycystic kidney disease, as 38% of Persian cats had this disease when we first launched our genetic test for it back in 2004," Lyons said. "Now that percentage has gone down significantly thanks to our efforts, and our overall goal is to eradicate genetic diseases from cats down the road."

Currently, the only viable treatment for polycystic kidney disease has unhealthy side effects, including liver failure. Lyons is currently working with researchers at the University of California at Santa Barbara to develop a diet-based treatment trial for those suffering from the disease.

"If those trials are successful, we might be able to have humans try it as a more natural, healthier alternative to taking a drug that may cause liver failure or other health issues," Lyons said. "Our efforts will continue to help, and it feels good to be a part of it."

Read more at Science Daily

Dec 5, 2022

Researchers say space atomic clocks could help uncover the nature of dark matter

Studying an atomic clock on-board a spacecraft inside the orbit of Mercury and very near to the Sun might be the trick to uncovering the nature of dark matter, suggests a new study published in Nature Astronomy.

Dark matter makes up more than 80 per cent of mass in the universe, but it has so far evaded detection on Earth, despite decades of experimental efforts. A key component of these searches is an assumption about the local density of dark matter, which determines the number of dark matter particles passing through the detector at any given time, and therefore the experimental sensitivity. In some models, this density can be much higher than is usually assumed, and dark matter can become more concentrated in some regions compared to others.

One important class of experimental searches uses atoms or nuclei, because these have achieved incredible sensitivity to signals of dark matter. This is possible, in part, because when dark matter particles have very small masses, they induce oscillations in the very constants of nature. These oscillations, for example in the mass of the electron or the interaction strength of the electromagnetic force, modify the transition energies of atoms and nuclei in predictable ways.

An international team of researchers, Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Project Researcher Joshua Eby, University of California, Irvine, Postdoctoral Fellow Yu-Dai Tsai, and University of Delaware Professor Marianna S. Safronova, saw potential in these oscillating signals. They claimed that in a particular region of the Solar System, between the orbit of Mercury and the Sun, the density of dark matter may be exceedingly large, which would mean exceptional sensitivity to the oscillating signals.

These signals could be picked up by atomic clocks, which operate by carefully measuring the frequency of photons emitted in transitions of different states in atoms. Ultralight dark matter in the vicinity of the clock experiment could modify those frequencies, as the oscillations of the dark matter slightly increase and decrease the photon energy.
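A common toy model for this effect treats the ultralight dark-matter field as an oscillation whose amplitude grows with the square root of the local dark-matter density, so the clock's fractional frequency shift oscillates at the field's characteristic frequency with a density-dependent amplitude. The sketch below implements that scaling; the coupling strength, oscillation frequency and density contrast are arbitrary placeholders, not values from the paper.

```python
# Toy-model sketch (not the paper's analysis): fractional frequency shift of an
# atomic clock induced by an oscillating ultralight dark-matter field. The shift
# is proportional to the field amplitude, which scales as sqrt(local density).
import numpy as np

coupling = 1e-7        # dimensionless sensitivity of the clock transition (placeholder)
field_freq_hz = 1e-3   # characteristic oscillation frequency of the field (placeholder)

def fractional_shift(t_s, local_density_rel):
    """delta f / f at time t, for a local DM density given relative to a reference value."""
    amplitude = coupling * np.sqrt(local_density_rel)
    return amplitude * np.cos(2 * np.pi * field_freq_hz * t_s)

t = np.linspace(0, 2000, 5)  # sample times in seconds
print("Near Earth (reference density):     ", fractional_shift(t, 1.0))
print("Closer to the Sun (e.g. 1e4x denser):", fractional_shift(t, 1e4))
```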

"The more dark matter there is around the experiment, the larger these oscillations are, so the local density of dark matter matters a lot when analyzing the signal," said Eby.

While the precise density of the dark matter near the Sun is not well-known, the researchers argue that even a relatively low-sensitivity search could provide important information.

The density of dark matter is only constrained in the Solar System by information about planet orbits. In the region between the Sun and Mercury, the planet nearest to the Sun, there is almost no constraint. So a measurement onboard a spacecraft could quickly uncover world-leading limits on dark matter in these models.

The technology to put their theory to the test already exists. Eby says the NASA Parker Solar Probe, which has been operating since 2018 with the help of shielding, has travelled closer to the Sun than any human-made craft in history, and is currently operating inside the orbit of Mercury, with plans to move even closer to the Sun within a year.

Atomic clocks in space are already well-motivated for many reasons other than searching for dark matter.

Read more at Science Daily

Playing the piano boosts brain processing power and helps lift the blues

A new study published by researchers at the University of Bath demonstrates the positive impact learning to play a musical instrument has on the brain's ability to process sights and sounds, and shows how it can also help to lift a blue mood.

Publishing their findings in the academic journal Nature Scientific Reports, the team behind the study shows how beginners who undertook piano lessons for just one hour a week over 11 weeks reported significant improvements in recognising audio-visual changes in the environment and reported less depression, stress and anxiety.

In the randomised controlled study, 31 adults with no prior musical experience or training were assigned to either a music-training group, a music-listening group, or a control group, and completed weekly one-hour sessions. Whilst the music-training group played music, the other two groups either listened to music or used the time to complete homework.

The researchers found that within just a few weeks of starting lessons, people's ability to process multisensory information -- i.e., sight and sound -- was enhanced. Improved multisensory processing benefits almost every activity we participate in -- from driving a car and crossing a road, to finding someone in a crowd or watching TV.

These multisensory improvements extended beyond musical abilities. With musical training, people's audio-visual processing became more accurate across other tasks. Those who received piano lessons showed greater accuracy in tests where participants were asked to determine whether sound and vision 'events' occurred at the same time.

This was true both for simple displays presenting flashes and beeps, and for more complex displays showing a person talking. Such fine-tuning of individuals' cognitive abilities was not present for the music listening group (where participants listened to the same music as played by the music group), or for the non-music group (where members studied or read).

In addition, the findings went beyond improvements in cognitive abilities, showing that participants also had reduced depression, anxiety and stress scores after the training compared to before it. The authors suggest that music training could be beneficial for people with mental health difficulties, and further research is currently underway to test this.

Cognitive psychologist and music specialist Dr Karin Petrini from the University of Bath's Department of Psychology, explained: "We know that playing and listening to music often brings joy to our lives, but with this study we were interested in learning more about the direct effects a short period of music learning can have on our cognitive abilities.

"Learning to play an instrument like the piano is a complex task: it requires a musician to read a score, generate movements and monitor the auditory and tactile feedback to adjust their further actions. In scientific terms, the process couples visual with auditory cues and results in a multisensory training for individuals.

Read more at Science Daily

Fossil discovery in storeroom cupboard shifts origin of modern lizard back 35 million years

A specimen retrieved from a cupboard of the Natural History Museum in London has shown that modern lizards originated in the Late Triassic and not the Middle Jurassic as previously thought.

This fossilised relative of living lizards such as monitor lizards, gila monsters and slow worms was identified in a museum collection stored since the 1950s, which includes specimens from a quarry near Tortworth in Gloucestershire, South West England. At the time, the technology didn't exist to reveal its modern-type features.

As a modern-type lizard, the new fossil impacts all estimates of the origin of lizards and snakes, together called the Squamata, and affects assumptions about their rates of evolution, and even the key trigger for the origin of the group.

The team, led by Dr David Whiteside of Bristol's School of Earth Sciences, have named their incredible discovery Cryptovaranoides microlanius meaning 'small butcher' in tribute to its jaws that were filled with sharp-edged slicing teeth.

Dr Whiteside explained: "I first spotted the specimen in a cupboard full of Clevosaurus fossils in the storerooms of the Natural History Museum in London where I am a Scientific Associate. This was a common enough fossil reptile, a close relative of the New Zealand Tuatara that is the only survivor of the group, the Rhynchocephalia, that split from the squamates over 240 million years ago.

"Our specimen was simply labelled 'Clevosaurus and one other reptile.' As we continued to investigate the specimen, we became more and more convinced that it was actually more closely related to modern day lizards than the Tuatara group.

"We made X-ray scans of the fossils at the University, and this enabled us to reconstruct the fossil in three dimensions, and to see all the tiny bones that were hidden inside the rock."

Cryptovaranoides is clearly a squamate, as it differs from the Rhynchocephalia in the braincase, the neck vertebrae, the shoulder region, the presence of a median upper tooth at the front of the mouth, the way the teeth are set on a shelf in the jaws (rather than fused to the crest of the jaws), and the skull architecture, such as the lack of a lower temporal bar.

There is only one major primitive feature not found in modern squamates: an opening on one side of the end of the upper arm bone, the humerus, where an artery and nerve pass through. Cryptovaranoides does have some other, apparently primitive characters, such as a few rows of teeth on the bones of the roof of the mouth, but experts have observed the same in the living European glass lizard, and many snakes, such as boas and pythons, have multiple rows of large teeth in the same area. Despite this, it is advanced like most living lizards in its braincase, and the bone connections in the skull suggest that it was flexible.

"In terms of significance, our fossil shifts the origin and diversification of squamates back from the Middle Jurassic to the Late Triassic," says co-author Professor Mike Benton. "This was a time of major restructuring of ecosystems on land, with origins of new plant groups, especially modern-type conifers, as well as new kinds of insects, and some of the first of modern groups such as turtles, crocodilians, dinosaurs, and mammals.

"Adding the oldest modern squamates then completes the picture. It seems these new plants and animals came on the scene as part of a major rebuilding of life on Earth after the end-Permian mass extinction 252 million years ago, and especially the Carnian Pluvial Episode, 232 million years ago when climates fluctuated between wet and dry and caused great perturbation to life."

PhD research student Sofia Chambi-Trowell commented: "The name of the new animal, Cryptovaranoides microlanius, reflects the hidden nature of the beast in a drawer but also in its likely lifestyle, living in cracks in the limestone on small islands that existed around Bristol at the time. The species name, meaning 'small butcher,' refers to its jaws that were filled with sharp-edged slicing teeth and it would have preyed on arthropods and small vertebrates."

Read more at Science Daily