Oct 3, 2020

Searching for the chemistry of life

In the search for the chemical origins of life, researchers have found a possible alternative path for the emergence of DNA's characteristic base-pairing pattern: according to the experiments, the base pairs can form by dry heating, without water or other solvents. The team led by Ivan Halasz from the Rudjer Boskovic Institute and Ernest Mestrovic from the pharmaceutical company Xellia presents its observations from DESY's X-ray source PETRA III in the journal Chemical Communications.

"One of the most intriguing questions in the search for the origin of life is how the chemical selection occurred and how the first biomolecules formed," says Tomislav Stolar from the Rudjer Boskovic Institute in Zagreb, the first author on the paper. While living cells control the production of biomolecules with their sophisticated machinery, the first molecular and supramolecular building blocks of life were likely created by pure chemistry and without enzyme catalysis. For their study, the scientists investigated the formation of nucleobase pairs that act as molecular recognition units in the Deoxyribonucleic Acid (DNA).

Our genetic code is stored in the DNA as a specific sequence spelled by the nucleobases adenine (A), cytosine (C), guanine (G) and thymine (T). The code is arranged in two long, complementary strands wound in a double-helix structure. In the strands, each nucleobase pairs with a complementary partner in the other strand: adenine with thymine and cytosine with guanine.
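
That pairing rule is simple enough to capture in a few lines of code. The following is a minimal, purely illustrative Python sketch (not from the study) that applies the A-T and G-C complementarity rule to build the complementary strand of a short sequence:

```python
# Illustrative sketch of Watson-Crick complementarity (not code from the study).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(sequence: str) -> str:
    """Return the complementary strand, read in the antiparallel direction."""
    return "".join(PAIRS[base] for base in reversed(sequence.upper()))

print(complementary_strand("GATTACA"))  # -> TGTAATC
```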

"Only specific pairing combinations occur in the DNA, but when nucleobases are isolated they do not like to bind to each other at all. So why did nature choose these base pairs?" says Stolar. Investigations of pairing of nucleobases surged after the discovery of the DNA double helix structure by James Watson and Francis Crick in 1953. However, it was quite surprising that there has been little success in achieving specific nucleobase pairing in conditions that could be considered as prebiotically plausible.

"We have explored a different path," reports co-author Martin Etter from DESY. "We have tried to find out whether the base pairs can be generated by mechanical energy or simply by heating." To this end, the team studied methylated nucleobases. Having a methyl group (-CH3) attached to the respective nucleobases in principle allows them to form hydrogen bonds at the Watson-Crick side of the molecule. Methylated nucleobases occur naturally in many living organisms where they fulfil a variety of biological functions.

In the lab, the scientists tried to produce nucleobase pairs by grinding. Powders of two nucleobases were loaded into a milling jar along with steel balls, which served as the grinding media, while the jar was shaken in a controlled manner. The experiment produced A:T pairs, which had also been observed by other scientists before. Grinding, however, could not achieve the formation of G:C pairs.

In a second step, the researchers heated the ground cytosine and guanine powders. "At about 200 degrees Celsius, we could indeed observe the formation of cytosine-guanine pairs," reports Stolar. In order to test whether the bases only form the known pairs under thermal conditions, the team repeated the experiments with mixtures of three and four nucleobases at the P02.1 measuring station of DESY's X-ray source PETRA III. Here, the detailed crystal structure of the mixtures could be monitored during heating and formation of new phases could be observed.

"At about 100 degrees Celsius, we were able to observe the formation of the adenine-thymine pairs, and at about 200 degrees Celsius the formation of Watson-Crick pairs of guanine and cytosine," says Etter, head of the measuring station. "Any other base pair did not form even when heated further until melting." This proves that the thermal reaction of nucleobase pairing has the same selectivity as in the DNA.

"Our results show a possible alternative route as to how the molecular recognition patterns that we observe in the DNA could have been formed," adds Stolar. "The conditions of the experiment are plausible for the young Earth that was a hot, seething cauldron with volcanoes, earthquakes, meteorite impacts and all sorts of other events. Our results open up many new paths in the search for the chemical origins of life." The team plans to investigate this route further with follow-up experiments at P02.1.

Read more at Science Daily

Babies' random choices become their preferences

 When a baby reaches for one stuffed animal in a room filled with others just like it, that seemingly random choice is very bad news for those unpicked toys: the baby has likely just decided she doesn't like what she didn't choose.

Though researchers have long known that adults build unconscious biases over a lifetime of making choices between things that are essentially the same, the new Johns Hopkins University finding that even babies engage in this phenomenon demonstrates that this way of justifying choice is intuitive and somehow fundamental to the human experience.

"The act of making a choice changes how we feel about our options," said co-author Alex Silver, a former Johns Hopkins undergraduate who's now a graduate student in cognitive psychology at the University of Pittsburgh. "Even infants who are really just at the start of making choices for themselves have this bias."

The findings are published today in the journal Psychological Science.

People assume they choose things that they like. But research suggests that's sometimes backwards: We like things because we choose them. And, we dislike things that we don't choose.

"I chose this, so I must like it. I didn't choose this other thing, so it must not be so good. Adults make these inferences unconsciously," said co-author Lisa Feigenson, a Johns Hopkins cognitive scientist specializing in child development. "We justify our choice after the fact."

This makes sense for adults in a consumer culture who must make arbitrary choices every day, between everything from toothpaste brands to makes of cars to styles of jeans. The question, for Feigenson and Silver, was when exactly people start doing this. So they turned to babies, who don't get many choices so, as Feigenson puts it, are "a perfect window into the origin of this tendency."

The team brought 10- to 20-month-old babies into the lab and gave them a choice of objects to play with: two equally bright and colorful soft blocks.

They set the two blocks far apart, so the babies had to crawl to one or the other -- a random choice.

After the baby chose one of the toys, the researchers took it away and came back with a new option. The babies could then pick from the toy they didn't play with the first time, or a brand new toy.

"The babies reliably chose to play with the new object rather than the one they had previously not chosen, as if they were saying, 'Hmm, I didn't choose that object last time, I guess I didn't like it very much,' " Feigenson said. "That is the core phenomenon. Adults will like less the thing they didn't choose, even if they had no real preference in the first place. And babies, just the same, dis-prefer the unchosen object."

In follow-up experiments, when the researchers instead chose which toy the baby would play with, the phenomenon disappeared entirely. If you take the element of choice away, Feigenson said, the phenomenon goes away.

"They are really not choosing based on novelty or intrinsic preference," Silver said. "I think it's really surprising. We wouldn't expect infants to be making such methodical choices."

Read more at Science Daily

Oct 2, 2020

Einstein's description of gravity just got much harder to beat

 Einstein's theory of general relativity -- the idea that gravity is matter warping spacetime -- has withstood over 100 years of scrutiny and testing, including the newest test from the Event Horizon Telescope collaboration, published today in the latest issue of Physical Review Letters.

According to the findings, Einstein's theory just got 500 times harder to beat.

Despite its successes, Einstein's robust theory remains mathematically irreconcilable with quantum mechanics, the scientific understanding of the subatomic world. Testing general relativity is important because the ultimate theory of the universe must encompass both gravity and quantum mechanics.

"We expect a complete theory of gravity to be different from general relativity, but there are many ways one can modify it. We found that whatever the correct theory is, it can't be significantly different from general relativity when it comes to black holes. We really squeezed down the space of possible modifications," said UArizona astrophysics professor Dimitrios Psaltis, who until recently was the project scientist of the Event Horizon Telescope collaboration. Psaltis is lead author of a new paper that details the researchers' findings.

"This is a brand-new way to test general relativity using supermassive black holes," said Keiichi Asada, an EHT science council member and an expert on radio observations of black holes for Academia Sinica Institute of Astronomy and Astrophysics.

To perform the test, the team used the first image ever taken of the supermassive black hole at the center of nearby galaxy M87 obtained with the EHT last year. The first results had shown that the size of the black-hole shadow was consistent with the size predicted by general relativity.

"At that time, we were not able to ask the opposite question: How different can a gravity theory be from general relativity and still be consistent with the shadow size?" said UArizona Steward Theory Fellow Pierre Christian. "We wondered if there was anything we could do with these observations in order to cull some of the alternatives."

The team did a very broad analysis of many modifications to the theory of general relativity to identify the unique characteristic of a theory of gravity that determines the size of a black hole shadow.

"In this way, we can now pinpoint whether some alternative to general relativity is in agreement with the Event Horizon Telescope observations, without worrying about any other details," said Lia Medeiros, a postdoctoral fellow at the Institute for Advanced Study who has been part of the EHT collaboration since her time as a UArizona graduate student.

The team focused on the range of alternatives that had passed all the previous tests in the solar system.

"Using the gauge we developed, we showed that the measured size of the black hole shadow in M87 tightens the wiggle room for modifications to Einstein's theory of general relativity by almost a factor of 500, compared to previous tests in the solar system," said UArizona astrophysics professor Feryal Özel, a senior member of the EHT collaboration. "Many ways to modify general relativity fail at this new and tighter black hole shadow test."

"Black hole images provide a completely new angle for testing Einstein's theory of general relativity," said Michael Kramer, director of the Max Planck Institute for Radio Astronomy and EHT collaboration member.

"Together with gravitational wave observations, this marks the beginning of a new era in black hole astrophysics," Psaltis said.

Testing the theory of gravity is an ongoing quest: Are the general relativity predictions for various astrophysical objects good enough for astrophysicists to not worry about any potential differences or modifications to general relativity?

"We always say general relativity passed all tests with flying colors -- if I had a dime for every time I heard that," Özel said. "But it is true, when you do certain tests, you don't see that the results deviate from what general relativity predicts. What we're saying is that while all of that is correct, for the first time we have a different gauge by which we can do a test that's 500 times better, and that gauge is the shadow size of a black hole."

Next, the EHT team expects higher fidelity images that will be captured by the expanded array of telescopes, which includes the Greenland Telescope, the 12-meter Telescope on Kitt Peak near Tucson, and the Northern Extended Millimeter Array Observatory in France.

Read more at Science Daily

'I'll sleep when I'm dead': The sleep-deprived masculinity stereotype

In the United States, the average American sleeps less than the minimum seven hours per night recommended by the Centers for Disease Control and Prevention, and nearly half of Americans report negative consequences from insufficient sleep. This problem appears to be especially prevalent in men, who report getting significantly less sleep, on average, than women.

A cultural complication is the notion that getting less than the recommended amount of sleep signals something positive about an individual. For example, US President Donald Trump has boasted about getting less than four hours of sleep per night and regularly derogates his political opponent Joe Biden as "Sleepy Joe."

"The Sleep-Deprived Masculinity Stereotype," a new paper in the Journal of the Association for Consumer Research, examines a possible stereotype connecting sleep and masculinity along with its underlying mechanisms and its social implications.

Authors Nathan B. Warren and Troy H. Campbell conducted 12 experiments involving 2,564 American participants to demonstrate that a sleep-deprived masculinity stereotype exists. In one experiment, participants were asked to imagine seeing a man shopping for a bed. Then, a salesperson asked the man, "How much do you normally sleep?" The mean masculinity rating given by participants in the "lots of sleep" condition was significantly lower than the mean masculinity rating given by participants in the "little sleep" condition.

In another experiment, participants were asked to ascribe different attributes to a male character, assigned to be either a "very masculine and manly" man or a "not very masculine and not very manly" man. Participants in the masculine condition described their character as sleeping 33 minutes less per night than the characters described in the not-masculine condition. A final experiment showed that participants who imagined stating they sleep more than average felt significantly less masculine than participants who imagined stating they sleep less than average.

Collectively, the experiments found that men who sleep less are seen as more masculine and more positively judged by society. The same patterns were not consistently observed for perceptions of women.

"The social nature of the sleep-deprived masculinity stereotype positively reinforces males who sleep less, even though sleeping less contributes to significant mental and physical health problems," the authors write. This may be particularly detrimental because men frequently have significantly more negative attitudes towards seeking psychological help. "Unfortunately, the problems created by the sleep-deprived masculinity stereotype may reach beyond individuals and into society, as men who sleep less are found to be more aggressive and violent." This is an example of the restrictive and toxic characteristics of masculinity, "which can be harmful to men's health and society at large."

Read more at Science Daily

Rare genetic form of dementia discovered

 A new, rare genetic form of dementia has been discovered by a team of Penn Medicine researchers. This discovery also sheds light on a new pathway that leads to protein build up in the brain -- which causes this newly discovered disease, as well as related neurodegenerative diseases like Alzheimer's Disease -- that could be targeted for new therapies. The study was published today in Science.

Alzheimer's disease (AD) is a neurodegenerative disease characterized by a buildup of proteins, called tau proteins, in certain parts of the brain. Following an examination of human brain tissue samples from a deceased donor with an unknown neurodegenerative disease, researchers discovered a novel mutation in the Valosin-containing protein (VCP) gene in the brain, a buildup of tau proteins in areas that were degenerating, and neurons with empty holes in them, called vacuoles. The team named the newly discovered disease Vacuolar Tauopathy (VT) -- a neurodegenerative disease now characterized by the accumulation of neuronal vacuoles and tau protein aggregates.

"Within a cell, you have proteins coming together, and you need a process to also be able to pull them apart, because otherwise everything kind of gets gummed up and doesn't work. VCP is often involved in those cases where it finds proteins in an aggregate and pulls them apart," Edward Lee, MD, PhD, an assistant professor of Pathology and Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania. "We think that the mutation impairs the proteins' normal ability to break aggregates apart."

The researchers noted that the tau protein they observed building up looked very similar to the tau protein aggregates seen in Alzheimer's disease. With these similarities, they aimed to uncover how this VCP mutation is causing this new disease -- to aid in finding treatments for this disease and others. Rare genetic causes of diseases can very often offer insight into more prevalent ones.

The researchers first examined the proteins themselves, in addition to studying cells and an animal model, and found that the tau protein buildup is, in fact, due to the VCP mutation.

"What we found in this study is a pattern we've never seen before, together with a mutation that's never been described before," Lee said. "Given that this mutation inhibits VCP activity, that suggest the converse might be true -- that if you're able to boost VCP activity, that could help break up the protein aggregates. And if that's true, we may be able to break up tau aggregates not only for this extremely rare disease, but for Alzheimer's disease and other diseases associated with tau protein aggregation."

From Science Daily

Dinosaur feather study debunked

 

A new study provides substantial evidence that the first fossil feather ever to be discovered does belong to the iconic Archaeopteryx, a bird-like dinosaur named in Germany on this day in 1861. This debunks a recent theory that the fossil feather originated from a different species.

The research published in Scientific Reports finds that the Jurassic fossil matches a type of wing feather called a primary covert. Primary coverts overlay the primary feathers and help propel birds through the air. The international team of scientists led by the University of South Florida analyzed nine attributes of the feather, particularly the long quill, along with data from modern birds. They also examined the 13 known skeletal fossils of Archaeopteryx, three of which contain well-preserved primary coverts. The researchers discovered that the top surface of an Archaeopteryx wing has primary coverts that are identical to the isolated feather in size and shape. The isolated feather was also from the same fossil site as four skeletons of Archaeopteryx, confirming their findings.

"There's been debate for the past 159 years as to whether or not this feather belongs to the same species as the Archaeopteryx skeletons, as well as where on the body it came from and its original color," said lead author Ryan Carney, assistant professor of integrative biology at USF. "Through scientific detective work that combined new techniques with old fossils and literature, we were able to finally solve these centuries-old mysteries."

Using a specialized type of electron microscope, the researchers determined that the feather came from the left wing. They also detected melanosomes, which are microscopic pigment structures. After refining their color reconstruction, they found that the feather was entirely matte black, not black and white as another study has claimed.

Carney's expertise on Archaeopteryx and diseases led to the National Geographic Society naming him an "Emerging Explorer," an honor that comes with a $10,000 grant for research and exploration. He also teaches a course at USF, called "Digital Dinosaurs." Students digitize, animate and 3D-print fossils, providing valuable experience in paleontology and STEAM fields.

From Science Daily

Oct 1, 2020

Study traces the evolution of gill covers

The emergence of jaws in primitive fish allowed vertebrates to become top predators. What is less appreciated is another evolutionary innovation that may have been just as important for the success of early vertebrates: the formation of covers to protect and pump water over the gills. In a new study published in the Proceedings of the National Academy of Sciences (PNAS), USC Stem Cell scientists and their collaborators have identified a key modification to the genome that led to the evolution of gill covers more than 430 million years ago.

The scientists started by creating zebrafish with mutations in a gene called Pou3f3. Strikingly, fish lacking this gene, or the DNA element controlling its activity in the gills, failed to form gill covers. Conversely, zebrafish producing too much Pou3f3 developed extra rudimentary gill covers.

Intrigued by these findings, co-corresponding authors Gage Crump and Lindsey Barske collaborated with scientists from several universities to explore whether changes in Pou3f3 might account for the wide variation in gill covers across vertebrates. Crump is a professor of stem cell biology and regenerative medicine at the Keck School of Medicine of USC. Barske initiated the study in the Crump Lab, and is now an assistant professor at Cincinnati Children's Hospital Medical Center.

In jawless fish such as sea lampreys, which lack gill covers, the scientists found that the control element to produce Pou3f3 in the gill region is missing.

In contrast, in cartilaginous fish such as sharks and skates, the control element for Pou3f3 is active in all gills. Correspondingly, nearly all cartilaginous fish have a separate cover over each gill. In bony fish, including zebrafish, the control element produces Pou3f3 in one particular region, leading to a single cover for all gills.

"Remarkably, we have identified not only a gene responsible for gill cover formation," said Crump, "but also the ancient control element that allowed Pou3f3 to first make gill covers and then diversify them in cartilaginous versus bony fish."

Read more at Science Daily

Rodent ancestors combined portions of blood and venom genes to make pheromones

 Experts who study animal pheromones have traced the evolutionary origins of genes that allow mice, rats and other rodents to communicate through smell. The discovery is a clear example of how new genes can evolve through the random chance of molecular tinkering and may make identifying new pheromones easier in future studies. The results, representing a genealogy for the exocrine-gland secreting peptide (ESP) gene family, were published by researchers at the University of Tokyo in the journal Molecular Biology and Evolution.

Researchers led by Professor Kazushige Touhara in the University of Tokyo Laboratory of Biological Chemistry previously studied ESP proteins that affect mice's social or sexual behavior when secreted in one mouse's tears or saliva and spread to other animals through social touch.

Recently, Project Associate Professor Yoshihito Niimura led a search for the evolutionary origin of ESP genes using the wide variety of fully sequenced animal genomes available in modern DNA databases. Niimura looked for ESP genes in 100 different mammals and found them only in two evolutionarily closely related families of rodents: the Muridae family of mice, rats and gerbils, and the Cricetidae family of hamsters and voles.

Notably, the Cricetidae had only a few ESP genes, usually all grouped together in the same stretch of DNA, whereas the Muridae had both that same small group of ESP genes and a second, larger group of additional ESP genes.

"We can imagine about 35 million years ago, the common ancestor of Muridae and Cricetidae formed the first ESP genes. Eventually, approximately 30 million years ago, the ancestor of Muridae duplicated and expanded these ESP genes. So now mice have many more ESP genes than the Cricetidae rodents," said Niimura.

To identify the source of what formed the first ESP gene, researchers compared additional genome sequences. They uncovered how random chance copied uniquely functional portions of two other genes, then coincidentally pasted them next to each other.

The DNA sequence of a gene includes portions called exons, which later become the functional protein, and other portions called introns, which do not become protein. Introns and exons are spaced throughout the gene with no apparent organization, introns interrupting essential functional portions of exons. Therefore, if a single exon were randomly copied and pasted elsewhere in the genome, any resulting protein fragment would have no meaningful function.

However, if an exon-only version of a gene were copied and reinserted into the genome, the chances of the new sequence remaining functional become much greater. Cells do create exon-only versions of genes, called mRNA, as part of the normal process of making protein from genes, and cells also possess machinery, likely left over from viral infections, that can copy mRNA back into the DNA strand.
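
As a rough illustration of that "exon-only copy" idea (a toy sketch with made-up sequences, not data or code from the study), splicing can be pictured as concatenating a gene's exon segments while discarding its introns:

```python
# Toy model of splicing: keep exons, drop introns (illustrative only).
toy_gene = [
    ("exon", "ATGGCT"),
    ("intron", "GTAAGTTTCAG"),  # introns interrupt the coding portions
    ("exon", "CCAGGA"),
    ("intron", "GTACGTCTCAG"),
    ("exon", "TGA"),
]

def splice(gene):
    """Return the exon-only, mRNA-like sequence by discarding intron segments."""
    return "".join(seq for kind, seq in gene if kind == "exon")

print(splice(toy_gene))  # -> ATGGCTCCAGGATGA
```

An exon-only copy like this, written back into the genome, stands a far better chance of encoding a working protein than a randomly copied exon fragment would.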

"This is not the normal way of things in cells, but it is a common source of evolution. We guess this is what happened to make ESP genes because the whole functional portion of the ESP gene is one exon, no intron interruption," said Niimura.

Specifically, the research team discovered for the first time that ESP proteins contain an uncommon spiral shape characteristic of alpha-globin, a component of the iron-carrying hemoglobin protein in blood. DNA sequence comparisons revealed that multiple alpha-globin gene exons spliced together show a subtle but distinctive similarity to the ESP gene sequence.

"It doesn't matter that hemoglobin is the source of the ESP pheromone. Any protein can become a pheromone if it is used for species-specific communication," said Niimura.

Regardless of its shape, no protein can function without being in the proper location. In ESP proteins, the alpha-globin-derived portion is attached to a signaling portion, which directs the protein to be secreted from salivary and tear glands. Researchers identified the ESP genes' location signaling sequence as resembling that of CRISP2, a gene expressed in mammalian reproductive tracts and salivary glands as well as the venom gland of some snakes.

The hemoglobin and CRISP genes are both ancient genes that existed in the shared evolutionary ancestor of vertebrates -- all animals with a backbone -- over 500 million years ago. The genetic shuffling that created ESP genes occurs relatively frequently in the cells of all organisms, but for these changes to become inherited evolutionary traits, the changes must occur in the sex cells so they can be passed on to future generations.

"The creation of new genes is not done from scratch, but nature utilizes pre-existing material. Evolution is like a tinkerer, using old things and broken parts to create some new device with a useful function," said Niimura.

Read more at Science Daily

Stellar explosion in Earth's proximity, eons ago

 When the brightness of the star Betelgeuse dropped dramatically a few months ago, some observers suspected an impending supernova -- a stellar explosion that could also cause damage on Earth. While Betelgeuse has returned to normal, physicists from the Technical University of Munich (TUM) have found evidence of a supernova that exploded near the Earth around 2.5 million years ago.

The life of stars with a mass more than ten times that of our sun ends in a supernova, a colossal stellar explosion. This explosion leads to the formation of iron, manganese and other heavy elements.

In layers of a manganese crust that are around two and a half million years old, a research team led by physicists from the Technical University of Munich has now confirmed the existence of both iron-60 and manganese-53.

"The increased concentrations of manganese-53 can be taken as the "smoking gun" -- the ultimate proof that this supernova really did take place," says first author Dr. Gunther Korschinek.

While a very close supernova could inflict massive harm to life on Earth, this one was far enough away. It only caused a boost in cosmic rays over several thousand years. "However, this can lead to increased cloud formation," says co-author Dr. Thomas Faestermann. "Perhaps there is a link to the Pleistocene epoch, the period of the Ice Ages, which began 2.6 million years ago."

Ultra-trace analysis

Typically, manganese occurs on earth as manganese-55. Manganese-53, on the other hand, usually stems from cosmic dust, like that found in the asteroid belt of our solar system. This dust rains down onto the earth continuously; but only rarely do we perceive larger specks of dust that glow as meteorites.

New sediment layers that accumulate year after year on the sea floor preserve the distribution of the elements in manganese crusts and sediment samples. Using accelerator mass spectrometry, the team of scientists has now detected both iron-60 and increased levels of manganese-53 in layers that were deposited about two and a half million years ago.

"This is investigative ultra-trace analysis," says Korschinek. "We are talking about merely a few atoms here. But accelerator mass spectrometry is so sensitive that it even allows us to calculate from our measurements that the star that exploded must have had around 11 to 25 times the size of the sun."

Read more at Science Daily

Breaking COVID-19's 'clutch' to stop its spread

 Scripps Research chemist Matthew Disney, PhD, and colleagues have created drug-like compounds that, in human cell studies, bind and destroy the pandemic coronavirus' so-called "frameshifting element" to stop the virus from replicating. The frameshifter is a clutch-like device the virus needs to generate new copies of itself after infecting cells.

"Our concept was to develop lead medicines capable of breaking COVID-19's clutch," Disney says. "It doesn't allow the shifting of gears."

Viruses spread by entering cells and then using the cells' protein-building machinery to churn out new infectious copies. Their genetic material must be compact and efficient to make it into the cells.

The pandemic coronavirus stays small by having one string of genetic material encode multiple proteins needed to assemble new virus. A clutch-like frameshifting element forces the cells' protein-building engines, called ribosomes, to pause, slip into a different gear, or reading frame, and then restart protein assembly anew, thus producing a different protein from the same sequence.

But making a medicine able to stop the process is far from simple. The virus that causes COVID-19 encodes its genetic sequence in RNA, a chemical cousin of DNA. It has historically been very difficult to bind RNA with orally administered medicines, but Disney's group has been developing and refining tools to do so for more than a decade.

The scientists' report, titled "Targeting the SARS-CoV-2 RNA Genome with Small Molecule Binders and Ribonuclease Targeting Chimera (RIBOTAC) Degraders," appears Sept. 30 in the journal ACS Central Science.

Disney emphasizes this is a first step in a long process of refinement and research that lies ahead. Even so, the results demonstrate the feasibility of directly targeting viral RNA with small-molecule drugs, Disney says. Their study suggests other RNA viral diseases may eventually be treated through this strategy, he adds.

"This is a proof-of-concept study," Disney says. "We put the frameshifting element into cells and showed that our compound binds the element and degrades it. The next step will be to do this with the whole COVID virus, and then optimize the compound."

Disney's team collaborated with Iowa State University Assistant Professor Walter Moss, PhD, to analyze and predict the structure of molecules encoded by the viral genome, in search of its vulnerabilities.

"By coupling our predictive modeling approaches to the tools and technologies developed in the Disney lab, we can rapidly discover druggable elements in RNA," Moss says. "We're using these tools not only to accelerate progress toward treatments for COVID-19, but a host of other diseases, as well."

The scientists zeroed in on the virus' frameshifting element, in part, because it features a stable hairpin-shaped segment, one that acts like a joystick to control protein-building. Binding the joystick with a drug-like compound should disable its ability to control frameshifting, they predicted. The virus needs all of its proteins to make complete copies, so disturbing the shifter and distorting even one of the proteins should, in theory, stop the virus altogether.

Using a database of RNA-binding chemical entities developed by Disney, they found 26 candidate compounds. Further testing with different variants of the frameshifting structure revealed three candidates that bound them all well, Disney says.

Disney's team in Jupiter, Florida quickly set about testing the compounds in human cells carrying COVID-19's frameshifting element. Those tests revealed that one, C5, had the most pronounced effect, in a dose-dependent manner, and did not bind unintended RNA.

They then went further, engineering the C5 compound to carry an RNA editing signal that causes the cell to specifically destroy the viral RNA. With the addition of the RNA editor, "these compounds are designed to basically remove the virus," Disney says.

Cells need RNA to read DNA and build proteins. They also have a natural process for ridding themselves of RNA once they are done using it. Disney has chemically harnessed this waste-disposal system to chew up COVID-19 RNA. His system is called RIBOTAC, short for "Ribonuclease Targeting Chimera."

Adding a RIBOTAC to the C5 anti-COVID compound increases its potency by tenfold, Disney says. Much more work lies ahead for this to become a medicine that makes it to clinical trials. Because it's a totally new way of attacking a virus, there remains much to learn, he says.

Read more at Science Daily

Greenland is on track to lose ice faster than in any century over 12,000 years

 If human societies don't sharply curb emissions of greenhouse gases, Greenland's rate of ice loss this century is likely to greatly outpace that of any century over the past 12,000 years, a new study concludes.

The research will be published on Sept. 30 in the journal Nature. The study employs ice sheet modeling to understand the past, present and future of the Greenland Ice Sheet. Scientists used new, detailed reconstructions of ancient climate to drive the model, and validated the model against real-world measurements of the ice sheet's contemporary and ancient size.

The findings place the ice sheet's modern decline in historical context, highlighting just how extreme and unusual projected losses for the 21st century could be, researchers say.

"Basically, we've altered our planet so much that the rates of ice sheet melt this century are on pace to be greater than anything we've seen under natural variability of the ice sheet over the past 12,000 years. We'll blow that out of the water if we don't make severe reductions to greenhouse gas emissions," says Jason Briner, PhD, professor of geology in the University at Buffalo College of Arts and Sciences. Briner led the collaborative study, coordinating the work of scientists from multiple disciplines and institutions.

"If the world goes on a massive energy diet, in line with a scenario that the Intergovernmental Panel on Climate Change calls RCP2.6, our model predicts that the Greenland Ice Sheet's rate of mass loss this century will be only slightly higher than anything experienced in the past 12,000 years," Briner adds. "But, more worrisome, is that under a high-emissions RCP8.5 scenario -- the one the Greenland Ice Sheet is now following -- the rate of mass loss could be about four times the highest values experienced under natural climate variability over the past 12,000 years."

He and colleagues say the results reiterate the need for countries around the world to take action now to reduce emissions, slow the decline of ice sheets, and mitigate sea level rise. The research was largely funded by the U.S. National Science Foundation.

Combining ice sheet modeling with field work, real-life observations

The study brought together climate modelers, ice core scientists, remote sensing experts and paleoclimate researchers at UB, NASA's Jet Propulsion Laboratory (JPL), the University of Washington (UW), Columbia University's Lamont-Doherty Earth Observatory (LDEO), the University of California, Irvine (UCI) and other institutions.

This multidisciplinary team used a state-of-the-art ice sheet model to simulate changes to the southwestern sector of the Greenland Ice Sheet, starting from the beginning of the Holocene epoch some 12,000 years ago and extending forward 80 years to 2100.

Scientists tested the model's accuracy by comparing results of the model's simulations to historical evidence. The modeled results matched up well with data tied to actual measurements of the ice sheet made by satellites and aerial surveys in recent decades, and with field work identifying the ice sheet's ancient boundaries.

Though the project focused on southwestern Greenland, research shows that changes in the rates of ice loss there tend to correspond tightly with changes across the entire ice sheet.

"We relied on the same ice sheet model to simulate the past, the present and the future," says co-author Jessica Badgeley, a PhD student in the UW Department of Earth and Space Sciences. "Thus, our comparisons of the ice sheet mass change through these time periods are internally consistent, which makes for a robust comparison between past and projected ice sheet changes."

"We have significantly improved our understanding of how anomalous future Greenland change will be," says co-author Joshua Cuzzone, PhD, an assistant project scientist at UCI who completed much of his work on the study as a postdoctoral researcher at JPL and UCI. "This work represents a massive success for multidisciplinary science and collaboration, and represents a framework for future successful multidisciplinary work."

Cuzzone and other researchers at UCI and JPL led ice sheet modeling, leveraging the work of colleagues at UW, who used data from ice cores to create maps of temperatures and precipitation in the study region that were used to drive the ice sheet model simulations up to the year 1850. Previously published climate data was used to drive the simulations after that date.

UB and LDEO scientists partnered on field work that helped validate the model by identifying the ice sheet's boundaries in southwestern Greenland thousands of years ago.

"We built an extremely detailed geologic history of how the margin of the southwestern Greenland Ice Sheet moved through time by measuring beryllium-10 in boulders that sit on moraines," says co-author Nicolás Young, PhD, associate research professor at LDEO. "Moraines are large piles of debris that you can find on the landscape that mark the former edge of an ice sheet or glacier. A beryllium-10 measurement tells you how long that boulder and moraine have been sitting there, and therefore tells you when the ice sheet was at that exact spot and deposited that boulder.

"Amazingly, the model reproduced the geologic reconstruction really well. This gave us confidence that the ice sheet model was performing well and giving us meaningful results. You can model anything you want and your model will always spit out an answer, but we need some way to determine if the model is doing a good job."

A continuous timeline of changes to the Greenland Ice Sheet

The study makes an important contribution by creating a timeline of the past, present and future of the Greenland Ice Sheet, Briner says. The results are sobering.

"We have long timelines of temperature change, past to present to future, that show the influence of greenhouse gases on Earth's temperature," Briner says. "And now, for the first time, we have a long timeline of the impacts of that temperature -- in the form of Greenland Ice Sheet melt -- from the past to present to future. And what it shows is eye-opening."

"It is no secret that the Greenland Ice Sheet is in rough shape and is losing ice at an increasing rate," Young says. "But if someone wants to poke holes in this, they could simply ask, 'how do you know this isn't just part of the ice sheet's natural variability?' Well, what our study suggests is that the rate of ice loss for this century will exceed the rate of ice loss for any single century over the last 12,000 years. I think this is the first time that the current health of the Greenland Ice Sheet has been robustly placed into a long-term context."

Despite these sobering results, one vital takeaway from the model's future projections is that it's still possible for people and countries around the world to make an important difference by cutting emissions, Briner says. Models of the RCP2.6 and RCP8.5 scenarios yield very different results, with high-emission scenarios producing massive declines in the ice sheet's health, and significant sea level rise.

"Our findings are yet another wake-up call, especially for countries like the U.S.," Briner says. "Americans use more energy per person than any other nation in the world. Our nation has produced more of the CO2 that resides in the atmosphere today than any other country. Americans need to go on an energy diet. The most affluent Americans, who have the highest energy footprint, can afford to make lifestyle changes, fly less, install solar panels and drive an energy-efficient vehicle."

Read more at Science Daily

Sep 30, 2020

Cosmic diamonds formed during gigantic planetary collisions

It is estimated that over 10 million asteroids circle the sun in the asteroid belt. They are relics from the early days of our solar system, when our planets formed out of a large cloud of gas and dust rotating around the sun. When asteroids are cast out of orbit, they sometimes plummet towards Earth as meteoroids. If they are big enough, they do not burn up completely when entering the atmosphere and can be found as meteorites. The geoscientific study of such meteorites makes it possible to draw conclusions not only about the evolution and development of planets in the solar system but also about their extinction.

Ureilites are a special type of meteorite. They are fragments of a larger celestial body -- probably a minor planet -- which was smashed to pieces through violent collisions with other minor planets or large asteroids. Ureilites often contain large quantities of carbon, among other forms as graphite or nanodiamonds. The diamonds now discovered, measuring 0.1 millimetres and more, cannot have formed when the meteoroids hit the Earth. Impact events with such vast energies would make the meteoroids evaporate completely. That is why it was previously assumed that these larger diamonds -- similar to those in the Earth's interior -- must have been formed by continuous pressure in the interior of planetary precursors the size of Mars or Mercury.

Together with scientists from Italy, the USA, Russia, Saudi Arabia, Switzerland and Sudan, researchers from Goethe University have now found the largest diamonds ever discovered in ureilites from Morocco and Sudan and analysed them in detail. Apart from the diamonds of up to several hundred micrometres in size, numerous nests of diamonds on just a nanometre scale, as well as nanographite, were found in the ureilites. Closer analyses showed that the nanodiamonds contain what are known as lonsdaleite layers, a modification of diamond that only occurs through sudden, very high pressure. Moreover, other minerals (silicates) in the ureilite rocks under examination displayed typical signs of shock pressure. In the end, it was the presence of these larger diamonds together with nanodiamonds and nanographite that led to the breakthrough.

Professor Frank Brenker from the Department of Geosciences at Goethe University explains:

"Our extensive new studies show that these unusual extraterrestrial diamonds formed through the immense shock pressure that occurred when a large asteroid or even minor planet smashed into the surface of the ureilite parent body. It's by all means possible that it was precisely this enormous impact that ultimately led to the complete destruction of the minor planet. This means -- contrary to prior assumptions -- that the larger ureilite diamonds are not a sign that protoplanets the size of Mars or Mercury existed in the early period of our solar system, but nonetheless of the immense, destructive forces that prevailed at that time."

From Science Daily

Venus might be habitable today, if not for Jupiter

 Venus might not be a sweltering, waterless hellscape today, if Jupiter hadn't altered its orbit around the sun, according to new UC Riverside research.

Jupiter has a mass that is two-and-a-half times that of all other planets in our solar system -- combined. Because it is comparatively gigantic, it has the ability to disturb other planets' orbits.

Early in Jupiter's formation as a planet, it moved closer to and then away from the sun due to interactions with the disc from which planets form as well as the other giant planets. This movement in turn affected Venus.

Observations of other planetary systems have shown that similar giant planet migrations soon after formation may be a relatively common occurrence. These are among the findings of a new study published in the Planetary Science Journal.

Scientists consider planets lacking liquid water to be incapable of hosting life as we know it. Though Venus may have lost some water early on for other reasons, and may have continued to do so anyway, UCR astrobiologist Stephen Kane said that Jupiter's movement likely triggered Venus onto a path toward its current, inhospitable state.

"One of the interesting things about the Venus of today is that its orbit is almost perfectly circular," said Kane, who led the study. "With this project, I wanted to explore whether the orbit has always been circular and if not, what are the implications of that?"

To answer these questions, Kane created a model that simulated the solar system, calculating the location of all the planets at any one time and how they pull one another in different directions.

Scientists measure how noncircular a planet's orbit is on a scale from 0, which is completely circular, to 1, which is not circular at all. This number is called the eccentricity of the orbit. An orbit with an eccentricity of 1 would not even complete an orbit around a star; it would simply launch into space, Kane said.

Currently, the orbit of Venus is measured at 0.006, which is the most circular of any planet in our solar system. However, Kane's model shows that when Jupiter was likely closer to the sun about a billion years ago, Venus likely had an eccentricity of 0.3, and there is a much higher probability that it was habitable then.
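
To get a feel for what those numbers mean, an orbit's closest and farthest points from the sun follow directly from its semi-major axis a and eccentricity e, as a(1 - e) and a(1 + e). Here is a minimal Python sketch (not part of Kane's model; Venus's present semi-major axis of roughly 0.723 astronomical units is assumed) comparing the two eccentricities:

```python
# Illustrative only: perihelion/aphelion from eccentricity (not Kane's simulation).
A_VENUS_AU = 0.723  # assumed semi-major axis of Venus, in astronomical units

def orbit_extremes(a_au, e):
    """Return (perihelion, aphelion) distances in AU for a given eccentricity."""
    return a_au * (1 - e), a_au * (1 + e)

for e in (0.006, 0.3):  # today's near-circular orbit vs. the modeled ancient orbit
    peri, apo = orbit_extremes(A_VENUS_AU, e)
    print(f"e = {e}: perihelion ~ {peri:.3f} AU, aphelion ~ {apo:.3f} AU")
```

At an eccentricity of 0.3, the planet's distance from the sun would have varied by almost a factor of two over each orbit, in line with the dramatic heating and cooling Kane describes.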

"As Jupiter migrated, Venus would have gone through dramatic changes in climate, heating up then cooling off and increasingly losing its water into the atmosphere," Kane said.

Recently, scientists generated much excitement by discovering a gas in the clouds above Venus that may indicate the presence of life. The gas, phosphine, is typically produced by microbes, and Kane says it is possible that the gas represents "the last surviving species on a planet that went through a dramatic change in its environment."

For that to be the case, however, Kane notes the microbes would have had to sustain their presence in the sulfuric acid clouds above Venus for roughly a billion years since Venus last had surface liquid water -- a difficult to imagine though not impossible scenario.

"There are probably a lot of other processes that could produce the gas that haven't yet been explored," Kane said.

Ultimately, Kane says it is important to understand what happened to Venus, a planet that was once likely habitable and now has surface temperatures of up to 800 degrees Fahrenheit.

Read more at Science Daily

The key to lowering CO2 emissions is made of metal

Carbon dioxide (CO2) levels are rising and our planet is heating up. What do we do? What if we used this excess CO2 as a raw material to produce things we need -- similar to how plants use it to produce oxygen?

This is one thing artificial photosynthesis has set out to do.

Artificial photosynthesis is a chemical process that mimics natural photosynthesis to convert sunlight, water, and carbon dioxide into useful products like carbohydrates and oxygen. The problem is that current technologies can only produce molecules with a single carbon atom. Such molecules are too simple to serve as building blocks for more complex materials, and standard experimental conditions have not been stable enough to allow molecules with more than one carbon atom to form.

New research at Osaka City University has found that simply adding metal ions like aluminum and iron was enough to allow the production of malic acid, which contains 4 carbon atoms. The study appeared recently online in the New Journal of Chemistry published by the Royal Society of Chemistry.

"I was surprised that the solution was found in such a common thing as aluminum ions" said lead author Takeyuki Katagiri.

"Our goal is to create groups of molecules with as many as 100 carbon atoms" added supporting author Yutaka Amao. "Then we can finally explore possibilities of using CO2 as a raw material."

From Science Daily

Modern humans reached westernmost Europe 5,000 years earlier than previously known

 

Modern humans arrived in the westernmost part of Europe 41,000 -- 38,000 years ago, about 5,000 years earlier than previously known, according to Jonathan Haws, Ph.D., professor and chair of the Department of Anthropology at the University of Louisville, and an international team of researchers. The team has revealed the discovery of stone tools used by modern humans dated to the earlier time period in a report published this week in the journal Proceedings of the National Academy of Sciences.

The tools, discovered in a cave named Lapa do Picareiro, located near the Atlantic coast of central Portugal, link the site with similar finds from across Eurasia to the Russian plain. The discovery supports a rapid westward dispersal of modern humans across Eurasia within a few thousand years of their first appearance in southeastern Europe. The tools document the presence of modern humans in westernmost Europe at a time when Neanderthals previously were thought to be present in the region. The finding has important ramifications for understanding the possible interaction between the two human groups and the ultimate disappearance of the Neanderthals.

"The question whether the last surviving Neanderthals in Europe have been replaced or assimilated by incoming modern humans is a long-standing, unsolved issue in paleoanthropology," said Lukas Friedl, an anthropologist at the University of West Bohemia in Pilsen, Czech Republic, and project co-leader. "The early dates for Aurignacian stone tools at Picareiro likely rule out the possibility that modern humans arrived into the land long devoid of Neanderthals, and that by itself is exciting."

Until now, the oldest evidence for modern humans south of the Ebro River in Spain came from Bajondillo, a cave site on the southern coast. The discovery of stone tools characterized as Aurignacian, a technology associated with early modern humans in Europe, in a secure stratigraphic context at Picareiro provides definitive evidence of an early modern human arrival.

"Bajondillo offered tantalizing but controversial evidence that modern humans were in the area earlier than we thought," Haws said. "The evidence in our report definitely supports the Bajondillo implications for an early modern human arrival, but it's still not clear how they got here. People likely migrated along east-west flowing rivers in the interior, but a coastal route is still possible."

"The spread of anatomically modern humans across Europe many thousands of years ago is central to our understanding of where we came from as a now-global species," said John Yellen, program director for archaeology and archaeometry at the National Science Foundation, which supported the work. "This discovery offers significant new evidence that will help shape future research investigating when and where anatomically modern humans arrived in Europe and what interactions they may have had with Neanderthals."

The Picareiro cave has been under excavation for 25 years and has produced a record of human occupation over the last 50,000 years. An international research team from the Interdisciplinary Center for Archaeology and Evolution of Human Behavior (ICArEHB) in Faro, Portugal, is investigating the arrival of modern humans and extinction of Neanderthals in the region.

The project is led by Haws, Michael Benedetti of the University of North Carolina Wilmington, and Friedl, in collaboration with Nuno Bicho and João Cascalheira of the University of Algarve, where ICArEHB is housed, and Telmo Pereira of the Autonomous University of Lisbon.

With support from U.S. National Science Foundation grants to Haws and Benedetti, the team has uncovered rich archaeological deposits that include stone tools in association with thousands of animal bones from hunting, butchery and cooking activities.

Sahra Talamo of the University of Bologna, Italy, and the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, joined the research team to determine the age of the early modern human and Neanderthal occupations. She used state-of-the-art bone pretreatment and accelerator mass spectrometry (AMS) to date the bones that show evidence of butchery cut marks and intentional breakage by humans to extract bone marrow, a highly prized and nutritious food consumed by ancient people. The dating results place the modern human arrival to the interval between 41,000 and 38,000 years ago. The last Neanderthal occupation at the site took place between 45,000 and 42,000 years ago.

"The radiocarbon results from Lapa do Picareiro are not only very precise in terms of the dating method, but also demonstrate the meticulous work of the archeologists at the site," Talamo said. "The importance of collaboration between the radiocarbon specialist and the archaeologists is essential in order to obtain an accurate chronology like in the case of Picareiro."

Spatial analysis of high-resolution three-dimensional data confirmed the precise stratigraphic relationships between artifacts and radiocarbon samples and revealed discrete layers of occupation at the site.

"Analysis of high-resolution spatial data is crucial for documenting and observing lenses of human occupation and reconstructing occupational patterns, especially in cave environments where complex formation processes exist," said Grace Ellis, a Ph.D. student at Colorado State University studying landscape archaeology and ancient settlement patterns.

This was backed up by artifact refitting that showed the stone tools were not moved through post-depositional processes.

"Refitting is a task that requires a lot of time and patience, and in this case, it really was worthwhile because the results verified the geospatial observations," said Pereira, an archaeologist who specializes in stone technology.

While the dates suggest that modern humans arrived after Neanderthals disappeared, a nearby cave, Oliveira, has evidence for Neanderthals' survival until 37,000 years ago. The two groups may have overlapped for several thousand years in the area.

"If the two groups overlapped for some time in the highlands of Atlantic Portugal, they may have maintained contacts between each other and exchanged not only technology and tools, but also mates. This could possibly explain why many Europeans have Neanderthal genes," said Bicho, director of ICArEHB.

"Besides genetic and archeological evidence, high-resolution temporal context and fossil evidence across the continent is crucial for answering this question. With the preserved key layers dated to the transitional period, we are now awaiting human fossils to tell us more about the nature of the transition," Friedl said.

Despite the overlap in dates, there does not appear to be any evidence for direct contact between Neanderthals and modern humans. Neanderthals continued to use the same stone tools they had before, while the arriving modern humans brought a completely different stone technology with them.

"Differences between the stone tool assemblages dated before and after about 41,000 years ago are striking at Picareiro," said Cascalheira, an ICArEHB board member and specialist on stone tool technology. "Older levels are dominated by quartzite and quartz raw materials and marked by the presence of Levallois technology, a typical element of Neanderthal occupations in Europe. Aurignacian levels, on the other hand, are dominated by flint and the production of very small blades that were likely used as inserts in arrow shafts for hunting."

Flint was also used to make tools for butchering animals such as red deer, ibex and possibly rabbits. The team recovered a few red deer canine teeth, often used as personal adornments, but so far these show no traces of having been worked into jewelry.

"The bones from Lapa do Picareiro make up one of the largest Paleolithic assemblages in Portugal, and the preservation of these animal bones is remarkable," said Milena Carvalho, a Ph.D. candidate at the University of New Mexico and ICArEHB researcher studying the diets and paleoecology of Neanderthals and modern humans. "The collection will provide tremendous amounts of information on human behavior and paleoecology during the Paleolithic and we will be studying it for decades."

The cave sediments also contain a well-preserved paleoclimatic record that helps reconstruct environmental conditions at the time of the last Neanderthals and arrival of modern humans.

"We studied changes in the size of limestone clasts and the chemistry of muddy fine sediment filling the cave to understand the paleoclimatic context for the transition," Benedetti said. "Our analysis shows that the arrival of modern humans corresponds with, or slightly predates, a bitterly cold and extremely dry phase. Harsh environmental conditions during this period posed challenges that both modern human and Neanderthal populations had to contend with."

The cave itself has an enormous amount of sediment remaining for future work and the excavation still hasn't reached the bottom.

Read more at Science Daily

The ancient Neanderthal hand in severe COVID-19

Since first appearing in late 2019, the novel virus, SARS-CoV-2, has had a range of impacts on those it infects. Some people become severely ill with COVID-19, the disease caused by the virus, and require hospitalization, whereas others have mild symptoms or are even asymptomatic.

There are several factors that influence a person's susceptibility to a severe reaction, such as their age and the existence of other medical conditions. But one's genetics also plays a role, and, over the last few months, research by the COVID-19 Host Genetics Initiative has shown that genetic variants in one region on chromosome 3 confer a higher risk that their carriers will develop a severe form of the disease.

Now, a new study, published in Nature, has revealed that this genetic region is almost identical to that of a 50,000-year-old Neanderthal from southern Europe. Further analysis has shown that, through interbreeding, the variants entered the ancestors of modern humans about 60,000 years ago.

"It is striking that the genetic heritage from Neanderthals has such tragic consequences during the current pandemic," said Professor Svante Pääbo, who leads the Human Evolutionary Genomics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST).

Is severe COVID-19 written in our genes?

Chromosomes are tiny structures that are found in the nucleus of cells and carry an organism's genetic material. They come in pairs, with one chromosome in each pair inherited from each parent. Humans have 23 of these pairs. Thus, 46 chromosomes carry the entirety of our DNA -- billions of base pairs. And although the vast majority of this DNA is the same between people, mutations do occur and variations persist at the DNA level.

The research by the COVID-19 Host Genetics Initiative looked at over 3,000 people, including both people who were hospitalized with severe COVID-19 and people who were infected by the virus but were not hospitalized. It identified a region on chromosome 3 that influences whether a person infected with the virus will become severely ill and need to be hospitalized.

The identified genetic region is very long, spanning 49,400 base pairs, and the variants that confer a higher risk of severe COVID-19 are strongly linked -- if a person has one of the variants, then they are very likely to have all thirteen of them. Variants like these have previously been found to come from Neanderthals or Denisovans, so Professor Pääbo, in collaboration with Professor Hugo Zeberg, first author of the paper and a researcher at the Max Planck Institute for Evolutionary Anthropology and Karolinska Institutet, decided to investigate whether this was the case.

They found that a Neanderthal from southern Europe carried an almost identical genetic region whereas two Neanderthals from southern Siberia and a Denisovan did not.

Next, they questioned whether the variants had come over from Neanderthals or had been inherited by both Neanderthals and present-day people from a common ancestor.

If the variants had come from interbreeding between the two groups, they would have entered the modern human gene pool as recently as 50,000 years ago. If, instead, they had been inherited from the last common ancestor, they would have been present in modern humans for about 550,000 years. Over that much time, random genetic mutations and recombination between chromosomes would have broken up and altered the region. Because the variants in the Neanderthal from southern Europe and in present-day people are so similar over such a long stretch of DNA, the researchers showed that it was much more likely that they came from interbreeding.
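
A rough sense of why the length of the shared stretch matters can be had from a back-of-the-envelope calculation (an illustration only, not the authors' actual analysis). Assuming an average human recombination rate of about 1e-8 per base pair per generation and a generation time of roughly 29 years, the chance that a 49,400-base-pair haplotype survives unbroken can be estimated as follows:

    # Illustrative sketch (not the authors' exact analysis): how likely is it
    # that a 49,400-base-pair haplotype survives unbroken by recombination
    # over a given time span? Assumes an average recombination rate of
    # ~1e-8 per base pair per generation and ~29 years per generation.
    import math

    REGION_BP = 49_400   # length of the chromosome 3 risk haplotype
    REC_RATE = 1e-8      # recombination probability per bp per generation (assumed)
    GEN_TIME = 29        # years per generation (assumed rough average)

    def survival_probability(years):
        """Probability that no crossover falls inside the region over `years`."""
        generations = years / GEN_TIME
        expected_breaks = REC_RATE * REGION_BP * generations
        return math.exp(-expected_breaks)

    print(f"intact after ~50,000 years (interbreeding):    {survival_probability(50_000):.2f}")
    print(f"intact after ~550,000 years (common ancestor): {survival_probability(550_000):.1e}")
    # roughly 0.43 versus 8.5e-05

Under these illustrative assumptions, an intact match of this length is plausible after about 50,000 years but very unlikely after 550,000 years.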

Professor Pääbo and Professor Zeberg concluded that Neanderthals related to the one from southern Europe contributed this DNA region to present-day people around 60,000 years ago when the two groups met.

Neanderthal variants pose up to three times the risk

Professor Zeberg explained that those who carry these Neanderthal variants have up to three times the risk of requiring mechanical ventilation. "Obviously, factors such as your age and other diseases you may have also affect how severely you are affected by the virus. But among genetic factors, this is the strongest one."

The researchers also found that there are major differences in how common these variants are in different parts of the world. In South Asia about 50% of the population carry them. However, in East Asia they're almost absent.

Read more at Science Daily

Sep 29, 2020

Naked prehistoric monsters! Evidence that prehistoric flying reptiles probably had feathers refuted

 The debate about when dinosaurs developed feathers has taken a new turn with a paper refuting earlier claims that feathers were also found on dinosaurs' relatives, the flying reptiles called pterosaurs.

Pterosaur expert Dr David Unwin from the University of Leicester's Centre for Palaeobiology Research and Professor Dave Martill of the University of Portsmouth have examined the evidence that these creatures had feathers and believe they were in fact bald.

They have responded to a suggestion by a group of colleagues led by Zixiao Yang that some pterosaur fossils show evidence of feather-like branching filaments, 'protofeathers', on the animals' skin.

Dr Yang, from Nanjing University, and colleagues presented their argument in a 2018 paper in the journal Nature Ecology and Evolution. Now Unwin and Martill have offered an alternative, non-feather explanation for the fossil evidence in the same journal.

While this may seem like academic minutiae, it actually has huge palaeontological implications. Feathered pterosaurs would mean that the very earliest feathers first appeared on an ancestor shared by both pterosaurs and dinosaurs, since it is unlikely that something so complex developed separately in two different groups of animals.

This would mean that the very first feather-like elements evolved at least 80 million years earlier than currently thought. It would also suggest that all dinosaurs started out with feathers, or protofeathers, but that some groups, such as sauropods, subsequently lost them again -- the complete opposite of currently accepted theory.

The evidence rests on tiny, hair-like filaments, less than one tenth of a millimetre in diameter, which have been identified in about 30 pterosaur fossils. Among these, Yang and colleagues found just three specimens on which the filaments seem to exhibit a 'branching structure' typical of protofeathers.

Unwin and Martill propose that these are not protofeathers at all but tough fibres which form part of the internal structure of the pterosaur's wing membrane, and that the 'branching' effect may simply be the result of these fibres decaying and unravelling.

Dr Unwin said: "The idea of feathered pterosaurs goes back to the nineteenth century but the fossil evidence was then, and still is, very weak. Exceptional claims require exceptional evidence -- we have the former, but not the latter."

Read more at Science Daily

Second alignment plane of solar system discovered

 A study of comet motions indicates that the Solar System has a second alignment plane. Analytical investigation of the orbits of long-period comets shows that the aphelia of the comets, the point where they are farthest from the Sun, tend to fall close to either the well-known ecliptic plane where the planets reside or a newly discovered "empty ecliptic." This has important implications for models of how comets originally formed in the Solar System.

In the Solar System, the planets and most other bodies move in roughly the same orbital plane, known as the ecliptic, but there are exceptions such as comets. Comets, especially long-period comets taking tens of thousands of years to complete each orbit, are not confined to the area near the ecliptic; they are seen coming and going in various directions.

Models of Solar System formation suggest that even long-period comets originally formed near the ecliptic and were later scattered into the orbits observed today through gravitational interactions, most notably with the gas giant planets. But even with planetary scattering, the comet's aphelion, the point where it is farthest from the Sun, should remain near the ecliptic. Other, external forces are needed to explain the observed distribution.

The Solar System does not exist in isolation; the gravitational field of the Milky Way Galaxy in which it resides also exerts a small but non-negligible influence. Arika Higuchi, an assistant professor at the University of Occupational and Environmental Health in Japan and previously a member of the NAOJ RISE Project, studied the effects of the Galactic gravity on long-period comets through analytical investigation of the equations governing orbital motion.

She showed that when the Galactic gravity is taken into account, the aphelia of long-period comets tend to collect around two planes: the well-known ecliptic and a second, "empty ecliptic." The ecliptic is inclined with respect to the disk of the Milky Way by about 60 degrees; the empty ecliptic is also inclined by 60 degrees, but in the opposite direction. Higuchi calls it the "empty ecliptic" based on mathematical nomenclature and because it initially contains no objects, only later being populated with scattered comets.
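
The geometry can be checked with a short calculation (an illustration using standard J2000 pole directions, not taken from the paper): reflecting the ecliptic pole through the Galactic plane yields a second plane tilted by the same roughly 60 degrees, but mirrored.

    import numpy as np

    def unit_vector(ra_deg, dec_deg):
        """Unit vector for a sky direction given in equatorial coordinates (degrees)."""
        ra, dec = np.radians([ra_deg, dec_deg])
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    # Standard J2000 directions (right ascension, declination) of the two poles
    ecliptic_pole = unit_vector(270.0, 66.56)   # north ecliptic pole
    galactic_pole = unit_vector(192.86, 27.13)  # north Galactic pole

    tilt = np.degrees(np.arccos(ecliptic_pole @ galactic_pole))
    print(f"ecliptic tilted {tilt:.1f} deg to the Galactic plane")   # ~60.2 deg

    # Reflect the ecliptic pole through the Galactic plane: the mirrored plane
    # (the "empty ecliptic") is inclined by the same ~60 degrees, but in the
    # opposite direction, so its pole sits ~120 deg from the Galactic pole.
    empty_pole = ecliptic_pole - 2 * (ecliptic_pole @ galactic_pole) * galactic_pole
    angle = np.degrees(np.arccos(empty_pole @ galactic_pole))
    print(f"empty-ecliptic pole sits {angle:.1f} deg from the Galactic pole")  # ~119.8 deg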

Higuchi confirmed her predictions by cross-checking with numerical computations carried out in part on the PC Cluster at the Center for Computational Astrophysics of NAOJ. Comparing the analytical and computational results to the data for long-period comets listed in NASA's JPL Small Body Database showed that the distribution has two peaks, near the ecliptic and empty ecliptic as predicted. This is a strong indication that the formation models are correct and long-period comets formed on the ecliptic. However, Higuchi cautions, "The sharp peaks are not exactly at the ecliptic or empty ecliptic planes, but near them. An investigation of the distribution of observed small bodies has to include many factors. Detailed examination of the distribution of long-period comets will be our future work. The all-sky survey project known as the Legacy Survey of Space and Time (LSST) will provide valuable information for this study."

From Science Daily

Evolutionary and heritable axes shape our brain

 The location of a country on Earth says a lot about its climate, its neighboring countries, and the resources that might be found there. In other words, location largely determines what kind of country you would expect to find at that point.

The same seems to apply to the brain. Every brain network sits at a certain place, which determines its neighbors and the kind of function carried out there. However, the rules that describe how different brain regions relate to each other were not well understood until now. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, and the Forschungszentrum Juelich, together with an international team of collaborators, have deciphered two axes along which the human brain is organized. They found that these axes are mainly determined by genetic factors.

One axis stretches from the posterior (back) to the frontal part of the cortex. This reflects a functional hierarchy from basic capabilities such as vision and movement to abstract, highly complex skills such as cognition, memory, and social skills. A second axis leads from the dorsal (upper) to the ventral (lower) part of the cortex. Whereas the ventral system has been associated with functions assigning meaning and motivation, the dorsal system may relate to space, time, and movement.

"Interestingly, this vertical arrangement aligns with the long-held hypothesis of dual origin," says Sofie Valk, research group leader at the MPI CBS and Forschungszentrum Juelich and first author of the study, published in Science Advances. According to this hypothesis, the cerebral cortex developed from two different origins, the amygdala and olfactory cortex on the one hand and the hippocampus on the other hand. From these origins two different lines of cortical development arose, reflecting waves from less to more differentiated areas starting at each origin. Such distinctions between ventral and dorsal areas have been found in various mammals, such as non-human primates, cats, and rats. The scientists around Valk, however, have now provided evidence for it for the entire human cortex, and shown this may be a second important organizational principle next to the posterior-frontal axis.

This two-axis-organization, in turn, is largely determined by the genetic relation between brain regions. This means that the association between the structure of two brain regions is driven by shared genetic effects. Moreover, similar axes have been found in the brains of macaque monkeys, indicating these axes are conserved through primate evolution. "At the same time, even if genes and evolution shape the organization of brain structure, we must not forget the environment also plays a crucial role in shaping our brains and minds," Valk says. "Though we focused specifically on these genetic effects in the current study, other work of our team has shown that behavioral training can also alter brain structure." Further studies are planned to understand how these two factors that shape brain structure interact.

Understanding the major axes of brain organization is like having a compass: it helps researchers navigate the brain. "We may better understand the evolution and function of specific regions and better evaluate the impact of brain disorders," Valk adds. For example, previous work by the authors has shown that these organizational axes differ between individuals with autism spectrum disorder and healthy controls.

Read more at Science Daily

Genetic risk of developing obesity is driven by variants that affect the brain

 Over the past decade, scientists have identified hundreds of different genetic variants that increase a person's risk of developing obesity. But a lot of work remains to understand how these variants translate into obesity. Now scientists at the University of Copenhagen have identified populations of cells in the body that play a role in the development of the disease -- and they are all in the brain.

"Our results provide evidence that biological processes outside the traditional organs investigated in obesity research, such as fat cells, play a key role in human obesity," says Associate Professor Tune H Pers from the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR), at the University of Copenhagen, who published his team's findings in the internationally-recognized journal eLife.

"We identified cell types in the brain that regulate memory, behavior and processing of sensory information that are involved in the development of the disease. Further investigation of these areas of the brain may tell us why some of us are more susceptible to develop obesity than others."

A mosaic of brain cell populations contribute to obesity

The discovery was made by developing computational tools that combine two different sets of data. The first set is genome-wide association study data from around 450,000 people. This data compares a person's health and physical attributes, such as their body weight, to their unique genome. Doing so reveals that people with obesity are much more likely to have a range of genetic variants in common.

The second set is single-cell RNA-sequencing data of more than 700 different types of mouse cell populations. Different cells express different parts of the genome, so this data set contains the unique genetic fingerprint for each cell population.

The team at CBMR integrated the two data sets and found that the genetic variants strongly associated with obesity lie near genes expressed by 26 cell populations, corresponding to different types of neurons.
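
The study relied on purpose-built computational tools, but the core idea can be sketched in a few lines: test whether the genes implicated by the obesity GWAS are over-represented among the genes that mark each cell population. The snippet below is a toy illustration with made-up gene sets and cell-type names, not the study's actual pipeline.

    # Toy sketch of GWAS / single-cell integration (not the study's pipeline):
    # for each cell population, test whether its marker genes overlap the
    # obesity-associated genes more than expected by chance.
    from scipy.stats import hypergeom

    all_genes = {f"gene{i}" for i in range(1000)}      # background gene universe (made up)
    gwas_genes = {f"gene{i}" for i in range(0, 100)}   # genes near obesity-associated variants (made up)

    # Hypothetical marker genes per cell population from single-cell RNA-seq
    cell_type_markers = {
        "hippocampal_neurons":  {f"gene{i}" for i in range(50, 130)},
        "hypothalamic_neurons": {f"gene{i}" for i in range(80, 160)},
        "adipocytes":           {f"gene{i}" for i in range(500, 580)},
    }

    for cell_type, markers in cell_type_markers.items():
        overlap = len(gwas_genes & markers)
        # Probability of seeing at least this much overlap if markers were
        # drawn at random from the gene universe (hypergeometric test)
        p = hypergeom.sf(overlap - 1, len(all_genes), len(gwas_genes), len(markers))
        print(f"{cell_type:22s} overlap={overlap:3d}  p={p:.2e}")

In this toy setup the neuronal populations, whose markers overlap the GWAS genes, come out strongly enriched while the adipocytes do not, which mirrors the kind of comparison the study describes.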

Obesity is not a lack of willpower

We already know that the brain plays an important role in obesity by regulating how the body maintains its energy balance. It does so by processing signals from within the body about its energy stores and food intake, as well as external signals such as the sight and smell of food.

The new findings suggest that a person's risk of developing obesity is driven by populations of cells that process sensory stimuli and direct actions related to feeding and behavior. They also identified specific brain cell types that support a role of learning and memory in obesity.

"The next step is to explore how defects in parts of the brain traditionally known to regulate memory and integration of sensory signals actually makes us more vulnerable to become obese," says Tune H Pers.

Read more at Science Daily

Sep 27, 2020

The impact of human mobility on disease spread

 Due to continual improvements in transportation technology, people travel more extensively than ever before. Although this strengthened connection between faraway countries comes with many benefits, it also poses a serious threat to disease control and prevention. When infected humans travel to regions that are free of their particular contagions, they might inadvertently transmit their infections to local residents and cause disease outbreaks. This process has occurred repeatedly throughout history; some recent examples include the SARS outbreak in 2003, the H1N1 influenza pandemic in 2009, and -- most notably -- the ongoing COVID-19 pandemic.

Imported cases challenge the ability of nonendemic countries -- countries where the disease in question does not occur regularly -- to entirely eliminate the contagion. When combined with additional factors such as genetic mutation in pathogens, this issue makes the global eradication of many diseases exceedingly difficult, if not impossible. Therefore, reducing the number of infections is generally a more feasible goal. But to achieve control of a disease, health agencies must understand how travel between separate regions impacts its spread.

In a paper publishing on Tuesday in the SIAM Journal on Applied Mathematics, Daozhou Gao of Shanghai Normal University investigated the way in which human dispersal affects disease control and the total extent of an infection's spread. Few previous studies have explored the impact of human movement on infection size or disease prevalence -- defined as the proportion of individuals in a population that are infected with a specific pathogen -- in different regions. This area of research is especially pertinent during severe disease outbreaks, when governing leaders may dramatically reduce human mobility by closing borders and restricting travel. During these times, it is essential to understand how limiting people's movements affects the spread of disease.

To examine the spread of disease throughout a population, researchers often use mathematical models that sort individuals into multiple distinct groups, or "compartments." In his study, Gao utilized a particular type of compartmental model called the susceptible-infected-susceptible (SIS) patch model. He divided the population in each patch -- a group of people such as a community, city, or country -- into two compartments: infected people who currently have the designated illness, and people who are susceptible to catching it. Human migration then connects the patches. Gao assumed that the susceptible and infected subpopulations spread out at the same rate, which is generally true for diseases like the common cold that often only mildly affect mobility.

Each patch in Gao's SIS model has a certain infection risk that is represented by its basic reproduction number (R0) -- the average number of new cases that a single contagious person will cause within a fully susceptible population. "The larger the reproduction number, the higher the infection risk," Gao said. "So the patch reproduction number of a higher-risk patch is assumed to be higher than that of a lower-risk patch." However, this number only measures the initial transmission potential; it can rarely predict the true extent of infection.
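
As a rough illustration of how such a model behaves (a minimal sketch with made-up parameters, not Gao's exact formulation), the following two-patch SIS simulation couples a higher-risk and a lower-risk patch through a common dispersal rate and compares the total infection size for slow versus fast movement:

    # Minimal two-patch SIS sketch (illustrative parameters, not Gao's exact model):
    # susceptible and infected people move between a higher-risk and a
    # lower-risk patch at the same dispersal rate d.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta = np.array([0.5, 0.2])     # transmission rates (patch 1 = higher risk)
    gamma = np.array([0.25, 0.25])  # equal recovery rates in both patches
    print("patch reproduction numbers:", beta / gamma)  # R0 = beta / gamma per patch

    def sis_two_patch(t, y, d):
        S, I = y[:2], y[2:]
        N = S + I
        new_inf = beta * S * I / N
        # dispersal: simple exchange between the two patches at rate d
        swap = lambda x: np.array([x[1] - x[0], x[0] - x[1]])
        dS = -new_inf + gamma * I + d * swap(S)
        dI = new_inf - gamma * I + d * swap(I)
        return np.concatenate([dS, dI])

    y0 = [990.0, 990.0, 10.0, 10.0]   # 1,000 people per patch, 10 infected in each
    for d in (0.001, 1.0):            # slow versus fast dispersal
        sol = solve_ivp(sis_two_patch, (0, 2000), y0, args=(d,), rtol=1e-8)
        total_infected = sol.y[2:, -1].sum()
        print(f"dispersal rate {d}: about {total_infected:.0f} people infected at equilibrium")

In this toy setup the faster dispersal ends with noticeably more people infected at equilibrium, consistent with the behavior described below for patches that recover at the same rate.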

Gao first used his model to investigate the effect of human movement on disease control by comparing the total infection sizes that resulted when individuals dispersed quickly versus slowly. He found that if all patches recover at the same rate, large dispersal results in more infections than small dispersal. Surprisingly, an increase in the rate at which people disperse can actually reduce R0 while still increasing the total number of infections.

The SIS patch model can also help elucidate how dispersal impacts the distribution of infections and prevalence of the disease within each patch. Without diffusion between patches, a higher-risk patch will always have a higher prevalence of disease, but Gao wondered if the same was true when people can travel to and from that high-risk patch. The model revealed that diffusion can decrease infection size in the highest-risk patch since it exports more infections than it imports, but this consequently increases infections in the patch with the lowest risk. However, it is never possible for the highest-risk patch to have the lowest disease prevalence.

Using a numerical simulation based on the common cold -- the attributes of which are well-studied -- Gao delved deeper into human migration's impact on the total size of an infection. When Gao incorporated just two patches, his model exhibited a wide variety of behaviors under different environmental conditions. For example, the dispersal of humans often led to a larger total infection size than no dispersal, but rapid human scattering in one scenario actually reduced the infection size. Under different conditions, small dispersal was detrimental but large dispersal ultimately proved beneficial to disease management. Gao completely classified the combinations of mathematical parameters for which dispersal causes more infections than no dispersal in a two-patch environment. However, the situation becomes more complex if the model incorporates more than two patches.

Read more at Science Daily

Chromium steel was first made in ancient Persia

 Chromium steel -- similar to what we know today as tool steel -- was first made in Persia, nearly a millennium earlier than experts previously thought, according to a new study led by UCL researchers.

The discovery, published in the Journal of Archaeological Science, was made with the aid of a number of medieval Persian manuscripts, which led the researchers to an archaeological site in Chahak, southern Iran.

The findings are significant given that material scientists, historians and archaeologists have long considered that chromium steel was a 20th century innovation.

Dr Rahil Alipour (UCL Archaeology), lead author on the study, said: "Our research provides the first evidence of the deliberate addition of a chromium mineral within steel production. We believe this was a Persian phenomenon.

"This research not only delivers the earliest known evidence for the production of chromium steel dating back as early as the 11th century CE, but also provides a chemical tracer that could aid the identification of crucible steel artefacts in museums or archaeological collections back to their origin in Chahak, or the Chahak tradition."

Chahak is described in a number of historical manuscripts dating from the 12th to 19th century as a once famous steel production centre, and is the only known archaeological site within Iran's borders with evidence of crucible steel making.

While Chahak is registered as a site of archaeological importance, the exact location of its crucible steel production remained a mystery and is difficult to pin down today, given that numerous villages in Iran are named Chahak.

The manuscript 'al-Jamahir fi Marifah al-Jawahir' ('A Compendium to Know the Gems', 10th-11th c. CE), written by the Persian polymath Abu-Rayhan Biruni, was of particular importance to the researchers because it provided the only known crucible steel-making recipe.

This recipe recorded a mysterious ingredient, which the team identified as the mineral chromite, used for the production of chromium crucible steel.

The team used radiocarbon dating of a number of charcoal pieces retrieved from within a crucible slag and a smithing slag (by-products left over after the metal has been separated) to date the industry to the 11th to 12th century CE.

Crucially, analyses using Scanning Electron Microscopy enabled them to identify remains of the ore mineral chromite, which was described in Biruni's manuscript as an essential additive to the process.

They also detected 1-2 weight percent of chromium in steel particles preserved in the crucible slags, demonstrating that the chromite ore did form a chromium steel alloy -- a process not seen again until the late 19th and early 20th century.

Professor Thilo Rehren (UCL Archaeology and The Cyprus Institute), co-author on the study, said: "In a 13th century Persian manuscript translated by Dr Alipour, Chahak steel was noted for its fine and exquisite patterns, but its swords were also brittle, hence they lost their market value. Today the site is a small modest village, which prior to being identified as a site of archaeological interest, was only known for its agriculture."

The researchers believe it marks a distinct Persian crucible steel-making tradition -- separate from the more widely known Central Asian methods in Uzbekistan and Turkmenistan -- for the production of low-chromium steel, made with around 1 weight percent chromium.

Professor Marcos Martinon-Torres (University of Cambridge), last author on the study, said: "The process of identification can be quite long and complicated, and this is for several reasons. Firstly, the language and the terms used to record technological processes or materials may not be used anymore, or their meaning and attribution may be different from those used in modern science.

"Additionally, writing was restricted to social elites, rather than the individual that actually carried out the craft, which may have led to errors or omissions in the text."

Read more at Science Daily