Jul 30, 2022

Improving image sensors for machine vision

Image sensors measure light intensity, but angle, spectrum, and other aspects of light must also be extracted to significantly advance machine vision.

In Applied Physics Letters, published by AIP Publishing, researchers at the University of Wisconsin-Madison, Washington University in St. Louis, and OmniVision Technologies highlight the latest nanostructured components integrated on image sensor chips that are most likely to make the biggest impact in multimodal imaging.

The developments could enable autonomous vehicles to see around corners instead of just a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.

"Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines," co-author Yurui Qu, from the University of Wisconsin-Madison, said. "An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts."

Image sensors, which convert light into electrical signals, are composed of millions of pixels on a single chip. The challenge is how to combine and miniaturize multifunctional components as part of the sensor.

In their own work, the researchers detailed a promising approach to detect multiple-band spectra by fabricating an on-chip spectrometer. They deposited photonic crystal filters made up of silicon directly on top of the pixels to create complex interactions between incident light and the sensor.

The pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device -- less than a hundredth of a square inch in size -- is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.
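The inference step can be pictured as solving a linear system: each pixel's reading is a weighted sum of the incoming spectrum transmitted through its filter. A minimal sketch in Python, with entirely made-up filter responses and dimensions (the device's actual filter designs are not described here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 16 pixels, each behind a photonic-crystal filter with a
# different (hypothetical) spectral transmission over 8 wavelength bands.
n_pixels, n_bands = 16, 8
A = rng.uniform(0.0, 1.0, size=(n_pixels, n_bands))  # filter response matrix

true_spectrum = np.array([0.1, 0.3, 0.9, 0.7, 0.2, 0.05, 0.4, 0.6])
readings = A @ true_spectrum  # light energy recorded beneath each filter

# Recover the spectrum from the pixel readings by least squares.
recovered, *_ = np.linalg.lstsq(A, readings, rcond=None)
print(np.allclose(recovered, true_spectrum))  # True (noise-free case)
```

Having more pixels than spectral bands makes the system overdetermined, which is what would make such a reconstruction robust to measurement noise in practice.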

The researchers built a component that detects angular information to measure depth and construct 3D shapes at subcellular scales. Their work was inspired by directional hearing sensors found in animals, like geckos, whose heads are too small to determine where sound is coming from in the same way humans and other animals can. Instead, they use coupled eardrums to measure the direction of sound within a size that is orders of magnitude smaller than the corresponding acoustic wavelength.

Similarly, pairs of silicon nanowires were constructed as resonators to support optical resonance. The optical energy stored in the two resonators is sensitive to the incident angle, and the wire closer to the light source carries the stronger current. By comparing the currents from the two wires, the angle of the incoming light can be determined.
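As a toy illustration of the comparison step, assuming a hypothetical smooth angular response for each wire (the real resonator physics is more involved), the angle can be read off by inverting a calibration curve of the normalized current difference:

```python
import numpy as np

# Toy model (not the published device physics): each nanowire's photocurrent
# varies smoothly with incident angle, with the pair's responses offset.
def currents(theta_deg):
    t = np.radians(theta_deg)
    i1 = np.cos(t - 0.2) ** 2   # wire 1 response (hypothetical)
    i2 = np.cos(t + 0.2) ** 2   # wire 2 response (hypothetical)
    return i1, i2

# Calibrate: tabulate the normalized current difference versus angle.
angles = np.linspace(-30, 30, 601)
i1, i2 = currents(angles)
contrast = (i1 - i2) / (i1 + i2)  # monotonic in angle over this range

def estimate_angle(i1_meas, i2_meas):
    c = (i1_meas - i2_meas) / (i1_meas + i2_meas)
    return np.interp(c, contrast, angles)  # invert the calibration curve

print(round(estimate_angle(*currents(12.5)), 2))  # ≈ 12.5
```

Normalizing by the total current makes the estimate insensitive to overall light intensity, which is the usual motivation for a ratiometric readout.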

Read more at Science Daily

Taking your time makes a difference

Neanderthals are the closest relatives to modern humans. Comparisons with them can therefore provide fascinating insights into what makes present-day humans unique, for example regarding the development of the brain. The neocortex, the largest part of the outer layer of the brain, is unique to mammals and crucial for many cognitive capacities. It expanded dramatically during human evolution in species ancestral to both Neanderthals and modern humans, resulting in Neanderthals and modern humans having brains of similar size. However, almost nothing is known about how modern human and Neanderthal brains may have differed in terms of their development and function.

Researchers from the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden and the Max Planck Institute for Evolutionary Anthropology (MPI-EVA) in Leipzig have now discovered that neural stem cells -- the cells from which neurons in the developing neocortex derive -- spend more time preparing their chromosomes for division in modern humans than in Neanderthals. This results in fewer errors when chromosomes are distributed to the daughter cells in modern humans than in Neanderthals or chimpanzees, and could have consequences for how the brain develops and functions. This study shows cellular differences in the development of the brain between modern humans and Neanderthals.

After the ancestors of modern humans split from those of Neanderthals and Denisovans, their Asian relatives, changes in about one hundred amino acids -- the building blocks of proteins in cells and tissues -- arose and spread to almost all modern humans. The biological significance of these changes is largely unknown. However, six of those amino acid changes occurred in three proteins that play key roles in the distribution of chromosomes, the carriers of genetic information, to the two daughter cells during cell division.

The effects of the modern human variants on brain development

To investigate the significance of these six changes for neocortex development, the scientists first introduced the modern human variants in mice. Mice are identical to Neanderthals at those six amino acid positions, so these changes made them a model for the developing modern human brain. Felipe Mora-Bermúdez, the lead author of the study, describes the discovery: "We found that three modern human amino acids in two of the proteins cause a longer metaphase, a phase where chromosomes are prepared for cell division, and this results in fewer errors when the chromosomes are distributed to the daughter cells of the neural stem cells, just like in modern humans."

To check whether the Neanderthal set of amino acids has the opposite effect, the researchers then introduced the ancestral amino acids in human brain organoids -- miniature organ-like structures that can be grown from human stem cells in cell culture dishes in the lab and that mimic aspects of early human brain development. "In this case, metaphase became shorter and we found more chromosome distribution errors." According to Mora-Bermúdez, this shows that those three modern human amino acid changes in the proteins known as KIF18a and KNL1 are responsible for the fewer chromosome distribution mistakes seen in modern humans as compared to Neanderthal models and chimpanzees. He adds that "having mistakes in the number of chromosomes is usually not a good idea for cells, as can be seen in disorders like trisomies and cancer."

Read more at Science Daily

Jul 29, 2022

Researchers 3D print sensors for satellites

MIT scientists have created the first completely digitally manufactured plasma sensors for orbiting spacecraft. These plasma sensors, also known as retarding potential analyzers (RPAs), are used by satellites to determine the chemical composition and ion energy distribution of the atmosphere.

The 3D-printed and laser-cut hardware performed as well as state-of-the-art semiconductor plasma sensors, which are manufactured in a cleanroom, making them expensive and requiring weeks of intricate fabrication. By contrast, the 3D-printed sensors can be produced for tens of dollars in a matter of days.

Due to their low cost and speedy production, the sensors are ideal for CubeSats. These inexpensive, low-power, and lightweight satellites are often used for communication and environmental monitoring in Earth's upper atmosphere.

The researchers developed RPAs using a glass-ceramic material that is more durable than traditional sensor materials like silicon and thin-film coatings. By using the glass-ceramic in a fabrication process that was developed for 3D printing with plastics, they were able to create sensors with complex shapes that can withstand the wide temperature swings a spacecraft would encounter in lower Earth orbit.

"Additive manufacturing can make a big difference in the future of space hardware. Some people think that when you 3D-print something, you have to concede less performance. But we've shown that is not always the case. Sometimes there is nothing to trade off," says Luis Fernando Velásquez-García, a principal scientist in MIT's Microsystems Technology Laboratories (MTL) and senior author of a paper presenting the plasma sensors.

Joining Velásquez-García on the paper are lead author and MTL postdoc Javier Izquierdo-Reyes; graduate student Zoey Bigelow; and postdoc Nicholas K. Lubinsky. The research is published in Additive Manufacturing.

Versatile sensors

An RPA was first used in a space mission in 1959. The sensors detect the energy in ions, or charged particles, that are floating in plasma, which is a superheated mix of molecules present in the Earth's upper atmosphere. Aboard an orbiting spacecraft like a CubeSat, the versatile instruments measure energy and conduct chemical analyses that can help scientists predict the weather or monitor climate change.

The sensors contain a series of electrically charged meshes dotted with tiny holes. As plasma passes through the holes, electrons and other particles are stripped away until only ions remain. These ions create an electric current that the sensor measures and analyzes.
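In a conventional RPA, the retarding grid voltage is swept so that only ions above a threshold energy reach the collector, and differentiating the resulting current-voltage curve recovers the ion energy distribution. A schematic sketch of that idea with a made-up Gaussian distribution (not data from the paper):

```python
import numpy as np

# Toy sweep: a retarding grid at voltage V blocks all ions with energy
# below e*V, so the collected current is the tail integral of the ion
# energy distribution. Differentiating the I-V curve recovers it.
volts = np.linspace(0, 20, 201)                   # retarding potential (V)
dv = volts[1] - volts[0]
true_dist = np.exp(-((volts - 8.0) / 2.0) ** 2)   # made-up energy distribution

# Collected current at each voltage: sum of the distribution above it.
current = np.cumsum(true_dist[::-1])[::-1] * dv

recovered = -np.gradient(current, volts)          # -dI/dV ≈ distribution
peak_energy = volts[int(np.argmax(recovered))]
print(round(peak_energy, 1))  # ≈ 8.0, the peak of the toy distribution
```

The mesh alignment praised in the article matters here: better ion transmission raises the collected current, improving the signal-to-noise of the differentiated curve.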

Key to the success of an RPA is the housing structure that aligns the meshes. It must be electrically insulating while also able to withstand sudden, drastic swings in temperature. The researchers used a printable, glass-ceramic material that displays these properties, known as Vitrolite.

Pioneered in the early 20th century, Vitrolite was often used in colorful tiles that became a common sight in art deco buildings.

The durable material can also withstand temperatures as high as 800 degrees Celsius without breaking down, whereas polymers used in semiconductor RPAs start to melt at 400 degrees Celsius.

"When you make this sensor in the cleanroom, you don't have the same degree of freedom to define materials and structures and how they interact together. What made this possible is the latest developments in additive manufacturing," Velásquez-García says.

Rethinking fabrication

The 3D printing process for ceramics typically involves ceramic powder that is hit with a laser to fuse it into shapes, but this process often leaves the material coarse and creates weak points due to the high heat from the lasers.

Instead, the MIT researchers used vat polymerization, a process introduced decades ago for additive manufacturing with polymers or resins. With vat polymerization, a 3D structure is built one layer at a time by submerging it repeatedly into a vat of liquid material, in this case Vitrolite. Ultraviolet light is used to cure the material after each layer is added, and then the platform is submerged in the vat again. Each layer is only 100 microns thick (roughly the diameter of a human hair), enabling the creation of smooth, pore-free, complex ceramic shapes.

In digital manufacturing, objects described in a design file can be very intricate. This precision allowed the researchers to create laser-cut meshes with unique shapes so the holes lined up perfectly when they were set inside the RPA housing. This enables more ions to pass through, which leads to higher-resolution measurements.

Because the sensors were cheap to produce and could be fabricated so quickly, the team prototyped four unique designs.

While one design was especially effective at capturing and measuring a wide range of plasmas, like those a satellite would encounter in orbit, another was well-suited for sensing extremely dense and cold plasmas, which are typically only measurable using ultraprecise semiconductor devices.

This high precision could enable 3D-printed sensors for applications in fusion energy research or supersonic flight. The rapid prototyping process could even spur more innovation in satellite and spacecraft design, Velásquez-García adds.

"If you want to innovate, you need to be able to fail and afford the risk. Additive manufacturing is a very different way to make space hardware. I can make space hardware and if it fails, it doesn't matter because I can make a new version very quickly and inexpensively, and really iterate on the design. It is an ideal sandbox for researchers," he says.

While Velásquez-García is pleased with these sensors, in the future he wants to enhance the fabrication process. Reducing the thickness of layers or pixel size in glass-ceramic vat polymerization could create complex hardware that is even more precise. Moreover, fully additively manufacturing the sensors would make them compatible with in-space manufacturing. He also wants to explore the use of artificial intelligence to optimize sensor design for specific use cases, such as greatly reducing their mass while ensuring they remain structurally sound.

Read more at Science Daily

Octopus lures from the Mariana Islands found to be oldest in the world

An archaeological study has determined that cowrie-shell artifacts found throughout the Mariana Islands were lures used for hunting octopuses and that the devices, similar versions of which have been found on islands across the Pacific, are the oldest known artifacts of their kind in the world.

The study used carbon dating of archaeological layers to confirm that lures found on the Northern Mariana Islands of Tinian and Saipan were from about 1500 B.C., or 3,500 years ago.
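For context, such age estimates rest on the standard radiocarbon relationship: the fraction of carbon-14 remaining in organic material decays with a half-life of about 5,730 years. A minimal sketch of the arithmetic (the 65.5% figure below is back-computed for illustration, not a value from the study):

```python
import math

HALF_LIFE_C14 = 5730.0  # years, the commonly used C-14 half-life

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of C-14 left in a sample."""
    return -HALF_LIFE_C14 * math.log(fraction_remaining) / math.log(2)

# A sample retaining ~65.5% of its original C-14 dates to roughly
# 3,500 years ago, matching the age reported for the lures.
print(round(radiocarbon_age(0.655)))  # ≈ 3500
```

In practice such raw ages are calibrated against tree-ring and other records before being quoted as calendar dates.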

"That's back to the time when people were first living in the Mariana Islands. So we think these could be the oldest octopus lures in the entire Pacific region and, in fact, the oldest in the world," said Michael T. Carson, an archaeologist with the Micronesian Area Research Center at the University of Guam.

The study, titled "Let's catch octopus for dinner: Ancient inventions of octopus lures in the Mariana Islands of the remote tropical Pacific," is published in World Archaeology, a peer-reviewed academic journal. Carson, who holds a doctorate in anthropology, is the lead author of the study, assisted by Hsiao-chun Hung from The Australian National University in Canberra, Australia.

The fishing devices were made with cowrie shells, a type of sea snail and a favorite food of octopuses, that were connected by a fiber cord to a stone sinker and a hook.

They have been found in seven sites in the Mariana Islands. The oldest lures were excavated in 2011 from Sanhalom near the House of Taga in Tinian and in 2016 from Unai Bapot in Saipan. Other locations include Achugao in Saipan, Unai Chulu in Tinian, and Mochom at Mangilao Golf Course, Tarague Beach, and Ritidian Beach Cave in Guam.

Known artifacts, unknown purpose -- until now

"The artifacts have been known -- we knew about them. It just took a long time considering the possibilities, the different hypotheses, of what they could be," Carson said. "The conventional idea -- what we were told long ago from the Bishop Museum [in Honolulu] -- was that these must be for scraping breadfruit or other plants, like maybe taro. [But] they don't look like that."

The shells didn't have the serrated edge of other known food-scraping tools. With their holes and grooves where the fiber cord would have been attached as well as the stone sinker components, they appeared a closer match to octopus lures found in Tonga from about 3,000 years ago, or 1100 B.C.

"We're confident they are the pieces of octopus lures, and we're confident they date back to 1500 B.C.," Carson said.

An invention of the ancient CHamorus?

Carson said the question now becomes: "Did the ancient CHamoru people invent this adaptation to their environment during the time when they first lived in the islands?"

That's a possibility, he said, the other being that they brought the tradition with them from their former homeland; however, no artifacts of this kind have yet been discovered in the potential homelands of the first Marianas settlers.

If the CHamoru people did invent the first octopus lures, it provides new insight into their ingenuity and ability to problem solve -- having to create novel and specialized ways to live in a new environment and take advantage of an available food source.

"It tells us that […] this kind of food resource was important enough for them that they invented something very particular to trap these foods," Carson said. "We can't say that it contributed to a massive percentage of their diet -- it probably did not -- but it was important enough that it became what we would call a 'tradition' in archaeology."

The next question to look at, Carson said, is whether there are similar objects anywhere else from an older time.

Read more at Science Daily

Chores, exercise, and social visits linked to lower risk of dementia

Physical and mental activities, such as household chores, exercise, and visiting with family and friends, may help lower the risk of dementia, according to a new study published in the July 27, 2022, online issue of Neurology®, the medical journal of the American Academy of Neurology. The study looked at the effects of these activities, as well as mental activities and use of electronic devices in people both with and without higher genetic risk for dementia.

"Many studies have identified potential risk factors for dementia, but we wanted to know more about a wide variety of lifestyle habits and their potential role in the prevention of dementia," said study author Huan Song, MD, PhD, of Sichuan University in Chengdu, China. "Our study found that exercise, household chores, and social visits were linked to a reduced risk of various types of dementia."

The study involved 501,376 people without dementia from a UK database, with an average age of 56.

Participants filled out questionnaires at the beginning of the study, including one on physical activities. They were asked how often they participated in activities such as climbing a flight of stairs, walking, and participating in strenuous sports. They were also asked about household chores, job-related activities, and what kind of transportation they used, including walking or biking to work.

Participants completed another questionnaire on mental activities. They were asked about their education level, whether they attend adult education classes, how often they visit with friends and family, visit pubs or social clubs or religious groups, and how often they use electronic devices such as playing computer games, watching TV, and talking on the phone.

Additionally, participants reported whether they had any immediate family members with dementia. This helped researchers determine if they had a genetic risk for Alzheimer's disease. Study participants were followed an average of 11 years. At the end of the study, 5,185 people had developed dementia.

After adjusting for multiple factors such as age, income, and smoking, researchers found that most of the physical and mental activities studied showed links to the risk of dementia. Importantly, the findings remained after accounting for the high correlations and interactions among these activities. People who were highly engaged in activity patterns including frequent exercise, household chores, and daily visits with family and friends had 35%, 21%, and 15% lower risk of dementia, respectively, compared to people who were the least engaged in these activity patterns.

Researchers also looked at dementia incidence rates by identified activity patterns. The rate in people who exercised frequently was 0.45 cases for every 1,000 person-years compared to 1.59 for people who rarely exercised. Person-years take into account the number of people in a study as well as the amount of time spent in the study. Those who frequently did household chores had a rate of 0.86 cases for every 1,000 person-years compared to 1.02 for people who rarely did household chores. People who visited family daily had a rate of 0.62 cases for every 1,000 person-years compared to 0.8 cases for those who only visited friends and family once every few months.
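As a quick illustration of the person-years arithmetic (the cohort numbers below are hypothetical, chosen only to reproduce the reported 0.45 rate):

```python
def incidence_per_1000_person_years(cases, n_people, avg_years_followed):
    """Incidence rate: cases divided by total follow-up time, per 1,000."""
    person_years = n_people * avg_years_followed
    return cases / person_years * 1000

# Hypothetical cohort: 10,000 frequent exercisers followed ~11 years on
# average, with 50 dementia cases, gives 0.45 cases per 1,000 person-years.
rate = incidence_per_1000_person_years(50, 10_000, 11)
print(round(rate, 2))  # 0.45
```

Using person-years rather than a raw percentage keeps groups comparable even when people are followed for different lengths of time.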

"Our study has found that by engaging more frequently in healthy physical and mental activities people may reduce their risk of dementia," Song said. "More research is needed to confirm our findings. However, our results are encouraging that making these simple lifestyle changes may be beneficial."

The researchers found that all participants benefited from the protective effect of physical and mental activities, whether or not they had a family history of dementia.

A limitation of the study was that people reported their own physical and mental activity, so they may not have remembered and reported these activities correctly.

Read more at Science Daily

Some types of stress could be good for brain functioning

It may feel like an anvil hanging over your head, but that looming deadline stressing you out at work may actually be beneficial for your brain, according to new research from the Youth Development Institute at the University of Georgia.

Published in Psychiatry Research, the study found that low to moderate levels of stress can help individuals develop resilience and reduce the risk of developing mental health disorders, like depression and antisocial behaviors. Low to moderate stress can also help individuals to cope with future stressful encounters.

"If you're in an environment where you have some level of stress, you may develop coping mechanisms that will allow you to become a more efficient and effective worker and organize yourself in a way that will help you perform," said Assaf Oshri, lead author of the study and an associate professor in the College of Family and Consumer Sciences.

The stress that comes from studying for an exam, preparing for a big meeting at work or pulling longer hours to close the deal can all potentially lead to personal growth. Being rejected by a publisher, for example, may lead a writer to rethink their style. And being fired could prompt someone to reconsider their strengths and whether they should stay in their field or branch out to something new.

But the line between the right amount of stress and too much stress is a thin one.

"It's like when you keep doing something hard and get a little callous on your skin," continued Oshri, who also directs the UGA Youth Development Institute. "You trigger your skin to adapt to this pressure you are applying to it. But if you do too much, you're going to cut your skin."

Good stress can act as a vaccine against the effect of future adversity

The researchers relied on data from the Human Connectome Project, a national project funded by the National Institutes of Health that aims to provide insight into how the human brain functions. For the present study, the researchers analyzed the project's data from more than 1,200 young adults who reported their perceived stress levels using a questionnaire commonly used in research to measure how uncontrollable and stressful people find their lives.

Participants answered questions about how frequently they experienced certain thoughts or feelings, such as "in the last month, how often have you been upset because of something that happened unexpectedly?" and "in the last month, how often have you found that you could not cope with all the things that you had to do?"

Their neurocognitive abilities were then assessed using tests that measured attention and ability to suppress automatic responses to visual stimuli; cognitive flexibility, or ability to switch between tasks; picture sequence memory, which involves remembering an increasingly long series of objects; working memory and processing speed.

The researchers compared those findings with the participants' answers from multiple measures of anxious feelings, attention problems and aggression, among other behavioral and emotional problems.

The analysis found that low to moderate levels of stress were psychologically beneficial, potentially acting as a kind of inoculation against developing mental health symptoms.

"Most of us have some adverse experiences that actually make us stronger," Oshri said. "There are specific experiences that can help you evolve or develop skills that will prepare you for the future."

But the ability to tolerate stress and adversity varies greatly according to the individual.

Things like age, genetic predispositions and having a supportive community to fall back on in times of need all play a part in how well individuals handle challenges. While a little stress can be good for cognition, Oshri warns that continued levels of high stress can be incredibly damaging, both physically and mentally.

"At a certain point, stress becomes toxic," he said. "Chronic stress, like the stress that comes from living in abject poverty or being abused, can have very bad health and psychological consequences. It affects everything from your immune system, to emotional regulation, to brain functioning. Not all stress is good stress."

Read more at Science Daily

Jul 28, 2022

Scientists discover new 'origins of life' chemical reactions

Four billion years ago, the Earth looked very different than it does today, devoid of life and covered by a vast ocean. Over the course of millions of years, in that primordial soup, life emerged. Researchers have long theorized how molecules came together to spark this transition. Now, scientists at Scripps Research have discovered a new set of chemical reactions that use cyanide, ammonia and carbon dioxide -- all thought to be common on the early earth -- to generate amino acids and nucleic acids, the building blocks of proteins and DNA.

"We've come up with a new paradigm to explain this shift from prebiotic to biotic chemistry," says Ramanarayanan Krishnamurthy, PhD, an associate professor of chemistry at Scripps Research, and lead author of the new paper, published July 28, 2022 in the journal Nature Chemistry. "We think the kind of reactions we've described are probably what could have happened on early earth."

In addition to giving researchers insight into the chemistry of the early earth, the newly discovered chemical reactions are also useful in certain manufacturing processes, such as the generation of custom labeled biomolecules from inexpensive starting materials.

Earlier this year, Krishnamurthy's group showed how cyanide can enable the chemical reactions that turn prebiotic molecules and water into basic organic compounds required for life. Unlike previously proposed reactions, this one worked at room temperature and in a wide pH range. The researchers wondered whether, under the same conditions, there was a way to generate amino acids, more complex molecules that compose proteins in all known living cells.

In cells today, amino acids are generated from precursors called α-keto acids using both nitrogen and specialized proteins called enzymes. Researchers have found evidence that α-keto acids likely existed early in Earth's history. However, many have hypothesized that before the advent of cellular life, amino acids must have been generated from completely different precursors, aldehydes, rather than α-keto acids, since enzymes to carry out the conversion did not yet exist. But that idea has led to debate about how and when the switch occurred from aldehydes to α-keto acids as the key ingredient for making amino acids.

After their success using cyanide to drive other chemical reactions, Krishnamurthy and his colleagues suspected that cyanide, even without enzymes, might also help turn α-keto acids into amino acids. Because they knew nitrogen would be required in some form, they added ammonia -- a form of nitrogen that would have been present on the early earth. Then, through trial and error, they discovered a third key ingredient: carbon dioxide. With this mixture, they quickly started seeing amino acids form.

"We were expecting it to be quite difficult to figure this out, and it turned out to be even simpler than we had imagined," says Krishnamurthy. "If you mix only the keto acid, cyanide and ammonia, it just sits there. As soon as you add carbon dioxide, even trace amounts, the reaction picks up speed."

Because the new reaction is relatively similar to what occurs today inside cells -- except for being driven by cyanide instead of a protein -- it seems more likely to be the source of early life, rather than drastically different reactions, the researchers say. The research also helps bring together two sides of a long-standing debate about the importance of carbon dioxide to early life, concluding that carbon dioxide was key, but only in combination with other molecules.

In the process of studying their chemical soup, Krishnamurthy's group discovered that a byproduct of the same reaction is orotate, a precursor to nucleotides that make up DNA and RNA. This suggests that the same primordial soup, under the right conditions, could have given rise to a large number of the molecules that are required for the key elements of life.

"What we want to do next is continue probing what kind of chemistry can emerge from this mixture," says Krishnamurthy. "Can amino acids start forming small proteins? Could one of those proteins come back and begin to act as an enzyme to make more of these amino acids?"

Read more at Science Daily

Exploring factors that may underlie how domestic cats can live in groups

A new analysis explores relationships between domestic cats' hormone levels, gut microbiomes, and social behaviors, shedding light on how these solitary animals live in high densities. Hikari Koyasu of Azabu University in Kanagawa, Japan, and colleagues present these findings in the open-access journal PLOS ONE on July 27, 2022.

Most feline species display solitary and territorial behavior, but domestic cats often live in high densities, raising the question of what strategies cats use to establish cohabitating groups. Social behaviors of cats can be influenced by hormones and the mix of different microbe species living in their guts -- known as the gut microbiome. Studying these factors could help illuminate the group dynamics of cohabitating cats.

In that vein, Koyasu and colleagues conducted a two-week-long study of three different groups of five cats living together in a shelter. They used video cameras to observe the cats' behavior, measured hormone levels in their urine, and collected feces to evaluate the mix of microbial species in the cats' microbiomes.

Statistical analysis of the data revealed that cats with high levels of the hormones cortisol and testosterone had less contact with other cats, and those with high testosterone were more likely to try to escape. Meanwhile, cats with low cortisol and testosterone were more tolerant in their interactions with other cats. The researchers also found greater similarity of gut microbiomes between cats who had more frequent contact with each other, and they found links between the gut microbiome, social behavior, and cortisol levels.

Meanwhile, contrary to the researchers' expectations from research on animals that typically live in groups, cats with high levels of the hormone oxytocin did not display bonding behaviors described as "socially affiliative." This suggests that oxytocin may function differently when typically solitary animals live in groups than it does in species that naturally live in groups.

The researchers outline possible directions for future research to further deepen understanding of cohabitating cat dynamics, such as a follow-up study that observes cats for several months, rather than just two weeks, and investigations to tease out causal relationships between hormones and social behaviors.

Read more at Science Daily

Sprint then stop? Brain is wired for the math to make it happen

Your new apartment is just a couple of blocks down the street from the bus stop but today you are late and you see the bus roll past you. You break into a full sprint. Your goal is to get to the bus as fast as possible and then to stop exactly in front of the doors (which are never in exactly the same place along the curb) to enter before they close. To stop quickly and precisely enough, a new MIT study in mice finds, the mammalian brain is niftily wired to implement principles of calculus.

One might think that coming to a screeching halt at a target after a flat out run would be as simple as a reflex, but catching a bus or running right up to a visually indicated landmark to earn a water reward (as the mice did), is a learned, visually guided, goal-directed feat. In such tasks, which are a major interest in the lab of senior author Mriganka Sur, Newton Professor of Neuroscience in The Picower Institute for Learning and Memory at MIT, the crucial decision to switch from one behavior (running) to another (stopping) comes from the brain's cortex, where the brain integrates the learned rules of life with sensory information to guide plans and actions.

"The goal is where the cortex comes in," said Sur, a faculty member of MIT's Department of Brain and Cognitive Sciences. "Where am I supposed to stop to achieve this goal of getting on the bus."

And that's also where it gets complicated. The mathematical models of the behavior that postdoc and study lead author Elie Adam developed predicted that a "stop" signal going directly from the M2 region of the cortex to regions in the brainstem, which actually control the legs, would be processed too slowly.

"You have M2 that is sending a stop signal, but when you model it and go through the mathematics, you find that this signal, by itself, would not be fast enough to make the animal stop in time," said Adam, whose work appears in the journal Cell Reports.

So how does the brain speed up the process? What Adam, Sur and co-author Taylor Johns found was that M2 sends the signal to an intermediary region called the subthalamic nucleus (STN), which then sends out two signals down two separate paths that re-converge in the brainstem. Why? Because the difference made by those two signals, one inhibitory and one excitatory, arriving one right after the other turns the problem from one of integration, which is a relatively slow adding up of inputs, to differentiation, which is a direct recognition of change. The shift in calculus implements the stop signal much more quickly.
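The shift from integration to differentiation can be sketched numerically. The toy model below is only an illustration, not the study's actual model; the 10 ms delay, 100 ms integrator time constant, and step-shaped "stop" command are all assumed values. A unit that subtracts a slightly delayed copy of the command from the original (a finite difference, which approximates a derivative) registers the change almost immediately, while a leaky integrator of the same command takes tens of milliseconds to reach threshold:

```python
import numpy as np

dt = 0.001                      # 1 ms time step
t = np.arange(0.0, 1.0, dt)
# Hypothetical "stop" command from M2: a step that switches on at t = 0.5 s
u = (t >= 0.5).astype(float)

# Slow pathway: a leaky integrator of the command (assumed 100 ms time constant)
tau = 0.1
integ = np.zeros_like(u)
for i in range(1, len(t)):
    integ[i] = integ[i - 1] + dt * (u[i] - integ[i - 1]) / tau

# Fast pathway: the command minus a copy delayed by 10 ms, i.e. a finite
# difference that approximates the derivative of the input
delay = int(0.010 / dt)
diff = u - np.concatenate([np.zeros(delay), u[:-delay]])

# Latency for each pathway to reach half its peak after the step
onset = np.searchsorted(t, 0.5)
t_integ = t[onset + np.argmax(integ[onset:] >= 0.5)] - 0.5
t_diff = t[onset + np.argmax(diff[onset:] >= 0.5)] - 0.5
print(f"integrator reaches threshold after {t_integ * 1000:.0f} ms")
print(f"differencing pathway after {t_diff * 1000:.0f} ms")
```

Running the sketch shows a near-zero latency for the differencing pathway versus roughly 70 ms for the integrator, mirroring why an excitatory signal chased closely by an inhibitory one can implement a much faster stop than simple summation of inputs.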

Adam's model, which employed systems and control theory from engineering, accurately predicted the speed needed for a proper stop and showed that differentiation would be necessary to achieve it, but it took a series of anatomical investigations and experimental manipulations to confirm the model's predictions.

First, Adam confirmed that M2 was indeed producing a surge in neural activity only when the mice needed to achieve their trained goal of stopping at the landmark. He also showed it was sending the resulting signals to the STN. Other stops for other reasons did not employ that pathway. Moreover, artificially activating the M2-STN pathway compelled the mice to stop, and artificially inhibiting it caused mice to overrun the landmark somewhat more often.

The STN then needed to signal the brainstem -- specifically the pedunculopontine nucleus (PPN) in the mesencephalic locomotor region. But when the scientists looked at neural activity starting in M2 and quickly arriving in the PPN, they saw that different types of cells in the PPN responded with different timing. In particular, before the stop, excitatory cells were active, and their activity reflected the speed of the animal during stops. Then, looking at the STN, they saw two kinds of surges of activity around stops -- one slightly slower than the other -- that were conveyed either directly to the PPN through excitation or indirectly via the substantia nigra pars reticulata (SNr) through inhibition. The net result of the interplay of these signals in the PPN was an inhibition sharpened by excitation. That sudden change could be quickly detected by differentiation to implement stopping.

"An inhibitory surge followed by excitation can create a sharp [change of] signal," Sur said.

The study dovetails with other recent papers. Working with Picower Institute investigator Emery N. Brown, Adam recently produced a new model of how deep brain stimulation in the STN quickly corrects motor problems that result from Parkinson's disease. And last year members of Sur's lab, including Adam, published a study showing how the cortex overrides the brain's more deeply ingrained reflexes in visually guided motor tasks. Together such studies contribute to understanding how the cortex can consciously control instinctually wired motor behaviors but also how important deeper regions, such as the STN, are to quickly implementing goal-directed behavior. A recent review from the lab expounds on this.

Read more at Science Daily

Quantum cryptography: Hacking is futile

The Internet is teeming with highly sensitive information. Sophisticated encryption techniques generally ensure that such content cannot be intercepted and read. But in the future high-performance quantum computers could crack these keys in a matter of seconds. It is just as well, then, that quantum mechanical techniques not only enable new, much faster algorithms, but also exceedingly effective cryptography.

Quantum key distribution (QKD) -- as the jargon has it -- is secure against attacks on the communication channel, but not against attacks on or manipulations of the devices themselves. The devices could therefore output a key which the manufacturer had previously saved and might conceivably have forwarded to a hacker. With device-independent QKD (abbreviated to DIQKD), it is a different story. Here, the cryptographic protocol is independent of the device used. Theoretically known since the 1990s, this method has now been experimentally realized for the first time, by an international research group led by LMU physicist Harald Weinfurter and Charles Lim from the National University of Singapore (NUS).

For exchanging quantum mechanical keys, there are different approaches available. Either light signals are sent by the transmitter to the receiver, or entangled quantum systems are used. In the present experiment, the physicists used two quantum mechanically entangled rubidium atoms, situated in two laboratories located 400 meters from each other on the LMU campus. The two locations are connected via a fiber optic cable 700 meters in length, which runs beneath Geschwister Scholl Square in front of the main building.

To create entanglement, the scientists first excite each of the atoms with a laser pulse. After this, the atoms spontaneously fall back into their ground state, each thereby emitting a photon. Due to the conservation of angular momentum, the spin of the atom is entangled with the polarization of its emitted photon. The two light particles travel along the fiber optic cable to a receiver station, where a joint measurement of the photons indicates an entanglement of the atomic quantum memories.

To exchange a key, Alice and Bob -- as the two parties are usually dubbed by cryptographers -- measure the quantum states of their respective atom. In each case, this is done randomly in two or four directions. If the directions correspond, the measurement results are identical on account of entanglement and can be used to generate a secret key. With the other measurement results, a so-called Bell inequality can be evaluated. Physicist John Stewart Bell originally developed these inequalities to test whether nature can be described with hidden variables. "It turned out that it cannot," says Weinfurter. In DIQKD, the test is used "specifically to ensure that there are no manipulations at the devices -- that is to say, for example, that hidden measurement results have not been saved in the devices beforehand," explains Weinfurter.
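As a rough illustration of what such a test evaluates, the snippet below computes the CHSH form of a Bell inequality for an ideally entangled pair, using the textbook correlation E(a, b) = -cos(a - b) and the standard measurement angles. This is generic quantum mechanics, not the specifics of the LMU-NUS protocol:

```python
import numpy as np

# Correlation between measurements at angles a and b for a maximally
# entangled (singlet) pair: E(a, b) = -cos(a - b)
def E(a, b):
    return -np.cos(a - b)

# Measurement directions (in radians) that maximize the CHSH violation
a0, a1 = 0.0, np.pi / 2          # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4   # Bob's two settings

# Any local hidden-variable model must satisfy |S| <= 2
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2), violating the classical bound of 2
```

A measured value above 2 certifies genuine entanglement no matter how the devices work internally, which is what makes the protocol device-independent.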

In contrast to earlier approaches, the implemented protocol, which was developed by researchers at NUS, uses two measurement settings for key generation instead of one: "By introducing the additional setting for key generation, it becomes more difficult to intercept information, and therefore the protocol can tolerate more noise and generate secret keys even for lower-quality entangled states," says Charles Lim.

With conventional QKD methods, by contrast, security is guaranteed only when the quantum devices used have been characterized sufficiently well. "And so, users of such protocols have to rely on the specifications furnished by the QKD providers and trust that the device will not switch into another operating mode during the key distribution," explains Tim van Leent, one of the four lead authors of the paper alongside Wei Zhang and Kai Redeker. It has been known for at least a decade that older QKD devices could easily be hacked from outside, continues van Leent.

"With our method, we can now generate secret keys with uncharacterized and potentially untrustworthy devices," explains Weinfurter. In fact, he had his doubts initially whether the experiment would work. But his team proved his misgivings were unfounded and significantly improved the quality of the experiment, as he happily admits. Alongside the cooperation project between LMU and NUS, another research group from the University of Oxford demonstrated the device-independent key distribution. To do this, the researchers used a system comprising two entangled ions in the same laboratory. "These two projects lay the foundation for future quantum networks, in which absolutely secure communication is possible between far distant locations," says Charles Lim.

Read more at Science Daily

Jul 27, 2022

Scientists discover places on the moon where it's always 'sweater weather'

Future human explorers on the moon might have 99 problems but staying warm or cool won't be one. A team led by planetary scientists at UCLA has discovered shady locations within pits on the moon that always hover around a comfortable 63 degrees Fahrenheit.

The pits, and the caves to which they may lead, would make safer, more thermally stable base camps for lunar exploration and long-term habitation than the rest of the moon's surface, which heats up to 260 degrees Fahrenheit during the day and drops to 280 degrees below zero at night.

Pits were first discovered on the moon in 2009, and since then, scientists have wondered if they led to caves that could be explored or used as shelters. About 16 of the more than 200 pits are probably collapsed lava tubes, said Tyler Horvath, a UCLA doctoral student in planetary science, who led the new research. Two of the most prominent pits have visible overhangs that clearly lead to some sort of cave or void, and there is strong evidence that another's overhang may also lead to a large cave.

Lava tubes, also found on Earth, form when molten lava flows beneath a field of cooled lava or a crust forms over a river of lava, leaving a long, hollow tunnel. If the ceiling of a solidified lava tube collapses, it opens a pit that can lead into the rest of the cavelike tube.

Horvath processed images from the Diviner Lunar Radiometer Experiment -- a thermal camera and one of six instruments on NASA's robotic Lunar Reconnaissance Orbiter -- to find out if the temperatures within the pits diverged from those on the surface.

Focusing on a roughly cylindrical 100-meter-deep depression about the length and width of a football field in an area of the moon known as the Mare Tranquillitatis, Horvath and his colleagues used computer modeling to analyze the thermal properties of the rock and lunar dust and to chart the pit's temperatures over a period of time.

The results, recently published in the journal Geophysical Research Letters, revealed that temperatures within the permanently shadowed reaches of the pit fluctuate only slightly throughout the lunar day, remaining at around 63 degrees. If a cave extends from the bottom of the pit, as images taken by the Lunar Reconnaissance Orbiter Camera suggest, it too would have this relatively comfortable temperature.

The research team, which also included UCLA professor of planetary science David Paige and Paul Hayne of the University of Colorado Boulder, believes the shadowing overhang is responsible for the steady temperature, limiting how hot things get during the day and preventing heat from radiating away at night. Meanwhile, the sunbaked part of the pit floor hits daytime temperatures close to 300 degrees, some 40 degrees hotter than the moon's surface.

"Because the Tranquillitatis pit is the closest to the lunar equator, the illuminated floor at noon is probably the hottest place on the entire moon," said Horvath.

A day on the moon lasts nearly 15 Earth days, during which the surface is constantly bombarded by sunlight and is frequently hot enough to boil water. Unimaginably cold nights also last about 15 Earth days. Inventing heating and cooling equipment that can operate under these conditions and producing enough energy to power it nonstop could prove an insurmountable barrier to lunar exploration or habitation. Solar power -- NASA's most common form of power generation -- doesn't work at night, after all. (NASA currently has no plans to establish an exploration base camp or habitations on the moon.)

Building bases in the shadowed parts of these pits allows scientists to focus on other challenges, like growing food, providing oxygen for astronauts, gathering resources for experiments and expanding the base. The pits or caves would also offer some protection from cosmic rays, solar radiation and micrometeorites.

"Humans evolved living in caves, and to caves we might return when we live on the moon," said Paige, who leads the Diviner Lunar Radiometer Experiment.

Diviner has been mapping the moon continuously since 2009, producing NASA's second largest planetary dataset and providing the most detailed and comprehensive thermal measurements of any object in our solar system, including Earth. The team's current work on lunar pits has improved data from the Diviner experiment.

"Because nobody else had looked at things this small with Diviner, we found that it had a bit of double vision, causing all of our maps to be a bit blurry," said Horvath. The team worked to align the many images taken by the instrument until they could achieve an accurate thermal reading down to the level of a single pixel. This process yielded much higher resolution maps of the moon's surface.

Data from the early stages of this lunar pit thermal modeling project were used to help develop the thermal management system of the rover for NASA's proposed Moon Diver mission. Horvath and Hayne were part of the science team for this mission, which aims to have the rover rappel into the Tranquillitatis pit to research the layers of lava flows seen in its walls and to explore any existing cave.

Read more at Science Daily

Working memory depends on reciprocal interactions across the brain

How does the brain keep in mind a phone number before dialling? Working memory is an essential component of cognition, allowing the brain to remember information temporarily and use it to guide future behaviour. While many previous studies have revealed the involvement of several brain areas, until now it remained unclear as to how these multiple regions interact to represent and maintain working memory.

In a new study, published today in Nature, neuroscientists at the Sainsbury Wellcome Centre at UCL investigated the reciprocal interactions between two brain regions that represent visual working memory in mice. The team found that communication between these two loci of working memory, parietal cortex and premotor cortex, was co-dependent on instantaneous timescales.

"There are many different types of working memory and over the past 40 years scientists have been trying to work out how these are represented in the brain. Sensory working memory in particular has been challenging to study, as during standard laboratory tasks many other processes are happening simultaneously, such as timing, motor preparation, and reward expectation," said Dr Ivan Voitov, Research Fellow in the Mrsic-Flogel lab and first author on the paper.

To overcome this challenge, the SWC researchers compared a working memory-dependent task with a simpler working memory-independent task. In the working memory task, mice were given a sensory stimulus followed by a delay and then had to match the next stimulus to the one they saw prior to the delay. This meant that during the delay the mice needed a representation in their working memory of the first stimulus to succeed in the task and receive a reward. In contrast, in the working memory-independent task, the decision the mice made on the secondary stimulus was unrelated to the first stimulus.

By contrasting these two tasks, the researchers were able to observe the part of the neural activity that was dependent on working memory as opposed to the natural activity that was just related to the task environment. They found that most neural activity was unrelated to working memory, and instead working memory representations were embedded within 'high-dimensional' modes of activity, meaning that only small fluctuations around the mean firing of individual cells were together carrying the working memory information.

To understand how these representations are maintained in the brain, the neuroscientists used a technique called optogenetics to selectively silence parts of the brain during the delay period and observed the disruption to what the mice were remembering. Interestingly, they found that silencing working memory representations in either one of the parietal or premotor cortical areas led to similar deficits in the mice's ability to remember the previous stimulus, implying that these representations were instantaneously co-dependent on each other during the delay.

To test this hypothesis, the researchers disrupted one area while recording the activity that was being communicated back to it by the other area. When they disrupted parietal cortex, the activity that was being communicated by premotor cortex to parietal cortex was largely unchanged in terms of average activity. However, the representation of working memory activity specifically was disrupted. This was also true in the reverse experiment: when they disrupted premotor cortex and looked at parietal cortex, they also observed working memory-specific disruption of cortico-cortical communication.

"By recording from and manipulating long-range circuits in the cerebral cortex, we uncovered that working memory resides within co-dependent activity patterns in cortical areas that are interconnected, thereby maintaining working memory through instantaneous reciprocal communication," said Professor Tom Mrsic-Flogel, Director of the Sainsbury Wellcome Centre and co-author on the paper.

The next step for the researchers is to look for patterns of activity that are shared between these areas. They also plan to study more sophisticated working memory tasks that modulate the specific information that is being stored in working memory in addition to its strength. For this, the neuroscientists will use interleaved distractors containing sensory information that bias what the mouse thinks is the next target. Such experiments will allow them to develop a more nuanced understanding of working memory representations.

Read more at Science Daily

Air quality can be better for active commuters than drivers, research shows

New Leicester research has found that people who commute by car can be subject to higher levels of harmful gases than those who walk or cycle to work.

The study, conducted by researchers at the University of Leicester in partnership with Leicester City Council, is published in the Journal of Transport & Health.

Experts found that in-cabin levels of nitrogen dioxide (NO2) -- a key indicator of air quality and harmful when breathed in by humans -- were higher for weekday morning commuters travelling by car, compared to their counterparts travelling by bike or on foot. However, the concentration of fine particulate matter (PM2.5) was shown to be slightly lower for drivers.

Researchers studied four typical routes used by Leicester commuters between city suburbs and the city centre, and used air quality sensors in volunteer walkers' and cyclists' backpacks to measure the concentrations of NO2 and PM2.5. The same devices were also fitted in the cabin of a Nissan Leaf electric vehicle. An electric car was used in order to determine driver exposure to pollutants without interference from the car's own exhaust.

Their findings show that NO2 concentrations can be higher in car cabins (even electric car cabins) than alongside the road where people are walking and cycling. Some PM2.5 can be removed, for example by pollen filters, meaning PM2.5 might be slightly lower in car cabins than alongside the road, but NO2 can be drawn directly into the cabin from the exhaust of traffic in front. This will change as more electric cars come into use, but provides evidence to support the benefits of getting out of a car and walking or cycling instead.

Dr Rikesh Panchal is a Research Associate within the University of Leicester's Centre for Environmental Health and Sustainability, and lead author for the study. He said:

"Anecdotal evidence on public perceptions of air quality during commuting collected by Leicester City Council suggested that people believed that exposure to harmful pollutants was higher for active commuters than for car occupants.

"However, the results of this study show that commuting by car in cities during rush hour can result in larger concentrations of pollutants for people inside the vehicle compared to walkers or cyclists making the same journey. This heightened exposure can have detrimental effects on health.

"Additionally, there are well known health benefits of exercise through walking or cycling. Therefore, policies and incentives that encourage drivers to get out of their car and take up active commuting will benefit many aspects of commuters' health as well as improving the overall air quality of the environment."

The study was conducted in conjunction with Leicester City Council's transport and public health teams. Hannah May, who runs the city council's business engagement programme in the active travel team, helped to set up the research. She said:

"It came out of conversations at our active travel roadshows, which we hold at workplaces to help businesses support their staff with sustainable travel. We were asked how air quality might affect people who travel on foot or by bike in Leicester. I wanted to know what the scientific evidence was.

"Thanks to our partnership with the University of Leicester, I was able to take this idea to them. We carried out 16 weeks of testing and the university came up with the methodology and protocol, and did the data analysis. Together, we've come up with a fascinating piece of research that will help people to make informed choices about the way they choose to travel. We were also able to use the city council's public health expertise to help analyse the benefits of active travel and measure them against the effects of air pollution."

Deputy city mayor for transport and environment, Cllr Adam Clarke, said:

"This strong partnership between the University of Leicester and the city council is providing us with high-quality evidence to support our vision for connected, healthy and green transport for Leicester.

"Leicester has seen big improvements in nitrogen dioxide levels against targets in recent years but there is no such thing as a safe limit. We need to keep improving, not only for the good of our health but for the climate too. This is why our plans to help people make the shift to more sustainable forms of transport are so important and so ambitious."

Read more at Science Daily

Oldest DNA from domesticated American horse lends credence to shipwreck folklore

Feral horses have roamed freely across the island of Assateague off the coast of Maryland and Virginia for hundreds of years, but exactly how they got there has remained a mystery. In a new study, ancient DNA extracted from a 16th century cow tooth from one of Spain's first Caribbean colonies turns out to be from a horse. Analysis of the DNA suggests that old folk tales claiming that horses were marooned on Assateague following the shipwreck of a Spanish galleon are likely more fact than fiction.

An abandoned Caribbean colony unearthed centuries after it had been forgotten and a case of mistaken identity in the archaeological record have conspired to rewrite the history of a barrier island off the Virginia and Maryland coasts.

These seemingly unrelated threads were woven together when Nicolas Delsol, a postdoctoral researcher at the Florida Museum of Natural History, set out to analyze ancient DNA recovered from cow bones found in archaeological sites. Delsol wanted to understand how cattle were domesticated in the Americas, and the genetic information preserved in centuries-old teeth held the answer. But they also held a surprise.

"It was a serendipitous finding," he said. "I was sequencing mitochondrial DNA from fossil cow teeth for my Ph.D. and realized something was very different with one of the specimens when I analyzed the sequences."

That's because the specimen in question, a fragment of an adult molar, wasn't a cow tooth at all but instead once belonged to a horse. According to a study published this Wednesday in the journal PLOS ONE, the DNA obtained from the tooth is also the oldest ever sequenced for a domesticated horse from the Americas. The tooth was excavated from one of Spain's first colonized settlements. Located on the island of Hispaniola, the town of Puerto Real was established in 1507 and served for decades as the last port of call for ships sailing from the Caribbean. But rampant piracy and the rise of illegal trade in the 16th century forced the Spanish to consolidate their power elsewhere on the island, and in 1578, residents were ordered to evacuate Puerto Real. The abandoned town was destroyed the following year by Spanish officials.

The remnants of the once-bustling port were inadvertently rediscovered by a medical missionary named William Hodges in 1975. Archaeological excavations of the site led by Florida Museum distinguished research curator Kathleen Deagan were carried out between 1979 and 1990.

Horse fossils and associated artifacts are incredibly rare at Puerto Real and similar sites from the time period, but cow remains are a common find. According to Delsol, this skewed ratio is primarily due to the way Spanish colonists valued their livestock.

"Horses were reserved for individuals of high status, and owning one was a sign of prestige," he said. "There are full-page descriptions of horses in the documents that chronicle the arrival of [Hernán] Cortés in Mexico, demonstrating how important they were to the Spanish."

In contrast, cows were used as a source of meat and leather, and their bones were regularly discarded in communal waste piles called middens. But one community's trash is an archaeologist's treasure, as the refuse from middens often offers the clearest glimpse into what people ate and how they lived.

The specimen's biggest surprise wasn't revealed until Delsol compared its DNA with that of modern horses from around the world. Given that the Spanish brought their horses from the Iberian Peninsula in southwestern Europe, he expected horses still living in that region would be the closest living relatives of the 500-year-old Puerto Real specimen.

Instead, Delsol found its next of kin over 1,000 miles north of Hispaniola, on the island of Assateague off the coast of Maryland and Virginia. Feral horses have roamed freely across the long stretch of barrier island for hundreds of years, but exactly how they got there has remained a mystery.

According to the National Park Service, which manages the northern half of Assateague, the likeliest explanation is that the horses were brought over in the 1600s by English colonists from the mainland in an attempt to evade livestock taxes and fencing laws. Others believe the feral herds descended from horses that survived the shipwreck of a Spanish galleon and swam to shore, a theory popularized in the 1947 children's novel "Misty of Chincoteague." The book was later adapted to film, helping spread the shipwreck legend to an even wider audience.

Until now, there has been little evidence to support either theory. Proponents of the shipwreck theory claim it would be unlikely that English colonists would lose track of valuable livestock, while those in favor of an English origin of the herds point to the lack of sunken vessels nearby and the omission of feral horses in historical records of the region.

The results of the DNA analysis, however, unequivocally point to Spanish explorers as being the likeliest source of the horses on Assateague, Delsol explained.

"It's not widely reported in the historical literature, but the Spanish were exploring this area of the mid-Atlantic pretty early on in the 16th century. The early colonial literature is often patchy and not completely thorough. Just because they don't mention the horses doesn't mean they weren't there."

The feral herds on Assateague weren't the only horses to revert back to their wild heritage after arriving in the Americas. Colonists from all over Europe brought with them horses of various breeds and pedigrees, some of which bucked their bonds and escaped into the surrounding countryside.

Read more at Science Daily

Jul 26, 2022

Space study offers clearest understanding yet of the life cycle of supermassive black holes

Black holes with varying light signatures, long thought to be the same objects viewed from different angles, are actually in different stages of the life cycle, according to a study led by Dartmouth researchers.

The research on black holes known as "active galactic nuclei," or AGNs, definitively shows the need to revise the widely used "unified model of AGN," which characterizes supermassive black holes as all having the same properties.

The study, published in The Astrophysical Journal, provides answers to a nagging space mystery and should allow researchers to create more precise models about the evolution of the universe and how black holes develop.

"These objects have mystified researchers for over a half-century," said Tonima Tasnim Ananna, a postdoctoral research associate at Dartmouth and lead author of the paper. "Over time, we've made many assumptions about the physics of these objects. Now we know that the properties of obscured black holes are significantly different from the properties of AGNs that are not as heavily hidden."

Supermassive black holes are believed to reside at the center of nearly all large galaxies, including the Milky Way. The objects devour galactic gas, dust and stars, and they can become heavier than small galaxies.

For decades, researchers have been interested in the light signatures of active galactic nuclei, a type of supermassive black hole that is "accreting," or in a rapid growth stage.

Beginning in the late 1980s, astronomers realized that light signatures coming from space ranging from radio wavelengths to X-rays could be attributed to AGNs. It was assumed that the objects usually had a doughnut-shaped ring -- or "torus" -- of gas and dust around them. The different brightness and colors associated with the objects were thought to be the result of the angle from which they were being observed and how much of the torus was obscuring the view.

From this, the unified theory of AGNs became the prevalent understanding. The theory holds that if a black hole is being viewed through its torus, it should appear faint. If it is being viewed from below or above the ring, it should appear bright. According to the current study, however, past research relied too heavily on data from the less obscured objects, which skewed research results.

The new study focuses on how quickly black holes are feeding on space matter, or their accretion rates. The research found that the accretion rate does not depend on the mass of a black hole; instead, it varies significantly depending on how obscured the black hole is by its gas and dust ring.

"This provides support for the idea that the torus structures around black holes are not all the same," said Ryan Hickox, professor of physics and astronomy and a co-author of the study. "There is a relationship between the structure and how it is growing."

The result shows that the amount of dust and gas surrounding an AGN is directly related to how much it is feeding, confirming that there are differences beyond orientation between different populations of AGNs. When a black hole is accreting at a high rate, the energy blows away dust and gas. As a result, it is more likely to be unobscured and appear brighter. Conversely, a less active AGN is surrounded by a denser torus and appears fainter.

"In the past, it was uncertain how the obscured AGN population varied from their more easily observable, unobscured counterparts," said Ananna. "This new research definitively shows a fundamental difference between the two populations that goes beyond viewing angle."

The study stems from a decade-long analysis of nearby AGNs detected by Swift-BAT, a high-energy NASA X-ray telescope. The telescope allows researchers to scan the local universe to detect obscured and unobscured AGNs.

The research is the result of an international scientific collaboration -- the BAT AGN Spectroscopic Survey (BASS) -- that has been working over a decade to collect and analyze optical/infrared spectroscopy for AGN observed by Swift BAT.

"We have never had such a large sample of X-ray detected obscured local AGN before," said Ananna. "This is a big win for high-energy X-ray telescopes."

The paper builds on the team's previous research analyzing AGNs. For the study, Ananna developed a computational technique to assess the effect of obscuring matter on the observed properties of black holes, and used this technique to analyze data collected by the wider research team.

According to the paper, by knowing a black hole's mass and how fast it is feeding, researchers can determine when most supermassive black holes underwent most of their growth, thus providing valuable information about the evolution of black holes and the universe.

"One of the biggest questions in our field is where do supermassive black holes come from," said Hickox. "This research provides a critical piece that can help us answer that question and I expect it to become a touchstone reference for this research discipline."

Future research could include focusing on wavelengths that allow the team to search beyond the local universe. In the nearer term, the team would like to understand what triggers AGNs to go into high accretion mode, and how long it takes rapidly accreting AGNs to transition from heavily obscured to unobscured.

Read more at Science Daily

Trilobites' growth may have resembled that of modern marine crustaceans

Trilobites -- extinct marine arthropods that roamed the world's oceans from about 520 million years ago until they went extinct 250 million years ago, at the end of the Permian period -- may have grown in a similar fashion and reached ages that match those of extant crustaceans, a new study has found.

In a paper published in the journal Paleobiology, researchers from the University of British Columbia and Uppsala University show that the Ordovician trilobite Triarthrus eatoni, which lived some 450 million years ago, reached a length of just above 4 cm in about 10 years, with a growth curve very similar to that of small, slow-growing crustaceans.

"T. eatoni lived in low-oxygen environments and, similarly to extant crustaceans exposed to hypoxic conditions, exhibited low growth rates compared with growth under more oxygenated conditions," said Daniel Pauly, principal investigator of UBC's Sea Around Us initiative and lead author of the study. "Low-oxygen environments make it more difficult for water-breathers to grow, and add to the difficulties of breathing through gills, which, as 2D surfaces, cannot keep up with the growth of their 3D bodies. Thus, under hypoxic conditions, they must remain small if they are to maintain the rest of their body functions."

In the case of trilobites, their exopods -- external branches on the upper part of their limbs -- functioned as gills. Thus, these ancient animals had similar growth constraints to those of their modern counterparts.

To reach these conclusions, Pauly and his colleague from Uppsala University, paleontologist James Holmes, turned to the analysis of length-frequency data, a method developed within fisheries science and marine biology for studying the growth of fish and invertebrates lacking the physical markings that indicate their age.

The information for their analysis came from an earlier publication reporting the length-frequency distribution of 295 exceptionally preserved trilobite fossils collected at 'Beecher's Trilobite Bed' in New York State.

After estimating the parameters of a growth model widely used in fisheries science, the von Bertalanffy growth function, the researchers compared their results with published data on the growth of extant crustaceans. They found that the growth parameters they estimated for Triarthrus eatoni were well within the range of recent, slow-growing crustaceans.

Read more at Science Daily

Researchers find why bat cells do not get infected by SARS-CoV-2

Bat cells have specific molecular barriers to deal with SARS-CoV-2 replication, according to a study published in the Journal of Virology -- a publication of the American Society for Microbiology -- which includes the participation of Jordi Serra-Cobo, lecturer at the Faculty of Biology and the Biodiversity Research Institute (IRBio) of the University of Barcelona and expert on ecoepidemiological studies.

The study was carried out on primary cells from bat species that have been little studied and that circulate around Europe and Asia (specifically, Rhinolophus ferrumequinum, Myotis myotis, Eptesicus serotinus, Tadarida brasiliensis and Nyctalus noctula). These cell lines were obtained through small biopsies of the bats' wings -- for instance, in Myotis myotis colonies in Majorca -- along with other cell lines contributed by research teams that took part in the study. As stated in the conclusions, these chiropteran cell models constitute tools of scientific interest for studying the evolutionary relationship between bats and coronaviruses.

The study, led by the experts Nolwenn Jouvenet and Laurent Dacheux, from the Institut Pasteur in Paris, includes the collaboration of experts from research institutions in France, the Czech Republic and Switzerland.

How do bats protect themselves from viral infections?

Coronaviruses are present in many animal species worldwide, including bats (chiropterans). In this context, the scientific literature has for years described the great resistance of some chiropteran species to viral infection. In these flying mammals, the immune system is in a pre-alert state, a condition that allows a faster response to viral infections. For most mammals, having an immune system in a constant pre-alert state would involve inflammation problems, but this is not the case for bats, which is why they are the focus of many international epidemiological and immunological studies.

As part of the study, the team analysed the ability of primary cells from different bat species to support SARS-CoV-2 replication. "The results reveal that none of these cells was permissive to the infection, not even those expressing detectable levels of angiotensin-converting enzyme 2 (ACE2), a metallopeptidase that serves as a viral receptor in many mammal species," says Jordi Serra-Cobo, member of the Department of Evolutionary Biology, Ecology and Environmental Sciences of the UB and the only expert in Spain to take part in this study.

"The cells did not allow the infection in the species Rhinolophus ferrumequinum, a chiropteran from the same genus as the Asian bat in which the BANAL-52 virus was found, a potential ancestor of SARS-CoV-2. Specifically, the genetic sequence of the BANAL-52 virus is 96.8% similar to that of SARS-CoV-2," says Serra-Cobo, distinguished expert in studies with bats as natural reservoirs of infectious agents like coronaviruses.

Humans and chiropterans vs. SARS-CoV-2 infection

Regarding the human species, it is known that the SARS-CoV-2 spike protein binds to the cell membrane receptor ACE2 and then the virus infects the cell. "In the case of the chiropteran cells, either the amount of ACE2 enzyme is too small for the virus to enter the cell or, even if the virus binds to ACE2, it cannot infect the cell," highlights Serra-Cobo.

From a global perspective, this study contributes to a better understanding of the fighting mechanisms against viral infections. This is a line of research that has been carried out for years by the team led by Serra-Cobo at the UB and IRBio and which is now gaining strength within the framework of the EvoDevo-Cat research group at the Faculty of Biology of the UB.

"Specifically, our team is working to understand the adaptations of the chiropterans regarding viral infections. An important number of zoonotic viruses circulate in chiropter populations without causing symptoms of the disease in the carriers," notes the researcher.

Read more at Science Daily

Studies link COVID-19 to wildlife sales at Chinese market, find alternative scenarios extremely unlikely

An international team of researchers has confirmed that live animals sold at the Huanan Seafood Wholesale Market were the likely source of the COVID-19 pandemic that has claimed 6.4 million lives since it began nearly three years ago.

Led by University of Arizona virus evolution expert Michael Worobey, international teams of researchers have traced the start of the pandemic to the market in Wuhan, China, where foxes, raccoon dogs and other live mammals susceptible to the virus were sold live immediately before the pandemic began. Their findings were published Tuesday in two papers in the journal Science, after being previously released in pre-print versions in February.

The publications, which have since gone through peer review and include additional analyses and conclusions, virtually eliminate alternative scenarios that have been suggested as origins of the pandemic. Moreover, the authors conclude that the first spread to humans from animals likely occurred in two separate transmission events in the Huanan market in late November 2019.

One study scrutinized the locations of the first known COVID-19 cases, as well as swab samples taken from surfaces at various locations at the market. The other focused on genomic sequences of SARS-CoV-2 from samples collected from COVID-19 patients during the first weeks of the pandemic in China.

The first paper, led by Worobey and Kristian Andersen at Scripps Research Institute in San Diego, California, examined the geographic pattern of COVID-19 cases in the first month of the outbreak, December 2019. The team was able to determine the locations of almost all of the 174 COVID-19 cases identified by the World Health Organization that month, 155 of which were in Wuhan.

Analyses showed that these cases were clustered tightly around the Huanan market, whereas later cases were dispersed widely throughout Wuhan -- a city of 11 million people. Notably, the researchers found that a striking percentage of early COVID patients with no known connection to the market -- meaning they neither worked there nor shopped there -- turned out to live near the market. This supports the idea that the market was the epicenter of the epidemic, Worobey said, with vendors getting infected first and setting off a chain of infections among community members in the surrounding area.

"In a city covering more than 3,000 square miles, the area with the highest probability of containing the home of someone who had one of the earliest COVID-19 cases in the world was an area of a few city blocks, with the Huanan market smack dab inside it," said Worobey, who heads the UArizona Department of Ecology and Evolutionary Biology.

This conclusion was supported by another finding: When the authors looked at the geographical distribution of later COVID cases, from January and February 2020, they found a "polar opposite" pattern, Worobey said. While the cases from December 2019 mapped "like a bullseye" on the market, the later cases coincided with areas of the highest population density in Wuhan.

"This tells us the virus was not circulating cryptically," Worobey said. "It really originated at that market and spread out from there."

In an important addition to their earlier findings, Worobey and his collaborators addressed the question of whether health authorities found cases around the market simply because that's where they looked.

"It is important to realize that all these cases were people who were identified because they were hospitalized," Worobey said. "None were mild cases that might have been identified by knocking on doors of people who lived near the market and asking if they felt ill. In other words, these patients were recorded because they were in the hospital, not because of where they lived."

To rule out any potentially lingering possibility of bias, Worobey's team took one further step: Starting at the market, they began removing cases from their analyses, going farther in distance from the market as they went, and ran the stats again. The result: Even when two-thirds of cases were removed, the findings were the same.

"Even in that scenario, with the majority of cases removed, we found that the remaining ones lived closer to the market than what would be expected if there was no geographical correlation between these earliest COVID cases and the market," Worobey said.

The study also looked at swab samples taken from market surfaces like floors and cages after Huanan market was shuttered. Samples that tested positive for SARS-CoV-2 were significantly associated with stalls selling live wildlife.

The researchers determined that mammals now known to be susceptible to SARS-CoV-2, including red foxes, hog badgers and raccoon dogs, were sold live at the Huanan market in the weeks preceding the first recorded COVID-19 cases. The scientists developed a detailed map of the market and showed that SARS-CoV-2-positive samples reported by Chinese researchers in early 2020 showed a clear association with the western portion of the market, where live or freshly butchered animals were sold in late 2019.

"Upstream events are still obscure, but our analyses of available evidence clearly suggest that the pandemic arose from initial human infections from animals for sale at the Huanan Seafood Wholesale Market in late November 2019," said Andersen, who was a co-senior author of both studies and is a professor in the Department of Immunology and Microbiology at Scripps Research.

Virus likely jumped from animals to humans more than once

The second study, an analysis of SARS-CoV-2 genomic data from early cases, was co-led by Jonathan Pekar and Joel Wertheim at the University of California, San Diego and Marc Suchard of the University of California Los Angeles, as well as Andersen and Worobey.

The researchers combined epidemic modeling with analyses of the virus's early evolution based on the earliest sampled genomes. They determined that the pandemic, which initially involved two subtly distinct lineages of SARS-CoV-2, likely arose from at least two separate infections of humans from animals at the Huanan market in November 2019 and perhaps in December 2019. The analyses also suggested that, in this period, there were many other animal-to-human transmissions of the virus at the market that failed to manifest in recorded COVID-19 cases.

The authors used a technique known as molecular clock analysis, which relies on the natural pace with which genetic mutations occur over time, to establish a framework for the evolution of the SARS-CoV-2 virus lineages. They found that a scenario of a singular introduction of the virus into humans rather than multiple introductions would be inconsistent with molecular clock data. Earlier studies had suggested that one lineage of the virus -- named A and closely related to viral relatives in bats -- gave rise to a second lineage, named B. More likely, according to the new data, is a scenario in which the two lineages jumped from animals into humans on separate occasions, both at the Huanan market, Worobey said.

"Otherwise, lineage A would have had to have been evolving in slow motion compared to the lineage B virus, which just doesn't make biological sense," said Worobey.
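
The molecular clock reasoning described above rests on a simple expectation: given a roughly constant substitution rate, the number of mutations separating two genomes constrains the time since their common ancestor. A minimal sketch, using a commonly cited order-of-magnitude rate for SARS-CoV-2 rather than the study's fitted values:

```python
def expected_mutations(rate_per_site_per_year, genome_length, years):
    """Expected substitutions accumulated along one lineage over a time span."""
    return rate_per_site_per_year * genome_length * years

# Rough, commonly cited figures for SARS-CoV-2 (assumptions, not the
# paper's estimates): ~1e-3 substitutions/site/year over a ~30,000-nt genome.
rate, genome = 1e-3, 30_000
per_month = expected_mutations(rate, genome, 1 / 12)
print(f"~{per_month:.1f} substitutions per genome per month")
```

Under this clock, a lineage separated by too few mutations for its sampling date would have to be "evolving in slow motion," which is the inconsistency the authors found with a single-introduction scenario.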

Read more at Science Daily

Jul 25, 2022

Explosive volcanic eruption produced rare mineral on Mars

Planetary scientists from Rice University, NASA's Johnson Space Center and the California Institute of Technology have an answer to a mystery that's puzzled the Mars research community since NASA's Curiosity rover discovered a mineral called tridymite in Gale Crater in 2016.

Tridymite is a high-temperature, low-pressure form of quartz that is extremely rare on Earth, and it wasn't immediately clear how a concentrated chunk of it ended up in the crater. Gale Crater was chosen as Curiosity's landing site due to the likelihood that it once held liquid water, and Curiosity found evidence that confirmed Gale Crater was a lake as recently as 1 billion years ago.

"The discovery of tridymite in a mudstone in Gale Crater is one of the most surprising observations that the Curiosity rover has made in 10 years of exploring Mars," said Rice's Kirsten Siebach, co-author of a study published online in Earth and Planetary Science Letters. "Tridymite is usually associated with quartz-forming, explosive, evolved volcanic systems on Earth, but we found it in the bottom of an ancient lake on Mars, where most of the volcanoes are very primitive."

Siebach, an assistant professor in Rice's Department of Earth, Environmental and Planetary Sciences, is a mission specialist on NASA's Curiosity team. To suss out the answer to the mystery, she partnered with two postdoctoral researchers in her Rice research group, Valerie Payré and Michael Thorpe, NASA's Elizabeth Rampe and Caltech's Paula Antoshechkina. Payré, the study's lead author, is now at Northern Arizona University and preparing to join the faculty of the University of Iowa in the fall.

Siebach and colleagues began by reevaluating data from every reported find of tridymite on Earth. They also reviewed volcanic materials from models of Mars volcanism and reexamined sedimentary evidence from the Gale Crater lake. They then came up with a new scenario that matched all the evidence: Martian magma sat for longer than usual in a chamber below a volcano, undergoing a process of partial cooling called fractional crystallization that concentrated silicon. In a massive eruption, the volcano spewed ash containing the extra silicon in the form of tridymite into the Gale Crater lake and surrounding rivers. Water helped break down the ash through natural processes of chemical weathering, and water also helped sort the minerals produced by weathering.

The scenario would have concentrated tridymite, producing minerals consistent with the 2016 find. It would also explain other geochemical evidence Curiosity found in the sample, including opaline silicates and reduced concentrations of aluminum oxide.

"It's actually a straightforward evolution of other volcanic rocks we found in the crater," Siebach said. "We argue that because we only saw this mineral once, and it was highly concentrated in a single layer, the volcano probably erupted at the same time the lake was there. Although the specific sample we analyzed was not exclusively volcanic ash, it was ash that had been weathered and sorted by water."

If a volcanic eruption like the one in the scenario did occur when Gale Crater contained a lake, it would mean explosive volcanism occurred more than 3 billion years ago, while Mars was transitioning from a wetter and perhaps warmer world to the dry and barren planet it is today.

"There's ample evidence of basaltic volcanic eruptions on Mars, but this is a more evolved chemistry," she said. "This work suggests that Mars may have a more complex and intriguing volcanic history than we would have imagined before Curiosity."

Read more at Science Daily

New study finds lowest risk of death was among adults who exercised 150-600 minutes/week

An analysis of more than 100,000 participants over a 30-year follow-up period found that adults who perform two to four times the currently recommended amount of moderate or vigorous physical activity per week have a significantly reduced risk of mortality, according to new research published today in the American Heart Association's flagship, peer-reviewed journal Circulation. The reduction was 21-23% for people who engaged in two to four times the recommended amount of vigorous physical activity, and 26-31% for people who engaged in two to four times the recommended amount of moderate physical activity each week.

It is well documented that regular physical activity is associated with reduced risk of cardiovascular disease and premature death. In 2018, the United States Department of Health and Human Services' Physical Activity Guidelines for Americans recommended that adults engage in at least 150-300 minutes/week of moderate physical activity or 75-150 minutes/week of vigorous physical activity, or an equivalent combination of both intensities. The American Heart Association's current recommendations, which are based on HHS's Physical Activity Guidelines, are for at least 150 minutes per week of moderate-intensity aerobic exercise or 75 minutes per week of vigorous aerobic exercise, or a combination of both.

"The potential impact of physical activity on health is great, yet it remains unclear whether engaging in high levels of prolonged, vigorous or moderate intensity physical activity above the recommended levels provides any additional benefits or harmful effects on cardiovascular health," said Dong Hoon Lee, Sc.D., M.S., a research associate in the department of nutrition at the Harvard T.H. Chan School of Public Health in Boston. "Our study leveraged repeated measures of self-reported physical activity over decades to examine the association between long-term physical activity during middle and late adulthood and mortality."

Researchers analyzed mortality data and medical records for more than 100,000 adults gathered from two large prospective studies: the all-female Nurses' Health Study and the all-male Health Professionals Follow-up Study from 1988-2018. Participants whose data were examined were 63% female, and more than 96% were white adults. They had an average age of 66 years and an average body mass index (BMI) of 26 kg/m² over the 30-year follow-up period.

Participants self-reported their leisure-time physical activity by completing a validated questionnaire for either the Nurses' Health Study or Health Professionals Follow-Up Study every two years. The publicly available questionnaires, which were updated and expanded every two years, included questions about health information, physician-diagnosed illnesses, family medical histories and personal habits such as cigarette and alcohol consumption and frequency of exercise. Exercise data was reported as the average time spent per week on various physical activities over the past year. Moderate activity was defined as walking, lower-intensity exercise, weightlifting and calisthenics. Vigorous activity included jogging, running, swimming, bicycling and other aerobic exercises.

The analysis found that adults who performed double the currently recommended range of either moderate or vigorous physical activity each week had the lowest long-term risk of mortality.

The analysis also found:

  • Participants who met the guidelines for vigorous physical activity had an observed 31% lower risk of CVD mortality and 15% lower risk of non-CVD mortality, for an overall 19% lower risk of death from all causes.
  • Participants who met the guidelines for moderate physical activity had an observed 22-25% lower risk of CVD mortality and 19-20% lower risk of non-CVD mortality, for an overall 20-21% lower risk of death from all causes.
  • Participants who performed two to four times above the recommended amount of long-term vigorous physical activity (150-300 min/week) had an observed 27-33% lower risk of CVD mortality and 19% non-CVD mortality, for an overall 21-23% lower risk of death from all causes.
  • Participants who performed two to four times above the recommended amount of moderate physical activity (300-600 min/week) had an observed 28-38% lower risk of CVD mortality and 25-27% non-CVD mortality, for an overall 26-31% lower risk of mortality from all causes.
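
The percentages in these findings are relative risk reductions, which map directly onto the hazard ratios typically reported in survival analyses such as this one. A small sketch of that arithmetic (the conversion is standard; the specific numbers plugged in below are taken from the reported ranges):

```python
def hazard_ratio_from_reduction(pct_lower):
    """An 'X% lower risk' corresponds to a hazard ratio of 1 - X/100."""
    return 1.0 - pct_lower / 100.0

def reduction_from_hazard_ratio(hr):
    """Invert: a hazard ratio hr corresponds to a (1 - hr) * 100 % lower risk."""
    return (1.0 - hr) * 100.0

# The 26-31% lower all-cause mortality for 2-4x the recommended moderate
# activity corresponds to hazard ratios of roughly 0.69-0.74:
for pct in (26, 31):
    print(f"{pct}% lower risk -> HR {hazard_ratio_from_reduction(pct):.2f}")
```
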


In addition, no harmful cardiovascular health effects were found among the adults who reported engaging in more than four times the recommended minimum activity levels. Previous studies have found evidence that long-term, high-intensity, endurance exercise, such as marathons, triathlons and long-distance bicycle races, may increase the risk of adverse cardiovascular events, including myocardial fibrosis, coronary artery calcification, atrial fibrillation and sudden cardiac death.

"This finding may reduce the concerns around the potential harmful effect of engaging in high levels of physical activity observed in several previous studies," Lee noted.

However, engaging in long-term, high-intensity physical activity (≥300 minutes/week) or moderate-intensity physical activity (≥600 minutes/week) at levels more than four times the recommended weekly minimum did not provide any additional reduction in risk of death.

"Our study provides evidence to guide individuals to choose the right amount and intensity of physical activity over their lifetime to maintain their overall health," Lee said. "Our findings support the current national physical activity guidelines and further suggest that the maximum benefits may be achieved by performing medium to high levels of either moderate or vigorous activity or a combination."

He also noted that people who perform less than 75 minutes of vigorous activity or less than 150 minutes of moderate activity per week may have greater benefits on mortality reduction by consistently performing approximately 75-150 minutes of vigorous activity or 150-300 minutes of moderate exercise per week, or an equivalent combination of both, over the long term.

Read more at Science Daily

Study shows link between frequent naps and high blood pressure

Napping on a regular basis is associated with higher risks for high blood pressure and stroke, according to new research published today in Hypertension, an American Heart Association journal.

Researchers in China examined whether frequent naps could be a potential causal risk factor for high blood pressure and/or stroke. This is the first study to use both observational analysis of participants over a long period of time and Mendelian randomization -- a genetic risk validation method -- to investigate whether frequent napping was associated with high blood pressure and ischemic stroke.

"These results are especially interesting since millions of people might enjoy a regular, or even daily nap," says E Wang, Ph.D., M.D., a professor and chair of the Department of Anesthesiology at Xiangya Hospital Central South University, and the study's corresponding author.

Researchers used information from UK Biobank, a large biomedical database and research resource containing anonymized genetic, lifestyle and health information from half a million UK participants. UK Biobank recruited more than 500,000 participants between the ages of 40 and 69 who lived in the United Kingdom between 2006 and 2010. They regularly provided blood, urine and saliva samples, as well as detailed information about their lifestyle. The daytime napping frequency survey occurred four times between 2006 and 2019 in a small proportion of UK Biobank participants.

Wang's group excluded records of people who had already had a stroke or had high blood pressure before the start of the study. This left about 360,000 participants to analyze the association between napping and first-time reports of stroke or high blood pressure, with an average follow-up of about 11 years. Participants were divided into groups based on self-reported napping frequency: "never/rarely," "sometimes," or "usually."

The study found:

  • A higher percentage of usual-nappers were men, had lower education and income levels, and reported cigarette smoking, daily drinking, insomnia, snoring and being an evening person compared to never- or sometimes-nappers;
  • When compared to people who reported never taking a nap, people who usually nap had a 12% higher likelihood of developing high blood pressure and 24% higher likelihood of having a stroke;
  • Participants younger than age 60 who usually napped had a 20% higher risk of developing high blood pressure compared to people the same age who never napped. After age 60, usual napping was associated with 10% higher risk of high blood pressure compared to those who reported never napping;
  • About three-fourths of participants remained in the same napping category throughout the study;
  • The Mendelian randomization result showed that if napping frequency increased by one category (from never to sometimes, or from sometimes to usually), high blood pressure risk increased by 40%. Higher napping frequency was also related to genetic propensity for high blood pressure risk.
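
The core idea behind Mendelian randomization can be sketched with its simplest estimator, the Wald ratio. Everything numeric below is hypothetical, chosen only to illustrate how a roughly 40% higher risk could emerge; it is not the study's actual analysis. The logic: if a genetic variant influences the exposure (napping frequency) but can affect the outcome (hypertension) only through that exposure, then the ratio of its outcome effect to its exposure effect estimates the causal effect.

```python
import math

def wald_ratio(beta_outcome, beta_exposure):
    """Causal effect estimate: variant-outcome effect / variant-exposure effect."""
    return beta_outcome / beta_exposure

# Hypothetical per-allele effects for a single illustrative variant:
beta_nap = 0.10   # effect on napping frequency (category units per allele)
beta_bp = 0.034   # effect on log-odds of hypertension (per allele)

effect = wald_ratio(beta_bp, beta_nap)           # log-odds per napping category
odds_ratio = math.exp(effect)                    # convert to odds ratio
print(f"causal log-odds per category: {effect:.2f}, odds ratio: {odds_ratio:.2f}")
```

With these made-up effect sizes, the estimated odds ratio per napping category is about 1.40, i.e. on the order of the 40% risk increase reported. Real MR analyses combine many variants and test the "only through the exposure" assumption carefully.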


"This may be because, although taking a nap itself is not harmful, many people who take naps may do so because of poor sleep at night. Poor sleep at night is associated with poorer health, and naps are not enough to make up for that," said Michael A. Grandner, Ph.D., MTR, a sleep expert and co-author of the American Heart Association's new Life's Essential 8 cardiovascular health score, which added sleep duration in June 2022 as the 8th metric for measuring optimal heart and brain health. "This study echoes other findings that generally show that taking more naps seems to reflect increased risk for problems with heart health and other issues." Grandner is director of the Sleep Health Research Program and the Behavioral Sleep Medicine Clinic and associate professor of psychiatry at the University of Arizona in Tucson.

The authors recommend further examination of the associations between a healthy sleep pattern, including daytime napping, and heart health.

The study has several important limitations to consider. Researchers only collected daytime napping frequency, not duration, so there is no information about how or whether the length of a nap affects blood pressure or stroke risks. Additionally, nap frequency was self-reported without any objective measurements, making estimates nonquantifiable. The study's participants were mostly middle-aged and elderly with European ancestry, so the results may not be generalizable. Finally, researchers have not yet discovered the biological mechanism for the effect of daytime napping on blood pressure regulation or stroke.

Read more at Science Daily

Pre-teen children believe 'brilliance' is a male trait, and this stereotype increases in strength up to the age of twelve

Children hold stereotypical views that 'brilliance' is a male trait, and this belief strengthens as they grow up to the age of twelve, researchers from Singapore and the United States have reported.

The study led by Nanyang Technological University, Singapore (NTU Singapore) in collaboration with New York University, was published in the scientific journal Child Development in May 2022. It involved 389 Chinese Singaporean parents and 342 of their children aged 8 to 12.

Tests were carried out to measure the extent to which parents and their children associate the notion of brilliance with men, and to probe the relationship between parents and their children's views.

The study defined brilliance as an exceptional level of intellectual ability and results showed that children are as likely to associate brilliance with men, as their parents are.

This belief was stronger among older children and stronger among those children whose parents held the same view.

While previous research on gender stereotypes has found the idea that giftedness is a male trait can emerge at around the age of six, it was not known whether and how this stereotype changes over the course of childhood, until now.

Lead author of the study, Associate Professor Setoh Peipei from NTU Singapore's School of Social Sciences, said the Singapore-based study is the first to identify that the tendency to associate brilliance with men (also known as the 'brilliance equals men' stereotype) increases in strength through the primary school years, and reaches the level of belief seen in adults by the age of 13.

"Stereotypical views about how boys are smarter than girls can take root in childhood and become a self-fulfilling prophecy," said Prof Setoh. "For girls, this may lead them to doubt their abilities, thus limiting their ideas about their interests and what they can achieve in life."

"Our research work shows parents must also be included in policies and school programmes to effectively combat children's gender stereotypes from a young age," she added.

For example, since previous studies have found that parents use different explanation styles with daughters than with sons, the research team suggested introducing programmes that train parents and teachers to be mindful of balancing their behaviour during interactions with children, especially girls.

The authors say the study offers evidence to support Singapore's push to close the gender gap in the Science, Technology, Engineering, and Mathematics (STEM) sectors.

While Singapore has the second-highest OECD PISA scores in the world in mathematics, science, and reading, a recent study by the Promotion of Women in Engineering, Research, and Science (POWERS) programme at NTU Singapore found that women in Singapore are less confident in their math and science abilities than men. Women are also more likely than men to perceive gender barriers to entering and progressing in STEM careers.

How the study was conducted

The researchers used the Implicit Association Test (IAT) -- a commonly used implicit measure of stereotyping -- to evaluate parents' and children's associations. During the test, participants were asked to categorise photographs of men and women along with two sets of words. One set of 'genius words' referred to the notion of brilliance and included words such as "super-smart" and "genius," while the other set referred to creativity (the control attribute).

During the first half of the trials, participants had to press a key to categorise the male photographs with the genius words. This process was repeated in the second half of the trials with the female photographs and genius words. Participants with an implicit association between men and brilliance would react faster when categorising genius words with the male photographs than with the female photographs.

Results revealed an average D score (a metric of the strength of the stereotypical 'intellectual brilliance = men' association) of 0.16, indicating that Singaporean children associate brilliance with men more than with women. This stereotypical belief increased in strength with age among the child sample and reached levels comparable to those of adults by age 12; thereafter, there was little change.
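To make the D score concrete: in its simplest form, it is the difference between a participant's mean reaction times in the two pairing conditions, divided by the pooled standard deviation of all trials. The sketch below illustrates this simplified computation; the function name and sample reaction times are hypothetical, and the published study may use a more elaborate scoring procedure (e.g. with error-trial penalties and trial filtering).

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT D score.

    congruent_rts: reaction times (seconds) when genius words are
    paired with male photographs; incongruent_rts: when paired with
    female photographs. A positive D means faster responses in the
    congruent block, i.e. a 'brilliance = men' association.
    """
    # Pooled standard deviation across all trials from both blocks
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical participant, ~0.1 s slower pairing genius words with
# female photographs than with male photographs
d = iat_d_score([0.71, 0.68, 0.74, 0.70], [0.82, 0.79, 0.85, 0.80])
```

Dividing by the pooled standard deviation makes scores comparable across participants who respond at very different overall speeds, which is why studies can average D across a sample (here, 0.16) and compare age groups.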

In the second part of the study, the researchers examined scores from parent-child pairs who took the tests separately but at the same time and found that children's scores were correlated with their parents' scores. This finding suggests that during the earlier years of primary school, parents may play a role in their children's acquisition of the 'brilliance equals men' stereotype.

Further analysis revealed that as boys grew older, they were less likely to share their parents' stereotypical view of males as more brilliant. For girls, however, their stereotypes remained closely linked to their parents' throughout the primary school years.

Co-author Andrei Cimpian, Professor of Psychology at New York University said, "This study adds to the evidence that the gender imbalances observed in many prestigious careers are not a function of differences between women and men in their inherent aptitudes or interests. Rather, these imbalances are the product of the messages that young people are getting from those around them about what women and men are supposedly -- and supposed to be -- like. As a society, we have a responsibility to work toward addressing this issue."

Moving forward, the research team is studying whether this gender stereotype about brilliance may differently impact primary and secondary school girls' and boys' outcomes in math -- a core STEM subject that is typically believed to require intellectual brilliance to excel in.

Read more at Science Daily