A breakthrough in fertility science by researchers from Bristol and Mexico has shattered the universally accepted view of how sperm 'swim'.
More than three hundred years after Antonie van Leeuwenhoek used one of the earliest microscopes to describe human sperm as having a "tail, which, when swimming, lashes with a snakelike movement, like eels in water," scientists have revealed this is an optical illusion.
Using state-of-the-art 3D microscopy and mathematics, Dr Hermes Gadelha from the University of Bristol, Dr Gabriel Corkidi and Dr Alberto Darszon from the Universidad Nacional Autonoma de Mexico, have pioneered the reconstruction of the true movement of the sperm tail in 3D.
Using a high-speed camera capable of recording over 55,000 frames in one second, and a microscope stage with a piezoelectric device to move the sample up and down at an incredibly high rate, they were able to scan the sperm swimming freely in 3D.
The ground-breaking study, published in the journal Science Advances, reveals the sperm tail is in fact wonky and only wiggles on one side. While this should mean the sperm's one-sided stroke would have it swimming in circles, sperm have found a clever way to adapt and swim forwards.
"Human sperm figured out if they roll as they swim, much like playful otters corkscrewing through water, their one-sided stroke would average itself out, and they would swim forwards," said Dr Gadelha, head of the Polymaths Laboratory at Bristol's Department of Engineering Mathematics and an expert in the mathematics of fertility.
"The sperms' rapid and highly synchronised spinning causes an illusion when seen from above with 2D microscopes -- the tail appears to have a side-to-side symmetric movement, 'like eels in water,' as described by Leeuwenhoek in the 17th century.
"However, our discovery shows sperm have developed a swimming technique to compensate for their lop-sidedness and in doing so have ingeniously solved a mathematical puzzle at a microscopic scale: by creating symmetry out of asymmetry," said Dr Gadelha.
"The otter-like spinning of human sperm is however complex: the sperm head spins at the same time that the sperm tail rotates around the swimming direction. This is known in physics as precession, much like when the orbits of Earth and Mars precess around the sun."
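The averaging argument can be illustrated with a toy planar model (an illustration of the idea only, not the authors' 3D analysis): treat the lop-sided stroke as a constant yaw rate, so that without rolling the swimmer traces circles; rolling makes the yaw alternate sides, and the turning cancels over each roll.

```python
import numpy as np

# Toy planar sketch of "symmetry out of asymmetry" (illustrative only; the
# study reconstructs the full 3D beat). The one-sided stroke is modelled as
# a constant yaw rate; rolling makes that yaw alternate sides sinusoidally.
dt, steps = 0.001, 20_000          # 20 time units of swimming
speed = 1.0                        # forward speed (arbitrary units)
yaw_rate = 2.0                     # turning from the lop-sided stroke (assumed)
roll_rate = 2 * np.pi * 20         # rolling much faster than the yaw (assumed)

def net_displacement(rolling: bool) -> float:
    """Distance from the start after integrating heading and velocity."""
    t = np.arange(steps) * dt
    side = np.cos(roll_rate * t) if rolling else np.ones_like(t)
    heading = np.cumsum(yaw_rate * side * dt)       # integrate the yaw
    x = speed * dt * np.cos(heading).sum()          # integrate the velocity
    y = speed * dt * np.sin(heading).sum()
    return float(np.hypot(x, y))

# Without rolling the cell just circles (radius = speed/yaw_rate = 0.5) and
# stays within about a unit of its start; with rolling the sideways turning
# averages out and it travels nearly the full 20 units in a straight line.
print(net_displacement(False), net_displacement(True))
```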
Computer-assisted semen analysis systems in use today, both in clinics and for research, still use 2D views to look at sperm movement. Therefore, like Leeuwenhoek's first microscope, they are still prone to this illusion of symmetry while assessing semen quality. This discovery, with its novel use of 3D microscope technology combined with mathematics, may provide fresh hope for unlocking the secrets of human reproduction.
"With over half of infertility caused by male factors, understanding the human sperm tail is fundamental to developing future diagnostic tools to identify unhealthy sperm," adds Dr Gadelha, whose work has previously revealed the biomechanics of sperm bendiness and the precise rhythmic tendencies that characterise how a sperm moves forward.
Dr Corkidi and Dr Darszon pioneered the 3D microscopy for sperm swimming.
"This was an incredible surprise, and we believe our state-of-the-art 3D microscope will unveil many more hidden secrets in nature. One day this technology will become available to clinical centres," said Dr Corkidi.
Read more at Science Daily
Aug 1, 2020
Remember the first time you...? Mysterious brain structure sheds light on addiction
Do you remember where you were when you first heard that two planes had crashed into New York's Twin Towers? Or where you had your first kiss? Our brains are wired to retain information that relates to the context in which highly significant events occurred. This mechanism also underlies drug addiction and is the reason why hanging out in an environment or with people associated with memories of drug use often leads to relapse.
How our brains create this strong association, however, is less clear. Now, new research by Professor Ami Citri and PhD student Anna Terem at Hebrew University of Jerusalem (HU)'s Edmond and Lily Safra Center for Brain Sciences and the Alexander Silberman Institute of Life Science, shows that a relatively obscure brain region known as the claustrum plays a significant role in making these connections. They published their findings in the latest edition of Current Biology.
The researchers' findings fit the idea of "incentive salience," the process that determines the desirability of an otherwise neutral stimulus. For example, a candy store façade becomes very attractive to kids after repeated associations with the rewarding treats that lie within. In time, children unconsciously learn to "want" to see the store stimulus, which is separate from their "liking" the actual candy reward. Taking a closer look at how context becomes associated with cocaine, the researchers found a group of neurons within the claustrum that lit up during cocaine use. Further, these neurons are pivotal in the formation of an incentive salience that links context with the pleasure of cocaine.
To determine when and how the claustrum participates in incentive salience, Citri and his team used a conditioned-place-preference (CPP) test in which lab mice learn to associate a reward with a context. The researchers administered cocaine to the mice and, as the drug started to kick in, placed them in an area with distinctive flooring (rugged) and wall patterns (dots), features a mouse would notice. After several such pairings, when placed in a room where they could choose between a region resembling the cocaine-paired one (rugged floor, dotted walls) and a neutral area (smooth floor, striped walls), the mice quickly congregated where their drug high had played out.
To test the claustrum's involvement in how a context becomes associated with a given reward, Citri and his team observed changes in the mice's behavior when these claustral neurons were inhibited. Silencing the neurons abolished the mice's behavioral response to cocaine: they no longer preferred hanging out in the cocaine-paired environment. On the other hand, activating these neurons -- even in the absence of any cocaine -- caused the mice to develop a preference for this context.
Importantly, the team found that the activity of the claustrum was not necessary for retrieval of the cocaine memory. Once the mice had been placed in a cocaine-paired context several times to enjoy their cocaine high, the memory for this context was encoded, and inhibition of the claustrum had no effect on their preference for the cocaine-paired context. "These findings boosted our confidence that the claustrum is indeed integral to incentive salience, heightening the awareness of the mouse to the context in which it experienced the drug high," shared Citri.
Read more at Science Daily
Jul 31, 2020
Coastal cities leave up to 75% of seafloor exposed to harmful light pollution
The global expansion of coastal cities could leave more than three quarters of their neighbouring seafloor exposed to potentially harmful levels of light pollution.
A study led by the University of Plymouth (UK) showed that under both cloudy and clear skies, quantities of light used in everyday street lighting permeated all areas of the water column.
This could pose a significant threat to coastal species, with recent research showing the presence of artificial skyglow can disrupt the lunar compass that species use when covering long distances.
However, the current study found that the colour of the light shone at the surface had a marked effect on how much biologically important light pollution reached the seafloor.
Many of the white LEDs now being used to illuminate the world's towns and cities use a mixture of green, blue and red wavelengths to generate their brightness.
Green and blue wavelengths left up to 76% and 70% of the three-dimensional seafloor area exposed to light pollution respectively, while red light left less than 1% exposed.
The research -- which also involved Bangor University, the University of Strathclyde and Plymouth Marine Laboratory -- is published in Scientific Reports, an online journal from the publishers of Nature.
It is the first study in the world to quantify the extent to which biologically important artificial light is prevalent on the seafloor and could, in turn, be having a detrimental effect on marine species.
Dr Thomas Davies, Lecturer in Marine Conservation at the University of Plymouth and the paper's lead author, said: "The areas exposed here are not trivial. Our results focused on a busy marine area and demonstrate the light from coastal urban centres is widespread across the sea surface, sub surface and seafloor of adjacent marine habitats. But Plymouth is still just one coastal city with a population of 240,000 people.
"Seventy-five per cent of the world's megacities are now located in coastal regions and coastal populations are projected to more than double by 2060. So unless we take action now it is clear that biologically important light pollution on the seafloor is likely to be globally widespread, increasing in intensity and extent, and putting marine habitats at risk."
The study focussed on Plymouth Sound and the Tamar Estuary which together form a busy waterway and are home to the largest naval port in Western Europe.
It was conducted over four nights in 2018, when there was little or no moonlight, and blue, green, and red artificial light was shone at the sea surface during both clear and cloudy conditions, and at low and high tide.
A combination of mapping and radiative transfer modelling tools were then used to measure exposure at the surface, beneath the surface, and at the seafloor.
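The wavelength dependence has a simple physical core: underwater irradiance decays roughly exponentially with depth (the Beer-Lambert relation), and red light attenuates in seawater far faster than green or blue. A minimal sketch with assumed, illustrative attenuation coefficients (the study itself derived exposure from full radiative-transfer modelling):

```python
import numpy as np

# Assumed diffuse attenuation coefficients Kd (per metre) for turbid coastal
# water; illustrative values only, not those derived in the study.
KD = {"green": 0.15, "blue": 0.20, "red": 0.50}
THRESHOLD = 0.05   # assumed "biologically important" fraction of surface light

def depth_reached(kd: float) -> float:
    """Depth (m) at which irradiance E(z) = exp(-kd*z) falls below the threshold."""
    return np.log(1.0 / THRESHOLD) / kd

seafloor_depths = np.linspace(0.5, 30, 60)   # hypothetical survey depths (m)
for colour, kd in KD.items():
    exposed = np.mean(np.exp(-kd * seafloor_depths) > THRESHOLD)
    print(f"{colour}: penetrates to ~{depth_reached(kd):.0f} m, "
          f"exposing {100 * exposed:.0f}% of these depths")
```

Under these assumed coefficients green light reaches roughly three times deeper than red, which is qualitatively why the green and blue exposure figures dwarf the red one.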
The researchers are now calling for a more comprehensive review of the full impacts of coastal light pollution, to try to mitigate the most harmful effects as coastal cities grow globally.
Read more at Science Daily
Surprising number of exoplanets could host life
Our solar system has one habitable planet -- Earth. A new study shows other stars could have as many as seven Earth-like planets in the absence of a gas giant like Jupiter.
This is the conclusion of a study led by UC Riverside astrobiologist Stephen Kane published this week in the Astronomical Journal.
The search for life in outer space is typically focused on what scientists call the "habitable zone," which is the area around a star in which an orbiting planet could have liquid water oceans -- a condition for life as we know it.
Kane had been studying a nearby star system called TRAPPIST-1, which has three Earth-like planets in its habitable zone.
"This made me wonder about the maximum number of habitable planets it's possible for a star to have, and why our star only has one," Kane said. "It didn't seem fair!"
His team created a model system in which they simulated planets of various sizes orbiting their stars. An algorithm accounted for gravitational forces and helped test how the planets interacted with each other over millions of years.
They found it is possible for some stars to support as many as seven, and that a star like our sun could potentially support six planets with liquid water.
"More than seven, and the planets become too close to each other and destabilize each other's orbits," Kane said.
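Kane's packing limit can be caricatured with a standard dynamical rule of thumb (not the paper's N-body simulations): neighbouring low-mass planets tend to stay stable long-term when their orbits are separated by roughly 8-10 mutual Hill radii. Packing Earth-mass planets into a Sun-like star's habitable zone under that assumption yields a similar single-digit count:

```python
# Hill-radius packing sketch; the masses, zone edges and 10-Hill-radius
# spacing rule are assumed illustrative values, not the study's model.
M_STAR = 1.0                 # stellar mass (solar masses)
M_PLANET = 3e-6              # ~1 Earth mass in solar masses
HZ_IN, HZ_OUT = 0.95, 1.67   # conservative habitable-zone edges (AU, assumed)

def mutual_hill_radius(a1: float, a2: float) -> float:
    """Mutual Hill radius of two equal-mass planets at semi-major axes a1, a2."""
    return ((2 * M_PLANET) / (3 * M_STAR)) ** (1 / 3) * (a1 + a2) / 2

def max_packed_planets(spacing: float = 10.0) -> int:
    """Count orbits that fit between HZ_IN and HZ_OUT at `spacing` Hill radii apart."""
    a, count = HZ_IN, 1
    while True:
        a_next = a
        for _ in range(50):          # fixed-point solve for the next orbit
            a_next = a + spacing * mutual_hill_radius(a, a_next)
        if a_next > HZ_OUT:
            return count
        a, count = a_next, count + 1

print(max_packed_planets())          # a handful of Earth-mass orbits fit
```

Tightening the assumed spacing (e.g. 8 Hill radii) fits more orbits, which is the same trade-off between packing and long-term stability the quote describes.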
Why then does our solar system have only one habitable planet if it is capable of supporting six? It helps if the planets move in circular rather than oval or irregular orbits, minimizing close encounters and maintaining stable orbits.
Kane also suspects Jupiter, which has a mass two-and-a-half times that of all the other planets in the solar system combined, limited our system's habitability.
"It has a big effect on the habitability of our solar system because it's massive and disturbs other orbits," Kane said.
Only a handful of stars are known to have multiple planets in their habitable zones. Moving forward, Kane plans to search for additional stars surrounded entirely by smaller planets. These stars will be prime targets for direct imaging with proposed NASA missions such as the Jet Propulsion Laboratory's Habitable Exoplanet Observatory concept.
Kane's study identified one such star, Beta CVn, which is relatively close by at 27 light years away. Because it doesn't have a Jupiter-like planet, it will be included as one of the stars checked for multiple habitable zone planets.
Future studies will also involve the creation of new models that examine the atmospheric chemistry of habitable zone planets in other star systems.
Projects like these offer more than new avenues in the search for life in outer space. They also offer scientists insight into forces that might change life on our own planet one day.
Read more at Science Daily
Laughter acts as a stress buffer -- and even smiling helps
People who laugh frequently in their everyday lives may be better equipped to deal with stressful events -- although this does not seem to apply to the intensity of laughter. These are the findings reported by a research team from the University of Basel in the journal PLOS ONE.
It is estimated that people typically laugh 18 times a day -- generally during interactions with other people and depending on the degree of pleasure they experience. Researchers have also reported differences related to time of day, age, and gender -- for example, it is known that women smile more than men on average. Now, researchers from the Division of Clinical Psychology and Epidemiology of the Department of Psychology at the University of Basel have recently conducted a study on the relationship between stressful events and laughter in terms of perceived stress in everyday life.
Questions asked by app
In the intensive longitudinal study, an acoustic signal from a mobile phone app prompted participants to answer questions eight times a day at irregular intervals for a period of 14 days. The questions related to the frequency and intensity of laughter and the reason for laughing -- as well as any stressful events or stress symptoms experienced -- in the time since the last signal.
Using this method, the researchers working with the lead authors, Dr. Thea Zander-Schellenberg and Dr. Isabella Collins, were able to study the relationships between laughter, stressful events, and physical and psychological symptoms of stress ("I had a headache" or "I felt restless") as part of everyday life. The newly published analysis was based on data from 41 psychology students, 33 of whom were women, with an average age of just under 22.
Intensity of laughter has less influence
The first result of the observational study was expected from the specialist literature: in phases in which the subjects laughed frequently, stressful events were associated with milder symptoms of subjective stress. However, the second finding was unexpected. When it came to the interplay between stressful events and intensity of laughter (strong, medium or weak), there was no statistical correlation with stress symptoms. "This could be because people are better at estimating the frequency of their laughter, rather than its intensity, over the last few hours," says the research team.
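The analysis behind the two findings is essentially a moderation test: do stress symptoms following a stressful event depend on how often (or how intensely) the person had been laughing? A synthetic-data sketch of that design (all numbers invented; the study analysed real experience-sampling data, not a simulation like this):

```python
import numpy as np

# Synthetic experience-sampling data: 41 participants x 112 prompts, mimicking
# the 14-day, 8-prompts-a-day design. Effect sizes are invented so that
# laughter FREQUENCY buffers stress but INTENSITY does not.
rng = np.random.default_rng(42)
n = 41 * 112
event = rng.binomial(1, 0.3, n).astype(float)   # stressful event since last prompt?
freq = rng.poisson(2.0, n).astype(float)        # laughs since last prompt
intensity = rng.uniform(1, 3, n)                # weak/medium/strong laughter
symptoms = (1.0 * event                         # events raise stress symptoms...
            - 0.15 * event * freq               # ...less so after frequent laughter
            + 0.0 * event * intensity           # ...intensity does nothing (by design)
            + rng.normal(0, 0.5, n))            # residual noise

# Ordinary least squares with both interaction (moderation) terms.
X = np.column_stack([np.ones(n), event, freq, intensity,
                     event * freq, event * intensity])
coef, *_ = np.linalg.lstsq(X, symptoms, rcond=None)

print(f"event x frequency: {coef[4]:+.3f}")     # clearly negative (buffering)
print(f"event x intensity: {coef[5]:+.3f}")     # near zero
```

A negative event-by-frequency coefficient alongside a null event-by-intensity coefficient is the pattern the Basel team reports, recovered here by construction.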
From Science Daily
Compounds show promise in search for tuberculosis antibiotics
Compounds tested for their potential as antibiotics have demonstrated promising activity against one of the deadliest infectious diseases -- tuberculosis (TB).
Researchers from the John Innes Centre evaluated two compounds with antibacterial properties, which had been produced by the company Redx Pharma as antibiotic candidates, particularly against TB.
TB, which is caused by the bacterium Mycobacterium tuberculosis, is often thought of as a disease of the past. But in recent years it has been increasing due, in part, to rising resistance to treatments and decreasing efficacy of vaccines.
One strategy in the search for new treatments is to find compounds that exploit well-known existing targets for drugs such as the bacterial enzyme DNA gyrase. This member of the DNA topoisomerase family of enzymes is required for bacterial DNA functionality, so compounds that inhibit its activity are much sought after as antibiotic candidates.
Using X-ray crystallography, the team elucidated the molecular details of the action of the compounds against their target.
Surprisingly, a very common mutation in DNA gyrase that causes bacteria to be resistant to a related group of antibiotics, the aminocoumarins, did not lead to resistance to the compounds under scrutiny here.
"We hope that companies and academic groups working to develop new antibiotics will find this study useful. It opens the way for further synthesis and investigation of compounds that interact with this target," says Professor Tony Maxwell, one of the authors of the study, which appears in the Journal of Antimicrobial Chemotherapy.
To date, efforts to develop new treatments for TB have been unsuccessful, with current treatments having been used for over 50 years.
World Health Organisation (WHO) figures reveal that each day over 4,000 people die from TB and nearly 30,000 people fall ill from the disease. Nearly 500,000 people fell ill with drug-resistant TB in 2018.
Read more at Science Daily
Jul 30, 2020
Simulating quantum 'time travel' disproves butterfly effect in quantum realm
Using a quantum computer to simulate time travel, researchers have demonstrated that, in the quantum realm, there is no "butterfly effect." In the research, information -- qubits, or quantum bits -- "time travel" into the simulated past. One of them is then strongly damaged, like stepping on a butterfly, metaphorically speaking. Surprisingly, when all qubits return to the "present," they appear largely unaltered, as if reality is self-healing.
"On a quantum computer, there is no problem simulating opposite-in-time evolution, or simulating running a process backwards into the past," said Nikolai Sinitsyn, a theoretical physicist at Los Alamos National Laboratory and coauthor of the paper with Bin Yan, a postdoc in the Center for Nonlinear Studies, also at Los Alamos. "So we can actually see what happens with a complex quantum world if we travel back in time, add small damage, and return. We found that our world survives, which means there's no butterfly effect in quantum mechanics."
In Ray Bradbury's 1952 science fiction story, "A Sound of Thunder," a character used a time machine to travel to the deep past, where he stepped on a butterfly. Upon returning to the present time, he found a different world. This story is often credited with coining the term "butterfly effect," which refers to the extremely high sensitivity of a complex, dynamic system to its initial conditions. In such a system, early, small factors go on to strongly influence the evolution of the entire system.
Instead, Yan and Sinitsyn found that simulating a return to the past to cause small local damage in a quantum system leads to only small, insignificant local damage in the present.
This effect has potential applications in information-hiding hardware and testing quantum information devices. Information can be hidden by a computer by converting the initial state into a strongly entangled one.
"We found that even if an intruder performs state-damaging measurements on the strongly entangled state, we still can easily recover the useful information because this damage is not magnified by a decoding process," Yan said. "This justifies talks about creating quantum hardware that will be used to hide information."
This new finding could also be used to test whether a quantum processor is, in fact, working by quantum principles. Since the newfound no-butterfly effect is purely quantum, if a processor runs Yan and Sinitsyn's system and shows this effect, then it must be a quantum processor.
To test the butterfly effect in quantum systems, Yan and Sinitsyn combined theory with simulations on the IBM-Q quantum processor, showing how a circuit of quantum gates could evolve a complex system both forward and backward in time.
Presto, a quantum time-machine simulator.
In the team's experiment, Alice, a favorite stand-in agent used for quantum thought experiments, prepares one of her qubits in the present time and runs it backwards through the quantum computer. In the deep past, an intruder -- Bob, another favorite stand-in -- measures Alice's qubit. This action disturbs the qubit and destroys all its quantum correlations with the rest of the world. Next, the system is run forward to the present time.
According to Ray Bradbury, Bob's small damage to the state and all those correlations in the past should be quickly magnified during the complex forward-in-time evolution. Hence, Alice should be unable to recover her information at the end.
But that's not what happened. Yan and Sinitsyn found that most of the presently local information was hidden in the deep past in the form of essentially quantum correlations that could not be damaged by minor tampering. They showed that the information returns to Alice's qubit without much damage despite Bob's interference. Counterintuitively, for deeper travels to the past and for bigger "worlds," Alice's final information returns to her even less damaged.
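The protocol can be sketched numerically. This is my own minimal NumPy simulation, not the authors' code: a Haar-random unitary stands in for the complex time evolution, and Bob's measurement is modeled as projecting one qubit onto |0>. Despite that damage in the "past," the fidelity of Alice's recovered qubit stays well above the 0.5 that a fully scrambled outcome would give.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    # Haar-random unitary: QR-decompose a complex Gaussian matrix,
    # then fix the phases of R's diagonal.
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n = 8                                   # total qubits; qubit 0 is Alice's
dim = 2 ** n

def damage_and_return(phi, U):
    # Full register: Alice's qubit in state phi, the rest in |0...0>.
    rest = np.zeros(dim // 2)
    rest[0] = 1.0
    psi = np.kron(phi, rest).astype(complex)
    back = U.conj().T @ psi             # run "backwards in time"
    back[1::2] = 0.0                    # Bob projects the last qubit onto |0>
    back = back / np.linalg.norm(back)  # renormalise after his measurement
    fwd = U @ back                      # run forward to the "present"
    m = fwd.reshape(2, dim // 2)        # split off Alice's qubit
    rho_a = m @ m.conj().T              # her reduced density matrix
    return float(np.real(phi.conj() @ rho_a @ phi))

phi = np.array([1.0, 1.0]) / np.sqrt(2)   # Alice stores |+>
fids = [damage_and_return(phi, haar_unitary(dim, rng)) for _ in range(20)]
print(f"mean recovery fidelity: {np.mean(fids):.2f}")  # well above 0.5
```

A classically chaotic system would magnify the disturbance until Alice's state was unrecognizable (fidelity ~0.5); here the scrambling itself protects her information.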
Read more at Science Daily
"On a quantum computer, there is no problem simulating opposite-in-time evolution, or simulating running a process backwards into the past," said Nikolai Sinitsyn, a theoretical physicist at Los Alamos National Laboratory and coauthor of the paper with Bin Yan, a post doc in the Center for Nonlinear Studies, also at Los Alamos. "So we can actually see what happens with a complex quantum world if we travel back in time, add small damage, and return. We found that our world survives, which means there's no butterfly effect in quantum mechanics."
In Ray Bradbury's 1952 science fiction story, "A Sound of Thunder," a character used a time machine to travel to the deep past, where he stepped on a butterfly. Upon returning to the present time, he found a different world. This story is often credited with coining the term "butterfly effect," which refers to the extremely high sensitivity of a complex, dynamic system to its initial conditions. In such a system, early, small factors go on to strongly influence the evolution of the entire system.
Instead, Yan and Sinitsyn found that simulating a return to the past to cause small local damage in a quantum system leads to only small, insignificant local damage in the present.
This effect has potential applications in information-hiding hardware and testing quantum information devices. Information can be hidden by a computer by converting the initial state into a strongly entangled one.
"We found that even if an intruder performs state-damaging measurements on the strongly entangled state, we still can easily recover the useful information because this damage is not magnified by a decoding process," Yan said. "This justifies talks about creating quantum hardware that will be used to hide information."
This new finding could also be used to test whether a quantum processor is, in fact, working by quantum principles. Since the newfound no-butterfly effect is purely quantum, if a processor runs Yan and Sinitsyn's system and shows this effect, then it must be a quantum processor.
To test the butterfly effect in quantum systems, Yan and Sinitsyn used theory and simulations with the IBM-Q quantum processor to show how a circuit could evolve a complex system by applying quantum gates, with forwards and backwards cause and effect.
Presto, a quantum time-machine simulator.
In the team's experiment, Alice, a favorite stand-in agent used for quantum thought experiments, prepares one of her qubits in the present time and runs it backwards through the quantum computer. In the deep past, an intruder -- Bob, another favorite stand-in -- meaures Alice's qubit. This action disturbs the qubit and destroys all its quantum correlations with the rest of the world. Next, the system is run forward to the present time.
According to Ray Bradbury, Bob's small damage to the state and all those correlations in the past should be quickly magnified during the complex forward-in-time evolution. Hence, Alice should be unable to recover her information at the end.
But that's not what happened. Yan and Sinitsyn found that most of the presently local information was hidden in the deep past in the form of essentially quantum correlations that could not be damaged by minor tampering. They showed that the information returns to Alice's qubit without much damage despite Bob's interference. Counterintuitively, for deeper travels to the past and for bigger "worlds," Alice's final information returns to her even less damaged.
Read more at Science Daily
Breakthrough method for predicting solar storms
Extensive power outages and satellite blackouts that affect air travel and the internet are some of the potential consequences of massive solar storms. These storms are believed to be caused by the release of enormous amounts of stored magnetic energy due to changes in the magnetic field of the sun's outer atmosphere -- something that until now has eluded scientists' direct measurement. Researchers believe this recent discovery could lead to better "space weather" forecasts in the future.
"We are becoming increasingly dependent on space-based systems that are sensitive to space weather. Earth-based networks and the electrical grid can be severely damaged if there is a large eruption," says Tomas Brage, Professor of Mathematical Physics at Lund University in Sweden.
Solar flares are bursts of radiation and charged particles, and can cause geomagnetic storms on Earth if they are large enough. Currently, researchers focus on sunspots on the surface of the sun to predict possible eruptions. Another and more direct indication of increased solar activity would be changes in the much weaker magnetic field of the outer solar atmosphere -- the so-called Corona.
However, no direct measurement of the actual magnetic fields of the Corona has been possible so far.
"If we are able to continuously monitor these fields, we will be able to develop a method that can be likened to meteorology for space weather. This would provide vital information for our society which is so dependent on high-tech systems in our everyday lives," says Dr Ran Si, post-doc in this joint effort by Lund and Fudan Universities.
The method involves what could be labelled a quantum-mechanical interference. Since basically all information about the sun reaches us through "light" sent out by ions in its atmosphere, the magnetic fields must be detected by measuring their influence on these ions. But the internal magnetic fields of ions are enormous -- hundreds or thousands of times stronger than the fields humans can generate even in their most advanced labs. Therefore, the weak coronal fields will leave basically no trace, unless we can rely on this very delicate effect -- the interference between two "constellations" of the electrons in the ion that are close -- very close -- in energy.
The breakthrough for the research team was to predict and analyze this "needle in the haystack" in an ion (nine times ionized iron) that is very common in the corona.
The work is based on state-of-the-art calculations performed in the Mathematical Physics division of Lund University, combined with experiments using a device that could be thought of as able to produce and capture small parts of the solar corona -- the Electron Beam Ion Trap (EBIT) in Professor Roger Hutton's group at Fudan University in Shanghai.
Read more at Science Daily
"We are becoming increasingly dependent on space-based systems that are sensitive to space weather. Earth-based networks and the electrical grid can be severely damaged if there is a large eruption," says Tomas Brage, Professor of Mathematical Physics at Lund University in Sweden.
Solar flares are bursts of radiation and charged particles, and can cause geomagnetic storms on Earth if they are large enough. Currently, researchers focus on sunspots on the surface of the sun to predict possible eruptions. Another and more direct indication of increased solar activity would be changes in the much weaker magnetic field of the outer solar atmosphere -- the so-called Corona.
However, no direct measurement of the actual magnetic fields of the Corona has been possible so far.
"If we are able to continuously monitor these fields, we will be able to develop a method that can be likened to meteorology for space weather. This would provide vital information for our society which is so dependent on high-tech systems in our everyday lives," says Dr Ran Si, post-doc in this joint effort by Lund and Fudan Universities.
The method involves what could be labelled a quantum-mechanical interference. Since basically all information about the sun reaches us through "light" sent out by ions in its atmosphere, the magnetic fields must be detected by measuring their influence on these ions. But the internal magnetic fields of ions are enormous -- hundreds or thousands of times stronger than the fields humans can generate even in their most advanced labs. Therefore, the weak coronal fields will leave basically no trace, unless we can rely on this very delicate effect -- the interference between two "constellations" of the electrons in the ion that are close -- very close -- in energy.
The breakthrough for the research team was to predict and analyze this "needle in the haystack" in an ion (nine times ionized iron) that is very common in the corona.
The work is based on state-of-the art calculations performed in the Mathematical Physics division of Lund University and combined with experiments using a device that could be thought of as being able to produce and capture small parts of the solar corona -- the Electron Beam Ion Trap, EBIT, in Professor Roger Hutton's group in Fudan University in Shanghai.
Read more at Science Daily
Single-shot COVID-19 vaccine protects non-human primates
The development of a safe and effective vaccine will likely be required to end the COVID-19 pandemic. A group of scientists, led by Beth Israel Deaconess Medical Center (BIDMC) immunologist Dan H. Barouch, MD, PhD, now report that a leading candidate COVID-19 vaccine developed at BIDMC in collaboration with Johnson & Johnson raised neutralizing antibodies and robustly protected non-human primates (NHPs) against SARS-CoV-2, the virus that causes COVID-19. This study builds on the team's previous results and is published in the journal Nature.
"This vaccine led to robust protection against SARS-CoV-2 in rhesus macaques and is now being evaluated in humans," said Barouch, who is Director of BIDMC's Center for Virology and Vaccine Research.
The vaccine uses a common cold virus, called adenovirus serotype 26 (Ad26), to deliver the SARS-CoV-2 spike protein into host cells, where it stimulates the body to raise immune responses against the coronavirus. Barouch has been working on the development of a COVID-19 vaccine since January, when Chinese scientists released the SARS-CoV-2 genome. Barouch's group, in collaboration with Johnson & Johnson, developed a series of vaccine candidates designed to express different variants of the SARS-CoV-2 spike protein, which is the major target for neutralizing antibodies.
Barouch and colleagues conducted a study in 52 NHPs, immunizing 32 adult rhesus macaques with a single dose of one of seven different versions of the Ad26-based vaccine, and giving 20 animals sham vaccines as placebo controls. All vaccinated animals developed neutralizing antibodies following immunization. Six weeks after the immunization, all animals were exposed to SARS-CoV-2. All 20 animals that received the sham vaccine became infected and showed high levels of virus in their lungs and nasal swabs. Of the six animals that received the optimal vaccine candidate, Ad26.COV2.S, none showed virus in their lungs, and only one animal showed low levels of virus in nasal swabs.
Moreover, neutralizing antibody responses correlated with protection, suggesting that this biomarker will be useful in the clinical development of COVID-19 vaccines for use in humans.
"Our data show that a single immunization with Ad26.COV2.S robustly protected rhesus macaques against SARS-CoV-2 challenge," said Barouch, who is also the William Bosworth Castle Professor of Medicine at Harvard Medical School, a member of the Ragon Institute of MGH, MIT, and Harvard, and a co-leader of the vaccine working group of the Massachusetts Consortium on Pathogen Readiness. "A single-shot immunization has practical and logistical advantages over a two-shot regimen for global deployment and pandemic control, but a two-shot vaccine will likely be more immunogenic, and thus both regimens are being evaluated in clinical trials. We look forward to the results of the clinical trials that will determine the safety and immunogenicity, and ultimately the efficacy, of the Ad26.COV2.S vaccine in humans."
Investigators at Beth Israel Deaconess Medical Center (BIDMC) and other institutions have initiated a first-in-human Phase 1/2 clinical trial of the Ad26.COV2.S vaccine in healthy volunteers. Kathryn E. Stephenson, MD, MPH, is the principal investigator for the trial at BIDMC, which is funded by Janssen Vaccines & Prevention, B.V., a pharmaceutical research arm of Johnson & Johnson.
Read more at Science Daily
"This vaccine led to robust protection against SARS-CoV-2 in rhesus macaques and is now being evaluated in humans," said Barouch, who is Director of BIDMC's Center for Virology and Vaccine Research.
The vaccine uses a common cold virus, called adenovirus serotype 26 (Ad26), to deliver the SARS-CoV-2 spike protein into host cells, where it stimulates the body to raise immune responses against the coronavirus. Barouch has been working on the development of a COVID-19 vaccine since January, when Chinese scientists released the SARS-CoV-2 genome. Barouch's group, in collaboration with Johnson & Johnson, developed a series of vaccine candidates designed to express different variants of the SARS-CoV-2 spike protein, which is the major target for neutralizing antibodies.
Barouch and colleagues conducted a study in 52 NHPs, immunizing 32 adult rhesus macaques with a single dose of one of seven different versions of the Ad26-based vaccine, and giving 20 animals sham vaccines as placebo controls. All vaccinated animals developed neutralizing antibodies following immunization. Six weeks after the immunization, all animals were exposed to SARS-CoV-2. All 20 animals that received the sham vaccine became infected and showed high levels of virus in their lungs and nasal swabs. Of the six animals that received the optimal vaccine candidate, Ad26.COV2.S, none showed virus in their lungs, and only one animal showed low levels of virus in nasal swabs.
Moreover, neutralizing antibody responses correlated with protection, suggesting that this biomarker will be useful in the clinical development of COVID-19 vaccines for use in humans.
"Our data show that a single immunization with Ad26.COV2.S robustly protected rhesus macaques against SARS-CoV-2 challenge," said Barouch, who is also the William Bosworth Castle Professor of Medicine at Harvard Medical School, a member of the Ragon Institute of MGH, MIT, and Harvard, and a co-leader of the vaccine working group of the Massachusetts Consortium on Pathogen Readiness. "A single-shot immunization has practical and logistical advantages over a two-shot regimen for global deployment and pandemic control, but a two-shot vaccine will likely be more immunogenic, and thus both regimens are being evaluated in clinical trials. We look forward to the results of the clinical trials that will determine the safety and immunogenicity, and ultimately the efficacy, of the Ad26.COV2.S vaccine in humans."
Investigators at Beth Israel Deaconess Medical Center (BIDMC) and other institutions have initiated a first-in-human Phase 1/2 clinical trial of the Ad26.COV2.S vaccine in healthy volunteers. Kathryn E. Stephenson, MD, MPH, is the principal investigator for the trial at BIDMC, which is funded by Janssen Vaccines & Prevention, B.V., a pharmaceutical research arm of Johnson & Johnson.
Read more at Science Daily
Mars 2020 Perseverance Rover Mission to Red Planet successfully launched
Humanity's most sophisticated rover launched with the Ingenuity Mars Helicopter at 7:50 a.m. EDT (4:50 a.m. PDT) Friday on a United Launch Alliance (ULA) Atlas V rocket from Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida.
"With the launch of Perseverance, we begin another historic mission of exploration," said NASA Administrator Jim Bridenstine. "This amazing explorer's journey has already required the very best from all of us to get it to launch through these challenging times. Now we can look forward to its incredible science and to bringing samples of Mars home even as we advance human missions to the Red Planet. As a mission, as an agency, and as a country, we will persevere."
The ULA Atlas V's Centaur upper stage initially placed the Mars 2020 spacecraft into a parking orbit around Earth. The engine fired for a second time and the spacecraft separated from the Centaur as expected. Navigation data indicate the spacecraft is perfectly on course to Mars.
Mars 2020 sent its first signal to ground controllers via NASA's Deep Space Network at 9:15 a.m. EDT (6:15 a.m. PDT). However, telemetry (more detailed spacecraft data) had not yet been acquired at that point. Around 11:30 a.m. EDT (8:30 a.m. PDT), a signal with telemetry was received from Mars 2020 by NASA ground stations. Data indicate the spacecraft had entered a state known as safe mode, likely because a part of the spacecraft was a little colder than expected while Mars 2020 was in Earth's shadow. All temperatures are now nominal and the spacecraft is out of Earth's shadow.
When a spacecraft enters safe mode, all but essential systems are turned off until it receives new commands from mission control. An interplanetary launch is fast-paced and dynamic, so a spacecraft is designed to put itself in safe mode if its onboard computer perceives conditions are not within its preset parameters. Right now, the Mars 2020 mission is completing a full health assessment on the spacecraft and is working to return the spacecraft to a nominal configuration for its journey to Mars.
The Perseverance rover's astrobiology mission is to seek out signs of past microscopic life on Mars, explore the diverse geology of its landing site, Jezero Crater, and demonstrate key technologies that will help us prepare for future robotic and human exploration.
"Jezero Crater is the perfect place to search for signs of ancient life," said Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate at the agency's headquarters in Washington. "Perseverance is going to make discoveries that cause us to rethink our questions about what Mars was like and how we understand it today. As our instruments investigate rocks along an ancient lake bottom and select samples to return to Earth, we may very well be reaching back in time to get the information scientists need to say that life has existed elsewhere in the universe."
The Martian rock and dust Perseverance's Sample Caching System collects could answer fundamental questions about the potential for life to exist beyond Earth. Two future missions currently under consideration by NASA, in collaboration with ESA (European Space Agency), will work together to get the samples to an orbiter for return to Earth. When they arrive on Earth, the Mars samples will undergo in-depth analysis by scientists around the world using equipment far too large to send to the Red Planet.
An Eye to a Martian Tomorrow
While most of Perseverance's seven instruments are geared toward learning more about the planet's geology and astrobiology, the MOXIE (Mars Oxygen In-Situ Resource Utilization Experiment) instrument's job is focused on missions yet to come. Designed to demonstrate that converting Martian carbon dioxide into oxygen is possible, it could lead to future versions of MOXIE technology that become staples on Mars missions, providing oxygen for rocket fuel and breathable air.
Also future-leaning is the Ingenuity Mars Helicopter, which will remain attached to the belly of Perseverance for the flight to Mars and the first 60 or so days on the surface. A technology demonstrator, Ingenuity's goal is a pure flight test -- it carries no science instruments.
Over 30 sols (31 Earth days), the helicopter will attempt up to five powered, controlled flights. The data acquired during these flight tests will help the next generation of Mars helicopters provide an aerial dimension to Mars explorations -- potentially scouting for rovers and human crews, transporting small payloads, or investigating difficult-to-reach destinations.
The rover's technologies for entry, descent, and landing also will provide information to advance future human missions to Mars.
"Perseverance is the most capable rover in history because it is standing on the shoulders of our pioneers Sojourner, Spirit, Opportunity, and Curiosity," said Michael Watkins, director of NASA's Jet Propulsion Laboratory in Southern California. "In the same way, the descendants of Ingenuity and MOXIE will become valuable tools for future explorers to the Red Planet and beyond."
About seven cold, dark, unforgiving months of interplanetary space travel lie ahead for the mission -- a fact never far from the minds of the Mars 2020 project team.
"There is still a lot of road between us and Mars," said John McNamee, Mars 2020 project manager at JPL. "About 290 million miles of them. But if there was ever a team that could make it happen, it is this one. We are going to Jezero Crater. We will see you there Feb. 18, 2021."
The Mars 2020 Perseverance mission is part of America's larger Moon to Mars exploration approach that includes missions to the Moon as a way to prepare for human exploration of the Red Planet. Charged with sending the first woman and next man to the Moon by 2024, NASA will establish a sustained human presence on and around the Moon by 2028 through NASA's Artemis program.
Read more at Science Daily
Labels:
Ancient Life,
Life,
Mars,
Mars Rover,
NASA,
Science
Jul 29, 2020
How stony-iron meteorites form
Meteorites give us insight into the early development of the solar system. Using the SAPHiR instrument at the Research Neutron Source Heinz Maier-Leibnitz (FRM II) at the Technical University of Munich (TUM), a scientific team has for the first time simulated the formation of a class of stony-iron meteorites, so-called pallasites, on a purely experimental basis.
"Pallasites are the optically most beautiful and unusual meteorites," says Dr. Nicolas Walte, the first author of the study, in an enthusiastic voice. They belong to the group of stony-iron meteorites and comprise green olivine crystals embedded in nickel and iron. Despite decades of research, their exact origins remained shrouded in mystery.
To solve this puzzle, Dr. Nicolas Walte, an instrument scientist at the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, together with colleagues from the Bavarian Geoinstitute at the University of Bayreuth and the Royal Holloway University of London, investigated the pallasite formation process. In a first, they succeeded in experimentally reproducing the structures of all types of pallasites.
Deployment of the SAPHiR instrument
For its experiments, the team used the SAPHiR multi-anvil press, set up at the MLZ under the leadership of Prof. Hans Keppler of the Bavarian Geoinstitute, together with the similar MAVO press in Bayreuth. Although neutrons from the FRM II have not yet been fed into SAPHiR, experiments at high pressures and high temperatures can already be performed.
"With a press force of 2400 tons, SAPHiR can exert a pressure of 15 gigapascals (GPa) on samples at over 2000 °C," explains Walte. "That is double the pressures needed to convert graphite into diamond." To simulate the collision of two celestial bodies, the research team required a pressure of merely 1 GPa at 1300 °C.
How are pallasites formed?
Until recently, pallasites were believed to form at the boundary between the metallic core and the rocky mantle of asteroids. According to an alternative scenario, pallasites form closer to the surface after a collision with another celestial body: during the impact, molten iron from the core of the impactor mingles with the olivine-rich mantle of the parent body.
The experiments carried out have now confirmed this impact hypothesis. Another prerequisite for the formation of pallasites is that the iron core and rocky mantle of the asteroid have partially separated beforehand.
All this happened shortly after their formation about 4.5 billion years ago. During this phase, the asteroids heated up until the denser metallic components melted and sank to the center of the celestial bodies.
The key finding of the study is that both processes -- the partial separation of core and mantle, and the subsequent impact of another celestial body -- are required for pallasites to form.
Insights into the origins of the solar system
"Generally, meteorites are the oldest directly accessible constituents of our solar system. The age of the solar system and its early history are inferred primarily from the investigation of meteorites," explains Walte.
"Like many asteroids, the Earth and moon are stratified into multiple layers, consisting of core, mantle and crust," says Nicolas Walte. "In this way, complex worlds were created through the agglomeration of cosmic debris. In the case of the Earth, this ultimately laid the foundations for the emergence of life."
Read more at Science Daily
"Pallasites are the optically most beautiful and unusual meteorites," says Dr. Nicolas Walte, the first author of the study, in an enthusiastic voice. They belong to the group of stony-iron meteorites and comprise green olivine crystals embedded in nickel and iron. Despite decades of research, their exact origins remained shrouded in mystery.
To solve this puzzle, Dr. Nicolas Walte, an instrument scientist at the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, together with colleagues from the Bavarian Geoinstitute at the University of Bayreuth and the Royal Holloway University of London, investigated the pallasite formation process. In a first, they succeeded in experimentally reproducing the structures of all types of pallasites.
Deployment of the SAPHiR instrument
For its experiments, the team used the SAPHiR multi-anvil press which was set up under the lead of Prof. Hans Keppler of the Bavarian Geoinstitute at the MLZ and the similar MAVO press in Bayreuth. Although neutrons from the FRM II have not yet been fed into SAPHiR, experiments under high pressures and at high temperatures can already be performed.
"With a press force of 2400 tons, SAPHiR can exert a pressure of 15 gigapascals (GPa) on samples at over 2000 °C," explains Walte. "That is double the pressures needed to convert graphite into diamond." To simulate the collision of two celestial bodies, the research team required a pressure of merely 1 GPa at 1300 °C.
How are pallasites formed?
Until recently, pallasites were believed to form at the boundary between the metallic core and the rocky mantle of asteroids. According to an alternative scenario, pallasites form closer to the surface after the collision with another celestial body. During the impact molten iron from the core of the impactor mingles with the olivine-rich mantle of the parent body.
The experiments carried out have now confirmed this impact hypothesis. Another prerequisite for the formation of pallasites is that the iron core and rocky mantle of the asteroid have partially separated beforehand.
All this happened shortly after their formation about 4.5 billion years ago. During this phase, the asteroids heated up until the denser metallic components melted and sank to the center of the celestial bodies.
The key finding of the study is that both processes -- the partial separation of core and mantle, and the subsequent impact of another celestial body -- are required for pallasites to form.
Insights into the origins of the solar system
"Generally, meteorites are the oldest directly accessible constituents of our solar system. The age of the solar system and its early history are inferred primarily from the investigation of meteorites," explains Walte.
"Like many asteroids, the Earth and moon are stratified into multiple layers, consisting of core, mantle and crust," says Nicolas Walte. "In this way, complex worlds were created through the agglomeration of cosmic debris. In the case of the Earth, this ultimately laid the foundations for the emergence of life."
Read more at Science Daily
Strange dismembered star cluster found at Galaxy's edge
An international team of astronomers has discovered the remnant of an ancient collection of stars that was torn apart by our own galaxy, the Milky Way, more than two billion years ago.
The extraordinary discovery of this shredded 'globular cluster' is surprising, as the stars in this galactic archaeological find have much lower quantities of heavier elements than in other such clusters. The evidence strongly suggests the original structure was the last of its kind, a globular cluster whose birth and life were different to those remaining today.
Our Galaxy is home to about 150 globular clusters, each a ball of a million or so stars that orbit in the Galaxy's tenuous stellar halo. These globular clusters are old and have witnessed the growth of the Milky Way over billions of years.
The study, published in Nature, was led by University of Sydney PhD student, Zhen Wan, and his supervisor, Professor Geraint Lewis, as part of the S5 international collaboration.
Using the Anglo-Australian Telescope in outback New South Wales, this collaboration measured the speeds of a stream of stars in the Phoenix constellation, revealing them to be remnants of a globular cluster that was pulled apart by the gravity of the Milky Way about two billion years ago.
Mr Wan said: "Once we knew which stars belonged to the stream, we measured their abundance of elements heavier than hydrogen and helium; something astronomers refer to as metallicity. We were really surprised to find that the Phoenix Stream has a very low metallicity, making it distinctly different to all of the other globular clusters in the Galaxy.
"Even though the cluster was destroyed billions of years ago, we can still tell it formed in the early Universe from the composition of its stars."
HEAVY METALS
After the Big Bang, only hydrogen and helium existed in any substantial amount in the Universe. These elements formed the first generation of stars many billions of years ago. It is within these and later stellar generations that heavier elements were formed, such as the calcium, oxygen and phosphorus that in part make up your bones.
Observations of other globular clusters have found that their stars are enriched with heavier elements forged in earlier generations of stars. Current formation theories suggest that this dependence on previous stars means that no globular cluster should be found unenriched and that there is a minimum metallicity 'floor' below which no cluster can form.
But the metallicity of the Phoenix Stream progenitor sits well below this minimum, posing a significant problem for our ideas of globular cluster origins.
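For reference, the metallicity astronomers quote is conventionally written [Fe/H], a logarithmic iron abundance relative to the Sun. A minimal sketch follows; the solar ratio is an approximate standard value and the example abundance is illustrative, not the stream's measured value:

```python
import math

SOLAR_FE_H = 3.16e-5   # approximate solar iron-to-hydrogen number ratio

def fe_h(iron_to_hydrogen):
    """Metallicity [Fe/H]: log10 of a star's iron-to-hydrogen number
    ratio, relative to the Sun's."""
    return math.log10(iron_to_hydrogen / SOLAR_FE_H)

# Illustrative only: a star with 1/100 of the solar iron abundance
# sits at [Fe/H] = -2.0, already lower than most globular clusters.
print(round(fe_h(3.16e-7), 2))   # -> -2.0
```

Each unit of [Fe/H] is a factor of ten in iron abundance, so a cluster below the proposed floor is dramatically more pristine than typical clusters, not just marginally so.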
"This stream comes from a cluster that, by our understanding, shouldn't have existed," said co-author Associate Professor Daniel Zucker from Macquarie University.
S5 team leader, Dr Ting Li from Carnegie Observatories, said: "One possible explanation is that the Phoenix Stream represents the last of its kind, the remnant of a population of globular clusters that was born in radically different environments to those we see today."
While potentially numerous in the past, this population of globular clusters was steadily depleted by the gravitational forces of the Galaxy, which tore them to pieces, absorbing their stars into the main body of the galactic system. This means that the stream is a relatively temporary phenomenon, which will dissipate in time.
"We found the remains of this cluster before it faded forever into the Galaxy's halo," Mr Wan said.
As yet, there is no clear explanation for the origins of the Phoenix Stream progenitor cluster, and where it sits in the evolution of galaxies remains unclear.
Professor Lewis said: "There is plenty of theoretical work left to do. There are now many new questions for us to explore about how galaxies and globular clusters form, which is incredibly exciting."
Is the Phoenix Stream unique? "In astronomy, when we find a new kind of object, it suggests that there are more of them out there," said co-author Dr Jeffrey Simpson from the University of New South Wales. "While globular clusters like the progenitor of the Phoenix Stream may no longer exist, their remnants may live on as faint streams."
Read more at Science Daily
Astronomers pinpoint the best place on Earth for a telescope: High on a frigid Antarctic plateau
Dome A, the highest ice dome on the Antarctic Plateau, could offer the clearest view on Earth of the stars at night, according to new research. The challenge? The location is one of the coldest and most remote places on Earth.
The findings were published today in Nature.
"A telescope located at Dome A could out-perform a similar telescope located at any other astronomical site on the planet," said UBC astronomer Paul Hickson, a co-author of the study. "The combination of high altitude, low temperature, long periods of continuous darkness, and an exceptionally stable atmosphere, makes Dome A a very attractive location for optical and infrared astronomy. A telescope located there would have sharper images and could detect fainter objects."
One of the biggest challenges in Earth-based astronomy is overcoming the effect of atmospheric turbulence on telescope image quality. This turbulence makes stars twinkle, and the measurement of its impact is referred to as 'seeing'. The less turbulence (the lower the seeing number), the better.
"The thinner boundary layer at Dome A makes it less challenging to locate a telescope above it, thereby giving greater access to the free atmosphere," said UBC astronomer Bin Ma, lead author on the paper.
Currently, the highest performing observatories are located in high-altitude locations along the equator (Chile and Hawai'i) and offer seeing in the range of 0.6 to 0.8 arcseconds. In general, the Antarctic has the potential for better seeing, owing to weaker turbulence in the free atmosphere, with an estimated range of 0.23 to 0.36 arcseconds at a location called Dome C.
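For context on why seeing, rather than telescope optics, limits ground-based image sharpness: the diffraction limit of a large telescope is far below even the best seeing values quoted above. A rough sketch (the 8-metre aperture and 500 nm wavelength are generic assumed numbers, not from the study):

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi  # convert radians to arcseconds

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion, theta = 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# A generic 8 m telescope observing visible light at 500 nm:
print(round(diffraction_limit_arcsec(500e-9, 8.0), 3))  # 0.016 arcsec
```

At roughly 0.016 arcseconds, the optics could in principle resolve detail some 40 times finer than 0.6 to 0.8 arcsecond seeing allows, which is why atmospheric turbulence dominates at typical sites.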
Ma, Hickson and colleagues in China and Australia evaluated a different location, Dome A -- also referred to as Dome Argus. Dome A is located near the centre of East Antarctica, 1,200 kilometres inland.
The researchers estimated the location has a thinner boundary layer (the lowest part of the atmosphere, which is influenced by friction from the Earth's surface) than Dome C. Previous measurements from Dome A have been taken in the daytime, but the authors report a median night-time seeing of 0.31 arcseconds, reaching as low as 0.13 arcseconds.
The measurements from Dome A, taken at a height of eight metres, were much better than those from the same height at Dome C and comparable to those at a height of 20 metres at Dome C.
Not surprisingly, the viewing capabilities of the researchers' equipment were also hampered by frost -- overcoming this issue could improve seeing by 10 to 12 per cent. But the site has promise, according to Ma.
Read more at Science Daily
New maps of chemical marks on DNA pinpoint regions relevant to many developmental diseases
In research that aims to illuminate the causes of human developmental disorders, Salk scientists have generated 168 new maps of chemical marks on strands of DNA -- called methylation -- in developing mice.
The data, published July 29, 2020, in a special edition of Nature devoted to the ENCODE Project (a public research effort aimed at identifying all functional elements in the human and mouse genomes), can help narrow down regions of the human genome that play roles in diseases such as schizophrenia and Rett syndrome. The paper's authors also contributed to two additional papers in the special edition.
"This is the only available dataset that looks at the methylation in a developing mouse over time, tissue by tissue," says senior author and Howard Hughes Medical Institute Investigator Joseph Ecker, a professor in Salk's Genomic Analysis Laboratory. "It's going to be a valuable resource to help in narrowing down the causal tissues of human developmental diseases."
While the sequence of DNA contained in every cell of your body is virtually identical, chemical marks on those strands of DNA give the cells their unique identities. The patterns of methylation on adult brain cells, for instance, are different than those on adult liver cells. That's in part because of short stretches in the genome called enhancers. When transcription factor proteins bind to these enhancer regions, a target gene is much more likely to be expressed. When an enhancer is methylated, however, transcription factors generally can't bind and the associated gene is less likely to be activated; these methyl marks are akin to applying the hand brake after parking a car.
Researchers know that mutations in these enhancer regions -- by affecting the expression levels of a corresponding gene -- can cause disease. But there are hundreds of thousands of enhancers and they can be located far from the gene they help regulate. So narrowing down which enhancer mutations may play a role in a developmental disease has been a challenge.
In the new work, Ecker and collaborators used experimental technologies and computational algorithms that they previously developed to study the DNA methylation patterns of cells in samples of a dozen types of tissues from mice over eight developmental stages.
"The breadth of samples that we applied this technology to is what's really key," says first author Yupeng He, who was previously a Salk postdoctoral research fellow and is now a senior bioinformatics scientist at Guardant Health.
They discovered more than 1.8 million regions of the mouse genome that had variations in methylation based on tissue, developmental stage or both. Early in development, those changes were mostly the loss of methylation on DNA -- akin to removing the brake on gene expression and allowing developmental genes to turn on. After birth, however, most sites became highly methylated again, putting the brakes on gene expression as the mouse approaches birth.
"We think that the removal of methylation makes the whole genome more open to dynamic regulation during development," says He. "After birth, genes critical for early development need to be more stably silenced because we don't want them turned on in mature tissue, so that's when methylation comes in and helps shut down the early developmental enhancers."
In the past, many researchers have studied methylation by homing in on areas of the genome near genes called CpG islands -- sections of DNA that have a lot of cytosine and guanine base pairs in them, since typical methylation occurs when a methyl is added to a cytosine that's followed by a guanine. However, in the new work, He and Ecker showed that 91.5 percent of the methylation variations they found during development occurred far away from CpG islands.
"If you only look at those CpG island regions near genes, as many people do, you'll miss a lot of the meaningful DNA changes that could be directly related to your research questions," says He.
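As a concrete illustration of the CpG terminology above: a CpG site is simply a cytosine immediately followed by a guanine, and CpG islands are commonly flagged by an elevated observed-to-expected CpG ratio. The sketch below applies that common heuristic to a toy sequence; it is not the method used in the paper.

```python
def cpg_stats(seq):
    """Count CpG dinucleotides and compute the observed/expected CpG ratio
    (obs/exp = N_CpG * L / (N_C * N_G)), a standard CpG-island heuristic."""
    seq = seq.upper()
    length = len(seq)
    n_c, n_g = seq.count("C"), seq.count("G")
    n_cpg = sum(1 for i in range(length - 1) if seq[i:i + 2] == "CG")
    obs_exp = n_cpg * length / (n_c * n_g) if n_c and n_g else 0.0
    return n_cpg, obs_exp

n_cpg, ratio = cpg_stats("CGCGTATACGAT")  # toy 12-base sequence
print(n_cpg, round(ratio, 2))  # 3 4.0
```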
Read more at Science Daily
New blood test shows great promise in the diagnosis of Alzheimer's disease
For many years, the diagnosis of Alzheimer's has been based on the characterization of amyloid plaques and tau tangles in the brain, typically after a person dies. An inexpensive and widely available blood test for the presence of plaques and tangles would have a profound impact on Alzheimer's research and care. According to the new study, measurements of phospho-tau217 (p-tau217), one of the tau proteins found in tangles, could provide a relatively sensitive and accurate indicator of both plaques and tangles -- corresponding to the diagnosis of Alzheimer's -- in living people.
"The p-tau217 blood test has great promise in the diagnosis, early detection, and study of Alzheimer's," said Oskar Hansson, MD, PhD, Professor of Clinical Memory Research at Lund University, Sweden, who leads the Swedish BioFINDER Study and is senior author of the study, which spearheaded the international collaborative effort. "While more work is needed to optimize the assay and test it in other people before it becomes available in the clinic, the blood test might become especially useful to improve the recognition, diagnosis, and care of people in the primary care setting."
Researchers evaluated a new p-tau217 blood test in 1,402 cognitively impaired and unimpaired research participants from well-known studies in Arizona, Sweden, and Colombia. The study, which was coordinated from Lund University in Sweden, included 81 Arizona participants in Banner Sun Health Research Institute's Brain Donation program who had clinical assessments and provided blood samples in their last years of life and then had neuropathological assessments after they died; 699 participants in the Swedish BioFINDER Study who had clinical, brain imaging, cerebrospinal fluid (CSF), and blood-based biomarker assessments; and 522 Colombian autosomal dominant Alzheimer's disease (ADAD)-causing mutation carriers and non-carriers from the world's largest ADAD cohort.
- In the Arizona (Banner Sun Health Research Institute) Brain Donation Cohort, the plasma p-tau217 assay discriminated between Arizona brain donors with and without the subsequent neuropathological diagnosis of "intermediate or high likelihood Alzheimer's" (i.e., characterized by plaques, as well as tangles that have at least spread to temporal lobe memory areas or beyond) with 89% accuracy; it distinguished between those with and without a diagnosis of "high likelihood Alzheimer's" with 98% accuracy; and higher p-tau217 measurements were correlated with higher brain tangle counts only in those persons who also had amyloid plaques.
- In the Swedish BioFINDER Study, the assay discriminated between persons with the clinical diagnoses of Alzheimer's and other neurodegenerative diseases with 96% accuracy, similar to tau PET scans and CSF biomarkers and better than several other blood tests and MRI measurements; and it distinguished between those with and without an abnormal tau PET scan with 93% accuracy.
- In the Colombia Cohort, the assay began to distinguish between mutation carriers and non-carriers 20 years before their estimated age at the onset of mild cognitive impairment.
In each of these analyses, p-tau217 (a major component of Alzheimer's disease-related tau tangles) performed better than p-tau181 (another component of tau tangles and a blood test recently found to have promise in the diagnosis of Alzheimer's) and several other studied blood tests.
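For readers unfamiliar with how figures like "96% accuracy" are derived, the sketch below computes accuracy, sensitivity, and specificity from a confusion matrix. The counts are invented for illustration, and the study's own figures may be reported as area-under-the-curve rather than raw accuracy.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from true/false positive/negative counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,  # fraction classified correctly
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Invented counts for a hypothetical 100-person validation set:
m = diagnostic_metrics(tp=45, fp=2, tn=51, fn=2)
print(m["accuracy"])  # 0.96
```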
Other study leaders include Jeffrey Dage, PhD, from Eli Lilly and Company, who developed the p-tau217 assay; co-first authors Sebastian Palmqvist, MD, PhD, and Shorena Janelidze, PhD, from Lund University; and Eric Reiman, MD, Banner Alzheimer's Institute, who organized the analysis of the Arizona and Colombian cohort data.
In the last two years, researchers have made great progress in the development of amyloid blood tests, providing valuable information about one of the two cardinal features of Alzheimer's. While more work is needed before the test is ready for use in the clinic, a p-tau217 blood test has the potential to provide information about both plaques and tangles, corresponding to the diagnosis of Alzheimer's. It has the potential to advance the disease's research and care in other important ways.
"Blood tests like p-tau217 have the potential to revolutionize Alzheimer's research, treatment and prevention trials, and clinical care," said Eric Reiman, MD, Executive Director of Banner Alzheimer's Institute in Phoenix and a senior author on the study.
"While there's more work to do, I anticipate that their impact in both the research and clinical setting will become readily apparent within the next two years."
Read more at Science Daily
Jul 28, 2020
Astrophysicist investigates the possibility of life below the surface of Mars
Although no life has been detected on the Martian surface, a new study by Dimitra Atri, astrophysicist and research scientist at the Center for Space Science at NYU Abu Dhabi, finds that conditions below the surface could potentially support it. The subsurface -- which is less harsh and has traces of water -- has never been explored. According to Atri, the steady bombardment of penetrating Galactic Cosmic Rays (GCRs) might provide the energy needed to catalyze organic activity there.
Atri's findings are reported in the study "Investigating the biological potential of galactic cosmic ray-induced radiation-driven chemical disequilibrium in the Martian subsurface environment," published in the journal Scientific Reports (Springer Nature).
There is growing evidence suggesting the presence of an aqueous environment on ancient Mars, raising the question of whether a life-supporting environment once existed. The erosion of the Martian atmosphere brought drastic changes to the planet's climate: surface water disappeared, shrinking habitable spaces, with only a limited amount of water remaining near the surface in the form of brines and water-ice deposits. Life, if it ever existed, would have had to adapt to harsh modern conditions, which include low temperatures, low surface pressure, and a high radiation dose.
The subsurface of Mars has traces of water in the form of water-ice and brines, and undergoes radiation-driven redox chemistry. Using a combination of numerical models, space mission data, and studies of deep-cave ecosystems on Earth for his research, Atri proposes mechanisms through which life, if it ever existed on Mars, could survive and be detected with the upcoming ExoMars mission (2022) by the European Space Agency and Roscosmos. He hypothesizes that galactic cosmic radiation, which can penetrate several meters below the surface, will induce chemical reactions that can be used for metabolic energy by extant life, and host organisms using mechanisms seen in similar chemical and radiation environments on Earth.
"It is exciting to contemplate that life could survive in such a harsh environment, as few as two meters below the surface of Mars," said Atri. "When the Rosalind Franklin rover on board the ExoMars mission (ESA and Roscosmos), equipped with a subsurface drill, is launched in 2022, it will be well-suited to detect extant microbial life and hopefully provide some important insights."
From Science Daily
Randomness theory could hold key to internet security
Is there an unbreakable code?
The question has been central to cryptography for thousands of years, and lies at the heart of efforts to secure private information on the internet. In a new paper, Cornell Tech researchers identified a problem that holds the key to whether all encryption can be broken -- as well as a surprising connection to a mathematical concept that aims to define and measure randomness.
"Our result not only shows that cryptography has a natural 'mother' problem, it also shows a deep connection between two quite separate areas of mathematics and computer science -- cryptography and algorithmic information theory," said Rafael Pass, professor of computer science at Cornell Tech.
Pass is co-author of "On One-Way Functions and Kolmogorov Complexity," which will be presented at the IEEE Symposium on Foundations of Computer Science, to be held Nov. 16-19 in Durham, North Carolina.
"The result," he said, "is that a natural computational problem introduced in the 1960s in the Soviet Union characterizes the feasibility of basic cryptography -- private-key encryption, digital signatures and authentication, for example."
For millennia, cryptography was considered a cycle: Someone invented a code, the code was effective until someone eventually broke it, and the code became ineffective. In the 1970s, researchers seeking a better theory of cryptography introduced the concept of the one-way function -- an easy task or problem in one direction that is impossible in the other.
For example, it's easy to light a match, but impossible to return a burning match to its unlit state without rearranging its atoms -- an immensely difficult task.
"The idea was, if we have such a one-way function, maybe that's a very good starting point for understanding cryptography," Pass said. "Encrypting the message is very easy. And if you have the key, you can also decrypt it. But someone who doesn't know the key should have to do the same thing as restoring a lit match."
But researchers have not been able to prove the existence of a one-way function. The most well-known candidate -- which is also the basis of the most commonly used encryption schemes on the internet -- relies on integer factorization. It's easy to multiply two random prime numbers -- for instance, 23 and 47 -- but significantly harder to find those two factors if only given their product, 1,081.
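The asymmetry in that example can be made concrete: multiplication is a single cheap step, while the naive way to invert it is a search. A toy sketch (trial division works for tiny numbers like 1,081 but is hopeless at cryptographic sizes):

```python
def factor(n):
    """Return a factor pair of n by trial division; the search grows with
    sqrt(n), which is what makes inversion expensive for large n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

print(23 * 47)       # forward direction: one multiplication -> 1081
print(factor(1081))  # reverse direction: a search -> (23, 47)
```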
It is believed that no efficient factoring algorithm exists for large numbers, Pass said, though researchers may not have found the right algorithms yet.
"The central question we're addressing is: Does it exist? Is there some natural problem that characterizes the existence of one-way functions?" he said. "If it does, that's the mother of all problems, and if you have a way to solve that problem, you can break all purported one-way functions. And if you don't know how to solve that problem, you can actually get secure cryptography."
Meanwhile, mathematicians in the 1960s identified what's known as Kolmogorov Complexity, which refers to quantifying the amount of randomness or pattern of a string of numbers. The Kolmogorov Complexity of a string of numbers is defined as the length of the shortest computer program that can generate the string; for some strings, such as 121212121212121212121212121212, there is a short program that generates it -- alternate 1s and 2s. But for more complicated and apparently random strings of numbers, such as 37539017332840393452954329, there may not exist a program that is shorter than the length of the string itself.
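Kolmogorov Complexity itself is uncomputable, but ordinary compression gives a crude stand-in that shows the contrast between the two strings above: the patterned string shrinks, while the irregular one barely does. A sketch using zlib (compressed length is only a rough proxy, not the true Kolmogorov Complexity):

```python
import zlib

patterned = "12" * 15                      # "1212...12": a short rule generates it
irregular = "37539017332840393452954329"   # no obvious generating rule

for s in (patterned, irregular):
    # Compare each string's length with its compressed length.
    print(len(s), len(zlib.compress(s.encode())))
```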
The problem has long interested mathematicians and computer scientists, including Juris Hartmanis, professor emeritus of computer science and engineering. Because the computer program attempting to generate the number could take millions or even billions of years, researchers in the Soviet Union in the 1960s, as well as Hartmanis and others in the 1980s, developed the time-bounded Kolmogorov Complexity -- the length of the shortest program that can output a string of numbers in a certain amount of time.
In the paper, Pass and doctoral student Yanyi Liu showed that if computing time-bounded Kolmogorov Complexity is hard, then one-way functions exist.
Although their finding is theoretical, it has potential implications across cryptography, including internet security.
"If you can come up with an algorithm to solve the time-bounded Kolmogorov complexity problem, then you can break all crypto, all encryption schemes, all digital signatures," Pass said. "However, if no efficient algorithm exists to solve this problem, you can get a one-way function, and therefore you can get secure encryption and digital signatures and so forth."
Read more at Science Daily
The question has been central to cryptography for thousands of years, and lies at the heart of efforts to secure private information on the internet. In a new paper, Cornell Tech researchers identified a problem that holds the key to whether all encryption can be broken -- as well as a surprising connection to a mathematical concept that aims to define and measure randomness.
"Our result not only shows that cryptography has a natural 'mother' problem, it also shows a deep connection between two quite separate areas of mathematics and computer science -- cryptography and algorithmic information theory," said Rafael Pass, professor of computer science at Cornell Tech.
Pass is co-author of "On One-Way Functions and Kolmogorov Complexity," which will be presented at the IEEE Symposium on Foundations of Computer Science, to be held Nov. 16-19 in Durham, North Carolina.
"The result," he said, "is that a natural computational problem introduced in the 1960s in the Soviet Union characterizes the feasibility of basic cryptography -- private-key encryption, digital signatures and authentication, for example."
For millennia, cryptography was considered a cycle: Someone invented a code, the code was effective until someone eventually broke it, and the code became ineffective. In the 1970s, researchers seeking a better theory of cryptography introduced the concept of the one-way function -- an easy task or problem in one direction that is impossible in the other.
For example, it's easy to light a match, but impossible to return a burning match to its unlit state without rearranging its atoms -- an immensely difficult task.
"The idea was, if we have such a one-way function, maybe that's a very good starting point for understanding cryptography," Pass said. "Encrypting the message is very easy. And if you have the key, you can also decrypt it. But someone who doesn't know the key should have to do the same thing as restoring a lit match."
But researchers have not been able to prove the existence of a one-way function. The most well-known candidate -- which is also the basis of the most commonly used encryption schemes on the internet -- relies on integer factorization. It's easy to multiply two random prime numbers -- for instance, 23 and 47 -- but significantly harder to find those two factors if only given their product, 1,081.
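The asymmetry is easy to see in code. Below is a minimal sketch using naive trial division; it is purely illustrative of the easy-one-way/hard-the-other shape of the problem, not of real cryptanalysis or real key sizes:

```python
def trial_factor(n: int) -> tuple[int, int]:
    """Find the smallest prime factor of n by naive trial division.

    Fine for a toy product like 1,081; hopeless for the hundreds-of-digits
    moduli used in real internet encryption -- which is exactly the point
    of a one-way function candidate.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

product = 23 * 47       # the "easy" direction: a single multiplication
print(product)          # -> 1081
print(trial_factor(product))  # the "hard" direction requires a search: -> (23, 47)
```

For 1,081 the search is instant; for a 600-digit product of two random primes, no known algorithm finishes in any realistic amount of time.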
No efficient factoring algorithm is believed to exist for large numbers, Pass said, though it remains possible that researchers simply have not found one yet.
"The central question we're addressing is: Does it exist? Is there some natural problem that characterizes the existence of one-way functions?" he said. "If it does, that's the mother of all problems, and if you have a way to solve that problem, you can break all purported one-way functions. And if you don't know how to solve that problem, you can actually get secure cryptography."
Meanwhile, mathematicians in the 1960s identified what's known as Kolmogorov Complexity, which quantifies the randomness or pattern of a string of numbers. The Kolmogorov Complexity of a string is defined as the length of the shortest computer program that can generate it; for some strings, such as 121212121212121212121212121212, there is a short program that generates it -- alternate 1s and 2s. But for more complicated and apparently random strings, such as 37539017332840393452954329, there may be no program shorter than the string itself.
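Kolmogorov Complexity itself is uncomputable, but the intuition is easy to demonstrate. As a rough, hedged illustration, the length of a string's compressed form is a crude computable upper-bound stand-in for "length of the shortest program that generates it": a patterned string compresses far below its own length, while a random-looking one does not.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    # zlib output length: a crude, computable upper-bound proxy for
    # Kolmogorov Complexity (which no algorithm can compute exactly).
    return len(zlib.compress(s.encode()))

patterned = "12" * 500                    # 1,000 chars, one short rule
rng = random.Random(0)                    # fixed seed for reproducibility
random_ish = "".join(rng.choice("0123456789") for _ in range(1000))

print(len(patterned), compressed_size(patterned))    # shrinks to a handful of bytes
print(len(random_ish), compressed_size(random_ish))  # far larger than the patterned case
```

Compression only ever gives an upper bound on the true complexity; the real quantity ranges over all possible programs, not just what one compressor can find.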
The problem has long interested mathematicians and computer scientists, including Juris Hartmanis, professor emeritus of computer science and engineering. Because the computer program attempting to generate the number could take millions or even billions of years, researchers in the Soviet Union in the 1960s, as well as Hartmanis and others in the 1980s, developed the time-bounded Kolmogorov Complexity -- the length of the shortest program that can output a string of numbers in a certain amount of time.
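The time-bounded variant can be sketched with a toy "programming language" invented here purely for illustration: a program is a pair (pattern, k) meaning "print pattern k times," its description length is the length of the pattern plus the digits of k, and running it costs one step per character printed. Real time-bounded Kolmogorov Complexity ranges over all programs for a universal machine; this mini-language only shows the shape of "shortest program within a time budget."

```python
def toy_kt(target: str, step_budget: int):
    """Length of the shortest toy program that prints `target` within
    `step_budget` steps, or None if no program fits the budget.

    Toy program = (pattern, k), meaning "print pattern k times".
    Description length = len(pattern) + len(str(k)).
    Running cost = one step per output character.
    """
    n = len(target)
    if n > step_budget:        # every program must print all n characters
        return None
    best = None
    for plen in range(1, n + 1):
        if n % plen == 0 and target[:plen] * (n // plen) == target:
            desc = plen + len(str(n // plen))
            best = desc if best is None else min(best, desc)
    return best

print(toy_kt("12" * 15, step_budget=1000))  # -> 4 (pattern "12", k=15)
print(toy_kt("12" * 15, step_budget=10))    # budget too small -> None
```

The patterned 30-character string has a 4-character description when time is plentiful, and no valid description at all when the time budget is too tight -- the core idea behind making the complexity measure time-bounded.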
In the paper, Pass and doctoral student Yanyi Liu showed that if computing time-bounded Kolmogorov Complexity is hard, then one-way functions exist.
Although their finding is theoretical, it has potential implications across cryptography, including internet security.
"If you can come up with an algorithm to solve the time-bounded Kolmogorov complexity problem, then you can break all crypto, all encryption schemes, all digital signatures," Pass said. "However, if no efficient algorithm exists to solve this problem, you can get a one-way function, and therefore you can get secure encryption and digital signatures and so forth."
Read more at Science Daily
How day- and night-biting mosquitoes respond differently to colors of light and time of day
In a new study, researchers found that night- versus day-biting species of mosquitoes are behaviorally attracted and repelled by different colors of light at different times of day. Mosquitoes are among the major disease vectors affecting humans and animals around the world, and the findings have important implications for using light to control them.
The University of California, Irvine School of Medicine-led team studied mosquito species that bite in the daytime (Aedes aegypti, aka the yellow fever mosquito) and those that bite at night (Anopheles coluzzii, a member of the Anopheles gambiae species complex, the major vector for malaria). They found distinct responses to ultraviolet light and other colors of light between the two species. Researchers also found light preference is dependent on the mosquito's sex and species, the time of day and the color of the light.
"Conventional wisdom has been that insects are non-specifically attracted to ultraviolet light, hence the widespread use of ultraviolet light 'bug zappers' for insect control. We find that day-biting mosquitoes are attracted to a wide range of light spectra during the daytime, whereas night-biting mosquitoes are strongly photophobic to short-wavelength light during the daytime," said principal investigator Todd C. Holmes, PhD, a professor in the Department of Physiology and Biophysics at the UCI School of Medicine. "Our results show that timing and light spectra are critical for species-specific light control of harmful mosquitoes."
The new study titled, "Circadian Regulation of Light-Evoked Attraction and Avoidance Behaviors in Daytime- versus Nighttime-Biting Mosquitoes," is published in Current Biology. Lisa S. Baik, a UCI School of Medicine graduate student researcher who recently completed her PhD work, is first author.
Mosquitoes pose widespread threats to humans and other animals as disease vectors. It is estimated that, historically, diseases spread by mosquitoes have contributed to the deaths of half of all humans who have ever lived. The new work shows that day-biting mosquitoes, particularly females that require blood meals for their fertilized eggs, are attracted to light during the day regardless of spectra. In contrast, night-biting mosquitoes specifically avoid ultraviolet (UV) and blue light during the day. Previous work in the Holmes lab using fruit flies (which are related to mosquitoes) has determined the light sensors and circadian molecular mechanisms for light-mediated attraction/avoidance behaviors. Accordingly, molecular disruption of the circadian clock severely interferes with light-evoked attraction and avoidance behaviors in mosquitoes. At present, light-based insect controls do not take into consideration the day versus night behavioral profiles that change with daily light and dark cycles.
Read more at Science Daily
Gene variations at birth reveal origins of inflammation and immune disease
A study published in the journal Nature Communications has pinpointed a number of areas of the human genome that may help explain the neonatal origins of chronic immune and inflammatory diseases of later life, including type 1 diabetes, rheumatoid arthritis and celiac disease.
The research, led by scientists at the Cambridge Baker Systems Genomics Initiative, identified several genes that appear to drive disease risk at birth, and which could be targeted for therapeutic intervention to stop these diseases in their tracks, well before symptoms occur.
Dr Michael Inouye, Munz Chair of Cardiovascular Prediction and Prevention at the Baker Institute and Principal Researcher at Cambridge University, said chronic immune and inflammatory diseases of adulthood often originated in early childhood, with an individual's genetic make-up causing changes to the function of different genes involved in disease.
For this study, the team collected cord blood samples from more than 100 Australian newborns as part of the Childhood Asthma Study, and investigated the role of genetic variation in DNA in changing how genes are expressed in the two main arms of the immune system.
The neonatal immune cells were exposed to certain stimuli, to see how the cells responded and to identify genetic variants that changed these responses.
"We looked for overlap between these genetic signals and those that are known to be associated with diseases where we know the immune system plays a role," Dr Inouye said.
"We then used statistical analysis to search for possible links between the cell response in newborns and immune diseases in adulthood."
Chronic immune diseases -- including type 1 diabetes, celiac disease and multiple sclerosis -- are caused by an overactive immune system and affect about 5 per cent of Australians. Allergies are immune-mediated too and affect one in five Australians, with hay fever, asthma, eczema, anaphylaxis and food allergies the most common. Inflammation and autoimmunity are also known to be driving factors in cardiovascular diseases, for example when an overactive immune system mistakenly attacks the heart.
Dr Qinqin Huang, lead author of the study and now a researcher at the Wellcome Sanger Institute in Cambridge, said the findings were unique in their scale, with thousands of genetic variants driving gene expression across different immune and inflammatory conditions, some of which had wide-ranging effects.
"Our study showed the potential roles of gene expression in disease development, which has helped us to better understand the link between DNA variation and disease risk," Dr Huang said.
"To date, similar studies have only been conducted in adult immune cells. Given the huge difference between neonatal and adult immunity, it is not surprising to see many signals that were unique to newborns."
The study is part of the Cambridge Baker Systems Genomics Initiative's wider work in developing polygenic risk scores to predict an individual's likelihood of developing particular chronic diseases. To date, the team have already developed potential methods to test for future risk of stroke and coronary artery disease.
"Disease is partly due to changes, both large and small, in our genome -- the DNA that we're born with and which is a major driving force in all our cells. That means, genomics can be used to estimate disease risk from a very early age," Dr Inouye said.
"Common diseases, such as type 2 diabetes and cardiovascular disease, tend to be polygenic -- influenced by a large number of genetic variants scattered throughout the genome, which combine with environmental and lifestyle factors. By using new genomic technology and supercomputing capabilities, we can sift through this DNA data and piece together the puzzles that underlie each disease.
"With so many diseases sharing a root in the immune system and inflammation, we can leverage this information to better understand where each disease has a molecular weak spot and to what extent these are shared among different diseases."
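At its core, a polygenic risk score of the kind described above is a weighted sum: for each variant, the number of risk alleles a person carries multiplied by that variant's estimated effect size. The sketch below is purely illustrative; the variant IDs and effect sizes are invented, and real scores combine thousands of variants with effects estimated from large association studies.

```python
def polygenic_risk_score(allele_counts: dict, effect_sizes: dict) -> float:
    """Toy polygenic risk score: sum over variants of
    (risk alleles carried: 0, 1 or 2) * (per-allele effect size).

    Everything here is a hypothetical illustration, not a clinical score.
    """
    return sum(count * effect_sizes[variant]
               for variant, count in allele_counts.items())

# Hypothetical per-allele effect sizes and one person's allele counts:
effects = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

print(round(polygenic_risk_score(person, effects), 2))  # -> 0.19
```

A higher score indicates a larger estimated genetic contribution to risk; converting it into an absolute probability additionally requires population baselines and environmental factors.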
Read more at Science Daily
Jul 27, 2020
Narcissists don't learn from their mistakes because they don't think they make any
When most people find that their actions have resulted in an undesirable outcome, they tend to rethink their decisions and ask, "What should I have done differently to avoid this outcome?"
When narcissists face the same situation, however, their refrain is, "No one could have seen this coming!"
In refusing to acknowledge that they have made a mistake, narcissists fail to learn from those mistakes, a recent study from Oregon State University -- Cascades found.
Counterfactual thinking is the mental process of imagining a different outcome or scenario from what actually occurred; analyzing past actions to see what one should have done differently is called "should counterfactual thinking."
All of us engage in some level of self-protective thinking, said study author Satoris Howes, a researcher at OSU-Cascades with the OSU College of Business. We tend to attribute success to our own efforts, but blame our failures on outside forces -- while often blaming other people's failure on their own deficiencies.
"But narcissists do this way more because they think they're better than others," Howes said. "They don't take advice from other people; they don't trust others' opinions. ... You can flat-out ask, 'What should you have done differently?' And it might be, 'Nothing, it turned out; it was good.'"
Narcissism is typically defined as a belief in one's superiority and entitlement, with narcissists believing they are better and more deserving than others.
The study, published recently in the Journal of Management, consisted of four variations on the same experiment with four different participant groups, including students, employees and managers with significant experience in hiring. One of the four was conducted in Chile with Spanish-speaking participants.
Participants first took a test that ranked their narcissism by having them choose among pairs of statements ("I think I am a special person" versus "I am no better or worse than most people"). In the first of the four variations, they then read the qualifications of hypothetical job candidates and had to choose whom to hire. After choosing, they were given details about how this hypothetical employee fared in the job, and were assessed regarding the extent they engaged in "should counterfactual thinking" about whether they made the right decision.
The four variations employed different methods to analyze how counterfactual thinking was affected by hindsight bias, which is the tendency to exaggerate in hindsight what one actually knew in foresight. The researchers cite the example of President Donald Trump saying in 2004 that he "predicted the Iraq war better than anybody."
The authors note that prior research has shown that hindsight bias is often reversed as a form of self-protection when a prediction proves to be inaccurate -- e.g., Trump saying in 2017 that "No one knew health care could be so complicated" after failing to put forth a successful alternative to the Affordable Care Act.
In the OSU study, researchers found that when narcissists predicted an outcome correctly, they felt it was more foreseeable than non-narcissists did ("I knew it all along"); and when they predicted incorrectly, they felt the outcome was less foreseeable than non-narcissists did ("Nobody could have guessed").
Either way, the narcissists didn't feel they needed to do something differently or engage in self-critical thinking that might have positive effects on future decisions.
"They're falling prey to the hindsight bias, and they're not learning from it when they make mistakes. And when they get things right, they're still not learning," Howes said.
Narcissists often rise in the ranks within organizations because they exude total confidence, take credit for the successes of others and deflect blame from themselves when something goes wrong, Howes said.
However, she said, over time this can be damaging to the organization, both because of low morale of employees who work for the narcissist and because of the narcissist's continuing poor decisions.
To avoid the trap of hindsight bias, Howes said individuals should set aside time for reflection and review after a decision, even if the outcome is positive. Whether the decision was favorable or unfavorable, they should ask themselves what they should have done differently. And because narcissists don't engage in this process, Howes said it would be wise to have advisory panels provide checks and balances when narcissists have decision-making authority.
Read more at Science Daily
How the zebrafish got its stripes
Animal patterns -- the stripes, spots and rosettes seen in the wild -- are a source of endless fascination, and now researchers at the University of Bath have developed a robust mathematical model to explain how one important species, the zebrafish, develops its stripes.
In the animal kingdom, the arrangement of skin pigment cells starts during the embryonic stage of development, making pattern formation an area of keen interest not only for a lay audience but also for scientists -- in particular, developmental biologists and mathematicians.
Zebrafish are invaluable for studying human disease. These humble freshwater minnows may seem to have little in common with mammals but in fact they show many genetic similarities to our species and boast a similar list of physical characteristics (including most major organs).
Zebrafish also provide fundamental insights into the complex, and often wondrous, processes that underpin biology. Studying their striking appearance may, in time, be relevant to medicine, since pattern formation is an important general feature of organ development. Therefore, a better understanding of pigment pattern formation might give us insights into diseases caused by disruption to cell arrangements within organs.
The new mathematical model devised in Bath paves the way for further explorations into pigment patterning systems, and their similarity across different species. Pigmentation in zebrafish is an example of an emergent phenomenon -- one in which individuals (cells in this case), all acting according to their own local rules, can self-organise to form an ordered pattern at a scale much larger than one might expect. Other examples of emergent phenomena in biology include the flocking of starlings and the synchronised swimming seen in schools of fish.
Dr Kit Yates, the mathematician from Bath who led the study, said: "It's fascinating to think that these different pigment cells, all acting without coordinated centralised control, can reliably produce the striped patterns we see in zebrafish. Our modelling highlights the local rules that these cells use to interact with each other in order to generate these patterns robustly."
"Why is it important for us to find a correct mathematical model to explain the stripes on zebrafish?" asks Professor Robert Kelsh, co-author of the study. "Partly, because pigment patterns are interesting and beautiful in their own right. But also because these stripes are an example of a key developmental process. If we can understand what's going on in the pattern development of a fish embryo, we may be able to gain deeper insight into the complex choreography of cells within embryos more generally."
The stripes of an adult 'wild type' zebrafish are formed from pigment-containing cells called chromatophores. There are three different types of chromatophore in the fish, and as the animal develops, these pigment cells shift around on the animal's surface, interacting with one another and self-organising into the stripy pattern for which the fish are named. Occasionally, mutations appear, changing how the cells interact with each other during pattern development, resulting in spotty, leopard-skin or maze-like labyrinthine markings.
Scientists know a lot about the biological interactions needed for the self-organisation of a zebrafish's pigment cells, but there has been some uncertainty over whether these interactions offer a comprehensive explanation for how these patterns form. To test the biological theories, the Bath team developed a mathematical model that incorporated the three cell types and all their known interactions. The model has proven successful, predicting the pattern development of both wild type and mutant fish.
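The flavour of self-organisation from purely local rules can be sketched with a generic local-activation/lateral-inhibition cellular automaton, a textbook-style toy in which nearby pigmented cells promote pigment and more distant ones suppress it. To be clear, this is NOT the Bath team's model -- their model tracks three interacting chromatophore types with biologically measured rules -- and the parameters below are arbitrary choices for illustration.

```python
import random

def pigment_automaton(n=120, r_act=2, r_inh=6, w_inh=0.75, steps=40, seed=2):
    """Generic local-activation / lateral-inhibition automaton on a ring
    of n cells (1 = pigmented, 0 = not). Each cell counts pigmented
    neighbours: those within r_act promote pigment; those in the annulus
    out to r_inh suppress it, weighted by w_inh. Every cell follows only
    this local rule -- any large-scale pattern that appears is emergent.
    """
    rng = random.Random(seed)
    cells = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        nxt = []
        for i in range(n):
            near = sum(cells[(i + d) % n] for d in range(-r_act, r_act + 1))
            far = (sum(cells[(i + d) % n] for d in range(-r_inh, r_inh + 1))
                   - near)
            nxt.append(1 if near - w_inh * far > 0 else 0)
        cells = nxt
    return cells

pattern = pigment_automaton()
print("".join("#" if c else "." for c in pattern))  # '#' = pigmented cell
```

Varying the activation and inhibition ranges changes the character of the output, loosely mirroring how mutations that alter cell-cell interactions change the fish's markings.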
Read more at Science Daily
In the animal kingdom, the arrangement of skin pigment cells starts during the embryonic stage of development, making pattern formation an area of keen interest not only for a lay audience but also for scientists -- in particular, developmental biologists and mathematicians.
Zebrafish are invaluable for studying human disease. These humble freshwater minnows may seem to have little in common with mammals but in fact they show many genetic similarities to our species and boast a similar list of physical characteristics (including most major organs).
Zebrafish also provide fundamental insights into the complex, and often wondrous, processes that underpin biology. Studying their striking appearance may, in time, be relevant to medicine, since pattern formation is an important general feature of organ development. therefore, a better understanding of pigment pattern formation might give us insights into diseases caused by disruption to cell arrangements within organs.
The new mathematical model devised in Bath paves the way for further explorations into pigment patterning systems, and their similarity across different species. Pigmentation in zebrafish is an example of an emergent phenomenon -- one in which individuals (cells in this case), all acting according to their own local rules, can self-organise to form an ordered pattern at a scale much larger than one might expect. Other examples of emergent phenomena in biology include the flocking of starlings and the synchronised swimming seen in schools of fish.
Dr Kit Yates, the mathematician from Bath who led the study, said: "It's fascinating to think that these different pigment cells, all acting without coordinated centralised control, can reliably produce the striped patterns we see in zebrafish. Our modelling highlights the local rules that these cells use to interact with each other in order to generate these patterns robustly."
"Why is it important for us to find a correct mathematical model to explain the stripes on zebrafish?" asks Professor Robert Kelsh, co-author of the study. "Partly, because pigment patterns are interesting and beautiful in their own right. But also because these stripes are an example of a key developmental process. If we can understand what's going on in the pattern development of a fish embryo, we may be able to gain deeper insight into the complex choreography of cells within embryos more generally."
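The idea of cells acting on purely local rules and still producing large-scale stripes can be sketched in a few lines of code. The toy model below is a generic activation-inhibition cellular automaton, not the Bath team's three-cell-type model: each cell in a 1D ring is "pigmented" or "unpigmented", is reinforced by like cells nearby, and is suppressed by cells slightly further away. All parameter values are illustrative assumptions.

```python
import random

def stripe_pattern(n=120, act_r=2, inh_r=6, inh_w=0.35, steps=40, seed=1):
    """Evolve a 1D ring of cells (+1 pigmented, -1 unpigmented) under a
    purely local rule: short-range activation by like cells, weaker
    longer-range inhibition. A generic illustration of self-organisation,
    not the Bath zebrafish model."""
    random.seed(seed)
    cells = [random.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # Sum over the close neighbourhood and the wider one.
            near = sum(cells[(i + d) % n] for d in range(-act_r, act_r + 1))
            far = sum(cells[(i + d) % n] for d in range(-inh_r, inh_r + 1))
            # Activation from close cells minus inhibition from the annulus.
            field = near - inh_w * (far - near)
            new.append(1 if field >= 0 else -1)
        cells = new
    return cells

pattern = stripe_pattern()
print("".join("#" if c == 1 else "." for c in pattern))
```

With short-range activation stronger than the diffuse inhibition, random initial states typically coarsen into alternating bands whose width is set by the inhibition radius, which is the "order at a scale much larger than one might expect" the article describes.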
New approach refines Hubble's constant and the age of the universe
Using known distances of 50 galaxies from Earth to refine calculations of Hubble's constant, a research team led by a University of Oregon astronomer estimates the age of the universe at 12.6 billion years.
Approaches to date the Big Bang, which gave birth to the universe, rely on mathematics and computational modeling, using distance estimates of the oldest stars, the behavior of galaxies and the rate of the universe's expansion. The idea is to compute how long it would take all objects to return to the beginning.
A key calculation for dating is Hubble's constant, named after Edwin Hubble, who first calculated the universe's expansion rate in 1929. Another recent technique uses observations of leftover radiation from the Big Bang: it maps bumps and wiggles in spacetime -- the cosmic microwave background, or CMB -- which reflect conditions in the early universe as set by Hubble's constant.
However, the methods reach different conclusions, said James Schombert, a professor of physics at the UO. In a paper published July 17 in the Astronomical Journal, he and colleagues unveil a new approach that recalibrates a distance-measuring tool known as the baryonic Tully-Fisher relation independently of Hubble's constant.
"The distance scale problem, as it is known, is incredibly difficult because the distances to galaxies are vast and the signposts for their distances are faint and hard to calibrate," Schombert said.
Schombert's team recalculated the Tully-Fisher approach, using accurately defined distances in a linear computation of the 50 galaxies as guides for measuring the distances of 95 other galaxies. The universe, he noted, is ruled by a series of mathematical patterns expressed in equations. The new approach more accurately accounts for the mass and rotational curves of galaxies to turn those equations into numbers like age and expansion rate.
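The Tully-Fisher idea can be illustrated with a short sketch: a galaxy's rotation speed predicts its intrinsic brightness, and comparing that to how bright it appears gives its distance via the distance modulus. The slope and zero point below are placeholder values for illustration only, not the calibration from Schombert's paper.

```python
import math

def absolute_mag_btfr(v_rot_kms, slope=-9.5, zero_point=-20.0):
    """Illustrative Tully-Fisher-style calibration: faster-rotating
    galaxies are intrinsically brighter (more negative magnitude).
    Slope and zero point are hypothetical, not the paper's values."""
    return slope * (math.log10(v_rot_kms) - 2.0) + zero_point

def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5 log10(d / 10 pc)."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6  # parsecs -> megaparsecs

# With this toy calibration, a galaxy rotating at 100 km/s has M = -20;
# if it appears at magnitude 10, the modulus m - M = 30 puts it at 10 Mpc.
print(distance_mpc(10.0, absolute_mag_btfr(100.0)))
```

The point of the recalibration described above is to pin down that magnitude-velocity relation using galaxies with accurately known distances, so that the relation itself no longer depends on an assumed value of Hubble's constant.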
His team's approach determines Hubble's constant -- the universe's expansion rate -- at 75.1 kilometers per second per megaparsec, give or take 2.3. A megaparsec, a common unit of space-related measurements, is equal to one million parsecs; a parsec is about 3.3 light years.
All Hubble's constant values lower than 70, his team wrote, can be ruled out with 95 percent confidence.
Measuring techniques used over the past 50 years, Schombert said, have set the value at 75, but the CMB method computes a rate of 67. The CMB technique, while using different assumptions and computer simulations, should still arrive at the same estimate, he said.
"The tension in the field occurs from the fact that it does not," Schombert said. "This difference is well outside the observational errors and produced a great deal of friction in the cosmological community."
Calculations drawn from observations of NASA's Wilkinson Microwave Anisotropy Probe in 2013 put the age of the universe at 13.77 billion years, which, for the moment, represents the standard model of Big Bang cosmology. The differing Hubble's constant values from the various techniques generally estimate the universe's age at between 12 billion and 14.5 billion years.
The new study, based in part on observations made with the Spitzer Space Telescope, adds a new element to how Hubble's constant can be calculated, introducing a purely empirical method that uses direct observations to determine the distance to galaxies, Schombert said.
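The link between expansion rate and age can be shown with the simplest possible estimate, the "Hubble time" 1/H0: how long objects receding at today's rate would take to converge back to a single point. This is only a first-order sketch; the published ages (12.6 and 13.77 billion years) also fold in how the expansion rate has changed over cosmic history.

```python
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SECONDS_PER_GYR = 3.1557e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Naive age estimate 1/H0, in billions of years, assuming the
    universe has always expanded at today's rate."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # convert (km/s)/Mpc to 1/s
    return (1.0 / h0_per_sec) / SECONDS_PER_GYR

print(round(hubble_time_gyr(75.1), 1))  # the team's measured value
print(round(hubble_time_gyr(67.0), 1))  # the CMB-based value
```

Even this crude estimate reproduces the tension the article describes: H0 = 75.1 gives roughly 13 billion years, while 67 gives closer to 14.6, consistent with the 12 to 14.5 billion year spread quoted above.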
Read more at Science Daily
Life in the pits: Scientists identify key enzyme behind body odor
Scientists have discovered a unique enzyme responsible for the pungent characteristic smell we call body odour, or BO.
Researchers from the University of York have previously shown that only a few bacteria in your armpit are the real culprits behind BO. Now the same team, in collaboration with Unilever scientists, has gone a step further to discover a unique "BO enzyme" found only within these bacteria and responsible for the characteristic armpit odour.
This new research highlights how particular bacteria have evolved a specialised enzyme to produce some of the key molecules we recognise as BO.
Co-first author Dr Michelle Rudden from the group of Prof. Gavin Thomas in the University of York's Department of Biology, said: "Solving the structure of this 'BO enzyme' has allowed us to pinpoint the molecular step inside certain bacteria that makes the odour molecules. This is a key advancement in understanding how body odour works, and will enable the development of targeted inhibitors that stop BO production at source without disrupting the armpit microbiome."
Your armpit hosts a diverse community of bacteria that is part of your natural skin microbiome. This research highlights Staphylococcus hominis as one of the main microbes behind body odour.
Furthermore, the researchers say that this "BO enzyme" was present in S. hominis long before the emergence of Homo sapiens as a species, suggesting that body odour existed prior to the evolution of modern humans, and may have had an important role in societal communication among ancestral primates.
This research represents an important discovery for Unilever R&D, made possible by its long-standing academic-industry collaboration with the University of York. Unilever co-author Dr Gordon James said: "This research was a real eye-opener. It was fascinating to discover that a key odour-forming enzyme exists in only a select few armpit bacteria -- and evolved there tens of millions of years ago."
From Science Daily