Asteroid strikes upset the environment and leave behind elemental clues. Now, University of Tsukuba researchers have linked elements that are enriched in the Cretaceous-Paleogene (KPg) boundary clays from Stevns Klint, Denmark, to the impact of the asteroid that produced the Chicxulub crater on the Yucatán Peninsula, Mexico. That impact coincides with one of the "Big Five" mass extinctions, which occurred at the KPg boundary at the end of the Cretaceous, 66 million years ago. The findings provide a better understanding of the processes that lead to enrichment of these types of elements -- an understanding that may be applied to other geological boundary events as well.
In a study published in the Geological Society of America Bulletin, the researchers analyzed the concentrations of certain elements within the KPg boundary clays -- such as copper, silver, and lead -- to determine which processes led to the element enrichment after the end-Cretaceous asteroid impact. Two enriched components were found in the boundary layer, each with distinctly different compositions of elements. One component was incorporated in pyrite (FeS2), whereas the other component was not related to pyrite.
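The study reports measured concentrations directly, but a standard way to quantify this kind of enrichment is to normalize each element to aluminium and to average crustal composition. The sketch below illustrates that calculation; the concentrations and the method choice are illustrative assumptions, not the paper's data.

```python
# Aluminium-normalized enrichment factor (EF), a common geochemical metric:
#   EF(X) = (X/Al)_sample / (X/Al)_crust;  EF >> 1 indicates enrichment.
# All concentrations below are hypothetical placeholders, NOT the paper's data.

CRUST = {"Cu": 28.0, "Ag": 0.053, "Pb": 17.0, "Al": 81500.0}   # ppm, approx. upper crust
sample = {"Cu": 120.0, "Ag": 0.9, "Pb": 40.0, "Al": 60000.0}   # ppm, hypothetical boundary clay

def enrichment_factor(element: str) -> float:
    """Enrichment of `element` relative to average crust, normalized to Al."""
    return (sample[element] / sample["Al"]) / (CRUST[element] / CRUST["Al"])

for el in ("Cu", "Ag", "Pb"):
    print(f"EF({el}) = {enrichment_factor(el):.1f}")
```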
"Since the enrichments of elements in these two components of the boundary clay were accompanied by enrichments of iridium," says first author Professor Teruyuki Maruoka, "both two components might have been induced by processes related to the asteroid impact."
Iron oxides/hydroxides acted as a carrier phase that supplied chalcophile elements (elements concentrated in sulfide minerals) to the KPg boundary clays on the sea floor. The vapor cloud of the asteroid impact produced iron oxides/hydroxides, which could have carried chalcophile elements in oceans and been the source of iron in the pyrite grains holding chalcophile elements.
"These could have been incorporated into the pyrite as impurities," explains Professor Maruoka. "Furthermore, both iron oxides/hydroxides and chalcophile elements could have been released to the environment from the rocks that were struck by the asteroid impact."
Additionally, organic matter in the oceans could have accumulated copper and silver. As such matter degraded on the sea floor, it could have released these elements, which then formed copper- or silver-enriched grains in the KPg boundary clays. This, in turn, may have led to the formation of discrete grains that differ from pyrite. Acid rain that occurred after the end-Cretaceous asteroid impact could have supplied elements such as copper, silver, and lead to the ocean, as these elements are typical constituents of acid-soluble sulfides and were enriched in the second chalcophile component not related to pyrite.
Read more at Science Daily
Feb 29, 2020
Betelgeuse: A massive star's dying breaths
Betelgeuse has been the center of significant media attention lately. The red supergiant is nearing the end of its life, and when a star over 10 times the mass of the Sun dies, it goes out in spectacular fashion. With its brightness recently dipping to the lowest point in the last hundred years, many space enthusiasts are excited that Betelgeuse may soon go supernova, exploding in a dazzling display that could be visible even in daylight.
While the famous star in Orion's shoulder will likely meet its demise within the next million years -- practically a couple of days in cosmic time -- scientists maintain that its dimming is due to the star pulsating. The phenomenon is relatively common among red supergiants, and Betelgeuse has been known for decades to belong to this group.
Coincidentally, researchers at UC Santa Barbara have already made predictions about the brightness of the supernova that would result when a pulsating star like Betelgeuse explodes.
Physics graduate student Jared Goldberg has published a study with Lars Bildsten, director of the campus's Kavli Institute for Theoretical Physics (KITP) and Gluck Professor of Physics, and KITP Senior Fellow Bill Paxton detailing how a star's pulsation will affect the ensuing explosion when it does reach the end. The paper appears in the Astrophysical Journal.
"We wanted to know what it looks like if a pulsating star explodes at different phases of pulsation," said Goldberg, a National Science Foundation graduate research fellow. "Earlier models are simpler because they don't include the time-dependent effects of pulsations."
When a star the size of Betelgeuse finally runs out of material to fuse in its center, it loses the outward pressure that kept it from collapsing under its own immense weight. The resultant core collapse happens in half a second, far faster than it takes the star's surface and puffy outer layers to notice.
As the iron core collapses, the atoms dissociate into electrons and protons. These combine to form neutrons and, in the process, release high-energy particles called neutrinos. Normally, neutrinos barely interact with other matter -- 100 trillion of them pass through your body every second without a single collision. Supernovae, however, are among the most powerful phenomena in the universe. The numbers and energies of the neutrinos produced in the core collapse are so immense that even though only a tiny fraction collides with the stellar material, it's generally more than enough to launch a shockwave capable of exploding the star.
That resulting explosion smacks into the star's outer layers with stupefying energy, creating a burst that can briefly outshine an entire galaxy. The explosion remains bright for around 100 days, since the radiation can escape only once ionized hydrogen recombines with lost electrons to become neutral again. This proceeds from the outside in, meaning that astronomers see deeper into the supernova as time goes on until finally the light from the center can escape. At that point, all that's left is the dim glow of radioactive fallout, which can continue to shine for years.
A supernova's characteristics vary with the star's mass, total explosion energy and, importantly, its radius. This means Betelgeuse's pulsation makes predicting how it will explode rather more complicated.
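The article doesn't spell out the dependence, but a common analytic approximation for Type II plateau supernovae (Popov-style one-zone scalings, assumed here rather than taken from the Goldberg et al. paper) makes the radius dependence concrete: luminosity grows roughly as R^(2/3) and plateau duration as R^(1/6).

```python
# Toy Popov-style plateau scalings (an assumed textbook approximation, not
# the Goldberg et al. model):
#   L   ~ E^(5/6) * M^(-1/2) * R^(2/3)
#   t_p ~ E^(-1/6) * M^(1/2) * R^(1/6)
# Working relative to a fiducial progenitor makes all prefactors cancel.

def relative_plateau(E: float = 1.0, M: float = 1.0, R: float = 1.0):
    """Plateau luminosity and duration relative to a fiducial (E, M, R)."""
    L = E ** (5 / 6) * M ** (-1 / 2) * R ** (2 / 3)
    t = E ** (-1 / 6) * M ** (1 / 2) * R ** (1 / 6)
    return L, t

# A pulsating star caught while expanded (radius 20% larger) should explode
# into a somewhat brighter, slightly longer supernova than when contracted.
for label, R in [("contracted", 0.8), ("static", 1.0), ("expanded", 1.2)]:
    L, t = relative_plateau(R=R)
    print(f"{label:10s}: L = {L:.2f}x, t_p = {t:.2f}x fiducial")
```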
The researchers found that if the entire star is pulsating in unison -- breathing in and out, if you will -- the supernova will behave as though Betelgeuse were a static star with a given radius. However, different layers of the star can oscillate opposite each other: the outer layers expand while the middle layers contract, and vice versa.
For the simple pulsation case, the team's model yielded similar results to the models that didn't account for pulsation. "It just looks like a supernova from a bigger star or a smaller star at different points in the pulsation," Goldberg explained. "It's when you start considering pulsations that are more complicated, where there's stuff moving in at the same time as stuff moving out -- then our model actually does produce noticeable differences," he said.
In these cases, the researchers discovered that as light leaks out from progressively deeper layers of the explosion, the emissions would appear as though they were the result of supernovae from different sized stars.
"Light from the part of the star that is compressed is fainter," Goldberg explained, "just as we would expect from a more compact, non-pulsating star." Meanwhile, light from parts of the star that were expanding at the time would appear brighter, as though it came from a larger, non-pulsating star.
Read more at Science Daily
Why is there any matter in the universe at all? New study sheds light
Scientists at the University of Sussex have measured a property of the neutron -- a fundamental particle in the universe -- more precisely than ever before. Their research is part of an investigation into why there is matter left over in the universe, that is, why all the antimatter created in the Big Bang didn't just cancel out the matter.
The team -- which included the Science and Technology Facilities Council's (STFC) Rutherford Appleton Laboratory in the UK, the Paul Scherrer Institute (PSI) in Switzerland, and a number of other institutions -- was looking into whether or not the neutron acts like an "electric compass." Neutrons are believed to be slightly asymmetrical in shape, being slightly positive at one end and slightly negative at the other -- a bit like the electrical equivalent of a bar magnet. This is the so-called "electric dipole moment" (EDM), and is what the team was looking for.
This is an important piece of the puzzle in the mystery of why matter remains in the Universe, because scientific theories about why there is matter left over also predict that neutrons have the "electric compass" property, to a greater or lesser extent. Measuring it therefore helps scientists get closer to the truth about why matter remains.
The team of physicists found that the neutron has a significantly smaller EDM than predicted by various theories about why matter remains in the universe; this makes these theories less likely to be correct, so they have to be altered, or new theories found. In fact it's been said in the literature that over the years, these EDM measurements, considered as a set, have probably disproved more theories than any other experiment in the history of physics. The results are reported today, Friday 28 February 2020, in the journal Physical Review Letters.
Professor Philip Harris, Head of the School of Mathematical and Physical Sciences and leader of the EDM group at the University of Sussex, said:
"After more than two decades of work by researchers at the University of Sussex and elsewhere, a final result has emerged from an experiment designed to address one of the most profound problems in cosmology for the last fifty years: namely, the question of why the Universe contains so much more matter than antimatter, and, indeed, why it now contains any matter at all. Why didn't the antimatter cancel out all the matter? Why is there any matter left?
"The answer relates to a structural asymmetry that should appear in fundamental particles like neutrons. This is what we've been looking for. We've found that the "electric dipole moment" is smaller than previously believed. This helps us to rule out theories about why there is matter left over -- because the theories governing the two things are linked.
"We have set a new international standard for the sensitivity of this experiment. What we're searching for in the neutron -- the asymmetry which shows that it is positive at one end and negative at the other -- is incredibly tiny. Our experiment was able to measure this in such detail that if the asymmetry could be scaled up to the size of a football, then a football scaled up by the same amount would fill the visible Universe."
The experiment is an upgraded version of apparatus originally designed by researchers at the University of Sussex and the Rutherford Appleton Laboratory (RAL), and which has held the world sensitivity record continuously from 1999 until now.
Dr Maurits van der Grinten, from the neutron EDM group at the Rutherford Appleton Laboratory (RAL), said:
"The experiment combines various state of the art technologies that all need to perform simultaneously. We're pleased that the equipment, technology and expertise developed by scientists from RAL has contributed to the work to push the limit on this important parameter"
Dr Clark Griffith, Lecturer in Physics from the School of Mathematical and Physical Sciences at the University of Sussex, said:
"This experiment brings together techniques from atomic and low energy nuclear physics, including laser-based optical magnetometry and quantum-spin manipulation. By using these multi-disciplinary tools to measure the properties of the neutron extremely precisely, we are able to probe questions relevant to high-energy particle physics and the fundamental nature of the symmetries underlying the universe. "
50,000 measurements
Any electric dipole moment that a neutron may have is tiny, and so is extremely difficult to measure. Previous measurements by other researchers have borne this out. In particular, the team had to go to great lengths to keep the local magnetic field very constant during their latest measurement. For example, every truck that drove by on the road next to the institute disturbed the magnetic field on a scale that would have been significant for the experiment, so this effect had to be compensated for during the measurement.
Also, the number of neutrons observed needed to be large enough to provide a chance to measure the electric dipole moment. The measurements ran over a period of two years. So-called ultracold neutrons, that is, neutrons with a comparatively slow speed, were measured. Every 300 seconds, a bunch of more than 10,000 neutrons was directed to the experiment and examined in detail. The researchers measured a total of 50,000 such bunches.
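Taking the quoted numbers at face value, a quick tally shows the scale of the campaign (simple arithmetic, not figures from the paper):

```python
# Quick arithmetic on the quoted campaign numbers.
bunches = 50_000
neutrons_per_bunch = 10_000        # "more than 10,000" -- use as a lower bound
cycle_seconds = 300                # one bunch every 300 seconds

total_neutrons = bunches * neutrons_per_bunch
cycling_days = bunches * cycle_seconds / 86_400

print(f"total neutrons measured: >= {total_neutrons:.1e}")    # >= 5e8
print(f"accumulated cycle time:  ~{cycling_days:.0f} days")   # ~174 days
# Spread across the two-year campaign, roughly a quarter of all days were
# spent cycling neutron bunches through the apparatus.
```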
A new international standard is set
The researchers' latest results supported and enhanced those of their predecessors: a new international standard has been set. The size of the EDM is still too small to measure with the instruments that have been used up until now, so some theories that attempted to explain the excess of matter have become less likely. The mystery therefore remains, for the time being.
The next, more precise experiment is already being constructed at PSI. The PSI collaboration expects to start its next series of measurements by 2021.
Search for "new physics"
The new result was determined by a group of researchers at 18 institutes and universities in Europe and the USA on the basis of data collected at PSI's ultracold neutron source. The researchers collected measurement data there over a period of two years, evaluated it very carefully in two separate teams, and were then able to obtain a more accurate result than ever before.
The research project is part of the search for "new physics" that would go beyond the so-called Standard Model of Physics, which sets out the properties of all known particles. This is also a major goal of experiments at larger facilities such as the Large Hadron Collider (LHC) at CERN.
Read more at Science Daily
Feb 28, 2020
Tracking down the mystery of matter
Researchers at the Paul Scherrer Institute PSI have measured a property of the neutron more precisely than ever before. In the process they found out that the elementary particle has a significantly smaller electric dipole moment than was previously assumed. With that, it has also become less likely that this dipole moment can help to explain the origin of all matter in the universe. The researchers achieved this result using the ultracold neutron source at PSI. They report their results today in the journal Physical Review Letters.
The Big Bang created both the matter in the universe and the antimatter -- at least according to the established theory. Since the two annihilate each other, however, there must have been a surplus of matter, which has remained to this day. The cause of this excess of matter is one of the great mysteries of physics and astronomy. Researchers hope to find a clue to the underlying phenomenon with the help of neutrons, the electrically uncharged elementary building blocks of atoms. The assumption: If the neutron had a so-called electric dipole moment (abbreviated nEDM) with a measurable non-zero value, this could be due to the same physical principle that would also explain the excess of matter after the Big Bang.
50,000 measurements
The search for the nEDM can be expressed in everyday language as the question of whether or not the neutron is an electric compass. It has long been clear that the neutron is a magnetic compass and reacts to a magnetic field, or, in technical jargon: has a magnetic dipole moment. If in addition the neutron also had an electric dipole moment, its value would be very much less -- and thus much more difficult to measure. Previous measurements by other researchers have borne this out. Therefore, the researchers at PSI had to go to great lengths to keep the local magnetic field very constant during their latest measurement. Every truck that drove by on the road next to PSI disturbed the magnetic field on a scale that was relevant for the experiment, so this effect had to be calculated and removed from the experimental data.
Also, the number of neutrons observed needed to be large enough to provide a chance to measure the nEDM. The measurements at PSI therefore ran over a period of two years. So-called ultracold neutrons, that is, neutrons with a comparatively slow speed, were measured. Every 300 seconds, an 8-second-long bundle of over 10,000 neutrons was directed to the experiment and examined. The researchers measured a total of 50,000 such bundles.
"Even for PSI with its large research facilities, this was a fairly extensive study," says Philipp Schmidt-Wellenburg, a researcher on the nEDM project on the part of PSI. "But that is exactly what is needed these days if we are looking for physics beyond the Standard Model."
Search for "new physics"
The new result was determined by a group of researchers at 18 institutes and universities in Europe and the USA, amongst them the ETH Zurich, the University of Bern and the University of Fribourg. The data had been gathered at PSI's ultracold neutron source. The researchers had collected measurement data there over two years, evaluated it very carefully in two teams, and through that obtained a more accurate result than ever before.
The nEDM research project is part of the search for "new physics" that would go beyond the so-called Standard Model. This is also being sought at even larger facilities such as the Large Hadron Collider LHC at CERN. "The research at CERN is broad and generally searches for new particles and their properties," explains Schmidt-Wellenburg. "We on the other hand are going deep, because we are only looking at the properties of one particle, the neutron. In exchange, however, we achieve an accuracy in this detail that the LHC might only reach in 100 years."
"Ultimately," says Georg Bison, who like Schmidt-Wellenburg is a researcher in the Laboratory for Particle Physics at PSI, "various measurements on the cosmological scale show deviations from the Standard Model. In contrast, no one has yet been able to reproduce these results in the laboratory. This is one of the very big questions in modern physics, and that's what makes our work so exciting."
Even more precise measurements are planned
With their latest experiment, the researchers have confirmed previous laboratory results. "Our current result too yielded a value for nEDM that is too small to measure with the instruments that have been used up to now -- the value is too close to zero," says Schmidt-Wellenburg. "So it has become less likely that the neutron will help explain the excess of matter. But it still can't be completely ruled out. And in any case, science is interested in the exact value of the nEDM in order to find out if it can be used to discover new physics."
Therefore, the next, more precise measurement is already being planned. "When we started up the current source for ultracold neutrons here at PSI in 2010, we already knew that the rest of the experiment wouldn't quite do it justice. So we are currently building an appropriately larger experiment," explains Bison. The PSI researchers expect to start the next series of measurements of the nEDM by 2021 and, in turn, to surpass the current one in terms of accuracy.
Read more at Science Daily
How much does black carbon contribute to climate warming?
Researchers at Michigan Technological University and Brookhaven National Laboratory, along with partners at other universities, industry, and national labs, have determined that while the shape of particles containing black carbon does have some effect on atmospheric warming, it's important to account for the structural differences in soot particles, as well as how the particles interact with other organic and inorganic materials that coat black carbon as it travels through the atmosphere.
Published today in the Proceedings of the National Academy of Sciences, the article provides a framework that reconciles model simulations with laboratory and empirical observations, and that can be used to improve estimates of black carbon's impact on climate.
Every Black Carbon Particle is Unique
Black carbon's absorption of solar radiation is comparable to that of carbon dioxide. Yet black carbon only remains in the atmosphere for days to weeks, while carbon dioxide can remain in the atmosphere for hundreds of years.
For years, scientists' models have approximated black carbon particles as spheres that frequently become coated by other organic materials. The thought was that as the soot particles travel through the atmosphere, the coating has what is called a "lensing effect": the coating focuses light down onto the black carbon, causing increased radiation absorption. And while soot particles are indeed coated in organic materials, that coating is not uniform from particle to particle.
"When you take an image under the microscope, the particles never looks perfectly like a sphere with the same coating," said Claudio Mazzoleni, professor of physics at Michigan Tech and one of the article's co-authors. "If you do a numerical calculation about perfect spheres coated by a shell, a model will show an enhanced absorption of the black carbon particles by a factor of up to three."
Empirical studies of black carbon particles demonstrate that absorption is much less than models would suggest, calling into question the effectiveness of the model as well as our understanding of black carbon's climate forcing effect.
Research suggests that the organic material coating is not fully spherical; depending on how the organic materials cling to a black carbon particle, the resulting shape can cause the particle to act very differently even if it has the same amount of material as another, differently shaped soot particle. Even more important, the amount of coating can differ markedly from particle to particle. Both attributes decrease the expected absorption enhancement.
Laura Fierce, an associate atmospheric scientist at Brookhaven National Laboratory, applied the particle-resolved model to account for particle heterogeneity while modeling black carbon.
"Whereas most aerosol models simplify the representation of particle composition, the particle-resolved model tracks the composition of individual particles as they evolve in the atmosphere," Fierce said. "This model is uniquely suited to evaluate error resulting from common approximations applied in global-scale aerosol models."
Less Effect on Climate Warming Than We Thought
Essentially, the researchers have introduced into climate modeling the diversity of organic and inorganic coatings on particles and the non-uniform nature of the particles themselves. By combining an empirical model with laboratory measurements, the model predicted a much lower absorption enhancement for black carbon than previously thought. The updated modeling also brings the model's output much closer to what was measured in the field.
"People think black carbon has a very strong warming effect on the atmosphere, which depends on absorption," Mazzoleni said. "If you have larger absorption, it contributes to warming and has greater climate impact. To understand how much black carbon contributes to the warming of climate, we need to understand these details because they can make a difference."
Read more at Science Daily
Antarctic ice walls protect the climate
Ice formation in Antarctica
The ocean can store much more heat than the atmosphere. The deep sea around Antarctica stores thermal energy that is the equivalent of heating the air above the continent by 400 degrees.
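That equivalence is plausible from textbook numbers alone: per cubic metre and per degree, seawater stores several thousand times more heat than air. A back-of-envelope comparison (illustrative values, not the study's):

```python
# Volumetric heat-capacity comparison, rho * c_p, with textbook values
# (illustrative, not taken from the study).

rho_sea, cp_sea = 1025.0, 3990.0    # seawater: kg/m^3, J/(kg K)
rho_air, cp_air = 1.2, 1005.0       # near-surface air: kg/m^3, J/(kg K)

ratio = (rho_sea * cp_sea) / (rho_air * cp_air)
print(f"seawater stores ~{ratio:,.0f}x more heat per m^3 per kelvin")
# ~3,400x: a deep-ocean layer warmed by a fraction of a degree can hold the
# energy equivalent of hundreds of degrees of warming in the thin atmosphere
# above it, consistent with the article's 400-degree comparison.
```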
Now, a Swedish-led international research group has explored the physics behind the ocean currents close to the floating glaciers that surround the Antarctic coast.
"Current measurements indicate an increase in melting, particularly near the coast in some parts of Antarctica and Greenland. These increases can likely be linked to the warm, salty ocean currents that circulate on the continental shelf, melting the ice from below," says Anna Wåhlin, lead author of the study and professor of oceanography at the University of Gothenburg.
"What we found here is a crucial feedback process: the ice shelves are their own best protection against warm water intrusions. If the ice thins, more oceanic heat comes in and melts the ice shelf, which becomes even thinner etc. It is worrying, as the ice shelves are already thinning because of global air and ocean warming," says Céline Heuzé, climate researcher at the Department of Earth Sciences of Gothenburg University.
The stability of ice is a mystery
Inland Antarctic ice gradually moves towards the ocean. Despite the ice being so important, its stability remains a mystery -- as does the answer to what could make it melt faster.
Since the glaciers are difficult to access, researchers have been unable to find out much information about the active processes.
More knowledge has now been obtained from studying the measurement data collected from instruments that Anna Wåhlin and her researcher colleagues placed in the ocean around the Getz glacier in West Antarctica.
The ice's edge blocks warm seawater
Getz has a floating section that is approximately 300 to 800 metres thick, beneath which there is seawater that connects to the ocean beyond. The glacier culminates in a vertical edge, a wall of ice that continues 300-400 metres down into the ocean. Warm seawater flows beneath this edge, towards the continent and the deeper ice further south.
"Studying the measurement data from the instruments, we found that the ocean currents are blocked by the ice edge. This limits the extent to which the warm water can reach the continent. We have long been stumped in our attempts to establish a clear link between the transport of warm water up on the continental shelf and melting glaciers.
"Now, we understand that only a small amount of the current can make its way beneath the glacier. This means that around two-thirds of the thermal energy that travels up towards the continental shelf from the deep sea never reaches the ice."
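The heat transport being described is usually estimated with a standard bulk formula: density times heat capacity times the water's temperature excess above the in-situ freezing point, times flow speed and cross-sectional area. The sketch below uses that generic formula with plausible made-up numbers, not the study's measurements.

```python
# Generic ocean heat-transport estimate toward an ice-shelf cavity:
#   Q = rho * c_p * (T - T_freeze) * U * A
# All numbers are illustrative assumptions, not the paper's values.

rho = 1027.0          # seawater density, kg/m^3
c_p = 3990.0          # seawater specific heat, J/(kg K)
dT = 2.0              # warm inflow above the in-situ freezing point, K
U = 0.05              # inflow speed beneath the ice front, m/s
A = 300.0 * 50_000.0  # opening: 300 m tall x 50 km wide, m^2

Q = rho * c_p * dT * U * A
print(f"heat transport: ~{Q:.1e} W")   # ~6e12 W for these numbers
# If the ice edge blocks roughly two-thirds of this current, only about a
# third of the heat ever reaches the grounded ice, as the study describes.
```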
Can lead to better prognoses
The results of the studies have provided researchers with a greater understanding of how these glacier areas work.
"From the Getz glacier, we are receiving measurements of heat transport in the ocean that correspond with the melting ice being measured by satellites. This also means that the floating glaciers -- the ice fronts in particular -- are key areas that should be closely monitored. If the ice walls were to disappear, much greater levels of thermal energy would be released towards the ice on land.
"Consequently, we no longer expect to see a direct link between increasing westerly winds and growing levels of melting ice. Instead, the increased water levels can be caused by the processes that pump up warmer, heavier water to the continental shelf, for example as low-pressure systems move closer to the continent."
Read more at Science Daily
Astronomers detect biggest explosion in the history of the Universe
The blast came from a supermassive black hole at the centre of a galaxy hundreds of millions of light-years away.
It released five times more energy than the previous record holder.
Professor Melanie Johnston-Hollitt, from the Curtin University node of the International Centre for Radio Astronomy Research, said the event was extraordinarily energetic.
"We've seen outbursts in the centres of galaxies before but this one is really, really massive," she said.
"And we don't know why it's so big.
"But it happened very slowly -- like an explosion in slow motion that took place over hundreds of millions of years."
The explosion occurred in the Ophiuchus galaxy cluster, about 390 million light-years from Earth.
It was so powerful it punched a cavity in the cluster plasma -- the super-hot gas surrounding the black hole.
Lead author of the study Dr Simona Giacintucci, from the Naval Research Laboratory in the United States, said the blast was similar to the 1980 eruption of Mount St. Helens, which ripped the top off the mountain.
"The difference is that you could fit 15 Milky Way galaxies in a row into the crater this eruption punched into the cluster's hot gas," she said.
Professor Johnston-Hollitt said the cavity in the cluster plasma had been seen previously with X-ray telescopes.
But scientists initially dismissed the idea that it could have been caused by an energetic outburst, because it would have been too big.
"People were sceptical because the size of outburst," she said. "But it really is that. The Universe is a weird place."
The researchers only realised what they had discovered when they looked at the Ophiuchus galaxy cluster with radio telescopes.
"The radio data fit inside the X-rays like a hand in a glove," said co-author Dr Maxim Markevitch, from NASA's Goddard Space Flight Center.
"This is the clincher that tells us an eruption of unprecedented size occurred here."
The discovery was made using four telescopes: NASA's Chandra X-ray Observatory, ESA's XMM-Newton, the Murchison Widefield Array (MWA) in Western Australia and the Giant Metrewave Radio Telescope (GMRT) in India.
Professor Johnston-Hollitt, who is the director of the MWA and an expert in galaxy clusters, likened the finding to discovering the first dinosaur bones.
"It's a bit like archaeology," she said.
"We've been given the tools to dig deeper with low frequency radio telescopes so we should be able to find more outbursts like this now."
The finding underscores the importance of studying the Universe at different wavelengths, Professor Johnston-Hollitt said.
"Going back and doing a multi-wavelength study has really made the difference here," she said.
Professor Johnston-Hollitt said the finding is likely to be the first of many.
"We made this discovery with Phase 1 of the MWA, when the telescope had 2048 antennas pointed towards the sky," she said.
Read more at Science Daily
Feb 27, 2020
Turbulent times revealed on Asteroid 4 Vesta
Planetary scientists at Curtin University have shed some light on the tumultuous early days of the largely preserved protoplanet Asteroid 4 Vesta, the second largest asteroid in our Solar System.
Research lead Professor Fred Jourdan, from Curtin University's School of Earth and Planetary Sciences, said Vesta is of tremendous interest to scientists trying to understand more about what planets are made of, and how they evolved.
"Vesta is the only largely intact asteroid which shows complete differentiation with a metallic core, a silicate mantle and a thin basaltic crust, and it's also very small, with a diameter of only about 525 kilometres," Professor Jourdan said.
"In a sense it's like a baby planet, and therefore it is easier for scientists to understand it than say, a fully developed, large, rocky planet."
To give an idea of its size, at least three Vesta-sized asteroids could fit side by side across the state of New South Wales, Australia.
Vesta was visited by the NASA Dawn spacecraft in 2011, when it was observed that the asteroid had a more complex geological history than previously thought. Hoping to understand more about the asteroid, the Curtin research team analysed well-preserved samples of volcanic meteorites found in Antarctica that were identified as having fallen to Earth from Vesta.
"Using an argon-argon dating technique, we obtained a series of very precise ages for the meteorites, which gave us four very important pieces of new information about timelines on Vesta," Professor Jourdan said.
"Firstly, the data showed that Vesta was volcanically active for at least 30 million years after its original formation, which happened 4,565 million years ago. While this may seem short, it is in fact significantly longer than what most other numerical models predicted, and was unexpected for such a small asteroid.
"Considering that all the heat-providing radioactive elements such as aluminium 26 would have completely decayed by that time, our research suggests pockets of magmas must have survived on Vesta, and were potentially related to a slow-cooling partial magma ocean located inside the asteroid's crust."
Co-researcher Dr Trudi Kennedy, also from Curtin's School of Earth and Planetary Sciences, said the research also showed the timeframes when very large asteroid impacts on Vesta were carving craters ten or more kilometres deep into the asteroid's volcanically active crust.
"To put this into perspective, imagine a large asteroid smashing into the main volcanic island of Hawaii and excavating a crater 15 kilometres deep -- that gives you an idea of what tumultuous activity was happening on Vesta in the early days of our Solar System," Dr Kennedy said.
Scientists further explored the data to understand what was happening deeper in the asteroid by calculating how long it took for Vesta's deep crustal layer to cool down. Some of these rocks were located too deep in the crust to be affected by asteroid impacts, and yet, being relatively close to the mantle, they were strongly affected by the natural heat gradient of the protoplanet and were metamorphosed as a result.
"What makes this interesting is that our data further confirms the suggestion that the first flows of erupted lava on Vesta were buried deep into its crust by more recent lava flows, essentially layering them on top of each other. They were then 'cooked' by the heat of the protoplanet's mantle, modifying the rocks," Dr Kennedy said.
The team also concluded that the meteorites they analysed were excavated from Vesta during a large impact, possibly 3.5 billion years ago, and were agglomerated deep into a rubble pile asteroid, where they were protected from any subsequent impacts.
A rubble pile asteroid is formed when a group of ejected rocks assemble under their own gravity, creating an asteroid that is essentially a pile of rocks clumped together.
"This is very exciting for us because our new data brings lots of new information about the first 50 million years or so of Vesta's early history, which any future models will now have to take in to account," Dr Kennedy said.
Read more at Science Daily
Research lead Professor Fred Jourdan, from Curtin University's school of Earth and Planetary Sciences, said Vesta is of tremendous interest to scientists trying to understand more about what planets are made of, and how they evolved.
"Vesta is the only largely intact asteroid which shows complete differentiation with a metallic core, a silicate mantle and a thin basaltic crust, and it's also very small, with a diameter of only about 525 kilometres," Professor Jourdan said.
"In a sense it's like a baby planet, and therefore it is easier for scientists to understand it than say, a fully developed, large, rocky planet."
To give you an idea of its size, you could squeeze at least three Vesta-size asteroids side by side in the state of New South Wales, Australia.
Vesta was visited by the NASA Dawn spacecraft in 2011, when it was observed that the asteroid had a more complex geological history than previously thought. With the aim of hoping to understand more about the asteroid, the Curtin research team analysed well-preserved samples of volcanic meteorites found in Antarctica that were identified as having fallen to Earth from Vesta.
"Using an argon-argon dating technique, we obtained a series of very precise ages for the meteorites, which gave us four very important pieces of new information about timelines on Vesta," Professor Jourdan said.
"Firstly, the data showed that Vesta was volcanically active for at least 30 million years after its original formation, which happened 4,565 million years ago. While this may seem short, it is in fact significantly longer than what most other numerical models predicted, and was unexpected for such a small asteroid.
"Considering that all the heat-providing radioactive elements such as aluminium 26 would have completely decayed by that time, our research suggests pockets of magmas must have survived on Vesta, and were potentially related to a slow-cooling partial magma ocean located inside the asteroid's crust."
Co-researcher Dr Trudi Kennedy, also from Curtin's School of Earth and Planetary Sciences, said the research also showed the timeframes when very large impacts from asteroids striking Vesta were carving out craters of ten or more kilometres deep from the asteroid's volcanically active crust.
"To put this into perspective, imagine a large asteroid smashing into the main volcanic island of Hawaii and excavating a crater 15 kilometres deep -- that gives you an idea of what tumultuous activity was happening on Vesta in the early days of our Solar System," Dr Kennedy said.
Scientists further explored the data to understand what was happening deeper in the asteroid by calculating how long it took for Vesta's deep crustal layer to cool down. Some of these rocks were located too deep in the crust to be affected by asteroid impacts, and yet, being relatively close to the mantle, they were strongly affected by the natural heat gradient of the protoplanet and were metamorphosed as a result.
"What makes this interesting is that our data further confirms the suggestion that the first flows of erupted lava on Vesta were buried deep into its crust by more recent lava flows, essentially layering them on top of each other. They were then 'cooked' by the heat of the protoplanet's mantle, modifying the rocks," Dr Kennedy said.
The team also concluded that the meteorites they analysed were excavated from Vesta during a large impact, possibly 3.5 billion years ago, and were agglomerated deep into a rubble pile asteroid, where they were protected from any subsequent impacts.
A rubble pile asteroid is formed when a group of ejected rocks assemble under their own gravity, creating an asteroid that is essentially a pile of rocks clumped together.
"This is very exciting for us because our new data brings lots of new information about the first 50 million years or so of Vesta's early history, which any future models will now have to take in to account," Dr Kennedy said.
Read more at Science Daily
Scientists find link between genes and ability to exercise
A team of researchers have discovered a genetic mutation that reduces a patient's ability to exercise efficiently.
In a study published in The New England Journal of Medicine, a team including researchers from King's College London have found a link between a genetic mutation that affects cellular oxygen sensing and a patient's limited exercise capacity.
The team identified a patient who had a reduced rate of growth, persistent low blood sugar, a limited exercise capacity and a very high number of red blood cells.
The team carried out genetic and protein analysis of the patient, examined their respiratory physiology in simulated high altitude, measured their exercise capacity, and performed a series of metabolic tests.
The von Hippel-Lindau (VHL) gene is fundamental for cells to survive when oxygen availability is reduced. Following genetic analysis, an alteration in the VHL gene was identified and associated with impaired function of the patient's mitochondria, the powerhouses of the cell that use oxygen to fuel cellular life. This reduced mitochondrial efficiency limits the patient's aerobic exercise capacity compared to people without the mutation.
Dr Federico Formenti, School of Basic & Medical Biosciences, one of the leading authors of the study, comments: "The discovery of this mutation and the associated phenotype is exciting because it enables a deeper understanding of human physiology, especially in terms of how the human body senses and responds to reduced oxygen availability."
A new syndrome has been discovered that can alter the regulation of human metabolism and skeletal muscle function. This research lays the groundwork for studying new mutations that affect oxygen-sensing pathways, and for understanding how such mutations relate to the integrative function of the human body as a whole. Improving our understanding of these mechanisms may also contribute to the treatment of hypoxic conditions.
Read more at Science Daily
Job insecurity negatively affects your personality
New research shows that experiencing chronic job insecurity can change your personality for the worse.
The study, published in the Journal of Applied Psychology, found those exposed to job insecurity over more than four years became less emotionally stable, less agreeable, and less conscientious.
Report co-author Dr Lena Wang from RMIT University's School of Management said the study built on a growing evidence base about the negative consequences of job insecurity.
"Traditionally, we've thought about the short-term consequences of job insecurity -- that it hurts your well-being, physical health, sense of self-esteem," Wang said.
"But now we are looking at how that actually changes who you are as a person over time, a long-term consequence that you may not even be aware of."
The study used nationally representative data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey, drawing on answers about job security and personality from 1,046 employees over a nine-year period.
It applied a well-established personality framework known as the Big Five, which categorises human personality into five broad traits: emotional stability, agreeableness, conscientiousness, extraversion and openness.
The study results showed that long-term job insecurity negatively affected the first three traits, which relate to a person's tendency to reliably achieve goals, get along with others, and cope with stress.
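The paper's statistical models are more elaborate, but the basic design, comparing trait change across survey waves between insecure and secure workers, can be sketched with synthetic data. Everything below (the sample split, effect size, and noise levels) is invented for illustration and is not drawn from the HILDA Survey.

    import numpy as np

    # Synthetic illustration of a two-wave trait-change comparison.
    rng = np.random.default_rng(0)
    n = 1046                                     # matches the study's sample size
    insecure = rng.random(n) < 0.3               # hypothetical 30% chronically insecure
    trait_w1 = rng.normal(5.0, 1.0, n)           # e.g. emotional stability, wave 1
    trait_w2 = trait_w1 - 0.4 * insecure + rng.normal(0.0, 0.5, n)
    change = trait_w2 - trait_w1
    print("Mean trait change, insecure workers:", round(change[insecure].mean(), 2))
    print("Mean trait change, secure workers:  ", round(change[~insecure].mean(), 2))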
Wang said the results went against some assumptions about job insecurity.
"Some might believe that insecure work increases productivity because workers will work harder to keep their jobs, but our research suggests this may not be the case if job insecurity persists," Wang said.
"We found that those chronically exposed to job insecurity are in fact more likely to withdraw their effort and shy away from building strong, positive working relationships, which can undermine their productivity in the long run."
Previous research has shown that insecure work -- including labour hire practices, contract and casual work, and underemployment -- is on the rise in Australia and globally.
The HILDA data drew on responses from employees from a broad cross-section of professions and jobs, who were asked about how secure they perceived their jobs to be.
Study lead author Professor Chia-Huei Wu from Leeds University Business School said types of job insecurity might include short-term contracts or casual work, jobs threatened by automation, and positions that could be in line for a redundancy.
Importantly, said Wu, there are ways that employers can support workers who are feeling worried about their jobs.
"This is as much about perceived job insecurity as actual insecure contracts," Wu said.
"Some people simply feel daunted by the changing nature of their roles or fear they'll be replaced by automation.
"But while some existing jobs can be replaced by automation, new jobs will be created.
Read more at Science Daily
Large exoplanet could have the right conditions for life
A team from the University of Cambridge used the mass, radius, and atmospheric data of the exoplanet K2-18b and determined that it's possible for the planet to host liquid water at habitable conditions beneath its hydrogen-rich atmosphere. The results are reported in The Astrophysical Journal Letters.
The exoplanet K2-18b, 124 light-years away, is 2.6 times the radius and 8.6 times the mass of Earth, and orbits its star within the habitable zone, where temperatures could allow liquid water to exist. The planet was the subject of significant media coverage in the autumn of 2019, as two different teams reported detection of water vapour in its hydrogen-rich atmosphere. However, the extent of the atmosphere and the conditions of the interior underneath remained unknown.
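A quick bulk-density calculation shows why K2-18b is often pictured as a mini-Neptune rather than a scaled-up Earth. The sketch below uses only the radius and mass quoted above, plus standard Earth reference values.

    import math

    # Mean density of K2-18b from the quoted size (2.6 R_Earth) and mass (8.6 M_Earth).
    M_EARTH = 5.97e24                 # kg
    R_EARTH = 6.371e6                 # m
    mass = 8.6 * M_EARTH
    radius = 2.6 * R_EARTH
    density = mass / ((4.0 / 3.0) * math.pi * radius ** 3)
    print(f"Mean density: {density:.0f} kg/m^3")
    # ~2700 kg/m^3, roughly half Earth's ~5500 kg/m^3 -- too low for bare rock,
    # consistent with a substantial water layer and/or hydrogen envelope.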
"Water vapour has been detected in the atmospheres of a number of exoplanets but, even if the planet is in the habitable zone, that doesn't necessarily mean there are habitable conditions on the surface," said Dr Nikku Madhusudhan from Cambridge's Institute of Astronomy, who led the new research. "To establish the prospects for habitability, it is important to obtain a unified understanding of the interior and atmospheric conditions on the planet -- in particular, whether liquid water can exist beneath the atmosphere."
Given the large size of K2-18b, it has been suggested that it would be more like a smaller version of Neptune than a larger version of Earth. A 'mini-Neptune' is expected to have a significant hydrogen 'envelope' surrounding a layer of high-pressure water, with an inner core of rock and iron. If the hydrogen envelope is too thick, the temperature and pressure at the surface of the water layer beneath would be far too great to support life.
Now, Madhusudhan and his team have shown that despite the size of K2-18b, its hydrogen envelope is not necessarily too thick and the water layer could have the right conditions to support life. They used the existing observations of the atmosphere, as well as the mass and radius, to determine the composition and structure of both the atmosphere and interior using detailed numerical models and statistical methods to explain the data.
The researchers confirmed the atmosphere to be hydrogen-rich with a significant amount of water vapour. They also found that levels of other chemicals such as methane and ammonia were lower than expected for such an atmosphere. Whether these levels can be attributed to biological processes remains to be seen.
The team then used the atmospheric properties as boundary conditions for models of the planetary interior. They explored a wide range of models that could explain the atmospheric properties as well as the mass and radius of the planet. This allowed them to obtain the range of possible conditions in the interior, including the extent of the hydrogen envelope and the temperatures and pressures in the water layer.
"We wanted to know the thickness of the hydrogen envelope -- how deep the hydrogen goes," said co-author Matthew Nixon, a PhD student at the Institute of Astronomy. "While this is a question with multiple solutions, we've shown that you don't need much hydrogen to explain all the observations together."
The researchers found that the maximum extent of the hydrogen envelope allowed by the data is around 6% of the planet's mass, though most of the solutions require much less. The minimum amount of hydrogen is about one-millionth by mass, similar to the mass fraction of the Earth's atmosphere. In particular, a number of scenarios allow for an ocean world, with liquid water below the atmosphere at pressures and temperatures similar to those found in Earth's oceans.
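The comparison with Earth's atmosphere is easy to verify: dividing the atmosphere's total mass by the planet's mass gives roughly one part per million, the same order as the minimum hydrogen fraction quoted above. Standard reference values are used below.

    # Mass fraction of Earth's atmosphere relative to the whole planet.
    m_atmosphere = 5.1e18             # kg, total mass of Earth's atmosphere
    m_earth = 5.97e24                 # kg
    print(f"Atmospheric mass fraction: {m_atmosphere / m_earth:.1e}")
    # ~8.5e-07 -- about one-millionth, matching the minimum
    # hydrogen envelope the models allow for K2-18b.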
Read more at Science Daily
Feb 26, 2020
The force is strong in neutron stars
Most ordinary matter is held together by an invisible subatomic glue known as the strong nuclear force -- one of the four fundamental forces in nature, along with gravity, electromagnetism, and the weak force. The strong nuclear force is responsible for the push and pull between protons and neutrons in an atom's nucleus, which keeps an atom from collapsing in on itself.
In atomic nuclei, most protons and neutrons are far enough apart that physicists can accurately predict their interactions. However, these predictions are challenged when the subatomic particles are so close as to be practically on top of each other.
While such ultrashort-distance interactions are rare in most matter on Earth, they define the cores of neutron stars and other extremely dense astrophysical objects. Since scientists first began exploring nuclear physics, they have struggled to explain how the strong nuclear force plays out at such ultrashort distances.
Now physicists at MIT and elsewhere have for the first time characterized the strong nuclear force, and the interactions between protons and neutrons, at extremely short distances.
They performed an extensive data analysis on previous particle accelerator experiments, and found that as the distance between protons and neutrons becomes shorter, a surprising transition occurs in their interactions. Whereas at large distances the strong nuclear force acts primarily to attract a proton to a neutron, at very short distances the force becomes essentially indiscriminate: interactions can occur not just to attract a proton to a neutron, but also to repel, or push apart, pairs of neutrons.
"This is the first very detailed look at what happens to the strong nuclear force at very short distances," says Or Hen, assistant professor of physicist at MIT. "This has huge implications, primarily for neutron stars and also for the understanding of nuclear systems as a whole."
Hen and his colleagues have published their results in the journal Nature. His co-authors include first author Axel Schmidt PhD '16, a former graduate student and postdoc, along with graduate student Jackson Pybus, undergraduate student Adin Hrnjic and additional colleagues from MIT, the Hebrew University, Tel-Aviv University, Old Dominion University, and members of the CLAS Collaboration, a multi-institutional group of scientists involved with the CEBAF Large Accelerator Spectrometer (CLAS), a particle accelerator at Jefferson Laboratory in Newport News, Virginia.
Star drop snapshot
Ultra-short-distance interactions between protons and neutrons are rare in most atomic nuclei. Detecting them requires pummeling atoms with a huge number of extremely high-energy electrons, a fraction of which might have a chance of kicking out a pair of nucleons (protons or neutrons) moving at high momentum -- an indication that the particles must be interacting at extremely short distances.
"To do these experiments, you need insanely high-current particle accelerators," Hen says. "It's only recently where we have the detector capability, and understand the processes well enough to do this type of work."
Hen and his colleagues looked for the interactions by mining data previously collected by CLAS, a house-sized particle detector at Jefferson Laboratory; the JLab accelerator produces unprecedentedly high-intensity, high-energy beams of electrons. The CLAS detector was operational from 1988 to 2012, and the results of those experiments have since been available for researchers to comb for other phenomena buried in the data.
In their new study, the researchers analyzed a trove of data, amounting to some quadrillion electrons hitting atomic nuclei in the CLAS detector. The electron beam was aimed at foils made from carbon, lead, aluminum, and iron, each with atoms of varying ratios of protons to neutrons. When an electron collides with a proton or neutron in an atom, the energy at which it scatters away is proportional to the energy and momentum of the corresponding nucleon.
"If I know how hard I kicked something and how fast it came out, I can reconstruct the initial momentum of the thing that was kicked," Hen explains.
With this general approach, the team looked through the quadrillion electron collisions and managed to isolate and calculate the momentum of several hundred pairs of high-momentum nucleons. Hen likens these pairs to "neutron star droplets," as their momentum, and their inferred distance between each other, is similar to the extremely dense conditions in the core of a neutron star.
They treated each isolated pair as a "snapshot" and organized the several hundred snapshots along a momentum distribution. At the low end of this distribution, they observed a suppression of proton-proton pairs, indicating that at intermediately high momenta, and correspondingly short distances, the strong nuclear force acts mostly to attract protons to neutrons.
Further along the distribution, they observed a transition: There appeared to be more proton-proton and, by symmetry, neutron-neutron pairs, suggesting that, at higher momentum, or increasingly short distances, the strong nuclear force acts not just between protons and neutrons, but also between pairs of protons and pairs of neutrons. This pairing force is understood to be repulsive in nature, meaning that at short distances, neutrons interact by strongly repelling each other.
"This idea of a repulsive core in the strong nuclear force is something thrown around as this mythical thing that exists, but we don't know how to get there, like this portal from another realm," Schmidt says. "And now we have data where this transition is staring us in the face, and that was really surprising."
The researchers believe this transition in the strong nuclear force can help to better define the structure of a neutron star. Hen previously found evidence that in the outer core of neutron stars, neutrons mostly pair with protons through the strong attraction. With their new study, the researchers have found evidence that when particles are packed in much denser configurations and separated by shorter distances, the strong nuclear force creates a repulsive force between neutrons that, at a neutron star's core, helps keep the star from collapsing in on itself.
Less than a bag of quarks
The team made two additional discoveries. For one, their observations match the predictions of a surprisingly simple model describing the formation of short-ranged correlations due to the strong nuclear force. For another, against expectations, the core of a neutron star can be described strictly by the interactions between protons and neutrons, without needing to explicitly account for more complex interactions between the quarks and gluons that make up individual nucleons.
When the researchers compared their observations with several existing models of the strong nuclear force, they found a remarkable match with predictions from Argonne V18, a model developed by a research group at Argonne National Laboratory that considers 18 different ways nucleons may interact as they are separated by shorter and shorter distances.
This means that if scientists want to calculate properties of a neutron star, Hen says they can use this particular Argonne V18 model to accurately estimate the strong nuclear force interactions between pairs of nucleons in the core. The new data can also be used to benchmark alternate approaches to modeling the cores of neutron stars.
What the researchers found most exciting was that this same model, as it is written, describes the interaction of nucleons at extremely short distances, without explicitly taking into account quarks and gluons. Physicists had assumed that in extremely dense, chaotic environments such as neutron star cores, interactions between neutrons should give way to the more complex forces between quarks and gluons. Because the model does not take these more complex interactions into account, and because its predictions at short distances match the team's observations, Hen says it's likely that a neutron star's core can be described in a less complicated manner.
Read more at Science Daily
Ancient meteorite site on Earth could reveal new clues about Mars' past
Scientists have devised new analytical tools to break down the enigmatic history of Mars' atmosphere -- and whether life was once possible there.
A paper detailing the work was published today in the journal Science Advances. It could help astrobiologists understand the alkalinity, pH and nitrogen content of ancient waters on Mars, and by extension, the carbon dioxide composition of the planet's ancient atmosphere.
Mars of today is too cold to have liquid water on its surface, a requirement for hosting life as we know it.
"The question that drives our interests isn't whether there's life on present-day Mars," said Tim Lyons, UCR distinguished professor of biogeochemistry. "We are driven instead by asking whether there was life on Mars billions of years ago, which seems significantly more likely."
However, "Overwhelming evidence exists that Mars had liquid water oceans roughly 4 billion years ago," Lyons noted.
The central question astrobiologists ask is how that was possible. The red planet is farther from the sun than Earth is, and billions of years ago the sun generated less heat than it does today.
"To have made the planet warm enough for liquid surface water, its atmosphere would likely have needed an immense amount of greenhouse gas, carbon dioxide specifically," explained Chris Tino, a UCR graduate student and co-first-author of the paper along with Eva Stüeken, a lecturer at the University of St. Andrews in Scotland.
Since sampling Mars' atmosphere from billions of years ago to learn its carbon dioxide content is impossible, the team concluded that a site on Earth whose geology and chemistry bear similarities to the Martian surface might provide some of the missing pieces. They found it in southern Germany's Nordlinger Ries crater.
Formed roughly 15 million years ago by a meteorite strike, the Ries crater features layers of rocks and minerals that are better preserved than almost anywhere else on Earth.
The Mars 2020 rover will land in a similarly structured, well-preserved ancient crater. Both places featured liquid water in their distant past, making their chemical compositions comparable.
According to Tino, it's unlikely that ancient Mars had enough oxygen to have hosted complex life forms like humans or animals.
However, some microorganisms could have survived if ancient Martian water had a neutral pH and high alkalinity. Those conditions imply sufficient carbon dioxide in the atmosphere -- perhaps thousands of times more than what surrounds Earth today -- to warm the planet and make liquid water possible.
While pH measures the concentration of hydrogen ions in a solution, alkalinity is a measure dependent on several ions and how they interact to stabilize pH.
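The distinction is concrete in numbers. Here is a minimal sketch with hypothetical ion concentrations: pH is simply the negative log of the hydrogen-ion concentration, while a simplified carbonate alkalinity adds up the acid-neutralizing ions.

    import math

    # pH from hydrogen-ion concentration (mol/L).
    h_ion = 1e-9                              # illustrative alkaline water
    ph = -math.log10(h_ion)

    # Simplified total alkalinity: HCO3- + 2*CO3^2- + OH- - H+ (equivalents/L).
    hco3, co3, oh = 2e-3, 5e-4, 1e-5          # hypothetical concentrations
    alkalinity = hco3 + 2 * co3 + oh - h_ion
    print(f"pH = {ph:.1f}, alkalinity = {alkalinity:.2e} eq/L")
    # Two waters can share a pH yet differ sharply in alkalinity, which is
    # why both measures are needed to back out atmospheric carbon dioxide.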
"Ries crater rock samples have ratios of nitrogen isotopes that can best be explained by high pH," Stüeken said. "What's more, the minerals in the ancient sediments tell us that alkalinity was also very high."
However, Martian samples with mineral indicators for high alkalinity and nitrogen isotope data pointing to relatively low pH would demand extremely high levels of carbon dioxide in the past atmosphere.
The resulting carbon dioxide estimates could help solve the long-standing mystery of how an ancient Mars located so far from a faint early sun could have been warm enough for surface oceans and perhaps life. How such high levels could have been maintained and what might have lived beneath them remain important questions.
"Before this study, it wasn't clear that something as straightforward as nitrogen isotopes could be used to estimate the pH of ancient waters on Mars; pH is a key parameter in calculating the carbon dioxide in the atmosphere," Tino said.
Funding for this study came from the NASA Astrobiology Institute, where Lyons leads the Alternative Earths team based at UCR.
Included in the study were Gernot Arp of the Georg-August University of Göttingen and Dietmar Jung of the Bavarian State Office for the Environment.
When samples from NASA's Mars 2020 rover mission are brought back to Earth, they could be analyzed for their nitrogen isotope ratios. These data could confirm the team's suspicion that very high levels of carbon dioxide made liquid water possible and maybe even some forms of microbial life long ago.
Read more at Science Daily
Babies from bilingual homes switch attention faster
Babies born into bilingual homes change the focus of their attention more quickly and more frequently than babies in homes where only one language is spoken, according to new research published in the journal Royal Society Open Science.
The study, led by Anglia Ruskin University (ARU), used eye-tracking technology to record the gaze of 102 infants carrying out a variety of tasks.
The researchers chose to test babies aged between seven and nine months to rule out any benefits gained from being able to speak a second language, often referred to as the "bilingual advantage." Instead, the study focused on the effects of growing up hearing two or more languages.
When shown two pictures side by side, infants from bilingual homes shifted attention from one picture to another more frequently than infants from monolingual homes, suggesting these babies were exploring more of their environment.
The study also found that when a new picture appeared on the screen, babies from bilingual homes were 33% faster at redirecting their attention towards the new picture.
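As a simple reading of that figure (the latencies below are made up for illustration; only the 33% reduction comes from the study):

    # One way to read "33% faster": the redirection latency is cut by a third.
    baseline_latency_s = 0.9                  # hypothetical monolingual-home latency
    bilingual_latency_s = baseline_latency_s * (1 - 0.33)
    print(f"Bilingual-home latency: {bilingual_latency_s:.2f} s "
          f"(vs {baseline_latency_s:.2f} s baseline)")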
Lead author Dr Dean D'Souza, Senior Lecturer in Psychology at Anglia Ruskin University (ARU), said: "Bilingual environments may be more variable and unpredictable than monolingual environments -- and therefore more challenging to learn in.
"We know that babies can easily acquire multiple languages, so we wanted to investigate how they manage it. Our research suggests that babies in bilingual homes adapt to their more complex environment by seeking out additional information.
"Scanning their surroundings faster and more frequently might help the infants in a number of ways. For example, redirecting attention from a toy to a speaker's mouth could help infants to match ambiguous speech sounds with mouth movements."
The researchers are currently investigating whether faster and more frequent switching in infancy has cascading effects over developmental time, for example affecting behaviour in older children and adults.
From Science Daily
Unique non-oxygen breathing animal discovered
A study on the finding was published on February 25 in the Proceedings of the National Academy of Sciences by TAU researchers led by Prof. Dorothee Huchon of the School of Zoology at TAU's Faculty of Life Sciences and Steinhardt Museum of Natural History.
The tiny parasite Henneguya salminicola, which has fewer than 10 cells, lives in salmon muscle. As it evolved, the animal, a myxozoan relative of jellyfish and corals, gave up breathing and consuming oxygen to produce energy.
"Aerobic respiration was thought to be ubiquitous in animals, but now we confirmed that this is not the case," Prof. Huchon explains. "Our discovery shows that evolution can go in strange directions. Aerobic respiration is a major source of energy, and yet we found an animal that gave up this critical pathway."
Other organisms, such as some fungi, amoebas and ciliate lineages living in anaerobic environments, have lost the ability to breathe over time. The new study demonstrates that the same can happen to an animal -- possibly because the parasite happens to live in an anaerobic environment.
Its genome was sequenced, along with those of other myxozoan fish parasites, as part of research supported by the U.S.-Israel Binational Science Foundation and conducted with Prof. Paulyn Cartwright of the University of Kansas, and Prof. Jerri Bartholomew and Dr. Stephen Atkinson of Oregon State University.
The parasite's anaerobic nature was an accidental discovery. While assembling the Henneguya genome, Prof. Huchon found that it did not include a mitochondrial genome. Mitochondria are the powerhouses of the cell, where oxygen is captured to make energy, so their absence indicated that the animal was not breathing oxygen.
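The logic of that check can be sketched in a few lines: scan an assembly's gene annotations for the classic mitochondrial protein-coding genes and flag their absence. The marker list and the example annotation set below are placeholders, not the study's actual pipeline.

    # Minimal sketch: does an annotated assembly contain mitochondrial marker genes?
    MITO_MARKERS = {"cox1", "cox3", "cob", "nad1", "nad5", "atp6"}

    def has_mitochondrial_genome(annotated_genes):
        """True if any classic mitochondrial protein-coding gene is present."""
        return bool(MITO_MARKERS & {g.lower() for g in annotated_genes})

    assembly_genes = {"act1", "tub1", "hsp70"}    # hypothetical annotation results
    print("Mitochondrial genome detected:", has_mitochondrial_genome(assembly_genes))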
Until the new discovery, there was debate regarding the possibility that organisms belonging to the animal kingdom could survive in anaerobic environments. The assumption that all animals are breathing oxygen was based, among other things, on the fact that animals are multicellular, highly developed organisms, which first appeared on Earth when oxygen levels rose.
"It's not yet clear to us how the parasite generates energy," Prof. Huchon says. "It may be drawing it from the surrounding fish cells, or it may have a different type of respiration such as oxygen-free breathing, which typically characterizes anaerobic non-animal organisms."
According to Prof. Huchon, the discovery bears enormous significance for evolutionary research.
Read more at Science Daily
Feb 25, 2020
Human populations survived the Toba volcanic super-eruption 74,000 years ago
The Toba super-eruption was one of the largest volcanic events of the last two million years, about 5,000 times larger than the Mount St. Helens eruption of 1980. The eruption occurred 74,000 years ago on the island of Sumatra, Indonesia, and was argued to have ushered in a "volcanic winter" lasting six to ten years, leading to a 1,000-year-long cooling of the Earth's surface. Theories posited that the eruption caused major catastrophes, including the decimation of hominin and mammal populations in Asia and the near extinction of our own species. The few surviving Homo sapiens in Africa were said to have endured by developing sophisticated social, symbolic and economic strategies that enabled them to eventually re-expand and populate Asia 60,000 years ago in a single, rapid wave along the Indian Ocean coastline.
Fieldwork in southern India conducted in 2007 by some of this study's authors challenged these theories, leading to major debates between archaeologists, geneticists and earth scientists about the timing of human dispersals Out of Africa and the impact of the Toba super-eruption on climate and environments. The current study continues the debate, providing evidence that Homo sapiens were present in Asia earlier than expected and that the Toba super-eruption wasn't as apocalyptic as believed.
The Toba volcanic super-eruption and human evolution
The current study reports on a unique 80,000 year-long stratigraphic record from the Dhaba site in northern India's Middle Son Valley. Stone tools uncovered at Dhaba in association with the timing of the Toba event provide strong evidence that Middle Palaeolithic tool-using populations were present in India prior to and after 74,000 years ago. Professor J.N. Pal, principal investigator from the University of Allahabad in India notes that "Although Toba ash was first identified in the Son Valley back in the 1980s, until now we did not have associated archaeological evidence, so the Dhaba site fills in a major chronological gap."
Professor Chris Clarkson of the University of Queensland, lead author of the study, adds, "Populations at Dhaba were using stone tools that were similar to the toolkits being used by Homo sapiens in Africa at the same time. The fact that these toolkits did not disappear at the time of the Toba super-eruption or change dramatically soon after indicates that human populations survived the so-called catastrophe and continued to create tools to modify their environments." This new archaeological evidence supports fossil evidence that humans migrated out of Africa and expanded across Eurasia before 60,000 years ago. It also supports genetic findings that humans interbred with archaic species of hominins, such as Neanderthals, before 60,000 years ago.
Toba, climate change and human resilience
Though the Toba super-eruption was a colossal event, few climatologists and earth scientists still support the original formulation of the "volcanic winter" scenario; the evidence suggests that the Earth's cooling was more muted and that Toba may not have actually caused the subsequent glacial period. Recent archaeological evidence in Asia, including the findings unearthed in this study, does not support the theory that hominin populations went extinct on account of the Toba super-eruption.
Read more at Science Daily
Shrinking sea ice is creating an ecological trap for polar bears
San Diego Zoo Global researchers studying the effects of climate change on polar bears are using innovative technologies to understand why polar bears in the Southern Beaufort Sea are showing divergent movement patterns in the summer. In recent decades, about a quarter of this population of bears have chosen to come on land instead of staying on the shrinking summer sea ice platform. Historically, the polar bears in this region remained on the ice year-round. The decision of each individual bear to stay on the ice or to move to land appears to be linked to the energetic cost or benefit of either option, and the potential of having to swim to reach land.
"We found that bears who moved to land expended more energy on average during the summer than bears that remained on the receding sea ice," said Anthony Pagano, Ph.D., a postdoctoral research fellow co-mentored between San Diego Zoo Global, the U.S. Geological Survey and Polar Bears International. "And in the late summer, as the ice became even more restricted, a greater percentage of energy was expended by bears swimming to land. This means the immediate cost of moving to land exceeded the cost of remaining on the receding summer pack ice -- even though bears are having to move greater distances to follow the retreating sea ice than they would have historically."
However, prior research has shown that bears on land in this region have access to whale carcasses in the summer, while bears on the sea ice appear to be fasting. Researchers are concerned that the decision by each individual bear to stay on the ice is creating an ecological trap that may be contributing to population decreases that have already been documented in this population.
The Southern Beaufort Sea subpopulation of polar bears has experienced increased sea ice retreat in recent decades. A basic understanding of polar bear energetics that can be applied to this research has come from studies that include polar bears at the San Diego Zoo and at the Oregon Zoo.
"The polar bear conservation program at the San Diego Zoo has supported research such as this by engaging in studies to measure the energetic costs of polar bear metabolism," said Megan Owen, Ph.D., director of Population Sustainability, San Diego Zoo Global. "These studies have enhanced the capacity of field researchers to interpret data collected on free-ranging bears, providing a better understanding of what it costs a polar bear to move about their rapidly changing habitat."
Read more at Science Daily
"We found that bears who moved to land expended more energy on average during the summer than bears that remained on the receding sea ice," said Anthony Pagano, Ph.D., a postdoctoral research fellow co-mentored between San Diego Zoo Global, the U.S. Geological Survey and Polar Bears International. "And in the late summer, as the ice became even more restricted, a greater percentage of energy was expended by bears swimming to land. This means the immediate cost of moving to land exceeded the cost of remaining on the receding summer pack ice -- even though bears are having to move greater distances to follow the retreating sea ice than they would have historically."
However, prior research has shown that bears on land in this region have access to whale carcasses in the summer, while bears on the sea ice appear to be fasting. Researchers are concerned that the decision by each individual bear to stay on the ice is creating an ecological trap that may be contributing to population decreases that have already been documented in this population.
The Southern Beaufort Sea subpopulation of polar bears has experienced increased sea ice retreat in recent decades. A basic understanding of polar bear energetics that can be applied to this research has come from studies that include polar bears at the San Diego Zoo and at the Oregon Zoo.
"The polar bear conservation program at the San Diego Zoo has supported research such as this by engaging in studies to measure the energetic costs of polar bear metabolism," said Megan Owen, Ph.D., director of Population Sustainability, San Diego Zoo Global. "These studies have enhanced the capacity of field researchers to interpret data collected on free-ranging bears, providing a better understanding of what it costs a polar bear to move about their rapidly changing habitat."
Read more at Science Daily
Surprising evolutionary shift in snakes
In the animal kingdom, survival essentially boils down to eat or be eaten. How organisms accomplish the former and avoid the latter reveals a clever array of defense mechanisms. Maybe you can outrun your prey. Perhaps you sport an undetectable disguise. Or maybe you develop a death-defying resistance to your prey's heart-stopping defensive chemicals that you can store in your own body to protect you from predators.
Such is the case with most snake species of the Rhabdophis genus. Commonly called "keelbacks" and found primarily in southeast Asia, the snakes sport glands in their skin, sometimes just around the neck, where they store bufadienolides, a class of lethal steroids they get from toads, their toxic prey of choice.
"These snakes bend their necks in a defensive posture that surprises unlucky predators with a mouthful of toxins," says Utah State University herpetologist Alan Savitzky, who has long studied the slithery reptiles.
"Scientists once thought these snakes produced their own toxins, but learned, instead, they obtain it from their food -- namely, toads."
In a surprising twist, Savitzky and colleagues discovered not all members of the genus derive their defensive toxin from the same source. The multi-national team, consisting of researchers from USU; Kyoto University, University of the Ryukyus and Nihon University in Japan; the Chinese Academy of Sciences and Leshan Normal University in China; the National Pingtung University of Science and Technology in Taiwan; the University of Sri Jayewardenepura in Sri Lanka; and the Vietnam Academy of Science and Technology, reports a species group of the snakes, found in western China and Japan, shifted its primary diet from frogs (including toads) to earthworms.
The earthworms don't produce the toxins; instead, the snakes also snack on firefly larvae, which produce the same class of toxins as the toads. Their findings appear in the Feb. 24, 2020, early online issue of the Proceedings of the National Academy of Sciences.
"This is the first documented case of a vertebrate predator switching from a vertebrate prey to an invertebrate prey for the selective advantage of getting the same chemical class of defensive toxin," says Savitzky, professor in USU's Department of Biology and the USU Ecology Center.
Given the distant relationship between toads and fireflies, he says, the dramatic dietary shift most likely involved a chemical cue shared by the toads and fireflies -- perhaps the toxins themselves.
Read more at Science Daily
A year of surprising science from NASA's InSight Mars mission
InSight's first year of science is described in six new papers: five published in Nature Geoscience, and an additional paper in Nature Communications that details the InSight spacecraft's landing site, a shallow crater nicknamed "Homestead hollow" in a region called Elysium Planitia.
InSight is the first mission dedicated to looking deep beneath the Martian surface. Among its science tools are a seismometer for detecting quakes, sensors for gauging wind and air pressure, a magnetometer, and a heat flow probe designed to take the planet's temperature.
While the team continues to work on getting the heat flow probe into the Martian surface as intended, the ultra-sensitive seismometer, called the Seismic Experiment for Interior Structure (SEIS), has enabled scientists to "hear" multiple trembling events from hundreds to thousands of miles away.
Seismic waves are affected by the materials they move through, giving scientists a way to study the composition of the planet's inner structure. In turn, Mars can help the team better understand how all rocky planets, including Earth, first formed.
Underground
Mars trembles more often -- but also more mildly -- than expected. SEIS has found more than 450 seismic signals to date, the vast majority of which are probably quakes (as opposed to data noise created by environmental factors, like wind). The largest quake was about magnitude 4.0 -- not quite large enough for its waves to travel down below the crust into the planet's lower mantle and core. Those are "the juiciest parts of the apple" when it comes to studying the planet's inner structure, said Bruce Banerdt, InSight principal investigator at JPL.
Scientists are ready for more: It took months after InSight's landing in November 2018 before they recorded the first seismic event. By the end of 2019, SEIS was detecting about two seismic signals a day, suggesting that InSight just happened to touch down at a particularly quiet time. Scientists still have their fingers crossed for "the Big One."
Mars doesn't have tectonic plates like Earth, but it does have volcanically active regions that can cause rumbles. A pair of quakes was strongly linked to one such region, Cerberus Fossae, where scientists see boulders that may have been shaken down cliffsides. Ancient floods there carved channels nearly 800 miles (1,300 kilometers) long. Lava flows then seeped into those channels within the past 10 million years -- the blink of an eye in geologic time.
Some of these young lava flows show signs of having been fractured by quakes less than 2 million years ago. "It's just about the youngest tectonic feature on the planet," said planetary geologist Matt Golombek of JPL. "The fact that we're seeing evidence of shaking in this region isn't a surprise, but it's very cool."
At the Surface
Billions of years ago, Mars had a magnetic field. It is no longer present, but it left ghosts behind, magnetizing ancient rocks that are now between 200 feet (61 meters) and several miles below ground. InSight is equipped with a magnetometer -- the first on the surface of Mars to detect magnetic signals.
The magnetometer has found that the signals at Homestead hollow are 10 times stronger than what was predicted based on data from orbiting spacecraft that study the area. The measurements of these orbiters are averaged over a couple of hundred miles, whereas InSight's measurements are more local.
Because most surface rocks at InSight's location are too young to have been magnetized by the planet's former field, "this magnetism must be coming from ancient rocks underground," said Catherine Johnson, a planetary scientist at the University of British Columbia and the Planetary Science Institute. "We're combining these data with what we know from seismology and geology to understand the magnetized layers below InSight. How strong or deep would they have to be for us to detect this field?"
In addition, scientists are intrigued by how these signals change over time. The measurements vary by day and night; they also tend to pulse around midnight. Theories are still being formed as to what causes such changes, but one possibility is that they're related to the solar wind interacting with the Martian atmosphere.
In the Wind
InSight measures wind speed, direction and air pressure nearly continuously, offering more data than previous landed missions. The spacecraft's weather sensors have detected thousands of passing whirlwinds, which are called dust devils when they pick up grit and become visible. "This site has more whirlwinds than any other place we've landed on Mars while carrying weather sensors," said Aymeric Spiga, an atmospheric scientist at Sorbonne University in Paris.
Despite all that activity and frequent imaging, InSight's cameras have yet to see dust devils. But SEIS can feel these whirlwinds pulling on the surface like a giant vacuum cleaner. "Whirlwinds are perfect for subsurface seismic exploration," said Philippe Lognonné of Institut de Physique du Globe de Paris (IPGP), principal investigator of SEIS.
Still to Come: The Core
InSight has two radios: one for regularly sending and receiving data, and a more powerful radio designed to measure the "wobble" of Mars as it spins. This X-band radio, also known as the Rotation and Interior Structure Experiment (RISE), can eventually reveal whether the planet's core is solid or liquid. A solid core would cause Mars to wobble less than a liquid one would.
This first year of data is just a start. Watching over a full Martian year (nearly two Earth years) will give scientists a much better idea of the size and speed of the planet's wobble.
Read more at Science Daily
Feb 24, 2020
One billion-year-old green seaweed fossils identified, relative of modern land plants
Virginia Tech paleontologists have made a remarkable discovery in China: 1-billion-year-old micro-fossils of green seaweeds that could be related to the ancestor of the earliest land plants and trees, which first developed 450 million years ago.
The micro-fossil seaweeds -- a form of algae known as Proterocladus antiquus -- are barely visible to the naked eye at 2 millimeters in length, or roughly the size of a typical flea. Professor Shuhai Xiao said the fossils are the oldest green seaweeds ever found. They were imprinted in rock taken from an area of dry land -- formerly ocean -- near the city of Dalian in the Liaoning Province of northern China. Previously, the earliest convincing fossil record of green seaweeds was found in rock dated to roughly 800 million years old.
The findings -- led by Xiao and Qing Tang, a post-doctoral researcher, both in the Department of Geosciences, part of the Virginia Tech College of Science -- are featured in the latest issue of Nature Ecology & Evolution.
"These new fossils suggest that green seaweeds were important players in the ocean long before their land-plant descendants moved and took control of dry land," Xiao said.
"The entire biosphere is largely dependent on plants and algae for food and oxygen, yet land plants did not evolve until about 450 million years ago," Xiao said. "Our study shows that green seaweeds evolved no later than 1 billion years ago, pushing back the record of green seaweeds by about 200 million years. What kind of seaweeds supplied food to the marine ecosystem?"
Xiao said the current hypothesis is that land plants -- the trees, grasses, food crops, bushes, even kudzu -- evolved from green seaweeds, which were aquatic plants. Through geological time -- millions upon millions of years -- they moved out of the water, adapted to dry land, and prospered in their new natural environment. "These fossils are related to the ancestors of all the modern land plants we see today."
However, Xiao added the caveat that not all geobiologists are on the same page -- the origin of green plants remains debated. "Not everyone agrees with us; some scientists think that green plants started in rivers and lakes, and then conquered the ocean and land later," added Xiao, a member of the Virginia Tech Global Change Center.
There are three main types of seaweed -- brown (Phaeophyceae), green (Chlorophyta), and red (Rhodophyta) -- with thousands of species of each kind. Fossils of red seaweed, which is now common on ocean floors, have been dated as far back as 1.047 billion years.
"There are some modern green seaweeds that look very similar to the fossils that we found," Xiao said. "A group of modern green seaweeds, known as siphonocladaleans, are particularly similar in shape and size to the fossils we found."
Photosynthetic plants are, of course, vital to the ecological balance of the planet: they produce organic carbon and oxygen through photosynthesis, and they provide food and shelter for untold numbers of mammals, fish, and more. Yet as recently as 2 billion years ago, Earth's oceans held no green plants at all, Xiao said.
It was Tang who discovered the micro-fossils of the seaweeds using an electron microscope at Virginia Tech's campus and brought them to Xiao's attention. To make the fossils easier to see, mineral oil was dripped onto them to create a strong contrast.
"These seaweeds display multiple branches, upright growths, and specialized cells known as akinetes that are very common in this type of fossil," he said. "Taken together, these features strongly suggest that the fossil is a green seaweed with complex multicellularity that is circa 1 billion years old. These likely represent the earliest fossil of green seaweeds. In short, our study tells us that the ubiquitous green plants we see today can be traced back to at least 1 billion years."
Read more at Science Daily
Magnetic field at Martian surface ten times stronger than expected
In a study published today in Nature Geoscience, scientists reveal that the magnetic field at the InSight landing site is ten times stronger than anticipated, and fluctuates over time-scales of seconds to days.
"One of the big unknowns from previous satellite missions was what the magnetization looked like over small areas," said lead author Catherine Johnson, a professor at the University of British Columbia and senior scientist at the Planetary Science Institute. "By placing the first magnetic sensor at the surface, we have gained valuable new clues about the interior structure and upper atmosphere of Mars that will help us understand how it -- and other planets like it -- formed."
Zooming in on magnetic fields
Before the InSight mission, the best estimates of Martian magnetic fields came from satellites orbiting high above the planet, and were averaged over large distances of more than 150 kilometres.
"The ground-level data give us a much more sensitive picture of magnetization over smaller areas, and where it's coming from," said Johnson. "In addition to showing that the magnetic field at the landing site was ten times stronger than the satellites anticipated, the data implied it was coming from nearby sources."
Scientists have known that Mars had an ancient global magnetic field billions of years ago that magnetized rocks on the planet before mysteriously switching off. Because most rocks at the surface are too young to have been magnetized by this ancient field, the team thinks the magnetization must be coming from deeper underground.
"We think it's coming from much older rocks that are buried anywhere from a couple hundred feet to ten kilometres below ground," said Johnson. "We wouldn't have been able to deduce this without the magnetic data and the geology and seismic information InSight has provided."
The team hopes that by combining these InSight results with satellite magnetic data and future studies of Martian rocks, they can identify exactly which rocks carry the magnetization and how old they are.
Day-night fluctuations and things that pulse in the dark
The magnetic sensor has also provided new clues about phenomena that occur high in the upper atmosphere and the space environment around Mars.
Just like Earth, Mars is exposed to solar wind, which is a stream of charged particles from the Sun that carries an interplanetary magnetic field (IMF) with it, and can cause disturbances like solar storms. But because Mars lacks a global magnetic field, it is less protected from solar weather.
"Because all of our previous observations of Mars have been from the top of its atmosphere or even higher altitudes, we didn't know whether disturbances in solar wind would propagate to the surface," said Johnson. "That's an important thing to understand for future astronaut missions to Mars."
The sensor captured fluctuations in the magnetic field between day and night and short, mysterious pulsations around midnight, confirming that events in and above the upper atmosphere can be detected at the surface.
The team believes that the day-night fluctuations arise from a combination of how the solar wind and IMF drape around the planet, and solar radiation charging the upper atmosphere and producing electrical currents, which in turn generate magnetic fields.
"What we're getting is an indirect picture of the atmospheric properties of Mars -- how charged it becomes and what currents are in the upper atmosphere," said co-author Anna Mittelholz, a postdoctoral fellow at the University of British Columbia.
And the mysterious pulsations that mostly appear at midnight and last only a few minutes?
"We think these pulses are also related to the solar wind interaction with Mars, but we don't yet know exactly what causes them," said Johnson. "Whenever you get to make measurements for the first time, you find surprises and this is one of our 'magnetic' surprises."
In the future, the InSight team wants to observe the surface magnetic field at the same time as the MAVEN orbiter passes over InSight, allowing them to compare data.
Read more at Science Daily
CRISPR gene cuts may offer new way to chart human genome
In search of new ways to sequence human genomes and read critical alterations in DNA, researchers at Johns Hopkins Medicine say they have successfully used the gene cutting tool CRISPR to make cuts in DNA around lengthy tumor genes, which can be used to collect sequence information.
A report on the proof-of-principle experiments using genomes from human breast cancer cells and tissue appears in the Feb. 10 issue of Nature Biotechnology.
The researchers say that pairing CRISPR with tools that sequence the DNA components of human cancer tissue is a technique that could, one day, enable fast, relatively cheap sequencing of patients' tumors, streamlining the selection and use of treatments that target highly specific and personal genetic alterations.
"For tumor sequencing in cancer patients, you don't necessarily need to sequence the whole cancer genome," says Winston Timp, Ph.D., assistant professor of biomedical engineering and molecular biology and genetics at the Johns Hopkins University School of Medicine. "Deep sequencing of particular areas of genetic interest can be very informative."
In conventional genome sequencing, scientists must make many copies of the DNA of interest, randomly break the DNA into segments, and feed the broken segments through a computerized machine that reads the string of chemical compounds called nucleic acids -- the four "bases" that form DNA, lettered A, C, G and T. Scientists then look for overlapping regions of the broken segments and fit them together like tiles on a roof to reconstruct the long stretches of DNA that make up a gene.
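That final "tiles on a roof" step is essentially an overlap-assembly problem. As a hedged sketch only -- a toy greedy merger, not the software actually used in sequencing pipelines -- the idea can be illustrated in Python:

```python
def overlap_len(a: str, b: str, min_overlap: int = 3) -> int:
    # Longest suffix of `a` matching a prefix of `b` (at least min_overlap).
    for n in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def merge_reads(reads: list[str]) -> str:
    # Greedily merge the pair of fragments with the largest overlap until
    # one sequence remains. A toy illustration, not a production assembler.
    reads = reads[:]
    while len(reads) > 1:
        best_n, best_i, best_j = 0, 0, 1
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap_len(a, b)
                    if n > best_n:
                        best_n, best_i, best_j = n, i, j
        if best_n == 0:  # no overlaps left; nothing more to merge
            return "".join(reads)
        merged = reads[best_i] + reads[best_j][best_n:]
        reads = [r for k, r in enumerate(reads)
                 if k not in (best_i, best_j)] + [merged]
    return reads[0]

# Overlapping fragments of the sequence ACGTACGGATC, fed in out of order
print(merge_reads(["GGATC", "ACGTACG", "TACGGAT"]))  # -> ACGTACGGATC
```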
In their experiments, Timp and M.D./Ph.D. student Timothy Gilpatrick were able to skip the DNA-copying part of conventional sequencing by using CRISPR to make targeted cuts in DNA isolated from a sliver of tissue taken from a patient's breast cancer tumor.
Then, the scientists glued so-called "sequencing adaptors" to the CRISPR-snipped ends of the DNA sections. The adaptors serve as a kind of handle, guiding the DNA to tiny holes, or "nanopores," which read the sequence.
By passing DNA through the narrow hole, a sequencer can build a read-out of DNA letters based on the unique electrical current that occurs when each chemical code "letter" slides through the hole.
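Purely as a conceptual sketch of that read-out step -- the current levels below are invented, and real basecallers decode raw signal with statistical or neural models rather than a simple lookup -- the idea of turning current measurements into letters can be illustrated like this:

```python
# Hypothetical current level (picoamps) associated with each base. Real
# nanopore signals depend on several bases occupying the pore at once,
# so this one-base-per-sample model is a deliberate simplification.
LEVELS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 125.0}

def call_bases(current_trace: list[float]) -> str:
    # Assign each current sample to the base with the nearest level.
    return "".join(
        min(LEVELS, key=lambda base: abs(LEVELS[base] - sample))
        for sample in current_trace
    )

# A noisy trace whose samples sit near the A, C, G, T, A levels in turn
print(call_bases([81.2, 96.1, 108.7, 126.3, 79.5]))  # -> ACGTA
```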
Among the 10 breast cancer genes the team focused on, the Johns Hopkins scientists were able to use nanopore sequencing on breast cancer cell lines and tissue samples to detect a type of DNA alteration called methylation, in which chemicals called methyl groups are added to the DNA around genes, affecting how those genes are read.
The researchers found a location of decreased DNA methylation in a gene called keratin 19 (KRT19), which is important in cell structure and scaffolding. Previous studies have shown that a decrease in DNA methylation in KRT19 is associated with tumor spread.
In the breast cancer cell lines they studied, the Johns Hopkins team was able to generate an average of 400 "reads" per base pair, a reading "depth" hundreds of times greater than that of some conventional sequencing tools.
Among their samples of human breast cancer tumor tissue taken at biopsies, the team was able to produce an average of 100 reads per region. "This is certainly less than what we can do with cell lines, but we have to be more gentle with DNA from human tissue samples because it's been frozen and thawed several times," says Timp.
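For context on those figures, depth of coverage is, at heart, the total number of sequenced bases over a region divided by that region's length. A minimal sketch, using illustrative numbers only (not the study's actual read lengths or target sizes):

```python
def mean_coverage(read_lengths_bp: list[int], region_length_bp: int) -> float:
    # Average sequencing depth: total sequenced bases / region length.
    return sum(read_lengths_bp) / region_length_bp

# Invented example: 800 reads of 5,000 bp spanning a 10,000 bp target
# region average out to 400x coverage across the region.
print(mean_coverage([5_000] * 800, 10_000))  # -> 400.0
```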
In addition to their studies of DNA methylation and small mutations, Timp and Gilpatrick sequenced the gene commonly associated with breast cancer: BRCA1, which spans a region on the genome more than 80,000 bases long. "This gene is really long, and we were able to collect sequencing reads which went all the way through this large and complex region," says Gilpatrick.
"Because we can use this technique to sequence really long genes, we may be able to catch big missing blocks of DNA we wouldn't be able to find with more conventional sequencing tools," says Timp.
Read more at Science Daily
Oldest reconstructed bacterial genomes link farming, herding with emergence of new disease
The Neolithic revolution, and the corresponding transition to agricultural and pastoralist lifestyles, represents one of the greatest cultural shifts in human history, and it has long been hypothesized that this shift may also have provided the opportunity for the emergence of human-adapted diseases. A new study published in Nature Ecology & Evolution, led by Felix M. Key, Alexander Herbig, and Johannes Krause of the Max Planck Institute for the Science of Human History, examined human remains excavated across Western Eurasia and reconstructed eight ancient Salmonella enterica genomes -- all part of a related group within the much larger diversity of modern S. enterica. The results illuminate what was likely a serious health concern in the past and reveal how this bacterial pathogen evolved over a period of 6,500 years.
Searching for ancient pathogens
Most pathogens do not cause any lasting impact on the skeleton, which can make identifying affected archeological remains difficult for scientists. In order to identify past diseases and reconstruct their histories, researchers have turned to genetic techniques. Using a newly developed bacterial screening pipeline called HOPS, Key and colleagues were able to overcome many of the challenges of finding ancient pathogens in metagenomic data.
"With our newly developed methodologies we were able to screen thousands of archaeological samples for traces of Salmonella DNA," says Herbig. The researchers screened 2,739 ancient human remains in total, eventually reconstructing eight Salmonella genomes up to 6,500 years old -- the oldest reconstructed bacterial genomes to date. This highlights an inherent difficulty in the field of ancient pathogen research, as hundreds of human samples are often required to recover just a single microbial genome. The genomes in the current study were recovered by taking samples from the teeth of the deceased. The presence of S. enterica in the teeth of these ancient individuals suggests they were suffering from systemic disease at their time of death.
The individuals whose remains were studied came from sites located from Russia to Switzerland, representing different cultural groups, from late hunter-gatherers to nomadic herders to early farmers. "This broad spectrum in time, geography and culture allowed us, for the first time, to apply molecular genetics to link the evolution of a pathogen to the development of a new human lifestyle," explained Herbig.
"Neolithization process" provided opportunities for pathogen evolution
With the introduction of domesticated animals, increased contact with both human and animal excrement, and a dramatic change in mobility, it has long been hypothesized that "Neolithization" -- the transition to a sedentary, agricultural lifestyle -- enabled more constant and recurrent exposure to pathogens and thus the emergence of new diseases. However, prior to the current study, there was no direct molecular evidence.
"Ancient metagenomics provides an unprecedented window into the past of human diseases," says lead author Felix M. Key, formerly of the Max Planck Institute for the Science of Human History and now at the Massachusetts Institute of Technology. "We now have molecular data to understand the emergence and spread of pathogens thousands of years ago, and it is exciting how we can utilize high-throughput technology to address long standing questions about microbial evolution."
Humans, Pigs, and the Origin of Paratyphi C
The researchers were able to determine that all six Salmonella genomes recovered from herders and farmers are progenitors to a strain that specifically infects humans but is rare today, Paratyphi C. Those ancient Salmonella, however, were probably not yet adapted to humans, and instead infected humans and animals alike, which suggests the cultural practices uniquely associated with the Neolithization process facilitated the emergence of those progenitors and subsequently human-specific disease. It was previously suggested that this strain of Salmonella spread from domesticated pigs to humans around 4,000 years ago, but the discovery of progenitor strains in humans more than 5,000 years ago suggests they might have spread from humans to pigs. However, the authors argue for a more moderate hypothesis, in which both human- and pig-specific Salmonella evolved independently from unspecific progenitors within the permissive environment of close human-animal contact.
"The fascinating possibilities of ancient DNA allow us to examine infectious microbes in the past, which sometimes puts the spotlight on diseases that today most people don't consider to be a major health concern," says Johannes Krause, director at the Max Planck Institute for the Science of Human History.
The current study allows the scientists to gain a perspective on the changes in the disease over time and in different human cultural contexts. "We're beginning to understand the genetics of host adaptation in Salmonella," says Key, "and we can translate that knowledge into mechanistic understanding about the emergence of human and animal adapted diseases."
Read more at Science Daily