Astronomers using the Subaru Telescope have determined that the Earth-like planets of the TRAPPIST-1 system are not significantly misaligned with the rotation of the star. This is an important result for understanding the evolution of planetary systems around very low-mass stars in general, and in particular the history of the TRAPPIST-1 planets including the ones near the habitable zone.
Stars like the Sun are not static; they rotate about an axis. This rotation is most noticeable when there are features like sunspots on the surface of the star. In the Solar System, the orbits of all of the planets are aligned to within 6 degrees of the Sun's rotation. In the past it was assumed that planetary orbits would be aligned with the rotation of the star, but there are now many known examples of exoplanet systems where the planetary orbits are strongly misaligned with the central star's rotation. This raises the question: can planetary systems form out of alignment, or did the observed misaligned systems start out aligned and get thrown out of alignment later by some perturbation?
The TRAPPIST-1 system has attracted attention because it has three small rocky planets located in or near the habitable zone, where liquid water can exist. The central star is a very low-mass, cool star called an M dwarf, and its planets are situated very close to it. This planetary system is therefore very different from our Solar System. Determining the history of this system is important because it could help establish whether any of the potentially habitable planets are actually habitable. It is also an interesting system because it lacks any nearby objects that could have perturbed the orbits of the planets, meaning that the orbits should still lie close to where the planets first formed. This gives astronomers a chance to investigate the primordial conditions of the system.
Because stars rotate, the side rotating into view has a relative velocity towards the viewer, while the side rotating out of view has a relative velocity away from the viewer. If a planet transits, that is, passes between the star and the Earth and blocks a small portion of the star's light, it is possible to tell which edge of the star the planet blocks first. This phenomenon is called the Rossiter-McLaughlin effect. Using this method, it is possible to measure the misalignment between the planetary orbit and the star's rotation. However, until now such observations have been limited to large planets such as Jupiter-like or Neptune-like ones.
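To make the geometry concrete, here is a minimal toy calculation in Python. It is not the team's actual analysis pipeline, and the rotation speed and blocked-flux fraction are illustrative assumptions; it only shows why the sign of the apparent velocity shift reveals which limb the planet covers first.

    # Toy model of the Rossiter-McLaughlin anomaly for a spin-orbit-aligned transit.
    # v_eq and f are illustrative assumptions, not values from the study.

    v_eq = 2.0   # km/s, assumed projected equatorial rotation speed of the star
    f = 0.05     # assumed fraction of starlight blocked by the transiting planet

    def rv_anomaly(x):
        """Apparent radial-velocity shift when the planet sits at projected
        position x, in stellar radii (x = -1 is the approaching limb).
        Removing blueshifted light leaves the disk-integrated spectrum
        slightly redshifted, and vice versa."""
        v_local = v_eq * x    # line-of-sight velocity of the blocked patch
        return -f * v_local   # first-order flux-weighted shift

    # An aligned transit crosses from the approaching limb to the receding limb,
    # so the anomaly starts positive (redshifted) and ends negative.
    for x in (-0.8, -0.4, 0.0, 0.4, 0.8):
        print(f"x = {x:+.1f} -> anomaly = {rv_anomaly(x):+.3f} km/s")

A misaligned orbit crosses the stellar disk at an angle, distorting or breaking this positive-to-negative symmetry; that distortion is what an obliquity measurement reads off.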
A team of researchers, including members from the Tokyo Institute of Technology and the Astrobiology Center in Japan, observed TRAPPIST-1 with the Subaru Telescope to look for misalignment between the planetary orbits and the star. The team took advantage of a rare opportunity on August 31, 2018, when three of the exoplanets orbiting TRAPPIST-1 transited in front of the star in a single night. Two of the three were rocky planets near the habitable zone. Since low-mass stars are generally faint, it had been impossible to probe the stellar obliquity (spin-orbit angle) for TRAPPIST-1. But thanks to the light-gathering power of the Subaru Telescope and the high spectral resolution of the new infrared spectrograph IRD, the team was able to measure the obliquity. They found that the obliquity was low, close to zero. This is the first measurement of stellar obliquity for a very low-mass star like TRAPPIST-1 and also the first Rossiter-McLaughlin measurement for planets in the habitable zone.
Read more at Science Daily
May 14, 2020
Is video game addiction real?
For most adolescents, playing video games is an enjoyable and often social form of entertainment. While playing video games is a fun pastime, there is a growing concern that spending too much time playing video games is related to negative developmental outcomes and can become an addiction.
A recent six-year study, the longest study ever done on video game addiction, found that about 90% of gamers do not play in a way that is harmful or causes negative long-term consequences. A significant minority, though, can become truly addicted to video games and as a result can suffer mentally, socially and behaviorally.
"The aim of this particular study is to look at the longer-term impact of having a particular relationship with video games and what it does to a person over time," said Sarah Coyne, a professor of family life at BYU and lead author of the research. "To see the impact, we examined the trajectories of pathological video gameplay across six years, from early adolescence to emerging adulthood."
In addition to finding long-term consequences for addicted gamers, this study, published in Developmental Psychology, also breaks down gamer stereotypes and found that pathological gaming is not a one size fits all disorder.
Pathological video gameplay is characterized by excessive time spent playing video games, difficulty disengaging from them and disruption to healthy functioning due to gaming.
Only about 10% of gamers fall into the pathological video gameplay category. When compared to the non-pathological group, those in the study displayed higher levels of depression, aggression, shyness, problematic cell phone use and anxiety by emerging adulthood. This was despite the groups being the same in all these variables at the initial time point, suggesting that video games may have been important in developing these negative outcomes.
To measure predictors and outcomes of video game addiction, Coyne studied 385 adolescents as they transitioned into adulthood. Each individual completed multiple questionnaires once a year over a six-year period. These questionnaires measured depression, anxiety, aggression, delinquency, empathy, prosocial behavior, shyness, sensory reactivity, financial stress and problematic cell phone use.
Two main predictors for video game addiction were found: being male and having low levels of prosocial behavior. Having higher levels of prosocial behavior, or voluntary behavior meant to benefit another person, tended to be a protective factor against the addiction symptoms.
Aside from the predictors, Coyne also found three distinct trajectories of video game use. Seventy-two percent of adolescents were relatively low in addiction symptoms across the six years of data collection. Another 18% of adolescents started with moderate symptoms that did not change over time, and only 10% of adolescents showed increasing levels of pathological gaming symptoms throughout the study.
The results suggest that while about 90% of gamers are not playing in a way that is dysfunctional or detrimental to the individual's life, there is still a sizable minority who are truly addicted to video games and suffer addiction symptoms over time.
These findings also go against the stereotype of gamers living in their parents' basement, unable to support themselves financially or get a job because of their fixation on video games. At least in their early twenties, pathological users of video games appear to be just as financially stable and forward-moving as gamers who are not addicted.
Read more at Science Daily
'Cell pores' discovery gives hope to millions of brain and spinal cord injury patients
Scientists have discovered a new treatment to dramatically reduce swelling after brain and spinal cord injuries, offering hope to 75 million victims worldwide each year.
The breakthrough in treating such injuries -- referred to as central nervous system (CNS) edema -- is thought to be hugely significant because current options are limited to putting patients in an induced coma or performing risky surgery.
Brain and spinal cord injuries affect all age groups. Older people are more at risk of sustaining them from strokes or falls, while for younger age groups, major causes include road traffic accidents and injuries from sports such as rugby, US-style football and other contact games.
The high-profile example of Formula 1 racing driver Michael Schumacher demonstrates the difficulties physicians currently face in treating such injuries. After falling and hitting his head on a rock while skiing in Switzerland in 2013, Schumacher developed a swelling on his brain from water rushing into the affected cells. He spent six months in a medically-induced coma and underwent complex surgery, but his rehabilitation continues to this day.
The new treatment, developed by an international team of scientists working at Aston University (UK), Harvard Medical School (US), University of Birmingham (UK), University of Calgary (Canada), Lund University (Sweden), Copenhagen University (Denmark) and University of Wolverhampton (UK), features in the latest edition of the scientific journal Cell.
The researchers used an already-licensed anti-psychotic medicine -- trifluoperazine (TFP) -- to alter the behaviour of tiny water channel 'pores' in cells known as aquaporins.
Testing the treatment on injured rats, they found those animals given a single dose of the drug at the trauma site recovered full movement and sensitivity in as little as two weeks, compared to an untreated group that continued to show motor and sensory impairment beyond six weeks after the injury.
The treatment works by counteracting the cells' normal reaction to a loss of oxygen in the CNS -- the brain and spinal cord -- caused by trauma. Under such conditions, cells quickly become 'saltier' because of a build-up of ions, causing a rush of water through the aquaporins which makes the cells swell and exerts pressure on the skull and spine. This build-up of pressure damages fragile brain and spinal cord tissues, disrupting the flow of electrical signals from the brain to the body and vice versa.
But the scientists discovered that TFP can stop this from happening. Focusing their efforts on important star-shaped brain and spinal cord cells called astrocytes, they found TFP prevents a protein called calmodulin from binding with the aquaporins. Normally, this binding effect sends the aquaporins shooting to the surface of the cell, letting in more water. By halting this action, the permeability of the cells is reduced.
Traditionally, TFP has been used to treat patients with schizophrenia and other mental health conditions. Its long-term use is associated with adverse side effects, but the researchers said their experiments suggested that just a single dose could have a significant long-lasting impact for CNS edema patients.
Since TFP is already licensed for use in humans by the US Food and Drug Administration (FDA) and UK National Institute for Health and Care Excellence (NICE), it could be rapidly deployed as a treatment for brain injuries. But the researchers stressed that further work would allow them to develop new, even better medicines based on their understanding of TFP's properties.
According to the World Health Organisation (WHO), each year around 60 million people sustain a traumatic brain or spinal cord injury and a further 15 million people suffer a stroke. These injuries can be fatal or lead to long-term disability, psychiatric disorders, substance abuse or self-harm.
Professor Roslyn Bill of the Biosciences Research Group at Aston University said:
"Every year, millions of people of all ages suffer brain and spinal injuries, whether from falls, accidents, road traffic collisions, sports injuries or stroke. To date, their treatment options have been very limited and, in many cases, very risky.
"This discovery, based on a new understanding of how our cells work at the molecular level, gives injury victims and their doctors hope. By using a drug already licensed for human use, we have shown how it is possible to stop the swelling and pressure build-up in the CNS that is responsible for long-term harm.
"While further research will help us to refine our understanding, the exciting thing is that doctors could soon have an effective, non-invasive way of helping brain and spinal cord injury patients at their disposal."
Dr Zubair Ahmed of the University of Birmingham's Institute of Inflammation and Ageing said:
"This is a significant advance from current therapies, which only treat the symptoms of brain and spinal injuries but do nothing to prevent the neurological deficits that usually occur as a result of swelling. The re-purposed drug offers a real solution to these patients and can be fast-tracked to the clinic."
Dr Alex Conner of the University of Birmingham's Institute of Clinical Sciences said:
"It is amazing that our work studying tiny water channels in the brain can tell us something about traumatic brain swelling that affects millions of people every year."
Read more at Science Daily
Study confirms cats can become infected with and may transmit COVID-19 to other cats
In a study published today (May 13, 2020) in the New England Journal of Medicine, scientists in the U.S. and Japan report that in the laboratory, cats can readily become infected with SARS-CoV-2, the virus that causes COVID-19, and may be able to pass the virus to other cats.
Yoshihiro Kawaoka, professor of pathobiological sciences at the University of Wisconsin School of Veterinary Medicine, led the study, in which researchers administered to three cats SARS-CoV-2 isolated from a human patient. The following day, the researchers swabbed the nasal passages of the cats and were able to detect the virus in two of the animals. Within three days, they detected the virus in all of the cats.
The day after the researchers administered virus to the first three cats, they placed another cat in each of their cages. Researchers did not administer SARS-CoV-2 virus to these cats.
Each day, the researchers took nasal and rectal swabs from all six cats to assess them for the presence of the virus. Within two days, one of the previously uninfected cats was shedding virus, detected in the nasal swab, and within six days, all of the cats were shedding virus. None of the rectal swabs contained virus.
Each cat shed SARS-CoV-2 from its nasal passages for up to six days. The virus was not lethal and none of the cats showed signs of illness. All of the cats ultimately cleared the virus.
"That was a major finding for us -- the cats did not have symptoms," says Kawaoka, who also holds a faculty appointment at the University of Tokyo. Kawaoka is also helping lead an effort to create a human COVID-19 vaccine called CoroFlu.
The findings suggest cats may be capable of becoming infected with the virus when exposed to people or other cats positive for SARS-CoV-2. It follows a study published in Science by scientists at the Chinese Academy of Agricultural Sciences that also showed cats (and ferrets) could become infected with and potentially transmit the virus. The virus is known to be transmitted in humans through contact with respiratory droplets and saliva.
"It's something for people to keep in mind," says Peter Halfmann, a research professor at UW-Madison who helped lead the study. "If they are quarantined in their house and are worried about passing COVID-19 to children and spouses, they should also worry about giving it to their animals."
Both researchers advise that people with symptoms of COVID-19 avoid contact with cats. They also advise cat owners to keep their pets indoors, in order to limit the contact their cats have with other people and animals.
Kawaoka is concerned about the welfare of animals. The World Organization for Animal Health and the Centers for Disease Control and Prevention say there is "no justification in taking measures against companion animals that may compromise their welfare."
Humans remain the biggest risk to other humans in transmission of the virus. There is no evidence cats readily transmit the virus to humans, nor are there documented cases in which humans have become ill with COVID-19 because of contact with cats.
There are, however, confirmed instances of cats becoming infected because of close contact with humans infected with the virus, and several large cats at the Bronx Zoo have also tested positive for the virus.
For instance, according to an April 22 announcement from the U.S. Department of Agriculture, two cats in two private homes in New York state tested positive for COVID-19. One had been in a home with a person with a confirmed case of the viral disease. The cats showed mild signs of respiratory illness and were expected to make a full recovery.
Additional cats have also tested positive for COVID-19 after close contact with their human companions, says Sandra Newbury, director of the UW-Madison Shelter Medicine Program. Newbury is leading a research study in several states in the U.S. to test animal-shelter cats that might have previously been exposed to human COVID-19 cases.
"Animal welfare organizations are working very hard in this crisis to maintain the human-animal bond and keep pets with their people," says Newbury. "It's a stressful time for everyone, and now, more than ever, people need the comfort and support that pets provide."
Newbury has worked with the CDC and the American Veterinary Medical Association to develop recommendations for shelters housing potentially exposed pets, which they may do while owners are hospitalized or otherwise unable to provide care because of their illness. The UW-Madison study helps confirm experimentally that cats can become infected, though the risk of natural infection from exposure to SARS-CoV-2 seems to be quite low, Newbury says. Of the 22 animals the program has tested, none have had positive polymerase chain reaction tests for the virus, she adds.
"Cats are still much more likely to get COVID-19 from you, rather than you get it from a cat," says Keith Poulsen, director of the Wisconsin Veterinary Diagnostic Laboratory, who recommends that pet owners first talk to their veterinarians about whether to have their animals tested. Testing should be targeted to populations of cats and other species shown to be susceptible to the virus and virus transmission.
With respect to pets, "we're targeting companion animals in communal residences with at-risk populations, such as nursing homes and assisted living facilities," Poulsen says. "There is a delicate balance of needing more information through testing and the limited resources and clinical implications of positive tests."
So, what should pet owners do?
Ruthanne Chun, associate dean for clinical affairs at UW Veterinary Care, offers the following advice:
"As always, animal owners should include pets and other animals in their emergency preparedness planning, including keeping on hand a two-week supply of food and medications," she says. "Preparations should also be made for the care of animals should you need to be quarantined or hospitalized due to illness."
Read more at Science Daily
Professor of Pathobiological Sciences at the University of Wisconsin School of Veterinary Medicine Yoshihiro Kawaoka led the study, in which researchers administered to three cats SARS-CoV-2 isolated from a human patient. The following day, the researchers swabbed the nasal passages of the cats and were able to detect the virus in two of the animals. Within three days, they detected the virus in all of the cats.
The day after the researchers administered virus to the first three cats, they placed another cat in each of their cages. Researchers did not administer SARS-CoV-2 virus to these cats.
Each day, the researchers took nasal and rectal swabs from all six cats to assess them for the presence of the virus. Within two days, one of the previously uninfected cats was shedding virus, detected in the nasal swab, and within six days, all of the cats were shedding virus. None of the rectal swabs contained virus.
Each cat shed SARS-CoV-2 from their nasal passages for up to six days. The virus was not lethal and none of the cats showed signs of illness. All of the cats ultimately cleared the virus.
"That was a major finding for us -- the cats did not have symptoms," says Kawaoka, who also holds a faculty appointment at the University of Tokyo. Kawaoka is also helping lead an effort to create a human COVID-19 vaccine called CoroFlu.
The findings suggest cats may be capable of becoming infected with the virus when exposed to people or other cats positive for SARS-CoV-2. It follows a study published in Science by scientists at the Chinese Academy of Agricultural Sciences that also showed cats (and ferrets) could become infected with and potentially transmit the virus. The virus is known to be transmitted in humans through contact with respiratory droplets and saliva.
"It's something for people to keep in mind," says Peter Halfmann, a research professor at UW-Madison who helped lead the study. "If they are quarantined in their house and are worried about passing COVID-19 to children and spouses, they should also worry about giving it to their animals."
Both researchers advise that people with symptoms of COVID-19 avoid contact with cats. They also advise cat owners to keep their pets indoors, in order to limit the contact their cats have with other people and animals.
Kawaoka is concerned about the welfare of animals. The World Organization for Animal Health and the Centers for Disease Control and Prevention say there is "no justification in taking measures against companion animals that may compromise their welfare."
Humans remain the biggest risk to other humans in transmission of the virus. There is no evidence cats readily transmit the virus to humans, nor are there documented cases in which humans have become ill with COVID-19 because of contact with cats.
There are, however, confirmed instances of cats becoming infected because of close contact with humans infected with the virus, and several large cats at the Bronx Zoo have also tested positive for the virus.
For instance, according to an April 22 announcement from the U.S. Department of Agriculture, two cats in two private homes in New York state tested positive for COVID-19. One had been in a home with a person with a confirmed case of the viral disease. The cats showed mild signs of respiratory illness and were expected to make a full recovery.
Additional cats have also tested positive for COVID-19 after close contact with their human companions, says Sandra Newbury, director of the UW-Madison Shelter Medicine Program. Newbury is leading a research study in several states in the U.S. to test animal-shelter cats that might have previously been exposed to human COVID-19 cases.
"Animal welfare organizations are working very hard in this crisis to maintain the human-animal bond and keep pets with their people," says Newbury. "It's a stressful time for everyone, and now, more than ever, people need the comfort and support that pets provide."
Newbury has worked with the CDC and the American Veterinary Medical Association to develop recommendations for shelters housing potentially exposed pets, which they may do while owners are hospitalized or otherwise unable to provide care because of their illness. The UW-Madison study helps confirm experimentally that cats can become infected, though the risk of natural infection from exposure to SARS-CoV-2 seems to be quite low, Newbury says. Of the 22 animals the program has tested, none have had positive polymerase chain reaction tests for the virus, she adds.
"Cats are still much more likely to get COVID-19 from you, rather than you get it from a cat," says Keith Poulsen, director of the Wisconsin Veterinary Diagnostic Laboratory, who recommends that pet owners first talk to their veterinarians about whether to have their animals tested. Testing should be targeted to populations of cats and other species shown to be susceptible to the virus and virus transmission.
With respect to pets, "we're targeting companion animals in communal residences with at-risk populations, such as nursing homes and assisted living facilities," Poulsen says. "There is a delicate balance of needing more information through testing and the limited resources and clinical implications of positive tests."
So, what should pet owners do?
Ruthanne Chun, associate dean for clinical affairs at UW Veterinary Care, offers the following advice:
- If your pet lives indoors with you and is not in contact with any COVID-19 positive individual, it is safe to pet, cuddle and interact with your pet.
- If you are COVID-19 positive, you should limit interactions with your pets to protect them from exposure to the virus.
- Additional guidance on managing pets in homes where people are sick with COVID-19 is available from the American Veterinary Medical Association and CDC, including in this FAQ from AVMA.
"As always, animal owners should include pets and other animals in their emergency preparedness planning, including keeping on hand a two-week supply of food and medications," she says. "Preparations should also be made for the care of animals should you need to be quarantined or hospitalized due to illness."
Read more at Science Daily
May 13, 2020
What's Mars made of?
Earth-based experiments on iron-sulfur alloys thought to comprise the core of Mars reveal details about the planet's seismic properties for the first time. This information will be compared to observations made by Martian space probes in the near future. Whether the results between experiment and observation coincide or not will either confirm existing theories about Mars' composition or call into question the story of its origin.
Mars is one of our closest terrestrial neighbors, yet it's still very far away -- between about 55 million and 400 million kilometers depending on where Earth and Mars are relative to the sun. At the time of writing, Mars is around 200 million kilometers away, and in any case, it is extremely difficult, expensive and dangerous to get to. For these reasons, it is sometimes more sensible to investigate the red planet through simulations here on Earth than it is to send an expensive space probe or, perhaps one day, people.
Keisuke Nishida, an Assistant Professor from the University of Tokyo's Department of Earth and Planetary Science at the time of the study, and his team are keen to investigate the inner workings of Mars. They look at seismic data and composition which tell researchers not just about the present state of the planet, but also about its past, including its origins.
"The exploration of the deep interiors of Earth, Mars and other planets is one of the great frontiers of science," said Nishida. "It's fascinating partly because of the daunting scales involved, but also because of how we investigate them safely from the surface of the Earth."
For a long time it has been theorized that the core of Mars probably consists of an iron-sulfur alloy. But given how inaccessible the Earth's core is to us, direct observations of Mars' core will likely have to wait some time. This is why seismic details are so important, as seismic waves, akin to enormously powerful sound waves, can travel through a planet and offer a glimpse inside, albeit with some caveats.
"NASA's Insight probe is already on Mars collecting seismic readings," said Nishida. "However, even with the seismic data there was an important missing piece of information without which the data could not be interpreted. We needed to know the seismic properties of the iron-sulfur alloy thought to make up the core of Mars."
Nishida and team have now measured the velocity of what are known as P-waves (one of two types of seismic wave, the other being S-waves) in molten iron-sulfur alloys.
"Due to technical hurdles, it took more than three years before we could collect the ultrasonic data we needed, so I am very pleased we now have it," said Nishida. "The sample is extremely small, which might surprise some people given the huge scale of the planet we are effectively simulating. But microscale high-pressure experiments help exploration of macroscale structures and long time-scale evolutionary histories of planets."
A molten iron-sulfur alloy just above its melting point of 1,500 degrees Celsius and subject to 13 gigapascals of pressure has a P-wave velocity of 4,680 meters per second; this is over 13 times faster than the speed of sound in air, which is 343 meters per second. The researchers used a device called a Kawai-type multianvil press to compress the sample to such pressures. They used X-ray beams from two synchrotron facilities, KEK-PF and SPring-8, to help them image the samples in order to then calculate the P-wave values.
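As a quick sanity check on those figures, the arithmetic below reproduces the quoted ratio and, using the standard relation for a melt (P-wave speed equals the square root of the bulk modulus divided by density), estimates the bulk modulus the measurement would imply. The melt density is an assumed illustrative value, not one reported by the study.

    # Arithmetic behind the figures quoted above.
    v_p = 4680.0    # m/s, measured P-wave velocity (from the text)
    v_air = 343.0   # m/s, speed of sound in air (from the text)
    print(f"{v_p / v_air:.1f} times the speed of sound in air")  # ~13.6

    # A melt supports no shear, so v_p = sqrt(K / rho); inverting gives K.
    rho = 5500.0        # kg/m^3, assumed density of the Fe-S melt (illustrative)
    K = rho * v_p ** 2  # implied bulk modulus, in pascals
    print(f"implied bulk modulus: ~{K / 1e9:.0f} GPa")           # ~120 GPa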
Read more at Science Daily
Not all psychopaths are violent; a new study may explain why some are 'successful' instead
Psychopathy is widely recognized as a risk factor for violent behavior, but many psychopathic individuals refrain from antisocial or criminal acts. Understanding what leads these psychopaths to be "successful" has been a mystery.
A new study conducted by researchers at Virginia Commonwealth University sheds light on the mechanisms underlying the formation of this "successful" phenotype.
"Psychopathic individuals are very prone to engaging in antisocial behaviors but what our findings suggest is that some may actually be better able to inhibit these impulses than others," said lead author Emily Lasko, a doctoral candidate in the Department of Psychology in the College of Humanities and Sciences. "Although we don't know exactly what precipitates this increase in conscientious impulse control over time, we do know that this does occur for individuals high in certain psychopathy traits who have been relatively more 'successful' than their peers."
The study, "What Makes a 'Successful' Psychopath? Longitudinal Trajectories of Offenders' Antisocial Behavior and Impulse Control as a Function of Psychopathy," will be published in a forthcoming issue of the journal Personality Disorders: Theory, Research, and Treatment.
When describing certain psychopathic individuals as "successful" versus "unsuccessful," the researchers are referring to life trajectories or outcomes. A "successful" psychopath, for example, might be a CEO or lawyer high in psychopathic traits, whereas an "unsuccessful" psychopath might have those same traits but is incarcerated.
The study tests a compensatory model of "successful" psychopathy, which theorizes that relatively "successful" psychopathic individuals develop greater conscientious traits that serve to inhibit their heightened antisocial impulses.
"The compensatory model posits that people higher in certain psychopathic traits (such as grandiosity and manipulation) are able to compensate for and overcome, to some extent, their antisocial impulses via increases in trait conscientiousness, specifically impulse control," Lasko said.
To test this model, the researchers studied data collected on 1,354 serious juvenile offenders who were adjudicated in court systems in Arizona and Pennsylvania.
"Although these participants are not objectively 'successful,' this was an ideal sample to test our hypotheses for two main reasons," the researchers write. "First, adolescents are in a prime developmental phase for the improvement of impulse control. Allowing us the longitudinal variability we would need to test our compensatory model. Second, offenders are prone to antisocial acts, by definition, and their rates of recidivism provided a real-world index of 'successful' versus 'unsuccessful' psychopathy phenotypes."
The study found that higher initial psychopathy was associated with steeper increases in general inhibitory control and the inhibition of aggression over time. That effect was magnified among "successful" offenders, or those who reoffended less.
Its findings lend support to the compensatory model of "successful" psychopathy, Lasko said.
"Our findings support a novel model of psychopathy that we propose, which runs contradictory to the other existing models of psychopathy in that it focuses more on the strengths or 'surpluses' associated with psychopathy rather than just deficits," she said. "Psychopathy is not a personality trait simply composed of deficits -- there are many forms that it can take."
Lasko is a researcher in VCU's Social Psychology and Neuroscience Lab, which seeks to understand why people try to harm one another. David Chester, Ph.D., director of the lab and an assistant professor of psychology, is co-author of the study.
Read more at Science Daily
Tracing the evolution of self-control
Human self-control evolved in our early ancestors, becoming particularly evident around 500,000 years ago when they developed the skills to make sophisticated tools, a new study suggests.
While early hominins such as Homo erectus could craft basic handaxes as early as 1.8 million years ago, our hominin ancestors began to create more elaborate and carefully designed versions of these tools sometime before 500,000 years ago.
The authors of the study, from the University of York, say these advances in craftsmanship suggest individuals at this time possessed characteristics which demonstrate significant self-control, such as concentration and frustration tolerance.
The study highlights a collection of 500,000-year-old flint axes unearthed from a gravel quarry in the village of Boxgrove in West Sussex. The axes are highly symmetrical, suggesting careful workmanship and the forgoing of immediate needs for longer-term aims.
Senior author of the study, Dr Penny Spikins, from the Department of Archaeology said: "More sophisticated tools like the Boxgrove handaxes start to appear around the same time as our hominin ancestors were developing much bigger brains.
"The axes demonstrate characteristics that can be related to self-control such as the investment of time and energy in something that does not produce an immediate reward, forward planning and a level of frustration tolerance for completing a painstaking task.
"In the present day our capacity for self-control has become particularly important. Without the advanced levels of self-control we possess as a species, lockdown would be impossible. It takes self-control to put the needs of the community first rather than focus on our own immediate ends. Our study offers some clues as to where in human history this ability originated."
The researchers also point to evidence that the production of highly symmetrical and elaborate axes would have required knowledge and skill accumulated over a lifetime.
In one study, people trying to replicate the axes discovered at Boxgrove needed 16 hours of practice even to produce a recognisable handaxe.
Lead author of the study, James Green, a PhD student in the Department of Archaeology at the University of York, added: "By deciphering the mental and physical processes involved in the production of prehistoric artefacts, we can gain valuable insights into the abilities of the individuals who made them.
"These axes demonstrate social learning and effortful activity directed at honing skills. They also provide some of the earliest evidence of something being deliberately made in a sequence from a picture in someone's mind.
Read more at Science Daily
Children face risk for severe complications and death from COVID-19
The study, published in JAMA Pediatrics, is the first to describe the characteristics of seriously ill pediatric COVID-19 patients in North America.
"The idea that COVID-19 is sparing of young people is just false," said study coauthor Lawrence C. Kleinman, professor and vice chair for academic development and chief of the Department of Pediatrics' Division of Population Health, Quality and Implementation Science at Rutgers Robert Wood Johnson Medical School. "While children are more likely to get very sick if they have other chronic conditions, including obesity, it is important to note that children without chronic illness are also at risk. Parents need to continue to take the virus seriously."
The study followed 48 children and young adults -- from newborns to 21 years old -- who were admitted to pediatric intensive care units (PICUs) in the United States and Canada for COVID-19 in March and April. More than 80 percent had chronic underlying conditions, such as immune suppression, obesity, diabetes, seizures or chronic lung disease. Of those, 40 percent depended on technological support due to developmental delays or genetic anomalies.
More than 20 percent experienced failure of two or more organ systems due to COVID-19, and nearly 40 percent required a breathing tube and ventilator. At the end of the follow-up period, nearly 33 percent of the children were still hospitalized due to COVID-19, with three still requiring ventilator support and one on life support. Two of the children admitted during the three-week study period died.
"This study provides a baseline understanding of the early disease burden of COVID-19 in pediatric patients," said Hariprem Rajasekhar, a pediatric intensivist involved in conducting the study at Robert Wood Johnson Medical School's Department of Pediatrics. "The findings confirm that this emerging disease was already widespread in March and that it is not universally benign among children."
The researchers said they were "cautiously encouraged" by hospital outcomes for the children studied, citing the 4.2 percent mortality rate for PICU patients compared with published mortality rates of up to 62 percent among adults admitted to ICUs, as well as lower incidences of respiratory failure.
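For readers checking the arithmetic, the quoted rate is consistent with the two deaths among the 48 patients described above:

    # Consistency check on the mortality figure quoted above.
    deaths = 2       # children who died during the study period (from the text)
    patients = 48    # children and young adults admitted to PICUs (from the text)
    print(f"PICU mortality: {100 * deaths / patients:.1f}%")  # 4.2%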
Kleinman noted that doctors in the New York metropolitan area are seeing what appears to be a new COVID-related syndrome in children.
Read more at Science Daily
May 12, 2020
Early experiences determine how birds build their first nest
Early life experiences of zebra finches have a big effect on the construction of their first homes, according to a new study by researchers at the University of Alberta's Faculty of Science and the University of St Andrews' School of Biology.
The study shows that the presence of an adult bird as well as the types of materials available in early adolescence influence two key aspects of first-time nest building: material preference and construction speed.
"Interestingly, we noted that the preference for different materials, differentiated by colour in our study, is shaped by the juvenile experience of this material -- but only in the presence of an adult," said Lauren Guillette, assistant professor in the Department of Psychology and project lead.
"This work is important because it debunks the long-held myth that birds build nests that look like the nest in which they hatched -- making nest-building a useful model system to experimentally test how animals learn about physical properties of the world."
In this study, the researchers controlled the environment in which zebra finches grew up. Each bird hatched into a nest of a specific colour -- pink or orange. As the birds grew up, they were paired with another bird of the same age. Then, some pairs were grouped with an adult bird in an environment that contained a different colour of nest material than the colour in which they hatched. Other young pairs experienced only an adult as company, or only nest material and no adult; still other pairs had just each other.
Using these methods the researchers could determine if birds build their first nest with a colour that matches their natal nest, or the colour they experienced while growing up.
The results show that as juvenile zebra finches embarked on building their first nest, most birds preferred to use materials to which they'd had access while growing up -- but only if an adult had also been present during that time. Further, birds that had no juvenile access to an adult or to material were three to four times slower at nest building.
"Together, these results show that juvenile zebra finches combine relevant social and ecological cues -- here, adult presence and material colour -- when developing their material preference," explained Alexis Breen, lead author on the study who recently obtained a PhD at the University of St Andrews in Scotland.
Read more at Science Daily
Can we really tell male and female dinosaurs apart?
Scientists worldwide have long debated our ability to identify male and female dinosaurs. Now, research led by Queen Mary University of London has shown that despite previous claims of success, it's very difficult to spot differences between the sexes.
In the new study, researchers analysed skulls from modern-day gharials, an endangered and giant crocodilian species, to see how easy it is to distinguish between males and females using only fossil records.
Male gharials are larger in size than females and possess a fleshy growth on the end of their snout, known as a ghara. Whilst the ghara is made from soft tissue, it is supported by a bony hollow near the nostrils, known as the narial fossa, which can be identified in their skulls.
The research team, which included Jordan Mallon from the Canadian Museum of Nature, Patrick Hennessey from Georgia Southern University and Lawrence Witmer from Ohio University, studied 106 gharial specimens in museums across the world. They found that aside from the presence of the narial fossa in males, it was still very hard to tell the sexes apart.
Dr David Hone, Senior Lecturer in Zoology at Queen Mary University of London and author of the study, said: "Like dinosaurs, gharials are large, slow growing reptiles that lay eggs, which makes them a good model for studying extinct dinosaur species. Our research shows that even with prior knowledge of the sex of the specimen, it can still be difficult to tell male and female gharials apart. With most dinosaurs we don't have anywhere near that size of the dataset used for this study, and we don't know the sex of the animals, so we'd expect this task to be much harder."
In many species, males and females can look very different from each other. For example, antlers are largely found only in male deer, and male peacocks are normally brightly coloured with large, iridescent tail feathers whereas females are much more subdued in their colouration. This is known as sexual dimorphism and is very common within the animal kingdom. Dinosaurs are expected to have exhibited such differences too; however, this research suggests that in most cases they are far too difficult to detect from the skeleton alone.
Dr Hone said: "Some animals show extraordinarily high levels of sexual dimorphism, for example huge size differences between males and females. Gharials sit somewhere in the middle as they do possess these large narial fossa that can help with identification. Our study suggests that unless the differences between the dinosaurs are really striking, or there is a clear feature like the fossa, we will struggle to tell a male and female dinosaur apart using our existing dinosaur skeletons."
The new research also challenges previous studies that have hinted at differences between the sexes in popular dinosaur species such as the Tyrannosaurus rex (T. rex), and led to common misconceptions amongst the general public.
Read more at Science Daily
Scientists reveal solar system's oldest molecular fluids could hold the key to early life
The oldest molecular fluids in the solar system could have supported the rapid formation and evolution of the building blocks of life, new research in the journal Proceedings of the National Academy of Sciences reveals.
An international group of scientists, led by researchers from the Royal Ontario Museum (ROM) and co-authors from McMaster University and York University, used state-of-the-art techniques to map individual atoms in minerals formed in fluids on an asteroid over 4.5 billion years ago.
Studying the ROM's iconic Tagish Lake meteorite, scientists used atom-probe tomography, a technique capable of imaging atoms in 3D, to target molecules along boundaries and pores between magnetite grains that likely formed on the asteroid's crust. There, they discovered precipitates left by water in the grain boundaries, and it was on these that they conducted their ground-breaking research.
"We know water was abundant in the early solar system," explains lead author Dr. Lee White, Hatch postdoctoral fellow at the ROM, "but there is very little direct evidence of the chemistry or acidity of these liquids, even though they would have been critical to the early formation and evolution of amino acids and, eventually, microbial life."
This new atomic-scale research provides the first evidence of the sodium-rich (and alkaline) fluids in which the magnetite framboids formed. These fluid conditions are preferential for the synthesis of amino acids, opening the door for microbial life to form as early as 4.5 billion years ago.
"Amino acids are essential building blocks of life on Earth, yet we still have a lot to learn about how they first formed in our solar system," says Beth Lymer, a PhD student at York University and co-author of the study. "The more variables that we can constrain, such as temperature and pH, allows us to better understand the synthesis and evolution of these very important molecules into what we now know as biotic life on Earth."
The Tagish Lake carbonaceous chondrite was retrieved from an ice sheet in B.C.'s Tagish Lake in 2000, and later acquired by the ROM, where it is now considered to be one of the museum's iconic objects. This history means that the sample used by the team has never been above room temperature or exposed to liquid water, allowing the scientists to confidently link the measured fluids to the parent asteroid.
By using new techniques, such as atom probe tomography, the scientists hope to develop analytical methods for planetary materials returned to Earth by spacecraft, such as those from NASA's OSIRIS-REx mission or a planned sample-return mission to Mars in the near future.
Read more at Science Daily
Multitasking in the workplace can lead to negative emotions
From writing papers to answering emails, it's common for office workers to juggle multiple tasks at once. But those constant interruptions can actually create sadness and fear and eventually, a tense working environment, according to a new study aimed at understanding what shapes the emotional culture of a workplace.
"Not only do people experience stress with multitasking, but their faces may also express unpleasant emotions and that can have negative consequences for the entire office culture," said study senior author Ioannis Pavlidis, director of the Computational Physiology Laboratory at the University of Houston.
Pavlidis, along with Gloria Mark at the University of California Irvine and Ricardo Gutierrez-Osuna at Texas A&M University, used a novel algorithm, based on co-occurrence matrices, to analyze mixed emotions manifested on the faces of so-called knowledge workers amidst an essay writing task. One group answered a single batch of emails before they began writing, thus limiting the amount of distraction, while the other group was frequently interrupted to answer emails as they came in.
The findings are published in the Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
"Individuals who engaged in multitasking appeared significantly sadder than those who did not. Interestingly, sadness tended to mix with a touch of fear in the multitasking cohort," Pavlidis said. "Multitasking imposes an onerous mental load and is associated with elevated stress, which appears to trigger the displayed sadness. The simultaneous onset of fear is intriguing and is likely rooted to subconscious anticipation of the next disruption," he added. Because multitasking is a widespread practice, the display of these negative emotions can persist throughout the workday for many people. It is this ubiquitous, continuous and persistent character of the phenomenon that renders it such a dangerous `climate maker', the researchers emphasized.
The facial expressions of the workers who answered emails in one batch remained mostly neutral during the course of their uninterrupted writing task. However, there was an element of anger during the separate email task, perhaps attributed to the realization of the amount of work needed to process all the emails in one session, the researchers theorize. The good news is that email batching is localized in time and thus its emotional effects don't last long. Solutions are possible in this case; the team suggests addressing the email batch at a later time when responding to emails is the only task, recognizing that won't always be possible due to office pressure.
Negative displayed emotions -- especially in open office settings -- can have significant consequences on company culture, according to the paper. "Emotional contagion can spread in a group or workplace through the influence of conscious or unconscious processes involving emotional states or physiological responses."
The results suggest that, upon the return to normalcy following the COVID-19 crisis, organizations should pay attention to multitasking practices to ensure a cohesive working environment. "Currently, an intriguing question is what the emotional effect of multitasking at home would be, where knowledge workers moved their operation during the COVID-19 pandemic," said Pavlidis.
Read more at Science Daily
"Not only do people experience stress with multitasking, but their faces may also express unpleasant emotions and that can have negative consequences for the entire office culture," said study senior author Ioannis Pavlidis, director of the Computational Physiology Laboratory at the University of Houston.
Pavlidis, along with Gloria Mark at the University of California Irvine and Ricardo Gutierrez-Osuna at Texas A&M University, used a novel algorithm, based on co-occurrence matrices, to analyze mixed emotions manifested on the faces of so-called knowledge workers amidst an essay writing task. One group answered a single batch of emails before they began writing, thus limiting the amount of distraction, while the other group was frequently interrupted to answer emails as they came in.
The findings are published in the Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
"Individuals who engaged in multitasking appeared significantly sadder than those who did not. Interestingly, sadness tended to mix with a touch of fear in the multitasking cohort," Pavlidis said. "Multitasking imposes an onerous mental load and is associated with elevated stress, which appears to trigger the displayed sadness. The simultaneous onset of fear is intriguing and is likely rooted to subconscious anticipation of the next disruption," he added. Because multitasking is a widespread practice, the display of these negative emotions can persist throughout the workday for many people. It is this ubiquitous, continuous and persistent character of the phenomenon that renders it such a dangerous `climate maker', the researchers emphasized.
The facial expressions of the workers who answered emails in one batch remained mostly neutral during the course of their uninterrupted writing task. However, there was an element of anger during the separate email task, perhaps attributed to the realization of the amount of work needed to process all the emails in one session, the researchers theorize. The good news is that email batching is localized in time and thus its emotional effects don't last long. Solutions are possible in this case; the team suggests addressing the email batch at a later time when responding to emails is the only task, recognizing that won't always be possible due to office pressure.
Negative displayed emotions -- especially in open office settings -- can have significant consequences on company culture, according to the paper. "Emotional contagion can spread in a group or workplace through the influence of conscious or unconscious processes involving emotional states or physiological responses."
Upon return to normalcy following the COVID-19 crisis, the results suggest organizations should pay attention to multi-tasking practices to ensure a cohesive working environment. "Currently, an intriguing question is what the emotional effect of multitasking at home would be, where knowledge workers moved their operation during the COVID 19 pandemic," said Pavlidis.
Read more at Science Daily
COVID-19 lockdowns significantly impacting global air quality
Two new studies in AGU's journal Geophysical Research Letters find nitrogen dioxide pollution over northern China, Western Europe and the U.S. decreased by as much as 60 percent in early 2020 as compared to the same time last year. Nitrogen dioxide is a highly reactive gas produced during combustion that has many harmful effects on the lungs. The gas typically enters the atmosphere through emissions from vehicles, power plants and industrial activities.
In addition to nitrogen dioxide, one of the new studies finds particulate matter pollution (particles smaller than 2.5 microns) has decreased by 35 percent in northern China. Particulate matter is composed of solid particles and liquid droplets that are small enough to penetrate deep into the lungs and cause damage.
The two new papers are part of an ongoing special collection of research in AGU journals related to the current pandemic.
Such a significant drop in emissions is unprecedented since air quality monitoring from satellites began in the 1990s, said Jenny Stavrakou, an atmospheric scientist at the Royal Belgian Institute for Space Aeronomy in Brussels and co-author of one of the papers. The only other comparable events are short-term reductions in China's emissions due to strict regulations during events like the 2008 Beijing Olympics.
The improvements in air quality will likely be temporary, but the findings give scientists a glimpse into what air quality could be like in the future as emissions regulations become more stringent, according to the researchers.
"Maybe this unintended experiment could be used to understand better the emission regulations," Stavrakou said. "It is some positive news among a very tragic situation."
However, the drop in nitrogen dioxide pollution has caused an increase in surface ozone levels in China, according to one of the new studies. Ozone is a secondary pollutant formed when sunlight and high temperatures catalyze chemical reactions in the lower atmosphere. Ozone is harmful to humans at ground level, causing pulmonary and heart disease.
In highly polluted areas, particularly in winter, surface ozone can be destroyed by nitrogen oxides, so ozone levels can increase when nitrogen dioxide pollution goes down. As a result, although air quality has largely improved in many regions, surface ozone can still be a problem, according to Guy Brasseur, an atmospheric scientist at the Max Planck Institute for Meteorology in Hamburg, Germany, and lead author of one of the new studies.
"It means that by just reducing the [nitrogen dioxide] and the particles, you won't solve the ozone problem," Brasseur said.
Worldwide emissions
Stavrakou and her colleagues used satellite measurements of air quality to estimate the changes in nitrogen dioxide pollution over the major epicenters of the outbreak: China, South Korea, Italy, Spain, France, Germany, Iran and the United States.
They found that nitrogen dioxide pollution decreased by an average of 40 percent over Chinese cities and by 20 to 38 percent over Western Europe and the United States during the 2020 lockdown, as compared to the same time in 2019.
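For illustration, the year-over-year comparison behind figures like these is a simple percent change between matched regional averages. The sketch below uses made-up numbers, not the studies' satellite data.

```python
# Minimal sketch of a lockdown-vs-baseline percent-change comparison.
# The inputs are placeholder values; real inputs would be satellite NO2
# column densities averaged over the same region and season.
import numpy as np

no2_2019 = np.array([8.1, 7.4, 9.0])  # hypothetical 2019 regional means
no2_2020 = np.array([4.9, 4.4, 5.4])  # hypothetical 2020 lockdown means

pct_change = 100.0 * (no2_2020.mean() - no2_2019.mean()) / no2_2019.mean()
print(f"NO2 change: {pct_change:.0f}%")  # about -40% for these numbers
```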
However, the study found nitrogen dioxide pollution did not decrease over Iran, one of the earliest and hardest-hit countries. The authors suspect this is because complete lockdowns weren't in place until late March and before that, stay-at-home orders were largely ignored. The authors did see a dip in emissions during the Iranian New Year holiday after March 20, but this dip is observed during the celebration every year.
Air quality in China
The second study looked at air quality changes in northern China where the virus was first reported and where lockdowns have been most strict.
Brasseur analyzed levels of nitrogen dioxide and several other types of air pollution measured by 800 ground-level air quality monitoring stations in northern China.
Brasseur and his colleague found particulate matter pollution decreased by an average of 35 percent and nitrogen dioxide decreased by an average of 60 percent after the lockdowns began on January 23.
Read more at Science Daily
May 11, 2020
Chemical evidence of dairying by hunter-gatherers in Lesotho in the first millennium AD
After analysing organic residues from ancient pots, a team of scientists led by the University of Bristol has uncovered new evidence of dairying by hunter-gatherers in the landlocked South African country of Lesotho in the mid-late first millennium AD.
The study on organic residue analysis from South African hunter-gatherer pots is being published today in Nature Human Behaviour. Extensive archaeological evidence shows that Early Iron Age agricultural communities settled in the coastal regions of KwaZulu-Natal in South Africa from around AD 400.
Although these farmers appear to have been in contact with local lowland hunter-gatherer groups, it was long assumed that they had little or no direct contact with hunter-gatherers already occupying the mountainous regions of Lesotho, as they did not settle the region until the 19th century due to the unsuitability of the mountains for crop cultivation.
Over the past several decades, however, domestic animal bones have been uncovered at several sites in the Maloti-Drakensberg Mountains of Lesotho, in hunter-gatherer contexts dating to the 1st and 2nd millennia AD.
At one site in particular -- Likoaeng -- domestic animal bones were found in association with an Early Iron Age potsherd and some fragments of iron. This discovery led to the suggestion that the hunter-gatherers occupying the site were following a 'hunters-with-sheep' mode of subsistence that incorporated the keeping of small numbers of livestock into what was otherwise a foraging economy and that they must have obtained these animals and objects through on-going contact with agricultural groups based on the coast.
In the past five years, however, several studies have sequenced DNA from supposed domestic animal bones from these highland sites and instead found them to belong to wild species. This led to the suggestion that the presence of domestic animals in the highlands, and therefore the level of contact, had been overestimated, yet the zooarchaeologists involved stand by their original morphological assessment of the bones.
Lead researcher, Helen Fewlass, now based at the Max Planck Institute for Evolutionary Anthropology (Leipzig) but who carried out the work as part of her master's project in the University of Bristol's Department of Anthropology and Archaeology, said: "We used organic residue analysis to investigate fats that become absorbed into the porous clay matrix of a pot during its use.
"We extracted and analysed lipid residues from pots from two hunter-gatherer sites with domestic livestock remains in the highlands of Lesotho, Likoaeng and Sehonghong, dating to the mid-late first millennium AD and compared them to lipids extracted from pots from a recent nearby agricultural settlement, Mokatlapoli.
"This allowed us to explore the subsistence practices of the hunter-gatherers occupying these sites to see if there was any evidence for their contact with farming groups."
The team found that dairy residues were present in approximately a third of the hunter-gatherer pots. They directly radiocarbon dated a dairy residue from Likoaeng to AD 579-654 and another from Sehonghong to AD 885-990. The results confirm the presence of domestic animals at these sites in the 1st millennium AD.
The team also observed patterning in the stable carbon isotopic values of fatty acids in the residues, which implies that the 1st millennium hunter-gatherer groups practised different methods of animal husbandry from the recent agricultural group occupying the same region.
The stable carbon isotopic values of dairy residues from the agricultural site clearly reflect the introduction of crops such as maize and sorghum into the region in the late nineteenth century and the foddering of domestic animals upon them.
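As a rough illustration of how fatty-acid isotope patterning is used to identify dairy fats, the sketch below applies a cutoff convention from the organic-residue literature; the threshold values and inputs are assumptions for illustration, not figures from this study.

```python
# Hedged sketch of classifying an absorbed pot residue from the carbon
# isotope values of its two main fatty acids (C16:0 and C18:0). The
# cutoffs below are an assumed convention from the wider literature,
# not values quoted in this study.
def classify_residue(d13c_16_0, d13c_18_0):
    """Classify a lipid residue from delta-13C of its two main fatty acids."""
    big_delta = d13c_18_0 - d13c_16_0  # Delta13C separates fat sources
    if big_delta < -3.1:
        return "ruminant dairy fat"
    elif big_delta < -0.3:
        return "ruminant carcass fat"
    return "non-ruminant fat"

# Hypothetical values: C18:0 strongly depleted relative to C16:0 points
# to milk fat, the signal reported in about a third of the pots.
print(classify_residue(d13c_16_0=-26.0, d13c_18_0=-31.0))  # ruminant dairy fat
```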
As the hunter-gatherer groups must have learnt animal husbandry techniques, the results support the notion that hunter-gatherer groups in the highlands of Lesotho had on-going contact with farming communities in the lowlands, rather than just obtaining the animals through raids or long-distance exchange networks. Based on the direct date of the dairy residue from Likoaeng, contact must have been established within a few centuries of the arrival of agricultural groups in the coastal regions of South Africa.
The results also have implications for the on-going debate about the molecular vs morphological assessment of the faunal remains. The results of the organic residue analysis support the osteoarchaeological evidence for the presence of domestic animals at Likoaeng and Sehonghong. However, as large amounts of milk can be generated from one domestic animal, the prevalence of dairy residues does not tell us how many domestic animals were present.
Direct radiocarbon dating of domestic faunal remains in these contexts has been hampered by poor collagen preservation. The new method (published earlier this month in Nature) for direct dating of fats extracted from potsherds represents a new avenue for placing the arrival and presence of domestic animals in the area in a secure chronological context.
Helen Fewlass added: "The presence of dairy fats in pots from Likoaeng and Sehonghong in highland Lesotho shows that hunter-gatherers in the mountains had adopted at least sporadic use of livestock from agricultural groups in South Africa not long after their arrival in the 1st millennium AD."
Co-author, Dr Emmanuelle Casanova, from the University of Bristol's Organic Geochemistry Unit -- part of the School of Chemistry, added: "In addition to the identification of dairying practices we were able to apply a brand-new dating method for pottery vessels to verify the antiquity of the dairy residues, which perfectly fits with the age of the hunter-gatherer groups."
Read more at Science Daily
Hayabusa2 reveals more secrets from Ryugu
In February and July of 2019, the Hayabusa2 spacecraft briefly touched down on the surface of near-Earth asteroid Ryugu. The readings it took with various instruments at those times have given researchers insight into the physical and chemical properties of the 1-kilometer-wide asteroid. These findings could help explain the history of Ryugu and other asteroids, as well as the solar system at large.
When our solar system formed around 4.6 billion years ago, most of the material it formed from became the sun, and a fraction of a percent became the planets and solid bodies, including asteroids. Planets have changed a lot since the early days of the solar system due to geological processes, chemical changes, bombardments and more. But asteroids have remained more or less the same, as they are too small to experience those things, and they are therefore useful for researchers who investigate the early solar system and our origins.
"I believe knowledge of the evolutionary processes of asteroids and planets are essential to understand the origins of the Earth and life itself," said Associate Professor Tomokatsu Morota from the Department of Earth and Planetary Science at the University of Tokyo. "Asteroid Ryugu presents an amazing opportunity to learn more about this as it is relatively close to home, so Hayabusa2 could make a return journey relatively easily. "
Hayabusa2 launched in December 2014 and reached Ryugu in June 2018. At the time of writing, Hayabusa2 is on its way back to Earth and is scheduled to deliver a payload in December 2020. This payload consists of small samples of surface material from Ryugu collected during two touchdowns in February and July of 2019. Researchers will learn much from the direct study of this material, but even before it reaches us, Hayabusa2 helped researchers to investigate the physical and chemical makeup of Ryugu.
"We used Hayabusa2's ONC-W1 and ONC-T imaging instruments to look at dusty matter kicked up by the spacecraft's engines during the touchdowns," said Morota. "We discovered large amounts of very fine grains of dark-red colored minerals. These were produced by solar heating, suggesting at some point Ryugu must have passed close by the sun."
Morota and his team investigated the spatial distribution of the dark-red matter around Ryugu, as well as its spectra, or light signature. The strong presence of the material around specific latitudes corresponded to the areas that would have received the most solar radiation in the asteroid's past -- hence the inference that Ryugu must have passed close by the sun.
"From previous studies we know Ryugu is carbon-rich and contains hydrated minerals and organic molecules. We wanted to know how solar heating chemically changed these molecules," said Morota. "Our theories about solar heating could change what we know of orbital dynamics of asteroids in the solar system. This in turn alters our knowledge of broader solar system history, including factors that may have affected the early Earth."
When Hayabusa2 delivers material it collected during both touchdowns, researchers will unlock even more secrets of our solar history. Based on spectral readings and albedo, or reflectivity, from within the touchdown sites, researchers are confident that both dark-red solar-heated material and gray unheated material were collected by Hayabusa2. Morota and his team hope to study larger properties of Ryugu, such as its many craters and boulders.
Read more at Science Daily
Scientists have modeled Mars climate to understand habitability
A Southwest Research Institute scientist modeled the atmosphere of Mars to help determine that salty pockets of water present on the Red Planet are likely not habitable by life as we know it on Earth. A team that also included scientists from Universities Space Research Association (USRA) and the University of Arkansas helped allay planetary protection concerns about contaminating potential Martian ecosystems. These results were published this month in Nature Astronomy.
Due to Mars' low temperatures and extremely dry conditions, a droplet of liquid water on its surface would instantly freeze, boil or evaporate, unless the droplet had dissolved salts in it. This brine would have a lower freezing temperature and would evaporate more slowly than pure liquid water. Salts are found across Mars, so brines could form there.
"Our team looked at specific regions on Mars -- areas where liquid water temperature and accessibility limits could possibly allow known terrestrial organisms to replicate -- to understand if they could be habitable," said SwRI's Dr. Alejandro Soto, a senior research scientist and co-author of the study. "We used Martian climate information from both atmospheric models and spacecraft measurements. We developed a model to predict where, when and for how long brines are stable on the surface and shallow subsurface of Mars."
Mars' hyper-arid conditions require lower temperatures to reach high relative humidities and tolerable water activities, which are measures of how easily the water content may be utilized for hydration. The maximum brine temperature expected is -55 F (-48 C) -- at the boundary of the theoretical low-temperature limit for life.
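As a rough sketch of the kind of threshold test such a model implies, the snippet below flags conditions that clear both a temperature and a water-activity limit. The cutoffs are assumptions drawn loosely from this article and the wider literature, not the study's actual model.

```python
# Hypothetical habitability check for a stable Martian brine. The cutoffs
# are assumptions: -55 F as the theoretical low-temperature limit for life
# cited above, and ~0.6 water activity as a commonly cited minimum for
# known terrestrial organisms.
LIFE_MIN_TEMP_F = -55.0        # assumed low-temperature limit for life
LIFE_MIN_WATER_ACTIVITY = 0.6  # assumed minimum water activity for life

def potentially_habitable(brine_temp_f, water_activity):
    """True only if a stable brine clears both assumed biological limits."""
    return (brine_temp_f >= LIFE_MIN_TEMP_F
            and water_activity >= LIFE_MIN_WATER_ACTIVITY)

# The warmest brine the study expects sits exactly at the temperature
# boundary, so even the best case barely reaches the edge for life:
print(potentially_habitable(brine_temp_f=-55.0, water_activity=0.5))  # False
print(potentially_habitable(brine_temp_f=-40.0, water_activity=0.7))  # True
```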
"Even extreme life on Earth has its limits, and we found that brine formation from some salts can lead to liquid water over 40% of the Martian surface but only seasonally, during 2% of the Martian year," Soto continued. "This would preclude life as we know it."
While pure liquid water is unstable on the Martian surface, models showed that stable brines can form and persist from the equator to high latitudes on the surface of Mars for a few percent of the year for up to six consecutive hours, a broader range than previously thought. However, the temperatures are well below the lowest temperatures to support life.
"These new results reduce some of the risk of exploring the Red Planet while also contributing to future work on the potential for habitable conditions on Mars," Soto said.
From Science Daily
Giant meteorite impacts formed parts of the Moon's crust, new evidence shows
A group of international scientists led by the Royal Ontario Museum has discovered that the formation of ancient rocks on the Moon may be directly linked to large-scale meteorite impacts.
The scientists conducted new research of a unique rock collected by NASA astronauts during the 1972 Apollo 17 mission to the Moon. They found it contains mineralogical evidence that it formed at incredibly high temperatures (in excess of 2300 °C/ 4300 °F) that can only be achieved by the melting of the outer layer of a planet in a large impact event.
In the rock, the researchers discovered the former presence of cubic zirconia, a mineral phase often used as a substitute for diamond in jewellery. The phase would only form in rocks heated to above 2300 °C, and though it has since reverted to a more stable phase (the mineral known as baddeleyite), the crystal retains distinctive evidence of a high-temperature structure.
While looking at the structure of the crystal, the researchers also measured the age of the grain, which revealed that the baddeleyite formed over 4.3 billion years ago. They concluded that the high-temperature cubic zirconia phase must have formed before this time, suggesting that large impacts were critically important to forming new rocks on the early Moon.
Fifty years ago, when the first samples were brought back from the surface of the Moon, lunar scientists raised questions about how lunar crustal rocks formed. Even today, a key question remains unanswered: how did the outer and inner layers of the Moon mix after the Moon formed? This new research suggests that large impacts over 4 billion years ago could have driven this mixing, producing the complex range of rocks seen on the surface of the Moon today.
"Rocks on Earth are constantly being recycled, but the Moon doesn't exhibit plate tectonics or volcanism, allowing older rocks to be preserved," explains Dr. Lee White, Hatch Postdoctoral Fellow at the ROM. "By studying the Moon, we can better understand the earliest history of our planet. If large, super-heated impacts were creating rocks on the Moon, the same process was probably happening here on Earth."
"By first looking at this rock, I was amazed by how differently the minerals look compared to other Apollo 17 samples," says Dr. Ana Cernok, Hatch Postdoctoral Fellow at the ROM and co-author of the study. "Although smaller than a millimetre, the baddeleyite grain that caught our attention was the largest one I have ever seen in Apollo samples. This small grain is still holding the evidence for formation of an impact basin that was hundreds of kilometres in diameter. This is significant, because we do not see any evidence of these old impacts on Earth."
Read more at Science Daily
May 10, 2020
Virgin birth has scientists buzzing
In a study published today in Current Biology, researchers from the University of Sydney have identified the single gene that determines how Cape honey bees reproduce without ever having sex. The gene, GB45239 on chromosome 11, is responsible for virgin births.
"It is extremely exciting," said Professor Benjamin Oldroyd in the School of Life and Environmental Sciences. "Scientists have been looking for this gene for the last 30 years. Now that we know it's on chromosome 11, we have solved a mystery."
Behavioural geneticist Professor Oldroyd said: "Sex is a weird way to reproduce and yet it is the most common form of reproduction for animals and plants on the planet. It's a major biological mystery why there is so much sex going on and it doesn't make evolutionary sense. Asexuality is a much more efficient way to reproduce, and every now and then we see a species revert to it."
In the Cape honey bee, found in South Africa, the gene has allowed worker bees to lay eggs that only produce females instead of the normal males that other honey bees do. "Males are mostly useless," Professor Oldroyd said. "But Cape workers can become genetically reincarnated as a female queen and that prospect changes everything."
But it also causes problems. "Instead of being a cooperative society, Cape honey bee colonies are riven with conflict because any worker can be genetically reincarnated as the next queen. When a colony loses its queen the workers fight and compete to be the mother of the next queen," Professor Oldroyd said.
The ability to produce daughters asexually, known as "thelytokous parthenogenesis," is restricted to a single subspecies inhabiting the Cape region of South Africa, the Cape honey bee or Apis mellifera capensis.
Several other traits distinguish the Cape honey bee from other honey bee subspecies. In particular, the ovaries of worker bees are larger and more readily activated and they are able to produce queen pheromones, allowing them to assert reproductive dominance in a colony.
These traits also lead to a propensity for social parasitism, a behaviour in which Cape bee workers invade foreign colonies, reproduce and persuade the host colony workers to feed their larvae. Every year in South Africa, 10,000 commercial beehive colonies die because of this social parasite behaviour in Cape honey bees.
"This is a bee we must keep out of Australia," Professor Oldroyd said.
The existence of Cape bees with these characteristics has been known for over a hundred years, but it is only recently, using modern genomic tools, that we have been able to identify the actual gene that gives rise to virgin birth.
"Further study of Cape bees could give us insight into two major evolutionary transitions: the origin of sex and the origin of animal societies," Professor Oldroyd said.
Read more at Science Daily
"It is extremely exciting," said Professor Benjamin Oldroyd in the School of Life and Environmental Sciences. "Scientists have been looking for this gene for the last 30 years. Now that we know it's on chromosome 11, we have solved a mystery."
Behavioural geneticist Professor Oldroyd said: "Sex is a weird way to reproduce and yet it is the most common form of reproduction for animals and plants on the planet. It's a major biological mystery why there is so much sex going on and it doesn't make evolutionary sense. Asexuality is a much more efficient way to reproduce, and every now and then we see a species revert to it."
In the Cape honey bee, found in South Africa, the gene has allowed worker bees to lay eggs that only produce females instead of the normal males that other honey bees do. "Males are mostly useless," Professor Oldroyd said. "But Cape workers can become genetically reincarnated as a female queen and that prospect changes everything."
But it also causes problems. "Instead of being a cooperative society, Cape honey bee colonies are riven with conflict because any worker can be genetically reincarnated as the next queen. When a colony loses its queen the workers fight and compete to be the mother of the next queen," Professor Oldroyd said.
The ability to produce daughters asexually, known as "thelytokous parthenogenesis," is restricted to a single subspecies inhabiting the Cape region of South Africa, the Cape honey bee or Apis mellifera capensis.
Several other traits distinguish the Cape honey bee from other honey bee subspecies. In particular, the ovaries of worker bees are larger and more readily activated and they are able to produce queen pheromones, allowing them to assert reproductive dominance in a colony.
These traits also lead to a propensity for social parasitism, a behaviour where Cape bee workers invade foreign colonies, reproduce and persuade the host colony workers to feed their larvae. Every year in South Africa, 10,000 colonies of commercial beehives die because of the social parasite behaviour in Cape honey bees.
"This is a bee we must keep out of Australia," Professor Oldroyd said.
The existence of Cape bees with these characters has been known for over a hundred years, but it is only recently, using modern genomic tools, that we have been able to understand the actual gene that gives rise to virgin birth.
"Further study of Cape bees could give us insight into two major evolutionary transitions: the origin of sex and the origin of animal societies," Professor Oldroyd said.
Read more at Science Daily
To climb like a gecko, robots need toes
Robots with toes? Experiments suggest that climbing robots could benefit from having flexible, hairy toes, like those of geckos, that can adjust quickly to accommodate shifting weight and slippery surfaces.
Biologists from the University of California, Berkeley, and Nanjing University of Aeronautics and Astronautics observed geckos running horizontally along walls to learn how they use their five toes to compensate for different types of surfaces without slowing down.
"The research helped answer a fundamental question: Why have many toes?" said Robert Full, UC Berkeley professor of integrative biology.
As his previous research showed, geckos' toes can stick to the smoothest surfaces through the use of intermolecular forces, and uncurl and peel in milliseconds. Their toes have up to 15,000 hairs per foot, and each hair has "an awful case of split ends, with as many as a thousand nano-sized tips that allow close surface contact," he said.
These discoveries have spawned research on new types of adhesives that use intermolecular forces, or van der Waals forces, to stick almost anywhere, even underwater.
One puzzle, he said, is that gecko toes only stick in one direction. They grab when pulled in one direction, but release when peeled in the opposite direction. Yet, geckos move agilely in any orientation.
To determine how geckos have learned to deal with shifting forces as they move on different surfaces, Yi Song, a UC Berkeley visiting student from Nanjing, China, ran geckos sideways along a vertical wall while making high-speed video recordings to show the orientation of their toes. The sideways movement allowed him to distinguish downward gravity from forward running forces to best test the idea of toe compensation.
Using a technique called frustrated total internal reflection, Song also measured the contact area of each toe. The technique made the toes light up when they touched a surface.
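As a rough illustration of how such frames become numbers: once contact glows, contact area is essentially the count of bright pixels above a threshold, scaled by the camera's spatial calibration. The function below is a hypothetical sketch, not the team's actual analysis pipeline; the threshold and millimetre-per-pixel values are placeholders.

```python
# Minimal sketch: estimate toe contact area from a normalized
# grayscale FTIR frame by counting bright pixels.
import numpy as np

def contact_area_mm2(frame: np.ndarray, threshold: float = 0.5,
                     mm_per_pixel: float = 0.02) -> float:
    """Pixels brighter than `threshold` count as contact; each pixel
    covers mm_per_pixel**2 of surface."""
    contact_pixels = int(np.count_nonzero(frame > threshold))
    return contact_pixels * mm_per_pixel ** 2

# Synthetic 100x100 frame with a bright 20x20 "toe pad" patch:
frame = np.zeros((100, 100))
frame[40:60, 40:60] = 1.0
print(f"{contact_area_mm2(frame):.2f} mm^2")  # 400 px * 0.0004 mm^2 = 0.16 mm^2
```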
To the researchers' surprise, geckos ran sideways just as fast as they climbed upward, quickly and easily realigning their toes against gravity. During sideways wall-running, the toes of the top front and hind feet shifted upward and acted just like the toes of the front feet during climbing.
To further explore the value of adjustable toes, researchers added slippery patches and strips, as well as irregular surfaces. To deal with these hazards, geckos took advantage of having multiple, soft toes. The redundancy allowed toes that still had contact with the surface to reorient and distribute the load, while the softness let them conform to rough surfaces.
"Toes allowed agile locomotion by distributing control among multiple, compliant, redundant structures that mitigate the risks of moving on challenging terrain," Full said. "Distributed control shows how biological adhesion can be deployed more effectively and offers design ideas for new robot feet, novel grippers and unique manipulators."
From Science Daily