Cancer needs energy to drive its out-of-control growth. It gets energy in the form of glucose, in fact consuming so much glucose that one method for imaging cancer simply looks for areas of extreme glucose consumption -- where there is consumption, there is cancer. But how does cancer get this glucose? A University of Colorado Cancer Center study published today in the journal Cancer Cell shows that leukemia undercuts the ability of normal cells to consume glucose, thus leaving more glucose available to feed its own growth.
"Leukemia cells create a diabetic-like condition that reduces glucose going to normal cells, and as a consequence, there is more glucose available for the leukemia cells. Literally, they are stealing glucose from normal cells to drive growth of the tumor," says Craig Jordan, PhD, investigator at University of Colorado Cancer Center, division chief of the Division of Hematology and the Nancy Carroll Allen Professor of Hematology at the University of Colorado School of Medicine.
Like diabetes, cancer's strategy hinges on insulin. Healthy cells need insulin to use glucose. In diabetes, either the pancreas under-produces insulin or tissues cannot respond to insulin, and so cells are left starved for energy while glucose builds up in the blood. The current study shows that leukemia creates similar conditions of glucose buildup in two ways.
First, tumor cells trick fat cells into over-producing a protein called IGFBP1. This protein makes healthy cells less sensitive to insulin, meaning that when IGFBP1 is high, it takes more insulin to use glucose than it does when IGFBP1 is low. Unless the supply of insulin goes up, high IGFBP1 means that the glucose consumption of healthy cells goes down. (This protein may also be a link in the chain connecting cancer and obesity: The more fat cells, the more IGFBP1, and the more glucose is available to the cancer.)
Of course, cancer has a second strategy that ensures insulin production does not go up to meet the need created by increased IGFBP1. In fact, cancers turn insulin production down. In large part, they do this in the gut.
"In the course of doing this systemic analysis, we realized that some of the factors that help regulate glucose are made by the gut or bacteria in the gut. We looked there and found that the composition of the microbiome in leukemic animals was different than in control mice," Jordan says.
One major difference in the guts of leukemic mice was the lack of a specific kind of bacteria known as bacteroids. These bacteroids produce short-chain fatty acids that in turn feed the health of cells lining your gut. Without bacteroids, gut health suffers. And the current study shows that without bacteroids, gut health suffers in ways that specifically aid cancer.
One way is the loss of hormones called incretins. When blood glucose gets high, for example after you eat, your gut releases incretins, which tamp down blood glucose, reducing it back into the normal range. Working through the gut, leukemia inactivates these incretins, allowing blood glucose to remain higher than it should. Leukemia also nixes the activity of serotonin. Serotonin is well-known as a "feel good" chemical that helps to regulate mood and is found in many antidepressants. But serotonin is also essential for the manufacture of insulin in the pancreas, and by attacking serotonin, leukemia reduces insulin production (and thus, down the line, glucose use).
The result of less insulin secretion and less insulin sensitivity is that cancer undercuts healthy cells' use of insulin from both sides: Healthy cells need more insulin, just as there is less insulin available. Less insulin use by healthy cells leaves more glucose for the cancer.
"It's a classic parasite trick: Take advantage of something the host does and subvert it for your own purposes," Jordan says.
Interestingly, just as a parasite might eat a host's food leading to malnourishment, cancer's energy theft may play a role in the fatigue and weight loss common in cancer patients.
"The fairly prevalent observation is that cancer patients have a condition called cachexia, basically wasting away -- you lose weight. If cancers are inducing systemic changes that result in depletion of normal energy stores, this could be part of that story," Jordan says.
However, Jordan and colleagues including first author Haobin Ye, PhD, not only showed how leukemia dysregulates healthy cells' glucose consumption, but also showed how to "re-regulate" this consumption.
"When we administered agents to recalibrate the glucose system, we found that we could restore glucose regulation and slow the growth of leukemia cells," Ye says.
These "agents" were surprisingly low-tech. One was serotonin. Another was tributyrin, a fatty acid found in butter and other foods. Serotonin supplementation replaced the serotonin nixed by leukemia and tributyrin helped to replace the short-chain fatty acids that were absent due to loss of bacteroids.
The group calls the combination Ser-Tri therapy. And they show that it is more than a theory. Ser-Tri therapy led to the recovery of insulin levels and reduction of IGFBP1. And leukemic mice treated with Ser-Tri therapy lived longer than those without. Twenty-two days after leukemia was introduced in mice, all of the untreated mice had died, while more than half of the mice treated with Ser-Tri were still alive.
The continuing line of work shows that cancer may depend on the ability to out-compete healthy cells for limited energy. Healthy tissues have strategies to regulate insulin, glucose and other factors controlling energy consumption; cancer cells have strategies to subvert this regulation with the goal of making more energy available for their own use.
"We now have evidence that what we observed in our mouse models is also true for leukemia patients," Ye says.
Understanding the mechanisms cancer uses to unbalance the body's energy system in its favor is helping doctors and researchers learn to thumb the scale in favor of healthy cells.
Read more at Science Daily
Sep 29, 2018
When neglected children become adolescents
What happens to neglected children when they become adolescents?
The Bucharest Early Intervention Project (BEIP) has shown that children reared in very stark institutional settings, with severe social deprivation and neglect, are at risk for cognitive problems, depression, anxiety, disruptive behavior and attention-deficit hyperactivity disorder. But BEIP has also shown that placing children with quality foster families can mitigate some of these effects, if it's done early.
The latest BEIP study, published this week by JAMA Psychiatry, asked what happens to the mental health of institutionalized children as they transition to adolescence. Outcomes at ages 8, 12 and 16 suggest diverging trajectories between children who remained in institutions versus those randomly chosen for placement with carefully vetted foster families.
Researchers led by Mark Wade, PhD, and Charles Nelson, PhD, of the Division of Developmental Medicine at Boston Children's Hospital, studied 220 children of whom 119 had spent at least some time in institutions. Of the 119, half had been placed in foster care.
Over the years, teachers and caregivers completed the MacArthur Health and Behavior Questionnaire, which includes subscales on depression, overanxious, social anxiety/withdrawal, oppositional defiant behavior, conduct problems, overt aggression, relational aggression and ADHD. The surveys revealed that children who were placed early in quality foster care, compared with those who remained in institutions, had less psychopathology, and in particular fewer externalizing behaviors such as rule-breaking, excessive arguing with authority figures, stealing or assaulting peers. Differences began to emerge at 12 years and became significant at 16 years.
While conditions at Romanian orphanages aren't the same as those in U.S. immigration detention systems, the researchers think the findings underscore the importance of keeping families together.
"Our results add to a growing literature on what might happen to a child's long-term psychological development when they experience separation from a primary caregiver early in development," says Wade. "Although this picture is very complex, we now know that many children who experience early neglect are at risk for an array of mental health problems later on. The good news is that if they are placed in high-quality homes with good caregiving, this risk is reduced. Yet they still tend to have more difficulties than their peers who never experienced this form of deprivation. So what we really need is policies and social programs that prevent separation from primary caregivers in the first place."
From Science Daily
Sep 28, 2018
Where are they? Cosmologists search Andromeda for signs of alien life
"Are we alone in the universe?" The question has fascinated, tantalized and even disconcerted humans for as long as we can remember.
So far, it would seem that intelligent extraterrestrial life -- at least as fits our narrow definition of it -- is nowhere to be found. Theories and assumptions abound as to why we have neither made contact with nor seen evidence of advanced extraterrestrial civilizations despite decades-long efforts to make our presence known and to communicate with them.
Meanwhile, a steady stream of discoveries is demonstrating the presence of Earth analogues -- planets that, like our own, orbit at a "Goldilocks zone" distance from their respective stars, where conditions are "just right" for liquid water (and thus life) to exist. Perhaps even more mind-blowing is the idea that there are, on average, as many planets as there are stars.
"That is, I think, one of the amazing discoveries of the last century or so -- that planets are common," said Philip Lubin, an experimental cosmologist and professor of physics at UC Santa Barbara. Given that, and the assumption that planets provide the conditions for life, the question for Lubin's group has become: Are we looking hard enough for these extraterrestrials?
That is the driver behind the Trillion Planet Survey, a project of Lubin's student researchers. The ambitious experiment, run almost entirely by students, uses a suite of telescopes near and far aimed at the nearby galaxy of Andromeda as well as other galaxies including our own, a "pipeline" of software to process images and a little bit of game theory.
"First and foremost, we are assuming there is a civilization out there of similar or higher class than ours trying to broadcast their presence using an optical beam, perhaps of the 'directed energy' arrayed-type currently being developed here on Earth," said lead researcher Andrew Stewart, who is a student at Emory University and a member of Lubin's group. "Second, we assume the transmission wavelength of this beam to be one that we can detect. Lastly, we assume that this beacon has been left on long enough for the light to be detected by us. If these requirements are met and the extraterrestrial intelligence's beam power and diameter are consistent with an Earth-type civilization class, our system will detect this signal."
From Radio Waves to Light Waves
For the last half-century, the dominant broadcast from Earth has taken the form of radio, TV and radar signals, and seekers of alien life, such as the scientists at the Search for Extraterrestrial Intelligence (SETI) Institute, have been using powerful radio telescopes to look for those signals from other civilizations. Recently, however, thanks to the exponentially accelerating progress of photonic technology, optical and infrared wavelengths are offering new opportunities to search via optical signals, which allow vastly longer-range detection for comparable systems.
In a paper published in 2016 called "The Search for Directed Intelligence," or SDI, Lubin outlined the fundamental detection and game theory of a "blind-blind" system in which neither we nor the extraterrestrial civilization are aware of each other but both wish to find each other. That paper was based on the application of photonics developed at UC Santa Barbara in Lubin's group for the propulsion of small spacecraft through space at relativistic speeds (i.e., a significant fraction of the speed of light) to enable the first interstellar missions. That ongoing project is funded by NASA's Starlight and billionaire Yuri Milner's Breakthrough Starshot programs, both of which use the technology developed at UCSB. The 2016 paper shows that the technology we are developing today would be the brightest light in the universe and thus capable of being seen across the entire universe.
Of course, not everyone is comfortable with advertising our presence to other, potentially advanced, extraterrestrial civilizations.
"Broadcasting our presence to the universe, believe it or not, turns out to be a very controversial topic," Stewart said, citing bureaucratic issues that arise whenever beaconing is discussed, as well as the difficulty in obtaining the necessary technology of the scale required. Consequently, only a few, tentative signals have ever been sent in a directed fashion, including the famous Voyager 1 probe with its message-in-a-bottle-like golden record.
Tipping the concept on its head, the researchers asked, 'What if there are other civilizations out there that are less shy about broadcasting their presence?'
"At the moment, we're assuming that they're not using gravity waves or neutrinos or something that's very difficult for us to detect," Lubin said. But optical signals could be detected by small (meter-class) telescopes such as those in the Las Cumbres Observatory's robotically controlled global network.
"In no way are we suggesting that radio SETI should be abandoned in favor of optical SETI," Stewart added. "We just think the optical bands should be explored as well."
Searching the Stars
"We're in the process of surveying (Andromeda) right now and getting what's called 'the pipeline' up and running," said researcher Alex Polanski, a UC Santa Barbara undergraduate in Lubin's group. A set of photos taken by the telescopes, each of which captures a 1/30th slice of Andromeda, will be knit together to create a single image, he explained. That one photograph will then be compared to a more pristine image in which there are no known transient signals -- interfering signals from, say, satellites or spacecraft -- in addition to the optical signals emanating from the stellar systems themselves. The survey photo would be expected to have the same signal values as the pristine "control" photo, leading to a difference of zero. But a difference greater than zero could indicate a transient signal source, Polanski explained. Those transient signals would then be further processed in the software pipeline developed by Stewart to kick out false positives. In the future, the team plans to use simultaneous multiple-color imaging to help remove false positives as well.
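The comparison Polanski describes is classic difference imaging: subtract the transient-free control frame from the survey frame, and any pixel with a nonzero residual becomes a candidate transient for further vetting. A minimal sketch in Python follows; the function name, threshold parameter, and array-based representation are illustrative assumptions, not details of the team's actual pipeline.

```python
import numpy as np

def find_transients(survey, control, threshold=0.0):
    """Flag candidate transient sources by differencing two aligned images.

    survey: image that may contain new signals
    control: "pristine" image with no known transients
    threshold: residual level above which a pixel counts as a candidate

    Returns the difference image and an array of (row, col) candidate pixels.
    """
    survey = np.asarray(survey, dtype=float)
    control = np.asarray(control, dtype=float)
    diff = survey - control
    # A difference of ~zero means no new signal; positive residuals are
    # candidates (a satellite trail, a cosmic ray -- or, just maybe, a beacon).
    candidates = np.argwhere(diff > threshold)
    return diff, candidates
```

In a real pipeline the two frames would first be astrometrically aligned and flux-matched, and the candidates would then be screened against known artifacts, which is the false-positive rejection step the article describes.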
"One of the things the software checks for is, say, a satellite that did go through our image," said Kyle Friedman, a senior from Granada Hills High School in Los Angeles, who is conducting research in Lubin's group. "It wouldn't be small; it would be pretty big, and if that were to happen the software would immediately recognize it and throw out that image before we actually even process it."
Other vagaries, according to the researchers, include sky conditions, which is why it's important to have several telescopes monitoring Andromeda during their data run.
Thanks to the efforts of Santa Barbara-based computer engineer Kelley Winters and the guidance of Lubin group project scientist Jatila van der Veen, the data is in good hands. Winters' cloud-based Linux server provides a flexible, highly connected platform for the data pipeline software to perform its image analysis, while van der Veen will apply her digital image processing expertise to bring this project to future experimental cosmologists.
For Laguna Blanca School senior and future physicist Caitlin Gainey, who joins the UCSB physics freshman class this year, the project is a unique opportunity.
"In the Trillion Planet Survey especially, we experience something very inspiring: We have the opportunity to look out of our earthly bubble at entire galaxies, which could potentially have other beings looking right back at us," she said. "The mere possibility of extraterrestrial intelligence is something very new and incredibly intriguing, so I'm excited to really delve into the search this coming year."
The search, for any SETI-watcher, is an exercise in patience and optimism. Andromeda is 2.5 million light-years away, van der Veen pointed out, so any signal detected now would have been sent at least 2.5 million years ago -- more than long enough for the civilization that sent it to have died out by the time the light reaches us.
"That does not mean we should not look," van der Veen said. "After all, we look for archaeological relics and fossils, which tell us about the history of Earth. Finding ancient signals will definitely give us information about the history of evolution of life in the cosmos, and that would be amazing."
While the data run and processing time for this particular project could occur in a span of weeks, according to the researchers this sequence could be repeated indefinitely. Theoretically, like all the sunrise and sunset watchers, and stargazers before us, we could look at the sky forever.
"I think if you were to take someone outside and you were to point at some random star in the night sky and see that is where life is, I think you would be hard pressed to find anyone who would not look at that star and just feel something very deep within themselves," Polanski said. "Some very deep connection to whatever is up there or some kind of solace, I think, knowing that we're not alone."
Read more at Science Daily
Plate tectonics may have been active on Earth since the very beginning
The paper, published in Earth and Planetary Science Letters, has important implications in the fields of geochemistry and geophysics. For example, a better understanding of plate tectonics could help predict whether planets beyond our solar system could be hospitable to life.
"Plate tectonics set up the conditions for life," said Nick Dygert, assistant professor of petrology and geochemistry in UT's Department of Earth and Planetary Sciences and coauthor of the study. "The more we know about ancient plate tectonics, the better we can understand how Earth got to be the way it is now."
For the research, Dygert and his team looked into the distribution of two very specific noble gas isotopes: Helium-3 and Neon-22. Noble gases are chemically inert, reacting with virtually no other elements.
Previous models have explained Earth's current Helium-3/Neon-22 ratio by arguing that a series of large-scale impacts (like the one that produced our moon) resulted in massive magma oceans, which degassed and incrementally increased the ratio each time.
However, Dygert believes the scenario is unlikely.
"While there is no conclusive evidence that this didn't happen," he said, "it could have only raised the Earth's Helium-3/Neon-22 ratio under very specific conditions."
Instead, Dygert and his team believe the Helium-3/Neon-22 ratio rose in a different way.
As Earth's crust is continuously formed, the ratio of helium to neon in the mantle beneath the crust increases. By calculating this ratio in the mantle beneath the crust, and considering how this process would affect the bulk Earth over long periods of time, a rough timeline of Earth's tectonic plate cycling can be established.
Read more at Science Daily
Did key building blocks for life come from deep space?
Comet 67P/Churyumov-Gerasimenko.
Until now, little was known about a key ingredient of those building blocks: phosphates. University of Hawaii at Manoa researchers, in collaboration with colleagues in France and Taiwan, provide compelling new evidence that this component of life was generated in outer space and delivered to Earth during its first billion years by meteorites or comets. The phosphorus compounds were then incorporated into biomolecules found in the cells of living beings on Earth.
The breakthrough research is outlined in "An Interstellar Synthesis of Phosphorus Oxoacids," authored by UH Manoa graduate student Andrew Turner, now assistant professor at the University of Pikeville, and UH Manoa chemistry Professor Ralf Kaiser in the September issue of Nature Communications.
According to the study, phosphates and diphosphoric acid are two major components essential for these building blocks in molecular biology. They are among the main constituents of chromosomes, the carriers of genetic information in which DNA is found. Together with phospholipids in cell membranes and adenosine triphosphate, which functions as an energy carrier in cells, they form the self-replicating material present in all living organisms.
In an ultra-high vacuum chamber cooled down to 5 K (-450°F) in the W.M. Keck Research Laboratory in Astrochemistry at UH Manoa, the Hawaii team replicated interstellar icy grains coated with carbon dioxide and water, which are ubiquitous in cold molecular clouds, and phosphine. When exposed to ionizing radiation in the form of high-energy electrons to simulate the cosmic rays in space, multiple phosphorus oxoacids like phosphoric acid and diphosphoric acid were synthesized via non-equilibrium reactions.
"On Earth, phosphine is lethal to living beings," said Turner, lead author. "But in the interstellar medium, an exotic phosphine chemistry can promote rare chemical reaction pathways to initiate the formation of biorelevant molecules such as oxoacids of phosphorus, which eventually might spark the molecular evolution of life as we know it."
Kaiser added, "The phosphorus oxoacids detected in our experiments by combination of sophisticated analytics involving lasers, coupled to mass spectrometers along with gas chromatographs, might have also been formed within the ices of comets such as 67P/Churyumov-Gerasimenko, which contains a phosphorus source believed to derive from phosphine." Kaiser says these techniques can also be used to detect trace amounts of explosives and drugs.
"Since comets contain at least partially the remnants of the material of the protoplanetary disk that formed our solar system, these compounds might be traced back to the interstellar medium wherever sufficient phosphine in interstellar ices is available," said Cornelia Meinert of the University of Nice (France).
Upon delivery to Earth by meteorites or comets, these phosphorus oxoacids might have been available for Earth's prebiotic phosphorus chemistry. Hence an understanding of the facile synthesis of these oxoacids is essential to untangle the origin of water-soluble prebiotic phosphorus compounds and how they might have been incorporated into organisms not only on Earth, but potentially in our universe as well.
Read more at Science Daily
Ledumahadi mafube: South Africa's new Jurassic giant
Ledumahadi mafube is the first of the giant sauropodomorphs of the Jurassic.
A team of international scientists, led by University of the Witwatersrand (Wits) palaeontologist Professor Jonah Choiniere, described the new species in the journal Current Biology today.
The dinosaur's name is Sesotho for "a giant thunderclap at dawn" (Sesotho is one of South Africa's 11 official languages and an indigenous language in the area where the dinosaur was found).
"The name reflects the great size of the animal as well as the fact that its lineage appeared at the origins of sauropod dinosaurs," said Choiniere. "It honours both the recent and ancient heritage of southern Africa."
Ledumahadi mafube is one of the closest relatives of sauropod dinosaurs. Sauropods, weighing up to 60 tonnes, include well-known species like Brontosaurus. All sauropods ate plants and stood on four legs, with a posture like modern elephants. Ledumahadi evolved its giant size independently from sauropods, and although it stood on four legs, its forelimbs would have been more crouched. This led the scientific team to consider Ledumahadi an evolutionary "experiment" with giant body size.
Ledumahadi's fossil tells a fascinating story not only of its individual life history, but also the geographic history of where it lived, and of the evolutionary history of sauropod dinosaurs.
"The first thing that struck me about this animal is the incredible robustness of the limb bones," says lead author, Dr Blair McPhee. "It was of similar size to the gigantic sauropod dinosaurs, but whereas the arms and legs of those animals are typically quite slender, Ledumahadi's are incredibly thick. To me this indicated that the path towards gigantism in sauropodomorphs was far from straightforward, and that the way that these animals solved the usual problems of life, such as eating and moving, was much more dynamic within the group than previously thought."
The research team developed a new method, using measurements from the "arms" and "legs" to show that Ledumahadi walked on all fours, like the later sauropod dinosaurs, but unlike many other members of its own group alive at its time such as Massospondylus. The team also showed that many earlier relatives of sauropods stood on all fours, that this body posture evolved more than once, and that it appeared earlier than scientists previously thought.
"Many giant dinosaurs walked on four legs but had ancestors that walked on two legs. Scientists want to know about this evolutionary change, but amazingly, no-one came up with a simple method to tell how each dinosaur walked, until now," says Dr Roger Benson.
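The team's actual method isn't spelled out in this summary, but the underlying idea, inferring stance from the relative robustness of the fore- and hindlimbs, can be sketched as a toy heuristic. All measurements, names, and the threshold below are hypothetical illustrations, not the published method:

```python
# Toy sketch: quadrupeds tend to have forelimbs nearly as robust as their
# hindlimbs, while bipeds have comparatively gracile forelimbs.
# The 0.7 threshold and the measurements are invented for illustration.

def likely_quadrupedal(forelimb_circumference_mm, hindlimb_circumference_mm,
                       threshold=0.7):
    """Return True if the forelimb is robust enough relative to the
    hindlimb to suggest a four-legged stance."""
    ratio = forelimb_circumference_mm / hindlimb_circumference_mm
    return ratio >= threshold

# Hypothetical measurements for two animals:
print(likely_quadrupedal(480, 560))  # robust forelimbs -> True
print(likely_quadrupedal(210, 520))  # gracile forelimbs -> False
```

The published approach is statistical and calibrated against living and fossil animals of known posture; this sketch only captures the ratio-based intuition.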
By analysing the fossil's bone tissue through osteohistological analysis, Dr Jennifer Botha-Brink from the South African National Museum in Bloemfontein established the animal's age.
"We can tell by looking at the fossilised bone microstructure that the animal grew rapidly to adulthood. Closely-spaced, annually deposited growth rings at the periphery show that the growth rate had decreased substantially by the time it died," says Botha-Brink. This indicates that the animal had reached adulthood.
"It was also interesting to see that the bone tissues display aspects of both basal sauropodomorphs and the more derived sauropods, showing that Ledumahadi represents a transitional stage between these two major groups of dinosaurs."
Ledumahadi lived in the area around Clarens in South Africa's Free State Province. This is currently a scenic mountainous area, but looked much different at that time, with a flat, semi-arid landscape and shallow, intermittently dry streambeds.
"We can tell from the properties of the sedimentary rock layers in which the bone fossils are preserved that 200 million years ago most of South Africa looked a lot more like the current region around Musina in the Limpopo Province of South Africa, or South Africa's central Karoo," says Dr Emese Bordy.
Ledumahadi is closely related to other gigantic dinosaurs from Argentina that lived at a similar time, which reinforces that the supercontinent of Pangaea was still assembled in the Early Jurassic. "It shows how easily dinosaurs could have walked from Johannesburg to Buenos Aires at that time," says Choiniere.
South Africa's Minister of Science and Technology Mmamoloko Kubayi-Ngubane says the discovery of this dinosaur underscores just how important South African palaeontology is to the world.
"Not only does our country hold the Cradle of Humankind, but we also have fossils that help us understand the rise of the gigantic dinosaurs. This is another example of South Africa taking the high road and making scientific breakthroughs of international significance on the basis of its geographic advantage, as it does in astronomy, marine and polar research, indigenous knowledge, and biodiversity," says Kubayi-Ngubane.
The research team behind Ledumahadi includes South African-based palaeoscientists, Dr Emese Bordy and Dr Jennifer Botha-Brink, from the University of Cape Town and the South African National Museum in Bloemfontein, respectively.
The project also had a strong international component with the collaboration of Professor Roger BJ Benson of Oxford University and Dr Blair McPhee, currently residing in Brazil.
Read more at Science Daily
Sep 27, 2018
Taller plants moving into Arctic because of climate change
Plants in the Arctic tundra are growing taller because of climate change, according to new research from a global collaboration led by the University of Edinburgh.
While the Arctic is usually thought of as a vast, desolate landscape of ice, it is in fact home to hundreds of species of low-lying shrubs, grasses and other plants that play a critical role in carbon cycling and energy balance.
Now, Arctic experts have discovered that the effects of climate change are behind an increase in plant height across the tundra over the past 30 years.
As well as the Arctic's native plants growing in stature, in the southern reaches of the Arctic taller species of plants are spreading across the tundra. Vernal sweetgrass, which is common in lowland Europe, has now moved into sites in Iceland and Sweden.
Dr Isla Myers-Smith of the School of Geosciences at the University of Edinburgh, and Dr Anne Bjorkman from the Senckenberg Biodiversity and Climate Research Centre (BiK-F) in Frankfurt, led the international team of 130 scientists in the Natural Environment Research Council (NERC)-funded project.
More than 60,000 data observations from hundreds of sites across the Arctic and alpine tundra were analysed to produce the findings, which were published in Nature today.
Dr Bjorkman said: "Rapid climate warming in the Arctic and alpine regions is driving changes in the structure and composition of plant communities, with important consequences for how this vast and sensitive ecosystem functions.
"Arctic regions have long been a focus for climate change research, as the permafrost lying under the northern latitudes contains 30 to 50 percent of the world's soil carbon.
"Taller plants trap more snow, which insulates the underlying soil and prevents it from freezing as quickly in winter.
"An increase in taller plants could speed up the thawing of this frozen carbon bank, and lead to an increase in the release of greenhouse gases.
"We found that the increase in height didn't happen in just a few sites, it was nearly everywhere across the tundra.
"If taller plants continue to increase at the current rate, the plant community height could increase by 20 to 60 percent by the end of the century."
Dr Myers-Smith, of the School of Geosciences at the University of Edinburgh, said: "Quantifying the link between environment and plant traits is critical to understanding the consequences of climate change, but such research has rarely extended into the Northern hemisphere, home to the planet's coldest tundra ecosystems.
"This is the first time that a biome-scale study has been carried out to get to the root of the critical role that plants play in this rapidly-warming part of the planet."
The team now has a comprehensive data set on Arctic tundra plants, collected from sites in Alaska, Canada, Iceland, Scandinavia and Russia. Alpine sites in the European Alps and Colorado Rockies were also included in the study.
The team assessed relationships between temperature, soil moisture and key traits that represent plants' form and function. Plant height and leaf area were analysed and tracked, along with specific leaf area, leaf nitrogen content and leaf dry matter content, as well as woodiness and evergreenness.
Surprisingly, only height was found to increase strongly over time. Plant traits were strongly influenced by moisture levels in addition to temperature.
Dr Myers-Smith said: "While most climate change models and research have focused on increasing temperatures, our research has shown that soil moisture can play a much greater role in changing plant traits than we previously thought.
"We need to understand more about soil moisture in the Arctic. Precipitation is likely to increase in the region, but that's just one factor that affects soil moisture levels."
Helen Beadman, Head of Polar, Climate and Weather at NERC, said: "This research is a vital step in improving our understanding of how Arctic and alpine vegetation is responding to climate change.
Read more at Science Daily
More persistent weather patterns in US linked to Arctic warming
Persistent weather conditions can lead to weather extremes such as drought, heat waves, prolonged cold and storms that can cost millions of dollars in damage and disrupt societies and ecosystems, the study says.
Scientists at Rutgers University-New Brunswick and the University of Wisconsin-Madison examined daily precipitation data at 17 stations across the U.S., along with large upper-level circulation patterns over the eastern Pacific Ocean and North America.
Overall, dry and wet spells lasting four or more days occurred more frequently in recent decades, according to the study published online today in Geophysical Research Letters. The frequency of persistent large-scale circulation patterns over North America also increased when the Arctic was abnormally warm.
In recent decades, the Arctic has been warming at least twice as fast as the global average temperature, the study notes. The persistence of warm Arctic patterns has also increased, suggesting that long-duration weather conditions will occur more often as rapid Arctic warming continues, said lead author Jennifer Francis, a research professor in Rutgers' Department of Marine and Coastal Sciences.
"While we cannot say for sure that Arctic warming is the cause, we found that large-scale patterns with Arctic warming are becoming more frequent, and the frequency of long-duration weather conditions increases most for those patterns," said Francis, who works in the School of Environmental and Biological Sciences.
The results suggest that as the Arctic continues to warm and melt, it's likely that long-duration events will continue to occur more often, meaning that weather patterns -- heat waves, droughts, cold spells and stormy conditions -- will likely become more persistent, she said.
"When these conditions last a long time, they can become extreme events, as we've seen so often in recent years," she said. "Knowing which types of events will occur more often in which regions and under what background conditions -- such as certain ocean temperature patterns -- will help decision-makers plan for the future in terms of infrastructure improvements, agricultural practices, emergency preparedness and managed retreat from hazardous areas."
Read more at Science Daily
Fossil evidence of large flowering trees in N. America 15 million years earlier
This is an illustration of the research finds by Sae Bom Ra, an Adelphi University scientific illustration major.
"These discoveries add much more detail to our picture of the landscape during the Turonian period than we had previously," says Michael D'Emic, assistant professor of biology at Adelphi, who organized the study. "Since Darwin, the evolution of flowering plants has been a topic of debate for paleontologists because of their cryptic fossil record. Our paper shows that even today it is possible for a single fossil specimen to change a lot about what we know about the early evolution of the group.
"Understanding the past is the key to managing the future," D'Emic added. "Learning how environments evolved and changed in the past teaches us how to better prepare for future environmental change."
Aside from the large petrified log, the team reports fossilized foliage from ferns, conifers and angiosperms, which confirm that there was forest or woodland vegetation 90 million years ago in the area, covering a large delta extending into the sea. The team also reports the first turtle and crocodile remains from this geologic layer, as well as part of the pelvis of a duck-billed dinosaur; previously, the only known vertebrate remains found were shark teeth, two short dinosaur trackways, and a fragmentary pterosaur.
"Until now most of what we knew about plants from the Ferron Sandstone came from fossil pollen and spores," says Nathan Jud, co-author and assistant professor of biology at William Jewell College. "The discovery of fossil wood and leaves allows us to develop a more complete picture of the flora."
From Science Daily
Unlocking the secret of how the brain encodes speech
Brain encoding, artist's concept.
Scientists want to help completely paralyzed, or "locked-in," individuals communicate more intuitively by developing a brain machine interface to decode the commands the brain is sending to the tongue, palate, lips and larynx (the articulators).
The person would simply try to say words, and the brain machine interface (BMI) would translate those attempts into speech.
New research from Northwestern Medicine and Weinberg College of Arts and Sciences has moved science closer to creating speech-brain machine interfaces by unlocking new information about how the brain encodes speech.
Scientists have discovered the brain controls speech production in a similar manner to how it controls the production of arm and hand movements. To do this, researchers recorded signals from two parts of the brain and decoded what these signals represented. Scientists found the brain represents both the goals of what we are trying to say (speech sounds like "pa" and "ba") and the individual movements that we use to achieve those goals (how we move our lips, palate, tongue and larynx). The different representations occur in two different parts of the brain.
"This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people that are locked-in speak again," said lead author Dr. Marc Slutzky, associate professor of neurology and of physiology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurologist.
The study will be published Sept. 26 in the Journal of Neuroscience.
The discovery could also potentially help people with other speech disorders, such as apraxia of speech, which is seen in children as well as after stroke in adults. In speech apraxia, an individual has difficulty translating speech messages from the brain into spoken language.
How words are translated from your brain into speech
Speech is composed of individual sounds, called phonemes, that are produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists didn't know exactly how these movements, called articulatory gestures, are planned by the brain. In particular, it was not fully understood how the cerebral cortex controls speech production, and no evidence of gesture representation in the brain had been shown.
"We hypothesized speech motor areas of the brain would have a similar organization to arm motor areas of the brain," Slutzky said. "The precentral cortex would represent movements (gestures) of the lips, tongue, palate and larynx, and the higher level cortical areas would represent the phonemes to a greater extent."
That's exactly what they found.
"We studied two parts of the brain that help to produce speech," Slutzky said. "The precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher level speech area, represented both phonemes and gestures."
Chatting up patients in brain surgery to decode their brain signals
Northwestern scientists recorded brain signals from the cortical surface using electrodes placed in patients undergoing brain surgery to remove brain tumors. The patients had to be awake during their surgery, so researchers asked them to read words from a screen.
After the surgery, scientists marked the times when the patients produced phonemes and gestures. Then they used the recorded brain signals from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy. The brain signals in the precentral cortex were more accurate at decoding gestures than phonemes, while those in the inferior frontal cortex were equally good at decoding both phonemes and gestures. This information helped support linguistic models of speech production. It will also help guide engineers in designing brain machine interfaces to decode speech from these brain areas.
The next step for the research is to develop an algorithm for brain machine interfaces that would not only decode gestures but also combine those decoded gestures to form words.
Read more at Science Daily
The notorious luminous blue variable star
Its appearance tends to fluctuate radically over time, and that has piqued the curiosity of astrophysicists who wonder what processes may be at play.
"The luminous blue variable is a supermassive, unstable star," said Yan-Fei Jiang, a researcher at UC Santa Barbara's Kavli Institute for Theoretical Physics (KITP). Unlike our own comparatively smaller and steady-burning Sun, he explained, LBVs have been shown to burn bright and hot, then cool and fade so as to be almost indistinguishable from other stars, only to flare up again. Because of these changes, Jiang added, conventional one-dimensional models have been less than adequate at explaining the special physics of these stars.
However, thanks to special, data-intensive supercomputer modeling conducted at Argonne National Laboratory's Argonne Leadership Computing Facility (ALCF) for its INCITE program, Jiang and colleagues -- Matteo Cantiello of the Flatiron Institute, Lars Bildsten of KITP, Eliot Quataert at UC Berkeley, Omer Blaes of UCSB, and James Stone of Princeton -- have now developed a three-dimensional simulation. It not only shows the stages of an LBV as it becomes progressively more luminous, then erupts, but also depicts the physical forces that contribute to that behavior. The simulation was developed also with computational resources from NASA and the National Energy Research Scientific Computing Center.
The researchers' paper, "Luminous Blue Variable Outbursts from the Variations of Helium Opacity," is published in the journal Nature.
Of particular interest to the researchers are the stars' mass loss rates, which are significant compared to those of less massive stars. Understanding how these stellar bodies lose mass, Jiang said, could lead to greater insights into just how they end their lives as bright supernovae.
Among the physical processes never before seen with one-dimensional models are the supersonic turbulent motions -- the ripples and wrinkles radiating from the star's deep envelope as it prepares for a series of outbursts.
"These stars can have a surface temperature of about 9,000 degrees Kelvin during these outbursts," Jiang said. That translates to roughly 15,740 degrees Fahrenheit, or 8,727 degrees Celsius.
Also seen for the first time in three dimensions is the tremendous expansion of the star immediately before and during the outbursts -- phenomena not captured with previous one-dimensional models. The three-dimensional simulations show that it is the opacity of the helium that sets the observed temperature during the outburst.
According to Jiang, in a one-dimensional stellar evolution code, helium opacity -- the extent to which helium atoms prevent photons (light) from escaping -- is not very important in the outer envelope because the gas density at the cooler outer envelope is far too low.
The paper's co-author and KITP Director Lars Bildsten explained that the three-dimensional model demonstrates that "the region deep within the star has such vigorous convection that the layers above that location get pushed out to much larger radii, allowing the material at the location where helium recombines to be much denser." The radiation escaping from the star's hot core pushes on the cooler, opaque outer region to trigger dramatic outbursts during which the star loses large amounts of mass. Hence, convection -- the same phenomenon responsible for thundercloud formation -- causes not only variations in the star's radius but also in the amount of mass leaving in the form of a stellar wind.
Additional work is underway on more simulations, according to Jiang, including models of the same stars but with different parameters such as metallicity, rotation and magnetic fields.
Read more at Science Daily
Sep 26, 2018
Psychologists define the 'dark core of personality'
Egoism, Machiavellianism, narcissism, psychopathy, sadism and spitefulness are often grouped together. Those who exhibit one of these traits are more likely to exhibit others from this list.
Both world history and everyday life are full of examples of people acting ruthlessly, maliciously, or selfishly. In psychology as well as in everyday language, we have diverse names for the various dark tendencies humans may have, most prominently psychopathy (lack of empathy), narcissism (excessive self-absorption), and Machiavellianism (the belief that the ends justify the means), the so-called 'dark triad', along with many others such as egoism, sadism, or spitefulness.
Although at first glance there appear to be noteworthy differences between these traits -- and it may seem more 'acceptable' to be an egoist than a psychopath -- new research shows that all dark aspects of human personality are very closely linked and are based on the same tendency. That is, most dark traits can be understood as flavoured manifestations of a single common underlying disposition: The dark core of personality. In practice, this implies that if you have a tendency to show one of these dark personality traits, you are also more likely to have a strong tendency to display one or more of the others.
As the new research reveals, the common denominator of all dark traits, the D-factor, can be defined as the general tendency to maximize one's individual utility -- disregarding, accepting, or malevolently provoking disutility for others -- accompanied by beliefs that serve as justifications.
In other words, all dark traits can be traced back to the general tendency of placing one's own goals and interests over those of others, even to the extent of taking pleasure in hurting others -- along with a host of beliefs that serve as justifications and thus prevent feelings of guilt, shame, or the like. The research shows that dark traits in general can be understood as instances of this common core -- although they may differ in which aspects are predominant (e.g., the justifications aspect is very strong in narcissism, whereas the aspect of malevolently provoking disutility is the main feature of sadism).
Ingo Zettler, Professor of Psychology at the University of Copenhagen, and two German colleagues, Morten Moshagen from Ulm University and Benjamin E. Hilbig from the University of Koblenz-Landau, have demonstrated how this common denominator is present in nine of the most commonly studied dark personality traits:
- Egoism: an excessive preoccupation with one's own advantage at the expense of others and the community
- Machiavellianism: a manipulative, callous attitude and a belief that the ends justify the means
- Moral disengagement: a cognitive processing style that allows one to behave unethically without feeling distress
- Narcissism: excessive self-absorption, a sense of superiority, and an extreme need for attention from others
- Psychological entitlement: a recurring belief that one is better than others and deserves better treatment
- Psychopathy: lack of empathy and self-control, combined with impulsive behaviour
- Sadism: a desire to inflict mental or physical harm on others for one's own pleasure or to benefit oneself
- Self-interest: a desire to further and highlight one's own social and financial status
- Spitefulness: destructiveness and willingness to cause harm to others, even if one harms oneself in the process
In a series of studies with more than 2,500 people, the researchers asked to what extent people agreed or disagreed with statements such as "It is hard to get ahead without cutting corners here and there," "It is sometimes worth a little suffering on my part to see others receive the punishment they deserve," or "I know that I am special because everyone keeps telling me so." In addition, they studied other self-reported tendencies and behaviors such as aggression or impulsivity, and objective measures of selfish and unethical behaviour.
The researchers' mapping of the common D-factor, which has just been published in the academic journal Psychological Review, can be compared to how Charles Spearman showed about 100 years ago that people who score highly in one type of intelligence test typically also score highly in other types of intelligence tests, because there is something like a general factor of intelligence.
"In the same way, the dark aspects of human personality also have a common denominator, which means that -- similar to intelligence -- one can say that they are all an expression of the same dispositional tendency," Ingo Zettler explains.
"For example, in a given person, the D-factor can mostly manifest itself as narcissism, psychopathy or one of the other dark traits, or a combination of these. But with our mapping of the common denominator of the various dark personality traits, one can simply ascertain that the person has a high D-factor. This is because the D-factor indicates how likely a person is to engage in behaviour associated with one or more of these dark traits," he says. In practice, this means that an individual who exhibits a particular malevolent behaviour (such as likes to humiliate others) will have a higher likelihood to engage in other malevolent activities, too (such as cheating, lying, or stealing).
The nine dark traits are by no means the same, and each can result in specific kinds of behaviour. However, at their core, the dark traits typically have far more in common than what sets them apart. And knowledge about this 'dark core' can play a crucial role for researchers or therapists who work with people with specific dark personality traits, as it is this D-factor that affects different types of reckless and malicious human behaviour and actions, often reported in the media.
Read more at Science Daily
Scientists investigate how DEET confuses countless critters
When surrounded by DEET, C. elegans fail to wiggle toward their favorite fragrances.
It's not that DEET doesn't keep away critters -- it verifiably does. However, Leslie B. Vosshall, Rockefeller's Robin Chemers Neustein Professor, has shown that DEET acts not by repelling bugs, but rather by confusing them, messing with neurons that help the animals smell their surroundings. Moreover, the effects of DEET are not limited to insects: spiders, ticks, and many other pests also act strangely in the chemical's presence.
In this sense, DEET may be less of an insect repellent and more of an invertebrate confusant. The term doesn't exactly roll off the tongue, but new research from the Vosshall lab supports this rebranding of the chemical.
In a recent paper, published in Nature, Vosshall and former graduate fellow Emily Dennis show that, like insects, the nematode C. elegans succumbs to confusion when DEET is around. The team also describes the genetic and cellular mechanisms underlying this response, shedding light on how a single chemical might confound the senses of vastly different species.
All in the DEET-ails
First developed in the 1940s, DEET can be found in most bug sprays used today. Research has shown that, in flies and mosquitoes, the chemical works by interacting with odor receptors that are unique to insects. This research, however, cannot explain how DEET exerts its effect on non-insect species.
Seeking an explanation, Dennis and Vosshall teamed up with Cori Bargmann, Rockefeller's Torsten N. Wiesel Professor, to examine whether and how DEET changes the behavior of the roundworm C. elegans, a relatively simple animal with an elaborate sense of smell. When the researchers presented the tiny worms with samples of DEET alone, the animals didn't go out of their way to avoid the chemical, indicating that DEET doesn't simply repel every organism that crosses its path.
The scientists then mixed small amounts of DEET into agar, the gel-like substance that C. elegans crawl on in Petri dishes. The presence of DEET limited the worms' movement toward isoamyl alcohol, a chemical that usually attracts them; it also reduced their avoidance of 2-nonanone, a compound that they typically dodge. Still, the worms reacted normally to some other chemicals. These findings suggest that DEET can interfere with responses to both "good" and "bad" smells, but that it does not entirely shut down olfaction.
The researchers also found that the worms' DEET sensitivity depends on a gene called str-217, which is expressed in neurons called ADL cells. When the researchers artificially activated these neurons, the worms paused in place -- a behavior also observed among C. elegans navigating DEET-infused agar. Together, these results indicate that the chemical works, in part, by turning on neurons that induce pausing.
"Somehow activating ADL puts the worms into a frame of mind where they're more introspective, they're pausing more, they're not paying as much attention to odors," says Vosshall. "But if you take away the right gene or neuron, this spell is broken."
Indeed, the researchers showed that worms lacking either str-217 or ADL neurons are less affected by DEET. They conclude that str-217 likely codes for a DEET receptor, and that ADL cells play an important role in mediating response to the chemical.
A special chemical
The Vosshall lab previously demonstrated that DEET keeps mosquitoes away by interacting with odor receptors in a way that confuses the animals' sense of smell. This latest study shows that DEET causes similar confusion in C. elegans, but through entirely different mechanisms.
"We went into this study thinking perhaps we'd find some magical conserved DEET receptor common to all species," says Dennis. "But we found that, in C. elegans, a completely unique gene is required for DEET response."
Though the study did not lead to the discovery of a magical receptor, it nonetheless provides insight into the chemical's effectiveness across highly diverse species.
"The one common theme in all of these organisms is that DEET is doing something to affect odor perception -- it's like sensory system sabotage," says Vosshall.
Read more at Science Daily
Powerful jet discovered coming from 'wrong' kind of star
Artist's conception shows magnetic field lines around neutron star, accretion disk of material orbiting the neutron star, and jets of material propelled outward.
Neutron stars are superdense objects, the remnants of massive stars that exploded as supernovas. When in binary pairs with "normal" stars, their powerful gravity can pull material away from their companions. That material forms a disk, called an accretion disk, rotating around the neutron star. Jets of material are propelled at nearly the speed of light, perpendicular to the disk.
"We've seen jets coming from all types of neutron stars that are pulling material from their companions, with a single exception. Never before have we seen a jet coming from a neutron star with a very strong magnetic field," said Jakob van den Eijnden of the University of Amsterdam. "That led to a theory that strong magnetic fields prevent jets from forming," he added.
The new discovery contradicts that theory.
The scientists studied an object called Swift J0243.6+6124 (Sw J0243), discovered on October 3, 2017, by NASA's orbiting Neil Gehrels Swift Observatory, when the object emitted a burst of X-rays. The object is a slowly spinning neutron star pulling material from a companion star that is likely significantly more massive than the Sun. The VLA observations began a week after the Swift discovery and continued until January 2018.
Both the fact that the object's emission at X-ray and radio wavelengths weakened together over time and the characteristics of the radio emission itself convinced the astronomers that they were seeing radio waves produced by a jet.
"This combination is what we see in other jet-producing systems. Alternative mechanisms just don't explain it," van den Eijnden said.
Common theories for jet formation in systems like Sw J0243 say the jets are launched by magnetic field lines anchored in the inner parts of the accretion disks. In this scenario, if the neutron star has a very strong magnetic field, that field is overpowering and prevents the jet from forming.
"Our clear discovery of a jet in Sw J0243 disproves that longstanding idea," van den Eijnden said.
Alternatively, the scientists suggest that Sw J0243's jet-launching region of the accretion disk could be much farther out than in other types of systems, where the star's magnetic field is weaker. Another idea, they said, is that the jets may be powered by the neutron star's rotation, instead of being launched by magnetic field lines in the inner accretion disk.
"Interestingly, the rotation-powered idea predicts that the jet will be significantly weaker from more slowly rotating neutron stars, which is exactly what we see in Sw J0243," Nathalie Degenaar, also of the University of Amsterdam, said.
The new discovery also implies that Sw J0243 may represent a large group of objects whose radio emission has been too weak to detect until new capabilities provided by the VLA's major upgrade, completed in 2012, were available. If more such objects are found, the scientists said, they could test the idea that jets are produced by the neutron star's spin.
The astronomers added that a jet from Sw J0243 may mean that another category of objects, called ultra-luminous X-ray pulsars, also highly magnetized, might produce jets.
"This discovery not only means we have to revise our ideas about jets from such systems, but also opens up exciting new areas of research," Degenaar said.
Read more at Science Daily
Hyper Suprime-Cam survey maps dark matter in the universe
The present-day universe is a pretty lumpy place. As the universe has expanded over the last 14 billion years or so, galaxies and dark matter have been increasingly drawn together by gravity, creating a clumpy landscape with large aggregates of matter separated by voids where there is little or no matter.
The gravity that pulls matter together also affects how we observe astronomical objects. As light travels from distant galaxies toward Earth, the gravitational pull of the other matter in its path, including dark matter, bends the light. As a result, the images of galaxies that telescopes see are slightly distorted, a phenomenon called weak gravitational lensing. Within those distortions is a great amount of information that researchers can mine to better understand the distribution of matter in the universe, and it provides clues to the nature of dark energy.
The HSC map, created from data gathered by Japan's Subaru telescope located in Hawaii, allowed researchers to measure the gravitational distortion in images of about 10 million galaxies.
The Subaru telescope allowed them to see galaxies farther back in time than other similar surveys. For example, the Dark Energy Survey analyzes a much larger area of the sky at a similar level of precision, but only surveys the nearby universe. HSC takes a narrower but deeper view, which allowed researchers to see fainter galaxies and make a sharper map of dark matter distribution.
The research team compared their map with the fluctuations predicted by the European Space Agency Planck satellite's observations of the cosmic microwave background radiation -- radiation from the earliest days of the universe. The HSC measurements were slightly lower than, but still statistically consistent with Planck's. The fact that HSC and other weak lensing surveys all find slightly lower results than Planck raises the tantalizing question of whether dark energy truly behaves like Einstein's cosmological constant.
"Our map gives us a better picture of how much dark energy there is and tells us a little more about its properties and how it's making the expansion of the universe accelerate," Mandelbaum said. "The HSC is a great complement to other surveys. Combining data across projects will be a powerful tool as we try to uncover more and more about the nature of dark matter and dark energy."
Measuring the distortions caused by weak gravitational lensing isn't easy. The effect is quite small and distortions in galaxy shapes can also be caused by the atmosphere, the telescope and the detector. To get precise, accurate results, researchers need to know that they are only measuring effects from weak lensing.
Mandelbaum, associate professor of physics and member of the McWilliams Center for Cosmology at Carnegie Mellon, is an expert at controlling for these outside distortions. She and her team created a detailed image simulation of the HSC survey data based on images from the Hubble Space Telescope. From these simulations, they were able to apply corrections to the galaxy shapes to remove the shape distortions caused by effects other than lensing.
Read more at Science Daily
Sep 25, 2018
New species of dazzling, neon-colored fish
Tosanoides aphrodite inhabits rocky crevices of twilight zone reefs.
"This is one of the most beautiful fishes I've ever seen," says Dr. Luiz Rocha, the Academy's Curator of Fishes and co-leader of the Hope for Reefs initiative. "It was so enchanting it made us ignore everything around it."
The sixgill shark stretched nearly ten feet long and cruised overhead as Rocha and post-doctoral fellow Dr. Hudson Pinheiro delicately collected the fish for further study back at the Academy. Behind the camera, the team's diving officer Mauritius Bell enthusiastically announced the behemoth visitor to the duo, but to no avail. Aptly named, Tosanoides aphrodite enchanted its discoverers much like Aphrodite, Greek goddess of love and beauty, enchanted the ancient Greek gods.
"Fishes from the twilight zone tend to be pink or reddish in color," says Pinheiro. "Red light doesn't penetrate to these dark depths, rendering the fishes invisible unless illuminated by a light like the one we carry while diving."
Back at the Academy, laboratory and collections manager Claudia Rocha helped the diving duo describe the new species: Males are outfitted with alternating pink and yellow stripes while females sport a solid, blood-orange color. Using a microscope, the team counted fins and measured spine length; DNA analysis revealed the new species is the first Atlantic-dwelling member of its genus.
The new denizen of the deep is a remarkable testament to the vast ocean habitats that still remain unexplored. Rocha and Pinheiro are part of a deep-diving research team that ventures to twilight zone reefs -- mysterious coral habitats stretching across a narrow band of ocean 200 to 500 feet beneath the surface. In these deep reefs, animals live in partial darkness -- beyond recreational diving limits, yet above the deep trenches patrolled by submarines and ROVs. As part of its Hope for Reefs initiative, the Academy team and their collaborators are exploring this unknown frontier with the help of high-tech equipment like closed-circuit rebreathers that allow scientists to extend their research time underwater.
Nearly 600 miles off the coast of Brazil, St. Paul's Rocks is so remote that the diving team had to use the research vessel M/V Alucia as their home base to explore the archipelago. The rocky outcroppings are extensions of the Mid-Atlantic Ridge -- an active tectonic plate boundary -- puncturing the ocean's surface. Given the region's unique geology and isolated location, many of the species that live there are found nowhere else on Earth. Through their research, the Hope for Reefs team is finding that twilight zone habitats also host many location-specific species.
In a recent landmark paper, the team found that twilight zone reefs are unique ecosystems bursting with life and are just as vulnerable to climate change threats as their shallow counterparts. Their findings upended the long-standing assumption that species might migrate between habitats to avoid human-related stressors. As documented in the footage from this new fish's discovery, a piece of fishing line can be seen streaming behind the sixgill shark -- evidence that human impacts extend to depth too.
Read more at Science Daily
Termites: Advanced animal society can thrive without males
Cooperative colony foundation by multiple queens of an asexual female population of Glyptotermes nakajimai.
The findings provide new evidence that males aren't required to maintain some advanced animal populations. They add momentum to questions about the impact and function of males in animal societies.
It is well known that many hymenopteran insect species -- which include bees and ants -- are essentially all-female societies. But termites are from a different insect order and typically contain male and female reproductives and workers.
Population analysis by a team of researchers -- including Professor Nathan Lo and postdoctoral researcher Dr Toshihisa Yashiro from the University's School of Life and Environmental Sciences -- has for the first time identified termite colonies that completely lack males.
Findings published in BMC Biology today showed that six of 10 termite populations scrutinized by the researchers in Japan were composed entirely of asexual females. Aside from the lack of males, the colonies' queens contained no sperm in their sperm storage organs and their eggs remained unfertilised. Furthermore, there was no significant difference between the hatching rate of these unfertilised eggs and that of fertilised eggs from mixed-sex termite populations.
All the populations studied were of the Glyptotermes nakajimai species based in Japan. This species typically inhabits forest areas and is not considered a pest.
"These results demonstrate males are not essential for the maintenance of animal societies in which they previously played an active social role," said Professor Lo.
The occasional development of unfertilised eggs in mixed-sex colonies suggests asexual female populations may have evolved from their mixed-sex counterparts.
Professor Lo says asexual reproduction could allow termites to successfully adapt to a range of new environments.
"All else being equal, asexual populations grow at twice the rate of sexual populations because only females are required to reproduce. This increased growth rate of colonies makes it easier for populations to entrench themselves in new environments."
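The twofold growth advantage Professor Lo describes follows from simple arithmetic: if every female produces the same number of offspring, an all-female colony doubles its reproductive output relative to a half-male, half-female one, and that gap compounds every generation. The toy calculation below is our own illustrative sketch of this "twofold cost of sex" argument, not a model from the study; the starting population, fecundity, and generation count are arbitrary.

```python
def grow(population, female_fraction, offspring_per_female, generations):
    """Return population size after geometric growth, where only the
    female fraction reproduces each generation."""
    for _ in range(generations):
        population = population * female_fraction * offspring_per_female
    return population

# Same per-female fecundity; only the sex ratio differs.
asexual = grow(100, 1.0, 4, 3)  # all-female colony: every individual reproduces
sexual = grow(100, 0.5, 4, 3)   # mixed-sex colony: only half reproduce

print(asexual, sexual)          # 6400.0 800.0
print(asexual / sexual)         # 8.0, i.e. 2**3: twice the rate, compounded
```

After three generations the asexual colony is eight times larger, because a per-generation factor of two compounds to 2^g over g generations, which is why Professor Lo notes such colonies can entrench themselves in new environments more easily.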
While it is possible that some of the 276 termite species in Australia -- among the most primitive and ecologically diverse in the world -- reproduce asexually, further research is required to determine this, Dr Yashiro said.
Read more at Science Daily
Genome duplication drives evolution of species
The plant species Arabidopsis kamchatica, which belongs to the rockcress genus, originated from the combination of two species.
Evolutionary biological theory confirmed by experiment
An international team of researchers headed by Timothy Paape and Kentaro Shimizu from the Department of Evolutionary Biology and Environmental Studies of the University of Zurich (UZH) has now been able to provide the experimental confirmation of this theory. To do so, the scientists from Switzerland and Japan used the plant species Arabidopsis kamchatica, which is part of the rockcress genus. They sequenced the genomes of 25 different individuals of the polyploid species from various regions of the world, as well as 18 different individuals of its parental species, in order to study its natural genetic diversity.
Genomes sequenced thanks to latest technologies
Arabidopsis kamchatica arose through the natural hybridization of the two parental species A. halleri and A. lyrata between 65,000 and 145,000 years ago. With 450 million base pairs, its genome is somewhat small for a polyploid plant, but still very complex. Using state-of-the-art sequencing methods and technology as well as bioinformatics tools, the researchers were able to determine the genetic sequence of the plant individuals.
Advantageous genetic mutations in addition to spare copies
Due to the large amount of genetic information, A. kamchatica is better equipped to adapt to new environmental conditions. "With these results, we have demonstrated on a molecular-genetic level that genome duplications can positively affect the adaptability of organisms," says plant scientist Timothy Paape. The multiple gene copies enable the plant to acquire advantageous mutations while keeping an original copy of important genes.
Astonishingly wide distribution
The usefulness of the double genome for A. kamchatica can be seen in its wider distribution -- at both low and high altitude -- compared with its parental plants. Its habitat ranges from Taiwan, Japan and the Russian Far East to Alaska and the Pacific Northwest region of the United States. "Knowing the genomic and evolutionary context also helps us understand how genetic diversity allows plants to adapt to changing environmental conditions," says Kentaro Shimizu. The recently published research was supported by the University Research Priority Program Evolution in Action: From Genomes to Ecosystems of the University of Zurich.
From Science Daily
Liquid metal discovery to make toxic water safe and drinkable
An inexpensive new gadget makes contaminated water drinkable in minutes.
Recent UNSW SHARP hire Professor Kourosh Kalantar-zadeh and his former colleagues at RMIT showed that nano-filters made of aluminium oxide could be cheaply produced using virtually no energy from a fixed amount of liquid metal gallium.
In a paper published in Advanced Functional Materials, lead author Dr Ali Zavabeti (RMIT) and Professor Kalantar-zadeh explained that when a chunk of aluminium is added to the core of liquid gallium at room temperature, layers of aluminium oxide are quickly produced at the surface of the gallium.
The authors discovered that these aluminium oxide nano-sheets were highly porous and went on to prove they were suitable for filtering both heavy metal ions and oil contamination at unprecedented, ultra-fast rates.
Professor Kalantar-zadeh, who was recently awarded an ARC Australian Laureate Fellowship soon after joining UNSW's School of Chemical Engineering, said that low cost and portable filters produced by this new liquid metal based manufacturing process could be used by people without access to clean drinking water to remove substances like lead and other toxic metals in a matter of minutes.
"Because it's super porous, water passes through very rapidly," Professor Kalantar-zadeh said.
"Lead and other heavy metals have a very high affinity to aluminium oxide. As the water passes through billions of layers, each one of these lead ions gets attracted to one of these aluminium oxide sheets.
"But at the same time, it's very safe because with repeated use, the water flow cannot detach the heavy metal ions from the aluminium oxide."
Professor Kalantar-zadeh believes the technology could be put to good use in Africa and Asia in places where heavy metal ions in the water are at levels well beyond safe human consumption. It is estimated that 790 million people, or one in 10 of the Earth's population, do not have access to clean water.
"If you've got bad quality water, you just take a gadget with one of these filters with you," he said.
"You pour the contaminated water in the top of a flask with the aluminium oxide filter. Wait two minutes and the water that passes through the filter is now very clean water, completely drinkable.
"And the good thing is, this filter is cheap."
There are portable filtration products available that do remove heavy metals from water, but they are comparatively expensive, often costing more than $100.
By contrast, aluminium oxide filters produced from liquid gallium could be produced for as little as 10 cents, making them attractive to prospective manufacturers.
"Up until now, to produce aluminium oxide, you need to process aluminium at above 1000 degrees or use other energy intensive processes," Professor Kalantar-zadeh said.
"It would normally consume so much energy to make anything like this filter, making it hugely expensive.
"Now we're talking about something you can do even under the sun in summer at 35 degrees."
While aluminium is a plentiful and cheap metal, gallium is relatively expensive. But what makes gallium the hero in the process is the fact that it remains pure and unchanged after each production of aluminium oxide.
"You just add aluminium to the gallium and out comes aluminium oxide when its surface is exposed to water. You can use gallium again and again. Gallium never participates in the reaction," Professor Kalantar-zadeh said.
Professor Kalantar-zadeh said the manufacturing process is so cheap and requires so little energy that these filters could even be made in a kitchen.
"We are publishing this concept and releasing it to the public domain, so people around the world can use the idea for free and implement it for enhancing the quality of their lives," he said.
"This is all about a new paradigm. We haven't even begun to explore how we can use liquid metals as a base for manufacturing things that are cheap, green and safe for humans."
Read more at Science Daily
Infectious bacteria hibernate to evade antibiotics
Rendering of E. coli bacteria.
Almost all pathogenic bacteria develop a small number of antibiotic-tolerant variants. This means that a significant fraction of bacteria survive courses of antibiotics.
While it is no secret that pathogenic bacteria are able to develop antibiotic resistant variants, a less well-appreciated fact is that a small number of bacteria, including some of nature's nastiest pathogens, can resist antibiotics and escape antibiotic treatments without relying on variants.
How's that? Researchers at the University of Copenhagen now have an answer. They have found examples of a small portion of pathogenic bacteria hiding out in a dormant, hibernation-like state, until the danger posed to them by antibiotics passes. When safe, they awaken and resume their regular functions.
"We studied E. coli bacteria from urinary tract infections that had been treated with antibiotics and were supposedly under control. In time, the bacteria re-awoke and began to spread once again," explains Professor Kenn Gerdes of the University of Copenhagen's Department of Biology.
The study, led by Professor Gerdes of UCPH, and Boris Macek of the University of Tübingen, has just been published in the latest edition of the journal Science Signaling.
The bacterium's stop growth mechanism
Antibiotics usually target a bacterial cell's ability to grow, which means that a hibernating bacterium is exempt from attack.
"A bacterium in hibernation is not resistant. It is temporarily tolerant because it stops growing, which allows it to survive the effects of an antibiotic," says Professor Gerdes.
Genetically, hibernating bacteria share the same characteristics as other bacteria in a given population, an E. coli population for example. So, for now, there are no clear rules as to why certain bacteria survive antibiotics by going dormant while others do not.
The researchers used a new method to study what happens in the disease-causing cells that go dormant and hide in the body.
Enzyme catalyzes hibernation
The researchers found an enzyme in dormant bacteria that is responsible for catalyzing hibernation, which allows the bacteria to avoid being attacked.
"The discovery of this enzyme is a good foundation for the future development of a substance capable of combatting dormant bacteria cells," says Professor Gerdes.
The road ahead will not be easy and will require many years of hard work, expertise and research funding to develop new antibiotics. For Gerdes, it is obvious that Denmark ought to play a leading role in this area of research.
Read more at Science Daily
Sep 24, 2018
Technology and therapy help individuals with chronic spinal cord injuries take steps
Participant Kelly Thomas with trainer Katie Vogt.
This ground-breaking progress is the newest development in a string of outcomes at UofL, all pointing to the potential of technology in improving quality of life -- and even recovery -- following spinal cord injury. This latest study builds on initial research published in The Lancet in 2011 that documented the success of the first epidural stimulation participant, Rob Summers, who recovered a number of motor functions as a result of the intervention. Three years later, a study published in the medical journal Brain discussed how epidural stimulation of the spinal cord allowed Summers and three other young men who had been paralyzed for years to move their legs. Later research from UofL demonstrated this technology improved blood pressure regulation.
"This research demonstrates that some brain-to-spine connectivity may be restored years after a spinal cord injury as these participants living with motor complete paralysis were able to walk, stand, regain trunk mobility and recover a number of motor functions without physical assistance when using the epidural stimulator and maintaining focus to take steps," said author Susan Harkema, Ph.D., professor and associate director of the Kentucky Spinal Cord Injury Research Center at the University of Louisville. "We must expand this research -- hopefully, with improved stimulator technology -- to more participants to realize the full potential of the progress we're seeing in the lab, as the potential this provides for the 1.2 million people living with paralysis from a spinal cord injury is tremendous."
Progress for Individuals Living with Paralysis
The American Spinal Injury Association Impairment Scale (AIS) was used to classify the spinal cord injuries of each of the four participants. When the four participants joined the study, they were at least 2.5 years post injury. They were unable to stand, walk or voluntarily move their legs. Eight to nine weeks prior to the implantation of an epidural stimulator, they started daily locomotor training -- manual facilitation of stepping on a treadmill -- five days per week for two hours each day. Although there were no changes to their locomotor abilities prior to the implant, following the epidural stimulation participants were able to step when the stimulator was on and the individual intended to walk. Participants 3 and 4 were able to achieve walking over ground -- in addition to on a treadmill -- with assistive devices, such as a walker and horizontal poles for balance while the stimulator was on.
"Being a participant in this study truly changed my life, as it has provided me with a hope that I didn't think was possible after my car accident," said Kelly Thomas, a 23-year-old from Florida, also referred to as Participant 4. "The first day I took steps on my own was an emotional milestone in my recovery that I'll never forget as one minute I was walking with the trainer's assistance and, while they stopped, I continued walking on my own. It's amazing what the human body can accomplish with help from research and technology."
Jeff Marquis, a 35-year-old Wisconsin native who now lives in Louisville, was the first participant in this study to attain bilateral steps. "The first steps after my mountain biking accident were such a surprise, and I am thrilled to have progressed by continuing to take more steps each day. In addition, my endurance has improved, as I've regained strength and the independence to do things I used to take for granted like cooking and cleaning," said Marquis, who is Participant 3 in the New England Journal of Medicine study. "My main priority is to be a participant in this research and further the findings, as what the University of Louisville team does each day is instrumental for the millions of individuals living with paralysis from a spinal cord injury."
"While more clinical research must be done with larger cohorts, these findings confirm that the spinal cord has the capacity to recover the ability to walk with the right combination of epidural stimulation, daily training and the intent to step independently with each footstep," said Claudia Angeli, Ph.D., senior researcher, Human Locomotor Research Center at Frazier Rehab Institute, and assistant professor, University of Louisville's Kentucky Spinal Cord Injury Research Center.
Advancements for Spinal Cord Injury Community
This research is based on two distinct treatments: epidural stimulation of the spinal cord and locomotor training. Epidural stimulation is the application of continuous electrical current at varying frequencies and intensities to specific locations on the lumbosacral spinal cord. This location corresponds to the dense neural networks that largely control movement of the hips, knees, ankles and toes. Locomotor training aims to ultimately retrain the spinal cord to "remember" the pattern of walking by repetitively practicing standing and stepping. In a locomotor training therapy session, the participant's body weight is supported in a harness while specially trained staff move his or her legs to simulate walking while on a treadmill.
Read more at Science Daily