Sep 30, 2017
Small collisions make big impact on Mercury's thin atmosphere
Recent modeling along with previously published results from NASA's MESSENGER spacecraft -- short for Mercury Surface, Space Environment, Geochemistry and Ranging, a mission that observed Mercury from 2011 to 2015 -- has shed new light on how certain types of comets influence the lopsided bombardment of Mercury's surface by tiny dust particles called micrometeoroids. This study also gave new insight into how these micrometeoroid showers can shape Mercury's very thin atmosphere, called an exosphere.
The research, led by Petr Pokorný, Menelaos Sarantos and Diego Janches of NASA's Goddard Space Flight Center in Greenbelt, Maryland, simulated the variations in meteoroid impacts, revealing surprising patterns in the time of day impacts occur. These findings were reported in the Astrophysical Journal Letters on June 19, 2017.
"Observations by MESSENGER indicated that dust must predominantly arrive at Mercury from specific directions, so we set out to prove this with models," Pokorný said. This is the first such simulation of meteoroid impacts on Mercury. "We simulated meteoroids in the solar system, particularly those originating from comets, and let them evolve over time."
Earlier findings based on data from MESSENGER's Ultraviolet and Visible Spectrometer revealed the effect of meteoroid impacts on Mercury's surface throughout the planet's day. The presence of magnesium and calcium in the exosphere is higher at Mercury's dawn -- indicating that meteoroid impacts are more frequent on whatever part of the planet is experiencing dawn at a given time.
This dawn-dusk asymmetry is created by a combination of Mercury's long day, in comparison to its year, and the fact that many meteoroids in the solar system travel around the Sun in the direction opposite the planets. Because Mercury rotates so slowly -- once every 58 Earth days, compared to a Mercury year, a complete trip around the Sun, lasting only 88 Earth days -- the part of the planet at dawn spends a disproportionately long time in the path of one of the solar system's primary populations of micrometeoroids. This population, called retrograde meteoroids, orbits the Sun in the direction opposite the planets and comprises pieces of disintegrated long-period comets. Because these retrograde meteoroids travel against the flow of planetary traffic in our solar system, their collisions with planets -- Mercury, in this case -- hit much harder than if they were traveling in the same direction.
These harder collisions helped the team key in on the source of the micrometeoroids pummeling Mercury's surface. Meteoroids that originally came from asteroids wouldn't be moving fast enough to create the observed impacts. Only meteoroids shed by two particular types of comets -- Jupiter-family and Halley-type -- had the speed necessary to match the observations.
"The velocity of cometary meteoroids, like Halley-type, can exceed 224,000 miles per hour," Pokorný said. "Meteoroids from asteroids only impact Mercury at a fraction of that speed."
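The reason speed matters so much is that impact energy grows with the square of velocity. As a rough illustration (the one-quarter "fraction" for asteroidal speed below is an assumption for the sake of the example, not a figure from the study):

```python
# Back-of-envelope sketch: specific kinetic energy scales with the square of
# impact speed, which is why fast retrograde cometary meteoroids hit so much
# harder than slower asteroidal ones.

MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def specific_kinetic_energy(speed_ms):
    """Kinetic energy per kilogram of impactor, in joules (0.5 * v**2)."""
    return 0.5 * speed_ms ** 2

cometary = 224_000 * MPH_TO_MS  # Halley-type speed quoted above
asteroidal = cometary / 4       # illustrative "fraction of that speed"

ratio = specific_kinetic_energy(cometary) / specific_kinetic_energy(asteroidal)
print(round(ratio))  # a 4x speed difference means ~16x the energy per kilogram
```

Doubling the impact speed quadruples the energy delivered per kilogram, so even tiny cometary grains can outpunch much slower asteroidal dust.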
Jupiter-family comets, which are primarily influenced by our largest planet's gravity, have relatively short orbits of less than 20 years. These comets are thought to be small pieces of objects originating in the Kuiper Belt, where Pluto orbits. The other contributor, Halley-type comets, have longer orbits, lasting up to 200 years. They come from the Oort Cloud, home to the most distant objects in our solar system -- more than a thousand times farther from the Sun than Earth.
Read more at Science Daily
Meteorite Strikes Greatly Influenced the Early Composition of Earth and Mars
Prevailing theory holds that planets grow by accretion, which means gradually gathering gas and dust over time. This sometimes also includes collisions with smaller neighbors. A famous example of the latter was Earth’s long-ago collision with a Mars-sized body; the debris from the crash eventually created the Earth’s moon.
The new research, which was recently published in the journal Nature, shows that the larger bodies that struck Earth and Mars — traveling at several miles a second — generated enough heat to create magma oceans and temporary atmospheres of vaporized rock. However, the young Earth (then smaller than Mars) lacked the gravitational pull to hold on to this atmosphere, so it eventually bled into space.
Over time, the researchers said, these collisions, and the gain and loss of temporary atmospheres, greatly altered the makeup of Earth and Mars. The evidence is based on looking at samples from Earth rocks, as well as meteorites from Mars and the asteroid Vesta.
“We have provided evidence that such a sequence of events occurred in the formation of the Earth and Mars, using high precision measurements of their magnesium isotope compositions,” said Remco Hin, a senior research associate with the University of Bristol’s school of Earth sciences, in a statement.
Isotopes are different forms of an element. In this case, the element studied was magnesium.
“Magnesium isotope ratios change as a result of silicate vapour loss, which preferentially contains the lighter isotopes,” Hin went on. “In this way, we estimated that more than 40 per cent of the Earth’s mass was lost during its construction. This cowboy building job, as one of my co-authors described it, was also responsible for creating the Earth’s unique composition.”
Researchers carried out the work to add weight to a lengthy debate about why planets have poor volatile compositions — a “volatile” is a chemical element or chemical compound with a low boiling point, such as helium. The debate has two main points: the volatiles were lost either because of planetary growth, or due to some process related to the gas and dust environment in which planets in the solar system were born.
Read more at Seeker
Sep 29, 2017
Video gamers have an advantage in learning
Do video games give gamers an edge in learning?
The weather prediction task
The research team studied 17 volunteers who -- according to their own statements -- played action-based games on a computer or console for more than 15 hours a week. The control group consisted of 17 volunteers who didn't play video games on a regular basis. Both groups completed the so-called weather prediction task, a well-established test for investigating the learning of probabilities. The researchers simultaneously recorded the participants' brain activity via magnetic resonance imaging.
The participants were shown combinations of three cue cards with different symbols and asked to estimate whether each combination predicted sun or rain; they received immediate feedback on whether their choice was right or wrong. On the basis of this feedback, the volunteers gradually learned which card combinations stood for which weather prediction. The combinations were linked to higher or lower probabilities of sun or rain. After completing the task, the study participants filled out a questionnaire to assess the knowledge they had acquired about the cue card combinations.
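To make the structure of such a probabilistic-feedback trial concrete, here is a minimal sketch. The card symbols and probabilities are illustrative inventions, not the study's actual stimuli:

```python
# A minimal sketch of one trial in a weather-prediction-style task: each
# cue-card combination is linked to a probability of "rain", the participant
# guesses, and the true weather is drawn at random to give right/wrong feedback.
import random

RAIN_PROBABILITY = {
    ("circle", "square", "triangle"): 0.9,  # strongly predicts rain
    ("circle", "square", "star"): 0.6,      # high-uncertainty combination
    ("square", "star", "triangle"): 0.4,
    ("circle", "star", "triangle"): 0.1,    # strongly predicts sun
}

def run_trial(combination, guess, rng=random):
    """Draw the true weather for this combination; return whether the guess was right."""
    weather = "rain" if rng.random() < RAIN_PROBABILITY[combination] else "sun"
    return guess == weather  # immediate right/wrong feedback

random.seed(0)
feedback = run_trial(("circle", "square", "triangle"), "rain")
print(feedback)  # prints True with this seed
```

The key feature the study exploits is that feedback is only probabilistically tied to the cues, so the 60/40 combinations are genuinely hard to learn.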
Video gamers better with high uncertainties
The gamers were notably better than the control group at matching the cue cards with the weather predictions. Their advantage was even larger for cue card combinations with high uncertainty, such as one that predicted 60 percent rain and 40 percent sunshine.
The analysis of the questionnaire revealed that the gamers had acquired more knowledge about the meaning of the card combinations than the control group. "Our study shows that gamers are better at analysing a situation quickly, generating new knowledge and categorising facts -- especially in situations with high uncertainties," says first author Sabrina Schenk.
Read more at Science Daily
New study changes our view on flying insects
Manduca sexta.
Previous studies of bumblebees have shown that they consume as much energy in forward flight as when they hover, i.e. remain still in the air. New findings from Lund University in Sweden show that this does not apply to all insects.
Biologist Kajsa Warfvinge, together with her colleagues at Lund University, has studied the large moths known as tobacco hawkmoths or Manduca sexta. The results show that these moths, like birds, consume different amounts of energy depending on their flight speed. Flying really slowly or really fast requires the most effort.
The discovery may help other researchers who study how insects migrate from one environment to another.
"I imagine that our results could be used indirectly to predict how well different species respond to changing temperatures in view of global warming. By knowing how much energy is needed to fly at different speeds, we can calculate how far and fast the animals can travel given a certain amount of energy," says Kajsa Warfvinge.
The experiments were performed in a wind tunnel. Using a specially developed technique known as tomographic PIV, the researchers can record the way the air moves in three dimensions when the tobacco hawkmoth flaps its wings. The vortices left in the air can be seen as the insect's aerodynamic footprint. The vortex strength reflects the amount of kinetic energy added by the insect, which in turn is a measure of how strenuous it is to fly at different speeds.
The results show that classic aviation theory also applies to tobacco hawkmoths: it takes a lot of energy to fly slowly (one metre per second), since it is difficult to generate lift at such speeds. The same applies when insects fly fast (four metres per second), but there it is air resistance that makes flight less efficient from an energy perspective.
“We demonstrate that moths have the same U-shaped relationship between speed and power as birds and aircraft do. Flying slowly or fast is exhausting and requires more energy. You could say that flying at a moderate speed is optimal. The most energy-efficient speed for these moths is 2-3 metres per second," says Kajsa Warfvinge.
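The U-shaped curve comes from two competing costs: induced power (the cost of generating lift) falls with speed, while drag power rises roughly with its cube. A sketch, with coefficients chosen purely for illustration so the minimum lands near the 2-3 m/s range quoted above:

```python
# Classic aviation-theory sketch of the U-shaped power curve: total power is
# modelled as P(v) = a/v + b*v**3 (induced power + parasite/drag power).
# The coefficients a and b are illustrative, not fitted to the moth data.

def flight_power(v, a=23.4, b=0.2):
    """Total aerodynamic power (arbitrary units) at forward speed v (m/s)."""
    return a / v + b * v ** 3

# Scan speeds from 0.5 to 5.0 m/s and find the cheapest one.
speeds = [s / 10 for s in range(5, 51)]
best = min(speeds, key=flight_power)
print(best)  # 2.5
```

Both extremes of the curve are expensive, which is exactly the "slow or fast is exhausting" pattern the moths showed.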
Read more at Science Daily
Raccoons solve an ancient puzzle, but do they really understand it?
Raccoon.
The research team included Sarah Benson-Amram and Emily Davis from the University of Wyoming, as well as Shylo Johnson and Amy Gilbert from the USDA National Wildlife Research Center, where the experiments were performed. The scientists first tested whether eight raccoons (Procyon lotor) held in captivity would spontaneously drop stones into a clear fifty centimetre tube of water to retrieve floating pieces of marshmallow. They found that, similar to studies of birds, the raccoons did not spontaneously drop stones into the tube from the start.
Following previous studies on birds and human children, the scientists then trained the raccoons to drop stones into the tube. They did this by balancing stones on a rim at the top of the tube. If the raccoons accidentally knocked the stones in, this raised the water level high enough to bring the marshmallow reward within reach. The raccoons could then learn that stones falling into the tube brought the marshmallow closer.
During training, seven raccoons interacted with the stones, and four raccoons retrieved the marshmallow reward after accidentally knocking the stones into the water. Two of the four raccoons that got the marshmallow during training then learned on their own to pick up stones off the ground and drop them into the water to get a reward. A third raccoon surprised the scientists by inventing an entirely new method for solving the problem. She found a way to overturn the entire, very heavy, tube and base to get the marshmallow reward.
The two raccoons that successfully dropped stones into the tube were then presented with different objects that they could drop into the tube to solve the problem, such as large versus small stones, and sinking versus floating balls. These experiments enabled the researchers to determine whether the raccoons really understood the problem. If the raccoons understand water displacement, they should select the objects that displace the most water, like the large stones and sinking balls.
The raccoons performed differently than birds and human children did in previous Aesop's Fable studies, and they did not always pick the most functional option. Stanton, however, believes the raccoons' performance is not necessarily a reflection of their cognitive abilities, but rather of their exploratory behaviour and dexterous paws.
"We found raccoons to be innovative in many aspects of this task, and we observed diverse, investigative behaviours that are unique to raccoons," says Stanton, adding that the way in which the experiment was conducted might also have played a role. She explains that the raccoons had fewer opportunities to interact with the puzzle than did many of the birds that were tested in previous studies. Therefore, the performance of the raccoons might improve if they have more time to familiarize themselves with the stones and the water tube.
Read more at Science Daily
Climate Change 5 Million Years Ago Coincided With More Mediterranean Volcanoes
An international team of scientists found that volcanic eruptions and similar activity more than doubled during the so-called Messinian Age, around 5.3-6 million years ago, when the Strait of Gibraltar became a barrier that separated the Mediterranean Sea from the Atlantic Ocean.
“By removing some of the load on the Earth’s surface, you decrease the pressure at depth,” said Pietro Sternai, a geodynamics researcher at the University of Geneva and lead author on a study on the team’s findings published in the journal Nature Geoscience.
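The unloading Sternai describes is simple hydrostatics: removing a column of seawater relieves the rock below of a pressure of rho times g times the column height. A rough sketch with my own numbers, not the study's:

```python
# Rough hydrostatic sketch of the unloading effect: the pressure relief from
# evaporating a depth of seawater is rho * g * h. All values illustrative.

RHO_SEAWATER = 1025.0  # density of seawater, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2

def pressure_drop_mpa(depth_m):
    """Pressure relief, in megapascals, from removing `depth_m` of seawater."""
    return RHO_SEAWATER * G * depth_m / 1e6

# If, say, 1,500 m of Mediterranean water evaporated:
print(round(pressure_drop_mpa(1500), 1))  # ~15 MPa of unloading
```

Tens of megapascals is a substantial change at the depths where magma forms, which is why evaporating a sea can plausibly boost melt production.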
Debates have sprung up about the fate of the Mediterranean Sea during the Messinian Age.
Some, like Sternai, believed it evaporated, as evidenced by the deep layers of salt on the bottom of the Mediterranean today, a trace of what could have been left behind. Deep canyons on the seafloor also suggest big rivers once carved their way through the area. The recent discovery of traces of hominids on Crete from almost six million years ago would also indicate that they somehow walked to the ancient island.
Others argued the sea didn’t dry up but became stagnant and salty like the Dead Sea today.
“We are used to thinking in terms of volcanos and CO2 and how they affect climate,” he said. “This is very common. What is less common to think about is that the climate, by changing the distribution of pressure on the surface, affects the production of magma and therefore volcanism.”
The same dynamic could be occurring today as climate change melts the glaciers in the North and South Poles, removing pressure on the glowing seams and fissures below. Consider how eruptions of Iceland’s volcanoes have spiked as that country’s glacial ice has melted.
“There is a lot of debate going on about the worldwide increase in volcanic activity due to the melting of the continental ice sheets,” he said. “In this interglacial period when we are living, you are basically unloading a continent.”
Conversely, oceanic volcanoes were erupting less frequently recently as melting glacial ice raised sea levels and more water bore down on the ocean floor. “There is a balance between the continental and the oceanic volcanic activity,” he said. “When you increase one there should be a decrease in the other.”
Read more at Seeker
Sep 28, 2017
Southern African Ancestors Reveal Modern Humans Emerged 350,000 Years Ago
The findings, published in the journal Science, push back the origin of modern humans by 170,000 years, since the fossil record only goes back to 180,000 years ago. Little doubt remains that southern Africa has an important role to play in writing the history of humankind.
Lead author of the study, Carina Schlebusch, a population geneticist at Uppsala University in Sweden, explained to Seeker that the Khoe-San are the population most distantly related to all other populations in the world.
“For example, if you draw a tree of relatedness of all human populations, the Khoe-San populations represent the first split, or divergence event, in the tree,” Schlebusch said. “Thereafter, rainforest hunter gatherers — ‘pygmy’ groups — split from other groups, thereafter east versus west Africans split, and thereafter all non-Africans diverge from the tree.”
“Because Khoe-San groups split first from the rest of human populations,” she added, “they carry the most divergent, or different, and unique DNA compared to the other human groups.”
The researchers next compared these sequences with those from other early humans, such as an ancient Stone Age hunter-gatherer boy from Ballito Bay on the east coast of South Africa and the West African Mandinka.
“If we assume that DNA mutations occur in a clock-like manner, one can date when these populations or individuals split from each other,” Schlebusch said. “The mutation rate thus does have an influence on how the number of differences are converted to calendar years.”
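The clock-like assumption Schlebusch mentions can be turned into arithmetic. The mutation rate and generation time below are common textbook values, and the divergence figure is chosen only to illustrate how a split date on the order of 350,000 years falls out; none of these numbers are from the study itself:

```python
# Illustrative molecular-clock arithmetic: sequence divergence accumulates
# along both branches after a population split, so d = 2 * mu * generations.

MUTATION_RATE = 1.25e-8  # mutations per site per generation (assumed)
GENERATION_YEARS = 30    # years per human generation (assumed)

def split_time_years(per_site_divergence):
    """Convert per-site divergence between two lineages into calendar years."""
    generations = per_site_divergence / (2 * MUTATION_RATE)
    return generations * GENERATION_YEARS

# A divergence of ~0.029% per site corresponds to roughly 350,000 years:
print(round(split_time_years(2.9e-4)))
```

This also makes Schlebusch's caveat concrete: halve the assumed mutation rate and the inferred split date doubles, which is why the rate "has an influence on how the number of differences are converted to calendar years."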
Earlier DNA investigations of the Khoe-San involved people that had interbred with farming groups who migrated into southern Africa over the last 2,000 years. That is why Schlebusch and her team looked at DNA from individuals who lived before that time.
While the new data puts the spotlight on southern Africa and evolution taking place 350,000 years ago, Homo sapiens were not the only residents of Africa then.
“Our DNA work now shows that modern Homo sapiens may have been present on the landscape, perhaps simultaneously with Homo heidelbergensis or archaic Homo sapiens as represented by the fossils from Florisbad and Hoedjiespunt, and with Homo naledi,” said co-author Marlize Lombard, an archaeologist at the University of Johannesburg.
The researchers are not sure if different human species interacted in Africa.
“At the moment, that question remains open,” Lombard said, “but as work continues, we might start to shed light on such a possibility.”
Geological barriers probably did not keep the various populations apart, but climate-related factors might have done so.
“In southern Africa, extreme drought is always a possibility,” Lombard explained, “but we also need to keep in mind that it is a diverse landscape in which species adapted to different niches without so-called barriers.”
It appears that archaic forms of our species did not just emerge in one place, such as South Africa.
“We see this gradual transition from archaic to modern forms in North, East, and southern Africa, and fossils of modern, transitional and archaic forms are found in all three of these regions,” Schlebusch said.
The scientific consensus is that Homo erectus — a species with the ability to make hand axes out of stone and to cook with fire — evolved into H. heidelbergensis, which then gradually evolved into H. sapiens.
Researchers also believe H. heidelbergensis was the common ancestor of H. sapiens and Neanderthals. The latter two populations split around 700,000 years ago.
“When Homo heidelbergensis moved out of Africa,” Schlebusch said, “they became Neanderthals in Europe and West Asia and Denisovans in Asia, although Denisovans might have had additional input from another archaic species. But the Homo heidelbergensis that remained in Africa continued to evolve and turned into modern humans.”
Anthropologists now question whether or not Neanderthals should be classified as a different species.
“The species definition is actually very vague, especially for hominid evolution. My personal opinion is that they should not be regarded as a separate species,” Schlebusch said. “What is important for me, at least, is that they were still close enough genetically to humans to produce viable offspring and that, for me, is enough to motivate that they were in the same species as us.”
People today of European and Asian heritage retain Neanderthal and Denisovan DNA. Genetic variants tied to heritage can impact human health. For example, prior research found that Tibetans can tolerate high altitudes better than others due to a genetic variant inherited from an ancient human relative, likely the Denisovans.
Likewise, the researchers found that the African Iron Age individuals carried a genetic variant that protected against malaria, and two had a variant that offered resistance against sleeping sickness.
“These variants would have evolved in regions where these diseases are endemic, i.e. tropical Africa,” Schlebusch explained.
Read more at Seeker
Oldest Known Life on Earth Found in 3.95-Billion-Year-Old Rocks in Labrador
The new finding represents the earliest sign of life yet on Earth by 200 million years or more, the researchers said.
Evidence of life early in Earth's history remains sparse because few well-preserved rocks have survived from the Eoarchean era, which spanned from about 4 billion to 3.6 billion years ago. During that time, Earth's primitive atmosphere and oceans — as well as the oldest signs of life — first emerged.
Until now, the earliest hints of life in the 4.5-billion-year history of Earth were inside a 3.7-billion-year-old rock from Greenland revealed in 2016. Prior work, from 1996, also claimed to have found signs of life in 3.8-billion-year-old rocks from Greenland's Akilia Island, although those findings remain hotly debated.
Now, scientists analyzing 3.95-billion-year-old rocks from northern Labrador in northeastern Canada suggest they have found materials generated by microbes. These may represent the oldest evidence of life found yet on Earth, said study senior author Tsuyoshi Komiya, a geologist at the University of Tokyo.
The researchers examined the oldest known metasedimentary rocks, ones made from sediment that got buried underneath subsequent rock and subjected to high pressures and temperatures, causing the sediment to crystallize.
The area they collected the rocks from "is very far from any village or town," Komiya told Live Science. "Many polar bears inhabit the area."
The scientists focused on grains of graphite, a material made of sheets of carbon. Previous research suggested that life could result in graphite that is enriched in lighter isotopes of carbon. (Isotopes of an element vary in how many neutrons they possess in their atomic nuclei.)
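Geologists express that enrichment in lighter carbon using delta-13C notation: the per-mil deviation of a sample's 13C/12C ratio from a reference standard. A sketch with illustrative ratios (not the paper's measurements):

```python
# Sketch of delta-13C notation: biological processes preferentially take up
# the lighter carbon-12, so biogenic carbon has a lower 13C/12C ratio and a
# strongly negative delta-13C. Ratio values below are illustrative.

VPDB_13C_12C = 0.011180  # commonly cited 13C/12C ratio of the VPDB standard

def delta13c_permil(sample_ratio):
    """Per-mil deviation of a sample's 13C/12C ratio from the VPDB standard."""
    return (sample_ratio / VPDB_13C_12C - 1) * 1000

# A sample depleted in the heavier isotope reads strongly negative:
biogenic = delta13c_permil(0.010900)
print(round(biogenic, 1))  # about -25 per mil
```

It is this kind of strongly negative signature in the graphite that the researchers read as a fingerprint of ancient biology.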
Based on the carbon isotopes found in the graphite within the Labrador rocks, the researchers suggested it was biological in origin. The way ancient rock encased this graphite suggested that these newfound signs of life did not originate as contamination from later periods in time. "The finding was surprising and exciting," Komiya said.
Read more at Seeker
Gravitational Waves From Merging Black Holes Detected by New Joint Network
“Today marks an exciting milestone in the growing international effort to unlock the extraordinary mysteries of the universe,” said France Córdova, director of the National Science Foundation, speaking at a press briefing from the G7 Science Conference in Turin, Italy. “We are delighted to announce the first discovery made in partnership between the Virgo gravitational-wave observatory in Italy and the LIGO Scientific Collaboration, the first time a gravitational wave detection was observed by these observatories, located thousands of miles apart.”
The first detection of gravitational waves, announced in early 2016, confirmed Einstein's general theory of relativity, which predicted such waves as ripples propagating through the fabric of space-time.
Gravitational waves are undulations in space and time created when two massive, compact objects such as black holes merge. Their detection allows for a new way to look at the universe because, until recently, astronomers could only study objects in space by observing the different wavelengths of light. Being able to observe gravitational waves provides a new way to study objects that are notoriously difficult to observe, such as black holes and neutron stars.
The most recent detection of gravitational waves took place on August 14 at 6:30 am EDT. The waves were produced by the merger of two black holes with masses about 31 and 25 times the mass of the sun, located about 1.8 billion light-years away.
The newly produced single, spinning black hole has about 53 times the mass of our sun, which means that about three solar masses were converted into gravitational-wave energy during the merger.
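The "three solar masses converted into gravitational-wave energy" follows directly from the quoted masses via E = mc². A quick check using standard physical constants (my arithmetic, not figures from the announcement):

```python
# Mass-energy bookkeeping for the merger: the difference between the total
# initial mass and the remnant mass was radiated away as gravitational waves.

SOLAR_MASS_KG = 1.989e30  # mass of the sun, kg
C = 2.998e8               # speed of light, m/s

initial = 31 + 25           # solar masses before the merger
final = 53                  # solar masses of the spinning remnant
radiated = initial - final  # solar masses carried off as gravitational waves

energy_joules = radiated * SOLAR_MASS_KG * C ** 2  # E = m * c^2
print(radiated, f"{energy_joules:.1e}")
```

Roughly 5 x 10^47 joules released in a fraction of a second, which is why such distant events are detectable at all.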
LIGO — the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington — has made the three previous detections of gravitational waves. The Virgo detector, located near Pisa, Italy, has been in operation since 2011, but recently had an upgrade to match the upgraded LIGO detectors. The two observatories started working in concert at the beginning of August this year.
Just 14 days later, the gravitational wave signal arrived first at the LIGO Livingston detector. Six milliseconds later, it arrived at the Hanford detector. Then it arrived at the Virgo detector another six milliseconds later. Using three detectors has allowed the research team to identify the source of the waves with greater accuracy.
Jo van den Brand, spokesperson of the Virgo collaboration, explained at the press briefing that combining the signals from all three detectors allows the researchers to locate the source of the waves with greater accuracy.
“The time differences allow for very accurate triangulation,” he said, “and with this we can locate the source that is emitting these gravitational waves with a precision that is more than 20 times higher than we could achieve before. This is important since we expect that many such merger events will also emit other messengers, such as light, x-rays, radio waves, neutrinos, or other subatomic particles. So these events can be studied by both astronomers and astroparticle scientists. This opens a new field of multimessenger astronomy, and I think we have now taken the first step in that process.”
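A toy version of the triangulation van den Brand describes: for two detectors separated by a baseline d, a plane wave arriving at angle theta from the baseline produces an arrival-time difference of (d/c)·cos(theta). The baseline and delay below are round illustrative numbers, not the event's published values:

```python
# Toy two-detector triangulation: invert dt = (d / c) * cos(theta) to get the
# angle between the detector baseline and the wave's arrival direction.
import math

C = 299_792_458.0  # speed of light, m/s

def source_angle(dt_seconds, baseline_m):
    """Angle (degrees) between the baseline and the wave's arrival direction."""
    return math.degrees(math.acos(C * dt_seconds / baseline_m))

# With a ~3,000 km baseline, a 6 ms delay constrains the source to a cone
# roughly 53 degrees off the baseline:
print(round(source_angle(0.006, 3.0e6)))
```

One pair of detectors only pins the source to a cone; adding a third detector with its own baseline intersects the cones, which is how the network shrinks the sky region so dramatically.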
Van den Brand added that the detection also highlights the scientific potential of a three-detector network. Astronomers hope the combined detectors can eventually help them determine the sources of gravitational waves with even greater accuracy, and also detect them more often.
David Shoemaker of MIT said in a statement that he expects detections “weekly or even more often” starting in the next observing run, planned for the fall of 2018.
While gravitational waves are not a form of sound, like sound waves they cause vibrations in the material they pass through. Astronomers have called these vibrations “chirps” because the frequencies (the number of vibrations per second) of gravitational waves happen to fall in the same range as the frequencies of sound waves audible to humans.
Read more at Seeker
What a ‘Cosmic Welcome Mat’ Can Teach Us About Finding Extraterrestrial Life
After searching for aliens for decades, we haven’t yet found definitive proof that they exist. While noted Search for Extraterrestrial Intelligence (SETI) researchers urge scientists to keep looking, San Francisco-based experimental philosopher Jonathon Keats suggests that perhaps we need to rethink our calls for contact.
Keats says that aliens may not be welcome enough. He has designed a set of “cosmic welcome mats” in consultation with space archaeologist Alice Gorman, of Flinders University in Australia. The mats are being tested during the 68th International Astronautical Congress taking place in Adelaide this week and at Flinders University nearby. Keats and Gorman hope to see versions of these mats deployed at welcome centers worldwide, or even on the International Space Station.
If the idea seems quirky on its face, it is. Keats has pursued several projects combining science, philosophy, and art, such as opening a “photosynthetic restaurant” for plants, and creating canvas paintings based on space-based signals detected by the Arecibo Observatory in Puerto Rico.
While most of humanity’s messages to aliens have been sent in radio form or in time capsules deposited on spacecraft — the Voyager spacecraft’s Golden Record being a notable example — this latest messaging to extraterrestrials comes literally in the form of welcome mats, like those you would see at a human household’s front door. (Each mat measures 60 by 90 centimeters, or 24 by 35 inches.)
“Developing a language to communicate with extraterrestrial intelligence is by no means straightforward,” Keats told Seeker via email. “That’s where creativity was most important.”
“I came to realize,” he added, “that my greatest chance of success would be to look at creative expression as a human phenomenon: When the communicative chasm is especially extreme — such as the communication of one person’s innermost emotions to strangers — humans tend to express themselves through art.”
The slow pace of SETI discovery isn’t a new problem. The Fermi paradox, named after physicist Enrico Fermi, was posed in the 1950s and asks why there has been no contact with extraterrestrial civilizations. (Fermi wasn’t the first to raise the question, but the paradox is named after him.)
Fermi and his colleague Michael Hart suggested that the Milky Way galaxy could be explored in just a few million years, provided that a certain percentage of inhabitants of Earth-like planets discovered interstellar travel. So Earth, they concluded, should have already been in contact with extraterrestrials, prompting them to ask, “Where is everybody?”
Scientists have since generated many answers — perhaps the aliens are dead or don’t exist, or maybe they don’t want to talk to Earth, or it could be that SETI researchers aren’t searching in the right areas.
Keats argues that extraterrestrial life coming to Earth is at least as likely as an alien encountering the Arecibo radio message of 1974, sent toward the globular star cluster M13, or the time capsules aboard the Pioneer 10/11 or Voyager 1/2 probes that NASA launched in the 1970s. These spacecraft are hurtling out of the solar system; Voyager 1 crossed the boundary into interstellar space in 2012.
“I sought to determine what assumptions could be made, at least tentatively, in order to have a reasonable likelihood of being understood,” Keats explained. “Fundamentally these assumptions all derive from what might be called the Alien Anthropic Principle: In order for the aliens to encounter the mats, they must be here in the first place, and that can potentially tell us something about them. In other words, there is a degree of self-selection inherent in being on this planet.”
Read more at Seeker
Keats says that aliens may not be welcome enough. He has designed a set of “cosmic welcome mats” in consultation with space archaeologist Alice Gorman, of Flinders University in Australia. The mats are being tested during the 68th International Astronautical Congress taking place in Adelaide this week and at Flinders University nearby. Keats and Gorman hope to see versions of these mats deployed at welcome centers worldwide, or even on the International Space Station.
If the idea seems quirky on its face, it is. Keats has pursued several projects combining science, philosophy, and art, such as opening a “photosynthetic restaurant” for plants, and creating canvas paintings based on space-based signals detected by the Arecibo Observatory in Puerto Rico.
While most of humanity’s messages to aliens have been in radio form or in time capsules deposited on spacecraft — the Voyager spacecraft’s Golden Record being a notable example — this latest messaging to extraterrestrials comes literally in the form of welcome mats, such as what you would see at a human household’s front door. (Each are 60 by 90 centimeters, or 24 by 35 inches.)
“Developing a language to communicate with extraterrestrial intelligence is by no means straightforward,” Keats told Seeker via email. “That’s where creativity was most important.”
“I came to realize,” he added, “that my greatest chance of success would be to look at creative expression as a human phenomenon: When the communicative chasm is especially extreme — such as the communication of one person’s innermost emotions to strangers — humans tend to express themselves through art.”
The slow pace of SETI discovery isn’t a new problem. The Fermi paradox, named after physicist Enrico Fermi, dates to 1950, when Fermi famously asked why, if extraterrestrial civilizations are common, none has made contact. (Fermi wasn’t the first to pose the question, but the paradox is named after him.)
Astrophysicist Michael Hart later formalized the argument, estimating that the Milky Way galaxy could be explored in just a few million years, provided that some fraction of civilizations on Earth-like planets developed interstellar travel. Earth, the argument concludes, should therefore already have been contacted, prompting the famous question, “Where is everybody?”
Scientists have since generated many answers — perhaps the aliens are dead or don’t exist, or maybe they don’t want to talk to Earth, or it could be that SETI researchers aren’t searching in the right areas.
Keats argues that extraterrestrial life coming to Earth is at least as likely as an alien encountering the Arecibo radio message of 1974 sent to globular star cluster M13, or aliens encountering the time capsules aboard the Pioneer 10/11 or Voyager 1/2 probes that NASA launched in the 1970s. All four spacecraft are hurtling out of the solar system, and Voyager 1 crossed into interstellar space in 2012.
“I sought to determine what assumptions could be made, at least tentatively, in order to have a reasonable likelihood of being understood,” Keats explained. “Fundamentally these assumptions all derive from what might be called the Alien Anthropic Principle: In order for the aliens to encounter the mats, they must be here in the first place, and that can potentially tell us something about them. In other words, there is a degree of self-selection inherent in being on this planet.”
Read more at Seeker
Sep 27, 2017
Black holes with ravenous appetites define Type I active galaxies
New research from an international team of astronomers, with contributions from the University of Maryland, makes a major modification to a popular theory called the unified model. According to this model, the active nuclei of Type I and Type II galaxies have the same fundamental structure and energetic profile, but appear different solely because the galaxies point toward Earth at different angles. Specifically, Type II galaxies are tilted such that they are obscured by their own rings of dust, making Type I galaxies appear brighter by comparison.
The new results, published September 28, 2017, in the journal Nature, suggest that Type I and Type II galaxies do not just appear different -- they are, in fact, very different from each other, both structurally and energetically. The key factor that distinguishes Type I and Type II galaxies is the rate at which their central black holes consume matter and spit out energy, according to the researchers.
"The unified model has been the prevailing wisdom for years. However, this idea does not fully explain the differences we observe in galaxies' spectral fingerprints, and many have searched for an additional parameter that fills in the gaps," said Richard Mushotzky, a professor of astronomy at UMD and a co-author of the study. "Our new analysis of X-ray data from NASA's Swift Burst Alert Telescope suggests that Type I galaxies are much more efficient at emitting energy."
To conduct the study, Mushotzky and his colleagues re-examined data from 836 active galaxies detected by NASA's Swift Burst Alert Telescope that strongly emit high-energy, or "hard," X-rays -- the same X-rays that medical technicians use to visualize the human skeleton.
To measure the mass and growth rate of these galaxies' active nuclei -- the supermassive black holes at the galaxies' centers -- the researchers used data from 12 different ground-based telescopes spread across the globe to complement the data from the Swift satellite.
"This project began in 2009, as part of my doctoral work at UMD, and has radically grown with the help of more than 40 researchers across the globe," said Michael Koss (M.S. '07, Ph.D. '11, astronomy), a research scientist at Eureka Scientific, Inc. and a co-author of the paper. "When I started out, I spent a month of lonely nights by myself at the Kitt Peak National Observatory observing a few dozen galaxies. I never dreamed we would eventually expand to such a large sample, enabling us to answer many amazing scientific questions for the first time."
By comparing differences in the X-ray spectra between Type I and Type II galaxies, the researchers concluded that, regardless of which way the galaxy faces Earth, the central black holes in Type I galaxies consume matter and emit energy much faster compared with the black holes at the center of Type II galaxies.
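The "rate of consumption" astronomers work with here is usually quantified as an Eddington ratio: the nucleus's luminosity divided by the maximum luminosity a black hole of its mass can sustain against radiation pressure. A minimal sketch of that calculation, using the standard textbook constant and a made-up example black hole (the numbers are not from this study):

```python
# Eddington luminosity: L_Edd is about 1.26e31 watts per solar mass.
# The Eddington ratio L / L_Edd measures how fast a black hole is
# consuming matter relative to its theoretical maximum.
L_EDD_PER_MSUN = 1.26e31  # watts per solar mass

def eddington_ratio(luminosity_w: float, mass_msun: float) -> float:
    """Luminosity divided by the Eddington luminosity for this mass."""
    return luminosity_w / (L_EDD_PER_MSUN * mass_msun)

# Hypothetical example: a 1e8 solar-mass black hole radiating 1e38 W.
ratio = eddington_ratio(1e38, 1e8)
print(f"Eddington ratio: {ratio:.3f}")
```

In this framing, the study's claim is that Type I nuclei sit at systematically higher Eddington ratios than Type II nuclei.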
"Our results suggest this has a lot to do with the amount of dust that sits close to the central black hole," said Mushotzky, who is also a fellow of the Joint Space-Science Institute. "Type II galaxies have a lot more dust close to the black hole, and this dust pushes against the gas as it enters the black hole."
For decades, astronomers preferentially studied Type II galaxies, largely because the active nuclei of Type I galaxies are very bright, making it difficult to see the stars and gas clouds that constitute the rest of the galaxy. Because the unified model suggested that all active galaxies were fundamentally the same, astronomers focused their efforts on the galaxies that host Type II active nuclei because they are easier to observe.
Read more at Science Daily
The volatile processes that shaped Earth
An image illustrating the late-stage building blocks of planetary formation (planetesimals and proto-planets) and the extensive volatile degassing that took place.
Based on observations of newly-forming stars, scientists know that the solar system began as a disc of dust and gas surrounding the centrally-growing sun. The gas condensed to solids which accumulated into larger rocky bodies like asteroids and mini-planets. Over a period of 100 million years these mini-planets collided with one another and gradually accumulated into the planets we see today, including Earth.
Although it is widely understood that Earth was formed gradually, from much smaller bodies, many of the processes involved in shaping our growing planet are less clear. In a new study featured on the cover of the latest edition of Nature, researchers from the University of Oxford's Department of Earth Sciences untangle some of these processes, revealing that the mini-planets added to Earth had previously undergone melting and evaporation. They also address another scientific conundrum: Earth's depletion in many economically important chemical elements.
It is well known that Earth is strongly depleted, relative to the solar system as a whole, in those elements which condensed from the early gas disc at temperatures less than 1000°C (for example, lead, zinc, copper, silver, bismuth, and tin). The conventional explanation is that Earth grew without these volatile elements and small amounts of an asteroidal-type body were added later. This idea cannot, however, explain the "over abundance" of several other elements -- notably, indium, which is now used in semiconductor technologies, as well as TV and computer screens.
Postgraduate student Ashley Norris and Bernard Wood, Professor of Mineralogy at Oxford's Department of Earth Sciences, set out to uncover the reasons behind the pattern of depletion of these volatile elements on Earth and for the "overabundance" of indium. They constructed a furnace in which they controlled the temperature and atmosphere to simulate the low oxidation state of the very early Earth and planetesimals. In a particular series of experiments they melted rocks at 1300°C in oxygen-poor conditions and determined how the different volatile elements were evaporated from the molten lava.
During the experiments, each of the elements of interest evaporated to a different degree. The lava samples were then rapidly cooled and the patterns of element loss determined by chemical analysis. The analyses revealed that the relative losses (volatilities) measured in the molten lava experiments agree very closely with the pattern of depletion observed in Earth. In particular, indium's volatility agrees exactly with its observed abundance in Earth -- its abundance turns out not to be an anomaly after all.
Professor Bernard Wood said: "Our experiments indicate that the pattern of volatile element depletion in the Earth was established by reaction between molten rock and an oxygen-poor atmosphere. These reactions may have occurred on the early-formed planetesimals which were accreted to Earth or possibly during the giant impact which formed the moon and which is believed to have caused large-scale melting of our planet."
Having focused their original experiments on 13 key elements, the team are in the process of looking at how other elements, such as chlorine and iodine, behave under the same conditions.
Read more at Science Daily
New light shed on how Earth and Mars were created
Planets grow by a process of accretion -- a gradual accumulation of additional material -- in which they collisionally combine with their neighbours.
This is often a chaotic process and material gets lost as well as gained.
Massive planetary bodies impacting at several kilometres per second generate substantial heat which, in turn, produces magma oceans and temporary atmospheres of vaporised rock.
Until planets grow to approximately the size of Mars, their gravitational attraction is too weak to hold onto this inclement silicate atmosphere.
Repeated loss of this vapour envelope during continued collisional growth causes the planet's composition to change substantially.
Dr Remco Hin from the University of Bristol's School of Earth Sciences, led the research which is published today in Nature.
He said: "We have provided evidence that such a sequence of events occurred in the formation of the Earth and Mars, using high precision measurements of their magnesium isotope compositions.
"Magnesium isotope ratios change as a result of silicate vapour loss, which preferentially contains the lighter isotopes. In this way, we estimated that more than 40 per cent of the Earth's mass was lost during its construction.
"This cowboy building job, as one of my co-authors described it, was also responsible for creating the Earth's unique composition."
The research was carried out in an effort to resolve a decades-long debate in Earth and planetary sciences about the origin of the distinctive, volatile-poor compositions of planets.
Did this result from processes that acted in the mixture of gas and dust in the nebula of the earliest solar system, or is it a consequence of the planets' violent growth?
Researchers analysed samples of the Earth together with meteorites from Mars and the asteroid Vesta, using a new technique to get higher quality (more accurate and more precise) measurements of magnesium isotope ratios than previously obtained.
The main findings are three-fold:
- Earth, Mars and asteroid Vesta have distinct magnesium isotope ratios from any plausible nebula starting materials
- The isotopically heavy magnesium isotope compositions of planets identify substantial (~40 per cent) mass loss following repeated episodes of vaporisation during their accretion
- This slipshod construction process results in other chemical changes during growth that generate the unique chemical characteristics of Earth.
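One way to see why vapour loss leaves an isotopically heavy planet is a Rayleigh distillation sketch. This is an illustrative calculation only; the fractionation factor below is a hypothetical value, not one measured in the study:

```python
# Rayleigh distillation: as silicate vapour escapes, the melt's isotope
# ratio R evolves as R/R0 = f**(alpha - 1), where f is the mass fraction
# remaining and alpha is the vapour/melt fractionation factor. alpha < 1
# means the vapour preferentially carries the lighter isotope, so the
# residue grows isotopically heavy.

def residue_shift_permil(f_remaining: float, alpha: float) -> float:
    """Per-mil shift of the residue's isotope ratio from its start."""
    return (f_remaining ** (alpha - 1.0) - 1.0) * 1000.0

alpha = 0.9999  # hypothetical fractionation factor, for illustration only
# Losing ~40 per cent of the mass leaves f = 0.6.
shift = residue_shift_permil(0.6, alpha)
print(f"residue ends up {shift:+.3f} per mil heavier")
```

Even a fractionation factor this close to unity leaves a measurable heavy-isotope signature after 40 per cent mass loss, which is the kind of fingerprint high-precision magnesium measurements can pick up.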
Dr Hin added: "Our work changes our views on how planets attain their physical and chemical characteristics.
"While it was previously known that building planets is a violent process and that the compositions of planets such as Earth are distinct, it was not clear that these features were linked.
Read more at Science Daily
Archaeologists Discover Mayan King's 1,000-Year-Old Tomb Beneath Palace
The tomb was unearthed at the site of El Perú-Waka' in the rainforest of northern Guatemala. Though the dense city was filled with hundreds of buildings, including pyramids, palaces, plazas and houses, it was only rediscovered in the 1960s, when petroleum workers stumbled upon the ruins.
The site was occupied during the Classic Maya period (from around A.D. 200 to 800), and it had close ties to the nearby Maya rival capitals Tikal and Calakmul. A wealthy royal family once ruled Waka' and controlled what was a major trade route along the San Pedro River. [See Photos of Another Mayan Tomb]
A team of American and Guatemalan archaeologists have been excavating Waka' since 2003. They've found several burials of kings and queens (as well as some potential human sacrificial offerings).
David Freidel, a professor of anthropology at Washington University in St. Louis and co-director of the excavations, explained in a news statement that the king's tomb would have helped to make the royal palace holy ground for the Wak (or "centipede") dynasty. "It's like the ancient Saxon kings [of] England buried in the Old Minster, the original church underneath Winchester Cathedral," Freidel said.
Freidel and his colleagues believe the tomb likely belonged to a king because of the red-painted jade mask depicting the ruler as the Maize God, with his forehead inscribed with a symbol that meant "yellow" and "precious" in the ancient Mayan language.
The grave also contained several ceramic vessels, shells and a carved crocodile pendant. The tomb had been reopened at least once sometime after A.D. 600, perhaps so that future generations of mourners could paint the ruler's bare bones red with cinnabar. (Painted bones have been found in Maya tombs before, such as the tomb of the Red Queen in Palenque, which was completely covered in cinnabar dust.)
Read more at Seeker
A Giant Coconut-Eating Rat Was Discovered in the Solomon Islands
Mammalogist Tyrone Lavery of the Field Museum in Chicago heard seemingly outrageous stories of a giant, coconut-eating rat when he first visited the islands in 2010. Intrigued, he brought a team to investigate a forested area, but the scientists came out rat-less.
A logging company later entered the same region and began cutting down a big tree.
“They cut down the tree, and the rat came out of it,” Lavery recounted to Seeker. The terrified gigantic rat, which went scurrying past the equally stunned loggers, was indeed the legendary vika.
Described in the Journal of Mammalogy, the rodent was given the scientific name Uromys vika. It is the first new rat species to be discovered in the Solomon Islands in nearly a century.
Local animal expert Hikuna Judge, a co-author of the paper, obtained one of the rats, permitting detailed study of its anatomy. First, the researchers noted the rodent’s size and heft: a foot and a half long, and 2.2 pounds. In comparison, rats typically seen in American homes and alleys weigh less than half of a pound, on average.
Judge, Lavery, and their colleagues compared vika’s skull and other features to those of rodent species in museum collections. They also conducted a genetic analysis using a DNA sample. The research confirmed that the giant rat does in fact represent a new species.
Over its long isolation in the Solomon Islands, the rat evolved very wide feet to help it walk around in the forest canopy. It also evolved teeth sharp enough to bite through a coconut shell and possibly even stronger materials.
“I have found it can eat another nut called a ngali nut,” Lavery explained. “The shells of these nuts are, in fact, probably thicker than a coconut!”
“It has never been known to attack humans or invade their homes,” Lavery said.
Rats within vika’s genus are all very large. Certain rats on Guadalcanal, Bougainville, and Choiseul Islands could even exceed vika’s weight and length. Their unusual size may be due to a phenomenon known as the “island effect,” or “Foster’s rule.”
“The island effect isn’t a rigid rule,” said Mark Collard, an evolutionary anthropologist at Simon Fraser University. Depending on resources available in a particular environment, some species get bigger while others may evolve into dwarf forms.
One noteworthy example may be Homo floresiensis, aka the Hobbit Human. This member of the human family tree evolved on the island of Flores alongside pygmy elephants and other smaller-than-usual animals.
While food sources were perhaps in short supply on Flores, vika must have had plenty of coconuts and other tasty fare to promote its impressive growth over the millennia.
Vika will soon be designated as “critically endangered,” however, and the researchers are asking the public to help save it and other animals in its unique island ecosystem.
“The best way to save this species is to support Zaira village — they are a community next to the area where the rat was found, and they are protecting their forests for conservation,” Lavery said.
“The biggest threat to this species is logging,” he continued, “and Zaira will not allow logging into their forests. We are launching a campaign on Pozible to crowd fund money in support of this community.”
Read more at Seeker
Sep 26, 2017
Cartography of the Cosmos
Salman Habib is trying to reconstruct the universe, structure by structure, using precise observations from telescope surveys combined with next-generation data analysis and simulation techniques currently being primed for exascale computing.
"We're simulating all the processes in the structure and formation of the universe. It's like solving a very large physics puzzle," said Habib, a senior physicist and computational scientist with the High Energy Physics and Mathematics and Computer Science divisions of the U.S. Department of Energy's (DOE) Argonne National Laboratory.
Habib leads the "Computing the Sky at Extreme Scales" project or "ExaSky," one of the first projects funded by the recently established Exascale Computing Project (ECP), a collaborative effort between DOE's Office of Science and its National Nuclear Security Administration.
From determining the initial cause of primordial fluctuations to measuring the sum of all neutrino masses, this project's science objectives represent a laundry list of the biggest questions, mysteries and challenges currently confounding cosmologists.
There is the question of dark energy, the potential cause of the accelerated expansion of the universe. Another question is the nature and distribution of dark matter in the universe.
These are immense questions that demand equally expansive computational power to answer. The ECP is readying science codes for exascale systems, the new workhorses of computational and big data science.
Initiated to drive the development of an "exascale ecosystem" of cutting-edge, high-performance architectures, codes and frameworks, the ECP will allow researchers to tackle data and computationally intensive challenges such as the ExaSky simulations of the known universe.
In addition to the magnitude of their computational demands, ECP projects are selected based on whether they meet specific strategic areas, ranging from energy and economic security to scientific discovery and healthcare.
"Salman's research certainly looks at important and fundamental scientific questions, but it has societal benefits, too," said Paul Messina, Argonne Distinguished Fellow. "Human beings tend to wonder where they came from, and that curiosity is very deep."
HACC'ing the night sky
For Habib, the ECP presents a two-fold challenge -- how do you conduct cutting-edge science on cutting-edge machines?
The cross-divisional Argonne team has been working on the science through a multi-year effort at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. The team is running cosmological simulations for large-scale sky surveys on the facility's 10-petaflop high-performance computer, Mira. The simulations are designed to work with observational data collected from specialized survey telescopes, like the forthcoming Dark Energy Spectroscopic Instrument (DESI) and the Large Synoptic Survey Telescope (LSST).
Survey telescopes look at much larger areas of the sky -- up to half the sky, at any point -- than does the Hubble Space Telescope, for instance, which focuses more on individual objects. One night concentrating on one patch, the next night another, survey instruments systematically examine the sky to develop a cartographic record of the cosmos, as Habib describes it.
Working in partnership with Los Alamos and Lawrence Berkeley National Laboratories, the Argonne team is readying itself to chart the rest of the course.
Their primary code, which Habib helped develop, is already among the fastest science production codes in use. Called HACC (Hardware/Hybrid Accelerated Cosmology Code), this particle-based cosmology framework supports a variety of programming models and algorithms.
Unique among codes used in other exascale computing projects, it can run on all current and prototype architectures, from the basic X86 chip used in most home PCs, to graphics processing units, to the newest Knights Landing chip found in Theta, the ALCF's latest supercomputing system.
As robust as the code is already, the HACC team continues to develop it further, adding significant new capabilities, such as hydrodynamics and associated subgrid models.
"When you run very large simulations of the universe, you can't possibly do everything, because it's just too detailed," Habib explained. "For example, if we're running a simulation where we literally have tens to hundreds of billions of galaxies, we cannot follow each galaxy in full detail. So we come up with approximate approaches, referred to as subgrid models."
Even with these improvements and its successes, the HACC code still will need to increase its performance and memory to be able to work in an exascale framework. In addition to HACC, the ExaSky project employs the adaptive mesh refinement code Nyx, developed at Lawrence Berkeley. HACC and Nyx complement each other with different areas of specialization. The synergy between the two is an important element of the ExaSky team's approach.
A cosmological simulation approach that melds multiple approaches allows the verification of difficult-to-resolve cosmological processes involving gravitational evolution, gas dynamics and astrophysical effects at very high dynamic ranges. New computational methods like machine learning will help scientists to quickly and systematically recognize features in both the observational and simulation data that represent unique events.
A trillion particles of light
The work produced under the ECP will serve several purposes, benefitting both the future of cosmological modeling and the development of successful exascale platforms.
On the modeling end, the computer can generate many universes with different parameters, allowing researchers to compare their models with observations to determine which models fit the data most accurately. Alternatively, the models can make predictions for observations yet to be made.
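The compare-many-universes loop can be sketched schematically. Everything below is synthetic -- a one-parameter toy power law standing in for a real simulation summary, with made-up data -- but it shows the shape of the model-selection step:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.1, 1.0, 50)  # wavenumber bins (arbitrary units)

def model_spectrum(amplitude):
    # Toy "summary statistic" of a simulated universe: a power law
    # controlled by one hypothetical parameter. A schematic stand-in,
    # not a real cosmological power spectrum.
    return amplitude * k ** -1.5

# Pretend observation: a universe with amplitude 0.8, plus noise.
observed = model_spectrum(0.8) + rng.normal(0.0, 0.01, k.size)

# Generate many candidate "universes" and keep the best-fitting one.
candidates = np.linspace(0.5, 1.1, 61)
chi2 = [np.sum((model_spectrum(a) - observed) ** 2) for a in candidates]
best = candidates[int(np.argmin(chi2))]
print(f"best-fit amplitude: {best:.2f}")
```

In practice the summary statistic is a full set of simulated sky maps and the grid spans several cosmological parameters at once, but the fitting logic is the same.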
Models also can produce extremely realistic pictures of the sky, which is essential when planning large observational campaigns, such as those by DESI and LSST.
"Before you spend the money to build a telescope, it's important to also produce extremely good simulated data so that people can optimize observational campaigns to meet their data challenges," said Habib.
But realism comes at a price. Simulations can reach the trillion-particle realm and produce several petabytes -- quadrillions of bytes -- of data in a single run. As exascale becomes prevalent, these simulations will produce 10 to 100 times as much data.
The work that the ExaSky team is doing, along with that of the other ECP research teams, will help address these challenges and those faced by computer manufacturers and software developers as they create coherent, functional exascale platforms to meet the needs of large-scale science. By working with their own codes on pre-exascale machines, the ECP research team can help guide vendors in chip design, I/O bandwidth and memory requirements and other features.
Read more at Science Daily
Amount of water in stem cells can determine their fate as fat or bone
The research found that altering the volume of a cell changed its internal dynamics, including the rigidness of the matrix lining the outer surface. In stem cells, removing water condenses the cell, influencing the stem cells to become stiff pre-bone cells, while adding water causes the cells to swell, forming soft pre-fat cells.
Researchers have long understood that stem cells are influenced by the cells around them, picking up cues on what their function should be based on the stiffness of the matrices of neighboring cells.
The results, however, confirm that nature plays as much of a role as nurture in stem cell behavior and development.
"The findings from this study add a fascinating new tool to our understanding and utilization of stem cell biology for regenerative medicine," says Praveen Arany, DDS, PhD, co-author and assistant professor in the Department of Oral Biology in the University at Buffalo School of Dental Medicine.
The study was led by Ming Guo, PhD, d'Arbeloff Assistant Professor in the Department of Mechanical Engineering at the Massachusetts Institute of Technology; and David Weitz, PhD, Mallinckrodt Professor of Physics and of Applied Physics in the John A. Paulson School of Engineering and Applied Sciences at Harvard University.
"For the first time, we're beginning to understand the importance of cell volume and cellular water content in the mechanical properties and physiological functions of cells," says Guo, who began the research as a graduate student in Weitz's lab at Harvard.
The Line Between Bone and Fat
The research originally sought to understand the effects of volume on a cell's characteristics and functions. Cell volume is highly regulated and changes frequently over the course of a cell's life, increasing as the cell grows and decreasing when it divides.
These gradual changes in volume result from variations in the amount of protein, DNA and other materials within the cell, while the cell's density remains mostly constant. But cells can also experience rapid and extreme changes in size and density through the absorption or release of water, spreading or shrinking in as little as 20 minutes.
By increasing or decreasing the volume of cells by 20 percent, the investigators found that the cells experienced several internal changes, including in gene expression and stiffness.
Knowing the role cell stiffness plays in the development of stem cells, the researchers began to wonder if cell volume could affect their fate as well.
To test the premise, investigators placed stem cells at their normal volume in a hardened hydrogel substrate to simulate the rigidness of bone cells. After one week, a large portion of the stem cells developed into pre-bone cells.
The experiment was repeated with a softened hydrogel substrate. In the softer environment, there was a significant decrease in the number of stem cells that became pre-bone cells. However, when water was removed from the cells to decrease their volume by 20 percent, the number of stem cells that became pre-bone cells increased, despite being in the softer substrate.
A similar experiment was conducted using glass. Researchers placed stem cells on glass to simulate a stiffer environment and found that few of the cells developed into pre-fat cells. It was not until the volume of the stem cells was increased by 20 percent that a spike in the formation of fat cells was found.
The investigators discovered that changing the volume of the cells caused them to behave as if they were under environmental pressure.
"The surprising thing about these experiments is the observation that volume seems to be related to so much about the cell. It seems to dictate the cell stiffness as well as the cell fate," says Weitz, also a core faculty member of the Wyss Institute for Biologically Inspired Engineering and director of the Materials Research Science and Engineering Center at Harvard.
"These observations may also have implications in external means of monitoring cell fate, which may be important for future biotech applications."
Future studies are needed to examine the effects of varied changes in volume, as well as if cell volume or external cues are the dominating factor in the fate of stem cells.
The Future of Regenerative Medicine
Stem cells sit at the forefront of regenerative medicine, providing researchers and clinicians with the potential to repair or replace damaged tissue and organs.
With the ability to develop into any type of specialized cell -- from a muscle cell to a red blood or brain cell -- stem cells hold the potential to treat various diseases and conditions, from heart disease to tooth loss. Bone marrow transplantation, one form of stem cell therapy, is already in widespread use.
Stem cells may also aid in drug development and the understanding of how cancer and birth defects occur.
Learning what causes differentiation among these cells will help researchers generate methods that influence their behavior and, ultimately, develop new therapies.
Read more at Science Daily
Pigeons better at multitasking than humans
Sara Letzner had humans compete against pigeons in a behavioural experiment.
Dr Sara Letzner and Prof Dr Dr h. c. Onur Güntürkün from Ruhr-Universität Bochum published the results in the journal "Current Biology" in collaboration with Prof Dr Christian Beste from the University Hospital Carl Gustav Carus at Technische Universität Dresden.
"For a long time, scientists used to believe the mammalian cerebral cortex to be the anatomical cause of cognitive ability; it is made up of six cortical layers," says Sara Letzner. In birds, however, such a structure does not exist. "That means the structure of the mammalian cortex cannot be decisive for complex cognitive functions such as multitasking," continues Letzner.
Six times as densely packed
The pallium of birds does not have any layers comparable to those in the human cortex; but its neurons are more densely packed than in the cerebral cortex in humans: pigeons, for example, have six times as many nerve cells as humans per cubic millimetre of brain. Consequently, the average distance between two neurons in pigeons is fifty per cent shorter than in humans. As the speed at which nerve cell signals are transmitted is the same in both birds and mammals, researchers had assumed that information is processed more quickly in avian brains than in mammalian brains.
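That spacing figure follows from simple geometry: in a uniform packing, the typical distance between neighbours scales as the inverse cube root of the number density, so a sixfold density puts neurons at roughly half the spacing. A one-line check:

```python
# Typical neighbour distance in a uniform 3-D packing scales as
# density**(-1/3). A sixfold neuron density gives:
density_ratio = 6.0  # pigeon neurons per mm^3 relative to human
spacing_ratio = density_ratio ** (-1.0 / 3.0)
print(f"pigeon spacing is about {spacing_ratio:.2f}x the human spacing")
```

The result, about 0.55 times the human spacing, is consistent with the roughly fifty per cent figure quoted above.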
They tested this hypothesis using a multitasking exercise that was performed by 15 humans and 12 pigeons. In the experiment, both the human and the avian participants had to stop a task in progress and switch over to an alternative task as quickly as possible. The switchover to the alternative task was performed either at the same time the first task was stopped, or it was delayed by 300 milliseconds.
What makes pigeons faster
In the first case, real multitasking takes place, which means that two processes are running simultaneously in the brain, those being the stopping of the first task and switching over to the alternative task. Pigeons and humans both slow down by the same amount under double stress.
In the second case -- switching over to the alternative task after a short delay -- the processes in the brain undergo a change: the two processes, namely stopping the first task and switching over to the second task, alternate like in a ping-pong game. For this purpose, the groups of nerve cells that control both processes have to continuously send signals back and forth. The researchers had assumed that pigeons must have an advantage over humans because of their greater nerve cell density. They were, in fact, 250 milliseconds faster than humans.
Read more at Science Daily
Minimal Consciousness Restored in Man Who Was in a Vegetative State for 15 Years
A persistent vegetative state lasting longer than 12 months has long been considered irreversible. But a 35-year-old man severely injured in a car accident was partially revived by vagus nerve stimulation after lying in a vegetative state for 15 years.
The technique has been in use for many years for treating people with epilepsy or depression. But this is the first time that doctors attempted to treat a vegetative patient with the technique.
The vagus nerve connects the human brain stem to the heart, lungs, and digestive tract. It's the longest nerve in the body's autonomic nervous system, which mostly regulates unconscious functions like heart rate, digestion, and breathing.
Angela Sirigu, who led the research at the Institute of Cognitive Sciences – Marc Jeannerod in Lyon, France, said the technique could trigger a radical change in neurological treatments worldwide.
“Brain plasticity and brain repair are still possible even when hope seems to have vanished,” Sirigu said in a statement accompanying publication of research describing the procedure.
The research team began the experiment by looking for a particularly difficult case, to reduce the possibility that any improvements weren't simply a matter of chance and good timing. The patient chosen for the experiment had shown no signs of improvement in 15 years.
Doctors then implanted a vagus nerve stimulator in the man's chest, designed to send small pulses of electricity up the vagus nerve and into the brain.
After a month of constant stimulation, the patient's movements and brain activity improved significantly. He responded to simple commands, such as following an object with his eyes and turning his head upon request.
Computer monitoring of the patient’s brain activity confirmed major changes took place. Imaging scans showed increased metabolic activity in areas of the brain associated with movement, awareness, and sensation. A series of electroencephalogram tests suggested that the patient had improved from a “vegetative state” to a “minimally conscious state.”
By stimulating the vagus nerve, “it is possible to improve a patient's presence in the world,” Sirigu said.
The research was published in the journal Current Biology.
An estimated 25,000 people in the US lie in a vegetative state at any given time.
While the new study marks a positive development, researchers caution that the study is, by design, extremely limited in scope.
“We need to be a little cautious about this, because it's just one patient,” said neurologist Hae Won Shin, an associate professor at the University of North Carolina School of Medicine who was not involved in the research. “I'm really glad to hear that the patient responded positively to vagus nerve stimulation treatment after 15 years in a vegetative state, but it's only one case.”
The researchers are currently planning a larger collaborative study to confirm the therapeutic potential of VNS for patients in a vegetative state. The initial study was supported by France's National Center for Scientific Research, the French National Research Agency, and by a grant from the University of Lyon.
Shin, who specializes in epileptic disorders, said vagus nerve stimulation has a track record of proven efficacy in treating certain disorders — but there's a caveat: No one is quite sure how it works.
Read more at Seeker
Sep 25, 2017
Genes are controlled by 'Nano footballs,' scientists discover
By placing tiny glowing probes on transcription factors -- special chemicals inside cells which control whether a gene is switched 'on' or 'off' -- researchers gained a remarkable new insight into the way in which genes are controlled.
Crucially, they discovered that transcription factors operate not as single molecules, as was previously thought, but as spherical, football-like clusters of around seven to ten molecules, roughly 30 nanometres in diameter.
The discovery of these nano footballs will not only help researchers understand more about the basic ways in which genes operate, but may also provide important insights into human health problems associated with a range of different genetic disorders, including cancer.
The research, supported by the Biotechnology and Biological Sciences Research Council (BBSRC) and published in eLife, was carried out by scientists from the University of York, the University of Gothenburg, and Chalmers University of Technology, Sweden.
The researchers employed advanced super-resolution microscopy to look at the nano footballs in real time, using the same type of yeast cells utilised in baking and brewing beer.
Professor Mark Leake, Chair of Biological Physics at the University of York who led the work, said: "Our ability to see inside living cells, one molecule at a time, is simply breathtaking.
"We had no idea that we would discover that transcription factors operated in this clustered way. The textbooks all suggested that single molecules were used to switch genes on and off, not these crazy nano footballs that we observed."
The team believe the clustering process is due to an ingenious strategy of the cell to allow transcription factors to reach their target genes as quickly as possible.
Professor Leake said: "We found out that the size of these nano footballs is a remarkably close match to the gaps between DNA when it is scrunched up inside a cell. As the DNA inside a nucleus is really squeezed in, you get little gaps between separate strands of DNA which are like the mesh in a fishing net. The size of this mesh is really close to the size of the nano footballs we see.
"This means that nano footballs can roll along segments of DNA but then hop to another nearby segment. This allows the nano football to find the specific gene it controls much more quickly than if no nano hopping was possible. In other words, cells can respond as quickly as possible to signals from the outside, which is an enormous advantage in the fight for survival."
Genes are made from DNA, the so-called molecule of life. Since the discovery that DNA has a double helix shape, made in the 1950s by pioneering biophysics researchers, much has been learned about transcription factors which can control whether a gene is switched on or off.
If a gene is switched on, specialised molecular machinery in the cell reads off its genetic code and converts it into a single protein molecule. Thousands of different types of protein molecules can then be made, and their interactions drive the building of all of the remarkable structures found inside living cells.
The process of controlling which genes are switched on or off at any particular point in time is fundamental to all life. When it goes wrong, this can lead to serious health problems. In particular, dysfunctional switching of genes can result in cells which grow and divide uncontrollably, which can ultimately lead to cancer.
Read more at Science Daily
Ancient Egyptians Provided a Proper Burial to a Statue of a Revered Deity
Ancient Egyptians buried the statue of the deity Ptah — the god of craftsmen and sculptors — with other revered statues, including those of a sphinx, baboon, cat, Osiris, and Mut, in a pit next to Ptah's temple.
The statue of Ptah had likely sat in the temple for years, but it and the other sacred objects were respectfully buried after they accumulated damage and were declared useless by the ancient Egyptians, the researchers said.
"We can consider that when a new statue was erected in the temple, this one [of Ptah] was set aside in a pit," said study co-researcher Christophe Thiers, director of the French-Egyptian Center for the Study of the Temples of Karnak. "The other artifacts were also previously damaged during their ‘lifetime’ in the temple, and then they were buried with the Ptah statue."
Archaeologists discovered the pit in December 2014 at Karnak, an Egyptian temple precinct, and spent about a month excavating its rich assemblage. The pit held 38 objects, including:
Fourteen statuettes and figurines of Osiris.
Eleven fragments of inlay from statues.
The inlay included that of an iris, a cornea, a false beard, a cap, a strand of hair and an inlay plaque.
Three baboon statuettes (representing the god Thoth).
Two statuettes of the goddess Mut (one with hieroglyphic inscriptions).
Two unidentified statuette bases.
One head and one fragmentary statuette of a cat (Bastet).
One small fragmentary faience stele (a stone slab) recording the name of the god Ptah.
One head of a statuette of a man in gilded limestone.
One lower part of a statue of the seated god Ptah, sawn and repaired.
One sphinx.
One unidentified metal piece.
Next to the statue, the Egyptians would have placed a wooden effigy of the god Osiris that had metal appliqué, including a beard and two feathers in its crown. Then, the other artifacts would have been distributed around these two artifacts, which were then covered with about 8 inches (20 centimeters) of backfill. This is where the ancient Egyptians placed a statue of a small limestone sphinx.
The pit was then covered with more backfill. At the top, the Egyptians placed a small male head made of gilded limestone, likely for protection, the researchers said.
Read more at Seeker