We have a 'thirst for knowledge' but sometimes 'ignorance is bliss', so how do we choose between these two states of mind at any given time?
UCL psychologists have discovered that our brains use the same algorithm and neural architecture to evaluate the opportunity to gain information as they do to evaluate rewards like food or money.
Funded by the Wellcome Trust, the research, published in the Proceedings of the National Academy of Sciences, also finds that people will spend money to both obtain advance knowledge of a good upcoming event and to remain ignorant of an upcoming bad event.
Senior author Dr Tali Sharot (UCL Experimental Psychology) said: "The pursuit of knowledge is a basic feature of human nature, however, in issues ranging from health to finance, people sometimes choose to remain ignorant."
"Our research shows that the brain's reward circuitry selectively treats the opportunity to gain knowledge about future favorable outcomes, but not unfavorable outcomes, as a reward in and of itself, explaining why knowledge may not always be preferred."
In the study 62 participants performed a computer task, and were asked whether they wanted to receive information or remain ignorant about the outcome of lotteries, which had a mixture of favourable (high probability of winning) or unfavourable (high probability of losing) odds. The lottery was played out regardless of whether the volunteers selected to know the outcome and they received the total payment of all lotteries at the end of the game.
In addition, the brain activity of 36 of the participants was scanned while they performed the task. The researchers found that activity in the brain's reward system -- the nucleus accumbens and ventral tegmental area -- in response to the opportunity to receive information about good lotteries, but not about bad lotteries, displayed a pattern similar to what is observed in response to material rewards. This brain signal was independent from the brain response observed when participants found out whether they won or lost the lottery, and it predicted their preference for information.
"When participants were told they were about to gain information, the more likely information was to convey good news, the more likely we were to observe a neural signature typical of reward processing," Dr Sharot added.
"The findings may help explain why people are more likely to check their bank accounts when they believe their value has gone up and less likely to do so when they suspect it has gone down."
Lead author Dr Caroline Charpentier (formerly UCL Psychology, now at the California Institute of Technology) said: "Our findings are consistent with the theory that beliefs have utility in and of themselves. This means believing that something will happen has the power to affect us in positive and negative ways, similar to how actual events affect us."
Read more at Science Daily
Jun 29, 2018
Meteorite 'Black Beauty' expands window for when life might have existed on Mars
The oldest known zircon from Mars.
Crust formation is an important step in the development of terrestrial planets, and what makes Black Beauty special and expensive is that it contains small pieces of the crust from Mars. More precisely, Black Beauty contains the rare mineral zircon, in which researchers have found a high concentration of hafnium.
"Zircon is a very robust mineral that is ideally suited to provide absolute ages. In this context, the zircons can be used to establish a temporal framework to understand the formation history of the Martian crust," says Professor Martin Bizzarro. "Zircon also acts as a small time capsule as it preserves information about the environment where and when it was created. In this case, a time capsule with hafnium that originates from the earliest crust of Mars, which was present approximately 100 million years before the oldest zircon of Black Beauty was created. Thus, Mars got an early start compared to Earth, whose solid crust wasn't formed until much later."
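Zircon's usefulness as an absolute clock comes from radioactive decay trapped within the crystal. As a rough, hypothetical illustration (the decay constant is the standard value for uranium-238, but the isotope ratio below is invented, not a measurement from Black Beauty), a U-Pb age can be computed like this:

```python
import math

# Hypothetical U-Pb age calculation of the kind used to date zircons.
# The decay constant is the standard 238U value; the isotope ratio is
# an invented example, not data from the Black Beauty meteorite.
LAMBDA_238U = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age(pb206_u238: float) -> float:
    """Age in years implied by a radiogenic 206Pb/238U ratio."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

# A ratio near 1.0 corresponds to an age of roughly 4.4 billion years,
# comparable to the oldest zircons known from Mars.
age = u_pb_age(0.99)
print(f"{age / 1e9:.2f} Gyr")
```

Because zircon resists weathering and metamorphism, the ratio measured today still reflects the crystal's formation, which is what makes it such a reliable time capsule.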
However, it required a certain amount of courage to reach this result.
We crushed the meteorite
The original Black Beauty meteorite, weighing 319.8 grams, was found in the Sahara Desert in 2011. It soon became apparent that the meteorite was something special, and it currently has a sales price of approximately $10,000 per gram. A year ago, Professor Martin Bizzarro managed to acquire 44 grams of Black Beauty with help from various funding agencies and an exchange of meteorites from the museum's collection.
Read more at Science Daily
Mars valleys traced back to precipitation
The central portion of Osuga Valles, which has a total length of 164 km. In some places, it is 20 km wide and plunges to a depth of 900 m.
A new study now suggests that the branching structure of the former river networks on Mars has striking similarities with terrestrial arid landscapes. This has been demonstrated in a recent paper published in Science Advances by physicist Hansjörg Seybold from the group of James Kirchner, ETH professor at the Institute for Terrestrial Ecosystems, and planetary specialist Edwin Kite from the University of Chicago.
Valleys eroded mainly by rainwater
Using statistics from all mapped river valleys on Mars, the researchers conclude that the contours still visible today must have been created by superficial run-off of (rain)water. Consequently, the influence of groundwater seepage from the soil can be excluded as a dominant process for shaping these features.
The distribution of the branching angles of the valleys on Mars is very similar to those found in arid landscapes on Earth. According to lead author Seybold, this implies that there must have been a similar hydrological environment with sporadic heavy rainfall events on Mars over a prolonged period of time, and that this rainwater may have run off quickly over the surface, shaping the valley networks. This is how river valleys develop in arid regions on Earth. For example, in Arizona, researchers observed the same valley network patterns in a landscape where astronauts are training for future Mars missions. Valleys in arid regions fork at a narrow angle.
The branching angles on Mars are comparatively low. Seybold therefore rules out the influence of groundwater sapping as the major channel forming process on Mars. River networks that are formed by re-emerging groundwater, as found, for example, in Florida, tend to have much wider branching angles between the two tributaries and do not match the narrow angles of streams in arid areas.
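The junction-angle statistic at the heart of the study is straightforward to compute: it is the angle between the flow directions of two tributaries where they meet. The sketch below uses invented direction vectors, not data from the Mars valley-network maps, to show how the measure separates narrow arid-style forks from wide sapping-style ones:

```python
import math

# Sketch of a branching-angle measurement: the angle between the flow
# directions of two tributaries at their junction. The vectors are
# invented examples, not data from the mapped Martian valleys.
def branching_angle(v1, v2):
    """Angle in degrees between two tributary direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_theta))

# A tight fork, like runoff-carved valleys in arid landscapes:
narrow = branching_angle((1.0, 0.3), (1.0, -0.3))
# A wide fork, more typical of groundwater-sapping networks:
wide = branching_angle((1.0, 0.8), (1.0, -0.8))
print(f"narrow: {narrow:.0f} deg, wide: {wide:.0f} deg")
```

Aggregated over thousands of junctions, the distribution of such angles is what lets the researchers distinguish surface runoff from groundwater seepage as the dominant channel-forming process.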
Conditions such as those found in terrestrial arid landscapes today probably prevailed on Mars for only a relatively short period about 3.6 to 3.8 billion years ago. In that period, the atmosphere on Mars may have been much denser than it is today. "Recent research shows that there must have been much more water on Mars than previously assumed," says Seybold.
Evaporation made it rain
One hypothesis suggests that the northern third of Mars was covered by an ocean at that time. Water evaporated, condensed around the high volcanoes of the highlands to the south of the ocean and led to heavy precipitation. As a result, rivers formed, which left traces that can still be observed on Mars today.
Read more at Science Daily
More clues that Earth-like exoplanets are indeed Earth-like
An artist's depiction of Kepler-186f.
Kepler-186f is the first identified Earth-sized planet outside the solar system orbiting a star in the habitable zone. This means it's the proper distance from its host star for liquid water to pool on the surface.
The Georgia Tech study used simulations to analyze and identify the exoplanet's spin axis dynamics. Those dynamics determine how much a planet tilts on its axis and how that tilt angle evolves over time. Axial tilt contributes to seasons and climate because it affects how sunlight strikes the planet's surface.
The researchers suggest that Kepler-186f's axial tilt is very stable, much like the Earth, making it likely that it has regular seasons and a stable climate. The Georgia Tech team thinks the same is true for Kepler-62f, a super-Earth-sized planet orbiting around a star about 1,200 light-years away from us.
How important is axial tilt for climate? Large variability in axial tilt could be a key reason why Mars transformed from a watery landscape billions of years ago to today's barren desert.
"Mars is in the habitable zone in our solar system, but its axial tilt has been very unstable -- varying from zero to 60 degrees," said Georgia Tech Assistant Professor Gongjie Li, who led the study together with graduate student Yutong Shan from the Harvard-Smithsonian Center for Astrophysics. "That instability probably contributed to the decay of the Martian atmosphere and the evaporation of surface water."
As a comparison, Earth's axial tilt oscillates more mildly -- between 22.1 and 24.5 degrees, going from one extreme to the other roughly every 41,000 years.
The orientation angle of a planet's orbit around its host star can be made to oscillate by gravitational interaction with other planets in the same system. If the orbit were to oscillate at the same speed as the precession of the planet's spin axis (akin to the circular motion exhibited by the rotation axis of a top or gyroscope), the spin axis would also wobble back and forth, sometimes dramatically.
Mars and Earth interact strongly with each other, as well as with Mercury and Venus. As a result, by themselves, their spin axes would precess at the same rate as the orbital oscillation, which may cause large variations in their axial tilt. Fortunately, the moon keeps Earth's variations in check. The moon increases our planet's spin axis precession rate and makes it differ from the orbital oscillation rate. Mars, on the other hand, doesn't have a large enough satellite to stabilize its axial tilt. "It appears that both exoplanets are very different from Mars and the Earth because they have a weaker connection with their sibling planets," said Li, a faculty member in the School of Physics. "We don't know whether they possess moons, but our calculations show that even without satellites, the spin axes of Kepler-186f and 62f would have remained constant over tens of millions of years."
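The resonance argument can be caricatured as a driven oscillator: the tilt's response grows sharply when the spin-axis precession rate approaches the rate of the orbital forcing. The toy model below is purely illustrative, with arbitrary units and invented rates; it is not the team's actual secular-dynamics calculation:

```python
# Toy caricature of spin-orbit resonance: treat the axial tilt as a driven
# oscillator whose steady-state response grows as the spin-axis precession
# rate approaches the orbital forcing rate. All rates and units here are
# arbitrary, invented values, not results from the study.
def tilt_response(precession_rate, forcing_rate, forcing_strength=1.0):
    """Steady-state tilt-oscillation amplitude (arbitrary units)."""
    detuning = abs(precession_rate - forcing_rate)
    return forcing_strength / max(detuning, 1e-9)

# Earth with the Moon: precession is sped up, far from the orbital rate.
earth_like = tilt_response(precession_rate=50.0, forcing_rate=26.0)
# Mars-like case: precession nearly matches the orbital oscillation rate.
mars_like = tilt_response(precession_rate=26.5, forcing_rate=26.0)
print(earth_like, mars_like)  # the near-resonant case responds far more strongly
```

The same qualitative logic explains the exoplanet result: with precession rates well separated from their orbital oscillation rates, Kepler-186f and 62f sit far from resonance even without a moon.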
Kepler-186f is less than 10 percent larger in radius than Earth, but its mass, composition and density remain a mystery. It orbits its host star every 130 days. According to NASA, the brightness of that star at high noon, while standing on 186f, would appear as bright as the sun just before sunset here on Earth. Kepler-186f is located in the constellation Cygnus as part of a five-planet star system.
Kepler-62f was the most Earth-like exoplanet until scientists noticed 186f in 2014. It's about 40 percent larger than our planet and is likely a terrestrial or ocean-covered world. It's in the constellation Lyra and is the outermost planet among five exoplanets orbiting a single star.
That's not to say either exoplanet has water, let alone life. But both are relatively good candidates.
"Our study is among the first to investigate climate stability of exoplanets and adds to the growing understanding of these potentially habitable nearby worlds," said Li.
"I don't think we understand enough about the origin of life to rule out the possibility of their presence on planets with irregular seasons," added Shan. "Even on Earth, life is remarkably diverse and has shown incredible resilience in extraordinarily hostile environments."
Read more at Science Daily
Jun 28, 2018
This curious animal grew larger over time -- but its brain didn't quite keep up
Ornella's research on the brain evolution of mammals involves developing 3D models of an endocast, which is the imprint of the brain inside the cranium.
The research, which is published in the journal Palaeontology, offers a rare case of an animal's brain becoming smaller relative to its body size, likely due to a change in its lifestyle over time.
The mountain beaver (Aplodontia rufa) is a rodent that's adapted to burrowing, meaning it lives mostly underground in tunnels dug deep into the soil. But fossil records show that its 30-million-year-old ancestor was better adapted to living in trees, similar to squirrels.
"Early squirrels and the mountain beaver's ancestor had a similar, relative brain size," says Ornella Bertrand, a postdoctoral fellow in the Department of Anthropology at U of T Scarborough and lead author of the study.
But something happened over time. While the mountain beaver can climb trees like its ancestor and squirrels -- albeit likely not as well -- they rarely travel too far from their burrows and are mostly nocturnal. As a result of mostly living underground and being less reliant on their vision, it appears an area of the neocortex responsible for sight may have shrunk over time.
"The brain is metabolically expensive, meaning it needs a lot of food energy to function," says Bertrand, whose research focuses on the brain evolution of mammals. "So the parts of the brain that are not crucial for survival might have been selected against."
Bertrand and her team compared virtual endocasts -- the imprint the brain makes against the inner part of the cranium -- and found that it may have been the part of the brain related to sight specifically that shrunk over time.
"There appears to be a relationship between being arboreal -- that is living in trees -- the size of the neocortex and strong vision," says Bertrand. She adds that over time as the modern mountain beaver relied less on its vision, its neocortex decreased in size as a result.
While the modern mountain beaver actually has a larger overall brain size compared to its ancestor, it has a smaller brain relative to its body size, notes Bertrand.
An evolutionary decrease in brain size has been observed in domesticated animals like chickens, pigs and dogs, but this is a rare example of a decrease in brain size due to a specific shift in where the animal spends most of its time, says Bertrand.
As for when this change began to take place, it's likely too hard to tell at this point. "It's difficult to pinpoint when the relative size of the brain started to decrease since we only have three specimens to go by," she adds.
Mountain beavers are native to the northwestern U.S. and parts of southern British Columbia, particularly in the Cascade Mountains. Large by rodent standards -- weighing about 500 to 900 g and measuring 30 to 50 cm in length -- they're not closely related to the North American beaver.
Read more at Science Daily
Insight into the physics of the Higgs particle
This is Corinna Kollath from the Helmholtz-Institut für Strahlen- und Kernphysik at the University of Bonn.
For their experiments, scientists at the University of Bonn used a gas made of lithium atoms, which they cooled down significantly. At a certain temperature, the state of the gas changes abruptly: It becomes a superconductor that conducts a current without any resistance. Physicists also speak of a phase transition. A similar sudden change occurs with water when it freezes.
The lithium gas changes to a more orderly state at its phase transition. This includes the formation of so-called Cooper pairs, which are combinations of two atoms that behave like a single particle to the outside.
Partner-dancing atoms
These pairs behave fundamentally differently from individual atoms: They move together and can do so without scattering on other atoms or pairs. This is the reason for the superconductivity. But what happens when you try to excite the pairs?
"We illuminated the gas with microwave radiation," explains Prof. Dr. Michael Köhl from the Physics Institute at the University of Bonn. "This allowed us to create a state in which the pairs start to vibrate and the quality of the superconductivity therefore oscillated very quickly: One moment the gas was a good superconductor, the next a bad one."
This common oscillation of the Cooper pairs corresponds to the Higgs boson discovered at CERN in 2012. As this state is very unstable, only a handful of working groups worldwide have succeeded in producing it.
The experiments allow an insight into certain physical properties of the Higgs boson. For example, the physicists hope that studies like these will enable them to better understand the decay of this extremely short-lived particle in the medium term.
Fast-switchable superconductors
But the experiments are also interesting for another reason: They show a way to switch superconductivity on and off very quickly. Superconductors normally try to remain in their conductive state for as long as possible. They can be dissuaded by heating, but this is a very slow process. The experiments show that in principle this can also be over a thousand times faster. This insight may open up completely new applications for superconductors.
Read more at Science Daily
Paleontologists ID two new Miocene mammals in Bolivia
The animals, which look similar to small moose or deer in a paleoartist's rendering, are being dubbed Theosodon arozquetai and Llullataruca shockeyi, ungulates native only to Bolivia. They lived in the latter part of the middle Miocene epoch, a time interval from which relatively few fossils have been collected in South America.
The discoveries, announced in the June edition of the Journal of Vertebrate Paleontology, are important not only because they document two species previously unknown to science, but also because they come from the tropical latitudes of South America. The northern half of South America harbors a rich diversity of living mammals, but is a difficult place to find fossils of them.
"Studying fossils from regions such as Bolivia, where few others have looked, has allowed us discover and describe a variety of new species that are changing our views about the history of South America's mammals," said Darin Croft, a biology professor at Case Western Reserve, who co-led the expeditions that recovered the fossils.
The lead author on the journal publication was one of Croft's former students, Case Western Reserve graduate Andrew McGrath, who is now studying this group of animals for his PhD at the University of California-Santa Barbara.
"These new species hint at what might be hiding in the northern parts of South America," McGrath said. "For example, close relatives of Llullataruca disappeared from southern South America around 20 million years ago, but based on our research, we now know they were able to persist some seven million years longer in Bolivia and northern South America than in Patagonia."
Federico Anaya of Bolivia's Universidad Autónoma "Tomas Frías" in Potosí also collaborated on the project. Croft and Anaya have been working together in Bolivia for more than 15 years.
Croft, who has a primary appointment in anatomy at the School of Medicine, is considered one of the world's leaders in neotropical paleomammalogy, the study of South America's prehistoric mammals. Since South America was geographically isolated for most of the past 66 million years, its rich fossil record makes it a perfect location to "investigate topics such as mammal adaptation, diversification, and community ecology," according to his website.
Some of that work was covered in his 2016 book with Chicago-based artist Velizar Simeonovski, Horned Armadillos and Rafting Monkeys: The Fascinating Fossil Mammals of South America, which received an Independent Publisher Book Awards gold medal in science in 2017.
"South America was untouched by mammals from other continents for millions of years, so the solutions its native mammals came up with were often different from those developed by mammals elsewhere," he said. "By comparing how mammals on different continents have evolved to deal with similar ecological situations, we are able to gauge which characteristics developed due to universal ecological principles and which were peculiar to a certain place and time."
Recently, Croft and collaborators explored that question by digging further into the mysteries of how some 11 species of mammals known as "sparassodonts" -- extinct weasel-to-jaguar-sized meat-eating marsupials -- were able to co-exist during the early Miocene (about 18 million years ago) in southern Argentina.
The research has left Croft and others wrestling with what he calls a "carnivore conundrum."
In short, they are being challenged by findings that suggest that either all ancient carnivorous sparassodonts were crammed into a very narrow meat-eating niche (think mountain lion) -- or some were actually omnivores (think raccoon), but had teeth that did not reflect their varied diet.
Read more at Science Daily
Milky Way is rich in grease-like molecules
The Milky Way.
Organic matter of different kinds contains carbon, an element considered essential for life. There is, though, real uncertainty over its abundance: only half the carbon expected is found between the stars in its pure form. The rest is chemically bound in two main forms, grease-like (aliphatic) and mothball-like (aromatic).
The UNSW / Ege team created material in the laboratory with the same properties as interstellar dust. They mimicked the process by which organic molecules are synthesised in the outflows of carbon stars by expanding a carbon-containing plasma into a vacuum at low temperature. The material was collected and then analysed by a combination of techniques. Using magnetic resonance and spectroscopy (splitting light into its constituent wavelengths), they were able to determine how strongly the material absorbed light at a certain infrared wavelength, a marker for aliphatic carbon.
"Combining our lab results with observations from astronomical observatories allows us to measure the amount of aliphatic carbon between us and the stars," explained Professor Tim Schmidt, from the Australian Research Council Centre of Excellence in Exciton Science in the School of Chemistry at UNSW Sydney.
The researchers found that there are about 100 greasy carbon atoms for every million hydrogen atoms, accounting for between a quarter and a half of the available carbon. In the Milky Way Galaxy, this amounts to about 10 billion trillion trillion tonnes of greasy matter, or enough for 40 trillion trillion trillion packs of butter.
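These headline numbers can be sanity-checked with back-of-envelope arithmetic. The interstellar hydrogen mass used below is a rough assumed value, not a figure from the paper; the rest follows from the quantities in the article:

```python
# Back-of-envelope check of the quoted "space grease" figures. The
# interstellar hydrogen mass is a rough assumed value; the other inputs
# come from the article.
M_H_ISM_KG = 1.2e40          # assumed hydrogen mass in the Galaxy's gas, kg
C_PER_MILLION_H = 100        # greasy carbon atoms per million hydrogen atoms
MASS_RATIO_C_TO_H = 12.0     # a carbon atom weighs ~12x a hydrogen atom

grease_kg = M_H_ISM_KG * (C_PER_MILLION_H / 1e6) * MASS_RATIO_C_TO_H
grease_tonnes = grease_kg / 1000.0
butter_packs = grease_kg / 0.25  # assuming a 250 g pack of butter

print(f"{grease_tonnes:.1e} tonnes")  # ~1e34: the "10 billion trillion trillion tonnes"
print(f"{butter_packs:.1e} packs")    # a few times 1e37: "40 trillion trillion trillion packs"
```

Both results land within a factor of a few of the article's figures, which is as close as such order-of-magnitude estimates are meant to get.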
Schmidt is quick to dispel the comparison with anything edible: "This space grease is not the kind of thing you'd want to spread on a slice of toast! It's dirty, likely toxic and only forms in the environment of interstellar space (and our laboratory). It's also intriguing that organic material of this kind -- material that gets incorporated into planetary systems -- is so abundant."
Read more at Science Daily
`Oumuamua gets a boost
The team, led by Marco Micheli (European Space Agency) explored several scenarios to explain the faster-than-predicted speed of this peculiar interstellar visitor. The most likely explanation is that `Oumuamua is venting material from its surface due to solar heating -- a behaviour known as outgassing. The thrust from this ejected material is thought to provide the small but steady push that is sending `Oumuamua hurtling out of the Solar System faster than expected -- as of 1 June 2018 it is traveling at roughly 114,000 kilometres per hour.
Such outgassing is a behaviour typical for comets and contradicts the previous classification of `Oumuamua as an interstellar asteroid. "We think this is a tiny, weird comet," commented Marco Micheli. "We can see in the data that its boost is getting smaller the farther away it travels from the Sun, which is typical for comets."
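The comet-like signature described here is that the outgassing push weakens as the object recedes from the Sun. A minimal sketch (a bare inverse-square fall-off in arbitrary units, far simpler than the Marsden-style non-gravitational-force models actually fitted to such data) illustrates the trend:

```python
# Minimal sketch of comet-like outgassing thrust: the push weakens with
# heliocentric distance, modelled here as a bare inverse-square law.
# Real non-gravitational-force models are more elaborate, and the scale
# and distances below are illustrative values, not fitted ones.
def outgassing_accel(r_au: float, a0: float = 1.0) -> float:
    """Radial non-gravitational acceleration (arbitrary units) at r_au."""
    return a0 / r_au ** 2

near_sun = outgassing_accel(1.4)  # closer to the Sun: stronger boost
far_out = outgassing_accel(2.9)   # farther out, months later: weaker boost
print(near_sun > far_out)         # the boost shrinks as the object recedes
```

It is exactly this fading of the boost with distance, seen in the tracking data, that points to sublimating material rather than, say, a measurement artefact.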
Usually, when comets are warmed by the Sun they eject dust and gas, which form a cloud of material around them, called a coma, as well as the characteristic tail. However, the research team could not detect any visual evidence of outgassing.
"We did not see any dust, coma, or tail, which is unusual," explained co-author Karen Meech of the University of Hawaii, USA. Meech led the discovery team's characterisation of `Oumuamua in 2017. "We think that 'Oumuamua may vent unusually large, coarse dust grains."
The team speculated that perhaps the small dust grains adorning the surface of most comets eroded during `Oumuamua's journey through interstellar space, with only larger dust grains remaining. Though a cloud of these larger particles would not be bright enough to be detected, it would explain the unexpected change to 'Oumuamua's speed.
Not only is `Oumuamua's hypothesised outgassing an unsolved mystery; so is its interstellar origin. The team originally performed the new observations to determine `Oumuamua's path precisely, which would probably have allowed them to trace the object back to its parent star system. The new results mean it will be more challenging to obtain this information.
Read more at Science Daily
Jun 26, 2018
New study explains Antarctica's coldest temperatures
The East Antarctic Plateau is a windswept, desolate expanse the size of Australia with few bases or instruments.
After sifting through data from several Earth-observing satellites, scientists announced in 2013 that they found surface temperatures of -93 degrees Celsius (-135 degrees Fahrenheit) in several spots on the East Antarctic Plateau, a high snowy plateau in central Antarctica that encompasses the South Pole. That preliminary study has been revised with new data showing that the coldest sites actually reach -98 degrees Celsius (-144 degrees Fahrenheit). The temperatures are observed during the southern polar night, mostly during July and August.
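The paired Celsius/Fahrenheit values quoted throughout this article can be checked with the standard conversion:

```python
# Checking the paired Celsius/Fahrenheit values quoted in the article.
def c_to_f(c: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

print(round(c_to_f(-93)))  # -135: the preliminary 2013 record
print(round(c_to_f(-98)))  # -144: the revised record
print(round(c_to_f(-89)))  # -128: the Vostok weather-station record
```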
When the researchers first announced they had found the coldest temperatures on Earth five years ago, they determined that persistent clear skies and light winds are required for temperatures to dip this low. But the new study adds a twist to the story: Not only are clear skies necessary, but the air must also be extremely dry, because water vapor blocks the loss of heat from the snow surface.
The researchers observed the ultra-low temperatures in small dips or shallow hollows in the Antarctic Ice Sheet where cold, dense, descending air pools above the surface and can remain for several days. This allows the surface, and the air above it, to cool still further, until the clear, calm, and dry conditions break down and the air mixes with warmer air higher in the atmosphere.
"In this area, we see periods of incredibly dry air, and this allows the heat from the snow surface to radiate into space more easily," said Ted Scambos, a senior research scientist at the National Snow and Ice Data Center at the University of Colorado Boulder and the study's lead author.
The record of -98 degrees Celsius is about as cold as it is possible to get at Earth's surface, according to the researchers. For the temperature to drop that low, clear skies and dry air need to persist for several days. Temperatures could drop a little lower if the conditions lasted for several weeks, but that's extremely unlikely to happen, Scambos said.
Finding the coldest place
The high elevation of the East Antarctic Plateau and its proximity to the South Pole give it the coldest climate of any region on Earth. The lowest air temperature ever measured by a weather station, -89 degrees Celsius (-128 degrees Fahrenheit), was recorded there at Russia's Vostok Station in July 1983.
But weather stations can't measure temperatures everywhere. So in 2013, Scambos and his colleagues decided to analyze data from several Earth-observing satellites to see if they could find temperatures on the plateau even lower than those recorded at Vostok.
In the new study, they analyzed satellite data collected during the Southern Hemisphere's winter between 2004 and 2016. They used data from the MODIS instrument aboard NASA's Terra and Aqua satellites as well as data from instruments on NOAA's Polar Operational Environmental Satellites.
The researchers observed snow surface temperatures regularly dropping below -90 degrees Celsius (-130 degrees Fahrenheit) almost every winter in a broad region of the plateau, more than 3,500 meters (11,000 feet) above sea level. Within this broad region, they found dozens of sites had much colder temperatures. Nearly 100 locations reached surface temperatures of -98 degrees Celsius.
The atmosphere in this region can sometimes have less than 0.2 mm total precipitable water above the surface. But even when it is that dry and cold, the air traps some of the heat and sends it back to the surface. This means that the cooling rates are very slow as the surface temperatures approach the record values. Conditions do not persist long enough -- it could take weeks -- for the temperatures to dip below the observed records. However, the temperature measured from satellites is the temperature of the snow surface, not the air above it. So the study also estimated the air temperatures by using nearby automatic weather stations and the satellite data.
Interestingly, even though the coldest sites were spread out over hundreds of kilometers, the lowest temperatures were all nearly the same. That got the researchers wondering: Is there a limit to how cold it can get on the plateau?
How cold is it really?
Using the difference between the satellite measurements of the lowest surface snow temperatures at Vostok and three automated stations, and the air temperatures at the same place and time, the researchers inferred that the air temperatures at the very coldest sites (where no stations exist) are probably around -94 degrees Celsius, or about -137 degrees Fahrenheit.
Read more at Science Daily
Journey into the lungs of mice infected with influenza
In the 1966 novel, Fantastic Voyage, written by biochemist and author Isaac Asimov, a crew of people become miniaturized in order to travel through the body of a scientist and save him from a blood clot in his brain.
For University of Wisconsin-Madison virologist and flu expert Yoshihiro Kawaoka, recently seeing real, active influenza infection in the lungs of living mice for the first time was reminiscent of this 50-year-old piece of science fiction, which was also adapted into a film.
Publishing today (June 25, 2018) in the Proceedings of the National Academy of Sciences, Kawaoka and his team describe a new tool they call FluVision, which allows them to witness influenza infection in a living animal in action. Moreover, it provides a window into a world none have seen before, allowing scientists to observe and better understand what happens when a virus infects the lungs and the body responds.
"Now we can see inside of the body in real time in virus-infected animals," says the UW-Madison professor of pathobiological sciences at the UW-Madison School of Veterinary Medicine. "It's like we can shrink and go inside the body."
In so doing, the scientists have documented differences in the action of two different strains of flu, witnessed influenza viruses as they spread in the lungs, showed a reduction in blood flow speed in infected areas of the lungs, watched the activation and behavior of immune cells called neutrophils, and revealed some of the damage that can be caused by infection with a highly-pathogenic flu strain.
Notably, infection with a highly-pathogenic strain of influenza -- the "bird flu," H5N1 -- proceeds more quickly and causes more damage than infection with a milder, mouse-adapted human strain -- H1N1. Pathogenicity refers to the ability of a virus to cause disease.
To microscopically peer inside the lungs of living mice, Kawaoka's team had to overcome several challenges. The first was to find technology that allowed them to see through the lungs. Another group had pioneered this with an approach called two-photon excitation microscopy, and Kawaoka's team adapted it for its study.
The team had to build a system that allowed it to work with influenza viruses at a high level of biosafety -- biosafety level three -- while also allowing technicians access to the laser source required to see objects of interest in the lungs. Lead author Hiroshi Ueki helped design a system in which the laser is located outside of the high-containment lab space, aimed through a small glass window to a microscope inside the lab, all built on stabilized platforms that had to be physically separated but virtually connected.
Kawaoka's team also had to create fluorescently-labeled viruses that could be used to infect the mice and viewed with the laser under the microscope, but which also functioned similarly to viruses found in nature. They call the technology Color-flu.
In addition, the researchers had to develop a method for keeping a portion of the lung still while the mouse breathed so they could get high-quality images and videos. The team had a small, custom-crafted device made called a thoracic window, which Kawaoka says has been patented, that uses a vacuum to stabilize a small portion of surgically-exposed lung during imaging.
For the study, the researchers infected mice with either fluorescently-labeled H5N1 or H1N1. Two days after infection, they could see cells in the lung infected with virus particles. The numbers of these cells reached their peak three days after infection and were higher in H5N1-infected lungs.
Blood flow in the capillaries of influenza-infected lungs slowed down after infection with either virus, though to a lesser extent in H1N1-infected mice. This suggests the viruses affect the vascular system before causing lung damage.
The lungs of mice infected with H5N1 also became "leaky" two days after virus exposure, whereby the contents of the capillaries permeated into the tiny air sacs of the lungs, called alveoli. This was also associated with an increase in the number of dead cells in the lungs.
"Clearly, something is wrong with the pulmonary capillaries," says Kawaoka, who is also a professor at the University of Tokyo, where the work was performed. "The reason why we see this leakage is that the junctions between endothelial cells (which make up the vessels in the lungs) loosen for some reason. We have documented this for the first time."
Studying the mechanisms of infection can be something of a chicken-and-egg endeavor, because once infection starts, so does the body's response, triggering a cascade of actions that can also cause some of the damage associated with the pathogen. Some of it, like the pulmonary leakiness Kawaoka's team observed, may help the body respond, he says. This relationship between virus and host is what drives much of Kawaoka's curiosity.
His team chose to look at immune cells called neutrophils, one of the body's first lines of defense. Their action can cause inflammation. In mice infected with H5N1, neutrophils were recruited to the lungs on the first day after exposure, becoming six times more prevalent. They doubled in the lungs of mice infected with H1N1.
After the number of influenza-infected cells peaked on day three, neutrophil numbers dropped, but those that remained behaved differently than neutrophils in healthy mice.
The team found that neutrophils show two kinds of motion: slow and rapid. In influenza-infected lungs, the neutrophils that remained after the peak day showed a decrease in rapid motion and spent more time moving slowly, as if scouting for infected cells.
"We don't yet know why and what they're doing," says Kawaoka, but, he notes, this is the first time this behavior has been documented. And for him, it is motivation to dig even deeper, and adapt the technology for other respiratory viruses.
Read more at Science Daily
A galactic test will clarify the existence of dark matter
This picture shows the distribution of dark matter (above) and stars (below).
Using one of the fastest supercomputers in the world, the scientists have simulated the matter distribution of the so-called satellite "dwarf" galaxies. These are small galaxies that surround, for instance, the Milky Way or Andromeda.
The researchers focused on a relationship called "radial acceleration relation" (RAR). In disk galaxies, stars move in circular orbits around the galactic center. The acceleration that forces them to constantly change direction is caused by the attraction of matter in the galaxy. The RAR describes the relationship between this acceleration and the one caused by the visible matter only. It provides an insight into the structure of galaxies and their matter distribution.
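The RAR described above has a widely used empirical summary in the literature (the fitting function of McGaugh, Lelli and Schombert, 2016). The sketch below uses that published form with its commonly quoted acceleration scale; it is background context, not a result of the Bonn study itself:

```python
import math

# Empirical RAR fitting function from the literature (McGaugh et al. 2016).
# G_DAGGER is the fitted acceleration scale, roughly 1.2e-10 m/s^2.
G_DAGGER = 1.2e-10

def g_observed(g_baryonic: float) -> float:
    """Total (observed) centripetal acceleration predicted from the
    acceleration produced by visible (baryonic) matter alone."""
    return g_baryonic / (1.0 - math.exp(-math.sqrt(g_baryonic / G_DAGGER)))

# At high accelerations, visible matter accounts for nearly everything:
print(g_observed(1e-8) / 1e-8)    # close to 1
# At low accelerations (the dwarf-galaxy regime), the observed acceleration
# far exceeds what visible matter alone can supply:
print(g_observed(1e-13) / 1e-13)  # much greater than 1
```

The low-acceleration excess is exactly what either dark matter or modified gravity must explain, which is why the RAR of dwarf satellites makes such a sharp test.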
"We have now simulated, for the first time, the RAR of dwarf galaxies on the assumption that dark matter exists," explains Prof. Dr. Cristiano Porciani of the Argelander Institute for Astronomy at the University of Bonn. "It turned out that they behave as scaled-down versions of larger galaxies." But what if there is no dark matter and instead gravity "works" differently than Newton thought? "In this case the RAR of dwarf galaxies depends strongly on the distance to their parent galaxy, while this does not happen if dark matter exists," explains the researcher Emilio Romano-Díaz.
This difference makes the satellites a powerful probe for testing whether dark matter really exists. The Gaia spacecraft, which was launched by the European Space Agency (ESA) in 2013, could already provide an answer. It was designed to study the stars in the Milky Way and its satellite galaxies in unprecedented detail and has collected a large amount of data.
However, it will probably take years to solve this riddle. "Individual measurements are not enough to test the small differences we have found in our simulations," explains doctoral student Enrico Garaldi. "But repeatedly taking a close look at the same stars improves the measurements every time. Sooner or later it should be possible to determine whether the dwarf galaxies behave like in a universe with dark matter -- or not."
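The payoff from "repeatedly taking a close look at the same stars" follows the usual statistical scaling: the uncertainty of an average of independent measurements shrinks as one over the square root of their number. A minimal sketch, with an illustrative (not Gaia-specific) single-measurement uncertainty:

```python
import math

def standard_error(sigma: float, n: int) -> float:
    """Uncertainty of the mean of n independent measurements,
    each with single-measurement uncertainty sigma: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

# Hypothetical single-epoch measurement uncertainty (illustrative units):
sigma = 1.0
for n in (1, 10, 100):
    print(n, standard_error(sigma, n))
# One hundred repeated looks at the same star shrink the uncertainty tenfold.
```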
The cement that holds galaxies together
This question is one of the most pressing issues in cosmology today. The existence of dark matter was already suggested more than 80 years ago by the Swiss astronomer Fritz Zwicky. He realized that galaxies move so fast within galaxy clusters that they should actually drift apart. He therefore postulated the presence of invisible matter which, due to its mass, exerts sufficient gravity to keep galaxies on their observed orbits. In the 1970s, his US colleague Vera Rubin discovered a similar phenomenon in spiral galaxies like the Milky Way: they rotate so quickly that the centrifugal force should tear them apart if only visible matter were present.
Today, most physicists are convinced that dark matter makes up about 80 percent of the mass in the universe. Since it does not interact with light, it is invisible to telescopes. Yet, assuming its existence provides an excellent fit to a number of other observations -- such as the distribution of background radiation, an afterglow of the Big Bang. Dark matter also provides a good explanation for the arrangement and formation rate of galaxies in the universe. However, despite numerous experimental efforts, there is no direct proof that dark matter exists. This led astronomers to the hypothesis that the gravitational force itself might behave differently than previously thought. According to the theory called MOND (MOdified Newtonian Dynamics), the attraction between two masses obeys Newton's laws only up to a certain point. At very small accelerations, such as those prevailing in galaxies, gravity becomes considerably stronger. Therefore, galaxies do not tear apart due to their rotational speed and the MOND theory can dispense with the mysterious star putty.
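MOND's low-acceleration boost can be sketched with the "simple" interpolating function commonly used in the MOND literature; both the function and the value of the acceleration scale a0 are standard textbook choices, not taken from this article:

```python
import math

A0 = 1.2e-10  # m/s^2, MOND acceleration scale (approximate literature value)

def mond_acceleration(g_newton: float) -> float:
    """Effective acceleration under MOND with the 'simple' interpolating
    function mu(x) = x / (1 + x), obtained by solving
    g * mu(g / A0) = g_newton for g."""
    return 0.5 * (g_newton + math.sqrt(g_newton ** 2 + 4.0 * g_newton * A0))

# High accelerations (e.g. solar-system scales): Newton is recovered.
print(mond_acceleration(1e-6) / 1e-6)  # very close to 1
# Deep-MOND regime (galactic outskirts, g_newton << A0): gravity is boosted
# toward sqrt(g_newton * A0), which keeps rotation curves flat.
print(mond_acceleration(1e-14) / math.sqrt(1e-14 * A0))  # close to 1
```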
Read more at Science Daily
Recipe for star clusters
A snapshot of a simulated giant molecular cloud marked with star clusters in formation.
Researchers Corey Howard, Ralph Pudritz and William Harris, authors of a paper published June 25 in the journal Nature Astronomy, used highly-sophisticated computer simulations to re-create what happens inside gigantic clouds of concentrated gases known to give rise to clusters of stars that are bound together by gravity.
Pudritz and Harris, both professors of Physics and Astronomy at McMaster, were Howard's PhD thesis supervisors and guided his research. Howard recently completed post-doctoral research at the university.
The state-of-the-art simulations follow a cloud of interstellar gas 500 light years in diameter, projecting 5 million years' worth of evolution wrought by turbulence, gravity and feedback from intense radiation pressure produced by massive stars within forming clusters.
The research shows how those forces create dense filaments that funnel gas into what ultimately become super-bright clusters of stars that can merge with other clusters to form vast globular clusters.
"Most stars in galaxies form as members of star clusters within dense molecular clouds, so one of the most basic questions in astronomy is how clusters that range from hundreds to millions of stars form under a wide variety of conditions," Pudritz says. "Our simulations were carefully designed to determine whether or not this is a universal process."
The authors programmed data for such variables as gas pressure, space turbulence and radiation force into their simulation and let it run using resources that included SciNet, Canada's largest supercomputer centre.
After a month, the program turned out star clusters identical to those known to exist, showing that the researchers had managed to reverse-engineer the formation of star clusters, taking a major step towards understanding their formation, which has long been a subject of debate among astrophysicists.
"Our work shows that, given a large enough collection of gas, a massive star cluster is the natural outcome," Howard says. "Since massive star clusters trace the conditions of the galaxies in which they form, we may also be able to use this knowledge to reverse-engineer the conditions in the distant universe."
Many had previously argued that clusters of different sizes and ages had formed differently, the authors said, but the new research shows they all form the same way.
The simulations show that the outcome depends on the initial reservoir of gas, which, after turbulence, gravity and feedback have done their work, creates clusters of stars of various sizes over the course of a few million years.
"This is the first convincing route to modelling the formation of star clusters," Harris says. "It applies across all mass scales -- little clusters and big ones -- and it should work at any particular time in the universe's history, in any particular galaxy."
Read more at Science Daily
Jun 25, 2018
Challenging our understanding of how platelets are made
Platelets are uniquely mammalian cells: the small cells of the blood that are critical for stopping bleeding when we cut ourselves. They are also central to thrombosis, the process underlying heart attacks and stroke, and they are the target of major drugs used in the treatment of these diseases, such as aspirin. These cells are formed from large precursor cells, megakaryocytes, in the bone marrow and the lung, at a remarkable rate of 100 billion platelets per day in adult humans (that is one million platelets per second).
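The two quoted rates are consistent with each other, as quick arithmetic confirms:

```python
# Sanity check: 100 billion platelets per day works out to
# roughly one million platelets per second.
platelets_per_day = 100e9
seconds_per_day = 24 * 60 * 60  # 86,400
platelets_per_second = platelets_per_day / seconds_per_day
print(f"{platelets_per_second:,.0f} platelets per second")  # ~1,157,407
```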
Despite this hugely active process, we still do not understand the details of how platelets are formed in the body. Dysfunction in the process underlies many cases of low platelet count and associated bleeding disorders, and so understanding the process better is essential in order to improve the healthcare we can offer to those affected.
The study 'Multiple membrane extrusion sites drive megakaryocyte migration into bone marrow blood vessels' has been published in the new journal Life Science Alliance (jointly published by EMBO, Cold Spring Harbor Press and Rockefeller Press). It is a collaboration between researchers at the University of Bristol, Imperial College London, the Francis Crick Institute, the University of Glasgow, the University of Oxford and MRC Weatherall Institute of Molecular Medicine. It details how researchers have used a novel approach to visualize the process in vivo, called intravital correlative light-electron microscopy.
Professor Alastair Poole, from the University of Bristol, who contributed to the research, said: "The results have allowed us to propose a new mechanism for platelet production. In contrast to current understanding we found that most megakaryocytes enter the sinusoidal space as large protrusions, rather than extruding fine proplatelet extensions (as is currently thought)."
The research highlights that this difference is important because the mechanism of large protrusion differs from that of proplatelet extension. Proplatelets extend by the sliding of dense bundles of microtubules, whereas the detailed in vivo data show an absence of these bundles but the presence of multiple fusion points between the internal membrane and the plasma membrane at the leading edge of the protruding cell. Mass membrane extrusion therefore drives megakaryocyte large protrusions into the blood vessels of the bone marrow, significantly revising our understanding of the fundamental biology of platelet formation in vivo.
Read more at Science Daily
Monarchs ride West Coast winds: Proof of butterfly migration gathered
A Monarch butterfly with a WSU tracking tag on its wing. Sixty of these tagged butterflies were recovered in the wild; this one was found in Avila Beach, Calif., between Los Angeles and San Francisco.
Most of the tagging was done by citizen scientists and inmates from the Washington State Penitentiary in Walla Walla. The prisoners are carefully trained in raising, tagging, and releasing Monarchs.
The findings were recently published in the Journal of the Lepidopterists' Society. WSU entomology professor David James spearheaded the project, which took a massive amount of time and coordination to put together, ultimately involving hundreds of volunteers. The research was unfunded, making the volunteers indispensable.
Long distance travelers
"These butterflies averaged almost 40 miles of travel each day," James said. "That's pretty remarkable for such a small creature."
Though scientists don't know exactly how the butterflies travel that far, they suspect the Monarchs may ride warm air currents called thermals a few thousand feet up in the air, then use the strong upper-air currents to navigate, James said.
The paper covered the initial five years of the project, from 2012 to 2016. Participants tagged and released 13,778 Monarchs that were raised in captivity and tagged 875 wild Monarchs. More than one-third of the raised Monarchs were reared by inmates at Walla Walla, James said.
Butterflies were released from around Washington, Oregon, Idaho and British Columbia.
A total of 60 tagged Monarchs were recovered more than six miles from their release point, a return rate that James said was expected based on similar work done in the eastern United States. None of the recovered tags were from the wild Monarchs.
The longest recorded journey was from a butterfly released by James himself in Yakima that was recovered at Tecolote Canyon, near Goleta, California, a straight-line distance of 845 miles.
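A straight-line figure like this can be sanity-checked with the standard great-circle (haversine) formula. The coordinates below are approximate assumptions for Yakima and Tecolote Canyon, not values from the study:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r_earth_miles = 3958.8
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_earth_miles * math.asin(math.sqrt(a))

# Approximate coordinates (illustrative assumptions):
yakima = (46.60, -120.51)     # Yakima, WA
tecolote = (34.47, -119.92)   # Tecolote Canyon, near Goleta, CA
print(round(haversine_miles(*yakima, *tecolote)), "miles")  # roughly 840
```

The result lands within a few miles of the quoted 845, consistent with the recovery point being a straight-line measurement rather than the butterfly's actual flight path.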
On average, the 60 recovered butterflies travelled just shy of 500 miles.
Flowers for fuel
The results from this project will be used to show migratory corridors where areas can be stocked with flowers, James said. Migration is a very vulnerable time for Monarchs.
"They need fuel, which is nectar from flowers," James said. "If we have large areas without flowers, then they won't make it."
Most scientists think the butterflies descend from their flight in the evening to feed, then eat again in the morning before finding thermals to ride back up, James said.
Monarchs have seen a huge population decline over the last two decades, James said. It's estimated populations have declined approximately 90 percent over that timeframe. Ensuring nectar resources along migration routes will help them survive their journeys.
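A 90 percent drop over two decades implies a steady annual decline of roughly 11 percent; the constant-rate assumption in this quick calculation is ours, not the researchers':

```python
# If the population fell ~90% over ~20 years, the implied constant
# annual decline r satisfies (1 - r) ** 20 = 0.10.
years = 20
remaining_fraction = 0.10
annual_decline = 1 - remaining_fraction ** (1 / years)
print(f"~{annual_decline:.1%} per year")  # ~10.9% per year
```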
Critical help from citizen scientists
James said he's never worked on a project with so many citizen scientists before, and is incredibly grateful for all of their help.
"The results we got would have been impossible without their help, whether that's the prisoners or just people that care about butterflies who have contacted us," James said.
The project is still ongoing, with nearly 5,000 people on the group's Facebook page.
James thinks that eventually, technology will eliminate the need to tag so many thousands of butterflies.
Read more at Science Daily
Cranium of a four-million-year-old hominin shows similarities to that of modern humans
Original picture (left) and virtual rendering of the Jacovec cranium (middle) with two sections revealing the inner structure (right).
The cranium of the extinct Australopithecus genus was found in the lower-lying deposits of the Jacovec Cavern in the Sterkfontein Caves, about 40 km northwest of Johannesburg in South Africa. Dr Amelie Beaudet from the School of Geography, Archaeology and Environmental Studies of the University of the Witwatersrand and her colleagues from the Sterkfontein team scanned the cranium at the Evolutionary Studies Institute, based at the University of the Witwatersrand, in 2016 and applied advanced imaging techniques in "virtual paleontology" to further explore the anatomy of the cranium. Their research was funded by the Centre of Excellence in Palaeosciences, the Claude Leon Foundation and the French Institute of South Africa and was published in the Journal of Human Evolution.
"The Jacovec cranium represents a unique opportunity to learn more about the biology and diversity of our ancestors and their relatives and, ultimately, about their evolution," says Beaudet. "Unfortunately, the cranium is highly fragmentary and not much could be said about the identity or the anatomy of the Jacovec specimen before."
Through high resolution scanning, the researchers were able to quantitatively and non-invasively explore fine details of the inner anatomy of the Jacovec specimen and to report previously unknown information about the genus Australopithecus.
"Our study revealed that the cranium of the Jacovec specimen, and of the Australopithecus specimens from Sterkfontein in general, was thick and essentially composed of spongy bone," says Beaudet. "This large portion of spongy bone, also found in our own cranium, may indicate that blood flow in the brain of Australopithecus may have been comparable to us, and/or that the braincase had an important role in the protection of the evolving brain."
In comparing this cranium to that of another extinct group of our family tree, Paranthropus, that lived in South Africa along with the first humans less than two-million-years ago, their study revealed an intriguing and unexpected aspect of the cranial anatomy in this genus.
"We also found that the Paranthropus cranium was relatively thin and essentially composed of compact bone. This result is of particular interest, as it may suggest a different biology," says Beaudet.
Situated in the Cradle of Humankind, a UNESCO World Heritage Site, the South African paleontological sites have played a pivotal role in the exploration of our origins. In particular, the Sterkfontein Caves site has been one of the most prolific fossil localities in Africa, with over 800 hominin remains representing three genera of hominin recovered since 1936, including the first adult Australopithecus, the iconic "Mrs Ples," and "Little Foot," the most complete single skeleton of an early hominin yet found.
Read more at Science Daily
Why life on Earth first got big
These are Ediacaran fossils at Mistaken Point, Newfoundland.
The research, led by the University of Cambridge, found that the most successful organisms living in the oceans more than half a billion years ago were the ones that were able to 'throw' their offspring the farthest, thereby colonising their surroundings. The results are reported in the journal Nature Ecology and Evolution.
Prior to the Ediacaran period, between 635 and 541 million years ago, life forms were microscopic in size, but during the Ediacaran, large, complex organisms first appeared, some of which -- such as a type of organism known as rangeomorphs -- grew as tall as two metres. These organisms were some of the first complex organisms on Earth, and although they look like ferns, they may have been some of the first animals to exist -- although it's difficult for scientists to be entirely sure. Ediacaran organisms do not appear to have mouths, organs or means of moving, so they are thought to have absorbed nutrients from the water around them.
As Ediacaran organisms got taller, their body shapes diversified, and some developed stem-like structures to support their height.
In modern environments, such as forests, there is intense competition between organisms for resources such as light, so taller trees and plants have an obvious advantage over their shorter neighbours. "We wanted to know whether there were similar drivers for organisms during the Ediacaran period," said Dr Emily Mitchell of Cambridge's Department of Earth Sciences, the paper's lead author. "Did life on Earth get big as a result of competition?"
Mitchell and her co-author Dr Charlotte Kenchington from Memorial University of Newfoundland in Canada examined fossils from Mistaken Point in south-eastern Newfoundland, one of the richest sites of Ediacaran fossils in the world.
Earlier research hypothesised that increased size was driven by the competition for nutrients at different water depths. However, the current work shows that the Ediacaran oceans were more like an all-you-can-eat buffet.
"The oceans at the time were very rich in nutrients, so there wasn't much competition for resources, and predators did not yet exist," said Mitchell, who is a Henslow Research Fellow at Murray Edwards College. "So there must have been another reason why life forms got so big during this period."
Since Ediacaran organisms were not mobile and were preserved where they lived, it's possible to analyse whole populations from the fossil record. Using spatial analysis techniques, Mitchell and Kenchington found that there was no correlation between height and competition for food. Different types of organisms did not occupy different parts of the water column to avoid competing for resources -- a process known as tiering.
"If they were competing for food, then we would expect to find that the organisms with stems were highly tiered," said Kenchington. "But we found the opposite: the organisms without stems were actually more tiered than those with stems, so the stems probably served another function."
According to the researchers, one likely function of stems would be to enable the greater dispersion of offspring, which rangeomorphs produced by expelling small propagules. The tallest organisms were surrounded by the largest clusters of offspring, suggesting that the benefit of height was not more food, but a greater chance of colonising an area.
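The kind of spatial question the researchers asked -- are taller organisms surrounded by denser clusters of offspring? -- can be illustrated with a toy neighbour count. The positions, heights and radius below are invented for illustration and are not data from the paper:

```python
import math

def offspring_within(parent, offspring, radius):
    """Count offspring positions lying within `radius` of a parent position."""
    px, py = parent
    return sum(1 for (ox, oy) in offspring
               if math.hypot(ox - px, oy - py) <= radius)

# Toy data (invented): each parent has a height and offspring scattered nearby.
parents = {"tall": ((0.0, 0.0), 2.0), "short": ((10.0, 10.0), 0.5)}
offspring = [(0.3, 0.1), (-0.4, 0.2), (0.1, -0.5),  # near the tall parent
             (10.2, 10.1)]                          # near the short parent

for name, (pos, height) in parents.items():
    n = offspring_within(pos, offspring, radius=1.0)
    print(f"{name} (height {height} m): {n} offspring within 1 unit")
```

Real Ediacaran analyses use more sophisticated spatial statistics on whole fossilized populations, but the underlying comparison -- local offspring density as a function of parent height -- is the same.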
"While taller organisms would have been in faster-flowing water, the lack of tiering within these communities shows that their height didn't give them any distinct advantages in terms of nutrient uptake," said Mitchell. "Instead, reproduction appears to have been the main reason that life on Earth got big when it did."
Read more at Science Daily
Jun 24, 2018
Uncovering lost images from the 19th century
Research published today in Scientific Reports, a Nature journal, includes two images from the National Gallery of Canada's photography research unit that show photographs that were taken, perhaps as early as 1850, but were no longer visible because of tarnish and other damage. Before retrieval, the images, one of a woman and the other of a man, were beyond recognition.
"It's somewhat haunting because they are anonymous and yet it is striking at the same time," said Madalena Kozachuk, a PhD student in Western's Department of Chemistry and lead author of the scientific paper.
"The image is totally unexpected because you don't see it on the plate at all. It's hidden behind time," continues Kozachuk. "But then we see it and we can see such fine details: the eyes, the folds of the clothing, the detailed embroidered patterns of the table cloth."
The identities of the woman and the man are not known. It's possible that the plates were produced in the United States, but they could be from Europe.
For the past three years, Kozachuk and an interdisciplinary team of scientists have been exploring how to use synchrotron technology to learn more about chemical changes that damage daguerreotypes.
Invented in 1839, daguerreotype images were created on a highly polished silver-coated copper plate that was made sensitive to light by exposure to iodine vapour. Subjects had to pose without moving for two to three minutes for the image to imprint on the plate, which was then developed using heated mercury vapour.
Kozachuk conducts much of her research at the Canadian Light Source (CLS) and previously published results in scientific journals in 2017 and earlier this year. In those articles, the team members identified the chemical composition of the tarnish and how it changed from one point to another on a daguerreotype.
"We compared degradation that looked like corrosion versus a cloudiness from the residue from products used during the rinsing of the photographs during production versus degradation from the cover glass. When you look at these degraded photographs, you don't see one type of degradation," said Ian Coulthard, a senior scientist at the CLS and one of Kozachuk's co-supervisors. He is also a co- author on the research papers.
This preliminary research at the CLS led to today's paper and the images Kozachuk collected at the Cornell High Energy Synchrotron Source where she was able to analyze the daguerreotypes in their entirety.
Kozachuk used rapid-scanning micro-X-ray fluorescence imaging to analyze the plates, which are about 7.5 cm wide, and identified where mercury was distributed on the plates. With an X-ray beam as small as 10x10 microns (a human scalp hair averages 75 microns across) and at an energy most sensitive to mercury absorption, the scan of each daguerreotype took about eight hours.
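The figures above imply a striking measurement density. A rough back-of-envelope estimate, sketched below, shows why rapid-scanning XRF is essential; note that the plate height and a simple one-step-per-beam-width raster are assumptions (the article gives only the ~7.5 cm width and the ~8 hour scan time).

```python
# Back-of-envelope estimate of the daguerreotype scan described above.
# Assumptions (not stated in the article): a roughly square plate and
# a raster grid with one measurement point per 10-micron beam width.
beam_um = 10               # beam size: 10 x 10 microns
width_um = 7.5 * 10_000    # 7.5 cm plate width, in microns
height_um = 7.5 * 10_000   # assumed height, similar to the width

cols = width_um / beam_um       # 7,500 steps across the plate
rows = height_um / beam_um
pixels = cols * rows            # total measurement points

scan_s = 8 * 3600               # ~8 hours per plate
dwell_ms = scan_s / pixels * 1000

print(f"{pixels:.2e} measurement points, ~{dwell_ms:.2f} ms per point")
```

Under these assumptions the scan covers about 56 million points, leaving only about half a millisecond per point, which is consistent with a continuous "fly-scan" mode rather than a stop-and-measure raster.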
"Mercury is the major element that contributes to the imagery captured in these photographs. Even though the surface is tarnished, those image particles remain intact. By looking at the mercury, we can retrieve the image in great detail," said Tsun-Kong (T.K.) Sham, Canada Research Chair in Materials and Synchrotron Radiation at Western University. He also is a co-author of the research and Kozachuk's supervisor.
This research will contribute to improving how daguerreotype images are recovered when cleaning is possible, and will provide a way to see what lies beneath the tarnish when it is not.
The prospect of improved conservation methods intrigues John P. McElhone, recently retired as chief of the Conservation and Technical Research branch at the Canadian Photography Institute of the National Gallery of Canada. He provided the daguerreotypes from the Institute's research collection.
"There are a lot of interesting questions that at this stage of our knowledge can only be answered by a sophisticated scientific approach," said McElhone, another of the co-authors of today's paper. "A conservator's first step is to have a full and complete understanding of what the material is and how it is assembled on a microscopic and even nanoscale level. We want to find out how the chemicals are arranged on the surface and that understanding gives us access to theories about how degradation happens and how that degradation can possibly or possibly not be reversed."
As the first commercialized photographic process, the daguerreotype is thought to be the first "true" visual representation of history. Unlike painters who could use "poetic licence" in their work, the daguerreotype reflected precisely what was photographed.
Thousands and perhaps millions of daguerreotypes were created over 20 years in the 19th century before the process was replaced. The Canadian Photography Institute collection numbers more than 2,700, not including the daguerreotypes in the institute's research collection.
Read more at Science Daily
Dynamic modeling helps predict the behaviors of gut microbes
The human gut is teeming with microbes, each interacting with one another in a mind-boggling network of positive and negative exchanges. Some produce substances that serve as food for other microbes, while others produce toxins -- antibiotics -- that kill their neighbors.
Scientists have struggled to understand how this collection of gut microbes, known as the microbiome, is formed, how it changes over time, and how it is affected by disturbances like antibiotics used to treat illnesses. A new study from Ophelia Venturelli, a biochemistry professor at the University of Wisconsin-Madison, and her collaborators at the University of California, Berkeley, may help alleviate some of that difficulty.
Published June 21 in the journal Molecular Systems Biology, the study provides a platform for predicting how microbial gut communities work and represents a first step toward understanding how to manipulate the properties of the gut ecosystem. This could allow scientists to, for example, design a probiotic that persists in the gut or tailor a diet to positively influence human health.
"We know very little about the ecological interactions of the gut microbiome," Venturelli says. "Many studies have focused on cataloging all of the microbes present, which is very useful, but we wanted to try to understand the rules governing their assembly into communities, how stability is achieved, and how they respond to perturbations as well."
By learning these rules, researchers say they can better predict interactions between microbes using computational tools instead of performing laborious and time-consuming laboratory experiments.
The data can also start to answer questions about how pathogens cause damage when they invade communities, and how to prevent it.
For the study, the researchers chose 12 bacterial species found in the human gut, selected to represent the diversity of the gut microbiome. The majority have been shown to significantly affect human health, with associations to diseases such as diabetes, irritable bowel syndrome, Crohn's disease and colon cancer.
The team collected data on what are called pairwise interactions, which means each bacterial species was paired with just one other to study how the two interacted, without worrying about what all of the others were doing. This was done for every single possible pairing in the 12-member community.
The researchers fed data about the pairwise interactions, along with data on each individual species, into a dynamic model to decipher how all of the bacteria would likely interact when combined. They found the pairwise data alone was sufficient to predict how the larger community assembles.
"This model allows us to better understand and make predictions about the gut microbiome with fewer measurements," Venturelli explains. "We don't need to measure every single possible community of, say, three, four or five of a set of species. We just need to measure all the pairs, which still represents a very large number, to be able to predict the dynamics of the whole gut."
While this is still a challenge, Venturelli says it will significantly reduce the number of measurements scientists need to make.
The researchers also looked at which species seemed to be the most important in the community by measuring substances microbes produce, called metabolites. To their surprise, "the metabolite data was not able to predict the role of important species in the community," Venturelli says.
She and the research team then tested the model's predictive power by trying to estimate the characteristics of different combinations of their 12 chosen bacteria. Although not perfect, the model did well at predicting dynamic behaviors.
Additionally, the team found more positive interactions between microbes in the community than they expected based on other studies that have shown mostly negative interactions.
"We found there's a balance between positive and negative interactions and the negative interactions kind of provide a stabilizing force for the community," Venturelli says. "We are beginning to understand the design principles of stability of the gut microbes and what allows a community to recover from perturbations."
The model now allows scientists to begin asking questions about the composition and dynamics of thousands of microbial communities.
Read more at Science Daily