Jun 15, 2019

Small cluster of neurons is off-on switch for mouse songs

Lab mouse.
Researchers at Duke University have isolated a cluster of neurons in a mouse's brain that are crucial to making the squeaky, ultrasonic 'songs' a male mouse produces when courting a potential mate.

In fact, they now understand these neurons well enough to be able to make a mouse sing on command or to silence it so that it can't sing, even when it wants to impress a mate.

This level of understanding and control is a key advancement in the ongoing search for the mechanisms that allow humans to form speech and other communication sounds. The researchers are broadly interested in the brain's production of speech and have worked with songbirds and mice as models for humans.

"We were interested in understanding how mice produce these 'love songs,' as we call them in the lab," said Katherine Tschida, who led the research as a post-doctoral fellow in both the Richard Mooney and Fan Wang labs at Duke neurobiology.

For this study, Tschida and her colleagues focused on a part of the midbrain called the periaqueductal gray, or PAG for short, because they knew from previous work by others that it would be a key player in the vocalization circuit, she said.

With technology developed by Wang's lab, they were able to locate and isolate the specific neurons involved in the PAG's circuitry and then experiment on them.

By turning the neurons on selectively with a light-based method called optogenetics, the researchers found they could make a mouse immediately begin singing, even though it was alone.

On the other hand, silencing the activity of the PAG neurons rendered courting male mice incapable of singing, even while they persisted in all of their other courtship behaviors.

The females turned out to be less interested in the silent types, which also suggests that singing is key to the mice's reproductive success.

Both experiments firmly establish that this "stable and distinct population of neurons" is the key conduit between behavior and vocal communication, Tschida said. The work will appear in the Aug. 7 edition of Neuron, but was published early online in mid-June.

"These neurons are acting as a base for vocalization. But they don't determine the individual parts of the song," Tschida said. "It's a 'gate' for vocalization."

Tschida, who will join the Cornell University faculty next year, said the researchers will now trace the PAG's connections to downstream neurons that communicate with the voicebox, lungs and mouth, for example. And they'll work toward the behavioral centers upstream that tell the mouse a female is present and he should start singing.

Read more at Science Daily

Gut microbes eat our medication

Pills illustration.
The first time Vayu Maini Rekdal manipulated microbes, he made a decent sourdough bread. At the time, Maini Rekdal, like most people who head to the kitchen to whip up a salad dressing, pop popcorn, ferment vegetables, or caramelize onions, gave little thought to the crucial chemical reactions behind these concoctions.

Even more crucial are the reactions that happen after the plates are clean. When a slice of sourdough travels through the digestive system, the trillions of microbes that live in our gut help the body break down that bread to absorb the nutrients. Since the human body cannot digest certain substances -- all-important fiber, for example -- microbes step up to perform chemistry no human can.

"But this kind of microbial metabolism can also be detrimental," said Maini Rekdal, a graduate student in the lab of Professor Emily Balskus and first-author on their new study published in Science. According to Maini Rekdal, gut microbes can chew up medications, too, often with hazardous side effects. "Maybe the drug is not going to reach its target in the body, maybe it's going to be toxic all of a sudden, maybe it's going to be less helpful," Maini Rekdal said.

In their study, Balskus, Maini Rekdal, and their collaborators at the University of California San Francisco describe one of the first concrete examples of how the microbiome can interfere with a drug's intended path through the body. Focusing on levodopa (L-dopa), the primary treatment for Parkinson's disease, they identified which bacteria are responsible for degrading the drug and how to stop this microbial interference.

Parkinson's disease attacks nerve cells in the brain that produce dopamine, without which the body can suffer tremors, muscle rigidity, and problems with balance and coordination. L-dopa delivers dopamine to the brain to relieve symptoms. But only about 1 to 5% of the drug actually reaches the brain.

This number -- and the drug's efficacy -- varies widely from patient to patient. Since the introduction of L-dopa in the late 1960s, researchers have known that the body's enzymes (tools that perform necessary chemistry) can break down L-dopa in the gut, preventing the drug from reaching the brain. So, the pharmaceutical industry introduced a new drug, carbidopa, to block unwanted L-dopa metabolism. Taken together, the two drugs seemed to work.

"Even so," Maini Rekdal said, "there's a lot of metabolism that's unexplained, and it's very variable between people." That variance is a problem: Not only is the drug less effective for some patients, but when L-dopa is transformed into dopamine outside the brain, the compound can cause side effects, including severe gastrointestinal distress and cardiac arrhythmias. If less of the drug reaches the brain, patients are often given more to manage their symptoms, potentially exacerbating these side effects.

Maini Rekdal suspected microbes might be behind the L-dopa disappearance. Since previous research showed that antibiotics improve a patient's response to L-dopa, scientists speculated that bacteria might be to blame. Still, no one had identified which bacterial species might be culpable, or how and why they eat the drug.

So, the Balskus team launched an investigation. The unusual chemistry -- L-dopa to dopamine -- was their first clue.

Few bacterial enzymes can perform this conversion. But, a good number bind to tyrosine -- an amino acid similar to L-dopa. And one, from a food microbe often found in milk and pickles (Lactobacillus brevis), can accept both tyrosine and L-dopa.

Using the Human Microbiome Project as a reference, Maini Rekdal and his team hunted through bacterial DNA to identify which gut microbes had genes encoding a similar enzyme. Several fit their criteria, but only one strain, Enterococcus faecalis (E. faecalis), ate all the L-dopa, every time.
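To make the flavor of that genomic screen concrete, here is a purely hypothetical sketch: it ranks made-up candidate protein sequences against a reference enzyme by crude k-mer overlap. Real gene hunts rely on established homology-search tools such as BLAST or HMMER, and none of the names or sequences below are real.

```python
# Hypothetical sketch of the screening idea: rank candidate bacterial
# proteins by similarity to a known tyrosine decarboxylase. This is not
# the study's pipeline; the sequences are invented toy strings.

def kmer_set(seq, k=4):
    """Return the set of all length-k substrings of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=4):
    """Jaccard overlap of k-mer sets: 1.0 = identical, 0.0 = disjoint."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

reference = "MKLVTAYDGHELVARSTWQNPLKMEVH"            # toy reference enzyme
candidates = {
    "candidate_A": "MKLVTAYDGHELVARSTWQNALKMEVH",   # near-identical homolog
    "candidate_B": "GGSSPQRTLMNVAEKHWYCDFIPQRSX",   # unrelated sequence
}

for name, seq in sorted(candidates.items(),
                        key=lambda kv: -similarity(reference, kv[1])):
    print(f"{name}: {similarity(reference, seq):.2f}")
```

In practice the search space is thousands of genomes and the scoring is statistical, but the principle is the same: candidates whose sequences resemble a known enzyme are flagged for laboratory testing.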

With this discovery, the team provided the first strong evidence connecting E. faecalis and the bacteria's enzyme (PLP-dependent tyrosine decarboxylase or TyrDC) to L-dopa metabolism.

And yet, a human enzyme can and does convert L-dopa to dopamine in the gut, the same reaction carbidopa is designed to stop. Then why, the team wondered, does the E. faecalis enzyme escape carbidopa's reach?

Even though the human and bacterial enzymes perform the exact same chemical reaction, the bacterial one looks just a little different. Maini Rekdal speculated that carbidopa may not be able to penetrate the microbial cells or the slight structural variance could prevent the drug from interacting with the bacterial enzyme. If true, other host-targeted treatments may be just as ineffective as carbidopa against similar microbial machinations.

But the cause may not matter. Balskus and her team already discovered a molecule capable of inhibiting the bacterial enzyme.

"The molecule turns off this unwanted bacterial metabolism without killing the bacteria; it's just targeting a non-essential enzyme," Maini Rekdal said. This and similar compounds could provide a starting place for the development of new drugs to improve L-dopa therapy for Parkinson's patients.

The team might have stopped there. But instead, they pushed further to unravel a second step in the microbial metabolism of L-dopa. After E. faecalis converts the drug into dopamine, a second organism converts dopamine into another compound, meta-tyramine.

To find this second organism, Maini Rekdal left behind his mother dough's microbial masses to experiment with a fecal sample. He subjected its diverse microbial community to a Darwinian game, feeding dopamine to hordes of microbes to see which prospered.

Eggerthella lenta won. These bacteria consume dopamine, producing meta-tyramine as a by-product. This kind of reaction is challenging, even for chemists. "There's no way to do it on the bench top," Maini Rekdal said, "and previously no enzymes were known that did this exact reaction."

The meta-tyramine by-product may contribute to some of the noxious L-dopa side effects; more research needs to be done. But, apart from the implications for Parkinson's patients, E. lenta's novel chemistry raises more questions: Why would bacteria adapt to use dopamine, which is typically associated with the brain? What else can gut microbes do? And does this chemistry impact our health?

"All of this suggests that gut microbes may contribute to the dramatic variability that is observed in side effects and efficacy between different patients taking L-dopa," Balskus said.

Read more at Science Daily

Jun 14, 2019

Origins of cannabis smoking

Cannabis plant
Cannabis has been cultivated as an oil-seed and fibre crop for millennia in East Asia. Little is known, however, about the early use and eventual cultivation of the plant for its psychoactive and medicinal properties. Despite being one of the most widely used psychoactive drugs in the world today, there is little archaeological or historical evidence for the use of marijuana in the ancient world. The current study, published in the journal Science Advances, identified psychoactive compounds preserved in 2,500-year-old funerary incense burners from the Jirzankal Cemetery in the eastern Pamirs. Researchers from the Max Planck Institute for the Science of Human History, the Chinese Academy of Sciences, and the Chinese Academy of Social Sciences have shown that people were selecting plants with higher levels of THC, and burning them as part of mortuary rituals. This is the earliest clear evidence to date of cannabis being used for its psychoactive properties.

Cannabis is one of the most infamous plants on the planet today, especially in light of rapidly changing legislation surrounding its legalisation in Europe and America. Despite the popularity of the plant for its psychoactive properties, very little is known about the earliest use or cultivation of cannabis for its mind-altering effects. Cannabis plants were cultivated in East Asia for their oily seeds and fibre from at least 4000 BC. However, the early cultivated varieties of cannabis, as well as most wild populations, have low levels of THC and other cannabinoid compounds with psychoactive properties. Therefore, it has been a long-standing mystery as to when and where specific varieties of the plant with higher levels of these compounds were first recognized and used by humans. Many historians place the origins of cannabis smoking on the ancient Central Asian steppes, but these arguments rely solely on a passage from a single ancient text from the late first millennium BC, written by the Greek historian Herodotus. Archaeologists have thus long sought to identify concrete evidence for cannabis smoking in Eurasia, but to date, there are few reliable, well-identified and properly dated examples of early cannabis use.

The researchers in the current study uncovered this early cannabis use when they sought to identify the function of ancient wooden burners discovered by archaeologists from the Chinese Academy of Social Sciences, who were excavating in the high mountainous regions of western China. The burners were recovered from 2,500-year-old tombs in the Pamir mountain range. The international research team used a method called gas chromatography-mass spectrometry to isolate and identify compounds preserved in the burners. To their surprise, the chemical signature of the isolated compounds was an exact match to the chemical signature of cannabis. Moreover, the signature indicated a higher level of THC than is normally found in wild cannabis plants.

The data produced by the research effort, which brought together archaeologists and laboratory scientists from Jena, Germany and Beijing, China, provides clear evidence that ancient people in the Pamir Mountains were burning specific varieties of cannabis that had higher THC levels. The findings corroborate other early evidence for cannabis from burials further north, in the Xinjiang region of China and in the Altai Mountains of Russia. As Nicole Boivin, Director at the Max Planck Institute for the Science of Human History notes, "The findings support the idea that cannabis plants were first used for their psychoactive compounds in the mountainous regions of eastern Central Asia, thereafter spreading to other regions of the world."

Cannabis likely spread across exchange routes along the early Silk Road

The THC-containing residues were extracted from burners from a cemetery known as Jirzankal in the remote Pamir Mountains. Some of the skeletons recovered from the site, situated in modern-day western China, have features that resemble those of contemporaneous peoples further west in Central Asia. Objects found in the burials also appear to link this population to peoples further west in the mountain foothills of Inner Asia. Additionally, stable isotope studies on the human bones from the cemetery show that not all of the people buried there grew up locally.

These data fit with the notion that the high-elevation mountain passes of Central and Eastern Asia played a key role in early trans-Eurasian exchange. Indeed, the Pamir region, today so remote, may once have sat astride a key ancient trade route of the early Silk Road. The Silk Road was at certain times in the past the single most important vector for cultural spread in the ancient world. Robert Spengler, the lead archaeobotanist for the study, also at the Max Planck Institute for the Science of Human History, explains, "The exchange routes of the early Silk Road functioned more like the spokes of a wagon wheel than a long-distance road, placing Central Asia at the heart of the ancient world. Our study implies that knowledge of cannabis smoking and specific high-chemical-producing varieties of the cannabis plant were among the cultural traditions that spread along these exchange routes."

People sought and later cultivated more psychoactive varieties of cannabis for use in burial rituals

Compared to cultivated varieties, wild cannabis plants contain lower levels of THC, one of the psychoactive compounds in cannabis. It is still unclear whether the people buried at Jirzankal actively cultivated cannabis or simply sought out higher THC-producing plants. One theory is that cannabis plants will produce greater quantities of active compounds in response to increased UV radiation and other stressors related to growing at higher elevations. So people roaming the high mountainous regions may have discovered more potent wild plants there, and initiated a new kind of use of the plant.

While modern cannabis is used primarily as a recreational drug or for medical applications, cannabis may have been used rather differently in the past. The evidence from Jirzankal suggests that people were burning cannabis at rituals commemorating the dead. They buried their kin in tombs over which they created circular mounds, stone rings and striped patterns using black and white stones.

Whether cannabis also had other uses in society is unclear, though it seems likely that the plant's ability to treat a variety of illnesses and symptoms was recognized early on. Yimin Yang, researcher at the University of the Chinese Academy of Sciences in Beijing observes, "This study of ancient cannabis use helps us understand early human cultural practices, and speaks to the intuitive human awareness of natural phytochemicals in plants." Dr. Yang has studied ancient organic residues in East Asia for over ten years. He notes that "biomarker analyses open a unique window onto details of ancient plant exploitation and cultural communication that other archaeological methods cannot offer."

Read more at Science Daily

Zebras' stripes could be used to control their temperature, study reveals

Grant's zebras
New research published in the Journal of Natural History indicates that zebras' stripes are used to control body temperature after all -- and reveals for the first time a new mechanism for how this may be achieved.

The authors argue that the special way zebras sweat to cool down, together with the small-scale convection currents created between the stripes, aids evaporation, while the previously unrecorded ability of zebras to erect their black stripes is a further aid to heat loss. These three elements are key to understanding how the zebras' unique patterning helps them manage their temperature in the heat.

The findings have been published this month in the Journal of Natural History, the scientific publication of the British Natural History Museum, by amateur naturalist and former biology technician Alison Cobb and her zoologist husband, Dr Stephen Cobb. Together, they have spent many years in sub-Saharan Africa, where he has directed environmental research and development projects.

This study is the first time zebras have been assessed in their natural habitat to investigate the role of stripes in temperature control. The researchers collected field data in Kenya from two live zebras, a stallion and a mare, together with a zebra hide draped over a clothes-horse as a control.

The data revealed a temperature difference between the black and white stripes that increases as the day heats up. Whilst this difference stabilises on living zebras during the middle seven hours of the day, with the black stripes 12-15ºC hotter than the white, the stripes on a lifeless zebra hide continue to heat up, by as much as another 16ºC. This indicates there is an underlying mechanism to suppress heating in living zebras. It is therefore the way the zebra stripes are harnessed as one part of their cooling system, rather than just their contrasting coat colour, that is key to understanding why these animals have their unique patterning.

Like all species in the horse family, zebras sweat to keep cool. Recent research reveals that the passage of sweat in horses from the skin to the tips of the hairs is facilitated by a protein called latherin, which is also present in zebras. This makes the sweat frothy, increasing its surface area and lowering its surface tension so that it evaporates more readily, preventing the animal from overheating.

The researchers propose that the differential temperatures and air activity on the black and white stripes set up small-scale convective air movements within and just above the stripes, which destabilise the air and the water vapour at the tips of the hairs.

During the field research, the authors also observed -- probably for the first time -- that zebras have an unexpected ability to raise the hair on their black stripes (like velvet) while the white ones remain flat. The authors propose that raising the black hairs during the heat of the day, when the stripes are at different temperatures, assists with the transfer of heat from the skin to the hair surface. Conversely, in the early morning, when the stripes are at the same temperature and there is no air movement, the raised black hairs help trap air to reduce heat loss.

These three components -- convective air movements, latherin-aided sweating and hair-raising -- work together as a mechanism to enable zebras to wick the sweat away from their skin so it can evaporate more efficiently, helping them cool down.

The authors also speculate that the unstable air associated with the stripes may play a secondary role in deterring biting flies from landing on them. This insect behaviour has been observed in recently published studies about zebra stripes and could confer an additional advantage for zebras.

There is evidence from other recent studies that backs up the idea that heat control may be key to why zebras have their striking coats. It has been demonstrated that zebra stripes become remarkably more pronounced on animals living in the hottest climates, near the equator. Zebras are also smallest near the equator, giving them a larger surface-area-to-volume ratio, which assists the animals' ability to dissipate heat through evaporation.

Alison Cobb, lead author of the new paper says: "Ever since I read 'How the Leopard Got His Spots' in Kipling's Just So Stories at bedtime when I was about four, I have wondered what zebra stripes are for. In the many years we spent living in Africa, we were always struck by how much time zebras spent grazing in the blazing heat of the day and felt the stripes might be helping them to control their temperature in some way."

"My early attempts forty years ago at testing this hypothesis involved comparing the temperatures of water in oil drums with differently coloured felt coats, but it seemed to me that this was not a good enough experiment, and I wanted to see how the stripes behaved on live zebras."

"Steve, the man who later became my husband and co-author, teaching conservation biology in the University of Nairobi, had a student working with zebras, who said he could calm them down in their crush by brushing them with a long-handled broom. This gave me courage in 1991 to ask permission to go into the Animal Orphanage in Nairobi National Park to see if I could tame one of the wild zebras in the paddock by brushing it with a dandy brush. Apart from its capture, it had never been touched by a human. To my immense pleasure it found this tickling very agreeable and as the days went by it gradually allowed me to brush it all over (see photograph). Two years later I came back to Nairobi and walked into the paddock with the brush. The same zebra mare lifted her head, looked at me hard, and walked up to me to be brushed again."

"It was not until years later that we got the opportunity to collect some field data from zebras in Africa, when we also noticed their ability to raise the hairs of their black stripes, while the white ones lay flat. It was only much more recently, when the role of latherin was discovered in helping horses sweat to keep cool, that it all began to fall into place."

Read more at Science Daily

Squid could thrive under climate change

Bigfin reef squid.
Squid will survive and may even flourish under even the worst-case ocean acidification scenarios, according to a new study published this week.

Dr Blake Spady, from the ARC Centre of Excellence for Coral Reef Studies (Coral CoE) at James Cook University (JCU), led the study. He said squid live on the edge of their environmental oxygen limitations due to their energy-taxing swimming technique. They were expected to fare badly with more carbon dioxide (CO2) in the water, which makes it more acidic.

"Their blood is highly sensitive to changes in acidity, so we expected that future ocean acidification would negatively affect their aerobic performance," said Dr Spady.

Atmospheric CO2 concentrations have increased from 280 parts per million (ppm) before the industrial revolution to more than 400 ppm today. Scientists project atmospheric CO2 -- and by extension CO2 in the oceans -- may exceed 900 ppm by the end of this century unless current CO2 emissions are curtailed.

But when the team tested two-toned pygmy squid and bigfin reef squid at JCU's research aquarium, subjecting them to CO2 levels projected for the end of the century, they received a surprise.

"We found that these two species of tropical squid are unaffected in their aerobic performance and recovery after exhaustive exercise by the highest projected end-of-century CO2 levels," said Dr Spady.

He said it may be an even greater boost for the squid as some of their predators and prey have been shown to lose performance under predicted climate change scenarios.

"We think that squid have a high capacity to adapt to environmental changes due to their short lifespans, fast growth rates, large populations, and high rate of population increase," said Dr Spady.

He said the work is important because it gives a better understanding of how future ecosystems might look under elevated CO2 conditions.

"We are likely to see certain species as being well-suited to succeed in our rapidly changing oceans, and these species of squid may be among them."

Read more at Science Daily

Viruses found to use intricate 'treadmill' to move cargo across bacterial cells

Illustration of bacteriophage viruses infecting bacterial cell
Countless textbooks have characterized bacteria as simple, disorganized blobs of molecules.

Now, using advanced technologies to explore the inner workings of bacteria in unprecedented detail, biologists at the University of California San Diego have discovered that in fact bacteria have more in common with sophisticated human cells than previously known.

Publishing their work June 13 in the journal Cell, UC San Diego researchers working in Professor Joe Pogliano's and Assistant Professor Elizabeth Villa's laboratories have provided the first example of cargo within bacterial cells transiting along treadmill-like structures in a process similar to that occurring in our own cells.

"It's not that bacteria are boring, but previously we did not have a very good ability to look at them in detail," said Villa, one of the paper's corresponding authors. "With new technologies we can start to understand the amazing inner life of bacteria and look at all of their very sophisticated organizational principles."

Study first author Vorrapon Chaikeeratisak of UC San Diego's Division of Biological Sciences and his colleagues analyzed giant Pseudomonas bacteriophage (also known as phage, the term used to describe viruses that infect bacterial cells). Earlier work from Pogliano's and Villa's labs found that phage convert the cells they have infected into mammalian-type cells with a centrally located nucleus-like structure, formed by a protein shell surrounding the replicated phage DNA. In the new study the researchers documented a previously unseen process that transports viral components called capsids to the DNA at the central nucleus-like structure. They followed as capsids moved from an assembly site on the host membrane, trafficked along a conveyor belt-like path made of filaments and ultimately arrived at their final phage DNA destination.

"They ride along a treadmill in order to get to where the DNA is housed inside the protein shell, and that's critical for the life cycle of the phage," said Pogliano, a professor in the Section of Molecular Biology. "No one has seen this intracellular cargo travelling along a filament in bacterial cells before."

"The way this giant phage replicates inside bacteria is so fascinating," said Chaikeeratisak. "There are a lot more questions to explore about the mechanisms that it uses to take over the bacterial host cell."

Opening the door to the new discovery was the combination of time-lapse fluorescence microscopy, which offered a broad perspective of movement within the cell, similar to a Google Earth view of roadways, with cryo-electron tomography, which provided a zoomed-in "street level" view that allowed the scientists to analyze components on the scale of individual vehicles and the people within them.

Villa said each technique's perspective helped provide key answers but also brought new questions about the transportation and distribution mechanisms within bacterial cells. Kanika Khanna, a student member of both labs, is trained to use both technologies to gain data and insights from each.

"Zooming in and out allowed us to observe a unique example where things just don't randomly diffuse inside bacterial cells," said Khanna. "These phages have evolved a sophisticated and directed mechanism of transport using filaments to replicate inside their hosts that we could have not seen otherwise."

Phage infect and attack many types of bacteria and are known to live naturally in soil, seawater and humans. Pogliano believes the new findings are important for understanding more about the evolutionary development of phage, which have been the subject of recent attention.

"Viruses like phage have been studied for 100 years but they are now receiving renewed interest because of the potential of using them for phage therapy," said Pogliano.

The type of phage studied in the new paper is the kind that one day could be used in new treatments to cure a variety of infections.

Last year UC San Diego's School of Medicine started the Center for Innovative Phage Applications and Therapeutics (IPATH), which was launched to develop new treatments for infectious disease as widespread resistance to traditional antibiotics continues to grow.

Read more at Science Daily

Jun 13, 2019

Table salt compound spotted on Europa

Tara Regio is the yellowish area to the left of center in this NASA Galileo image of Europa's surface. This region of geologic chaos is where researchers identified an abundance of sodium chloride.
A familiar ingredient has been hiding in plain sight on the surface of Jupiter's moon Europa. Using a visible light spectral analysis, planetary scientists at Caltech and the Jet Propulsion Laboratory, which Caltech manages for NASA, have discovered that the yellow color visible on portions of the surface of Europa is actually sodium chloride, a compound known on Earth as table salt, which is also the principal component of sea salt.

The discovery suggests that the salty subsurface ocean of Europa may chemically resemble Earth's oceans more than previously thought, challenging decades of supposition about the composition of those waters and making them potentially a lot more interesting for study. The finding was published in Science Advances on June 12.

Flybys from the Voyager and Galileo spacecraft have led scientists to conclude that Europa is covered by a layer of salty liquid water encased by an icy shell. Galileo carried an infrared spectrometer, an instrument scientists use to determine the composition of a surface. Galileo's spectrometer found water ice and a substance that appeared to be magnesium sulfate salts -- like Epsom salts, which are used in soaking baths. Since the icy shell is geologically young and features abundant evidence of past geologic activity, it was suspected that whatever salts exist on the surface may derive from the ocean below. As such, scientists have long suspected an ocean composition rich in sulfate salts.

That all changed when new, higher spectral resolution data from the W. M. Keck Observatory on Maunakea suggested that the scientists weren't actually seeing magnesium sulfates on Europa. Most of the sulfate salts considered previously actually possess distinct absorptions that should have been visible in the higher-quality Keck data. However, the spectra of regions expected to reflect the internal composition lacked any of the characteristic sulfate absorptions.

"We thought that we might be seeing sodium chlorides, but they are essentially featureless in an infrared spectrum," says Mike Brown, the Richard and Barbara Rosenberg Professor of Planetary Astronomy at Caltech and co-author of the Science Advances paper.

However, Kevin Hand at JPL had irradiated ocean salts in a laboratory under Europa-like conditions and found that several new and distinct features arise after irradiation, but in the visible portion of the spectrum. He found that the salts changed colors to the point that they could be identified with an analysis of the visible spectrum. Sodium chloride, for example, turned a shade of yellow similar to that visible in a geologically young area of Europa known as Tara Regio.

"Sodium chloride is a bit like invisible ink on Europa's surface. Before irradiation, you can't tell it's there, but after irradiation, the color jumps right out at you," says Hand, scientist at JPL and co-author of the Science Advances paper.

"No one has taken visible wavelength spectra of Europa before that had this sort of spatial and spectral resolution. The Galileo spacecraft didn't have a visible spectrometer. It just had a near-infrared spectrometer," says Caltech graduate student Samantha Trumbo, the lead author of the paper.

"People have traditionally assumed that all of the interesting spectroscopy is in the infrared on planetary surfaces, because that's where most of the molecules that scientists are looking for have their fundamental features," Brown says.

By taking a close look with the Hubble Space Telescope, Brown and Trumbo were able to identify a distinct absorption in the visible spectrum at 450 nanometers, which matched the irradiated salt precisely, confirming that the yellow color of Tara Regio reflected the presence of irradiated sodium chloride on the surface.
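As a toy illustration of that kind of feature matching (not the authors' actual pipeline), one can divide an observed spectrum by a smooth continuum and locate the wavelength of deepest absorption. The "observed" spectrum below is invented, with a Gaussian dip planted at 450 nanometers to mimic the irradiated-salt feature.

```python
# Illustrative sketch only: find the deepest absorption feature in a
# visible-light reflectance spectrum. The spectrum is made up; this is
# not the study's analysis.
import numpy as np

wavelengths = np.linspace(400, 700, 301)   # nm, visible range, 1 nm steps
continuum = np.ones_like(wavelengths)      # assumed featureless baseline
observed = continuum - 0.2 * np.exp(-((wavelengths - 450.0) / 15.0) ** 2)

depth = 1.0 - observed / continuum         # fractional absorption depth
print(f"Deepest absorption at ~{wavelengths[np.argmax(depth)]:.0f} nm")
```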

"We've had the capacity to do this analysis with the Hubble Space Telescope for the past 20 years," Brown says. "It's just that nobody thought to look."

While the finding does not guarantee that this sodium chloride is derived from the subsurface ocean (this could, in fact, simply be evidence of different types of materials stratified in the moon's icy shell), the study's authors propose that it warrants a reevaluation of the geochemistry of Europa.

Read more at Science Daily

Rare 'superflares' could one day threaten Earth

Giant solar flare illustration.
Astronomers probing the edges of the Milky Way have in recent years observed some of the most brilliant pyrotechnic displays in the galaxy: superflares.

These events occur when stars, for reasons that scientists still don't understand, eject huge bursts of energy that can be seen from hundreds of light years away. Until recently, researchers assumed that such explosions occurred mostly on stars that, unlike our sun, were young and active.

Now, new research shows with more confidence than ever before that superflares can occur on older, quieter stars like our own -- albeit more rarely, or about once every few thousand years.

The results should be a wake-up call for life on our planet, said Yuta Notsu, the lead author of the study and a visiting researcher at CU Boulder.

If a superflare erupted from the sun, he said, Earth would likely sit in the path of a wave of high-energy radiation. Such a blast could disrupt electronics across the globe, causing widespread blackouts and shorting out communication satellites in orbit.

Notsu presented his research at a press briefing at the 234th meeting of the American Astronomical Society in St. Louis.

"Our study shows that superflares are rare events," said Notsu, a researcher in CU Boulder's Laboratory for Atmospheric and Space Physics. "But there is some possibility that we could experience such an event in the next 100 years or so."

Scientists first discovered this phenomenon from an unlikely source: the Kepler Space Telescope. The NASA spacecraft, launched in 2009, seeks out planets circling stars far from Earth. But it also found something odd about those stars themselves. In rare events, the light from distant stars seemed to get suddenly, and momentarily, brighter.

Researchers dubbed those humongous bursts of energy "superflares."

Notsu explained that normal-sized flares are common on the sun. But what the Kepler data was showing seemed to be much bigger, on the order of hundreds to thousands of times more powerful than the largest flare ever recorded with modern instruments on Earth.

And that raised an obvious question: Could a superflare also occur on our own sun?

"When our sun was young, it was very active because it rotated very fast and probably generated more powerful flares," said Notsu, also of the National Solar Observatory in Boulder. "But we didn't know if such large flares occur on the modern sun with very low frequency."

To find out, Notsu and an international team of researchers turned to data from the European Space Agency's Gaia spacecraft and from the Apache Point Observatory in New Mexico. Over a series of studies, the group used those instruments to narrow down a list of superflares that had come from 43 stars that resembled our sun. The researchers then subjected those rare events to a rigorous statistical analysis.

The bottom line: age matters. Based on the team's calculations, younger stars tend to produce the most superflares. But older stars like our sun, now a respectable 4.6 billion years old, aren't off the hook.

"Young stars have superflares once every week or so," Notsu said. "For the sun, it's once every few thousand years on average."

The group published its latest results in May in The Astrophysical Journal.

Notsu can't be sure when the next big solar light show is due to hit Earth. But he said that it's a matter of when, not if. Still, that could give humans time to prepare, protecting electronics on the ground and in orbit from radiation in space.

Read more at Science Daily

Earth's heavy metals result of supernova explosion, research reveals

Supernova concept.
That gold on your ring finger is stellar -- and not just in a complimentary way.

In a finding that may overthrow our understanding of where Earth's heavy elements such as gold and platinum come from, new research by a University of Guelph physicist suggests that most of them were spewed from a largely overlooked kind of star explosion far away in space and time from our planet.

Some 80 per cent of the heavy elements in the universe likely formed in collapsars, a rare but heavy-element-rich form of supernova explosion arising from the gravitational collapse of old, massive stars, typically 30 times as massive as our sun, said physics professor Daniel Siegel.

That finding overturns the widely held belief that these elements mostly come from collisions between neutron stars or between a neutron star and a black hole, said Siegel.

His paper, co-authored with Columbia University colleagues, appears today in the journal Nature.

Using supercomputers, the trio simulated the dynamics of collapsars, or old stars whose gravity causes them to implode and form black holes.

Under their model, massive, rapidly spinning collapsars eject heavy elements whose amounts and distribution are "astonishingly similar to what we observe in our solar system," said Siegel. He joined U of G this month and is also appointed to the Perimeter Institute for Theoretical Physics, in Waterloo, Ont.

Most of the elements found in nature were created in nuclear reactions in stars and ultimately expelled in huge stellar explosions.

Heavy elements found on Earth and elsewhere in the universe from long-ago explosions range from gold and platinum, to uranium and plutonium used in nuclear reactors, to more exotic chemical elements such as neodymium found in consumer items such as electronics.

Until now, scientists thought that these elements were cooked up mostly in stellar smashups involving neutron stars or black holes, as in a collision of two neutron stars observed by Earth-bound detectors that made headlines in 2017.

Ironically, said Siegel, his team began working to understand the physics of that merger before their simulations pointed toward collapsars as a heavy element birth chamber. "Our research on neutron star mergers has led us to believe that the birth of black holes in a very different type of stellar explosion might produce even more gold than neutron star mergers."

What collapsars lack in frequency, they make up for in generation of heavy elements, said Siegel. Collapsars also produce intense flashes of gamma rays.

"Eighty per cent of these heavy elements we see should come from collapsars. Collapsars are fairly rare in occurrences of supernovae, even more rare than neutron star mergers -- but the amount of material that they eject into space is much higher than that from neutron star mergers."

The team now hopes to see its theoretical model validated by observations. Siegel said infrared instruments such as those on the James Webb Space Telescope, set for launch in 2021, should be able to detect telltale radiation pointing to heavy elements from a collapsar in a far-distant galaxy.

"That would be a clear signature," he said, adding that astronomers might also detect evidence of collapsars by looking at amounts and distribution of heavy element s in other stars across our Milky Way galaxy.

Siegel said this research may yield clues about how our galaxy began.

"Trying to nail down where heavy elements come from may help us understand how the galaxy was chemically assembled and how the galaxy formed. This may actually help solve some big questions in cosmology as heavy elements are a nice tracer."

This year marks the 150th anniversary of Dmitri Mendeleev's creation of the periodic table of the chemical elements. Since then, scientists have added many more elements to the periodic table, a staple of science textbooks and classrooms worldwide.

Read more at Science Daily

How multi-celled animals developed

Microscopic life.
Scientists at The University of Queensland have upended biologists' century-old understanding of the evolutionary history of animals.

Using new technology to investigate how multi-celled animals developed, their findings revealed a surprising truth.

Professor Bernie Degnan said the results contradicted years of tradition.

"We've found that the first multicellular animals probably weren't like the modern-day sponge cells, but were more like a collection of convertible cells," Professor Degnan said.

"The great-great-great-grandmother of all cells in the animal kingdom, so to speak, was probably quite similar to a stem cell.

"This is somewhat intuitive as, compared to plants and fungi, animals have many more cell types, used in very different ways -- from neurons to muscles -- and cell-flexibility has been critical to animal evolution from the start."

The findings disprove a long-standing idea: that multi-celled animals evolved from a single-celled ancestor resembling a modern sponge cell known as a choanocyte.

"Scattered throughout the history of evolution are major transitions, including the leap from a world of microscopic single-cells to a world of multi-celled animals," Professor Degnan said.

"With multicellularity came incredible complexity, creating the animal, plant, fungi and algae kingdoms we see today.

"These large organisms differ from the other more-than-99-per-cent of biodiversity that can only be seen under a microscope."

The team mapped individual cells, sequencing all of the genes expressed, allowing the researchers to compare similar types of cells over time.

Fellow senior author Associate Professor Sandie Degnan said this meant they could tease out the evolutionary history of individual cell types, by searching for the 'signatures' of each type.

"Biologists for decades believed the existing theory was a no-brainer, as sponge choanocytes look so much like single-celled choanoflagellates -- the organism considered to be the closest living relatives of the animals," she said.

"But their transcriptome signatures simply don't match, meaning that these aren't the core building blocks of animal life that we originally thought they were.

"This technology has been used only for the last few years, but it's helped us finally address an age-old question, discovering something completely contrary to what anyone had ever proposed."

"We're taking a core theory of evolutionary biology and turning it on its head," she said.

"Now we have an opportunity to re-imagine the steps that gave rise to the first animals, the underlying rules that turned single cells into multicellular animal life."

Read more at Science Daily

Jupiter-like exoplanets found in sweet spot in most planetary systems

Illustration of Jupiter-like planet orbiting a star.
As planets form in the swirling gas and dust around young stars, there seems to be a sweet spot where most of the large, Jupiter-like gas giants congregate, centered around the orbit where Jupiter sits today in our own solar system.

The location of this sweet spot is between 3 and 10 times the distance Earth sits from our sun (3-10 astronomical units, or AU). Jupiter is 5.2 AU from our sun.

That's just one of the conclusions of an unprecedented analysis of 300 stars captured by the Gemini Planet Imager, or GPI, a sensitive infrared detector mounted on the 8-meter Gemini South telescope in Chile.

The GPI Exoplanet Survey, or GPIES, is one of two large projects that search for exoplanets directly, by blocking stars' light and photographing the planets themselves, instead of looking for telltale wobbles in the star -- the radial velocity method -- or for planets crossing in front of the star -- the transit technique. The GPI camera is sensitive to the heat given off by recently-formed planets and brown dwarfs, which are more massive than gas giant planets, but still too small to ignite fusion and become stars.

The analysis of the first 300 of more than 500 stars surveyed by GPIES, published June 12 in The Astronomical Journal, "is a milestone," said Eugene Chiang, a UC Berkeley professor of astronomy and member of the collaboration's theory group. "We now have excellent statistics for how frequently planets occur, their mass distribution and how far they are from their stars. It is the most comprehensive analysis I have seen in this field."

The study complements earlier exoplanet surveys by counting planets between 10 and 100 AU, a range in which the Kepler Space Telescope transit survey and radial velocity observations are unlikely to detect planets. It was led by Eric Nielsen, a research scientist at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University, and involved more than 100 researchers at 40 institutions worldwide, including the University of California, Berkeley.

One new planet, one new brown dwarf

Since the GPIES survey began five years ago, the team has imaged six planets and three brown dwarfs orbiting these 300 stars. The team estimates that about 9 percent of massive stars have gas giants between 5 and 13 Jupiter masses beyond a distance of 10 AU, and fewer than 1 percent have brown dwarfs between 10 and 100 AU.
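The gap between that raw tally (six planets around 300 stars, about 2 percent) and occurrence rates like the 9 percent quoted above comes down to survey completeness: an instrument can only catch the fraction of planets it was actually sensitive to. The sketch below uses invented numbers purely to show how such a correction scales a raw rate; the paper's analysis is a far more rigorous statistical treatment.

```python
# Illustrative completeness correction with made-up numbers; this is not
# the GPIES statistical analysis. If a survey detects k planets around N
# stars but would only have been sensitive to a typical planet a fraction
# C of the time, the implied occurrence rate is roughly k / (N * C).
k_detections = 6        # giant planets imaged (first 300 GPIES stars)
n_stars = 300
completeness = 0.22     # assumed average survey sensitivity (hypothetical)

raw_rate = k_detections / n_stars
corrected_rate = k_detections / (n_stars * completeness)
print(f"raw rate: {raw_rate:.1%}, "
      f"completeness-corrected: {corrected_rate:.1%}")
```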

The new data set provides important insight into how and where massive objects form within planetary systems.

"As you go out from the central star, giant planets become more frequent. Around 3 to 10 AU, the occurrence rate peaks," Chiang said. "We know it peaks because the Kepler and radial velocity surveys find a rise in the rate, going from hot Jupiters very near the star to Jupiters at a few AU from the star. GPI has filled in the other end, going from 10 to 100 AU, and finding that the occurrence rate drops; the giant planets are more frequently found at 10 than 100. If you combine everything, there is a sweet spot for giant planet occurrence around 3 to 10 AU."

"With future observatories, particularly the Thirty-Meter Telescope and ambitious space-based missions, we will start imaging the planets residing in the sweet spot for sun-like stars," said team member Paul Kalas, a UC Berkeley adjunct professor of astronomy.

The exoplanet survey discovered only one previously unknown planet -- 51 Eridani b, nearly three times the mass of Jupiter -- and one previously unknown brown dwarf -- HR 2562 B, weighing in at about 26 Jupiter masses. None of the giant planets imaged were around sun-like stars. Instead, giant gas planets were discovered only around more massive stars, at least 50 percent larger than our sun, or 1.5 solar masses.

"Given what we and other surveys have seen so far, our solar system doesn't look like other solar systems," said Bruce Macintosh, the principal investigator for GPI and a professor of physics at Stanford. "We don't have as many planets packed in as close to the sun as they do to their stars and we now have tentative evidence that another way in which we might be rare is having these kind of Jupiter-and-up planets."

"The fact that giant planets are more common around stars more massive than sun-like stars is an interesting puzzle," Chiang said.

Because many stars visible in the night sky are massive young stars called A stars, this means that "the stars you can see in the night sky with your eye are more likely to have Jupiter-mass planets around them than the fainter stars that you need a telescope to see," Kalas said. "That is kinda cool."

The analysis also shows that gas giant planets and brown dwarfs, while seemingly on a continuum of increasing mass, may be two distinct populations that formed in different ways. The gas giants, up to about 13 times the mass of Jupiter, appear to have formed by accretion of gas and dust onto smaller objects -- from the bottom up. Brown dwarfs, between 13 and 80 Jupiter masses, formed like stars, by gravitational collapse -- from the top down -- within the same cloud of gas and dust that gave rise to the stars.

"I think this is the clearest evidence we have that these two groups of objects, planets and brown dwarfs, form differently," Chiang said. "They really are apples and oranges."

Direct imaging is the future

The Gemini Planet Imager can sharply image planets around distant stars, thanks to extreme adaptive optics, which rapidly detects turbulence in the atmosphere and reduces blurring by adjusting the shape of a flexible mirror. The instrument detects the heat of bodies still glowing from their own internal energy, such as exoplanets that are large, between 2 and 13 times the mass of Jupiter, and young, less than 100 million years old, compared to our sun's age of 4.6 billion years. Even though it blocks most of the light from the central star, the glare still limits GPI to seeing only planets and brown dwarfs far from the stars they orbit, between about 10 and 100 AU.

The team plans to analyze data on the remaining stars in the survey, hoping for greater insight into the most common types and sizes of planets and brown dwarfs.

Chiang noted that the success of GPIES shows that direct imaging will become increasingly important in the study of exoplanets, especially for understanding their formation.

"Direct imaging is the best way at getting at young planets," he said. "When young planets are forming, their young stars are too active, too jittery, for radial velocity or transit methods to work easily. But with direct imaging, seeing is believing."

Read more at Science Daily

Jun 12, 2019

Breakthrough in the discovery of DNA in ancient bones buried in water

During the Iron Age, around 300 AD, something extraordinary began in the Levänluhta area of Isokyrö, in southwestern Finland. The deceased were buried in a lake, and this practice continued for at least 400 years. When trenches were dug in the local fields in the mid-1800s, skulls and other human bones surfaced. These bones had been preserved almost intact in the anoxic, ferrous water. Archaeologists, historians and locals have been wondering about these finds for over 150 years.

In 2010, a multidisciplinary research group at the University of Helsinki decided to re-investigate the mystery of Levänluhta. The site, thought to be, among other things, a sacrificial spring, is exceptional even on a global scale and has yielded altogether about 75 kg of human bone material. The research group, led by docent Anna Wessman, had an ambitious aim: to find out who the deceased buried at Levänluhta were, and why they were, exceptionally, buried under water so far from dwelling sites. Now, after several years of scientific work, the group reports its results in the most recent issue of Nature. The results are part of a more extensive international study shedding light on the colonization and population history of Siberia with DNA data from ancient human bones up to 31,000 years old.

"In our part, we wanted especially to find out the origins of the Iron Age remains found from Levänluhta," says the group leader Anna Wessman.

New results with DNA sequencing technology

This was investigated using cutting-edge ancient-DNA sequencing technology, which the Department of Forensic Medicine is interested in because of the forensic casework performed there. Professor Antti Sajantila explains that the early phases of this project were demanding.

"Unability to repeat even our own results was utterly frustrating," Sajantila tells about the first experiments in the laboratory.

The methods developed rapidly over the course of the international collaboration, and ultimately the first Finnish results were shown to be accurate. Yet it was surprising that the genomes of three Levänluhta individuals clearly resembled those of the modern Sámi people.

"We understood this quite early, but it took long to confirm these findings," tells docent Jukka Palo.

Locals or passers-by?

The results suggested that the Isokyrö region was inhabited by Sámi people in ancient times -- according to radiocarbon dating, the bones belonged to individuals who died between 500 and 700 AD. This would be concrete proof of a Sámi presence in southern Finland in the past. But were these people locals, recent immigrants or chance passers-by? To find out, techniques other than DNA analysis were needed. The solution lay in tooth enamel.

Curator Laura Arppe from the Finnish Museum of Natural History says that strontium isotopes found in the enamel strongly suggest that the individuals grew up in the Levänluhta region.

The current genomes of people in Finland carry both eastern Uralic and western Scandinavian components, and the genome of one of the Levänluhta individuals examined had clear ties to present-day Scandinavians. As a whole, the replacement of the Sámi people in southern and central Finland reflects the replacement processes in Siberia clarified in the present article. This has probably been a common feature at northern latitudes.

"The Levänluhta project demands further studies, not only to broaden the DNA data but also to understand the water burials as a phenomenon. The question "Why?" still lies unanswered," ponders the bone specialist, docent Kristiina Mannermaa.

Read more at Science Daily

Fifty years later, DDT lingers in lake ecosystems

To control pest outbreaks, airplanes sprayed more than 6,280 tons of dichlorodiphenyltrichloroethane (DDT) onto forests in New Brunswick, Canada, between 1952 and 1968, according to Environment Canada. By 1970, growing awareness of the harmful effects of DDT on wildlife led to curtailed use of the insecticide in the area. However, researchers reporting in ACS' Environmental Science & Technology have shown that DDT lingers in sediments from New Brunswick lakes, where it could alter zooplankton communities.

After being applied aerially to forests, DDT can enter lakes and rivers through atmospheric deposition and land runoff. The long-lived insecticide, now banned in most countries, and its toxic breakdown products accumulate in lake sediments and from there, could enter the food web. Previous research has shown that freshwater crustacean zooplankton such as Cladocera, otherwise known as water fleas, are sensitive to DDT. Joshua Kurek and colleagues wondered if elevated DDT use in the 1950s and 60s could have affected zooplankton populations in lakes, and whether these changes, and DDT and its breakdown products, persist today.

To find out, the researchers collected sediment core samples from five remote lakes in New Brunswick. The lake sediment cores captured environmental conditions from about the years 1890 to 2016. The team analyzed the concentrations of DDT and its breakdown products in thin sections of the sediments, finding that peak DDT levels generally occurred during the 1970s and 80s. The most recent sediments still exceeded levels considered safe for aquatic organisms. When the researchers examined the sediments for partially fossilized remains of Cladocera, they found that most lakes showed a shift from large-bodied to small-bodied zooplankton species, which are generally more tolerant to contaminants, beginning in the 1950s when DDT was widely applied in New Brunswick.
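Assigning calendar years to thin sediment sections like these relies on an age-depth model: a few independently dated horizons anchor the chronology, and ages in between are interpolated. The sketch below uses invented depths and dates purely to illustrate the idea; real cores are typically dated with radiometric markers such as lead-210.

```python
# Toy age-depth model for a sediment core; depths and dates are invented,
# not values from the study. Ages between dated horizons are found by
# linear interpolation.
import numpy as np

dated_depths_cm = [0.0, 10.0, 25.0, 40.0]   # hypothetical dated horizons
dated_years = [2016, 1985, 1950, 1890]      # hypothetical ages

sample_depth_cm = 18.0
sample_year = np.interp(sample_depth_cm, dated_depths_cm, dated_years)
print(f"A section at {sample_depth_cm} cm dates to ~{sample_year:.0f}")
```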

From Science Daily

How the cell protects itself

Cells contain transcripts of the genetic material, which migrate from the cell nucleus to another part of the cell. This movement protects the transcripts from premature recruitment by "spliceosomes." If this protection fails, the entire cell is in danger: cancer and neurodegenerative diseases can develop. Researchers at the University of Göttingen and the University Medical Center Göttingen have demonstrated the underlying mechanism in the cell. The results were published in the journal Cell Reports.

Human cells are made up of the following: a cell nucleus, which contains the genetic material in the form of DNA; and the cytoplasm, where proteins are built. In the cell nucleus, the DNA that contains the blueprint for the organism is transcribed into another form, messenger RNA, which carries the instructions for protein production. Separated from the original transcript, the proteins can then be produced in the cytoplasm. The separation is important because the messenger RNA is not immediately usable: first, a precursor (pre-messenger RNA) is produced that still contains regions that must be removed before the messenger RNA reaches the cytoplasm. If these regions are not removed beforehand, shortened or dysfunctional proteins are produced, which is dangerous for the cell.
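
As a toy illustration of what "removing regions" means, splicing can be pictured as cutting marked spans out of a string; the sequence and coordinates below are invented:

    # Toy model of splicing: cut intron regions out of a pre-mRNA string
    # so that only the exons remain in the mature messenger RNA.
    def splice(pre_mrna, introns):
        """introns: list of (start, end) index pairs to remove."""
        mature, last = [], 0
        for start, end in sorted(introns):
            mature.append(pre_mrna[last:start])
            last = end
        mature.append(pre_mrna[last:])
        return "".join(mature)

    pre = "AUGGCU" + "GUAAGU" + "CCGUAA"    # exon + intron + exon (made up)
    print(splice(pre, [(6, 12)]))           # prints AUGGCUCCGUAA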

The molecular machines that cut these regions out of the messenger RNA are the spliceosomes. They contain proteins and another type of DNA transcript, the snRNA. The snRNA is not translated into protein like messenger RNA; instead, together with the proteins, it forms the molecular machinery of the spliceosome. In human cells, the snRNA of the spliceosomes also moves into the cytoplasm. In other organisms, such as baker's yeast, which is often used as a model organism in research, scientists had thought that the snRNA of the spliceosomes never left the cell nucleus. Why evolution favored exporting snRNA before its incorporation into the spliceosomes of human cells was also a mystery.

"Our experiments show that in fact the snRNA of the spliceosomes also migrates into the cytoplasm in yeast," said Professor Heike Krebber, Head of the Department of Molecular Genetics at the Institute for Microbiology and Genetics at the University of Göttingen. In a second step, the researchers answered the question as to why the messenger RNA of the spliceosomes actually moves into the cytoplasm. It was unclear because the spliceosomes' task is to cut out individual RNA regions and this takes place back in the cell nucleus. The team of researchers manipulated the yeast by genetic experiments so that the precursors of snRNA no longer changed in the cytoplasm. The observation: "The spliceosomes attempt to work with the precursors, the unfinished snRNA, and this cannot function as it's supposed to," said Krebber. "This is the reason that healthy cells must first send the precursors of messenger RNA out of the cell nucleus immediately after their production: it is to prevent them from being used by the developing spliceosomes. This basic understanding is important in order to identify the underlying cause of the development of diseases.

From Science Daily

New tool can pinpoint origins of the gut's bacteria

A UCLA-led research team has developed a faster and more accurate way to determine where the many bacteria that live in, and on, humans come from. Broadly, the tool can deduce the origins of any microbiome, a localized and diverse community of microscopic organisms.

The new computational tool, called "FEAST," can analyze large amounts of genetic information in just a few hours, compared to tools that take days or weeks. The software program could be used in health care, public health, environmental studies and agriculture. The study was published online in Nature Methods.

A microbiome typically contains hundreds to thousands of microbial species. Microbiomes are found everywhere, from the digestive tracts of humans, to lakes and rivers that feed water supplies. The microorganisms that make up these communities can originate from their surrounding environment, including food.

Knowing where these organisms come from and how these communities form can give scientists a more detailed picture of the unseen ecological processes that affect human health. The researchers developed the program to give doctors and scientists a more effective tool to investigate these phenomena.

The source-tracking program estimates what percentage of a microbiome came from each of its possible sources. It's similar in concept to a census that reveals the countries an immigrant population came from, and what percentage each group makes up of the total population.

For example, using the source-tracking tool on a kitchen counter sample can indicate how much of that sample came from humans, how much came from food, and specifically which types of food.
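
The estimation problem underneath can be sketched in a few lines. This is not FEAST's actual algorithm (the tool is built around a fast expectation-maximization procedure); it is a minimal stand-in that infers mixing proportions by non-negative least squares on hypothetical taxa profiles:

    # Minimal source-tracking stand-in: estimate what fraction of a "sink"
    # sample came from each candidate source, given taxa abundance profiles.
    import numpy as np
    from scipy.optimize import nnls

    # Columns: candidate sources; rows: relative abundances of three taxa
    sources = np.array([
        [0.70, 0.05, 0.10],    # taxon A in human, food, soil (made up)
        [0.20, 0.80, 0.10],    # taxon B
        [0.10, 0.15, 0.80],    # taxon C
    ])
    sink = np.array([0.40, 0.45, 0.15])     # e.g. a kitchen-counter sample

    weights, _ = nnls(sources, sink)
    proportions = weights / weights.sum()
    print(dict(zip(["human", "food", "soil"], proportions.round(2))))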

Armed with this information, doctors could distinguish a healthy person from one who has a particular disease by simply analyzing their microbiome. Scientists could use the tool to detect contamination in water resources or in food supply chains.

"The microbiome has been linked to many aspects of human physiology and health, yet we are just in the early stages of understanding the clinical implications of this dynamic web of many species and how they interact with each other," said Eran Halperin, the study's principal investigator who holds UCLA faculty appointments in the Samueli School of Engineering and in the David Geffen School of Medicine.

"There has been an unprecedented expansion of microbiome data, which has rapidly increased our knowledge of the diverse functions and distributions of microbial life," Halperin added. "Nonetheless, such big and complex datasets pose statistical and computational challenges."

Compared to other source-tracking tools, FEAST is up to 300 times faster, and is significantly more accurate, the researchers say.

Also, current tools can only analyze smaller datasets, or only target specific microorganisms that are deemed to be harmful contaminants. The new tool can process much larger datasets and offer a more complete picture of the microorganisms that are present and where they came from, the researchers say.

The researchers confirmed FEAST's viability by comparing it against analyses of previously published datasets.

For example, they used the tool to determine the types of microorganisms on a kitchen counter and it provided much more detail than previous tools that analyzed the same dataset.

They also used the tool to compare the gut microbiomes of infants delivered by cesarean section to the microbiomes of babies who were delivered vaginally.

Read more at Science Daily

Mysterious holes in Antarctic sea ice explained by years of robotic data

Antarctica illustration.
The winter ice on the surface of Antarctica's Weddell Sea occasionally forms an enormous hole. A hole that appeared in 2016 and 2017 drew intense curiosity from scientists and reporters. Though even bigger gaps had formed decades before, this was the first time oceanographers had a chance to truly monitor the unexpected gap in Antarctic winter sea ice.

A new study led by the University of Washington combines satellite images of the sea ice cover, robotic drifters and even seals outfitted with sensors to better understand the phenomenon. The research explores why this hole appears in only some years, and what role it could play in the larger ocean circulation.

The study was published June 10 in the journal Nature.

"We thought this large hole in the sea ice -- known as a polynya -- was something that was rare, maybe a process that had gone extinct. But the events in 2016 and 2017 forced us to reevaluate that," said lead author Ethan Campbell, a UW doctoral student in oceanography. "Observations show that the recent polynyas opened from a combination of factors -- one being the unusual ocean conditions, and the other being a series of very intense storms that swirled over the Weddell Sea with almost hurricane-force winds."

A "polynya," a Russian word that roughly means "hole in the ice," can form near shore as wind pushes the ice around. But it can also appear far from the coast and stick around for weeks to months, where it acts as an oasis for penguins, whales and seals to pop up and breathe.

This particular spot far from the Antarctic coast often has small openings and has seen large polynyas before. The biggest known polynyas at that location were in 1974, 1975 and 1976, just after the first satellites were launched, when an area the size of New Zealand remained ice-free through three consecutive Antarctic winters despite air temperatures far below freezing.

Campbell joined the UW as a graduate student in 2016 to better understand this mysterious phenomenon. In a stroke of scientific luck, a big one appeared for the first time in decades. A NASA satellite image in August 2016 drew public attention to a 33,000-square-kilometer (13,000-square-mile) gap that appeared for three weeks. An even bigger gap, of 50,000 square kilometers (19,000 square miles), appeared in September and October of 2017.

The Southern Ocean is thought to play a key role in global ocean currents and carbon cycles, but its behavior is poorly understood. It hosts some of the fiercest storms on the planet, with winds whipping uninterrupted around the continent in the 24-hour darkness of polar winter. The new study used observations from the Southern Ocean Carbon and Climate Observations and Modeling project, or SOCCOM, which puts out instruments that drift with the currents to monitor Antarctic conditions.

The study also used data from the long-running Argo ocean observing program, elephant seals that beam data back to shore, weather stations and decades of satellite images.

"This study shows that this polynya is actually caused by a number of factors that all have to line up for it to happen," said co-author Stephen Riser, a UW professor of oceanography. "In any given year you could have several of these things happen, but unless you get them all, then you don't get a polynya."

The study shows that when winds surrounding Antarctica draw closer to shore, they promote stronger upward mixing in the eastern Weddell Sea. In that region, an underwater mountain known as Maud Rise forces dense seawater around it and leaves a spinning vortex above. Two SOCCOM instruments were trapped in the vortex above Maud Rise and recorded years of observations there.

Analysis shows that when the surface ocean is especially salty, as seen throughout 2016, strong winter storms can set off an overturning circulation. Warmer, saltier water from the depths gets churned up to the surface, where air chills it and makes it denser than the water below. As that water sinks, relatively warmer deep water of about 1 degree Celsius (34 F) replaces it, creating a feedback loop where ice can't reform.
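
A linearized equation of state makes the density argument concrete. The sketch below uses textbook-scale seawater coefficients, not numbers from the study:

    # Why chilled, salty surface water sinks: linearized seawater density.
    RHO0 = 1027.0     # kg/m^3, reference density
    ALPHA = 5e-5      # per degC, thermal expansion near 0 degC
    BETA = 7.8e-4     # per (g/kg), haline contraction

    def density(T, S, T0=1.0, S0=34.5):
        """Density linearized about a reference temperature and salinity."""
        return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

    surface = density(T=-1.8, S=34.7)     # near-freezing, unusually salty
    deep = density(T=1.0, S=34.68)        # relatively warm deep water
    print(surface > deep)                 # True: the surface water sinks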

Under climate change, fresh water from melting glaciers and other sources will make the Southern Ocean's surface layer less dense, which might mean fewer polynyas in the future. But the new study questions that assumption. Many models show that the winds circling Antarctica will become stronger and draw closer to the coast -- the new paper suggests this would encourage more polynyas to form, not fewer.

These are the first observations to prove that even a smaller polynya like the one in 2016 moves water from the surface all the way to the deep ocean.

"Essentially it's a flipping over of the entire ocean, rather than an injection of surface water on a one-way trip from the surface to the deep," said co-author Earle Wilson, who recently completed his doctorate in oceanography at the UW.

One way a surface polynya matters for the climate involves the deepest water in the oceans, known as Antarctic Bottom Water. This cold, dense water lurks below all the other water. Where and how it's created affects its characteristics, and would have ripple effects on other major ocean currents.

"Right now people think most of the bottom water is forming on the Antarctic shelf, but these big offshore polynyas might have been more common in the past," Riser said. "We need to improve our models so we can study this process, which could have larger-scale climate implications."

Large and long-lasting polynyas can also affect the atmosphere, because deep water contains carbon from lifeforms that have sunk over centuries and dissolved on their way down. Once this water reaches the surface, that carbon could be released.

"This deep reservoir of carbon has been locked away for hundreds of years, and in a polynya it might get ventilated at the surface through this really violent mixing," Campbell said. "A large carbon outgassing event could really whack the climate system if it happened multiple years in a row."

Read more at Science Daily

Jun 11, 2019

What if dark matter is lighter? Report calls for small experiments to broaden the hunt

Abstract background.
The search for dark matter is expanding. And going small.

While dark matter abounds in the universe -- it is by far the most common form of matter, making up about 85 percent of all the matter in the universe -- it also hides in plain sight. We don't yet know what it's made of, though we can witness its gravitational pull on known matter.

Theorized weakly interacting massive particles, or WIMPs, have been among the cast of likely suspects comprising dark matter, but they haven't yet shown up where scientists had expected them.

Casting many small nets

So scientists are now redoubling their efforts by designing new and nimble experiments that can look for dark matter in previously unexplored ranges of particle mass and energy, and using previously untested methods. The new approach, rather than relying on a few large experiments' "nets" to try to snare one type of dark matter, is akin to casting many smaller nets with much finer mesh.

Dark matter could be much "lighter," or lower in mass and slighter in energy, than previously thought. It could be composed of theoretical, wavelike ultralight particles known as axions. It could be populated by a wild kingdom filled with many species of as-yet-undiscovered particles. And it may not be composed of particles at all.

Momentum has been building for low-mass dark matter experiments, which could expand our current understanding of the makeup of matter as embodied in the Standard Model of particle physics, noted Kathryn Zurek, a senior scientist and theoretical physicist at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab).

Zurek, who is also affiliated with UC Berkeley, has been a pioneer in proposing low-mass dark matter theories and possible ways to detect it.

"What experimental evidence do we have for physics beyond the Standard Model? Dark matter is one of the best ones," she said. "There are these theoretical ideas that have been around for a decade or so," Zurek added, and new developments in technology -- such as new advances in quantum sensors and detector materials -- have also helped to drive the impetus for new experiments.

"The field has matured and blossomed over the last decade. It's become mainstream -- this is no longer the fringe," she said. Low-mass dark matter discussions have moved from small conferences and workshops to a component of the overall strategy in searching for dark matter.

She noted that Berkeley Lab and UC Berkeley, with their particular expertise in dark matter theories, experiments, and cutting-edge detector and target R&D, are poised to make a big impact in this emerging area of the hunt for dark matter.

Report highlights need to search for "light," low-mass dark matter

Dark matter-related research by Zurek and other Berkeley Lab researchers is highlighted in a DOE report, "Basic Research Needs for Dark Matter Small Projects New Initiatives," based on an October 2018 High Energy Physics Workshop on Dark Matter. Zurek and Dan McKinsey, a Berkeley Lab faculty senior scientist and UC Berkeley physics professor, served as co-leads on a workshop panel focused on dark matter direct-detection techniques, and this panel contributed to the report.

The report proposes a focus on small-scale experiments -- with project costs ranging from $2 million to $15 million -- to search for dark matter particles that have a mass smaller than a proton. Protons are subatomic particles within every atomic nucleus that each weigh about 1,836 times more than an electron.

This new, lower-mass search effort will have "the overarching goal of finally understanding the nature of the dark matter of the universe," the report states.

In a related effort, the U.S. Department of Energy this year solicited proposals for new dark matter experiments, with a May 30 deadline, and Berkeley Lab participated in the proposal process, McKinsey said.

"Berkeley is a dark matter mecca" that is primed for participating in this expanded search, he said. McKinsey has been a participant in large direct-detection dark matter experiments including LUX and LUX-ZEPLIN and is also working on low-mass dark matter detection techniques.

3 priorities in the expanded search

The report highlights three major priority research directions in searching for low-mass dark matter that "are needed to achieve broad sensitivity and ... to reach different key milestones":

  1. Create and detect dark matter particles below the proton mass and associated forces, leveraging DOE accelerators that produce beams of energetic particles. Such experiments could potentially help us understand the origins of dark matter and explore its interactions with ordinary matter, the report states.
  2. Detect individual galactic dark matter particles -- down to a mass measuring about 1 trillion times smaller than that of a proton (a rough sense of this scale follows the list) -- through interactions with advanced, ultrasensitive detectors. The report notes that there are already underground experimental areas and equipment that could be used in support of these new experiments.
  3. Detect galactic dark matter waves using advanced, ultrasensitive detectors with emphasis on the so-called QCD (quantum chromodynamics) axion. Advances in theory and technology now allow scientists to probe for the existence of this type of axion-based dark matter across the entire spectrum of its expected ultralight mass range, providing "a glimpse into the earliest moments in the origin of the universe and the laws of nature at ultrahigh energies and temperatures," the report states.
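
For a rough sense of the scale in the second milestone (a back-of-the-envelope conversion, not a figure from the report): the proton mass is about 938 MeV/c^2, so a particle a trillion (10^12) times lighter would have a mass of roughly 938 MeV / 10^12, or about 1 meV/c^2, deep in the ultralight regime where dark matter would behave more like a wave than a particle.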

This axion, if it exists, could also help to explain properties associated with the universe's strong force, which is responsible for holding most matter together -- it binds particles together in an atom's nucleus, for example.

Searches for the traditional WIMP form of dark matter have increased in sensitivity about 1,000-fold in the past decade.

Berkeley scientists are building prototype experiments

Berkeley Lab and UC Berkeley researchers will at first focus on liquid helium and gallium arsenide crystals in searching for low-mass dark matter particle interactions in prototype laboratory experiments now in development at UC Berkeley.

"Materials development is also part of the story, and also thinking about different types of excitations" in detector materials, Zurek said.

Besides liquid helium and gallium arsenide, the materials that could be used to detect dark matter particles are diverse, "and the structures in them are going to allow you to couple to different dark matter candidates," she said. "I think target diversity is extremely important."

The goal of these experiments, which are expected to begin within the next few months, is to develop the technology and techniques so that they can be scaled up for deep-underground experiments at other sites that will provide additional shielding from the natural shower of particle "noise" raining down from the sun and other sources.

McKinsey, who is working on the prototype experiments at UC Berkeley, said that the liquid helium experiment there will seek out any signs of dark matter particles causing nuclear recoil -- a process through which a particle interaction gives the nucleus of an atom a slight jolt that researchers hope can be amplified and detected.

One of the experiments seeks to measure excitations from dark matter interactions that lead to the measurable evaporation of a single helium atom.

"If a dark matter particle scatters (on liquid helium), you get a blob of excitation," McKinsey said. "You could get millions of excitations on the surface -- you get a big heat signal."

He noted that atoms in liquid helium and crystals of gallium arsenide have properties that allow them to light up or "scintillate" in particle interactions. Researchers will at first use more conventional light detectors, known as photomultiplier tubes, and then move to more sensitive, next-generation detectors.

"Basically, over the next year we will be studying light signals and heat signals," McKinsey said. "The ratio of heat to light will give us an idea what each event is."

These early investigations will determine whether the tested techniques can be effective in low-mass dark matter detection at other sites that provide a lower-noise environment. "We think this will allow us to probe much lower energy thresholds," he said.

Read more at Science Daily

Citizen scientists re-tune Hubble's galaxy classification

Hundreds of thousands of volunteers have helped to overturn almost a century of galaxy classification, in a new study using data from the longstanding Galaxy Zoo project. The new investigation, published in the journal Monthly Notices of the Royal Astronomical Society, uses classifications of over 6000 galaxies to reveal that "well known" correlations between different features are not found in this large and complete sample.

Almost 100 years ago, in 1927, astronomer Edwin Hubble wrote about the spiral galaxies he was observing at the time, and developed a model to classify galaxies by type and shape. Known as the "Hubble Tuning Fork" because of its shape, this model takes into account two main features: the size of the central region (known as the 'bulge'), and how tightly wound any spiral arms are.

Hubble's model soon became the authoritative method of classifying spiral galaxies, and is still used widely in astronomy textbooks to this day. His key observation was that galaxies with larger bulges tended to have more tightly wound spiral arms, lending vital support to the 'density wave' model of spiral arm formation.

Now though, in contradiction to Hubble's model, the new work finds no significant correlation between the sizes of the galaxy bulges and how tightly wound the spirals are. This suggests that most spirals are not static density waves after all.
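
At its heart, this result is a correlation measurement across thousands of galaxies. A hypothetical sketch with random stand-in data (not Galaxy Zoo's actual vote fractions) shows what such a null result looks like:

    # Correlate bulge prominence with spiral-arm winding across a sample.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 6000                           # roughly the sample size in the study
    bulge = rng.uniform(0, 1, n)       # 0 = no bulge, 1 = bulge-dominated
    winding = rng.uniform(0, 1, n)     # 0 = loose arms, 1 = tightly wound

    r, p = pearsonr(bulge, winding)
    print(f"r = {r:.3f}, p = {p:.3g}")     # r near zero: no correlation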

Galaxy Zoo Project Scientist and first author of the new work, Professor Karen Masters from Haverford College in the USA explains: "This non-detection was a big surprise, because this correlation is discussed in basically all astronomy textbooks -- it forms the basis of the spiral sequence described by Hubble."

Hubble was limited by the technology of the time, and could only observe the brightest nearby galaxies. The new work is based on a sample 15 times larger, drawn from the Galaxy Zoo project, where members of the public assess images of galaxies taken by telescopes around the world, identifying key features to help scientists to follow up and analyse in more detail.

"We always thought that the bulge size and winding of the spiral arms were connected," says Masters. "The new results suggest otherwise, and that has a big impact on our understanding of how galaxies develop their structure."

There are several proposed mechanisms for how spiral arms form in galaxies. One of the most popular is the density wave model -- the idea that the arms are not fixed structures, but caused by ripples in the density of material in the disc of the galaxy. Stars move in and out of these ripples as they pass around the galaxy.

New models however suggest that some arms at least could be real structures, not just ripples. These may consist of collections of stars that are bound by each other's gravity, and physically rotate together. This dynamic explanation for spiral arm formation is supported by state-of-the-art computer models of spiral galaxies.

"It's clear that there is still lots of work to do to understand these objects, and it's great to have new eyes involved in the process," adds Brooke Simmons, Deputy Project Scientist for the Galaxy Zoo project.

Read more at Science Daily

An hour or two of outdoor learning every week increases teachers' job satisfaction

A Swansea University study has revealed how as little as an hour a week of outdoor learning has tremendous benefits for children and also boosts teachers' job satisfaction.

Through interviews and focus groups, researchers explored the views and experiences of pupils and educators at three primary schools in south Wales that had adopted an outdoor learning programme, which entailed teaching the curriculum in the natural environment for at least an hour a week.

Interviews were held with headteachers and teachers, and focus groups were conducted with pupils aged 9-11 both before and during the implementation of an outdoor learning programme within the curriculum.

The schools in the study reported a variety of benefits of outdoor learning for both children and teachers, including improved health, wellbeing, education and engagement in school.

Lead author of the study Emily Marchant, a PhD researcher in Medical Studies at Swansea University, explained: "We found that the pupils felt a sense of freedom when outside the restricting walls of the classroom. They felt more able to express themselves, and enjoyed being able to move about more too. They also said they felt more engaged and were more positive about the learning experience. We also heard many say that their well-being and memory were better, and teachers told us how it helped engage all types of learners."

The benefits of outdoor education for children are well documented, but a notable finding of this study is the impact that the outdoor learning programme had on teachers.

Emily said: "Initially, some teachers had reservations about transferring the classroom outdoors but once outdoor learning was embedded within the curriculum, they spoke of improved job satisfaction and personal wellbeing. This is a really important finding given the current concerns around teacher retention rates. Overall, our findings highlight the potential of outdoor learning as a curriculum tool in improving school engagement and the health, wellbeing and education outcomes of children.

"The schools within our study have all continued with regular outdoor learning within the curriculum. With support and recognition from education inspectorates of the wider benefits to children's development and education, outdoor learning could be set within the primary school curriculum."

From Science Daily

Genetics influence how protective childhood vaccines are for individual infants

A genome-wide search in thousands of children in the UK and Netherlands has revealed genetic variants associated with differing levels of protective antibodies produced after routine childhood immunizations. The findings, appearing June 11 in the journal Cell Reports, may inform the development of new vaccine strategies and could lead to personalized vaccination schedules to maximize vaccine effectiveness.

"This study is the first to use a genome-wide genotyping approach, assessing several million genetic variants, to investigate the genetic determinants of immune responses to three routine childhood vaccines," says Daniel O'Connor of the University of Oxford, who is co-first author on the paper along with Eileen Png of the Genome Institute of Singapore. "While this study is a good start, it also clearly demonstrates that more work is needed to fully describe the complex genetics involved in vaccine responses, and to achieve this aim we will need to study many more individuals."

Vaccines have revolutionized public health, preventing millions of deaths each year, particularly in childhood. The maintenance of antibody levels in the blood is essential for continued vaccine-induced protection against pathogens. Yet there is considerable variability in the magnitude and persistence of vaccine-induced immunity. Moreover, antibody levels rapidly wane following immunization with certain vaccines in early infancy, so boosters are required to sustain protection.

"Evoking robust and sustained vaccine-induced immunity from early life is a crucial component of global health initiatives to combat the burden of infectious disease," O'Connor says. "The mechanisms underlying the persistence of antibody is of major interest, since effectiveness and acceptability of vaccines would be improved if protection were sustained after infant immunization without the need for repeated boosting through childhood."

Vaccine responses and the persistence of immunity are determined by various factors, including age, sex, ethnicity, microbiota, nutritional status, and infectious diseases. Twin studies have also shown vaccine-induced immunity to be highly heritable, and recent studies have started to unpick the genetic components underlying this complex trait.

To explore genetic factors that determine the persistence of immunity, O'Connor and colleagues carried out a genome-wide association study of 3,602 children in the UK and Netherlands. The researchers focused on three routine childhood vaccines that protect against life-threatening bacterial infections: capsular group C meningococcal (MenC), Haemophilus influenzae type b (Hib), and tetanus toxoid (TT) vaccines. They analyzed approximately 6.7 million genetic variants, each affecting a single DNA building block and known as single nucleotide polymorphisms (SNPs), for association with vaccine-induced antibody levels in the blood.
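
Each of those millions of variants is tested one at a time. Below is a minimal sketch of a single-SNP association test on toy data; real GWAS models typically also adjust for covariates and correct for multiple testing:

    # One SNP's association test: regress antibody level on genotype
    # dosage (0, 1 or 2 copies of the minor allele). A GWAS repeats this
    # across millions of SNPs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 3602                               # cohort size in the study
    genotype = rng.integers(0, 3, n)       # hypothetical dosages per child
    antibody = 10 + 0.5 * genotype + rng.normal(0, 2, n)    # toy phenotype

    slope, intercept, r, p, se = stats.linregress(genotype, antibody)
    print(f"effect per allele = {slope:.2f}, p = {p:.2e}")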

The researchers identified two genetic loci associated with the persistence of vaccine-induced immunity following childhood immunization. The persistence of MenC immunity is associated with SNPs in a genomic region containing a family of signal-regulatory proteins, which are involved in immunological signaling. Meanwhile, the persistence of TT-specific immunity is associated with SNPs in the human leukocyte antigen (HLA) locus. HLA molecules present peptides to T cells, which in turn induce B cells to produce antibodies.

These variants likely account for only a small portion of the genetic determinants of persistence of vaccine-induced immunity. Moreover, it is unclear whether the findings apply to other ethnic populations besides Caucasians from the UK and Netherlands. But according to the authors, neonatal screening approaches could soon incorporate genetic risk factors that predict the persistence of immunity, paving the way for personalized vaccine regimens.

Read more at Science Daily

How the brain changes when mastering a new skill

Mastering a new skill -- whether a sport, an instrument, or a craft -- takes time and training. While it is understood that a healthy brain is capable of learning these new skills, how the brain changes in order to develop new behaviors is a relative mystery. More precise knowledge of this underlying neural circuitry may eventually improve the quality of life for individuals who have suffered brain injury by enabling them to more easily relearn everyday tasks.

Researchers from the University of Pittsburgh and Carnegie Mellon University recently published an article in PNAS that reveals what happens in the brain as learners progress from novice to expert. They discovered that new neural activity patterns emerge with long-term learning and established a causal link between these patterns and new behavioral abilities.

The research was performed as part of the Center for the Neural Basis of Cognition, a cross-institutional research and education program that combines the strengths of Pitt in basic and clinical neuroscience and bioengineering with those of CMU in cognitive and computational neuroscience.

The project was jointly mentored by Aaron Batista, associate professor of bioengineering at Pitt; Byron Yu, associate professor of electrical and computer engineering and biomedical engineering at CMU; and Steven Chase, associate professor of biomedical engineering and the Neuroscience Institute at CMU. The work was led by Pitt bioengineering postdoctoral associate Emily Oby.

"We used a brain-computer interface (BCI), which creates a direct connection between our subject's neural activity and the movement of a computer cursor," said Oby. "We recorded the activity of around 90 neural units in the arm region of the primary motor cortex of Rhesus monkeys as they performed a task that required them to move the cursor to align with targets on the monitor."

To determine whether the monkeys would form new neural patterns as they learned, the research group encouraged the animals to attempt a new BCI skill and then compared those recordings to the pre-existing neural patterns.

"We first presented the monkey with what we call an 'intuitive mapping' from their neural activity to the cursor that worked with how their neurons naturally fire and which didn't require any learning," said Oby. "We then induced learning by introducing a skill in the form of a novel mapping that required the subject to learn what neural patterns they need to produce in order to move the cursor."

Like learning most skills, the group's BCI task took several sessions of practice and a bit of coaching along the way.

"We discovered that after a week, our subject was able to learn how to control the cursor," said Batista. "This is striking because by construction, we knew from the outset that they did not have the neural activity patterns required to perform this skill. Sure enough, when we looked at the neural activity again after learning we saw that new patterns of neural activity had appeared, and these new patterns are what enabled the monkey to perform the task."

These findings suggest that the process for humans to master a new skill might also involve the generation of new neural activity patterns.

"Though we are looking at this one specific task in animal subjects, we believe that this is perhaps how the brain learns many new things," said Yu. "Consider learning the finger dexterity required to play a complex piece on the piano. Prior to practice, your brain might not yet be capable of generating the appropriate activity patterns to produce the desired finger movements."

Read more at Science Daily