Sep 14, 2019

How new loops in DNA packaging help us make diverse antibodies

Diversity is good, especially when it comes to antibodies. It's long been known that a gene assembly process called V(D)J recombination allows our immune system to mix and match bits of genetic code, generating new antibodies to conquer newly encountered threats. But how these gene segments come together to be spliced has been a mystery. A new study in Nature provides the answer.

Our DNA strands are organized, together with certain proteins, into a structure called chromatin, which contains multiple loops. When a cell needs to build a particular protein, chromatin loops bring two relatively distant DNA segments into close proximity so they can work together. Many of these loops are fixed in place, but some cells, notably cancer cells and immune cells, can rearrange existing loops or make new ones when they need to.

The new research, led by Frederick Alt, PhD, director of the Program in Cellular and Molecular Medicine (PCMM) at Boston Children's Hospital, shows in exquisite detail how our immune system's B cells exploit the loop formation process for the purpose of making new kinds of antibodies.

Scanning loops as they form

A pair of enzymes called RAG1 and RAG2, the researchers show, couple with mechanisms involved in making the chromatin loops to initiate the first step of V(D)J recombination -- joining the D and J segments. The RAG1/2 complex first binds to a site on an antibody gene known as the "recombination center." As the DNA scrolls past during the process of loop formation ("extrusion"), the RAG complex scans for the D and J segments the cell wants to combine. Other factors then impede the extrusion process, pausing the scrolling DNA at the recombination center so that RAG can access the desired segments.

"The loop extrusion process is harnessed by antibody gene loci to properly present substrate gene segments to the RAG complex for V(D)J recombination," says Alt.

While many of the hard-wired chromatin loops are formed and anchored by a factor known as CTCF, the Alt lab shows that other factors are involved in dynamic situations, like antibody formation, that require new loops on the fly. The study also establishes the role of a protein called cohesin in driving the loop extrusion/RAG scanning process.

"While these findings have been made in the context of V(D)J recombination in antibody formation, they have implications for processes that could be involved in gene regulation more generally," says Alt.

Read more at Science Daily

Few people with peanut allergy tolerate peanut after stopping oral immunotherapy

Allergy to peanut, which is often severe, is one of the most common food allergies in the United States. Previous studies have shown that peanut oral immunotherapy (OIT) -- ingesting small, controlled amounts of peanut protein -- can desensitize adults and children and prevent life-threatening allergic reactions, but the optimal duration and dose of therapy are unknown. In a study that followed participants after OIT had successfully desensitized them to peanut, discontinuing OIT, or continuing it at a reduced dose, led to a decline in its protective effects. The study, published online today in The Lancet, also found that several blood tests administered before OIT could predict the success of therapy. The Phase 2 study was supported by the National Institute of Allergy and Infectious Diseases (NIAID), part of the NIH, and may help clarify who is likely to benefit from peanut OIT and how this experimental treatment should be modified.

Investigators at Stanford University enrolled 120 people aged 7 to 55 with diagnosed peanut allergy in the Peanut Oral Immunotherapy Study: Safety, Efficacy and Discovery, or POISED. While otherwise avoiding peanut throughout the trial, 95 participants received gradually increasing daily doses of peanut protein up to 4 grams, and 25 participants received a daily placebo of oat flour. After 24 months, participants were given gradually increasing amounts of peanut in a controlled environment to assess their tolerance. Of the participants who received peanut OIT, 83% passed the peanut challenge without an allergic reaction, while only 4% on placebo did so.

Those on OIT who passed the challenge were then randomized either to continue on a reduced 300-mg daily dose of peanut protein or to switch to placebo OIT. One year later, more participants on the 300-mg dose (37%) passed the challenge than those on placebo (13%), confirming insights from smaller trials that desensitization is maintained in only a minority of participants after OIT is discontinued or reduced. Participants who passed food challenges also had lower initial levels of allergic antibodies to peanut protein and other indicators of allergic activity in the blood. Future research will focus on identifying optimal OIT regimens that maintain protection after therapy ends and allow for regular food consumption without allergic symptoms.

From Science Daily

Chameleon inspires 'smart skin' that changes color in the sun

A chameleon can alter the color of its skin so it either blends into the background to hide or stands out to defend its territory and attract a mate. The chameleon makes this trick look easy, using photonic crystals in its skin. Scientists, however, have struggled to make a photonic crystal "smart skin" that changes color in response to the environment, without also changing in size.

The journal ACS Nano is publishing research led by chemists at Emory University that found a solution to the problem. They developed a flexible smart skin that reacts to heat and sunlight while maintaining a near constant volume.

"Watching a chameleon change colors gave me the idea for the breakthrough," says first author Yixiao Dong, a PhD candidate in Emory's Department of Chemistry. "We've developed a new concept for a color-changing smart skin, based on observations of how nature does it."

"Scientists in the field of photonic crystals have been working for a long time to try to create color-changing smart skins for a range of potential applications, such as camouflage, chemical sensing and anti-counterfeiting tags," adds Khalid Salaita, senior author of the paper and an Emory professor of chemistry. "While our work is still in the fundamental stages, we've established the principles for a new approach to explore and build upon."

Co-authors of the paper include Alisina Bazrafshan and Dale Combs (Emory PhD students); Kimberly Clarke (an Emory post-doctoral fellow); and Anastassia Pokutta, Fatiesa Sulejmani and Wei Sun (from Georgia Tech's Wallace H. Coulter Department of Biomedical Engineering).

Besides chameleons, many other creatures have evolved the ability to change color. The stripes on a neon tetra fish, for example, turn from deep indigo to blue-green when they swim into sunlight.

The coloration in these organisms is not based on pigments, but on tiny particles in a repeating pattern, known as photonic crystals. The periodicity in these particles causes the material to interfere with wavelengths of light. Although the particles themselves are colorless, the precise spacing between them allows certain light waves to pass through them while rejecting others. The visible colors produced change depending on factors such as lighting conditions or shifts in the distance between the particles. The iridescence of some butterfly wings and the feathers of peacocks are among many other examples of photonic crystals in nature.
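The link between particle spacing and visible color can be sketched with a Bragg-style diffraction estimate. The snippet below is purely illustrative -- the spacing values, effective refractive index and viewing angle are assumptions, not figures from the study -- but it shows why widening the lattice spacing shifts the reflected color toward longer, redder wavelengths.

```python
import math

def reflected_wavelength_nm(spacing_nm, n_eff=1.33, theta_deg=90.0, order=1):
    """First-order Bragg reflection: lambda = 2 * n * d * sin(theta) / m.

    spacing_nm : lattice spacing of the photonic crystal (nm, assumed)
    n_eff      : effective refractive index of the medium (assumed, ~water)
    theta_deg  : glancing angle to the crystal planes (90 = head-on)
    """
    return 2.0 * n_eff * spacing_nm * math.sin(math.radians(theta_deg)) / order

# Widening the spacing shifts the reflected color toward the red.
for d in (180, 200, 220):  # spacings in nm (hypothetical values)
    print(d, round(reflected_wavelength_nm(d)))
```

Under these assumptions, spacings of roughly 180, 200 and 220 nanometers reflect blue, green and yellow-orange light respectively, which is why even small changes in the distance between colorless particles visibly change the hue.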

If you put strawberries into a blender, Dong explains, the resulting liquid will be red because the color of the strawberries comes from pigment. If you grind up iridescent butterfly wings, however, the result will be a dull powder because the rainbow colors were not based on pigments, but on what is known as "structural color." The structure of the photonic crystal arrays is destroyed when the butterfly wings are ground up.

To mimic chameleons and create an artificial smart skin, scientists have experimented with embedding photonic crystal arrays into flexible, water-containing polymers, or hydrogels. Expanding or contracting the hydrogel changes the spacing between the arrays, resulting in a color change. The problem, however, is that the accordion-like action needed to generate a visible change in hue causes the hydrogel to significantly grow or shrink in size, leading to structural instability and buckling of the material.

"No one wants a camouflage cloak that shrinks to change color," Salaita notes.

Dong was pondering the problem while watching YouTube videos of a chameleon. "I wanted to understand why a chameleon doesn't get bigger or smaller as it changes color, but remains its original size," he says.

In close-up, time-lapsed images of the chameleon changing hues, Dong noticed that the arrays of photonic crystals did not cover the entire skin but were spread out within a dark matrix. As the photonic crystals turned different colors, these patches of color remained the same distance apart. Dong hypothesized that the skin cells making up the dark matrix somehow adjusted to compensate for the shifts in the photonic crystals.

"I wondered if we could design something similar -- a composite structure of photonic crystal arrays embedded into a strain-accommodating matrix," Dong says.

The researchers used magnets to arrange patterns of photonic crystals containing iron oxide within a hydrogel. They then embedded these arrays into a second, non-color-changing hydrogel. The second, springy hydrogel was mechanically matched to the first hydrogel to compensate for shifts in distances between the photonic crystals. When heated, this strain-accommodating smart skin (SASS) changes color but maintains a near-constant size.

Dong also tested the material in sunlight, fabricating SASS films into the shape of a fish, in homage to the neon tetra, as well as into the shape of a leaf. When exposed to natural sunlight for 10 minutes, the SASS films shifted from orange to green, without changing in size.

"We've provided a general framework to guide the future design of artificial smart skins," Dong says. "There is still a long way to go for real-life applications, but it's exciting to push the field another step further."

Read more at Science Daily

Sep 13, 2019

Climate change may cut soil's ability to absorb water

Coasts, oceans, ecosystems, weather and human health all face impacts from climate change, and now valuable soils may also be affected.

Climate change may reduce the ability of soils to absorb water in many parts of the world, according to a Rutgers-led study. And that could have serious implications for groundwater supplies, food production and security, stormwater runoff, biodiversity and ecosystems.

The study is published in the journal Science Advances.

"Since rainfall patterns and other environmental conditions are shifting globally as a result of climate change, our results suggest that how water interacts with soil could change appreciably in many parts of the world, and do so fairly rapidly," said co-author Daniel Giménez, a soil scientist and professor in the Department of Environmental Sciences at Rutgers University-New Brunswick. "We propose that the direction, magnitude and rate of the changes should be measured and incorporated into predictions of ecosystem responses to climate change."

Water in soil is crucial for storing carbon, and soil changes could influence the level of carbon dioxide in the air in an unpredictable way, according to Giménez, of the School of Environmental and Biological Sciences. Carbon dioxide is one of the key greenhouse gases linked to climate change.

Giménez co-authored a study published in the journal Nature last year showing that regional increases in precipitation due to climate change may lead to less water infiltration, more runoff and erosion, and greater risk of flash flooding.

Whether rainfall infiltrates soil or runs off it determines how much water is available for plants or evaporates into the air. Studies have shown that water infiltration into soil can change over one to two decades with increased rainfall, and climate change is expected to boost rainfall in many areas of the world.

During a 25-year experiment in Kansas that involved irrigation of prairie soil with sprinklers, a Rutgers-led team of scientists found that a 35 percent increase in rainfall led to a 21 percent to 33 percent reduction in water infiltration rates in soil and only a small increase in water retention.

The biggest changes were linked to shifts in relatively large pores, or spaces, in the soil. Large pores capture water that plants and microorganisms can use, which contributes to enhanced biological activity and nutrient cycling in soil and decreases soil losses through erosion.

With increased rainfall, plant communities developed thicker roots that could clog larger pores, and the cycles of soil expansion (when water was added) and contraction (when water was removed) became less intense.

The next step is to investigate the mechanisms driving the observed changes, in order to extrapolate the findings to other regions of the world and incorporate them into predictions of how ecosystems will respond to climate change. The scientists also want to study a wider array of environmental factors and soil types, and identify other soil changes that may result from shifts in climate.

Read more at Science Daily

Engineers develop 'blackest black' material to date

With apologies to "Spinal Tap," it appears that black can, indeed, get more black.

MIT engineers report today that they have cooked up a material that is 10 times blacker than anything that has previously been reported. The material is made from vertically aligned carbon nanotubes, or CNTs -- microscopic filaments of carbon, like a fuzzy forest of tiny trees, that the team grew on a surface of chlorine-etched aluminum foil. The foil captures more than 99.96 percent of any incoming light, making it the blackest material on record.

The researchers have published their findings today in the journal ACS Applied Materials & Interfaces. They are also showcasing the cloak-like material as part of a new exhibit today at the New York Stock Exchange, titled "The Redemption of Vanity."

The artwork, a collaboration between Brian Wardle, professor of aeronautics and astronautics at MIT, and his group, and MIT artist-in-residence Diemut Strebe, features a 16.78-carat natural yellow diamond, estimated to be worth $2 million, which the team coated with the new, ultrablack CNT material. The effect is arresting: The gem, normally brilliantly faceted, appears as a flat, black void.

Wardle says the CNT material, aside from making an artistic statement, may also be of practical use, for instance in optical blinders that reduce unwanted glare, to help space telescopes spot orbiting exoplanets.

"There are optical and space science applications for very black materials, and of course, artists have been interested in black, going back well before the Renaissance," Wardle says. "Our material is 10 times blacker than anything that's ever been reported, but I think the blackest black is a constantly moving target. Someone will find a blacker material, and eventually we'll understand all the underlying mechanisms, and will be able to properly engineer the ultimate black."

Wardle's co-author on the paper is former MIT postdoc Kehang Cui, now a professor at Shanghai Jiao Tong University.

Into the void

Wardle and Cui didn't intend to engineer an ultrablack material. Instead, they were experimenting with ways to grow carbon nanotubes on electrically conducting materials such as aluminum, to boost their electrical and thermal properties.

But in attempting to grow CNTs on aluminum, Cui ran up against a barrier, literally: an ever-present layer of oxide that coats aluminum when it is exposed to air. This oxide layer acts as an insulator, blocking rather than conducting electricity and heat. As he cast about for ways to remove aluminum's oxide layer, Cui found a solution in salt, or sodium chloride.

At the time, Wardle's group was using salt and other pantry products, such as baking soda and detergent, to grow carbon nanotubes. In their tests with salt, Cui noticed that chloride ions were eating away at aluminum's surface and dissolving its oxide layer.

"This etching process is common for many metals," Cui says. "For instance, ships suffer from corrosion of chlorine-based ocean water. Now we're using this process to our advantage."

Cui found that if he soaked aluminum foil in saltwater, he could remove the oxide layer. He then transferred the foil to an oxygen-free environment to prevent reoxidation, and finally, placed the etched aluminum in an oven, where the group carried out techniques to grow carbon nanotubes via a process called chemical vapor deposition.

By removing the oxide layer, the researchers were able to grow carbon nanotubes on aluminum, at much lower temperatures than they otherwise would, by about 100 degrees Celsius. They also saw that the combination of CNTs on aluminum significantly enhanced the material's thermal and electrical properties -- a finding that they expected.

What surprised them was the material's color.

"I remember noticing how black it was before growing carbon nanotubes on it, and then after growth, it looked even darker," Cui recalls. "So I thought I should measure the optical reflectance of the sample."

"Our group does not usually focus on optical properties of materials, but this work was going on at the same time as our art-science collaborations with Diemut, so art influenced science in this case," says Wardle.

Wardle and Cui, who have applied for a patent on the technology, are making the new CNT process freely available to any artist to use for a noncommercial art project.

"Built to take abuse"

Cui measured the amount of light reflected by the material, not just from directly overhead, but also from every other possible angle. The results showed that the material absorbed greater than 99.995 percent of incoming light, from every angle. In essence, if the material contained bumps or ridges, or features of any kind, no matter what angle it was viewed from, these features would be invisible, obscured in a void of black.

The researchers aren't entirely sure of the mechanism contributing to the material's opacity, but they suspect that it may have something to do with the combination of etched aluminum, which is somewhat blackened, with the carbon nanotubes. Scientists believe that forests of carbon nanotubes can trap and convert most incoming light to heat, reflecting very little of it back out as light, thereby giving CNTs a particularly black shade.

Read more at Science Daily

Newly discovered comet is likely interstellar visitor

Comet C/2019 Q4 as imaged by the Canada-France-Hawaii Telescope on Hawaii's Big Island on Sept. 10, 2019.
A newly discovered comet has excited the astronomical community this week because it appears to have originated from outside the solar system. The object -- designated C/2019 Q4 (Borisov) -- was discovered on Aug. 30, 2019, by Gennady Borisov at the MARGO observatory in Nauchnij, Crimea. The official confirmation that comet C/2019 Q4 is an interstellar comet has not yet been made, but if it is interstellar, it would be only the second such object detected. The first, 'Oumuamua, was observed and confirmed in October 2017.

The new comet, C/2019 Q4, is still inbound toward the Sun, but it will remain farther from the Sun than the orbit of Mars and will approach no closer to Earth than about 190 million miles (300 million kilometers).

After the initial detections of the comet, the Scout system, located at NASA's Jet Propulsion Laboratory in Pasadena, California, automatically flagged the object as possibly being interstellar. Davide Farnocchia of NASA's Center for Near-Earth Object Studies at JPL worked with astronomers and the European Space Agency's Near-Earth Object Coordination Center in Frascati, Italy, to obtain additional observations. He then worked with the NASA-sponsored Minor Planet Center in Cambridge, Massachusetts, to estimate the comet's precise trajectory and determine whether it originated within our solar system or came from elsewhere in the galaxy.

The comet is currently 260 million miles (420 million kilometers) from the Sun and will reach its closest point, or perihelion, on Dec. 8, 2019, at a distance of about 190 million miles (300 million kilometers).

"The comet's current velocity is high, about 93,000 mph [150,000 kph], which is well above the typical velocities of objects orbiting the Sun at that distance," said Farnocchia. "The high velocity indicates not only that the object likely originated from outside our solar system, but also that it will leave and head back to interstellar space."
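Farnocchia's point can be checked with a back-of-the-envelope escape-velocity calculation. This is an illustration, not part of the article; the gravitational parameter is a standard value, and the distance and speed are the figures quoted above.

```python
import math

GM_SUN = 1.327e20        # Sun's gravitational parameter, m^3/s^2 (standard value)
r = 4.2e11               # comet's distance from the Sun, m (~420 million km)

# Escape velocity at that distance: v_esc = sqrt(2 * GM / r), converted to kph.
v_esc_kph = math.sqrt(2.0 * GM_SUN / r) * 3.6

comet_kph = 150_000      # reported speed of C/2019 Q4
print(round(v_esc_kph))           # roughly 90,000 kph at that distance
print(comet_kph > v_esc_kph)      # faster than escape velocity: unbound orbit
```

At 420 million kilometers from the Sun, anything moving faster than about 90,000 kph cannot be gravitationally bound, so the comet's 150,000 kph speed points to a hyperbolic trajectory that leads back out to interstellar space.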

Currently on an inbound trajectory, comet C/2019 Q4 is heading toward the inner solar system and will enter it on Oct. 26 from above at roughly a 40-degree angle relative to the ecliptic plane. That's the plane in which the Earth and planets orbit the Sun.

C/2019 Q4 was established as being cometary due to its fuzzy appearance, which indicates that the object has a central icy body that is producing a surrounding cloud of dust and particles as it approaches the Sun and heats up. Its location in the sky (as seen from Earth) places it near the Sun -- an area of sky not usually scanned by the large ground-based asteroid surveys or NASA's asteroid-hunting NEOWISE spacecraft.

C/2019 Q4 can be seen with professional telescopes for months to come. "The object will peak in brightness in mid-December and continue to be observable with moderate-size telescopes until April 2020," said Farnocchia. "After that, it will only be observable with larger professional telescopes through October 2020."

Observations completed by Karen Meech and her team at the University of Hawaii indicate the comet's nucleus is somewhere between 1.2 and 10 miles (2 and 16 kilometers) in diameter. Astronomers will continue to collect observations to further characterize the comet's physical properties (size, rotation, etc.) and to refine its trajectory.

Read more at Science Daily

Scientists detect the ringing of a newborn black hole for the first time

If Albert Einstein's theory of general relativity holds true, then a black hole, born from the cosmically quaking collision of two massive black holes, should itself "ring" in the aftermath, producing gravitational waves much as a struck bell reverberates sound waves. Einstein predicted that the particular pitch and decay of these gravitational waves should be a direct signature of the newly formed black hole's mass and spin.

Now, physicists from MIT and elsewhere have "heard" the ringing of an infant black hole for the first time, and found that the pattern of this ringing does, in fact, predict the black hole's mass and spin -- more evidence that Einstein was right all along.

The findings, published today in Physical Review Letters, also favor the idea that black holes lack any sort of "hair" -- a metaphor referring to the idea that black holes, according to Einstein's theory, should exhibit just three observable properties: mass, spin, and electric charge. All other characteristics, which the physicist John Wheeler termed "hair," should be swallowed up by the black hole itself, and would therefore be unobservable.

The team's findings today support the idea that black holes are, in fact, hairless. The researchers were able to identify the pattern of a black hole's ringing, and, using Einstein's equations, calculated the mass and spin that the black hole should have, given its ringing pattern. These calculations matched measurements of the black hole's mass and spin made previously by others.

If the team's calculations deviated significantly from the measurements, it would have suggested that the black hole's ringing encodes properties other than mass, spin, and electric charge -- tantalizing evidence of physics beyond what Einstein's theory can explain. But as it turns out, the black hole's ringing pattern is a direct signature of its mass and spin, giving support to the notion that black holes are bald-faced giants, lacking any extraneous, hair-like properties.

"We all expect general relativity to be correct, but this is the first time we have confirmed it in this way," says the study's lead author, Maximiliano Isi, a NASA Einstein Fellow in MIT's Kavli Institute for Astrophysics and Space Research. "This is the first experimental measurement that succeeds in directly testing the no-hair theorem. It doesn't mean black holes couldn't have hair. It means the picture of black holes with no hair lives for one more day."

A chirp, decoded

On Sept. 14, 2015, scientists made the first-ever detection of gravitational waves -- infinitesimal ripples in space-time, emanating from distant, violent cosmic phenomena. The detection, named GW150914, was made by LIGO, the Laser Interferometer Gravitational-wave Observatory. Once scientists cleared away the noise and zoomed in on the signal, they observed a waveform that quickly crescendoed before fading away. When they translated the signal into sound, they heard something resembling a "chirp."

Scientists determined that the gravitational waves were set off by the rapid inspiraling of two massive black holes. The peak of the signal -- the loudest part of the chirp -- corresponded to the very moment when the black holes collided, merging into a single, new black hole. While this infant black hole likely gave off gravitational waves of its own, its signature ringing, physicists assumed, would be too faint to decipher amid the clamor of the initial collision.

Isi and his colleagues, however, found a way to extract the black hole's reverberation from the moments immediately after the signal's peak. In previous work led by Isi's co-author, Matthew Giesler, the team showed through simulations that such a signal, and particularly the portion right after the peak, contains "overtones" -- a family of loud, short-lived tones. When they reanalyzed the signal, taking overtones into account, the researchers discovered that they could successfully isolate a ringing pattern that was specific to a newly formed black hole.

In the team's new paper, the researchers applied this technique to actual data from the GW150914 detection, concentrating on the last few milliseconds of the signal, immediately following the chirp's peak. Taking into account the signal's overtones, they were able to discern a ringing coming from the new, infant black hole. Specifically, they identified two distinct tones, each with a pitch and decay rate that they were able to measure.

"We detect an overall gravitational wave signal that's made up of multiple frequencies, which fade away at different rates, like the different pitches that make up a sound," Isi says. "Each frequency or tone corresponds to a vibrational frequency of the new black hole."

Listening beyond Einstein

Einstein's theory of general relativity predicts that the pitch and decay of a black hole's gravitational waves should be a direct product of its mass and spin. That is, a black hole of a given mass and spin can only produce tones of a certain pitch and decay. As a test of Einstein's theory, the team used the equations of general relativity to calculate the newly formed black hole's mass and spin, given the pitch and decay of the two tones they detected.
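As a rough illustration of that mass-to-pitch relationship (a textbook estimate, not a calculation from the paper): the fundamental quadrupolar tone of a nonspinning black hole has a well-known dimensionless frequency of roughly 0.374 - 0.089i in units of the hole's mass (with G = c = 1). Converting to physical units gives a pitch and decay time for a remnant of about GW150914's mass; the remnant's spin, ignored here, pushes the real tone higher.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

# Fundamental l=2 quasinormal mode of a NONSPINNING black hole
# (dimensionless frequency M*omega ~ 0.3737 - 0.0890i; spin raises the pitch).
OMEGA_RE, OMEGA_IM = 0.3737, 0.0890

def ringdown(mass_solar):
    t_m = G * mass_solar * M_SUN / c**3         # the hole's natural time scale, s
    freq_hz = OMEGA_RE / (2.0 * math.pi * t_m)  # pitch of the ringing
    tau_ms = t_m / OMEGA_IM * 1e3               # decay time of the tone, ms
    return freq_hz, tau_ms

f, tau = ringdown(62.0)   # GW150914's remnant is roughly 62 solar masses
print(round(f), round(tau, 1))
```

The estimate lands near 200 hertz with a decay of a few milliseconds: a heavier hole rings at a lower pitch, which is exactly the mass-and-spin signature the team extracted from the data.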

They found that their calculations matched measurements of the black hole's mass and spin previously made by others. Isi says the results demonstrate that researchers can, in fact, use the loudest, most detectable parts of a gravitational wave signal to discern a new black hole's ringing. Previously, scientists had assumed that this ringing could be detected only within the much fainter end of the signal, and only with instruments far more sensitive than those that currently exist.

"This is exciting for the community because it shows these kinds of studies are possible now, not in 20 years," Isi says.

As LIGO improves its resolution, and more sensitive instruments come online in the future, researchers will be able to use the group's methods to "hear" the ringing of other newly born black holes. And if they happen to pick up tones that don't quite match up with Einstein's predictions, that could be an even more exciting prospect.

Read more at Science Daily

Sep 12, 2019

Solving the longstanding mystery of how friction leads to static electricity

Most people have experienced the hair-raising effect of rubbing a balloon on their head or the subtle spark caused by dragging socked feet across the carpet. Although these experiences are common, a detailed understanding of how they occur has eluded scientists for more than 2,500 years.

Now a Northwestern University team has developed a new model showing that rubbing two objects together produces static electricity, or triboelectricity, by bending the tiny protrusions on the surfaces of materials.

This new understanding could have important implications for existing electrostatic applications, such as energy harvesting and printing, as well as for avoiding potential dangers, such as fires started by sparks from static electricity.

The research will be published on Thursday, Sept. 12 in the journal Physical Review Letters. Laurence Marks, professor of materials science and engineering in Northwestern's McCormick School of Engineering, led the study. Christopher Mizzi and Alex Lin, doctoral students in Marks's laboratory, were co-first authors of the paper.

Greek philosopher Thales of Miletus first reported friction-induced static electricity in 600 B.C. After rubbing amber with fur, he noticed the fur attracted dust.

"Since then, it has become clear that rubbing induces static charging in all insulators -- not just fur," Marks said. "However, this is more or less where the scientific consensus ended."

At the nanoscale, all materials have rough surfaces with countless tiny protrusions. When two materials come into contact and rub against one another, these protrusions bend and deform.

Marks's team found that these deformations give rise to voltages that ultimately cause static charging. This phenomenon is called the "flexoelectric effect," which occurs when the separation of charge in an insulator arises from deformations such as bending.

Using a simple model, the Northwestern team showed that voltages arising from the bending protrusions during rubbing are, indeed, large enough to cause static electricity. This work explains a number of experimental observations, such as why charges are produced even when two pieces of the same material are rubbed together and predicts experimentally measured charges with remarkable accuracy.

"Our finding suggests that triboelectricity, flexoelectricity and friction are inextricably linked," Marks said. "This provides much insight into tailoring triboelectric performance for current applications and expanding functionality to new technologies."

Read more at Science Daily

A smart artificial hand for amputees merges user and robotic control

EPFL scientists are developing new approaches for improved control of robotic hands -- in particular for amputees -- that combine individual finger control with automation for improved grasping and manipulation. This interdisciplinary proof of concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects. The results are published in today's issue of Nature Machine Intelligence.

The technology merges two concepts from two different fields. Implementing them together for robotic hand control had never been done before, and the work contributes to the emerging field of shared control in neuroprosthetics.

One concept, from neuroengineering, involves deciphering intended finger movement from muscular activity on the amputee's stump for individual finger control of the prosthetic hand -- something that had never been done before. The other, from robotics, allows the robotic hand to help take hold of objects and maintain contact with them for robust grasping.

"When you hold an object in your hand, and it starts to slip, you only have a couple of milliseconds to react," explains Aude Billard who leads EPFL's Learning Algorithms and Systems Laboratory. "The robotic hand has the ability to react within 400 milliseconds. Equipped with pressure sensors all along the fingers, it can react and stabilize the object before the brain can actually perceive that the object is slipping. "

How shared control works

The algorithm first learns how to decode user intention and translate it into finger movement of the prosthetic hand. To train the machine-learning algorithm, the amputee performs a series of hand movements. Sensors placed on the amputee's stump detect muscular activity, and the algorithm learns which hand movements correspond to which patterns of muscular activity. Once the user's intended finger movements are understood, this information can be used to control individual fingers of the prosthetic hand.

"Because muscle signals can be noisy, we need a machine learning algorithm that extracts meaningful activity from those muscles and interprets them into movements," says Katie Zhuang first author of the publication.

Next, the scientists engineered the algorithm so that robotic automation kicks in when the user tries to grasp an object. The algorithm tells the prosthetic hand to close its fingers when an object is in contact with sensors on the surface of the prosthetic hand. This automatic grasping is an adaptation from a previous study for robotic arms designed to deduce the shape of objects and grasp them based on tactile information alone, without the help of visual signals.
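The shared-control idea described above -- decode the user's intent from muscle patterns, then let automation take over on contact -- can be sketched in a few lines. This is a toy illustration, not EPFL's actual system: the two-channel "signals," the gesture labels and the nearest-centroid decoder below are all invented stand-ins for real multichannel EMG data and the trained machine-learning model.

```python
# Toy sketch of shared control: decode user intent from muscle signals,
# then let robotic automation override when an object touches the hand.
# All signal values and gesture labels here are invented for illustration.

def train_centroids(samples):
    """Average the example signal patterns recorded for each hand movement."""
    centroids = {}
    for label, patterns in samples.items():
        n = len(patterns)
        centroids[label] = [sum(p[i] for p in patterns) / n
                            for i in range(len(patterns[0]))]
    return centroids

def decode_intent(signal, centroids):
    """Pick the movement whose calibration centroid is nearest the signal."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(signal, centroids[lbl]))

def shared_control(signal, centroids, contact_sensed):
    """Automation stabilises the grasp on contact; otherwise follow the user."""
    if contact_sensed:
        return "close_fingers"   # robotic grasping takes over
    return decode_intent(signal, centroids)

# Calibration phase: the user performs each movement a few times.
training = {
    "open_hand":  [[0.1, 0.9], [0.2, 0.8]],
    "index_flex": [[0.9, 0.1], [0.8, 0.2]],
}
centroids = train_centroids(training)

print(shared_control([0.85, 0.15], centroids, contact_sensed=False))  # index_flex
print(shared_control([0.85, 0.15], centroids, contact_sensed=True))   # close_fingers
```

A real decoder works on many continuously sampled EMG channels with a far richer model, but the control flow -- calibrate, decode, override on contact -- is the same.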

Many challenges remain to engineer the algorithm before it can be implemented in a commercially available prosthetic hand for amputees. For now, the algorithm is still being tested on a robot provided by an external party.

Read more at Science Daily

Water detected on an exoplanet located in its star's habitable zone

Exoplanet illustration
Ever since the discovery of the first exoplanet in the 1990s, astronomers have made steady progress towards finding and probing planets located in the habitable zone of their stars, where conditions can lead to the formation of liquid water and the proliferation of life.

Results from the Kepler satellite mission, which discovered nearly two-thirds of all known exoplanets to date, indicate that 5 to 20 percent of Earths and super-Earths are located in the habitable zone of their stars. However, despite this abundance, probing the conditions and atmospheric properties of any of these habitable-zone planets is extremely difficult and has remained elusive... until now.

A new study by Professor Björn Benneke of the Institute for Research on Exoplanets at the Université de Montréal, his doctoral student Caroline Piaulet and several of their collaborators reports the detection of water vapour and perhaps even liquid water clouds in the atmosphere of the planet K2-18b. This exoplanet is about nine times more massive than our Earth and is found in the habitable zone of the star it orbits. This M-type star is smaller and cooler than our Sun, but due to K2-18b's close proximity to its star, the planet receives almost the same total amount of energy from its star as our Earth receives from the Sun.
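The claim about stellar energy can be sanity-checked with a simple flux scaling: the energy a planet receives is proportional to its star's luminosity and falls off with the square of the orbital distance. The luminosity and orbital-distance figures below are rough literature estimates for K2-18, used here purely for illustration; they are not quoted from the study.

```python
# Back-of-the-envelope insolation check for K2-18b.
# Flux relative to Earth: S = (L_star / L_sun) / (a / 1 AU)**2.
# The numbers below are approximate literature values, for illustration only.

L_star = 0.023   # K2-18's luminosity in solar luminosities (approximate)
a = 0.143        # K2-18b's orbital distance in AU (approximate)

S_rel = L_star / a**2   # flux relative to what Earth receives from the Sun
print(f"K2-18b receives about {S_rel:.2f} times Earth's insolation")
```

With these rough inputs the planet gets close to the same flux as Earth, which is why a cool M dwarf with a close-in planet can still host a habitable-zone world.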

The similarities between the exoplanet K2-18b and the Earth suggest to astronomers that the exoplanet may have a water cycle, possibly allowing water to condense into clouds and liquid water rain to fall. This detection was made possible by combining eight transit observations -- the moments when the exoplanet passes in front of its star -- taken by the Hubble Space Telescope.

The Université de Montréal is no stranger to the K2-18 system located 111 light years away. The existence of K2-18b was first confirmed by Prof. Benneke and his team in a 2016 paper using data from the Spitzer Space Telescope. The mass and radius of the planet were then determined by former Université de Montréal and University of Toronto PhD student Ryan Cloutier. These promising initial results encouraged the iREx team to collect follow-up observations of the intriguing world.

Scientists currently believe that the thick gaseous envelope of K2-18b likely prevents life as we know it from existing on the planet's surface. However, the study shows that even these planets of relatively low mass, which are therefore more difficult to study, can be explored using astronomical instruments developed in recent years. By studying planets that are in the habitable zone of their star and have the right conditions for liquid water, astronomers are one step closer to directly detecting signs of life beyond our Solar System.

Read more at Science Daily

Electric eel produces highest voltage discharge of any known animal

Electric eel, Electrophorus electricus
South American rivers are home to at least three different species of electric eels, including a newly identified species capable of generating a greater electrical discharge than any other known animal, according to a new analysis of 107 fish collected in Brazil, French Guiana, Guyana and Suriname in recent years.

Scientists have known for more than 250 years that electric eels, which send electricity pulsing through the water to stun their prey, live in the Amazon basin. They are widely distributed in swamps, streams, creeks, and rivers across northern South America, and have long been thought to belong to a single species. With modern genetic and ecological analyses, however, researchers at the Smithsonian's National Museum of Natural History have discovered that electric eels in the Amazon basin belong to three different species that evolved from a shared ancestor millions of years ago. The findings are reported Sept. 10 in the journal Nature Communications.

The identification of two new species of electric eel highlights how much remains to be discovered within the Amazon rainforest -- one of Earth's biodiversity hotspots -- as well as the importance of protecting and preserving this threatened environment, says study leader C. David de Santana, a research associate in the museum's division of fishes. "These fish grow to be seven to eight feet long. They're really conspicuous," he says. "If you can discover a new eight-foot-long fish after 250 years of scientific exploration, can you imagine what remains to be discovered in that region?"

About 250 species of electricity-generating fish are known to live in South America, although electric eels (which actually are fish with a superficial eel-like appearance) are the only ones that use their electricity to hunt and for self-defense. Like other electric fishes, they also navigate and communicate with the electricity they produce. Electric eels inspired the design of the first battery in 1799, and as researchers have learned more about how they generate enough electricity to stun a large animal, scientists and engineers have gained new ideas about how to improve technology and possibly even treat disease.

Smithsonian scientists have been collaborating with researchers at the University of São Paulo's Museum of Zoology in Brazil and other institutions around the world to explore the diversity of the eels and other electric fishes in South America. As part of that effort, de Santana closely examined the electric eel specimens he and his colleagues had collected in the Amazon over the last six years.

All the specimens looked pretty much the same. Finding no external features that clearly distinguished different groups at first glance, de Santana turned to the animals' DNA, and found genetic differences indicating that his 107 specimens represented three different species. Reexamining the animals with the genetic results in hand, he found subtle physical differences corresponding to the three genetic groups. He determined that each species has its own unique skull shape, as well as defining characteristics on the pectoral fin and a distinctive arrangement of pores on the body.

Each species has its own geographic distribution, too. The long-recognized Electrophorus electricus, once thought to be widely distributed across the continent, actually appears to be confined to the highlands of the Guiana Shield, an ancient geological formation where clear waters tumble over rapids and falls. Electrophorus voltai, one of the two newly discovered species, primarily lives further south on the Brazilian Shield, a similar highland region. The third species, Electrophorus varii, named after the late Smithsonian ichthyologist Richard Vari, swims through murky, slow-flowing lowland waters.

Based on genetic comparisons, de Santana and colleagues determined that two groups of electric eels began to evolve in South America about 7.1 million years ago. One, the common ancestor of E. voltai and E. electricus, lived in the clear waters of the ancient highlands, whereas E. varii lived in the lowlands, whose murky waters were full of minerals and, consequently, conducted electricity more efficiently -- an apparently important distinction for electric eels, whose discharge won't travel as far in environments where conductivity is low.

According to the analysis, E. voltai and E. electricus diverged around 3.6 million years ago, around the time the Amazon River changed course, crossing the continent and traversing highland regions. Notably, de Santana's team discovered that E. voltai can discharge up to 860 volts of electricity -- significantly more than the 650 volts generated by E. electricus. This makes the species the strongest known bioelectric generator, and may be an adaptation to the lower conductivity of highland waters, he says.

Read more at Science Daily

New flying reptile species was one of largest ever flying animals

Illustration of pterosaurs in flight
A newly identified species of pterosaur is among the largest ever flying animals, according to a new study from Queen Mary University of London.

Cryodrakon boreas, from the Azhdarchid group of pterosaurs (often incorrectly called 'pterodactyls'), was a flying reptile with a wingspan of up to 10 metres which lived during the Cretaceous period around 77 million years ago.

Its remains were discovered 30 years ago in Alberta, Canada, but palaeontologists had assumed they belonged to an already known species of pterosaur discovered in Texas, USA, named Quetzalcoatlus.

The study, published in the Journal of Vertebrate Paleontology, reveals it is actually a new species and the first pterosaur to be discovered in Canada.

Dr David Hone, lead author of the study from Queen Mary University of London, said: "This is a cool discovery, we knew this animal was here but now we can show it is different to other azhdarchids and so it gets a name."

Although the remains -- consisting of a skeleton that has part of the wings, legs, neck and a rib -- were originally assigned to Quetzalcoatlus, study of this and additional material uncovered over the years shows it is a different species, in light of the growing understanding of azhdarchid diversity.

The main skeleton is from a young animal with a wingspan of about 5 metres but one giant neck bone from another specimen suggests an adult animal would have a wingspan of around 10 metres.

This makes Cryodrakon boreas comparable in size to other giant azhdarchids including the Texan Quetzalcoatlus which could reach 10.5 m in wingspan and weighed around 250 kg.

Like other azhdarchids, these animals were carnivorous and preyed predominantly on small animals, likely including lizards, mammals and even baby dinosaurs.

Dr Hone added: "It is great that we can identify Cryodrakon as being distinct to Quetzalcoatlus as it means we have a better picture of the diversity and evolution of predatory pterosaurs in North America."

Unlike most pterosaur groups, azhdarchids are known primarily from terrestrial settings and, despite their likely capacity to cross oceanic distances in flight, they are broadly considered to be animals that were adapted for, and lived in, inland environments.

Read more at Science Daily

Sep 10, 2019

And then there was light: Looking for the first stars in the Universe

Astronomers are closing in on a signal that has been travelling across the Universe for 12 billion years, bringing them nearer to understanding the life and death of the very earliest stars.

In a paper on the preprint site arXiv and soon to be published in the Astrophysical Journal, a team led by Dr Nichole Barry from Australia's University of Melbourne and the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) reports a 10-fold improvement on data gathered by the Murchison Widefield Array (MWA) -- a collection of 4096 dipole antennas set in the remote hinterland of Western Australia.

The MWA, which started operating in 2013, was built specifically to detect electromagnetic radiation emitted by neutral hydrogen -- a gas that comprised most of the infant Universe in the period when the soup of disconnected protons and electrons spawned by the Big Bang started to cool down.

Eventually these hydrogen atoms began to clump together to form stars -- the very first ones to exist -- initiating a major phase in the evolution of the Universe, known as the Epoch of Reionisation, or EoR.

"Defining the evolution of the EoR is extremely important for our understanding of astrophysics and cosmology," explains Dr Barry.

"So far, though, no one has been able to observe it. These results take us a lot closer to that goal."

The neutral hydrogen that dominated space and time before and in the early period of the EoR radiated at a wavelength of approximately 21 centimetres. Stretched now to somewhere above two metres because of the expansion of the Universe, the signal persists -- and detecting it remains the theoretical best way to probe conditions in the early days of the Cosmos.
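That stretch from 21 centimetres to about two metres is just cosmological redshift: the observed wavelength is the rest wavelength multiplied by (1 + z). The redshift used below is a representative value for the EoR era, chosen for illustration rather than taken from the paper.

```python
# Redshifting the 21 cm neutral-hydrogen line into the MWA's band.
# lambda_obs = lambda_rest * (1 + z);  nu_obs = nu_rest / (1 + z).
# z = 8.5 is a representative EoR-era redshift chosen for illustration.

lambda_rest_cm = 21.1    # rest wavelength of the hydrogen hyperfine line
nu_rest_mhz = 1420.4     # corresponding rest frequency

z = 8.5
lambda_obs_m = lambda_rest_cm * (1 + z) / 100   # observed wavelength, metres
nu_obs_mhz = nu_rest_mhz / (1 + z)              # observed frequency, MHz

print(f"lambda_obs = {lambda_obs_m:.2f} m, nu_obs = {nu_obs_mhz:.0f} MHz")
```

At this redshift the signal lands near 150 MHz -- a low radio frequency, which is why the hunt is carried out with dipole arrays like the MWA rather than optical telescopes.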

However, doing so is fiendishly difficult.

"The signal that we're looking for is more than 12 billion years old," explains ASTRO 3D member and co-author Associate Professor Cathryn Trott, from the International Centre for Radio Astronomy Research at Curtin University in Western Australia.

"It is exceptionally weak and there are a lot of other galaxies in between it and us. They get in the way and make it very difficult to extract the information we're after."

In other words, the signals recorded by the MWA -- and other EoR-hunting devices such as the Hydrogen Epoch of Reionisation Array in South Africa and the Low Frequency Array in The Netherlands -- are extremely messy.

Using 21 hours of raw data Dr Barry, co-lead author Mike Wilensky, from the University of Washington in the US, and colleagues explored new techniques to refine analysis and exclude consistent sources of signal contamination, including ultra-faint interference generated by radio broadcasts on Earth.

The result was a level of precision that significantly reduced the range in which the EoR may have begun, pulling in constraints by almost an order of magnitude.

"We can't really say that this paper gets us closer to precisely dating the start or finish of the EoR, but it does rule out some of the more extreme models," says Professor Trott.

"That it happened very rapidly is now ruled out. That the conditions were very cold is now also ruled out."

Dr Barry said the results represented not only a step forward in the global quest to explore the infant Universe, but also established a framework for further research.

"We have about 3000 hours of data from MWA," she explains, "and for our purposes some of it is more useful than others. This approach will let us identify which bits are most promising, and analyse it better than we ever could before."

Read more at Science Daily

Why people gain weight as they get older

Many people struggle to keep their weight in check as they get older. Now new research at Karolinska Institutet in Sweden has uncovered why that is: Lipid turnover in the fat tissue decreases during ageing and makes it easier to gain weight, even if we don't eat more or exercise less than before. The study is published in the journal Nature Medicine.

The scientists studied the fat cells of 54 men and women over an average period of 13 years. In that time, all subjects, regardless of whether they gained or lost weight, showed decreases in lipid turnover in the fat tissue -- that is, the rate at which lipid (or fat) in the fat cells is removed and stored. Those who didn't compensate for that by eating fewer calories gained weight by an average of 20 percent, according to the study, which was done in collaboration with researchers at Uppsala University in Sweden and the University of Lyon in France.

The researchers also examined lipid turnover in 41 women who underwent bariatric surgery and how the lipid turnover rate affected their ability to keep the weight off four to seven years after surgery. The results showed that only those who had a low rate before the surgery managed to increase their lipid turnover and maintain their weight loss. The researchers believe these people may have had more room to increase their lipid turnover than those who already had a high level before surgery.

"The results indicate for the first time that processes in our fat tissue regulate changes in body weight during ageing in a way that is independent of other factors," says Peter Arner, professor at the Department of Medicine in Huddinge at Karolinska Institutet and one of the study's main authors. "This could open up new ways to treat obesity."

Prior studies have shown that one way to speed up the lipid turnover in the fat tissue is to exercise more. This new research supports that notion and further indicates that the long-term result of weight-loss surgery would improve if combined with increased physical activity.

"Obesity and obesity-related diseases have become a global problem," says Kirsty Spalding, senior researcher at the Department of Cell and Molecular Biology at Karolinska Institutet and another of the study's main authors. "Understanding lipid dynamics and what regulates the size of the fat mass in humans has never been more relevant."

From Science Daily

Adolescents with high levels of physical activity perform better in school over two years

Adolescents with higher levels of physical activity performed better in school during transition from primary school to lower secondary school than their physically inactive peers, a new study from Finland shows. However, the researchers, from the University of Jyväskylä, found that increased physical activity did not necessarily result in improved academic performance.

Previous cross-sectional studies have reported that physically more active children and adolescents achieve better school grades than their less active peers do, but there are few longitudinal studies on the topic. A newly published study showed that adolescents with higher levels of physical activity over a follow-up period of two academic years had higher academic performance than did those who were continuously inactive. Furthermore, the study showed that increased levels of physical activity do not automatically result in improved academic performance. Instead, the results suggest that those adolescents who increased their physical activity had lower academic performance during the follow-up compared to their more active peers.

What the results mean

Highly active adolescents performed better in school compared to their less active peers. However, our results showed that increasing physical activity over a period of two academic years did not necessarily improve academic performance.

What the results do not mean

Based on our results, it is not possible to say whether physical activity improves academic performance or whether adolescents with higher academic performance choose a physically active lifestyle. Therefore, no causal interpretations can be made. However, the results of the present study do not refute the findings of previous studies showing small but positive effects of physical activity on learning and its neural underpinnings.

'The link between physical activity and academic performance does not always reflect a causal relationship. It is possible that high levels of physical activity and good academic performance share the same attributes, such as high motivation towards the task at hand,' says Eero Haapala, postdoctoral researcher at the University of Jyväskylä.

Read more at Science Daily

Bones of Roman Britons provide new clues to dietary deprivation

Researchers at the University of Bradford have shown a link between the diet of Roman Britons and their mortality rates for the first time, overturning a previously-held belief about the quality of the Roman diet.

Using a new method of analysis, the researchers examined stable isotope data (the ratios of particular chemicals in human tissue) from the bone collagen of hundreds of Roman Britons, together with the individuals' age-of-death estimates and an established mortality model.

The data sample included over 650 individuals from various published archaeological sites throughout England.

The researchers -- from institutions including the Museum of London, Durham University and the University of South Carolina -- found that higher nitrogen isotope ratios in the bones were associated with a higher risk of mortality, while higher carbon isotope ratios were associated with a lower risk of mortality.

Romano-British urban archaeological populations are characterised by higher nitrogen isotope ratios, which have been thought previously to indicate a better, or high-status, diet. But taking carbon isotope ratios, as well as death rates, into account showed that the nitrogen could also be recording long-term nutritional stress, such as deprivation or starvation.

The researchers also identified differences between the sexes, with the data showing that men typically had higher ratios of both isotopes, indicating a generally higher-status diet compared to women.

Dr Julia Beaumont of the University of Bradford said: "Normally nitrogen and carbon stable isotopes change in the same direction, with higher ratios of both indicating a better diet such as the consumption of more meat or marine foods. But if the isotope ratios go in opposite directions it can indicate that the individual was under long-term nutritional stress. This was corroborated in our study by the carbon isotope ratios which went down, rather than up, where higher mortality was seen."

During nutritional stress, if there is insufficient intake of protein and calories, nitrogen within the body is recycled to make new proteins, with a resulting rise in the ratio of nitrogen isotopes in the body's tissues.

Dr Beaumont added: "Not all people in Roman Britain were high-status; there was considerable enslavement too and we know slaves were fed a restricted diet. Our research shows that combining the carbon and nitrogen isotope data with other information such as mortality risk is crucial to an accurate understanding of archaeological dietary studies, and it may be useful to look at existing research with fresh eyes."

Read more at Science Daily

Lakes on Saturn's moon Titan are explosion craters, new models suggest

This artist's concept of a lake at the north pole of Saturn's moon Titan illustrates raised rims and rampartlike features such as those seen by NASA's Cassini spacecraft around the moon's Winnipeg Lacus.
Using radar data from NASA's Cassini spacecraft, recently published research presents a new scenario to explain why some methane-filled lakes on Saturn's moon Titan are surrounded by steep rims that reach hundreds of feet high. The models suggest that explosions of warming nitrogen created basins in the moon's crust.

Titan is the only planetary body in our solar system other than Earth known to have stable liquid on its surface. But instead of water raining down from clouds and filling lakes and seas as on Earth, on Titan it's methane and ethane -- hydrocarbons that we think of as gases but that behave as liquids in Titan's frigid climate.

Most existing models that lay out the origin of Titan's lakes show liquid methane dissolving the moon's bedrock of ice and solid organic compounds, carving reservoirs that fill with the liquid. This may be the origin of a type of lake on Titan that has sharp boundaries. On Earth, bodies of water that formed similarly, by dissolving surrounding limestone, are known as karstic lakes.

The new, alternative model for some of the smaller lakes (tens of miles across) turns that theory upside down: it proposes that pockets of liquid nitrogen in Titan's crust warmed, turning into explosive gas that blew out craters, which then filled with liquid methane. The new theory explains why some of the smaller lakes near Titan's north pole, like Winnipeg Lacus, appear in radar imaging to have very steep rims that tower above sea level -- rims difficult to explain with the karstic model.

The radar data were gathered by the Cassini Saturn Orbiter -- a mission managed by NASA's Jet Propulsion Laboratory in Pasadena, California -- during its last close flyby of Titan, as the spacecraft prepared for its final plunge into Saturn's atmosphere two years ago. An international team of scientists led by Giuseppe Mitri of Italy's G. d'Annunzio University became convinced that the karstic model didn't jibe with what they saw in these new images.

"The rim goes up, and the karst process works in the opposite way," Mitri said. "We were not finding any explanation that fit with a karstic lake basin. In reality, the morphology was more consistent with an explosion crater, where the rim is formed by the ejected material from the crater interior. It's totally a different process."

The work, published Sept. 9 in Nature Geoscience, meshes with other Titan climate models showing the moon may be warm compared to how it was in earlier Titan "ice ages."

Over the last half-billion or billion years on Titan, methane in its atmosphere has acted as a greenhouse gas, keeping the moon relatively warm -- although still cold by Earth standards. Scientists have long believed that the moon has gone through epochs of cooling and warming, as methane is depleted by solar-driven chemistry and then resupplied.

In the colder periods, nitrogen dominated the atmosphere, raining down and cycling through the icy crust to collect in pools just below the surface, said Cassini scientist and study co-author Jonathan Lunine of Cornell University in Ithaca, New York.

"These lakes with steep edges, ramparts and raised rims would be a signpost of periods in Titan's history when there was liquid nitrogen on the surface and in the crust," he noted. Even localized warming would have been enough to turn the liquid nitrogen into vapor, cause it to expand quickly and blow out a crater.

"This is a completely different explanation for the steep rims around those small lakes, which has been a tremendous puzzle," said Cassini Project Scientist Linda Spilker of JPL. "As scientists continue to mine the treasure trove of Cassini data, we'll keep putting more and more pieces of the puzzle together. Over the next decades, we will come to understand the Saturn system better and better."

Read more at Science Daily

Sep 9, 2019

Deepest optical image of first neutron star merger

Gravitational waves concept
The final chapter of the historic detection of the powerful merger of two neutron stars in 2017 officially has been written. After the extremely bright burst finally faded to black, an international team led by Northwestern University painstakingly constructed its afterglow -- the last bit of the famed event's life cycle.

Not only is the resulting image the deepest picture of the neutron star collision's afterglow to date, it also reveals secrets about the origins of the merger, the jet it created and the nature of short gamma-ray bursts.

"This is the deepest exposure we have ever taken of this event in visible light," said Northwestern's Wen-fai Fong, who led the research. "The deeper the image, the more information we can obtain."

The study will be published this month in The Astrophysical Journal Letters. Fong is an assistant professor of physics and astronomy in Northwestern's Weinberg College of Arts and Sciences and a member of CIERA (Center for Interdisciplinary Exploration and Research in Astrophysics), an endowed research center at Northwestern focused on advancing studies with an emphasis on interdisciplinary connections.

Many scientists consider the 2017 neutron-star merger, dubbed GW170817, as LIGO's (Laser Interferometer Gravitational-Wave Observatory) most important discovery to date. It was the first time that astrophysicists captured two neutron stars colliding. Detected in both gravitational waves and electromagnetic light, it also was the first-ever multi-messenger observation between these two forms of radiation.

The light from GW170817 was detected, partly, because it was nearby, making it very bright and relatively easy to find. When the neutron stars collided, they emitted a kilonova -- light 1,000 times brighter than a classical nova, resulting from the formation of heavy elements after the merger. But it was exactly this brightness that made its afterglow -- formed from a jet travelling near light-speed, pummeling the surrounding environment -- so difficult to measure.

"For us to see the afterglow, the kilonova had to move out of the way," Fong said. "Surely enough, about 100 days after the merger, the kilonova had faded into oblivion, and the afterglow took over. The afterglow was so faint, however, leaving it to the most sensitive telescopes to capture it."

Hubble to the rescue

Starting in December 2017, NASA's Hubble Space Telescope detected the visible light afterglow from the merger and revisited the merger's location 10 more times over the course of a year and a half.

At the end of March 2019, Fong's team used the Hubble to obtain the final image and the deepest observation to date. Over the course of seven-and-a-half hours, the telescope recorded an image of the sky from where the neutron-star collision occurred. The resulting image showed -- 584 days after the neutron-star merger -- that the visible light emanating from the merger was finally gone.

Next, Fong's team needed to remove the brightness of the surrounding galaxy, in order to isolate the event's extremely faint afterglow.

"To accurately measure the light from the afterglow, you have to take all the other light away," said Peter Blanchard, a postdoctoral fellow in CIERA and the study's second author. "The biggest culprit is light contamination from the galaxy, which is extremely complicated in structure."

Fong, Blanchard and their collaborators approached the challenge by using all 10 images in which the kilonova was gone and the afterglow remained, as well as the final, deep Hubble image without traces of the collision. The team overlaid their deep Hubble image on each of the 10 afterglow images. Then, using an algorithm, they meticulously subtracted -- pixel by pixel -- all light from the Hubble image from the earlier afterglow images.
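The subtraction step can be sketched with a minimal example, using small lists of numbers in place of real Hubble frames. The pixel values below are invented, and a real pipeline would also align and photometrically scale the images before differencing.

```python
# Minimal sketch of template subtraction: remove the constant galaxy light
# from each epoch's image so only the fading afterglow remains.
# The 3x3 "images" below use invented pixel values for illustration.

def subtract(image, template):
    """Pixel-by-pixel difference between an epoch image and the template."""
    return [[pix - tmp for pix, tmp in zip(row_i, row_t)]
            for row_i, row_t in zip(image, template)]

# Final deep image: galaxy only, afterglow fully faded (the template).
template = [[5, 6, 5],
            [6, 9, 6],
            [5, 6, 5]]

# Earlier epoch: same galaxy plus a faint afterglow in the central pixel.
epoch = [[5, 6, 5],
         [6, 12, 6],
         [5, 6, 5]]

residual = subtract(epoch, template)
print(residual)   # only the central afterglow flux survives
```

Because the galaxy light is identical in every frame, differencing against the afterglow-free template cancels it exactly, leaving a clean time series of the transient alone.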

The result: a final time-series of images, showing the faint afterglow without light contamination from the background galaxy. Completely aligned with model predictions, it is the most accurate imaging time-series of GW170817's visible-light afterglow produced to date.

"The brightness evolution perfectly matches our theoretical models of jets," Fong said. "It also agrees perfectly with what the radio and X-rays are telling us."

Illuminating information
With the Hubble's deep space image, Fong and her collaborators gleaned new insights about GW170817's home galaxy. Perhaps most striking, they noticed that the area around the merger was not densely populated with star clusters.

"Previous studies have suggested that neutron star pairs can form and merge within the dense environment of a globular cluster," Fong said. "Our observations show that's definitely not the case for this neutron star merger."

Based on the new image, Fong also believes that distant cosmic explosions known as short gamma ray bursts are actually neutron star mergers -- just viewed from a different angle. Both produce relativistic jets, which are like a fire hose of material traveling near the speed of light. Astrophysicists typically see jets from gamma ray bursts when they are aimed directly at us, like staring straight into the fire hose. GW170817, however, was viewed from a 30-degree angle -- something that had never before been done at optical wavelengths.

Read more at Science Daily

Hard as a diamond? Scientists predict new forms of superhard carbon

Natural diamond.
Superhard materials can slice, drill and polish other objects. They also hold potential for creating scratch-resistant coatings that could help keep expensive equipment safe from damage.

Now, science is opening the door to the development of new materials with these seductive qualities.

Researchers have used computational techniques to identify 43 previously unknown forms of carbon that are thought to be stable and superhard -- including several predicted to be slightly harder than or nearly as hard as diamonds. Each new carbon variety consists of carbon atoms arranged in a distinct pattern in a crystal lattice.

The study -- published on Sept. 3 in the journal npj Computational Materials -- combines computational predictions of crystal structures with machine learning to hunt for novel materials. The work is theoretical research, meaning that scientists have predicted the new carbon structures but have not created them yet.

"Diamonds are right now the hardest material that is commercially available, but they are very expensive," says University at Buffalo chemist Eva Zurek. "I have colleagues who do high-pressure experiments in the lab, squeezing materials between diamonds, and they complain about how expensive it is when the diamonds break.

"We would like to find something harder than a diamond. If you could find other materials that are hard, potentially you could make them cheaper. They might also have useful properties that diamonds don't have. Maybe they will interact differently with heat or electricity, for example."

Zurek, PhD, a professor of chemistry in the UB College of Arts and Sciences, conceived of the study and co-led the project with Stefano Curtarolo, PhD, professor of mechanical engineering and materials science at Duke University.

The quest for hard materials

Hardness relates to a material's ability to resist deformation. As Zurek explains, it means that "if you try to indent a material with a sharp tip, a hole will not be made, or the hole will be very small."

Scientists consider a substance to be superhard if it has a hardness value of over 40 gigapascals as measured through an experiment called the Vickers hardness test.
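That threshold amounts to a one-line check. A minimal sketch, with the 40-gigapascal cutoff taken from the text and the sample hardness values purely hypothetical:

```python
SUPERHARD_THRESHOLD_GPA = 40.0  # Vickers hardness cutoff from the text

def is_superhard(vickers_hardness_gpa):
    # A material is conventionally called superhard above 40 GPa.
    return vickers_hardness_gpa > SUPERHARD_THRESHOLD_GPA

# Hypothetical predicted hardness values (GPa), for illustration only.
predictions = {"diamond-like": 95.0, "novel-carbon-17": 42.5, "graphitic": 0.5}
superhard = {name for name, h in predictions.items() if is_superhard(h)}
```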

All of the study's 43 new carbon structures are predicted to meet that threshold. Three are estimated to exceed the Vickers hardness of diamonds, but only by a little bit. Zurek also cautions that there is some uncertainty in the calculations.

The hardest structures the scientists found tended to contain fragments of diamond and lonsdaleite -- also called hexagonal diamond -- in their crystal lattices. In addition to the 43 novel forms of carbon, the research also newly predicts that a number of carbon structures that other teams have described in the past will be superhard.

Speeding up discovery of superhard materials

The techniques used in the new paper could be applied to identify other superhard materials, including ones that contain elements other than carbon.

"Very few superhard materials are known, so it's of interest to find new ones," Zurek says. "One thing that we know about superhard materials is that they need to have strong bonds. Carbon-carbon bonds are very strong, so that's why we looked at carbon. Other elements that are typically in superhard materials come from the same side of the periodic table, such as boron and nitrogen."

To conduct the study, researchers used XtalOpt, an open-source evolutionary algorithm for crystal structure prediction developed in Zurek's lab, to generate random crystal structures for carbon. Then, the team employed a machine learning model to predict the hardness of these carbon species. The most promising hard and stable structures were used by XtalOpt as "parents" to spawn additional new structures, and so on.
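The evolutionary loop described above can be sketched in miniature. This toy version uses a made-up one-dimensional "structure" and a stand-in scoring function in place of the machine-learned hardness model, just to show the parent-selection-and-mutation cycle; it is not XtalOpt's actual implementation:

```python
import random

random.seed(0)  # deterministic for illustration

def fitness(x):
    # Stand-in for the machine-learned hardness model: a smooth score
    # that peaks at x = 40 (the target value here is arbitrary).
    return -(x - 40.0) ** 2

def mutate(x):
    # Randomly perturb a parent to spawn a child candidate.
    return x + random.gauss(0.0, 1.0)

def evolve(population, generations=50, keep=4):
    # Each generation: score all candidates, keep the fittest as
    # "parents," and mutate the parents to spawn new candidates.
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:keep]
        children = [mutate(p) for p in parents for _ in range(3)]
        population = parents + children
    return max(population, key=fitness)

best = evolve([random.uniform(0.0, 100.0) for _ in range(16)])
```

The key design idea mirrors the paper's workflow: an inexpensive learned model scores each candidate so the expensive search only ever breeds from the most promising structures.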

The machine learning model for estimating hardness was trained using the Automatic FLOW (AFLOW) database, a huge library of materials with properties that have been calculated. Curtarolo's lab maintains AFLOW and previously developed the machine learning model with Olexandr Isayev's group at the University of North Carolina at Chapel Hill.

"This is accelerated material development. It's always going to take time, but we use AFLOW and machine learning to greatly accelerate the process," Curtarolo says. "The algorithms learn, and if you have trained the model well, the algorithm will predict the properties of a material -- in this case, hardness -- with reasonable accuracy."

"You can take the best materials predicted using computational techniques and make them experimentally," says study co-author Cormac Toher, PhD, assistant research professor of mechanical engineering and materials science at Duke University.

The first and second authors of the new study are UB PhD graduate Patrick Avery and UB PhD student Xiaoyu Wang, both in Zurek's lab. In addition to these researchers, Zurek, Curtarolo and Toher, the co-authors of the paper include Corey Oses and Eric Gossett of Duke University and Davide Proserpio of the Università degli Studi di Milano.

Read more at Science Daily

Lightning 'superbolts' form over oceans from November to February

Lightning storm over ocean.
The lightning season in the Southeastern U.S. is almost finished for this year, but the peak season for the most powerful strokes of lightning won't begin until November, according to a newly published global survey of these rare events.

A University of Washington study maps the location and timing of "superbolts" -- bolts that release electrical energy of more than 1 million Joules, or a thousand times more energy than the average lightning bolt, in the very low frequency range in which lightning is most active. Results show that superbolts tend to hit the Earth in a fundamentally different pattern from regular lightning, for reasons that are not yet fully understood.

The study was published Sept. 9 in the Journal of Geophysical Research: Atmospheres, a journal of the American Geophysical Union.

"It's very unexpected and unusual where and when the very big strokes occur," said lead author Robert Holzworth, a UW professor of Earth and space sciences who has been tracking lightning for almost two decades.

Holzworth manages the World Wide Lightning Location Network, a UW-managed research consortium that operates about 100 lightning detection stations around the world, from Antarctica to northern Finland. By measuring precisely when a stroke's signal reaches three or more different stations, the network can compare the arrival times to determine the lightning bolt's size and location.
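The arrival-time comparison can be illustrated with a toy two-dimensional multilateration sketch. The station coordinates, search grid and straight-line propagation at light speed are simplifying assumptions for illustration, not the network's actual algorithm:

```python
import math

SIGNAL_SPEED = 3.0e8  # radio propagation speed in m/s (simplified)

def locate(stations, arrival_times, candidates):
    """Pick the candidate point whose implied emission times agree best.

    For the true source, t_i - d_i / c is the same (unknown) emission
    time at every station, so the spread of those estimates is zero."""
    def spread(point):
        t0 = [t - math.dist(point, s) / SIGNAL_SPEED
              for s, t in zip(stations, arrival_times)]
        return max(t0) - min(t0)
    return min(candidates, key=spread)

# Toy example: three stations, a known source, and a coarse search grid.
stations = [(0.0, 0.0), (100_000.0, 0.0), (0.0, 80_000.0)]
source = (60_000.0, 40_000.0)
arrivals = [math.dist(source, s) / SIGNAL_SPEED for s in stations]
grid = [(x * 1e4, y * 1e4) for x in range(11) for y in range(9)]

found = locate(stations, arrivals, grid)
```

With three or more non-collinear stations, only the true source makes all the per-station emission-time estimates agree, which is why the network needs at least three detections per stroke.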

The network has operated since the early 2000s. For the new study, the researchers looked at 2 billion lightning strokes recorded between 2010 and 2018. Some 8,000 events -- about four in every million strokes, or one in 250,000 -- were confirmed superbolts.
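The quoted rate follows directly from the two counts in the study:

```python
total_strokes = 2_000_000_000  # strokes recorded 2010-2018
superbolts = 8_000             # confirmed superbolt events

per_million = superbolts * 1_000_000 / total_strokes  # strokes per million
one_in = total_strokes // superbolts                  # 1 in N strokes
```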

"Until the last couple of years, we didn't have enough data to do this kind of study," Holzworth said.

The authors compared their network's data against lightning observations from the Maryland-based company Earth Networks and from the New Zealand MetService.

The new paper shows that superbolts are most common in the Mediterranean Sea, the northeast Atlantic and over the Andes, with lesser hotspots east of Japan, in the tropical oceans and off the tip of South Africa. Unlike regular lightning, the superbolts tend to strike over water.

Explore a visualization of the data at https://public.tableau.com/profile/uw.news#!/vizhome/Superbolts/Dashboard1.

"Ninety percent of lightning strikes occur over land," Holzworth said. "But superbolts happen mostly over the water going right up to the coast. In fact, in the northeast Atlantic Ocean you can see Spain and England's coasts nicely outlined in the maps of superbolt distribution."

"The average stroke energy over water is greater than the average stroke energy over land -- we knew that," Holzworth said. "But that's for the typical energy levels. We were not expecting this dramatic difference."

The time of year for superbolts also doesn't follow the rules for typical lightning. Regular lightning hits in the summertime -- the three major so-called "lightning chimneys" for regular bolts coincide with summer thunderstorms over the Americas, sub-Saharan Africa and Southeast Asia. But superbolts, which are more common in the Northern Hemisphere, strike both hemispheres between the months of November and February.

The reason for the pattern is still mysterious. Some years have many more superbolts than others: late 2013 was an all-time high, and late 2014 was the next highest, with other years having far fewer events.

Read more at Science Daily

Researchers unearth 'new' mass-extinction

Mount Emei (Emeishan), Sichuan Province, China.
A team of scientists has concluded that Earth experienced a previously underestimated severe mass-extinction event about 260 million years ago, raising the total of major mass extinctions in the geologic record to six.

"It is crucial that we know the number of severe mass extinctions and their timing in order to investigate their causes," explains Michael Rampino, a professor in New York University's Department of Biology and a co-author of the analysis, which appears in the journal Historical Biology. "Notably, all six major mass extinctions are correlated with devastating environmental upheavals -- specifically, massive flood-basalt eruptions, each covering more than a million square kilometers with thick lava flows."

Scientists had previously determined that there were five major mass-extinction events, wiping out large numbers of species and defining the ends of geological periods: the end of the Ordovician (443 million years ago), the Late Devonian (372 million years ago), the Permian (252 million years ago), the Triassic (201 million years ago), and the Cretaceous (66 million years ago). And, in fact, many researchers have raised concerns about the contemporary, ongoing loss of species diversity -- a development that might be labeled a "seventh extinction" because such a modern mass extinction, scientists have predicted, could end up being as severe as these past events.

The Historical Biology work, which also included Nanjing University's Shu-zhong Shen, focused on the Guadalupian, or Middle Permian period, which lasted from 272 to about 260 million years ago.

Here, the researchers observe, the end-Guadalupian extinction event -- which affected life on land and in the seas -- occurred at the same time as the Emeishan flood-basalt eruption that produced the Emeishan Traps, an extensive rock formation found today in southern China. The eruption's impact was akin to those causing other known severe mass extinctions, Rampino says.

"Massive eruptions such as this one release large amounts of greenhouse gases, specifically carbon dioxide and methane, that cause severe global warming, with warm, oxygen-poor oceans that are not conducive to marine life," he notes.

"In terms of both losses in the number of species and overall ecological damage, the end-Guadalupian event now ranks as a major mass extinction, similar to the other five," the authors write.

From Science Daily

Sep 8, 2019

Bad to the bone or just bad behavior?

Hannibal. Voldemort. Skeletor and Gargamel. It's hard to imagine any nefarious villain having redeeming qualities. But what if someone were to tell you that the Joker is a monster only because he learned the behavior from people around him and it's possible that, one day, he might change for the better?

A new study out of Columbia University suggests that the way we perceive others' bad behavior -- as either biological and innate or potentially changeable -- impacts our willingness to cut them some slack.

The study, published in the Journal of Experimental Psychology: General, found that adults are less willing to be charitable toward "bad" individuals whose moral characteristics are attributed to an innate biological source. Conversely, adults are more apt to be generous toward individuals when led to focus on other explanations for moral "badness" that suggest potential for change. Unlike adults, children did not appear to distinguish between characters whose moral characteristics were described in different ways.

The findings may have implications for how we perceive individuals in society, such as those imprisoned for crimes.

"If people want to take something away from this study and apply it to their own lives, it is to be mindful of how they talk about others and their transgressions," said Larisa Heiphetz, an assistant professor of psychology and the study's principal investigator. "People often encounter moral transgressions, whether in others' behaviors or their own. This study reveals that the way we treat those individuals can be strongly influenced by the way others describe their transgressions."

Heiphetz's research also revealed that a person's "goodness" was seen by both age groups as more of a biological, innate trait than "badness." Both children and adults were more likely to say that goodness, rather than badness, was something with which people are born and a fundamental, unchanging part of who they are.

The study, funded by Columbia University, the Indiana University Lilly School of Philanthropy and the John Templeton Foundation, is one in a growing area of research focused on psychological essentialism -- how we think about people's characteristics in essentialist terms (e.g., innate, immutable and due to biological factors) or non-essentialist terms (socially learned, changeable). Prior work has shown that people readily attribute many human characteristics to innate, unchanging factors.

To learn how people perceive moral goodness and moral badness, Heiphetz and a group of Columbia students asked children and adults what they thought about a variety of morally good and morally bad characteristics. They found that both groups perceived "goodness" as a more central, unchanging feature of who someone is than badness, which was more likely to be perceived as something that can improve over time. That led Heiphetz to wonder whether this perception had any consequences. She gave children and adults material resources, including stickers and entries to a lottery, and told them about pairs of fictional people who had the same "bad" moral characteristics, but for different reasons: one was described in an essentialist way -- born bad -- and the other in a non-essentialist way -- bad as a result of behavior learned from other people in their lives.

Read more at Science Daily

Study locates brain areas for understanding metaphors in healthy and schizophrenic people

Scientists have used MRI scanners to discover the parts of the brain which understand metaphors, in both healthy volunteers and people with schizophrenia. They found that people with schizophrenia employ different brain circuits to overcome initial lack of understanding. The researchers hope this identification of brain reactions and affected areas may help people with schizophrenia to better comprehend metaphors in everyday speech. This work is presented at the ECNP congress in Copenhagen.

People with schizophrenia often have problems understanding common figurative expressions, such as humour, irony and spoken metaphors. They tend to take a metaphor at its literal meaning (for example, "a leap in the dark" may imply jumping and darkness to someone with schizophrenia): it may take some time for them to arrive at an understanding of what the metaphor is meant to imply. There has been little attempt to understand why this might be so at a neurological level.

A group of Polish and Czech researchers examined 30 patients who had been diagnosed with schizophrenia and 30 healthy controls. While undergoing a brain scan in a high-sensitivity MRI, the participants read 90 brief stories: 30 with a metaphorical ending, 30 with an absurd/nonsense ending, and 30 with a neutral (i.e. literal) ending. The scientists monitored brain activity while the subjects were reacting to the stories.

They found that, compared to controls, the patient group showed increased brain activity in certain areas but lower brain activity in others. For example, the healthy group showed brain activation in the prefrontal cortex (near the front of the brain) and left amygdala (at the centre of the brain, near the top of the brain stem), implying that these are the brain areas where metaphors are normally processed. Schizophrenia patients instead showed decreased activation in the temporal sulcus (an area ascending from the low central brain towards the back of the head). Researcher Martin Jáni, from Jagiellonian University in Krakow, Poland, said:

"Previous researchers studied brain areas that are connected to impaired metaphor understanding in schizophrenia, so comparing metaphors with literal statements. However, by adding the absurd punchline, we were able to explore the stage at which the deficit occurs. We also used everyday metaphors, which would be easily understood.

"We found that the biggest changes in brain activity in schizophrenia patients occur during the basic stage of metaphor processing -- that is, when a person needs to recognize there is an incongruity between the opening sentence and the punchline. These activated areas of the brain are very different from the brain areas activated in healthy subjects, as if the brain is struggling to find a compensatory mechanism to bypass the circuits normally used to understand metaphor.

"It's likely that this inability to understand the sort of conventional metaphors we use in everyday life is socially isolating for people with schizophrenia. While this is still at the research stage, our hope is that we can develop practical skills in patients with schizophrenia -- and indeed in the people who know them -- which will help them understand speech the way it was intended."

Read more at Science Daily