Apr 18, 2019

A history of the Crusades, as told by crusaders' DNA

This image shows the bones of the Crusaders found in a burial pit in Sidon, Lebanon.
History can tell us a lot about the Crusades, the series of religious wars fought between 1095 and 1291, in which Christian invaders tried to claim the Near East. But the DNA of nine 13th century Crusaders buried in a pit in Lebanon shows that there's more to learn about who the Crusaders were and their interactions with the populations they encountered. The work appears April 18 in The American Journal of Human Genetics.

The remains suggest that the soldiers making up the Crusader armies were genetically diverse and intermixed with the local population in the Near East, although they didn't have a lasting effect on the genetics of Lebanese people living today. They also highlight the important role ancient DNA can play in helping us understand historical events that are less well documented.

"We know that Richard the Lionheart went to fight in the Crusades, but we don't know much about the ordinary soldiers who lived and died there, and these ancient samples give us insights into that," says senior author Chris Tyler-Smith, a genetics researcher at the Wellcome Sanger Institute.

"Our findings give us an unprecedented view of the ancestry of the people who fought in the Crusader army. And it wasn't just Europeans," says first author Marc Haber, also of the Wellcome Sanger Institute. "We see this exceptional genetic diversity in the Near East during medieval times, with Europeans, Near Easterners, and mixed individuals fighting in the Crusades and living and dying side by side."

Archaeological evidence suggested that 25 individuals whose remains were found in a burial pit near a Crusader castle near Sidon, Lebanon, were warriors who died in battle in the 1200s. Based on that, Tyler-Smith, Haber, and their colleagues conducted genetic analyses of the remains and were able to sequence the DNA of nine Crusaders, revealing that three were Europeans, four were Near Easterners, and two individuals had mixed genetic ancestry.

Throughout history, other massive human migrations -- like the movement of the Mongols through Asia under Genghis Khan and the arrival of colonial Iberians in South America -- have fundamentally reshaped the genetic makeup of those regions. But the authors theorize that the Crusaders' influence was likely shorter-lived because the Crusaders' genetic traces are insignificant in people living in Lebanon today. "They made big efforts to expel them, and succeeded after a couple of hundred years," says Tyler-Smith.

This ancient DNA can tell us things about history that modern DNA can't. In fact, when the researchers sequenced the DNA of people who lived in Lebanon 2,000 years ago, during the Roman period, they found that today's Lebanese population is genetically more similar to the Roman-era Lebanese than to the medieval population that included the Crusaders.

"If you look at the genetics of people who lived during the Roman period and the genetics of people who are living there today, you would think that there was just this continuity. You would think that nothing happened between the Roman period and today, and you would miss that for a certain period of time the population of Lebanon included Europeans and people with mixed ancestry," says Haber.

These findings indicate that there may be other major events in human history that don't show up in the DNA of people living today. And if those events aren't as well-documented as the Crusades, we simply might not know about them. "Our findings suggest that it's worthwhile looking at ancient DNA even from periods when it seems like not that much was going on genetically. Our history may be full of these transient pulses of genetic mixing that disappear without a trace," says Tyler-Smith.

That the researchers were able to sequence and interpret the nine Crusaders' DNA at all was also surprising. DNA degrades faster in warm climates, and the remains studied here were burned and crudely buried. "There has been a lot of long-term interest in the genetics of this region, because it has this very strategic position, a lot of history, and a lot of migrations. But previous research has focused mainly on present-day populations, partly because recovering ancient DNA from warm climates is so difficult. Our success shows that studying samples in a similar condition is now possible because of advances in DNA extraction and sequencing technology," says Haber.

Next, the researchers plan to investigate what was happening genetically in the Near East during the transition from the Bronze Age to the Iron Age.

Read more at Science Daily

Giant tortoises migrate unpredictably in the face of climate change

Galapagos giant tortoises are sometimes called gardeners of the Galapagos because they are responsible for long-distance seed dispersal. Their migration is key for many tree and plant species' survival.
Galapagos giant tortoises, sometimes called Gardeners of the Galapagos, are creatures of habit. In the cool, dry season, the highlands of the volcano slopes are engulfed in cloud, which allows the vegetation to grow despite the lack of rain. On the lower slopes, however, there is no thick fog layer, and vegetation is not available year-round. Adult tortoises therefore spend the dry season in the higher regions and trek back to the lower, relatively warmer zones, where abundant, nutritious vegetation is available once the rainy season begins.

The tortoises often take the same migration routes over many years in order to find optimal food quality and temperatures. The timing of this migration is essential for keeping their energy levels high, and climate change could disrupt a tortoise's ability to migrate at the right time.

In the Ecological Society of America's journal Ecology, researchers use GPS to track the timing and patterns of tortoise migration over multiple years.

"We had three main goals in the study," says Guillaume Bastille-Rousseau, lead author of the paper. "One was determining if tortoises adjust their timing of migration to current environmental conditions. Two, if so, what clues do they use to adjust the timing, and, three, what are the energetic consequences of migration mis-timing for tortoises?"

The researchers expected the migrations to be timed with current food and temperature conditions because many other migratory species operate that way. Bastille-Rousseau says, "Many animals, such as ungulates, can track current environmental conditions and migrate accordingly -- what researchers sometimes refer to as surfing the green wave."

Contrary to the researchers' expectations, however, migration is weakly associated with current conditions such as fog, rain, and temperature. For instance, if it is unseasonably arid, it appears the tortoises do not take that variation into account when deciding it is time to migrate. It is unclear at this point whether they are basing their migration decisions on memories of past conditions or if they are simply incorrectly assessing current local conditions.

Bastille-Rousseau says the team is surprised by the mismatch, stating "tortoise timing of migration fluctuated a lot among years, often by over two months. This indicates that migration for tortoises may not just be about foraging opportunities. For example, female tortoises have to make decisions related to nesting, and we still have a lot to learn about migration in giant tortoises."

Fortunately, this sub-optimal timing may not yet have a critical impact on tortoise health. Possibly because of their long lives of up to 100 years and their large body size, giant tortoises suffer smaller consequences from mistimed migration than small, short-lived animals do. Giant tortoises can go up to a year without eating and survive, while other migrating species must eat more regularly to sustain their energy levels.

Giant tortoises are important ecosystem engineers in the Galapagos, responsible for long-distance seed dispersal, and their migration is key for many tree and plant species' survival. How the tortoises' variation in migration timing will affect the rest of the ecosystem is still unclear. Because tortoises do not seem to be tracking annual variation in environmental conditions, it is quite possible that the mistiming of migration will keep increasing in the future.

Read more at Science Daily

Data mining digs up hidden clues to major California earthquake triggers

A historic image of quake damage in Long Beach, California, 1933.
A powerful computational study of southern California seismic records has revealed detailed information about a plethora of previously undetected small earthquakes, giving a more precise picture about stress in the earth's crust. A new publicly available catalog of these findings will help seismologists better understand the stresses triggering the larger earthquakes that occasionally rock the region.

"It's very difficult to unpack what triggers larger earthquakes because they are infrequent, but with this new information about a huge number of small earthquakes, we can see how stress evolves in fault systems," said Daniel Trugman, a post-doctoral fellow at Los Alamos National Laboratory and coauthor of a paper published in the journal Science today. "This new information about triggering mechanisms and hidden foreshocks gives us a much better platform for explaining how big quakes get started," Trugman said.

Crunching the Numbers

Trugman and coauthors from the California Institute of Technology and Scripps Institution of Oceanography performed a massive data mining operation on the Southern California Seismic Network archive, searching for real quakes buried in the noise. The team was able to detect, understand, and locate quakes more precisely, creating the most comprehensive earthquake catalog to date. The work identified 1.81 million quakes -- 10 times more than had been found using traditional seismology methods.

The team developed a comprehensive, detailed earthquake library for the entire southern California region, called the Quake Template Matching (QTM) catalog. They are using it to create a more complete map of California earthquake faults and behavior. This catalog may help researchers detect and locate quakes more precisely.

The team analyzed nearly two decades of data collected by the Southern California Seismic Network. The network, considered one of the world's best seismic systems, amasses a catalog of quakes from 550 seismic monitoring stations in the region. The SCSN catalog is based entirely on the traditional approach: manual observation and visual analysis. But Trugman says this traditional approach misses many weak signals that are indicators of small earthquakes.

Matching Templates Is Key

The team improved on this catalog with data mining. Using parallel computing, they crunched nearly 100 terabytes of data across 200 graphics processing units. Zooming in at high resolution for a 10-year period, they performed template matching using seismograms (waveforms or signals) of previously identified quakes. To create templates, they cut out pieces of waveforms from previously recorded earthquakes and matched those waveforms to patterns of signals recorded simultaneously from multiple seismic stations. Template matching has been done before, but never at this scale.
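The template-matching step boils down to sliding a short waveform from a known quake along the continuous record and flagging the offsets where the two match closely. Below is a minimal Python sketch of that idea using normalized cross-correlation; the function name, the 0.8 detection threshold, and the plain NumPy loop are illustrative assumptions, not the team's GPU-based pipeline.

import numpy as np

def match_template(continuous, template, threshold=0.8):
    """Return the sample offsets where a short template waveform matches a
    continuous seismic record, using normalized cross-correlation in [-1, 1]."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = np.empty(len(continuous) - n + 1)
    for i in range(len(scores)):
        window = continuous[i:i + n]
        std = window.std()
        scores[i] = 0.0 if std == 0 else np.dot(t, (window - window.mean()) / std)
    return np.flatnonzero(scores >= threshold), scores

In practice a template is correlated against records from many stations at once, and only candidate detections that line up consistently across stations are kept, which is what lets genuine small quakes be separated from noise.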

"Now we can automate it and search exhaustively through the full waveform archive to find signals of very small earthquakes previously hidden in the noise," Trugman explained.

Applying the templates turned up quake precursors, foreshocks, and small quakes that had been missed with manual methods. Those events often provide key physical and geographic details to help predict big quakes. The team also identified initiation sequences that reveal how quakes are triggered.

New details also revealed three-dimensional geometry and fault structures, which will support development of more realistic models.

Recently, Trugman and Los Alamos colleagues have applied machine learning to study earthquakes created in laboratory quake machines. That work has uncovered important details about earthquake behavior that may be used to predict quakes.

Read more at Science Daily

How to defend the Earth from asteroids

The NEOWISE space telescope spotted Comet C/2013 US10 Catalina speeding by Earth on August 28, 2015. This comet swung in from the Oort Cloud, the shell of cold, frozen material that surrounds the Sun in the most distant part of the solar system far beyond the orbit of Neptune. NEOWISE captured the comet as it fizzed with activity caused by the Sun's heat. On November 15, 2015, the comet made its closest approach to the Sun, dipping inside the Earth's orbit; it is possible that this is the first time this ancient comet has ever been this close to the Sun. NEOWISE observed the comet in two heat-sensitive infrared wavelengths, 3.4 and 4.6 microns, which are color-coded as cyan and red in this image. NEOWISE detected this comet a number of times in 2014 and 2015; five of the exposures are shown here in a combined image depicting the comet's motion across the sky. The copious quantities of gas and dust spewed by the comet appear red in this image because they are very cold, much colder than the background stars.
A mere 17-20 meters across, the Chelyabinsk meteor caused extensive ground damage and numerous injuries when it exploded in Earth's atmosphere in February 2013.

To prevent another such impact, Amy Mainzer and colleagues use a simple yet ingenious way to spot these tiny near-Earth objects (NEOs) as they hurtle toward the planet. She is the principal investigator of NASA's asteroid hunting mission at the Jet Propulsion Laboratory in Pasadena, California, and will outline the work of NASA's Planetary Defense Coordination Office this week at the American Physical Society April Meeting in Denver -- including her team's NEO recognition method and how it will aid the efforts to prevent future Earth impacts.

"If we find an object only a few days from impact, it greatly limits our choices, so in our search efforts we've focused on finding NEOs when they are further away from Earth, providing the maximum amount of time and opening up a wider range of mitigation possibilities," Mainzer said.

But it's a difficult task -- like spotting a lump of coal in the night sky, Mainzer explained. "NEOs are intrinsically faint because they are mostly really small and far away from us in space," she said. "Add to this the fact that some of them are as dark as printer toner, and trying to spot them against the black of space is very hard."

Instead of using visible light to spot incoming objects, Mainzer's team at JPL/Caltech has leveraged a characteristic signature of NEOs -- their heat. Asteroids and comets are warmed by the sun and so glow brightly at thermal wavelengths (infrared), making them easier to spot with the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) telescope.
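The physical reason heat works where reflected light fails is that an asteroid's thermal glow depends mainly on how strongly the Sun warms it, not on how dark its surface is. The rough Python estimate below illustrates the point with a standard equilibrium-temperature formula and Wien's law; the albedo values and the simple fast-rotator approximation are illustrative assumptions, not NEOWISE's actual thermal model.

def equilibrium_temperature_k(albedo, distance_au):
    """Rough blackbody equilibrium temperature of a fast-rotating asteroid."""
    return 278.0 * (1.0 - albedo) ** 0.25 / distance_au ** 0.5

def wien_peak_microns(temperature_k):
    """Wavelength (microns) where thermal emission peaks, from Wien's law."""
    return 2898.0 / temperature_k

for albedo in (0.05, 0.25):  # toner-dark surface vs. a fairly reflective one
    t = equilibrium_temperature_k(albedo, 1.0)  # at 1 au from the Sun
    print(f"albedo {albedo}: about {t:.0f} K, thermal peak near {wien_peak_microns(t):.0f} microns")

The two temperatures differ by only a few percent even though the visible reflectivity differs by a factor of five, which is why a thermal survey is far less biased against dark objects than a visible-light search.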

"With the NEOWISE mission we can spot objects regardless of their surface color, and use it to measure their sizes and other surface properties," Mainzer said.

Discovering NEO surface properties provides Mainzer and her colleagues an insight into how big the objects are and what they are made of, both critical details in mounting a defensive strategy against an Earth-threatening NEO.

For instance, one defensive strategy is to physically "nudge" an NEO away from an Earth impact trajectory. But to calculate the energy required for that nudge, details of NEO mass, and therefore size and composition, are necessary.

Astronomers also think that examining the composition of asteroids will help to understand how the solar system was formed.

"These objects are intrinsically interesting because some are thought to be as old as the original material that made up the solar system," Mainzer said. "One of the things that we have been finding is that NEOs are pretty diverse in composition."

Mainzer is now keen to leverage advances in camera technology to aid in the search for NEOs. "We are proposing to NASA a new telescope, the Near-Earth Object Camera (NEOCam), to do a much more comprehensive job of mapping asteroid locations and measuring their sizes," Mainzer said.

Read more at Science Daily

Why lightning often strikes twice

Lightning often does strike twice.
Contrary to popular belief, lightning often does strike twice, but the reason why a lightning channel is 'reused' has remained a mystery. Now, an international research team led by the University of Groningen has used the LOFAR radio telescope to study the development of lightning flashes in unprecedented detail. Their work reveals that the negative charges inside a thundercloud are not discharged all in a single flash, but are in part stored alongside the leader channel at interruptions, inside structures the researchers have called needles. Through these needles, a negative charge may cause a repeated discharge to the ground. The results were published on 18 April in the journal Nature.

Needles

"This finding is in sharp contrast to the present picture, in which the charge flows along plasma channels directly from one part of the cloud to another, or to the ground," explains Olaf Scholten, Professor of Physics at the KVI-CART institute of the University of Groningen. The reason why the needles have never been seen before lies in the 'supreme capabilities' of LOFAR, adds his colleague Dr Brian Hare, first author of the paper: "These needles can have a length of 100 meters and a diameter of less than five meters, and are too small and too short-lived for other lightning detection systems."

The Low Frequency Array (LOFAR) is a Dutch radio telescope consisting of thousands of rather simple antennas spread out over northern Europe. These antennas are connected to a central computer through fiber-optic cables, which means that they can operate as a single entity. LOFAR was developed primarily for radio astronomy observations, but the frequency range of the antennas also makes it suitable for lightning research, as discharges produce bursts in the VHF (very high frequency) radio band.

Inside the cloud

For the present lightning observations, the scientists have used only the Dutch LOFAR stations, which cover an area of 3,200 square kilometers. This new study analyzed the raw time-traces (which are accurate to one nanosecond) as measured in the 30-80 MHz band. Brian Hare: "These data allow us to detect lightning propagation at a scale where, for the first time, we can distinguish the primary processes. Furthermore, the use of radio waves allows us to look inside the thundercloud, where most of the lightning resides."

Lightning occurs when strong updrafts generate a kind of static electricity in large cumulonimbus clouds. Parts of the cloud become positively charged and others negatively. When this charge separation is large enough, a violent discharge happens, which we know as lightning. Such a discharge starts with a plasma, a small area of ionized air hot enough to be electrically conductive. This small area grows into a forked plasma channel that can reach lengths of several kilometers. The positive tips of the plasma channel collect negative charges from the cloud, which pass through the channel to the negative tip, where the charge is discharged. It was already known that a large amount of VHF emissions is produced at the growing tips of the negative channels while the positive channels show emissions only along the channel, not at the tip.

A new algorithm

The scientists developed a new algorithm for LOFAR data, allowing them to visualize the VHF radio emissions from two lightning flashes. The antenna array and the very precise time stamp on all the data allowed them to pinpoint the emission sources with unprecedented resolution. "Close to the core area of LOFAR, where the antenna density is highest, the spatial accuracy was about one meter," says Professor Scholten. Furthermore, the data obtained was capable of localizing 10 times more VHF sources than other three-dimensional imaging systems, with a temporal resolution in the range of nanoseconds. This resulted in a high-resolution 3D image of the lightning discharge.
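The geometric idea behind pinpointing the sources is that each antenna records the same radio pulse at a slightly different time, and those nanosecond-scale delays over-constrain the source position. The toy time-of-arrival solver below illustrates this; the antenna layout, the least-squares fit, and the synthetic test source are illustrative assumptions and are not the LOFAR imaging algorithm described in the paper.

import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

def locate_source(antenna_xyz, arrival_times, guess):
    """Fit a source position and emission time (x, y, z, t0) to the pulse
    arrival times measured at an array of antennas."""
    def residuals(params):
        x, y, z, t0 = params
        dist = np.linalg.norm(antenna_xyz - np.array([x, y, z]), axis=1)
        return (t0 + dist / C) - arrival_times
    return least_squares(residuals, guess).x

# Toy check: 20 ground antennas and a source 4 km above the array.
rng = np.random.default_rng(0)
antennas = rng.uniform(-1500, 1500, size=(20, 3)) * np.array([1.0, 1.0, 0.0])
true_source = np.array([200.0, -350.0, 4000.0])
arrivals = np.linalg.norm(antennas - true_source, axis=1) / C  # emission at t0 = 0
print(locate_source(antennas, arrivals, guess=[0.0, 0.0, 3000.0, 0.0]))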

Break

The results clearly show the occurrence of a break in the discharge channel, at a location where needles are formed. These appear to discharge negative charges from the main channel, which subsequently re-enter the cloud. The reduction of charges in the channel causes the break. However, once the charge in the cloud becomes high enough again, the flow through the channel is restored, leading to a second discharge of lightning. By this mechanism, lightning will strike in the same area repeatedly.

Read more at Science Daily

Apr 17, 2019

Asteroids help scientists to measure the diameters of faraway stars

When an asteroid passes in front of a star, the resulting diffraction pattern (here greatly exaggerated) can reveal the star's angular size.
Using the unique capabilities of telescopes specialised in detecting cosmic gamma rays, scientists have measured the smallest apparent size of a star in the night sky to date. The measurements with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) reveal the diameters of a giant star 2674 light-years away and of a sun-like star at a distance of 700 light-years. The study establishes a new method for astronomers to determine the size of stars, as the international team led by Tarek Hassan from DESY and Michael Daniel from the Smithsonian Astrophysical Observatory (SAO) reports in the journal Nature Astronomy.

Almost any star in the sky is too far away to be resolved by even the best optical telescopes. To overcome this limitation, the scientists used an optical phenomenon called diffraction to measure the star's diameter. This effect illustrates the wave nature of light and occurs when an object, such as an asteroid, passes in front of a star. "The incredibly faint shadows of asteroids pass over us every day," explained Hassan. "But the rim of their shadow isn't perfectly sharp. Instead, wrinkles of light surround the central shadow, like water ripples." This is a general optical phenomenon called a diffraction pattern and can be reproduced in any school lab with a laser hitting a sharp edge.

The researchers used the fact that the shape of the pattern can reveal the angular size of the light source. However, unlike in the school lab, the diffraction pattern of a star occulted by an asteroid is very hard to measure. "These asteroid occultations are hard to predict," said Daniel. "And the only chance to catch the diffraction pattern is to make very fast snapshots when the shadow sweeps across the telescope." Astronomers have used this method to measure the angular sizes of stars occulted by the moon. It works right down to angular diameters of about one milliarcsecond, which is about the apparent size of a two-cent coin atop the Eiffel Tower in Paris as seen from New York.

However, not many stars in the sky are that "big." To resolve even smaller angular diameters, the team employed Cherenkov telescopes. These instruments normally watch out for the extremely short and faint bluish glow that high-energy particles and gamma rays from the cosmos produce when they encounter and race through Earth's atmosphere. Cherenkov telescopes do not produce the best optical images. But thanks to their huge mirror surface, usually segmented in hexagons like a fly's eye, they are extremely sensitive to fast variations of light, including starlight.

Using the four large VERITAS telescopes at the Fred Lawrence Whipple Observatory in Arizona, the team clearly detected the diffraction pattern of the star TYC 5517-227-1 sweeping past as the star was occulted by the 60-kilometre asteroid Imprinetta on 22 February 2018. The VERITAS telescopes allowed the team to take 300 snapshots every second. From these data, the brightness profile of the diffraction pattern could be reconstructed with high accuracy, resulting in an angular, or apparent, diameter of the star of 0.125 milliarcseconds. Together with its distance of 2674 light-years, this means the star's true diameter is eleven times that of our sun. Interestingly, this result categorises the star, whose class was ambiguous before, as a red giant.

The researchers repeated the feat three months later on 22 May 2018, when asteroid Penelope with a diameter of 88 kilometres occulted the star TYC 278-748-1. The measurements resulted in an angular size of 0.094 milliarcseconds and a true diameter of 2.17 times that of our sun. This time the team could compare the diameter to an earlier estimate based on other characteristics of the star that had placed its diameter at 2.173 times the solar diameter -- an excellent match, although the earlier estimate was not based on a direct measurement.
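As a quick cross-check on these figures, converting an angular diameter into a physical one is simple geometry: the physical diameter is roughly the angular size in radians multiplied by the distance. The short Python calculation below, using rounded constants, reproduces both quoted values; it is only an arithmetic sanity check, not part of the VERITAS analysis.

import math

MAS_TO_RAD = math.radians(1.0 / 3600.0 / 1000.0)  # one milliarcsecond in radians
METRES_PER_LIGHT_YEAR = 9.4607e15
SUN_DIAMETER_M = 1.3914e9

def diameter_in_suns(angular_mas, distance_ly):
    """Stellar diameter in solar diameters from angular size and distance."""
    return angular_mas * MAS_TO_RAD * distance_ly * METRES_PER_LIGHT_YEAR / SUN_DIAMETER_M

print(diameter_in_suns(0.125, 2674))  # ~11 solar diameters (TYC 5517-227-1)
print(diameter_in_suns(0.094, 700))   # ~2.17 solar diameters (TYC 278-748-1)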

"This is the smallest angular size of a star ever measured directly," Daniel emphasised. "Profiling asteroid occultations of stars with Cherenkov telescopes delivers a ten times better resolution than the standard lunar occultation method. Also, it is at least twice as sharp as available interferometric size measurements." The uncertainty of these measurements is about ten per cent, as the authors write. "We expect this can be notably improved by optimising the set-up, for example narrowing the wavelength of the colours recorded," said Daniel. Since different wavelengths are diffracted differently, the pattern is smeared out if too many colours are recorded at the same time.

Read more at Science Daily

Common sleep myths compromise good sleep and health

People often say they can get by on five or fewer hours of sleep, that snoring is harmless, and that having a drink helps you to fall asleep.

These are, in fact, among the most widely held myths about sleeping that not only shape poor habits, but may also pose a significant public health threat, according to a new study publishing online in Sleep Health on April 16.

Researchers from NYU School of Medicine reviewed more than 8,000 websites to identify the 20 most common assumptions about sleep. With a team of sleep medicine experts, they ranked them based on whether each could be dispelled as a myth or supported by scientific evidence, and on the harm that the myth could cause.

"Sleep is a vital part of life that affects our productivity, mood, and general health and well-being," says study lead investigator, Rebecca Robbins, PhD, a postdoctoral research fellow in the Department of Population Health at NYU Langone Health. "Dispelling myths about sleep promotes healthier sleep habits which, in turn, promote overall better health."

The claim by some people that they can get by on five hours of sleep was among the top myths researchers were able to dispel based on scientific evidence. They say this myth also poses the most serious risk to health from long-term sleep deficits. To avoid the effects of this falsehood and others identified in this study, such as the value of taking naps when you routinely have difficulty sleeping overnight, Robbins and her colleagues suggest creating a consistent sleep schedule and spending more time, at least seven hours, asleep.

Another common myth relates to snoring. And while Robbins says snoring can be harmless, it can also be a sign of sleep apnea, a potentially serious sleep disorder in which breathing starts and stops over the course of the night. The authors encourage patients not to dismiss loud snoring, but rather to see a doctor since this sleep behavior may lead to heart stoppages or other illnesses.

The study authors also found sufficient evidence in published studies that, despite beliefs to the contrary, drinking alcoholic beverages before bed is indeed unhealthy for sleep. According to experts, alcohol reduces the body's ability to achieve deep sleep, which people need to function properly.

"Sleep is important to health, and there needs to be greater effort to inform the public regarding this important public health issue," says study senior investigator Girardin Jean Louis, PhD, a professor in the departments of Population Health and Psychiatry at NYU Langone. "For example, by discussing sleep habits with their patients, doctors can help prevent sleep myths from increasing risks for heart disease, obesity, and diabetes."

Read more at Science Daily

Coelacanth reveals new insights into skull evolution

This is an overall anterolateral view of the skull of a coelacanth fetus. The brain is shown in yellow.
An international team of researchers presents the first observations of the development of the skull and brain in the living coelacanth Latimeria chalumnae. Their study, published in Nature, provides new insights into the biology of this iconic animal and the evolution of the vertebrate skull.

The coelacanth Latimeria is a marine fish closely related to tetrapods, four-limbed vertebrates including amphibians, mammals and reptiles. Coelacanths were thought to have been extinct for 70 million years, until the accidental capture of a living specimen by a South African fisherman in 1938. Eighty years after its discovery, Latimeria remains of scientific interest for understanding the origin of tetrapods and the evolution of their closest fossil relatives -- the lobe-finned fishes.

One of the most unusual features of Latimeria is its hinged braincase, which is otherwise found only in fossil lobe-finned fishes from the Devonian period (410-360 million years ago). The braincase of Latimeria is completely split into an anterior and a posterior portion by a joint called the "intracranial joint." In addition, the brain lies far at the rear of the braincase and takes up only 1% of the cavity housing it. This mismatch between the brain and its cavity is totally unequalled among living vertebrates. How the coelacanth skull grows and why the brain remains so small has puzzled scientists for years. To answer these questions, researchers studied specimens at different stages of cranial development from several public natural history collections.

Although many specimens of adult coelacanths are available in natural history collections, earlier life stages such as fetuses are extremely rare. The scientists therefore used state-of-the-art imaging techniques to visualize the internal anatomy of the specimens without damaging them. They notably digitized a 5 cm-long fetus, the earliest developmental stage available for Latimeria, with synchrotron X-ray microtomography at the European Synchrotron (ESRF). Over the last two decades, the ESRF has developed unique expertise in designing non-invasive techniques widely used for evolutionary biology studies.

In addition, the researchers imaged other stages with a powerful Magnetic Resonance Imaging (MRI) scanner at the Brain and Spine Institute (Paris, France) and with a conventional X-ray micro-CT scanner at the Muséum national d'Histoire naturelle (Paris, France). These data were used to generate detailed 3D models, which allowed the scientists to describe how the form of the skull, the brain and the notochord (a tube extending below the brain and the spinal cord in the early stages of life) changes from a fetus to an adult.

They also observed how these structures are positioned relative to each other at each stage, and compared their observations with what is known about the formation of the skull in other vertebrates.

In contrast to most other vertebrates, in which the notochord is replaced by the vertebral column early in embryonic development, the notochord of Latimeria expands considerably. This dramatic enlargement of the notochord likely influences the patterning of the braincase and might underpin the formation of the intracranial joint. The brain might also be affected by the enlargement of the notochord, as its relative size decreases dramatically during development.

These results illuminate for the first time the development of the living coelacanth skull and brain, and open up new avenues for research on the evolution of the vertebrate head.

Read more at Science Daily

New form of laser for sound

The optical laser has grown to a $10 billion global technology market since it was invented in 1960, and has led to Nobel prizes for Art Ashkin for developing optical tweezing and Gerard Mourou and Donna Strickland for work with pulsed lasers. Now a Rochester Institute of Technology researcher has teamed up with experts at the University of Rochester to create a different kind of laser -- a laser for sound, using the optical tweezer technique invented by Ashkin.

In the newest issue of Nature Photonics, the researchers propose and demonstrate a phonon laser using an optically levitated nanoparticle. A phonon is a quantum of energy associated with a sound wave, and optical tweezers make it possible to test the limits of quantum effects in isolation by eliminating physical disturbances from the surrounding environment. The researchers studied the mechanical vibrations of the nanoparticle, which is levitated against gravity by the force of radiation at the focus of an optical laser beam.

"Measuring the position of the nanoparticle by detecting the light it scatters, and feeding that information back into the tweezer beam allows us to create a laser-like situation," said Mishkat Bhattacharya, associate professor of physics at RIT and a theoretical quantum optics researcher. "The mechanical vibrations become intense and fall into perfect sync, just like the electromagnetic waves emerging from an optical laser."

Because the waves emerging from a laser pointer are in sync, the beam can travel a long distance without spreading in all directions -- unlike light from the sun or from a light bulb. In a standard optical laser the properties of the light output are controlled by the material from which the laser is made. Interestingly, in the phonon laser the roles of light and matter are reversed -- the motion of the material particle is now governed by the optical feedback.

"We are very excited to see what the uses of this device are going to be -- especially for sensing and information processing given that the optical laser has so many, and still evolving, applications," said Bhattacharya. He also said the phonon laser promises to enable the investigation of fundamental quantum physics, including engineering of the famous thought experiment of Schrödinger's cat, which can exist at two places simultaneously.

From Science Daily

Scientists restore some functions in a pig's brain hours after death

Immunofluorescent stains for neurons (NeuN; green), astrocytes (GFAP; red), and cell nuclei (DAPI, blue) in the hippocampal CA3 region of brains either unperfused for 10 hours after death (left) or subjected to perfusion with the BrainEx technology (right). After 10 hours postmortem, neurons and astrocytes normally undergo cellular disintegration unless salvaged by the BrainEx system.
Circulation and cellular activity were restored in a pig's brain four hours after its death, a finding that challenges long-held assumptions about the timing and irreversible nature of the cessation of some brain functions after death, Yale scientists report April 18 in the journal Nature.

The brain of a postmortem pig obtained from a meatpacking plant was isolated and circulated with a specially designed chemical solution. Many basic cellular functions, once thought to cease seconds or minutes after oxygen and blood flow cease, were observed, the scientists report.

"The intact brain of a large mammal retains a previously underappreciated capacity for restoration of circulation and certain molecular and cellular activities multiple hours after circulatory arrest," said senior author Nenad Sestan, professor of neuroscience, comparative medicine, genetics, and psychiatry.

However, researchers also stressed that the treated brain lacked any recognizable global electrical signals associated with normal brain function.

"At no point did we observe the kind of organized electrical activity associated with perception, awareness, or consciousness," said co-first author Zvonimir Vrselja, associate research scientist in neuroscience. "Clinically defined, this is not a living brain, but it is a cellularly active brain."

Cellular death within the brain is usually considered to be a swift and irreversible process. Cut off from oxygen and a blood supply, the brain's electrical activity and signs of awareness disappear within seconds, while energy stores are depleted within minutes. Current understanding maintains that a cascade of injury and death molecules are then activated leading to widespread, irreversible degeneration.

However, researchers in Sestan's lab, whose research focuses on brain development and evolution, observed that the small tissue samples they worked with routinely showed signs of cellular viability, even when the tissue was harvested multiple hours postmortem. Intrigued, they obtained the brains of pigs processed for food production to study how widespread this postmortem viability might be in the intact brain. Four hours after the pig's death, they connected the vasculature of the brain to circulate a uniquely formulated solution they developed to preserve brain tissue, utilizing a system they call BrainEx. They found neural cell integrity was preserved, and certain neuronal, glial, and vascular cell functionality was restored.

The new system can help solve a vexing problem -- the inability to apply certain techniques to study the structure and function of the intact large mammalian brain -- which hinders rigorous investigations into topics like the roots of brain disorders, as well as neuronal connectivity in both healthy and abnormal conditions.

"Previously, we have only been able to study cells in the large mammalian brain under static or largely two-dimensional conditions utilizing small tissue samples outside of their native environment," said co-first author Stefano G. Daniele, an M.D./Ph.D. candidate. "For the first time, we are able to investigate the large brain in three dimensions, which increases our ability to study complex cellular interactions and connectivity."

While the advance has no immediate clinical application, the new research platform may one day be able to help doctors find ways to help salvage brain function in stroke patients, or test the efficacy of novel therapies targeting cellular recovery after injury, the authors say.

The research was primarily funded by the National Institutes of Health's (NIH) BRAIN Initiative.

"This line of research holds hope for advancing understanding and treatment of brain disorders and could lead to a whole new way of studying the postmortem human brain," said Andrea Beckel-Mitchener, chief of functional neurogenomics at the NIH's National Institute of Mental Health, which co-funded the research.

The researchers said that it is unclear whether this approach can be applied to a recently deceased human brain. The chemical solution used lacks many of the components natively found in human blood, such as immune cells and other blood cells, which makes the experimental system significantly different from normal living conditions. However, the researchers stressed that any future study involving human tissue or possible revival of global electrical activity in postmortem animal tissue should be done under strict ethical oversight.

"Restoration of consciousness was never a goal of this research," said co-author Stephen Latham, director of Yale's Interdisciplinary Center for Bioethics. "The researchers were prepared to intervene with the use of anesthetics and temperature-reduction to stop organized global electrical activity if it were to emerge. Everyone agreed in advance that experiments involving revived global activity couldn't go forward without clear ethical standards and institutional oversight mechanisms."

There is an ethical imperative to use tools developed by the Brain Initiative to unravel mysteries of brain injuries and disease, said Christine Grady, chief of the Department of Bioethics at the NIH Clinical Center.

Read more at Science Daily

Apr 16, 2019

Megalith tombs were family graves in European Stone Age

The Ansarve site on the island of Gotland in the Baltic Sea is embedded in an area with mostly hunter-gatherers at the time.
In a new study published in the Proceedings of the National Academy of Sciences, an international research team led from Uppsala University discovered kin relationships among Stone Age individuals buried in megalithic tombs in Ireland and Sweden. The kin relations can be traced for more than ten generations and suggest that megaliths were graves for kindred groups in Stone Age northwestern Europe.

Agriculture spread with migrants from the Fertile Crescent into Europe around 9,000 BCE, reaching northwestern Europe by 4,000 BCE. Starting around 4,500 BCE, a new phenomenon of constructing megalithic monuments, particularly for funerary practices, emerged along the Atlantic façade. These constructions have been enigmatic to the scientific community, and the origin and social structure of the groups that erected them has remained largely unknown. The international team sequenced and analysed the genomes from the human remains of 24 individuals from five megalithic burial sites, encompassing the widespread tradition of megalithic construction in northern and western Europe.

The team collected human remains of 24 individuals from megaliths in Ireland, in Scotland and on the Baltic island of Gotland, Sweden. The remains were radiocarbon-dated to between 3,800 and 2,600 BCE. DNA was extracted from bones and teeth for genome sequencing. The team compared the genomic data to the genetic variation of Stone Age groups and individuals from other parts of Europe. The individuals in the megaliths were closely related to Neolithic farmers in northern and western Europe, and also to some groups in Iberia, but less related to farmer groups in central Europe.

The team found an overrepresentation of males compared to females in the megalith tombs on the British Isles.

"We found paternal continuity through time, including the same Y-chromosome haplotypes reoccurring over and over again," says archaeogeneticist Helena Malmström of Uppsala University and co-first author. "However, female kindred members were not excluded from the megalith burials as three of the six kinship relationships in these megaliths involved females."

The genetic data show close kin relationships among the individuals buried within the megaliths. A likely parent-offspring relation was discovered for individuals in the Listhogil Tomb at the Carrowmore site and Tomb 1 at Primrose Grange, about 2 km away from each other. "This came as a surprise. It appears as if these Neolithic societies were tightly knit, with very close kin relations across burial sites," says population-geneticist Federico Sanchez-Quinto of Uppsala University and co-first author.

The Ansarve site on the island of Gotland in the Baltic Sea is embedded in an area with mostly hunter-gatherers at the time. "The people buried in the Ansarve tomb are remarkably different on a genetic level compared to the contemporaneous individuals excavated from hunter-gatherer contexts, showing that the burial tradition in this megalithic tomb, which lasted for over 700 years, was performed by distinct groups with roots in the European Neolithic expansion," says archaeogeneticist Magdalena Fraser of Uppsala University and co-first author.

"That we find distinct paternal lineages among the people in the megaliths, an overrepresentation of males in some tombs, and the clear kindred relationships point towards the individuals being part of a patrilineal segment of the society rather than representing a random sample from a larger Neolithic farmer community," says Mattias Jakobsson, population-geneticist at Uppsala University and senior author of the study.

"Our study demonstrates the potential in archaeogenetics to not only reveal large-scale migrations, but also inform about Stone Age societies and the role of particular phenomena in those times such as the megalith phenomena," says Federico Sanchez-Quinto.

Read more at Science Daily

NASA's Cassini reveals surprises with Titan's lakes

This near-infrared, color view from Cassini shows the sun glinting off of Titan's north polar seas.
On its final flyby of Saturn's largest moon in 2017, NASA's Cassini spacecraft gathered radar data revealing that the small liquid lakes in Titan's northern hemisphere are surprisingly deep, perched atop hills and filled with methane.

The new findings, published April 15 in Nature Astronomy, are the first confirmation of just how deep some of Titan's lakes are (more than 300 feet, or 100 meters) and of their composition. They provide new information about the way liquid methane rains on, evaporates from and seeps into Titan -- the only planetary body in our solar system other than Earth known to have stable liquid on its surface.

Scientists have known that Titan's hydrologic cycle works similarly to Earth's -- with one major difference. Instead of water evaporating from seas, forming clouds and rain, Titan does it all with methane and ethane. We tend to think of these hydrocarbons as a gas on Earth, unless they're pressurized in a tank. But Titan is so cold that they behave as liquids, like gasoline at room temperature on our planet.

Scientists have known that the much larger northern seas are filled with methane, but finding the smaller northern lakes filled mostly with methane was a surprise. Previously, Cassini data measured Ontario Lacus, the only major lake in Titan's southern hemisphere. There they found a roughly equal mix of methane and ethane. Ethane is slightly heavier than methane, with more carbon and hydrogen atoms in its makeup.

"Every time we make discoveries on Titan, Titan becomes more and more mysterious," said lead author Marco Mastrogiuseppe, Cassini radar scientist at Caltech in Pasadena, California. "But these new measurements help give an answer to a few key questions. We can actually now better understand the hydrology of Titan."

Adding to the oddities of Titan, with its Earth-like features carved by exotic materials, is the fact that the hydrology on one side of the northern hemisphere is completely different from that of the other side, said Cassini scientist and co-author Jonathan Lunine of Cornell University in Ithaca, New York.

"It is as if you looked down on the Earth's North Pole and could see that North America had a completely different geologic setting for bodies of liquid than Asia does," Lunine said.

On the eastern side of Titan, there are big seas with low elevation, canyons and islands. On the western side: small lakes. And the new measurements show the lakes perched atop big hills and plateaus. The new radar measurements confirm earlier findings that the lakes are far above sea level, but they conjure a new image of landforms -- like mesas or buttes -- sticking hundreds of feet above the surrounding landscape, with deep liquid lakes on top.

The fact that these western lakes are small -- just tens of miles across -- but very deep also tells scientists something new about their geology: It's the best evidence yet that they likely formed when the surrounding bedrock of ice and solid organics chemically dissolved and collapsed. On Earth, similar water lakes are known as karstic lakes. Occurring in areas like Germany, Croatia and the United States, they form when water dissolves limestone bedrock.

Alongside the investigation of deep lakes, a second paper in Nature Astronomy helps unravel more of the mystery of Titan's hydrologic cycle. Researchers used Cassini data to reveal what they call transient lakes. Different sets of observations -- from radar and infrared data -- seem to show liquid levels significantly changed.

The best explanation is that there was some seasonally driven change in the surface liquids, said lead author Shannon MacKenzie, planetary scientist at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland. "One possibility is that these transient features could have been shallower bodies of liquid that over the course of the season evaporated and infiltrated into the subsurface," she said.

These results and the findings from the Nature Astronomy paper on Titan's deep lakes support the idea that hydrocarbon rain feeds the lakes, which then can evaporate back into the atmosphere or drain into the subsurface, leaving reservoirs of liquid stored below.

Cassini, which arrived in the Saturn system in 2004 and ended its mission in 2017 by deliberately plunging into Saturn's atmosphere, mapped more than 620,000 square miles (1.6 million square kilometers) of liquid lakes and seas on Titan's surface. It did the work with the radar instrument, which sent out radio waves and collected a return signal (or echo) that provided information about the terrain and the liquid bodies' depth and composition, along with two imaging systems that could penetrate the moon's thick atmospheric haze.

The crucial data for the new research were gathered on Cassini's final close flyby of Titan, on April 22, 2017. It was the mission's last look at the moon's smaller lakes, and the team made the most of it. Collecting echoes from the surfaces of small lakes while Cassini zipped by Titan was a unique challenge.

"This was Cassini's last hurrah at Titan, and it really was a feat," Lunine said.

Read more at Science Daily

Astronomers discover third planet in the Kepler-47 circumbinary system

This artistic rendition of the Kepler-47 circumbinary planet system shows its three planets, with the large middle planet being the newly discovered Kepler-47d.
Astronomers have discovered a third planet in the Kepler-47 system, securing the system's title as the most interesting of the binary-star worlds. Using data from NASA's Kepler space telescope, a team of researchers, led by astronomers at San Diego State University, detected the new Neptune-to-Saturn-size planet orbiting between two previously known planets.

With its three planets orbiting two suns, Kepler-47 is the only known multi-planet circumbinary system. Circumbinary planets are those that orbit two stars.

The planets in the Kepler-47 system were detected via the "transit method." If the orbital plane of the planet is aligned edge-on as seen from Earth, the planet can pass in front of the host stars, leading to a measurable decrease in the observed brightness. The new planet, dubbed Kepler-47d, was not detected earlier due to weak transit signals.
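The depth of a transit scales with the square of the planet-to-star radius ratio, so even a Neptune-to-Saturn-size planet dims a roughly Sun-like star by well under one percent. The toy Python calculation below illustrates that scaling; the radii plugged in are round illustrative numbers, not the measured Kepler-47 values.

def transit_depth(planet_radius_earths, star_radius_suns):
    """Fractional dip in brightness when a planet crosses its star:
    depth ~ (R_planet / R_star) squared."""
    R_EARTH_KM = 6371.0
    R_SUN_KM = 695_700.0
    ratio = (planet_radius_earths * R_EARTH_KM) / (star_radius_suns * R_SUN_KM)
    return ratio ** 2

# A planet of 7 Earth radii crossing a Sun-sized star blocks only ~0.4% of its light.
print(f"{transit_depth(7.0, 1.0):.4%}")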

As is common with circumbinary planets, the alignment of the planets' orbital planes changes with time. In this case, the middle planet's orbit has become more aligned, leading to a stronger transit signal. The transit depth went from undetectable at the beginning of the Kepler Mission to the deepest of the three planets over the span of just four years.

The SDSU researchers were surprised by both the size and location of the new planet. Kepler-47d is the largest of the three planets in the Kepler-47 system.

"We saw a hint of a third planet back in 2012, but with only one transit we needed more data to be sure," said SDSU astronomer Jerome Orosz, the paper's lead author. "With an additional transit, the planet's orbital period could be determined, and we were then able to uncover more transits that were hidden in the noise in the earlier data."

William Welsh, SDSU astronomer and the study's co-author, said he and Orosz expected any additional planets in the Kepler-47 system to be orbiting exterior to the previously known planets. "We certainly didn't expect it to be the largest planet in the system. This was almost shocking," said Welsh. Their research was recently published in the Astronomical Journal.

With the discovery of the new planet, a much better understanding of the system is possible. For example, researchers now know the planets in this circumbinary system are very low density -- less than that of Saturn, the Solar System planet with the lowest density.

While a low density is not that unusual for the sizzling hot-Jupiter type exoplanets, it is rare for mild-temperature planets. Kepler-47d's equilibrium temperature is roughly 50 °F (10 °C), while Kepler-47c's is 26 °F (-3 °C). The innermost planet, which is the smallest circumbinary planet known, is a much hotter 336 °F (169 °C).

The inner, middle, and outer planets are 3.1, 7.0, and 4.7 times the size of the Earth, and take 49, 87, and 303 days, respectively, to orbit around their suns. The stars themselves orbit each other in only 7.45 days; one star is similar to the Sun, while the other has a third of the mass of the Sun. The entire system is compact and would fit inside the orbit of the Earth. It is approximately 3340 light-years away in the direction of the constellation Cygnus.

Read more at Science Daily

New evidence suggests volcanoes caused biggest mass extinction ever

A volcano erupts in a driving rain.
Researchers say mercury buried in ancient rock provides the strongest evidence yet that volcanoes caused the biggest mass extinction in the history of the Earth.

The extinction 252 million years ago was so dramatic and widespread that scientists call it "the Great Dying." The catastrophe killed off more than 95 percent of life on Earth over the course of hundreds of thousands of years.

Paleontologists with the University of Cincinnati and the China University of Geosciences said they found a spike in mercury in the geologic record at nearly a dozen sites around the world, which provides persuasive evidence that volcanic eruptions were to blame for this global cataclysm.

The study was published this month in the journal Nature Communications.

The eruptions ignited vast deposits of coal, releasing mercury vapor high into the atmosphere. Eventually, it rained down into the marine sediment around the planet, creating an elemental signature of a catastrophe that would herald the age of dinosaurs.

"Volcanic activities, including emissions of volcanic gases and combustion of organic matter, released abundant mercury to the surface of the Earth," said lead author Jun Shen, an associate professor at the China University of Geosciences.

The mass extinction occurred at what scientists call the Permian-Triassic Boundary. It killed off much of the terrestrial and marine life before the rise of dinosaurs. Some of these creatures were prehistoric monsters in their own right, such as the ferocious gorgonopsids that looked like a cross between a sabre-toothed tiger and a Komodo dragon.

The eruptions occurred in a volcanic system called the Siberian Traps in what is now central Russia. Many of the eruptions occurred not in cone-shaped volcanoes but through gaping fissures in the ground. The eruptions were frequent and long-lasting and their fury spanned a period of hundreds of thousands of years.

"Typically, when you have large, explosive volcanic eruptions, a lot of mercury is released into the atmosphere," said Thomas Algeo, a professor of geology in UC's McMicken College of Arts and Sciences.

"Mercury is a relatively new indicator for researchers. It has become a hot topic for investigating volcanic influences on major events in Earth's history," Algeo said.

Researchers use the sharp fossilized teeth of lamprey-like creatures called conodonts to date the rock in which the mercury was deposited. Like most other creatures on the planet, conodonts were decimated by the catastrophe.

The eruptions propelled as much as 3 million cubic kilometers of ash high into the air over this extended period. To put that in perspective, the 1980 eruption of Mount St. Helens in Washington sent just 1 cubic kilometer of ash into the atmosphere, even though ash fell on car windshields as far away as Oklahoma.

In fact, Algeo said, the Siberian Traps eruptions spewed so much material in the air, particularly greenhouse gases, that it warmed the planet by an average of about 10 degrees centigrade.

The warming climate likely would have been one of the biggest culprits in the mass extinction, he said. But acid rain would have spoiled many bodies of water and raised the acidity of the global oceans. And the warmer water would have had more dead zones from a lack of dissolved oxygen.

"We're often left scratching our heads about what exactly was most harmful. Creatures adapted to colder environments would have been out of luck," Algeo said. "So my guess is temperature change would be the No. 1 killer. Effects would be exacerbated by acidification and other toxins in the environment."

Stretching over an extended period, eruption after eruption prevented the Earth's food chain from recovering.

"It's not necessarily the intensity but the duration that matters," Algeo said. "The longer this went on, the more pressure was placed on the environment."

Likewise, the Earth was slow to recover from the disaster because the ongoing disturbances continued to wipe out biodiversity, he said.

Earth has witnessed five known mass extinctions over its 4.5 billion years.

Scientists used another elemental signature -- iridium -- to pin down the likely cause of the global mass extinction that wiped out the dinosaurs 65 million years ago. They believe an enormous meteor struck what is now Mexico.

The resulting plume of superheated earth blown into the atmosphere rained down material containing iridium that is found in the geologic record around the world.

Shen said the mercury signature provides convincing evidence that the Siberian Traps eruptions were responsible for the catastrophe. Now researchers are trying to pin down the extent of the eruptions and which environmental effects in particular were most responsible for the mass die-off, particularly for land animals and plants.

Shen said the Permian extinction could shed light on how global warming today might lead to the next mass extinction. If global warming indeed was responsible for the Permian die-off, what does warming portend for humans and wildlife today?

"The release of carbon into the atmosphere by human beings is similar to the situation in the Late Permian, where abundant carbon was released by the Siberian eruptions," Shen said.

Algeo said it is cause for concern.

"A majority of biologists believe we're at the cusp of another mass extinction -- the sixth big one. I share that view, too," Algeo said. "What we should learn is this will be serious business that will harm human interests so we should work to minimize the damage."

People living in marginal environments such as arid deserts will suffer first. This will lead to more climate refugees around the world.

Read more at Science Daily

Apr 15, 2019

Abundance of information narrows our collective attention span

The negative effects of social media and a hectic news cycle on our attention span have been an ongoing discussion in recent years -- but there has been a lack of empirical data supporting claims of a 'social acceleration'. A new study in Nature Communications finds that our collective attention span is indeed narrowing, and that this effect occurs not only on social media but also across diverse domains including books, web searches, movie popularity, and more.

Our public discussion can appear to be increasingly fragmented and accelerated. Sociologists, psychologists, and teachers have warned of an emerging crisis stemming from a 'fear of missing out', the pressure to keep up to date on social media, and breaking news coming at us 24/7. So far, the evidence to support these claims has been largely anecdotal or only hinted at; a strong empirical foundation has been lacking.

In a new study, a team of European scientists from Technische Universität Berlin, the Max Planck Institute for Human Development, University College Cork, and DTU presents empirical evidence for one dimension of social acceleration, namely the increasing rate of change in collective attention.

"It seems that the allocated attention in our collective minds has a certain size, but that the cultural items competing for that attention have become more densely packed. This would support the claim that it has indeed become more difficult to keep up to date on the news cycle, for example." says Professor Sune Lehmann from DTU Compute.

The scientists have studied Twitter data from 2013 to 2016, books from Google Books going back 100 years, movie ticket sales going back 40 years, and citations of scientific publications from the last 25 years. In addition, they have gathered data from Google Trends (2010-2018), Reddit (2010-2015), and Wikipedia (2012-2017).

Rapid exhaustion of attention resources

Against this background, they find empirical evidence of ever-steeper gradients and shorter bursts of collective attention given to each cultural item. The paper uses a model of this attention economy to suggest that the accelerating vicissitudes of popular content are driven by increasing production and consumption of content, and are therefore not intrinsic to social media. The result is a more rapid exhaustion of limited attention resources.

When looking into the global daily top 50 hashtags on Twitter, the scientists found that peaks became increasingly steep and frequent: in 2013, a hashtag stayed in the top 50 for an average of 17.5 hours; by 2016, this had gradually decreased to 11.9 hours.

This trend is mirrored across other domains, online and offline, and across different time periods: for instance, in the occurrence of the same five-word phrases (n-grams) in Google Books over the past 100 years and in the success of top box-office movies. The same goes for Google searches and the number of Reddit comments on individual submissions. Wikipedia and scientific publications, however, did not show this trend; though the exact reason is unclear, the authors suggest it may be because they are knowledge communication systems.

"We wanted to understand which mechanisms could drive this behavior. Picturing topics as species that feed on human attention, we designed a mathematical model with three basic ingredients: 'hotness', aging and the thirst for something new." says Dr. Philipp Hövel, lecturer for applied mathematics, University College Cork.

This model offers an interpretation of their observations. When more content is produced in less time, it exhausts the collective attention earlier. The shortened peak of public interest for one topic is directly followed by the next topic, because of the fierce competition for novelty.

"The one parameter in the model that was key in replicating the empirical findings was the input rate -- the abundance of information. The world has become increasingly well connected in the past decades. This means that content is increasing in volume, which exhausts our attention and our urge for 'newness' causes us to collectively switch between topics more rapidly." says postdoc Philipp Lorenz-Spreen, Max Planck Institute for Human Development.

Read more at Science Daily

The history of humanity in your face

These are skulls of hominins over the last 4.4 million years.
The face you see in the mirror is the result of millions of years of evolution and reflects the most distinctive features that we use to identify and recognize each other, molded by our need to eat, breathe, see, and communicate.

But how did the modern human face evolve to look the way it does? Eight of the top experts on the evolution of the human face, including Arizona State University's William Kimbel, collaborated on an article published this week in the journal Nature Ecology & Evolution to tell this four-million-year story. Kimbel is the director of the Institute of Human Origins and Virginia M. Ullman Professor of Natural History and the Environment in the School of Human Evolution and Social Change.

After our ancestors stood on two legs and began to walk upright, at least 4.5 million years ago, the skeletal framework of a bipedal creature was pretty well formed. Limbs and digits became longer or shorter, but the functional architecture of bipedal locomotion had developed.

But the skull and teeth provide a rich library of changes that we can track over time, describing the history of evolution of our species. Prime factors in the changing structure of the face include a growing brain and adaptations to respiratory and energy demands, but most importantly, changes in the jaw, teeth, and face responded to shifts in diet and feeding behavior. We are, or we evolved to be, what we eat -- literally!

Diet has played a large role in explaining evolutionary changes in facial shape. The earliest human ancestors ate tough plant foods that required large jaw muscles and cheek teeth to break down, and their faces were correspondingly broad and deep, with massive muscle attachment areas.

As the environment changed to drier, less wooded conditions, especially in the last two million years, early Homo species began to routinely use tools to break down foods or cut meat. The jaws and teeth changed to meet a less demanding food source, and the face became more delicate, with a flatter countenance.

Changes in the human face may not be due only to purely mechanical factors. The human face, after all, plays an important role in social interaction, emotion, and communication. Some of these changes may be driven, in part, by social context. Our ancestors were challenged by the environment and increasingly impacted by culture and social factors. Over time, the ability to form diverse facial expressions likely enhanced nonverbal communication.

Large, protruding brow ridges are typical of some extinct species of our own genus, Homo, like Homo erectus and the Neanderthals. What function did these structures play in adaptive changes in the face? The African great apes also have strong brow ridges, which researchers suggest help to communicate dominance or aggression. It is probably safe to conclude that similar social functions influenced the facial form of our ancestors and extinct relatives. Along with large, sharp canine teeth, large brow ridges were lost along the evolutionary road to our own species, perhaps as we evolved to become less aggressive and more cooperative in social contexts.

Read more at Science Daily

Meteoroid strikes eject precious water from moon

Artist's concept of the LADEE spacecraft (left) detecting water vapor from meteoroid impacts on the Moon (right).
Researchers from NASA and the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, report that streams of meteoroids striking the Moon infuse the thin lunar atmosphere with a short-lived water vapor.

The findings will help scientists understand the history of lunar water -- a potential resource for sustaining long term operations on the Moon and human exploration of deep space. Models had predicted that meteoroid impacts could release water from the Moon as a vapor, but scientists hadn't yet observed the phenomenon.

Now, the team has found dozens of these events in data collected by NASA's Lunar Atmosphere and Dust Environment Explorer. LADEE was a robotic mission that orbited the Moon to gather detailed information about the structure and composition of the thin lunar atmosphere, and determine whether dust is lofted into the lunar sky.

"We traced most of these events to known meteoroid streams, but the really surprising part is that we also found evidence of four meteoroid streams that were previously undiscovered," said Mehdi Benna of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland Baltimore County. Benna is the lead author of the study, published in Nature Geosciences.

The newly identified meteoroid streams, observed by LADEE, occurred on January 9, April 2, April 5 and April 9, 2014.

There's evidence that the Moon has water (H2O) and hydroxyl (OH), a more reactive relative of H2O. But debates continue about the origins of the water, whether it is widely distributed and how much might be present.

"The Moon doesn't have significant amounts of H2O or OH in its atmosphere most of the time," said Richard Elphic, the LADEE project scientist at NASA's Ames Research Center in California's Silicon Valley. "But when the Moon passed through one of these meteoroid streams, enough vapor was ejected for us to detect it. And then, when the event was over, the H2O or OH went away."

Lunar scientists often use the term "water" to refer to both H2O and OH. Figuring out how much H2O and how much OH are present is something future Moon missions might address.

LADEE, which was built and managed by NASA's Ames Research Center, detected the vapor using its Neutral Mass Spectrometer, an instrument built by Goddard. The mission orbited the Moon from October 2013 to April 2014, sampling the lunar atmosphere -- or, more correctly, the "exosphere," a faint envelope of gases around the Moon.

To release water, the meteoroids had to penetrate at least 3 inches (8 centimeters) below the surface. Underneath this bone-dry top layer lies a thin transition layer, then a hydrated layer, where water molecules likely stick to bits of soil and rock, called regolith.

From the measurements of water in the exosphere, the researchers calculated that the hydrated layer has a water concentration of about 200 to 500 parts per million, or about 0.02 to 0.05 percent by weight. This concentration is much drier than the driest terrestrial soil, and is consistent with earlier studies. It is so dry that one would need to process more than a metric ton of regolith in order to collect 16 ounces of water.
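As a rough sanity check on that last figure, the arithmetic is simple enough to show directly. The sketch below is a back-of-the-envelope calculation, not part of the study; it takes "16 ounces" as roughly 0.45 kg of water and uses the 200 to 500 parts-per-million range quoted above.

```python
# Back-of-the-envelope check: how much regolith would be needed to collect
# roughly 16 ounces (~0.45 kg) of water at the reported concentrations?
# Illustrative arithmetic only -- not a calculation from the paper.

WATER_TARGET_KG = 0.45          # ~16 ounces of water

for ppm in (200, 500):
    mass_fraction = ppm / 1_000_000        # parts per million by weight
    regolith_kg = WATER_TARGET_KG / mass_fraction
    print(f"{ppm} ppm -> {regolith_kg / 1000:.2f} metric tons of regolith")

# Roughly 2.25 t at 200 ppm and 0.90 t at 500 ppm, consistent with the
# "more than a metric ton" figure toward the drier end of the range.
```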

Because the material on the lunar surface is fluffy, even a meteoroid that's a fraction of an inch (5 millimeters) across can penetrate far enough to release a puff of vapor. With each impact, a small shock wave fans out and ejects water from the surrounding area.

When a stream of meteoroids rains down on the lunar surface, the liberated water will enter the exosphere and spread through it. About two-thirds of that vapor escapes into space, but about one-third lands back on the surface of the Moon.

These findings could help explain the deposits of ice in cold traps in the dark reaches of craters near the poles. Most of the known water on the Moon is located in cold traps, where temperatures are so low that water vapor and other volatiles that encounter the surface will remain stable for a very long time, perhaps up to several billion years. Meteoroid strikes can transport water both into and out of cold traps.

The team ruled out the possibility that all of the water detected came from the meteoroids themselves.

"We know that some of the water must be coming from the Moon, because the mass of water being released is greater than the water mass within the meteoroids coming in," said the second author of the paper, Dana Hurley of the Johns Hopkins University Applied Physics Laboratory.

The analysis indicates that meteoroid impacts release water faster than it can be produced from reactions that occur when the solar wind hits the lunar surface.

"The water being lost is likely ancient, either dating back to the formation of the Moon or deposited early in its history," said Benna.

Read more at Science Daily

Scientists print first 3D heart using patient's biological materials

A 3D-printed, small-scaled human heart engineered from the patient's own materials and cells.
In a major medical breakthrough, Tel Aviv University researchers have "printed" the world's first 3D vascularised engineered heart using a patient's own cells and biological materials. Their findings were published on April 15 in a study in Advanced Science.

Until now, scientists in regenerative medicine -- a field positioned at the crossroads of biology and technology -- have been successful in printing only simple tissues without blood vessels.

"This is the first time anyone anywhere has successfully engineered and printed an entire heart replete with cells, blood vessels, ventricles and chambers," says Prof. Tal Dvir of TAU's School of Molecular Cell Biology and Biotechnology, Department of Materials Science and Engineering, Center for Nanoscience and Nanotechnology and Sagol Center for Regenerative Biotechnology, who led the research for the study.

Heart disease is the leading cause of death among both men and women in the United States. Heart transplantation is currently the only treatment available to patients with end-stage heart failure. Given the dire shortage of heart donors, the need to develop new approaches to regenerate the diseased heart is urgent.

"This heart is made from human cells and patient-specific biological materials. In our process these materials serve as the bioinks, substances made of sugars and proteins that can be used for 3D printing of complex tissue models," Prof. Dvir says. "People have managed to 3D-print the structure of a heart in the past, but not with cells or with blood vessels. Our results demonstrate the potential of our approach for engineering personalized tissue and organ replacement in the future."

Research for the study was conducted jointly by Prof. Dvir, Dr. Assaf Shapira of TAU's Faculty of Life Sciences and Nadav Moor, a doctoral student in Prof. Dvir's lab.

"At this stage, our 3D heart is small, the size of a rabbit's heart," explains Prof. Dvir. "But larger human hearts require the same technology."

For the research, a biopsy of fatty tissue was taken from patients. The cellular and acellular materials of the tissue were then separated. While the cells were reprogrammed to become pluripotent stem cells, the extracellular matrix (ECM), a three-dimensional network of extracellular macromolecules such as collagen and glycoproteins, was processed into a personalized hydrogel that served as the printing "ink."

After being mixed with the hydrogel, the cells were efficiently differentiated to cardiac or endothelial cells to create patient-specific, immune-compatible cardiac patches with blood vessels and, subsequently, an entire heart.

According to Prof. Dvir, the use of "native" patient-specific materials is crucial to successfully engineering tissues and organs.

"The biocompatibility of engineered materials is crucial to eliminating the risk of implant rejection, which jeopardizes the success of such treatments," Prof. Dvir says. "Ideally, the biomaterial should possess the same biochemical, mechanical and topographical properties of the patient's own tissues. Here, we can report a simple approach to 3D-printed thick, vascularized and perfusable cardiac tissues that completely match the immunological, cellular, biochemical and anatomical properties of the patient."

The researchers are now planning on culturing the printed hearts in the lab and "teaching them to behave" like hearts, Prof. Dvir says. They then plan to transplant the 3D-printed heart in animal models.

"We need to develop the printed heart further," he concludes. "The cells need to form a pumping ability; they can currently contract, but we need them to work together. Our hope is that we will succeed and prove our method's efficacy and usefulness.

Read more at Science Daily

Apr 14, 2019

Interplay of pollinators and pests influences plant evolution

Brassica rapa pollinated by bumblebees has more attractive flowers.
Brassica rapa plants pollinated by bumblebees evolve more attractive flowers. But this evolution is compromised if caterpillars attack the plant at the same time. With the bees pollinating them less effectively, the plants increasingly self-pollinate. In a greenhouse evolution experiment, scientists at the University of Zurich have shown just how much the effects of pollinators and pests influence each other.

In nature, plants interact with a whole range of organisms, which drive the evolution of their specific characteristics. While pollinators influence floral traits and reproduction, herbivorous insects drive the evolution of the plant's defense mechanisms. Now botanists at the University of Zurich have investigated the way these different interactions influence each other, and how rapidly plants adapt when the combination of selective agents with which they interact changes.

Experimental evolution in real time

In a two-year greenhouse experiment, Florian Schiestl, professor at UZH's Department of Systematic and Evolutionary Botany, and doctoral candidate Sergio Ramos have demonstrated a powerful interplay between the effects of pollinating insects and those of herbivores. For their experiment they used Brassica rapa, a plant closely related to oilseed rape, interacting with bumblebees and caterpillars as selective agents. Over six generations they subjected four groups of plants to different treatments: with bee pollination only, bee pollination with herbivory (caterpillars), hand pollination without herbivory, and hand pollination with herbivory.

Balance between attraction and defense

After this experimental evolution study, the plants pollinated by bumblebees without herbivory were most attractive to the pollinators: they evolved more fragrant flowers, which tended to be larger. "These plants had adapted to the bees' preferences during the experiment," explains Sergio Ramos. By contrast, bee-pollinated plants with herbivory were less attractive, with higher concentrations of defensive toxic metabolites and less fragrant flowers that tended to be smaller. "The caterpillars compromise the evolution of attractive flowers, as plants assign more resources to defense," says Ramos.

Combined impact on reproduction

The powerful interplay between the effects of bees and caterpillars was also evident in the plants' reproductive characteristics: In the course of their evolution, for example, the bee-pollinated plants developed a tendency to spontaneously self-pollinate when they were simultaneously damaged by caterpillars. Plants attacked by caterpillars developed less attractive flowers, which affected the behavior of the bees so that they pollinated these flowers less well.

Read more at Science Daily

Conservationists discover hidden diversity in ancient frog family

Metamorph Sooglossus sechellensis balanced on a 10 pence coin.
Research scientists led by the University of Kent have uncovered hidden diversity within a type of frog found only in the Seychelles, showing that those on each island have their own distinct lineage.

The family tree of sooglossid frogs dates back at least 63 million years. They are living descendants of frogs that survived the meteor strike on Earth approximately 66 million years ago, and their most recent common ancestor dates back some 63 million years, making them a highly evolutionarily distinct group.

However, recent genetic work led by Dr Jim Labisko from Kent's School of Anthropology and Conservation revealed that, until further investigations can resolve their evolutionary relationships and verify the degree of differentiation between the island populations, each island lineage needs to be considered a potential new species, known as an Evolutionarily Significant Unit (ESU). Dr Labisko therefore advises conservation managers to treat each island lineage as an ESU.

There are just four species of sooglossid frog; the Seychelles frog (Sooglossus sechellensis), Thomasset's rock frog (So. thomasseti), Gardiner's Seychelles frog (Sechellophryne gardineri) and the Seychelles palm frog (Se. pipilodryas).

Of the currently recognised sooglossid species, two (So. thomasseti and Se. pipilodryas) have been assessed as Critically Endangered and two (So. sechellensis and Se. gardineri) as Endangered on the International Union for Conservation of Nature (IUCN) Red List. All four species are in the top 50 of ZSL's (Zoological Society of London) Evolutionarily Distinct and Globally Endangered (EDGE) amphibians.

Given the Red List and EDGE status of these unique frogs, Dr Labisko and his colleagues are carrying out intensive monitoring to assess the level of risk that both climate change and disease pose to the endemic amphibians of the Seychelles.

Dr Labisko, who completed his PhD on sooglossid frogs at Kent's Durrell Institute of Conservation and Ecology in 2016, said many of these frogs are so small and so good at hiding that the only way to observe them is by listening for their calls. Although the frogs are tiny, the sound they emit can reach around 100 decibels, equivalent to the volume of a power lawnmower.

Dr Labisko's team are using sound monitors to record the vocal activity of sooglossid frogs for five minutes every hour, every day of the year, in combination with dataloggers that sample temperature and moisture conditions on an hourly basis.
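To give a sense of the scale of that sampling schedule, a quick back-of-the-envelope calculation is shown below. It is illustrative only: the storage estimate assumes uncompressed 16-bit, 44.1 kHz mono audio, which is an assumption made for the example and not a detail reported by the team.

```python
# Rough arithmetic for the acoustic monitoring schedule described above:
# five minutes of recording every hour, every day of the year, per monitor.

MINUTES_PER_RECORDING = 5
RECORDINGS_PER_DAY = 24          # one five-minute clip per hour
DAYS_PER_YEAR = 365

minutes_per_year = MINUTES_PER_RECORDING * RECORDINGS_PER_DAY * DAYS_PER_YEAR
hours_per_year = minutes_per_year / 60

# Storage estimate under an assumed format: uncompressed 16-bit, 44.1 kHz mono.
bytes_per_second = 44_100 * 2
gigabytes_per_year = minutes_per_year * 60 * bytes_per_second / 1e9

print(f"audio captured per monitor: {hours_per_year:.0f} hours/year")
print(f"approximate raw storage:    {gigabytes_per_year:.0f} GB/year")
```

Under those assumptions, each monitor captures about 730 hours of audio a year, which is why automated logging rather than manual surveying is the practical way to track these elusive frogs.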

Dr Labisko said: 'Amphibians play a vital role in the ecosystem as predators, munching on invertebrates like mites and mosquitos, so they contribute to keeping diseases like malaria and dengue in check. Losing them will have serious implications for human health.'

As a result of this study into the frogs, the research team will also contribute to regional investigations into climate change, making a local impact in the Seychelles.

Amphibians around the world are threatened by a lethal fungus known as chytrid. The monitoring of these sooglossid frogs will provide crucial data on amphibian behaviour in relation to climate and disease. If frogs are suddenly not heard in an area where they were previously, this could indicate a range-shift in response to warming temperatures, or the arrival of disease such as chytrid -- the Seychelles is one of only two global regions of amphibian diversity where the disease is yet to be detected.

The arrival of such a disease may also affect a variety of other endemic Seychelles flora and fauna, including the caecilians, legless burrowing amphibians that are even more difficult to study than the elusive sooglossids.

Read more at Science Daily