Jul 22, 2017

The moon is front and center during a total solar eclipse

In the lead-up to a total solar eclipse, most of the attention is on the sun, but Earth's moon also has a starring role.

"A total eclipse is a dance with three partners: the moon, the sun and Earth," said Richard Vondrak, a lunar scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "It can only happen when there is an exquisite alignment of the moon and the sun in our sky."

During this type of eclipse, the moon completely hides the face of the sun for a few minutes, offering a rare opportunity to glimpse the pearly white halo of the solar corona, or faint outer atmosphere. This requires nearly perfect alignment of the moon and the sun, and the apparent size of the moon in the sky must match the apparent size of the sun.

On average, a total solar eclipse occurs about every 18 months somewhere on Earth, although at any particular location, it happens much less often.

The total eclipse on Aug. 21, 2017, will be visible within a 70-mile-wide path that will cross 14 states in the continental U.S. from Oregon to South Carolina. Along this path of totality, the umbra, or dark inner shadow, of the moon will travel at speeds ranging from almost 3,000 miles per hour in western Oregon to about 1,500 miles per hour in South Carolina.

In eclipse maps, the umbra is often depicted as a dark circle or oval racing across the landscape. But a detailed visualization created for this year's eclipse reveals that the shape is more like an irregular polygon with slightly curved edges, and it changes as the shadow moves along the path of totality.

"With this new visualization, we can represent the umbral shadow with more accuracy by accounting for the influence of elevation at different points on Earth, as well as the way light rays stream through lunar valleys along the moon's ragged edge," said NASA visualizer Ernie Wright at Goddard.

This unprecedented level of detail was achieved by coupling 3-D mapping of the moon's surface, done by NASA's Lunar Reconnaissance Orbiter, or LRO, with Earth elevation information from several datasets.

LRO's mapping of the lunar terrain also makes it possible to predict very accurately when and where the brilliant flashes of light called Baily's Beads or the diamond-ring effect will occur. These intense spots appear along the edge of the darkened disk just before totality, and again just afterward, produced by sunlight peeking through valleys along the uneven rim of the moon.

In the very distant future, the spectacular shows put on by total solar eclipses will cease. That's because the moon is, on average, slowly receding from Earth at a rate of about 1-1/2 inches, or 4 centimeters, per year. Once the moon moves far enough away, its apparent size in the sky will be too small to cover the sun completely.
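
For a rough sense of that timescale, the Python sketch below (not from the article; it uses approximate textbook orbital figures, which are assumptions here) estimates how long a recession of about 4 centimeters per year would take before even a perigee moon could no longer cover an aphelion sun, the condition under which total eclipses become impossible.

```python
import math

# Back-of-the-envelope estimate (approximate textbook values, not from the article).
MOON_RADIUS_KM = 1737.4
SUN_RADIUS_KM = 695_700.0
MOON_PERIGEE_KM = 356_500.0       # closest approach, where the moon appears largest
SUN_APHELION_KM = 152.1e6         # Earth's farthest point from the sun, where the sun appears smallest
RECESSION_KM_PER_YEAR = 4.0e-5    # ~4 cm per year expressed in kilometers

def angular_diameter_deg(radius_km, distance_km):
    """Full angular diameter, in degrees, of a sphere seen from a given distance."""
    return 2.0 * math.degrees(math.asin(radius_km / distance_km))

moon_max = angular_diameter_deg(MOON_RADIUS_KM, MOON_PERIGEE_KM)   # largest possible moon
sun_min = angular_diameter_deg(SUN_RADIUS_KM, SUN_APHELION_KM)     # smallest possible sun

# Perigee distance at which even the largest moon just fails to cover the smallest sun.
critical_perigee_km = MOON_RADIUS_KM / math.sin(math.radians(sun_min) / 2.0)
years_left = (critical_perigee_km - MOON_PERIGEE_KM) / RECESSION_KM_PER_YEAR

print(f"Moon at perigee: {moon_max:.3f} deg, sun at aphelion: {sun_min:.3f} deg")
print(f"Rough time until total eclipses end: {years_left / 1e6:.0f} million years")
```

With these rough inputs the answer comes out near 600 million years, consistent with commonly quoted estimates.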

Read more at Science Daily

North American monsoon storms fewer but more extreme

Monsoon season now brings more extreme wind and rain to central and southwestern Arizona than in the past, according to new research led by the University of Arizona.

Although there are now fewer storms, the largest monsoon thunderstorms bring heavier rain and stronger winds than did the monsoon storms of 60 years ago, the scientists report.

"The monsoon is the main severe weather threat in Arizona. Dust storms, wind, flash flooding, microbursts -- those are the things that are immediate dangers to life and property," said co-author Christopher Castro, a UA associate professor of hydrology and atmospheric sciences.

The researchers compared Arizona precipitation records from 1950-1970 with those from 1991-2010, and also used those records to verify that their climate model generated realistic results.

"This is one of the first studies to look at long-term changes in monsoon precipitation," Castro said. "We documented that the increases in extreme precipitation are geographically focused south and west of the Mogollon Rim -- and that includes Phoenix."

The region of Arizona with more extreme storms includes Bullhead City, Kingman, the Phoenix metropolitan area, the Colorado River valley and Arizona's low deserts, including the towns of Casa Grande, Gila Bend, Ajo, Lukeville and Yuma.

The Tohono O'odham Reservation, Luke Air Force Base, the Barry Goldwater Air Force Range and the Yuma Proving Ground are also in the region with more extreme monsoon weather.

Tucson is just outside of the zone with more extreme storms.

Having less frequent but more intense storms is consistent with what is expected throughout the world due to climate change, Castro said.

"Our work shows that it certainly holds true for the monsoon in Arizona," he said.

When the researchers compared the results from climate and weather models to the actual observations, the model with a resolution of less than 1.5 miles (2.5 km) accurately reproduced the precipitation data. The models with resolutions of 10 miles or more did not.

"You just can't trust coarser simulations to represent changes in severe weather. You have to use the high-resolution model," Castro said.

First author Thang M. Luong conducted the research as part of his doctoral work at the UA. He is now a postdoctoral researcher at King Abdullah University of Science and Technology, Thuwal, Saudi Arabia.

The paper, "The More Extreme Nature of North American Monsoon Precipitation in the Southwestern U.S. as Revealed by a Historical Climatology of Simulated Severe Weather Events," by Luong, Castro, Hsin-I Chang and Timothy Lahmers of the UA Department of Hydrology and Atmospheric Sciences and David K. Adams and Carlos A. Ochoa-Moya of the Universidad Nacional Autónoma de México, México D.F. was published July 3 in the early online edition of the Journal of Applied Meteorology and Climatology.

The U.S. Department of Defense Strategic Environmental Research and Development Program and the Universidad Nacional Autónoma de México PAPIIT funded the research.

The researchers wanted to identify risks from warm-season extreme weather, especially those to Department of Defense installations in the American Southwest.

Existing global and regional climate change models don't represent the North American monsoon well in either seasonal forecasts or climate projections, the research team wrote.

Looking at the average precipitation over the entire monsoon season doesn't show whether monsoon storms are becoming more severe now compared with 60 years ago, Castro said.

Therefore Luong, Castro and their colleagues compared extreme rainfall events during 1950-1970 with those during 1991-2010. Average precipitation was about the same in both periods, but 1991-2010 had more storms with very heavy rain.

"What's going on in the changes to the extremes is very different from what goes on in the changes to the mean," Castro said. "Big storms, heavy flooding -- we found out those types of extreme precipitation events are becoming more intense and are becoming more intense downwind of the mountain ranges."

The team tested a common computer model of the atmosphere to try to replicate the historical changes in monsoon storm intensity. The model, similar to one used by the National Weather Service for forecasts, produces results similar to what would be observed on radar or satellite imagery by realistically simulating the physical structure of monsoon thunderstorms.

A key innovation of the UA research was the level of detail -- the team tested several different levels of resolution. Only by using the high resolution of 1.5 miles (2.5 km) could the model replicate the actual rainfall recorded for the two 20-year periods being compared.

The recorded data showed only rainfall. The high-resolution models indicated rainier monsoon storms were accompanied by higher winds and more downbursts.

"Because the models get the precipitation right, it gives us confidence that the models get the winds right too," Castro said.

He said that in Phoenix, monsoon storms used to be late in the evening but are now happening earlier.

Read more at Science Daily

Jul 21, 2017

In saliva, clues to a 'ghost' species of ancient human

In saliva, scientists have found hints that a "ghost" species of archaic humans may have contributed genetic material to ancestors of people living in Sub-Saharan Africa today.

The research adds to a growing body of evidence suggesting that sexual rendezvous between different archaic human species may not have been unusual.

Past studies have concluded that the forebears of modern humans in Asia and Europe interbred with other early hominin species, including Neanderthals and Denisovans. The new research is among more recent genetic analyses indicating that ancient Africans also had trysts with other early hominins.

"It seems that interbreeding between different early hominin species is not the exception -- it's the norm," says Omer Gokcumen, PhD, an assistant professor of biological sciences in the University at Buffalo College of Arts and Sciences.

"Our research traced the evolution of an important mucin protein called MUC7 that is found in saliva," he says. "When we looked at the history of the gene that codes for the protein, we see the signature of archaic admixture in modern day Sub-Saharan African populations."

The research was published on July 21 in the journal Molecular Biology and Evolution. The study was led by Gokcumen and Stefan Ruhl, DDS, PhD, a professor of oral biology in UB's School of Dental Medicine.

A tantalizing clue in saliva


The scientists came upon their findings while researching the purpose and origins of the MUC7 protein, which helps give spit its slimy consistency and binds to microbes, potentially helping to rid the body of disease-causing bacteria.

As part of this investigation, the team examined the MUC7 gene in more than 2,500 modern human genomes. The analysis yielded a surprise: A group of genomes from Sub-Saharan Africa had a version of the gene that was wildly different from versions found in other modern humans.

The Sub-Saharan variant was so distinctive that Neanderthal and Denisovan MUC7 genes matched more closely with those of other modern humans than the Sub-Saharan outlier did.

"Based on our analysis, the most plausible explanation for this extreme variation is archaic introgression -- the introduction of genetic material from a 'ghost' species of ancient hominins," Gokcumen says. "This unknown human relative could be a species that has been discovered, such as a subspecies of Homo erectus, or an undiscovered hominin. We call it a 'ghost' species because we don't have the fossils."

Given the rate at which genes mutate during the course of evolution, the team calculated that the ancestors of people who carry the Sub-Saharan MUC7 variant interbred with another ancient human species as recently as 150,000 years ago, after the two species' evolutionary paths diverged from each other some 1.5 to 2 million years ago.
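
The article does not spell out the calculation, but estimates like these typically rest on simple molecular-clock arithmetic: divide the per-site divergence between two sequences by twice the mutation rate. The sketch below illustrates the logic with placeholder numbers; the rate, sequence length, and difference count are assumptions, not values from the MUC7 paper.

```python
# Minimal molecular-clock sketch. The rate, sequence length and difference count
# are illustrative placeholders, not values from the MUC7 study.
MUTATION_RATE = 1.25e-8     # substitutions per site per generation (rough human figure)
GENERATION_TIME = 25        # years per generation (assumption)
SEQ_LENGTH = 10_000         # length of the compared region, in base pairs (hypothetical)
DIFFERENCES = 16            # observed differences between two haplotypes (hypothetical)

# After a split, both lineages accumulate mutations independently,
# so the observed divergence is roughly 2 * rate * time.
per_site_divergence = DIFFERENCES / SEQ_LENGTH
generations = per_site_divergence / (2 * MUTATION_RATE)
years = generations * GENERATION_TIME

print(f"Estimated divergence time: about {years:,.0f} years ago")
```

With these placeholders the estimate lands around 1.6 million years, the same order of magnitude as the divergence the team reports.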

Why MUC7 matters

The scientists were interested in MUC7 because in a previous study they showed that the protein likely evolved to serve an important purpose in humans.

In some people, the gene that codes for MUC7 holds six copies of genetic instructions that direct the body to build parts of the corresponding protein. In other people, the gene harbors only five sets of these instructions (known as tandem repeats).

Prior studies by other researchers found that the five-copy version of the gene protected against asthma, but Gokcumen and Ruhl did not see this association when they ran a more detailed analysis.

The new study did conclude, however, that MUC7 appears to influence the makeup of the oral microbiome, the collection of bacteria within the mouth. The evidence for this came from an analysis of biological samples from 130 people, which found that different versions of the MUC7 gene were strongly associated with different oral microbiome compositions.

Read more at Science Daily

Best measure of star-forming material in galaxy clusters in early universe

The Tadpole Galaxy is a disrupted spiral galaxy showing streams of gas stripped by gravitational interaction with another galaxy. Molecular gas is the required ingredient to form stars in galaxies in the early universe.
The international Spitzer Adaptation of the Red-sequence Cluster Survey (SpARCS) collaboration based at the University of California, Riverside has combined observations from several of the world's most powerful telescopes to carry out one of the largest studies yet of molecular gas -- the raw material which fuels star formation throughout the universe -- in three of the most distant clusters of galaxies ever found, detected as they appeared when the universe was only four billion years old.

Results were recently published in The Astrophysical Journal Letters. Allison Noble, a postdoctoral researcher at the Massachusetts Institute of Technology, led this newest research from the SpARCS collaboration.

Clusters are rare regions of the universe consisting of tight groups of hundreds of galaxies containing trillions of stars, as well as hot gas and mysterious dark matter. First, the research team used spectroscopic observations from the W. M. Keck Observatory on Mauna Kea, Hawai'i, and the Very Large Telescope in Chile that confirmed 11 galaxies were star-forming members of the three massive clusters. Next, the researchers took images through multiple filters from NASA's Hubble Space Telescope, which revealed a surprising diversity in the galaxies' appearance, with some galaxies having already formed large disks with spiral arms.

One of the telescopes the SpARCS scientists used is the extremely sensitive Atacama Large Millimeter Array (ALMA) telescope capable of directly detecting radio waves emitted from the molecular gas found in galaxies in the early universe. ALMA observations allowed the scientists to determine the amount of molecular gas in each galaxy, and provided the best measurement yet of how much fuel was available to form stars.

The researchers compared the properties of galaxies in these clusters with the properties of "field galaxies" (galaxies found in more typical environments with fewer close neighbors). To their surprise, they discovered that cluster galaxies had higher amounts of molecular gas relative to the amount of stars in the galaxy, compared to field galaxies. The finding puzzled the team because it has long been known that when a galaxy falls into a cluster, interactions with other cluster galaxies and hot gas accelerate the shut off of its star formation relative to that of a similar field galaxy (the process is known as environmental quenching).

"This is definitely an intriguing result," said Gillian Wilson, a professor of physics and astronomy at UC Riverside and the leader of the SpARCS collaboration. "If cluster galaxies have more fuel available to them, you might expect them to be forming more stars than field galaxies, and yet they are not."

Noble, a SpARCS collaborator and the study's leader, suggests several possible explanations: It is possible that something about being in the hot, harsh cluster environment surrounded by many neighboring galaxies perturbs the molecular gas in cluster galaxies such that a smaller fraction of that gas actively forms stars. Alternatively, it is possible that an environmental process, such as increased merging activity in cluster galaxies, results in the observed differences between the cluster and field galaxy populations.

"While the current study does not answer the question of which physical process is primarily responsible for causing the higher amounts of molecular gas, it provides the most accurate measurement yet of how much molecular gas exists in galaxies in clusters in the early universe," Wilson said.

Read more at Science Daily

A wolf's howl in miniature: Researchers discover mice speak similarly to humans

Some mice and rats employ a whistle-like mechanism.
Grasshopper mice (genus Onychomys), rodents known for their remarkably loud call, produce audible vocalizations in the same way that humans speak and wolves howl, according to new research published in Proceedings of the Royal Society B. Grasshopper mice employ both a traditional whistle-like mechanism used by other mice and rats and a unique airflow-induced tissue vibration like that of humans.

Researchers from Northern Arizona University, Midwestern University at Glendale and Ritsumeikan University in Japan used heliox experiments, laryngeal and vocal tract morphological investigations and biomechanical modelling to investigate how grasshopper mice produce spectacular long-distance calls.

"Our findings provide the first evidence of a mouse that produces sound like humans and sets the stage for studies on vocal injuries and aging," said lead author Bret Pasch, NAU assistant professor and Merriam-Powell Center affiliate. "Moreover, the research provides a baseline for a larger comparative analysis of vocalizations in rodents, which comprise more than 40 percent of mammalian diversity but whose many voices remain undiscovered."

Grasshopper mice are predatory rodents that inhabit deserts, grasslands and prairies of the western United States and northern Mexico. Like most mice, grasshopper mice produce ultrasonic vocalizations above the range of human hearing in close-distance social interactions through whistle-like mechanisms.

Unlike other mice, grasshopper mice also produce long-distance audible vocalizations, or advertisement vocalizations. Naturalist Vernon Bailey described the call of grasshopper mice as a "wolf's howl in miniature." Both male and female animals often assume an upright posture and open their mouths widely to generate a loud call that may carry more than 100 meters. Grasshopper mice have relatively large home ranges, so their calls serve as a mechanism to detect mates and competitors across large distances.

Imaging the voice box of grasshopper mice revealed a thin layer of connective tissue and a tiny structure called a vocal membrane previously only described in detail in echolocating bats. In addition, the mice possess a bell-shaped vocal tract, similar in shape to a loudspeaker, which increases vocal intensity, much as opera singers do with theirs.

From Science Daily

New Kingdom Egypt: The goldsmith’s tomb

A view of the ruins of the town of Sai. Founded by the Egyptians on the island of the same name in the Nile, in what is now Sudan, the town was occupied from 1500 until 1200 BC.
Julia Budka, an Egyptologist at Ludwig-Maximilians-Universitaet (LMU) in Munich, is studying the impact of intercultural contacts in Ancient Egypt. Her excavations in Sudan have uncovered a tomb dating to around 1450 BC on the island of Sai in the Nile.

A previously unknown tomb, some 3400 years old, has recently been uncovered on the island of Sai in the River Nile. It was in use for some time and contains the remains of up to 25 persons. Further analysis of the finds could elucidate the multicultural nature of the island's population during this period.

The island was then located in Nubia, which was the primary source of gold for the New Kingdom of the Egyptian Pharaohs at that time. The tomb was most probably built for a master goldsmith by the name of Khnummose, and was discovered during excavations conducted by Julia Budka, Professor of Egyptian Archaeology and Art. Investigation of the tomb's contents and inscriptions has so far revealed that, following the conquest by the Pharaoh Thutmose III of the local African kingdom of Kerma, the local elites were rapidly integrated by the new regime. The earliest Egyptian-style burials on Sai date to the reign of this king.

Over the past 5 years, Budka has carried out parallel studies on three different Egyptian settlements that were established during the period of the so-called New Kingdom between 1500 and 1200 BC. The excavations on the island of Sai, which lies in what is now the Sudanese section of the Nile, not only provide insights into the relationship between the official representatives of the occupying power and the local Nubian population, they also demonstrate that the island was inhabited for longer than hitherto assumed.

"It had been thought that the settlement on the island was abandoned after the foundation of a new town at Amara West. Our finds, on the other hand, prove that Hornakht, one of Egypt's highest ranking bureaucrats during the reign of Ramses II, not only had his official residence on the island, but was also buried there," says Budka. This clearly shows that the town on Sai survived until about 1200 BC.

From Science Daily

Astronomers Identify the Origin of Peculiar Signals From Nearby Red Dwarf Star

The Arecibo Observatory in Puerto Rico
Distant satellite interference, not alien communications, was the source of signals emanating from Ross 128, a dim star located 11 light-years from Earth.

Astronomers have finally solved the mystery of peculiar signals coming from a nearby star, a story that sparked intense public speculation this week that perhaps, finally, alien life had been found.

It hasn't. The signal, which has been formally named "Weird!", was interference from a distant satellite.

Of course, astronomers said all along that extraterrestrials were near the bottom of the list of possibilities for the signals detected from Ross 128, a dim star known as a red dwarf some 11 light-years away.

To experts, the true mystery was that they couldn't figure out if the bursts were unusual stellar activity, emissions from other background objects, or interference from satellite communications.

"However, many people were more interested in the signals as potential proof of transmissions from an extraterrestrial intelligent civilization," wrote Abel Mendez, director of the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo in a blog post Friday, revealing the true nature of the signals.

After further fueling speculation by summoning the world's experts in the hunt for life elsewhere in the universe — the Berkeley SETI Research Center at the University of California — the team issued its conclusion.

"We are now confident about the source of the Weird! Signal," Mendez wrote.

"The best explanation is that the signals are transmissions from one or more geostationary satellites."

The signals only appeared around Ross 128 because it is located "close to the celestial equator where many geostationary satellites are placed," Mendez added.

Study of people

He also released the results of an informal survey that he had posted on his website, asking people to weigh in on what they thought the source of the signals was, and whether or not they were scientists well versed in the matter.

"Nearly 800 people participated in this informal survey (including more than 60 astronomers)," he wrote.

The whole group's consensus was that the signals were most likely coming from some sort of stellar activity, or some kind of astronomical phenomenon.

Most people discounted the possibility of radio interference or instrumental failures, saying these were least likely. This, Mendez explained, was hardly a scientific approach to the question.

"This is interesting since in the absence of solid information about the signal, most astronomers would think that these were probably the most likely explanation," Mendez wrote.

Furthermore, about one quarter of respondents said "the most likely explanation of the signal was that of a communication with an Extraterrestrial Intelligence (ETI)," he added.

"These results reflect the still high expectations the public maintains on the possibility of contacting ETI."

Read more at Seeker

Jul 20, 2017

Our brains synchronize during a conversation

Scientists measured the movement of their brainwaves simultaneously and confirmed that their oscillations took place at the same time.
The rhythms of brainwaves between two people taking part in a conversation begin to match each other. This is the conclusion of a study published in the journal Scientific Reports, led by the Basque research centre BCBL. According to scientists, this interbrain synchrony may be a key factor in understanding language and interpersonal communication.

Something as simple as an everyday conversation causes the brains of the participants to begin to work simultaneously. This is the conclusion of a study carried out by the Basque Centre on Cognition, Brain, and Language (BCBL), recently published in the journal Scientific Reports.

Until now, most traditional research had suggested the hypothesis that the brain "synchronizes" according to what is heard, and correspondingly adjusts its rhythms to auditory stimuli.

Now, the experts from this Donostia-based research centre have gone a step further and simultaneously analysed the complex neuronal activity of two strangers who hold a dialogue for the first time.

The team, led by Alejandro Pérez, Manuel Carreiras and Jon Andoni Duñabeitia, has confirmed, by recording cerebral electrical activity, that the neuronal activity of two people involved in an act of communication "synchronizes" in order to allow for a "connection" between both subjects.

"It involves interbrain communion that goes beyond language itself and may constitute a key factor in interpersonal relations and the understanding of language," Jon Andoni Duñabeitia explains.

Thus, the rhythms of the brainwaves corresponding to the speaker and the listener adjust according to the physical properties of the sound of the verbal messages expressed in a conversation. This creates a connection between the two brains, which begin to work together towards a common goal: communication.

"The brains of the two people are brought together thanks to language, and communication creates links between people that go far beyond what we can perceive from the outside," added the researcher from the Basque research centre. "We can find out if two people are having a conversation solely by analysing their brain waves."

What is neural synchrony?


For the purposes of the study, the BCBL researchers used 15 dyads of people of the same sex, complete strangers to each other, separated by a folding screen. This ensured that the connection generated was truly thanks to the communication established.

Following a script, the dyads held a general conversation and took turns playing the roles of speaker and listener.

Through electroencephalography (EEG) -- a non-invasive procedure that analyses electrical activity in the brain -- the scientists measured the movement of their brainwaves simultaneously and confirmed that their oscillations took place at the same time.
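
The paper's exact analysis pipeline is not described here, but a common way to quantify this kind of between-signal synchrony is spectral coherence between two simultaneously recorded channels. The sketch below is a purely illustrative Python example on synthetic data, with a shared 10 Hz (alpha-band) component standing in for the coupled oscillations; the sampling rate and signal model are assumptions, not the study's.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 250                          # sampling rate in Hz (assumed; typical for EEG)
t = np.arange(0, 60, 1 / fs)      # one minute of "recording"

# Synthetic speaker and listener channels sharing a 10 Hz (alpha-band) component
# plus independent noise. Real EEG is far messier than this.
shared = np.sin(2 * np.pi * 10 * t)
speaker = shared + rng.normal(scale=1.0, size=t.size)
listener = 0.8 * shared + rng.normal(scale=1.0, size=t.size)

freqs, coh = coherence(speaker, listener, fs=fs, nperseg=2 * fs)
alpha = (freqs >= 8) & (freqs <= 12)
print(f"Mean coherence in 8-12 Hz band: {coh[alpha].mean():.2f}")
print(f"Mean coherence elsewhere:       {coh[~alpha].mean():.2f}")
```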

"To be able to know if two people are talking between themselves, and even what they are talking about, based solely on their brain activity is something truly marvellous. Now we can explore new applications, which are highly useful in special communicative contexts, such as the case of people who have difficulties with communication," Duñabeitia pointed out.

In the future, the understanding of this interaction between two brains would allow for the comprehension and analysis of very complex aspects of the fields of psychology, sociology, psychiatry, or education, using the neural images within an ecological or real-world context.

"Demonstrating the existence of neural synchrony between two people involved in a conversation has only been the first step," confirmed Alejandro Pérez. "There are many unanswered questions and challenges left to resolve."

Pérez further maintains that the practical potential of the study is enormous. "Problems with communication occur every day. We are planning to get the most out of this discovery of interbrain synchronization with the goal of improving communication," he concluded.

Read more at Science Daily

Not under the skin, but on it: Living together brings couples' microbiomes together

You share more than just your living space with your partner.
Couples who live together share many things: Bedrooms, bathrooms, food, and even bacteria. After analyzing skin microbiomes from cohabitating couples, microbial ecologists at the University of Waterloo, in Canada, found that people who live together significantly influence the microbial communities on each other's skin.

The commonalities were strong enough that computer algorithms could identify cohabitating couples with 86 percent accuracy based on skin microbiomes alone, the researchers report this week in mSystems, an open-access journal of the American Society for Microbiology.
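
The summary does not say which algorithm the team used, so the sketch below only illustrates the general idea: build features from pairs of skin-microbiome profiles and train a classifier to separate couple pairs from random pairs. The relative-abundance data and the shared "household" signal are synthetic assumptions made purely for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_taxa, n_couples = 50, 100

def sample_profile(base):
    """One noisy relative-abundance profile derived from a base community."""
    p = np.clip(base + rng.normal(scale=0.005, size=n_taxa), 0, None)
    return p / p.sum()

X, y = [], []
for _ in range(n_couples):
    household = rng.dirichlet(np.ones(n_taxa))   # shared signature for a couple (assumption)
    stranger = rng.dirichlet(np.ones(n_taxa))    # unrelated person's community
    a, b, c = sample_profile(household), sample_profile(household), sample_profile(stranger)
    X.append(np.abs(a - b)); y.append(1)         # true couple pair
    X.append(np.abs(a - c)); y.append(0)         # random non-couple pair

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print(f"Pair classification accuracy: {scores.mean():.2f}")
```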

However, the researchers also reported that cohabitation is likely less influential on a person's microbial profile than other factors like biological sex and what part of the body is being studied. In addition, the microbial profile from a person's body usually looks more like their own microbiome than like that of their significant other.

"You look like yourself more than you look like your partner," says Ashley Ross, who led the study while a graduate student in the lab of Josh Neufeld.

"Can we link couples back together? The answer is yes, but not a very loud yes," says senior author Neufeld, whose lab focuses on microbial communities and their interactions.

Neufeld and Ross, together with Andrew Doxey, analyzed 330 skin swabs collected from 17 sites on the participants, all of whom were heterosexual and lived in the Waterloo region. Participants self-collected samples with swabs, and sites included the upper eyelids, outer nostrils, inner nostrils, armpits, torso, back, navel, and palms of hands.

Neufeld says the study is the first to identify regions of skin with the most similar microbiomes between partners. They found the strongest similarities on partners' feet.

"In hindsight, it makes sense," says Neufeld. "You shower and walk on the same floor barefoot. This process likely serves as a form of microbial exchange with your partner, and also with your home itself." As a result, partners end up with the same mix.

The analyses revealed stronger correlations in some sites than in others. For example, microbial communities on the inner thigh were more similar among people of the same biological sex than between cohabiting partners. Computer algorithms could differentiate between men and women with 100 percent accuracy by analyzing inner thigh samples alone, suggesting that a person's biological sex can be determined based on that region, but not others.

The researchers also found that the microbial profiles of sites on a person's left side -- like hands, eyelids, armpits, or nostrils -- strongly resemble those on their right side. Of all the swab sites, the least microbial diversity was found on either side of the outer nose.

Ross says previous research had shown that skin microbial communities vary within an individual from region to region, but she wanted to know what other factors -- like cohabitation -- help shape the microbiome. In previous work, she and Neufeld analyzed samples collected from door handles at the University of Waterloo to determine whether buildings could be identified based on their door-handle microbiomes. In the future, she says she hopes to see similar analyses of same-sex couples, or couples of different ethnic backgrounds.

Read more at Science Daily

Use of cognitive abilities to care for grandkids may have driven evolution of menopause

A grandmother's cognitive abilities make a big difference to her grandchildren's future.
Instead of having more children, a grandmother may pass on her genes more successfully by using her cognitive abilities to directly or indirectly aid her existing children and grandchildren. Such an advantage could have driven the evolution of menopause in humans, according to new research published in PLOS Computational Biology.

Women go through menopause long before the end of their expected lifespan. Researchers have long hypothesized that menopause and long post-reproductive lifespan provide an evolutionary advantage; that is, they increase the chances of a woman passing on her genes. However, the precise nature of this advantage is still up for debate.

To investigate the evolutionary advantage of menopause, Carla Aimé and colleagues at the Institute of Evolutionary Sciences of Montpellier developed computer simulations of human populations using artificial neural networks. Then they tested which conditions were required for menopause to emerge in the simulated populations.

Specifically, the research team used the simulations to model the emergence and evolution of resource allocation decision-making in the context of reproduction. Menopause can be considered a resource allocation strategy in which reproduction is halted so that resources can be reallocated elsewhere.

The researchers found that emergence of menopause and long post-reproductive lifespan in the simulated populations required the existence of cognitive abilities in combination with caring for grandchildren. The importance of cognitive abilities rather than physical strength lends support to a previously proposed hypothesis for the evolution of menopause known as the Embodied Capital Model.
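
The study's actual simulations used artificial neural networks in evolving populations, which is well beyond a short example, but the trade-off it formalizes can be caricatured with a toy expected-gene-copies calculation. Every rate below is invented for illustration and is not taken from the paper.

```python
# Toy expected-gene-copies comparison; every number is invented for illustration
# and this is NOT the neural-network model used in the paper.
def expected_gene_copies(keep_reproducing: bool) -> float:
    grandchildren = 8
    survival_without_help = 0.50    # grandchild survival if grandmother keeps reproducing
    survival_with_help = 0.75       # survival if she invests her time and knowledge instead
    extra_births = 2 if keep_reproducing else 0
    late_birth_survival = 0.30      # survival odds of children born late in her life

    # Relatedness: 0.5 to her own children, 0.25 to her grandchildren.
    from_grandchildren = grandchildren * 0.25 * (
        survival_without_help if keep_reproducing else survival_with_help)
    from_new_children = extra_births * 0.5 * late_birth_survival
    return from_grandchildren + from_new_children

print(f"Keep reproducing:      {expected_gene_copies(True):.2f} expected surviving gene copies")
print(f"Stop and help instead: {expected_gene_copies(False):.2f} expected surviving gene copies")
```

With these made-up numbers, stopping reproduction and helping wins (1.50 versus 1.30 expected copies), which is the direction of the effect the simulations explore when cognitive abilities make a grandmother's help effective.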

Read more at Science Daily

Elephant seals recognize each other by the rhythm of their calls

This photo shows two northern elephant seal males scuffling on the beach in San Mateo, California.
Every day, humans pick up on idiosyncrasies such as slow drawls, high-pitched squeaks, or hints of accents to put names to voices from afar. This ability may not be as unique as once thought, researchers report on July 20 in Current Biology. They find that unlike all other non-human mammals, northern elephant seal males consider the spacing and timing of vocal pulses in addition to vocal tones when identifying the calls of their rivals.

"This is the first natural example where on a daily basis, an animal uses the memory and the perception of rhythm to recognize other members of the population," says first author Nicolas Mathevon, of the Université de Lyon/Saint-Etienne in France. "There have been experiments with other mammals showing that they can detect rhythm, but only with conditioning."

Over several years studying an elephant seal colony in Año Nuevo State Park, California, the researchers were able to recognize many of the individual animals just by the rhythm of their voices, he says. To test whether the elephant seals themselves made those distinctions in the same way, the researchers designed an experiment based on the social behavior of the colony's "beta males," who shy away upon hearing the call of a more powerful "alpha male" but ignore or confront other beta males and still-weaker "peripheral males."

Upon hearing computer-modified alpha male calls with a sped-up or slowed-down tempo or a shifted pitch range, the beta males fled the scene if the alteration was minute enough to be within the individual variation of a particular alpha male's roar but stayed put when confronted with more extreme changes. The divergent responses indicated that the seals were sensitive to both rhythmic and tonal characteristics when identifying potential rivals within the colony.
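
The article does not say how the modified calls were produced, but playback stimuli like these can be generated with standard phase-vocoder tools that change tempo and pitch independently. The sketch below uses librosa as one possible approach; the input file name is hypothetical.

```python
import librosa
import soundfile as sf

# Load a recorded alpha-male call (file name is hypothetical).
call, sr = librosa.load("alpha_male_call.wav", sr=None)

# A phase vocoder changes tempo without changing pitch, and vice versa,
# so the two cues can be manipulated independently for playback.
faster = librosa.effects.time_stretch(call, rate=1.10)            # 10% faster tempo
slower = librosa.effects.time_stretch(call, rate=0.90)            # 10% slower tempo
higher = librosa.effects.pitch_shift(call, sr=sr, n_steps=1.0)    # shifted up one semitone

for name, y in [("faster", faster), ("slower", slower), ("higher", higher)]:
    sf.write(f"call_{name}.wav", y, sr)
```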

"It is possible that maybe the ability to perceive rhythm is actually very general in animals," Mathevon says, "but it's extremely important for elephant seals, to the point of survival. Competing for females, the males fight very violently, even to the point of killing one another. So it's very important for them to accurately recognize the voices, to be able to choose the right strategy, to know to avoid a fight with a dominant male, or even to start a fight with an inferior one."

Read more at Science Daily

Explore Mars With NASA Images and Data Visualization of the Red Planet

Have you ever wished you could go to Mars without taking on the long-term commitment and risks associated with spaceflight? Now you can explore the surface of Mars without leaving the comfort of planet Earth, thanks to troves of imagery from NASA spacecraft and a cool data-visualization software called OpenSpace.

With OpenSpace, you can fly over Martian mountaintops and swoop through the deep canyons of Valles Marineris with the highest-resolution views from NASA's Mars Reconnaissance Orbiter (MRO), creating sort of a Google Earth for Mars. And that's just the beginning; the makers of OpenSpace said they aim to ultimately map the entire known universe with dynamic and interactive visualizations created from real scientific data.

Using data and images from the Context Camera (CTX) on MRO and the Mars rovers Spirit and Opportunity, researchers have already mapped 90 percent of the Red Planet's surface down to a resolution of about 20 feet (6 meters) per pixel. Incorporating high-resolution images from the spacecraft's HiRISE camera (High-Resolution Imaging Science Experiment), OpenSpace has allowed researchers to image parts of Mars down to a resolution of about 25 centimeters (10 inches) per pixel. That's 24 times sharper than before.

This view of the Ganges Chasma region on Mars shows mountainous features called light mounds. Westerly winds on Mars have created a wake around the structure on the right that is visible around the dunes at the mountain's base.
NASA and SpaceX have used HiRISE to look at potential landing sites for upcoming robotic missions like the Mars 2020 rover and Red Dragon sample-return mission, because the camera can resolve details in the terrain and determine whether it's safe for a rover to touch down and drive around there. When HiRISE isn't looking at landing sites, scientists use it to study other aspects of the Martian surface.

Since MRO arrived at the Red Planet 11 years ago, HiRISE has taken more than 4,500 stereo images of the Martian surface. The US Geological Survey has so far gotten around to processing only about 380 of these images to incorporate them into a map of the Martian terrain.

That's where OpenSpace comes in. In a partnership with the American Museum of Natural History in New York City and Linköping University in Sweden, researchers and student interns have been working to turn loads of data into stunning, interactive visualizations. "We have figured out the technique where we can do that ourselves with a massive photogrammetry tool kit called the NASA Ames Stereo Pipeline," Carter Emmart, the director of the astrovisualization program at the museum and creative lead of OpenSpace, told Space.com.

Carter Emmart (center), director of astrovisualization at the American Museum of Natural History, worked with Bergen County Academies students Brian Di Paolo, David Song, Vincent Mallet and Janette Levin (left to right) to develop stunning visualizations of Mars with authentic NASA data.
NASA had warned Emmart and his colleagues against working with these images "because they're extremely data intensive," he said, "but we've worked that out with our production systems staff here at the museum and together with the high school students, so we have a pipeline for cherry-picking essentially these interesting areas that have not been processed yet."

With high-resolution, 3D renditions of the Martian landscape, you can make out small surface features like sharp mountain peaks and rocks as small as footballs. You can even find NASA's Curiosity rover and its landing site, where some hardware was left behind, and what's left of the European Space Agency's Schiaparelli lander that crashed on Mars in October.

Since they started working in 2002, Emmart and his student interns have created "a unique visual system for looking at what we call our digital universe — data that essentially goes from the Earth to the macroscales of the universe."

Beyond Mars, the researchers are already planning to build these types of visualizations for Earth's moon, Pluto, Mercury, and Saturn's moons Titan and Enceladus, with the ultimate and ambitious goal of visualizing the entire universe. OpenSpace also visualizes space weather events, like solar flares and coronal mass ejections, which can affect Earth's satellites and other spacecraft throughout the solar system.

Eventually, OpenSpace visualizations will be available on YouTube in the form of 360-degree videos, Emmart said. For now, those who wish to embark on a journey to Mars with OpenSpace can do so at New York's Hayden Planetarium. On Aug. 1, Emmart will give his Mars presentation together with the MARSBAND, a group of musicians who will play live music to accompany the Martian tour.

Read more at Discovery News

Jul 19, 2017

High-energy trap in our galaxy's center, revealed by gamma-ray telescopes

An illustration of NASA's Fermi Gamma-ray Space Telescope orbiting Earth.
A combined analysis of data from NASA's Fermi Gamma-ray Space Telescope and the High Energy Stereoscopic System (H.E.S.S.), a ground-based observatory in Namibia, suggests the center of our Milky Way contains a "trap" that concentrates some of the highest-energy cosmic rays, among the fastest particles in the galaxy.

"Our results suggest that most of the cosmic rays populating the innermost region of our galaxy, and especially the most energetic ones, are produced in active regions beyond the galactic center and later slowed there through interactions with gas clouds," said lead author Daniele Gaggero at the University of Amsterdam. "Those interactions produce much of the gamma-ray emission observed by Fermi and H.E.S.S."

Cosmic rays are high-energy particles moving through space at almost the speed of light. About 90 percent are protons, with electrons and the nuclei of various atoms making up the rest. In their journey across the galaxy, these electrically charged particles are affected by magnetic fields, which alter their paths and make it impossible to know where they originated.

But astronomers can learn about these cosmic rays when they interact with matter and emit gamma rays, the highest-energy form of light.

In March 2016, scientists with the H.E.S.S. Collaboration reported gamma-ray evidence of the extreme activity in the galactic center. The team found a diffuse glow of gamma rays reaching nearly 50 trillion electron volts (TeV). That's some 50 times greater than the gamma-ray energies observed by Fermi's Large Area Telescope (LAT). To put these numbers in perspective, the energy of visible light ranges from about 2 to 3 electron volts.
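
To put those energy scales side by side, here is a quick arithmetic check using only the figures quoted above.

```python
# Side-by-side of the energy scales quoted above.
VISIBLE_LIGHT_EV = 2.5        # midpoint of the 2-3 eV range for visible light
HESS_GLOW_TEV = 50            # diffuse glow reported by H.E.S.S.
FERMI_FACTOR = 50             # H.E.S.S. energies are ~50x those seen by Fermi's LAT

hess_ev = HESS_GLOW_TEV * 1e12            # 1 TeV = 10^12 eV
fermi_ev = hess_ev / FERMI_FACTOR

print(f"H.E.S.S. glow:   {hess_ev:.1e} eV (~{hess_ev / VISIBLE_LIGHT_EV:.0e} times visible light)")
print(f"Fermi LAT scale: {fermi_ev:.1e} eV, i.e. about 1 TeV")
```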

The Fermi spacecraft detects gamma rays when they enter the LAT. On the ground, H.E.S.S. detects the emission when the atmosphere absorbs gamma rays, which triggers a cascade of particles resulting in a flash of blue light.

In a new analysis, published July 17 in the journal Physical Review Letters, an international team of scientists combined low-energy LAT data with high-energy H.E.S.S. observations. The result was a continuous gamma-ray spectrum describing the galactic center emission across a thousandfold span of energy.

"Once we subtracted bright point sources, we found good agreement between the LAT and H.E.S.S. data, which was somewhat surprising due to the different energy windows and observing techniques used," said co-author Marco Taoso at the Institute of Theoretical Physics in Madrid and Italy's National Institute of Nuclear Physics (INFN) in Turin.

This agreement indicates that the same population of cosmic rays -- mostly protons -- found throughout the rest of the galaxy is responsible for gamma rays observed from the galactic center. But the most energetic of these particles, those reaching 1,000 TeV, move through the region less efficiently than they do everywhere else in the galaxy. This results in a gamma-ray glow extending to the highest energies H.E.S.S. observed.

"The most energetic cosmic rays spend more time in the central part of the galaxy than previously thought, so they make a stronger impression in gamma rays," said co-author Alfredo Urbano at the European Organization for Nuclear Research (CERN) in Geneva and INFN Trieste.

This effect is not included in conventional models of how cosmic rays move through the galaxy. But the researchers show that simulations incorporating this change display even better agreement with Fermi data.

"The same breakneck particle collisions responsible for producing these gamma rays should also produce neutrinos, the fastest, lightest and least understood fundamental particles," said co-author Antonio Marinelli of INFN Pisa. Neutrinos travel straight to us from their sources because they barely interact with other matter and because they carry no electrical charge, so magnetic fields don't sway them.

Read more at Science Daily

Nanoparticles could spur better LEDs, invisibility cloaks

LED light bulb.
In an advance that could boost the efficiency of LED lighting by 50 percent and even pave the way for invisibility cloaking devices, a team of University of Michigan researchers has developed a new technique that peppers metallic nanoparticles into semiconductors.

It's the first technique that can inexpensively grow metal nanoparticles both on and below the surface of semiconductors. The process adds virtually no cost during manufacturing and its improved efficiency could allow manufacturers to use fewer semiconductors in finished products, making them less expensive.

The metal nanoparticles can increase the efficiency of LEDs in several ways. They can act as tiny antennas that alter and redirect the electricity running through the semiconductor, turning more of it into light. They can also help reflect light out of the device, preventing it from being trapped inside and wasted.

The process can be used with the gallium nitride that's used in LED lighting and can also boost efficiency in other semiconductor products, including solar cells. It's detailed in a study published in the Journal of Applied Physics.

"This is a seamless addition to the manufacturing process, and that's what makes it so exciting," said Rachel Goldman, U-M professor of materials science and engineering, and physics. "The ability to make 3-D structures with these nanoparticles throughout is going to open a lot of possibilities."

The key innovation


The idea of adding nanoparticles to increase LED efficiency is not new. But previous efforts to incorporate them have been impractical for large-scale manufacturing. They focused on pricey metals like silver, gold and platinum. In addition, the size and spacing of the particles must be very precise; this required additional and expensive manufacturing steps. Furthermore, there was no cost-effective way to incorporate particles below the surface.

Goldman's team discovered a simpler way that integrates easily with the molecular beam epitaxy process used to make semiconductors. Molecular beam epitaxy sprays multiple layers of metallic elements onto a wafer. This creates exactly the right conductive properties for a given purpose.

The U-M researchers applied an ion beam between these layers -- a step that pushes metal out of the semiconductor wafer and onto the surface. The metal forms nanoscale particles that serve the same purpose as the pricey gold and platinum flecks in earlier research. Their size and placement can be precisely controlled by varying the angle and intensity of the ion beam. And applying the ion beam over and over between each layer creates a semiconductor with the nanoparticles interspersed throughout.

"If you carefully tailor the size and spacing of nanoparticles and how deeply they're embedded, you can find a sweet spot that enhances light emissions," said Myungkoo Kang, a former graduate student in Goldman's lab and first author on the study. "This process gives us a much simpler and less expensive way to do that."

Researchers have known for years that metallic particles can collect on the surface of semiconductors during manufacturing. But they were always considered a nuisance, something that happened when the mix of elements was incorrect or the timing was off.

"From the very early days of semiconductor manufacturing, the goal was always to spray a smooth layer of elements onto the surface. If the elements formed particles instead, it was considered a mistake," Goldman said. "But we realized that those 'mistakes' are very similar to the particles that manufacturers have been trying so hard to incorporate into LEDs. So we figured out a way to make lemonade out of lemons."

Toward invisibility cloaks


Because the technique allows precise control over the nanoparticle distribution, the researchers say it may one day be useful for cloaks that render objects partially invisible by inducing a phenomenon known as "reverse refraction."

Reverse refraction bends light waves backwards in a way that doesn't occur in nature, potentially directing them around an object or away from the eye. The researchers believe that by carefully sizing and spacing an array of nanoparticles, they may be able to induce and control reverse refraction in specific wavelengths of light.

"For invisibility cloaking, we need to both transmit and manipulate light in very precise ways, and that's very difficult today," Goldman said. "We believe that this process could give us the level of control we need to make it work."

Read more at Science Daily

Modern dog has a single geographic origin, study reveals

A picture of the 5,000-year-old Late Neolithic CTC dog skull in the lab before it underwent whole genome sequencing.
By analyzing the DNA of two prehistoric dogs from Germany, an international research team led by Krishna R. Veeramah, PhD, Assistant Professor of Ecology & Evolution in the College of Arts & Sciences at Stony Brook University, has determined that these dogs were probably the direct ancestors of modern European dogs. The finding, to be published in Nature Communications, suggests a single domestication event of modern dogs from a population of gray wolves that occurred between 20,000 and 40,000 years ago.

Dogs were the first animals to be domesticated by humans. The oldest dog fossils that can be clearly distinguished from wolves come from what is now Germany and date to around 15,000 years ago. However, the archeological record is ambiguous, with claims of ancient domesticated dog bones as far east as Siberia. Recent analysis of genetic data from modern dogs adds to the mystery, with some scientists suggesting many areas of Europe, Central Asia, South Asia and the Middle East as possible origins of dog domestication.

In 2016, research by scientists using emerging paleogenomics techniques proved effective for sequencing the genome of a 5,000-year-old ancient dog from Ireland. The results of the study led the research team to suggest dogs were domesticated not once but twice. The team from Oxford University also hypothesized that an indigenous dog population domesticated in Europe was replaced by incoming migrants domesticated independently in East Asia sometime during the Neolithic era.

"Contrary to the results of this previous analysis, we found that our ancient dogs from the same time period were very similar to modern European dogs, including the majority of breed dogs people keep as pets," explained Dr. Veeramah. "This suggests that there was no mass Neolithic replacement that occurred on the continent and that there was likely only a single domestication event for the dogs observed in the fossil record from the Stone Age and that we also see and live with today."

In the paper, titled "Ancient European dog genomes reveal continuity since the Early Neolithic," Veeramah and colleagues used the older, 7,000-year-old dog to narrow the timing of dog domestication to between 20,000 and 40,000 years ago.

They also found evidence that the younger, 5,000-year-old dog was a mixture of European dogs and something that resembles current Central Asian/Indian dogs. This finding may reflect that people moving into Europe from the Asian Steppes at the beginning of the Bronze Age brought their own dogs with them.

"We also reanalyzed the ancient Irish dog genome alongside our German dog genomes and believe we found a number of technical errors in the previous analysis that likely led those scientists to incorrectly make the conclusion of a dual domestication event," added Veeramah.

Read more at Science Daily

5,000-Year-Long Tsunami Record Found in Guano-Encrusted Sumatran Cave

Using fluorescent lights, Kerry Sieh and Charles Rubin of the Earth Observatory of Singapore look for charcoal and shells for radiocarbon dating.
When the December 26, 2004 Indian Ocean earthquake occurred off the west coast of Sumatra, Indonesia, the 9.1 magnitude event — the third-largest tremor ever recorded on a seismograph — was so strong that it caused the entire planet to vibrate by as much as 0.4 inches. The quake triggered a series of devastating tsunamis that killed up to 280,000 people in fourteen countries, inundating some coastal communities with 100-foot-tall waves. The tsunamis are now regarded as being among the deadliest natural disasters in all of recorded history.

Motivated to better understand quake and tsunami dynamics, scientists Charles Rubin, Benjamin Horton, and their colleagues have been studying the seismic history of the region. Archaeologist Patrick Daly at the Earth Observatory of Singapore (EOS) suggested that they excavate a sea cave about 22 miles south of Banda Aceh, Sumatra. The research process involves plunging a metal cylinder approximately 23 feet into the substrate to obtain readable samples.

“As we stopped by the entrance to the cave, our first excavations did not show anything interesting,” said Horton, a professor in the Department of Marine and Coastal Sciences at Rutgers University. “At this point, we returned for headlamps and excavation gear to explore the interior of the cave.”

“After about fifteen minutes of excavations,” he continued, “it was clear to me, Rubin, and Daly that we exposed a series of ‘stacked tsunami’ deposits separated by organic material that was probably deposited between earthquakes. We quickly realized that we had found a quite extraordinary record of tsunamis that stretched back thousands of years.”

Archaeologist Patrick Daly (wearing hat), Kerry Sieh (pointing), Charles Rubin (second from left), Benjamin Horton, and Jedrzej Majewski (behind Daly) are seen in an Indonesian sea cave.
New analysis of the find, published in the journal Nature Communications, provides a 5,000-year-long sedimentary snapshot of tsunamis in the region. The record shows that eleven tsunamis were generated between 7,900 and 2,900 years ago by earthquakes along the Sunda Megathrust, a 3,300-mile-long fault running from Myanmar to Sumatra along the floor of the Indian Ocean.

The investigation additionally determined that there were two tsunami-free millennia during the 5,000 years, and one century in which four tsunamis struck the coast. The scientists could see that smaller tsunamis tend to occur relatively close together, followed by long dormant periods. These, in turn, tend to be followed by very strong quakes and tsunamis, such as the one that struck in 2004.

According to the researchers, the 5,000-year record of tsunamis represents the first such discovery in a sea cave, the first record of tsunamis over this long of a period in the Indian Ocean, and the clearest record of tsunamis from anywhere in the world. Behind all of these scientific firsts is something rather stinky and unappetizing: mound after mound of bat guano.

The stratigraphy of the sea cave in Sumatra excavated by scientists from the Earth Observatory of Singapore, Rutgers, and other institutions shows lighter bands of sand deposited by tsunamis over a period of 5,000 years and darker bands of organic material, largely consisting of bat guano.
Bats love sea caves, which provide a cool and moist hideaway that is perfect for their roosts. For thousands of years, bats have therefore been visiting this particular cave.

Rubin, the study's lead author and a professor at EOS, together with Horton and the rest of the team, discovered that organic debris from the copious quantities of guano is present above each of the eleven identified historic tsunami beds. It neatly marks each off, like lines of icing in a layer cake.

The scientists knew what a tsunami bed looks like in the area because they extensively studied the one left behind after the 2004 disaster. It, and the eleven ancient beds, all consist of fine-grained sand, pieces of shale, and mudstone known as “rip-up clasts,” weathered cave chalk, and abundant numbers of preserved tiny marine animals, mostly originating from the ocean depths.

“We were able to refine the timing of past tsunamis with radiocarbon dating,” Rubin said, adding that a statistical model further “allowed us to understand the uncertainties of timing between events, and we were able to make a comparison between our record of past tsunamis to other sites around the Indian Ocean.”

He and his team believe that the Sunda — also called Sumatra — Megathrust is the most likely source for triggering both earthquakes and tsunamis, at least in this region. In other areas, volcanic eruptions and underwater landslides may also lead to similar events.

As for why smaller tsunamis sometimes occur relatively close together, Rubin said, “The closely spaced tsunamis perhaps represent temporal clustering of earthquakes that produced tsunamis. It seems that earthquakes during this period are separated by just a few decades.”

Read more at Seeker

Jul 18, 2017

Epigenetics between the generations: We inherit more than just genes

Egg cell of a female fruit fly in which H3K27me3 has been made visible by green staining. This cell, together with a sperm cell, will give rise to the next generation of flies. The upper right corner shows a maternal and a paternal pronucleus before their fusion during fertilization. The green H3K27me3 signal appears exclusively in the maternal pronucleus, indicating that these epigenetic instructions are inherited by the next generation.
We are more than the sum of our genes. Epigenetic mechanisms, modulated by environmental cues such as diet, disease, or lifestyle, play a major role in regulating DNA by switching genes on and off. It has long been debated whether epigenetic modifications accumulated over a lifetime can cross the boundary between generations and be inherited by children or even grandchildren. Now researchers from the Max Planck Institute of Immunobiology and Epigenetics in Freiburg provide robust evidence that not only the inherited DNA itself but also inherited epigenetic instructions contribute to regulating gene expression in the offspring. Moreover, the new insights from the lab of Nicola Iovino describe for the first time the biological consequences of this inherited information. The study shows that the mother's epigenetic memory is essential for the development and survival of the new generation.

In our body we find more than 250 different cell types. They all contain exactly the same DNA bases in exactly the same order; yet liver cells and nerve cells look very different and have different skills. What makes the difference is a process called epigenetics. Epigenetic modifications label specific regions of the DNA to attract or keep away proteins that activate genes. Step by step, these modifications create the typical patterns of active and inactive DNA sequences for each cell type. Moreover, in contrast to the fixed sequence of 'letters' in our DNA, epigenetic marks can change throughout our lives in response to our environment or lifestyle. For example, smoking changes the epigenetic makeup of lung cells, which can eventually lead to cancer. Other external stimuli, such as stress, disease or diet, are also thought to be stored in the epigenetic memory of cells.

It has long been thought that these epigenetic modifications never cross the border between generations. Scientists assumed that the epigenetic memory accumulated throughout life is entirely cleared during the development of sperm and egg cells. Just recently, a handful of studies stirred the scientific community by showing that epigenetic marks can indeed be transmitted over generations, but exactly how, and what effects these epigenetic modifications have in the offspring, was not understood. "We have seen indications of intergenerational inheritance of epigenetic information since the rise of epigenetics in the early nineties. For instance, epidemiological studies revealed a striking correlation between the food supply of grandfathers and an increased risk of diabetes and cardiovascular disease in their grandchildren. Since then, various reports have suggested epigenetic inheritance in different organisms, but the molecular mechanisms were unknown," says Nicola Iovino, corresponding author of the new study.

Epigenetics between the generations

He and his team at the Max Planck Institute of Immunobiology and Epigenetics in Freiburg, Germany used fruit flies to explore how epigenetic modifications are transmitted from the mother to the embryo. The team focused on a particular modification called H3K27me3 that can also be found in humans. It alters the so-called chromatin, the packaging of the DNA in the cell nucleus, and is mainly associated with repressing gene expression.

The Max Planck researchers found that H3K27me3 modifications labeling chromatin DNA in the mother's egg cells were still present in the embryo after fertilization, even though other epigenetic marks are erased. "This indicates that the mother passes on her epigenetic marks to her offspring. But we were also interested, if those marks are doing something important in the embryo," explains Fides Zenk, first author of the study.

Inherited epigenetic marks are important for embryogenesis

Therefore the researchers used a variety of genetic tools in fruit flies to remove the enzyme that places H3K27me3 marks and discovered that embryos lacking H3K27me3 during early development could not develop to the end of embryogenesis. "It turned out that, in reproduction, epigenetic information is not only inherited from one generation to another but also important for the development of the embryo itself," says Nicola Iovino.

When they had a closer look into the embryos, the team found that several important developmental genes that are normally switched off during early embryogenesis were turned on in embryos without H3K27me3. "We assumed that activating those genes too soon during development disrupted embryogenesis and eventually caused the death of the embryo. It seems, virtually, that inherited epigenetic information is needed to process and correctly transcribe the genetic code of the embryo," explains Fides Zenk.

Implications for the theory of heredity and human health

With these results, the Max Planck researchers take an important step forward, clearly demonstrating the biological consequences of inherited epigenetic information: they provide evidence that epigenetic modifications in flies can be transmitted down through the generations, and they reveal that marks transmitted from the mother act as a fine-tuned mechanism for controlling gene activation during the complex process of early embryogenesis.

Read more at Science Daily

Fake news: Study tests people's ability to detect manipulated images of real-world scenes

Manipulated image -- can you spot what's wrong?
People can detect a fake image of a real-world scene only 60% of the time, and even then can only tell what is wrong with the image 45% of the time, according to research published in the open access journal Cognitive Research: Principles and Implications.

Sophie Nightingale, PhD student and lead author from the University of Warwick, said: "Our study found that although people performed better than chance at detecting and locating image manipulations, they are far from perfect. This has serious implications because of the high level of images, and possibly fake images, that people are exposed to on a daily basis through social networking sites, the internet and the media."

The researchers set up an online test using a bank of 40 images created from 10 originals sourced from Google Images. Six of the originals were subjected to five different types of manipulation, including physically implausible and physically plausible changes, to create 30 manipulated images. A total of 707 participants were each shown 10 randomly selected images comprising one of each of the five manipulation types and five original images; no participant saw more than one version, manipulated or original, of the same image.

On average, 60% of images were correctly identified as manipulated when participants were asked, "Do you think this photo has been digitally altered?" -- only just above the chance performance of 50%. Even among people who answered "yes" to this question, manipulations could be correctly located only 45% of the time, on average, when a grid overlay was placed on the image and participants were asked to select the regions where a manipulation was present.
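
To put those percentages in perspective, here is a rough back-of-envelope check, not the authors' statistical analysis: assuming each of the 707 participants judged five manipulated images (an inference from the design described above) and treating trials as independent, a simple one-sample z-test shows that a 60% hit rate is far above the 50% chance level even though 40% of fakes still go unnoticed.

```python
import math

# Rough significance check (not the authors' analysis). Assumption: each of
# the 707 participants judged five manipulated images, and trials are
# treated as independent.
participants = 707
manipulated_per_person = 5            # assumed from the design described above
n_trials = participants * manipulated_per_person
observed_rate = 0.60                  # mean detection rate reported
chance_rate = 0.50                    # guessing "altered"/"not altered"

# One-sample z-test for a proportion.
standard_error = math.sqrt(chance_rate * (1 - chance_rate) / n_trials)
z = (observed_rate - chance_rate) / standard_error
print(f"n = {n_trials} trials, z = {z:.1f}")
# z comes out around 12: statistically far above chance, yet 40% of fakes
# still slip through, which is the point the authors emphasise.
```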

Dr Derrick Watson, study co-author from the University of Warwick explained: "We found that people were better at detecting physically implausible manipulations but not any better at locating these manipulations, compared to physically plausible manipulations. So even though people are able to detect something is wrong they can't reliably identify what exactly is wrong with the image. Images have a powerful influence on our memories so if people can't differentiate between real and fake details in photos, manipulations could frequently alter what we believe and remember."

In a second experiment using an image set created by the authors, 659 people completed an online task that tested their ability to locate manipulations regardless of whether they said one was present. Detection performance was similar to the first experiment (a mean of 65%), but manipulations were accurately located 39% more often than would be expected by chance. This suggests that people are better at the more direct task of locating manipulations than at the more general task of deciding whether a photo has been manipulated at all.

Dr Kimberley Wade, study co-author from the University of Warwick, said: "People's poor ability to identify manipulated photos raises problems in the context of legal proceedings where photos may be used as evidence. Jurors and members of the court assume these images to be real, though a manipulated image could go undetected with devastating consequences. We need to work to find better ways to protect people from the negative effects of photo manipulation, and we're now exploring a number of ways that might help people to better detect fakes."

Read more at Science Daily

Ancient, massive asteroid impact could explain Martian geological mysteries

Mosaic of the Valles Marineris hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft.
The origin and nature of Mars are mysterious. The planet has geologically distinct hemispheres, with smooth lowlands in the north and cratered, high-elevation terrain in the south. It also has two small, oddly shaped oblong moons and a composition that sets it apart from that of Earth.

New research by University of Colorado Boulder professor Stephen Mojzsis outlines a likely cause for these mysterious features of Mars: a colossal impact with a large asteroid early in the planet's history. This asteroid -- about the size of Ceres, one of the largest asteroids in the Solar System -- smashed into Mars, ripped off a chunk of the northern hemisphere and left behind a legacy of metallic elements in the planet's interior. The crash also created a ring of rocky debris around Mars that may have later clumped together to form its moons, Phobos and Deimos.

The study appeared online in the journal Geophysical Research Letters, a publication of the American Geophysical Union, in June.

"We showed in this paper -- that from dynamics and from geochemistry -- that we could explain these three unique features of Mars," said Mojzsis, a professor in CU Boulder's Department of Geological Sciences. "This solution is elegant, in the sense that it solves three interesting and outstanding problems about how Mars came to be."

Astronomers have long wondered about these features. Over 30 years ago, scientists proposed a large asteroid impact to explain the disparate elevations of Mars' northern and southern hemispheres; the theory became known as the "single impact hypothesis." Other scientists have suggested that erosion, plate tectonics or ancient oceans could have sculpted the distinct landscapes. Support for the single impact hypothesis has grown in recent years, supported by computer simulations of giant impacts.

Mojzsis thought that by studying Mars' metallic element inventory, he might be able to better understand its mysteries. He teamed up with Ramon Brasser, an astronomer at the Earth-Life Science Institute at the Tokyo Institute of Technology in Japan, to dig in.

The team studied samples from Martian meteorites and realized that an overabundance of rare metals -- such as platinum, osmium and iridium -- in the planet's mantle required an explanation. Such elements are normally captured in the metallic cores of rocky worlds, and their existence hinted that Mars had been pelted by asteroids throughout its early history. By modeling how a large object such as an asteroid would have left behind such elements, Mojzsis and Brasser explored the likelihood that a colossal impact could account for this metal inventory.

The two scientists first estimated the amount of these elements from Martian meteorites, and deduced that the metals account for about 0.8 percent of Mars' mass. Then, they used impact simulations with different-sized asteroids striking Mars to see which size asteroid accumulated the metals at the rate they expected in the early solar system.

Based on their analysis, Mars' metals are best explained by a massive meteorite collision about 4.43 billion years ago, followed by a long history of smaller impacts. In their computer simulations, an impact by an asteroid at least 1,200 kilometers (745 miles) across was needed to deposit enough of the elements. An impact of this size also could have wildly changed the crust of Mars, creating its distinctive hemispheres.
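
As a rough sanity check, not the authors' simulation, the reported 0.8 percent of Mars' mass can be converted into the size of a single equivalent impactor; the Mars mass and the roughly chondritic density used below are illustrative assumptions.

```python
import math

# Back-of-envelope check, not the authors' simulation. Assumed values:
MARS_MASS_KG = 6.42e23        # approximate mass of Mars
DELIVERED_FRACTION = 0.008    # ~0.8 percent of Mars' mass, from the meteorite data
IMPACTOR_DENSITY = 3000.0     # kg/m^3, a roughly chondritic density (assumption)

delivered_mass = DELIVERED_FRACTION * MARS_MASS_KG
volume = delivered_mass / IMPACTOR_DENSITY
radius_m = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

print(f"delivered mass: {delivered_mass:.2e} kg")
print(f"single-body equivalent diameter: {2.0 * radius_m / 1000.0:.0f} km")
# Roughly 1,500 km -- consistent with one impactor of at least 1,200 km
# plus a long tail of smaller impacts delivering the rest.
```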

In fact, Mojzsis said, the crust of the northern hemisphere appears to be somewhat younger than the ancient southern highlands, which would agree with their findings.

"The surprising part is how well it fit into our understanding of the dynamics of planet formation," said Mojzsis, referring to the theoretical impact. "Such a large impact event elegantly fits in to what we understand from that formative time."

Such an impact would also be expected to have generated a ring of material around Mars that later coalesced into Phobos and Deimos; this explains in part why those moons are made of a mix of native and non-Martian material.

In the future, Mojzsis will use CU Boulder's collection of Martian meteorites to further understand Mars' mineralogy and what it can tell us about a possible asteroid impact. Such an impact should have initially created patchy clumps of asteroid material and native Martian rock. Over time, the two material reservoirs became mixed. By looking at meteorites of different ages, Mojzsis can see if there's further evidence for this mixing pattern and, therefore, potentially provide further support for a primordial collision.

Read more at Science Daily

Human-made aerosols identified as driver in shifting global rainfall patterns

Spatial distribution of the annual-mean precipitation averaged from 1979-2008.
In a new study, scientists found that aerosol particles released into the atmosphere from the burning of fossil fuels are a primary driver of changes in rainfall patterns across the globe.

The results of the climate system-model simulations conducted by researchers Brian Soden and Eui-Seok Chung from the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science revealed that changes in clouds, as a result of their interaction with these human-made aerosols in the atmosphere, are driving large-scale shifts in rainfall and temperature on Earth.

A southward shift of the tropical rain belt is thought to be the leading cause of the severe drought conditions experienced in large portions of Africa and South America during the second half of the 20th century, which have caused significant impacts to local communities and water availability in these regions.

Using multiple climate model projections, the researchers measured the effects that human-made aerosols have had on rainfall changes in the 20th and 21st centuries. They found that when only greenhouse gases or natural climate forcings are considered, climate models cannot capture the southward shift of the tropical rain belt. The analysis suggests that human-made aerosols were the primary driver of the observed southward shift in rainfall patterns throughout the latter half of the 20th century.

"Our analysis showed that interactions between aerosol particles and clouds have caused large-scale shifts in precipitation during the latter half of the 20th century, and will play a key role in regulating future shifts in tropical rainfall patterns," said UM Rosenstiel School atmospheric scientist Chung, the lead author of the study.

Changes in the radiative properties of clouds caused by the increase of these human-made particles in the atmosphere are resulting in large-scale changes in atmospheric circulation that drive regional climate and rainfall, say the researchers.

"Human-induced changes in rainfall can have substantial implications for society and the environment by affecting the availability of water," said Soden, a UM Rosenstiel School atmospheric sciences professor and the senior author of the study. "Our work helps to understand the mechanisms that drive large-scale shifts in precipitation to better predict how the climate will change in the future."

The models the researchers used also found that the largest shift in rainfall patterns will occur over the tropics rather than in the mid-latitude northern hemisphere, the greatest source region of these human-made industrial aerosols.

Understanding these aerosol-cloud interactions is necessary to better model future changes in tropical rainfall worldwide.

Read more at Science Daily

Why Tyrannosaurus was a slow runner and why the largest are not always the fastest

There is a parabola-like relationship between the body mass of animals and the maximum speed they can reach. For the first time, researchers are able to describe how this comes about, thanks to a simple mathematical model.
No other animal on land is faster than a cheetah -- the elephant is indeed larger, but slower. For small to medium-sized animals, larger also means faster, but for really large animals, when it comes to speed, everything goes downhill again. For the first time, it is now possible to describe how this parabola-like relationship between body size and speed comes about. A research team led by the German Centre for Integrative Biodiversity Research (iDiv) and the Friedrich Schiller University Jena (Germany) has managed to do so thanks to a new mathematical model, publishing its findings in the journal Nature Ecology and Evolution.

A beetle is slower than a mouse, which is slower than a rabbit, which is slower than a cheetah... which is slower than an elephant? No! As the opening paragraph explains, the familiar bigger-means-faster rule breaks down for the very largest animals, and the new model describes for the first time how that happens.

The model is amazingly simple: the only information it needs is the weight of a particular animal and the medium it moves in -- land, air or water. On that basis alone, it calculates the maximum speed an animal can reach with almost 90% accuracy. "The best feature of our model is that it is universally applicable," says the lead author of the study, Myriam Hirt of the iDiv research centre and the University of Jena. "It can be performed for all body sizes of animals, from mites to blue whales, with all means of locomotion, from running and swimming to flying, and can be applied in all habitats." Moreover, the model is by no means limited to animal species that currently exist, but can be applied equally well to extinct species.
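
The sketch below illustrates the general shape of such a model for land animals: maximum speed rises with body mass as a power law but is damped by a saturation term that shrinks for very heavy animals, standing in for the limited time available to accelerate. The functional form and the coefficients are illustrative assumptions, not the fitted values from the published study, so only the hump-shaped trend, not the exact numbers, should be taken from it.

```python
import math

def max_speed_kmh(mass_kg, a=45.0, b=0.26, h=17.0, i=-0.55):
    """Hump-shaped speed-vs-mass curve: a power-law rise (a * M**b) damped by
    a saturation term that vanishes for very heavy animals, standing in for
    the limited time available to accelerate. Coefficients are illustrative
    assumptions, not fitted values from the study."""
    return a * mass_kg ** b * (1.0 - math.exp(-h * mass_kg ** i))

for label, mass in [("beetle-sized", 0.001), ("mouse-sized", 0.02),
                    ("rabbit-sized", 2.0), ("cheetah-sized", 50.0),
                    ("elephant-sized", 5000.0)]:
    print(f"{label:>14} ({mass:>8g} kg): ~{max_speed_kmh(mass):5.1f} km/h")
# Speeds rise from beetle to cheetah and fall again for the elephant,
# reproducing the parabola-like pattern described in the article.
```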

Tyrannosaurus reached a speed of only 17 miles/hour


"To test whether we can use our model to calculate the maximum speed of animals that are already extinct, we have applied it to dinosaur species, whose speed has up to now been simulated using highly complex biomechanical processes," explains Hirt. The result is that the simple model delivered results for Triceratops, Tyrannosaurus, Brachiosaurus and others that matched those from complex simulations -- and were not exactly record-breaking for Tyrannosaurus, who reached a speed of only 27 km/h (17 mi/h). "This means that in future, our model will enable us to estimate, in a very simple way, how fast other extinct animals were able to run," says the scientist.

Read more at Science Daily

Jul 17, 2017

You're not yourself when you're sleepy

Poor sleep is associated with a particularly serious sign of depression, suggests new research.
More than a third of Americans don't get enough sleep, and growing evidence suggests the deficit is not only taking a toll on their physical health through heart disease, diabetes, stroke and other conditions, but hurting their mental health as well.

According to a recent study led by postdoctoral fellow Ivan Vargas, PhD, and published in the journal Cognitive Therapy and Research, people who are sleep deprived lose some of their ability to be positive-minded. That may not sound serious, but medical experts say an inability to think positively is a serious symptom of depression that could be dangerous if left unaddressed. An estimated 16.1 million adults experienced a major depressive episode in 2014.

"In general, we have a tendency to notice positive stimuli in our environment," said Vargas. "We tend to focus on positive things more than anything else, but now we're seeing that sleep deprivation may reverse that bias."

In their study, Vargas and his team took 40 healthy adults and randomly assigned them to either 28 consecutive hours awake or a full eight hours of sleep. All participants then completed a computer test that measured their accuracy and response time in identifying happy, sad and neutral faces, to assess how they attend to positive or negative information.

The team found that the acutely sleep-deprived participants were less likely to focus on the happy faces. They didn't necessarily focus more on the negative; they were simply less drawn to the positive. The study may have implications for people experiencing depression or anxiety.
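
One simple way such a positive attention bias could be quantified, shown here purely as an illustration with hypothetical numbers rather than the authors' scoring method or data, is to compare accuracy for happy versus sad faces in each group; a shrinking happy-minus-sad advantage after sleep deprivation would match the pattern described above.

```python
# Illustrative scoring only: hypothetical accuracies, not data from the study.
def positive_bias(accuracy_happy, accuracy_sad):
    """Difference in identification accuracy for happy vs. sad faces."""
    return accuracy_happy - accuracy_sad

rested = positive_bias(accuracy_happy=0.92, accuracy_sad=0.85)
deprived = positive_bias(accuracy_happy=0.84, accuracy_sad=0.83)
print(f"rested bias: {rested:+.2f}, sleep-deprived bias: {deprived:+.2f}")
# A shrinking positive bias after sleep loss, with little change on the sad
# faces, mirrors the reported pattern: less attention to the positive rather
# than more attention to the negative.
```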

There are many symptoms of depression, including feeling sad and no longer being able to enjoy things you typically would, but poor sleep is associated with a particularly serious sign of the condition.

"Depression is typically characterized as the tendency to think and feel more negatively or sad, but more than that, depression is associated with feeling less positive, less able to feel happy," Vargas says, "Similarly, if you don't get enough sleep, it reduces your ability to attend to positive things, which over time may confer risk for depression."

Interestingly enough, in the present study, those with a history of insomnia symptoms were less sensitive to the effects of the sleep loss. The authors believe this might be because those with a history of insomnia symptoms have more experience being in sleep-deprived conditions and have developed coping methods to modulate the effect of sleep loss.

Vargas and colleagues recently presented a related study at SLEEP 2017, the 31st Annual Meeting of the Associated Professional Sleep Societies LLC, on the association of insomnia and suicide, finding that people who suffer from insomnia are three times more likely to report thoughts of suicide and death during the past 30 days than those without the condition.

The study comes amid a growing body of knowledge associating sleep disorders and depression. For example, ongoing research presented this year at SLEEP 2017 from a multi-center NIH-sponsored "Treatment of Insomnia and Depression" study (abstract 0335) suggests that cognitive-behavioral therapy for insomnia (CBT-I) may help achieve depression remission in those suffering from both depression and insomnia who sleep at least 7 hours each night. (A clinical practice guideline published in 2016 in Annals of Internal Medicine recommends CBT-I, not sleep medications, as the initial treatment for chronic insomnia.)

Read more at Science Daily