Jun 22, 2019

Scientists map huge undersea fresh-water aquifer off U.S. Northeast

Aerial view of the ocean at Chatham, Cape Cod.
In a new survey of the sub-seafloor off the U.S. Northeast coast, scientists have made a surprising discovery: a gigantic aquifer of relatively fresh water trapped in porous sediments lying below the salty ocean. It appears to be the largest such formation yet found in the world. The aquifer stretches along the shore at least from Massachusetts to New Jersey, extending more or less continuously out about 50 miles to the edge of the continental shelf. If found on the surface, it would create a lake covering some 15,000 square miles. The study suggests that such aquifers probably lie off many other coasts worldwide, and could provide desperately needed water for arid areas that are now in danger of running out.

The researchers employed innovative measurements of electromagnetic waves to map the water, which remained invisible to other technologies. "We knew there was fresh water down there in isolated places, but we did not know the extent or geometry," said lead author Chloe Gustafson, a PhD candidate at Columbia University's Lamont-Doherty Earth Observatory. "It could turn out to be an important resource in other parts of the world." The study appears this week in the journal Scientific Reports.

The first hints of the aquifer came in the 1970s, when companies drilled off the coastline for oil, but sometimes instead hit fresh water. Drill holes are just pinpricks in the seafloor, and scientists debated whether the water deposits were just isolated pockets or something bigger. Starting about 20 years ago, study coauthor Kerry Key, now a Lamont-Doherty geophysicist, helped oil companies develop techniques to use electromagnetic imaging of the sub-seafloor to look for oil. More recently, Key decided to see if some form of the technology could also be used to find fresh-water deposits. In 2015, he and Rob L. Evans of Woods Hole Oceanographic Institution spent 10 days on the Lamont-Doherty research vessel Marcus G. Langseth making measurements off southern New Jersey and the Massachusetts island of Martha's Vineyard, where scattered drill holes had hit fresh-water-rich sediments.

They dropped receivers to the seafloor to measure electromagnetic fields below, and the degree to which natural disruptions such as solar winds and lightning strikes resonated through them. An apparatus towed behind the ship also emitted artificial electromagnetic pulses and recorded the same type of reactions from the sub-seafloor. Both methods work in a simple way: salt water is a better conductor of electromagnetic waves than fresh water, so the fresh water stood out as a band of low conductance. Analyses indicated that the deposits are not scattered; they are more or less continuous, starting at the shoreline and extending far out within the shallow continental shelf -- in some cases, as far as 75 miles. For the most part, they begin at around 600 feet below the ocean floor, and bottom out at about 1,200 feet.
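The detection principle described above can be sketched in a few lines: saline pore water conducts electricity well (low resistivity), while fresh water conducts poorly (high resistivity), so a fresh-water layer shows up as a high-resistivity band in the profile. All numbers below are illustrative placeholders, not the study's data.

```python
def looks_fresh(resistivity_ohm_m: float, threshold_ohm_m: float = 10.0) -> bool:
    """Flag a layer as fresh-water-bearing if its resistivity is high.

    The threshold is a hypothetical cutoff for illustration only.
    """
    return resistivity_ohm_m > threshold_ohm_m

# Hypothetical resistivity-vs-depth profile (ohm-meters): salty sediments,
# then a fresh-water band, then salty again below the aquifer's base.
profile = [0.3, 0.5, 25.0, 40.0, 0.4]
print([looks_fresh(r) for r in profile])  # [False, False, True, True, False]
```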

The consistency of the data from both study areas allowed the researchers to infer with a high degree of confidence that fresh-water sediments continuously span not just New Jersey and much of Massachusetts, but the intervening coasts of Rhode Island, Connecticut and New York. They estimate that the region holds at least 670 cubic miles of fresh water. If future research shows the aquifer extends further north and south, it would rival the great Ogallala Aquifer, which supplies vital groundwater to eight Great Plains states, from South Dakota to Texas.
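To put the 670-cubic-mile estimate in metric terms, a quick unit conversion helps (the figure itself is the study's lower bound; only the conversion factors below are added here):

```python
# 1 mile = 1.609344 km, so 1 cubic mile = 1.609344**3 ≈ 4.168 km^3.
CUBIC_MILE_IN_KM3 = 1.609344 ** 3

volume_mi3 = 670                      # study's lower-bound estimate
volume_km3 = volume_mi3 * CUBIC_MILE_IN_KM3
volume_liters = volume_km3 * 1e12     # 1 km^3 = 10^12 liters

print(f"{volume_km3:.0f} km^3")       # ≈ 2793 km^3
print(f"{volume_liters:.2e} liters")  # ≈ 2.79e+15 liters
```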

The water probably got under the seabed in one of two different ways, say the researchers. Some 15,000 to 20,000 years ago, toward the end of the last glacial age, much of the world's water was locked up in mile-deep ice; in North America, it extended through what is now northern New Jersey, Long Island and the New England coast. Sea levels were much lower, exposing much of what is now the underwater U.S. continental shelf. When the ice melted, sediments formed huge river deltas on top of the shelf, and fresh water got trapped there in scattered pockets. Later, sea levels rose. Up to now, the trapping of such "fossil" water has been the common explanation for any fresh water found under the ocean.

But the researchers say the new findings indicate that the aquifer is also being fed by modern subterranean runoff from the land. As water from rainfall and water bodies percolates through onshore sediments, it is likely pumped seaward by the rising and falling pressure of tides, said Key. He likened this to a person pressing up and down on a sponge to suck in water from the sponge's sides. Also, the aquifer is generally freshest near the shore, and saltier the farther out you go, suggesting that it mixes gradually with ocean water over time. Terrestrial fresh water usually contains less than 1 part per thousand salt, and this is about the value found undersea near land. By the time the aquifer reaches its outer edges, its salinity rises to 15 parts per thousand. (Typical seawater is 35 parts per thousand.)
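A back-of-envelope check on those salinity numbers: if the aquifer water were a simple linear mix of terrestrial fresh water (~1 ppt) and seawater (~35 ppt), the quoted values would imply the seawater fractions below. Two-endmember linear mixing is an illustrative assumption here, not the study's model.

```python
S_FRESH, S_SEA = 1.0, 35.0  # salinities in parts per thousand (ppt)

def seawater_fraction(salinity_ppt: float) -> float:
    """Fraction f of seawater such that (1 - f)*S_FRESH + f*S_SEA = salinity."""
    return (salinity_ppt - S_FRESH) / (S_SEA - S_FRESH)

print(f"near shore  (1 ppt): {seawater_fraction(1.0):.0%}")   # 0%
print(f"outer edge (15 ppt): {seawater_fraction(15.0):.0%}")  # 41%
```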

Read more at Science Daily

Astronomers make first detection of polarized radio waves in Gamma Ray Burst jets

Black hole illustration
Good fortune and cutting-edge scientific equipment have allowed scientists to observe a Gamma Ray Burst jet with a radio telescope and detect the polarisation of radio waves within it for the first time -- moving us closer to an understanding of what causes the universe's most powerful explosions.

Gamma Ray Bursts (GRBs) are the most energetic explosions in the universe, beaming out mighty jets that travel through space at over 99.9% of the speed of light, as a star much more massive than our sun collapses at the end of its life to produce a black hole.

Studying the light from Gamma Ray Burst jets as we detect it travelling across space is our best hope of understanding how these powerful jets are formed, but scientists need to be quick to get their telescopes into position and get the best data. The detection of polarised radio waves from a burst's jet, made possible by a new generation of advanced radio telescopes, offers new clues to this mystery.

The light from this particular event, known as GRB 190114C, which exploded with the force of millions of suns' worth of TNT about 4.5 billion years ago, reached NASA's Neil Gehrels Swift Observatory on Jan 14, 2019.

A rapid alert from Swift allowed the research team to direct the Atacama Large Millimeter/Sub-millimeter Array (ALMA) telescope in Chile to observe the burst just two hours after Swift discovered it. Two hours later the team was able to observe the GRB from the Karl G. Jansky Very Large Array (VLA) telescope when it became visible in New Mexico, USA.

Combining the measurements from these observatories allowed the research team to determine the structure of magnetic fields within the jet itself, which affects how the radio light is polarised. Theories predict different arrangements of magnetic fields within the jet depending on the fields' origin, so capturing radio data enabled the researchers to test these theories with observations from telescopes for the first time.

The research team, from the University of Bath, Northwestern University, the Open University of Israel, Harvard University, California State University in Sacramento, the Max Planck Institute in Garching, and Liverpool John Moores University discovered that only 0.8% of the jet light was polarised, meaning the jet's magnetic field was only ordered over relatively small patches -- each less than about 1% of the diameter of the jet. Larger patches would have produced more polarised light.
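The link between polarization fraction and patch size can be illustrated with a toy depolarization estimate: if the jet contains N independently oriented magnetic-field patches, their polarization signals partially cancel, reducing the net polarization to roughly the intrinsic synchrotron value divided by sqrt(N). This simple scaling is an illustrative assumption, not the paper's full model.

```python
import math

P_INTRINSIC = 0.70   # maximum synchrotron polarization for a fully ordered field
P_OBSERVED = 0.008   # 0.8%, as measured for GRB 190114C

# N patches whose random cancellation dilutes P_INTRINSIC down to P_OBSERVED.
n_patches = (P_INTRINSIC / P_OBSERVED) ** 2

# Patch size relative to jet diameter, assuming sqrt(N) patches span the jet.
patch_fraction = 1 / math.sqrt(n_patches)

print(f"~{n_patches:.0f} patches")                       # ~7656
print(f"patch ≈ {patch_fraction:.1%} of jet diameter")   # ≈ 1.1%
```

Under these assumptions, the 0.8% signal implies patches around one percent of the jet diameter, consistent with the figure quoted above.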

These measurements suggest that magnetic fields may play a less significant structural role in GRB jets than previously thought.

This helps us narrow down the possible explanations for what causes and powers these extraordinary explosions. The study is published in Astrophysical Journal Letters.

First author Dr Tanmoy Laskar, from the University of Bath's Astrophysics group, said: "We want to understand why some stars produce these extraordinary jets when they die, and the mechanism by which these jets are fuelled -- the fastest known outflows in the universe, moving at speeds close to that of light and shining with the incredible luminosity of over a billion suns combined.

"I was in a cab on my way to O'Hare airport in Chicago, following a visit with collaborators when the burst went off. The extreme brightness of this event and the fact that it was visible in Chile right away made it a prime target for our study, and so I immediately contacted ALMA to say we were going to observe this one, in the hope of detecting the first radio polarisation signal.

"It was fortuitous that the target was well placed in the sky for observations with both ALMA in Chile and the VLA in New Mexico. Both facilities responded quickly and the weather was excellent. We then spent two months in a painstaking process to make sure our measurement was genuine and free from instrumental effects. Everything checked out, and that was exciting.

Dr Kate Alexander, who led the VLA observations, said: "The lower frequency data from the VLA helped confirm that we were seeing the light from the jet itself, rather than from the interaction of the jet with its environment."

Dr Laskar added: "This measurement opens a new window into GRB science and the studies of energetic astrophysical jets. We would like to understand whether the low level of polarisation measured in this event is characteristic of all GRBs, and if so, what this could tell us about the magnetic structures in GRB jets and the role of magnetic fields in powering jets throughout the universe."

Professor Carole Mundell, Head of Astrophysics at the University of Bath, added: "The exquisite sensitivity of ALMA and rapid response of the telescopes has, for the first time, allowed us to swiftly and accurately measure the degree of polarisation of microwaves from a GRB afterglow just two hours after the blast and probe the magnetic fields that are thought to drive these powerful, ultrafast outflows."

Read more at Science Daily

Jun 20, 2019

Earth's oldest animals could take trips

New UC Riverside-led research settles a longstanding debate about whether the most ancient animal communities were deliberately mobile. It turns out they were, because they were hungry.

"This is the first time in the fossil record we see an animal moving to get food," said study lead Scott Evans, a UCR paleontology doctoral candidate.

Evans' team demonstrated that the 550-million-year-old ocean-dwelling creatures moved on their own rather than being pushed around by waves or weather. The research answers questions about when, why and how animals first developed mobility.

The team searched for evidence of movement in more than 1,300 fossils of Dickinsonia, dinner-plate-shaped creatures up to a meter long that lived and fed on a layer of ocean slime.

Details of the team's analysis were published this month in the journal Geobiology. It found that Dickinsonia moved like worms, constricting and relaxing their muscles to go after their next meal of microorganisms.

Dickinsonia were first discovered in the 1940s and since then, scientists have debated whether the fossils showed evidence of self-directed movement. To test this, it was crucial that Evans be able to analyze how multiple creatures living in the same area behaved relative to one another.

Evans and study co-author Mary Droser, a UCR professor of paleontology, reasoned that if Dickinsonia were riding waves or caught in storms, then all the individuals in the same area would have been moved in the same direction. However, that isn't what the evidence shows.

"Multiple fossils within the same community showed random movement not at all consistent with water currents," Evans said.

Critically, Evans was able to use fossil communities in the Australian outback unearthed by Droser and paper co-author James Gehling of the South Australian Museum. The duo systematically excavated large bed surfaces containing as many as 200 Dickinsonia fossils, allowing Evans to test whether the groups of the animals moved in the same or different directions, Evans said.

The team also analyzed the directions traveled by individual Dickinsonia.

"Something being transported by current should flip over or be somewhat aimless," Evans said. "These movement patterns clearly show directionality based on the animals' biology, and that they preferred to move forward."

Future studies at UCR will try to determine what Dickinsonia bodies were made of. "The tissues of the animals are not preserved, so it's not possible to directly analyze their body composition," he said. "But we will look at other clues they left behind."

Understanding Dickinsonia's capabilities offers insight not only into the evolution of animal life on Earth, but also about the Earth itself and possibly about life on other planets.

"If we want to search for complex life on other planets, we need to know how and why complex life evolved here," Evans said. "Knowing the conditions that enabled large mobile organisms to move during the Ediacaran era, 550 million years ago, gives us a clue about the habitable zone elsewhere."

Read more at Science Daily

Cool halo gas caught spinning like galactic disks

A group of astronomers led by Crystal Martin and Stephanie Ho of the University of California, Santa Barbara, has discovered a dizzying cosmic choreography among typical star-forming galaxies; their cool halo gas appears to be in step with the galactic disks, spinning in the same direction.

The researchers used W. M. Keck Observatory to obtain the first-ever direct observational evidence showing that corotating halo gas is not only possible, but common. Their findings suggest that the whirling gas halo will eventually spiral in towards the disk.

"This is a major breakthrough in understanding how galactic disks grow," said Martin, Professor of Physics at UC Santa Barbara and lead author of the study. "Galaxies are surrounded by massive reservoirs of gas that extend far beyond the visible portions of galaxies. Until now, it has remained a mystery how exactly this material is transported to galactic disks where it can fuel the next generation of star formation."

The study is published in today's issue of the Astrophysical Journal and presents the combined results of observations of 50 typical star-forming galaxies taken over several years.

Nearly a decade ago, theoretical models predicted that the angular momentum of the spinning cool halo gas partially offsets the gravitational force pulling it towards the galaxy, thereby slowing down the gas accretion rate and lengthening the period of disk growth.

The team's results confirm this theory, showing that the angular momentum of the halo gas is high enough to slow the infall rate but not so high as to shut down feeding of the galactic disk entirely.

The astronomers first obtained spectra of bright quasars behind star-forming galaxies to detect the invisible halo gas by its absorption-line signature in the quasar spectra. Next, the researchers used Keck Observatory's laser guide star adaptive optics (LGSAO) system and near-infrared camera (NIRC2) on the Keck II telescope, along with Hubble Space Telescope's Wide Field Camera 3 (WFC3), to obtain high-resolution images of the galaxies.

"What sets this work apart from previous studies is that our team also used the quasar as a reference 'star' for Keck's laser guide star AO system," said co-author Ho, a physics graduate student at UC Santa Barbara. "This method removed the blurring caused by the atmosphere and produced the detailed images we needed to resolve the galactic disks and geometrically determine the orientation of the galactic disks in three-dimensional space."

The team then measured the Doppler shifts of the gas clouds using the Low Resolution Imaging Spectrometer (LRIS) at Keck Observatory, and also obtained spectra from Apache Point Observatory. This enabled the researchers to determine in which direction the gas is spinning and how fast. The data proved that the gas is rotating in the same direction as the galaxy, and that its angular momentum is not stronger than the force of gravity, meaning the gas will spiral into the galactic disk.
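The Doppler measurement above boils down to a simple relation: a small wavelength shift of an absorption line maps to a line-of-sight velocity via v ≈ c · Δλ/λ. The sketch below uses the Mg II 2796 Å line, a common tracer of cool halo gas, with a hypothetical shift; these example numbers are not the study's measurements.

```python
C_KM_S = 299_792.458          # speed of light, km/s
rest_wavelength = 2796.35     # Å, Mg II absorption line (common halo-gas tracer)
observed_shift = 1.86         # Å, hypothetical measured offset from rest

# Non-relativistic Doppler approximation, valid for v << c.
velocity = C_KM_S * observed_shift / rest_wavelength
print(f"v ≈ {velocity:.0f} km/s")   # ≈ 199 km/s
```

A redward shift (positive Δλ) means the gas on that sightline is moving away from us; comparing shifts on either side of the galaxy's axis reveals the rotation direction.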

"Just as ice skaters build up momentum and spin when they bring their arms inward, the halo gas is likely spinning today because it was once at much larger distances where it was deposited by galactic winds, stripped from satellite galaxies, or directed toward the galaxy by a cosmic filament," said Martin.

Read more at Science Daily

Astronomers uncover first polarized radio signals from gamma-ray burst

An international team of astronomers has captured the first-ever polarized radio waves from a distant cosmic explosion.

This explosive event (known as gamma-ray burst GRB 190114C) is part of a class of the most energetic explosions in the universe. It was produced when a star -- much more massive than our sun -- collapsed to form a black hole.

Gamma ray bursts produce powerful jets that travel close to the speed of light and shine with the incredible luminosity of more than a billion suns combined. Astronomers have struggled to understand how these jets are formed and why they seem to appear only in gamma ray bursts -- but not other explosions, such as ordinary supernovae.

Because these jets are extremely bright at radio wavelengths, the discovery of polarized radio signals may offer new clues to help solve this mystery. Polarization is a property of light that indicates how a magnetic field is organized and structured in a jet.

"We know that only a very tiny fraction (less than 1%) of massive stars form jets when they collapse," said Northwestern University's Raffaella Margutti, who contributed to the study. "But we have not known how they manage to launch these outflows with such extreme properties, and we don't know why only a few stars do this."

"This measurement opens a new window into gamma-ray burst science and the studies of energetic astrophysical jets," said Tanmoy Laskar, a postdoctoral researcher at the University of Bath in the U.K. and lead author of the study. "We would like to understand whether the low level of polarization measured in this event is characteristic of all gamma-ray bursts and, if so, what this could tell us about the magnetic structures in gamma-ray burst jets and the role of magnetic fields in powering jets throughout the universe."

The paper was published last week in the Astrophysical Journal Letters.

The international team included three astrophysicists from Northwestern's Weinberg College of Arts and Sciences: Kate Alexander, Wen-fai Fong and Margutti. All are members of Northwestern's Center for Interdisciplinary and Exploratory Research in Astrophysics (CIERA).

Astronomers have hypothesized that cosmic magnetic fields might flow through the jets, helping them form and providing structural support. However, the physical extent of these magnetic fields, which has implications for the jet-launching mechanism, had never before been measured.

To obtain these measurements, the international team employed a novel trick. They observed the jets in linearly polarized light, which is sensitive to the size of magnetic field patches. Larger magnetic field patches, for example, produce more polarized light.

On January 14, 2019, a flash of gamma rays triggered NASA's Swift satellite, which alerted astronomers of the burst's location in the direction of the constellation Fornax. The astronomers then used the Atacama Large Millimeter/Submillimeter Array (ALMA) telescope in Chile to search for radio waves from the explosion, which occurred more than 4.5 billion years ago in a galaxy 7 billion light-years away.

"Magnetic fields are ubiquitous but notoriously difficult to constrain in our universe," said Fong, an assistant professor of astrophysics. "The fact that we have been able to detect their presence -- let alone in the fastest jets we know of -- is an incredible and storied feat of observation."

The team detected a subtle, but revealing, polarization signal of 0.8%, implying magnetic field patches about the size of our solar system. Next, the researchers will combine this new information with data from X-ray and visible light telescopes.

"The lower frequency data from the Very Large Array (VLA) in New Mexico helped confirm that we were seeing the light from the jet itself rather than from the interaction of the jet with its environment," said Alexander, a NASA Einstein Fellow who led the VLA observations.

Read more at Science Daily

Your nose knows when it comes to stronger memories

Memories are stronger when the original experiences are accompanied by unpleasant odors, a team of researchers has found. The study broadens our understanding of what can drive Pavlovian responses and points to how negative experiences influence our ability to recall past events.

"These results demonstrate that bad smells are capable of producing memory enhancements in both adolescents and adults, pointing to new ways to study how we learn from and remember positive and negative experiences," explains Catherine Hartley, an assistant professor in New York University's Department of Psychology and the senior author of the paper, which appears in the journal Learning and Memory.

"Because our findings spanned different age groups, this study suggests that aversive odors might be used in the future to examine emotional learning and memory processes across development," adds Alexandra Cohen, an NYU postdoctoral fellow and the paper's lead author.

The impact of negative experiences on memory has long been shown -- and is familiar to us. For example, if you are bitten by a dog, you may develop a negative memory of the dog that bit you, and your negative association may also go on to generalize to all dogs. Moreover, because of the trauma surrounding the bite, you are likely to have a better recollection of it than you would other past experiences with dogs.

"The generalization and persistence in memory of learned negative associations are core features of anxiety disorders, which often emerge during adolescence," notes Hartley.

In order to better understand how learned negative associations influence memory during this stage of development, the researchers designed and administered a Pavlovian learning task to individuals aged 13 to 25. Mild electrical shocks are often used in this type of learning task. In this study, the researchers used bad smells because they can be ethically administered in studying children.

The task included the viewing of a series of images belonging to one of two conceptual categories: objects (e.g., a chair) and scenes (e.g., a snow-capped mountain). As the study's participants viewed the images, they wore a nasal mask connected to an olfactometer. While participants viewed images from one category, unpleasant smells were sometimes circulated through the device to the mask; while viewing images from the other category, unscented air was used. This allowed the researchers to examine memory for images associated with a bad smell as well as for generalization to related images. In other words, if the image of a chair was associated with a bad smell, would memory be enhanced only for the chair or for objects in general?

What constitutes a "bad" odor is somewhat subjective. In order to determine which odors the participants found unlikable, the researchers had the subjects -- prior to the start of the experiment -- breathe in a variety of odors and indicate which ones they thought were unpleasant. The odors were blends of chemical compounds provided by a local perfumer and included scents such as rotting fish and manure.

As the subjects viewed the images, the scientists measured perspiration from the palm of the subjects' hands as an index of arousal -- a common research technique used to confirm the creation of a negative association (in this case, of a bad smell). A day later, researchers tested participants' memory for the images.

Read more at Science Daily

Jun 19, 2019

Biology of leptin, the hunger hormone, revealed

Leptin model.
In a new study, Yale researchers offer insight into leptin, a hormone that plays a key role in appetite, overeating, and obesity. Their findings advance knowledge about leptin and weight gain, and also suggest a potential strategy for developing future weight-loss treatments, they said.

The study, led by investigators at Yale and Harvard, was published the week of June 17, 2019, in the Proceedings of the National Academy of Sciences.

Leptin, which is secreted by fat cells, informs the brain when fuel stored in body fat and in the liver is becoming depleted. It has not been well understood how low leptin concentrations in plasma -- the largest component of blood -- increase appetite. The researchers studied the biology of leptin in rodents. They also investigated the influence of nerve cells in the brain known as AgRP neurons, which regulate eating behavior.

The researchers discovered that the mechanisms by which reductions in plasma leptin concentrations stimulate food intake are not limited to the brain, as previously thought. In rodents, fasting first activates leptin receptors in the brain, followed by an intermediary step that involves the endocrine system. This system includes the pituitary and adrenal glands, which secrete another hormone, corticosterone, that regulates energy, stress responses, and food intake.

The research team learned that this chain of events is required for leptin to stimulate hunger when food is restricted, or when diabetes is poorly controlled and plasma leptin concentrations drop below a critical threshold, said Gerald Shulman, M.D., the George R. Cowgill Professor of Medicine at Yale School of Medicine, and co-corresponding author of the study.

In further experiments, the researchers also showed that plasma corticosterone activates AgRP neurons, which increases hunger when either leptin or blood-sugar levels are low, Shulman noted. In humans, leptin and blood sugar drop when people diet.

These findings add to scientists' knowledge of leptin, which has been the focus of research on obesity and weight loss since its discovery in the 1990s. The study reveals "the basic biology of leptin, and how the endocrine system is mediating its effect to regulate food intake under conditions of starvation and poorly controlled diabetes," said Shulman.

The research also lends support to a different strategy for developing drugs that treat obesity. "It suggests that AgRP neurons may be an attractive therapeutic target," he said.

Read more at Science Daily

Origin of life: A prebiotic route to DNA

DNA, the hereditary material, may have appeared on Earth earlier than has been assumed hitherto. Chemists at Ludwig-Maximilians-Universitaet (LMU) in Munich, led by Oliver Trapp, show that a simple reaction pathway could have given rise to DNA subunits on the early Earth.

How were the building-blocks of life first formed on the early Earth? As yet, only partially satisfactory answers to this question are available. However, one thing is clear: The process of biological evolution that has given rise to the diversity of life on our planet must have been preceded by a phase of chemical evolution. During this 'prebiotic' stage, the first polymeric molecules capable of storing information and reproducing themselves were randomly assembled from organic precursors that were available on the early Earth. The most efficient replicators subsequently evolved into the macromolecular informational nucleic acids -- DNA and RNA -- that became the basis for all forms of life on our planet.

For billions of years, DNA has been the primary carrier of hereditary information in biological organisms. DNA strands are made up of four types of chemical subunits, and the genetic information they carry is encoded in the linear sequence of these 'nucleosides'. Moreover, the four subunits comprise two complementary pairs. Interactions between two strands with complementary sequences are responsible for the formation of the famous double helix, and play a crucial role in DNA replication. RNA also has vital functions in the replication of DNA and in the translation of nucleotide sequences into proteins.

Which of these two types of nucleic acid came first? The unanimous answer to that question up to now was RNA. Plausible models that explain how RNA molecules could have been synthesized from precursor compounds in prebiotic settings were first proposed decades ago, and have since received substantial experimental support. Moreover, its conformational versatility allows RNA both to store information and to act as a catalyst. These insights have led to the idea of an 'RNA world' that preceded the emergence of DNA, which is now well established among specialists. How then were the first DNA subunits synthesized? The generally accepted view is that this process was catalyzed by an enzyme -- a comparatively complex biomolecule whose emergence would have required millions of years of evolution.

But now a team of chemists led by LMU's Professor Oliver Trapp has proposed a much more direct mechanism for the synthesis of DNA subunits from organic compounds that would have been present in a prebiotic environment. "The reaction pathway is relatively simple," says Trapp, which suggests it could well have been realized in a prebiotic setting. For example, it does not require variations in reaction parameters, such as temperature. In Trapp's experiments, the necessary ingredients are water, a mildly alkaline pH and temperatures of between 40 and 70°C. Under such conditions, adequately high reaction rates and product yields are achieved, with high selectivity and correct stereochemistry.

Each of the nucleoside subunits found in DNA is made up of a nitrogen-containing base and a sugar called deoxyribose. Up to now, it was thought that deoxynucleosides could only be synthesized under prebiotic conditions by directly coupling these two -- preformed -- components together. But no plausible non-enzymatic mechanism for such a step had ever been proposed. The essential feature of the new pathway, as Trapp explains, is that the sugar is not linked to the base in a single step. Instead, it is built up on the preformed base by a short sequence of reaction steps involving simple organic molecules such as acetaldehyde and glyceraldehyde. In addition, the LMU researchers have identified a second family of possible precursors of DNA in which the deoxyribose moiety is replaced by a different sugar.

Read more at Science Daily

The fellowship of the wing: Pigeons flap faster to fly together

Pigeons flying
New research published June 18 in the open-access journal PLOS Biology, led by Dr Lucy Taylor from the University of Oxford's Department of Zoology, reveals that homing pigeons fit in one extra wingbeat per second when flying in pairs compared to flying solo.

Birds that fly in 'V'-formations, such as geese, are able to conserve energy by flying in aerodynamically optimal positions. By contrast, in species that don't fly in formation, such as homing pigeons, the costs and benefits of flocking have been less well understood.

The research indicates that flying with another bird requires more energy than flying solo. "The results of this study were completely unexpected. Energy is the currency of life, so it's astonishing that the birds are prepared to pay a substantial energetic cost to fly together," said lead author Dr Lucy Taylor.

The team used high-frequency GPS and accelerometer bio-loggers to measure how pigeons changed their wingbeat patterns when flying in pairs compared to flying solo. The accelerometers act much like fitness trackers but, instead of measuring steps, they measure wingbeats. "The increase in wingbeat frequency is equivalent to Usain Bolt running the 100m sprint at his usual speed, whilst fitting in nearly one extra step per second. The pigeons are flapping faster when flying in pairs but hardly going any faster," said Dr Taylor.

The increase in wingbeat frequency is likely to be related to the demands of coordinating flight. Dr Taylor said: "Imagine trying to coordinate with and avoid hitting another small object travelling at around 44 miles per hour. This is nearly two times faster than an Olympic sprinter, and the birds can move up and down as well as left and right. For a pigeon, flapping your wings faster will both give you faster reactions and greater control over your movements, and will help keep your head stable, making it easier to track where the other bird is."

Despite the costs of fitting in one additional wingbeat per second, the birds consistently chose to fly together, suggesting that they were able to gain other benefits from flocking. Birds flying in a pair were simultaneously able to improve their homing accuracy, meaning that they could conserve energy by flying shorter routes home. Combined with increased predator protection from safety in numbers, this research suggests that the overall benefits of flocking outweigh the immediate energetic costs of changing wingbeat patterns.

From Science Daily

Meteors help Martian clouds form

Mars.
How did the Red Planet get all of its clouds? CU Boulder researchers may have discovered the secret: just add meteors.

Astronomers have long observed clouds in Mars' middle atmosphere, which begins about 18 miles (30 kilometers) above the surface, but have struggled to explain how they formed.

Now, a new study, which will be published on June 17 in the journal Nature Geoscience, examines those wispy accumulations and suggests that they owe their existence to a phenomenon called "meteoric smoke" -- essentially, the icy dust created by space debris slamming into the planet's atmosphere.

The findings are a good reminder that planets and their weather patterns aren't isolated from the solar systems around them.

"We're used to thinking of Earth, Mars and other bodies as these really self-contained planets that determine their own climates," said Victoria Hartwick, a graduate student in the Department of Atmospheric and Ocean Sciences (ATOC) and lead author of the new study. "But climate isn't independent of the surrounding solar system."

The research, which included co-authors Brian Toon at CU Boulder and Nicholas Heavens at Hampton University in Virginia, hangs on a basic fact about clouds: They don't come out of nowhere.

"Clouds don't just form on their own," said Hartwick, also of the Laboratory for Atmospheric and Space Physics at CU Boulder. "They need something that they can condense onto."

On Earth, for example, low-lying clouds begin life as tiny grains of sea salt or dust blown high into the air. Water molecules clump around these particles, becoming bigger and bigger until they form the large puffs that you can see from the ground.

But, as far as scientists can tell, those sorts of cloud seeds don't exist in Mars' middle atmosphere, Hartwick said. And that's what led her and her colleagues to meteors.

Hartwick explained that about two to three tons of space debris crash into Mars every day on average. And as those meteors rip apart in the planet's atmosphere, they inject a huge volume of dust into the air.

To find out if such smoke would be enough to give rise to Mars' mysterious clouds, Hartwick's team turned to massive computer simulations that attempt to mimic the flows and turbulence of the planet's atmosphere.

And sure enough, when they included meteors in their calculations, clouds appeared.

"Our model couldn't form clouds at these altitudes before," Hartwick said. "But now, they're all there, and they seem to be in all the right places."

The idea might not be as outlandish as it sounds, she added. Research has shown that similar interplanetary schmutz may help to seed clouds near Earth's poles.

But she also says that you shouldn't expect to see gigantic thunderheads forming above the surface of Mars anytime soon. The clouds her team studied were much more like bits of cotton candy than the clouds Earthlings are used to.

"But just because they're thin and you can't really see them doesn't mean they can't have an effect on the dynamics of the climate," Hartwick said.

The researchers' simulations, for example, showed that middle atmosphere clouds could have a large impact on the Martian climate. Depending on where the team looked, those clouds could cause temperatures at high altitudes to swing up or down by as much as 18 degrees Fahrenheit (10 degrees Celsius).

And that climatic impact is what gets Brian Toon, a professor in ATOC, excited. He said that the team's findings on modern-day Martian clouds may also help to reveal the planet's past evolution and how it once managed to support liquid water at its surface.

Read more at Science Daily

Jun 18, 2019

Sun's history found buried in Moon's crust

Sun, Earth, Moon (not to proportion; stock image, elements furnished by NASA).
The Sun is why we're here. It's also why Martians or Venusians are not.

When the Sun was just a baby four billion years ago, it went through violent outbursts of intense radiation, spewing scorching, high-energy clouds and particles across the solar system. These growing pains helped seed life on early Earth by igniting chemical reactions that kept Earth warm and wet. Yet, these solar tantrums also may have prevented life from emerging on other worlds by stripping them of atmospheres and zapping nourishing chemicals.

Just how destructive these primordial outbursts were to other worlds would have depended on how quickly the baby Sun rotated on its axis. The faster the Sun turned, the quicker it would have destroyed conditions for habitability.

This critical piece of the Sun's history, though, has bedeviled scientists, said Prabal Saxena, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Saxena studies how space weather, the variations in solar activity and other radiation conditions in space, interacts with the surfaces of planets and moons.

Now, he and other scientists are realizing that the Moon, where NASA will be sending astronauts by 2024, contains clues to the ancient mysteries of the Sun, which are crucial to understanding the development of life.

"We didn't know what the Sun looked like in its first billion years, and it's super important because it likely changed how Venus' atmosphere evolved and how quickly it lost water. It also probably changed how quickly Mars lost its atmosphere, and it changed the atmospheric chemistry of Earth," Saxena said.

The Sun-Moon Connection

Saxena stumbled into investigating the early Sun's rotation mystery while contemplating a seemingly unrelated one: Why, when the Moon and Earth are made of largely the same stuff, is there significantly less sodium and potassium in lunar regolith, or Moon soil, than in Earth soil?

This question, too, revealed through analyses of Apollo-era Moon samples and lunar meteorites found on Earth, has puzzled scientists for decades -- and it has challenged the leading theory of how the Moon formed.

Our natural satellite took shape, the theory goes, when a Mars-sized object smashed into Earth about 4.5 billion years ago. The force of this crash sent materials spewing into orbit, where they coalesced into the Moon.

"The Earth and Moon would have formed with similar materials, so the question is, why was the Moon depleted in these elements?" said Rosemary Killen, a planetary scientist at NASA Goddard who researches the effect of space weather on planetary atmospheres and exospheres.

The two scientists suspected that one big question informed the other -- that the history of the Sun is buried in the Moon's crust.

Killen's earlier work laid the foundation for the team's investigation. In 2012, she helped simulate the effect solar activity has on the amount of sodium and potassium that is either delivered to the Moon's surface or knocked off by a stream of charged particles from the Sun, known as the solar wind, or by powerful eruptions known as coronal mass ejections.

Saxena incorporated the mathematical relationship between a star's rotation rate and its flare activity. This insight was derived by scientists who studied the activity of thousands of stars discovered by NASA's Kepler space telescope: The faster a star spins, they found, the more violent its ejections. "As you learn about other stars and planets, especially stars like our Sun, you start to get a bigger picture of how the Sun evolved over time," Saxena said.

Using sophisticated computer models, Saxena, Killen and colleagues think they may have finally solved both mysteries. Their computer simulations, which they described on May 3 in The Astrophysical Journal Letters, show that the early Sun rotated slower than 50% of baby stars. According to their estimates, within its first billion years, the Sun took at least 9 to 10 days to complete one rotation.

They determined this by simulating the evolution of our solar system under a slow, medium, and then a fast-rotating star. And they found that just one version -- the slow-rotating star -- was able to blast the right amount of charged particles into the Moon's surface to knock enough sodium and potassium into space over time to leave the amounts we see in Moon rocks today.
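The logic of that scenario comparison can be caricatured in a few lines of code: assume the sputtering loss rate of a volatile element like sodium scales with flare activity, which in turn is higher for faster rotators (the Kepler-derived relationship mentioned above). The functional form, the constant, and the function names below are invented purely for illustration; the team's actual simulations were far more sophisticated:

```python
import math

def retained_fraction(rotation_days, years=1e9, k=3.5e-10):
    """Toy model: faster rotation -> more flares -> faster sputtering loss.

    Assumes the annual loss rate is inversely proportional to the star's
    rotation period, a crude stand-in for the rotation-activity relation.
    The constant k is arbitrary and chosen only for illustration.
    """
    loss_rate_per_year = k / rotation_days
    return math.exp(-loss_rate_per_year * years)

# Compare slow, medium and fast rotators over the Sun's first billion years.
for period in (10, 5, 2):  # rotation period in days
    print(f"{period:>2}-day rotation: "
          f"{retained_fraction(period):.2f} of sodium retained")
```

Under these invented numbers, the slow rotator retains the most sodium, matching the qualitative argument: only a sufficiently gentle young Sun leaves behind the element abundances seen in Moon rocks today.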

"Space weather was probably one of the major influences for how all the planets of the solar system evolved," Saxena said, "so any study of habitability of planets needs to consider it."

Life Under the Early Sun

The rotation rate of the early Sun is partly responsible for life on Earth. But for Venus and Mars -- both rocky planets similar to Earth -- it may have precluded it. (Mercury, the closest rocky planet to the Sun, never had a chance.)

Earth's atmosphere was once very different from the oxygen-dominated one we find today. When Earth formed 4.6 billion years ago, a thin envelope of hydrogen and helium clung to our molten planet. But outbursts from the young Sun stripped away that primordial haze within 200 million years.

As Earth's crust solidified, volcanoes gradually coughed up a new atmosphere, filling the air with carbon dioxide, water, and nitrogen. Over the next billion years, the earliest bacterial life consumed that carbon dioxide and, in exchange, released methane and oxygen into the atmosphere. Earth also developed a magnetic field, which helped protect it from the Sun, allowing our atmosphere to transform into the oxygen- and nitrogen-rich air we breathe today.

"We were lucky that Earth's atmosphere survived the terrible times," said Vladimir Airapetian, a senior Goddard heliophysicist and astrobiologist who studies how space weather affects the habitability of terrestrial planets. Airapetian worked with Saxena and Killen on the early Sun study.

Had our Sun been a fast rotator, it would have erupted with super flares 10 times stronger than any in recorded history, at least 10 times a day. Even Earth's magnetic field wouldn't have been enough to protect it. The Sun's blasts would have decimated the atmosphere, reducing air pressure so much that Earth wouldn't retain liquid water. "It could have been a much harsher environment," Saxena noted.

But the Sun rotated at an ideal pace for Earth, which thrived under the early star. Venus and Mars weren't so lucky. Venus was once covered in water oceans and may have been habitable. But due to many factors, including solar activity and the lack of an internally generated magnetic field, Venus lost its hydrogen -- a critical component of water. As a result, its oceans evaporated within its first 600 million years, according to estimates. The planet's atmosphere became thick with carbon dioxide, a heavy molecule that's harder to blow away. These forces led to a runaway greenhouse effect that keeps Venus a sizzling 864 degrees Fahrenheit (462 degrees Celsius), far too hot for life.

Mars, farther from the Sun than Earth is, would seem to be safer from stellar outbursts. Yet, it had less protection than did Earth. Due partly to the Red Planet's weak magnetic field and low gravity, the early Sun gradually was able to blow away its air and water. By about 3.7 billion years ago, the Martian atmosphere had become so thin that liquid water immediately evaporated into space. (Water still exists on the planet, frozen in the polar caps and in the soil.)

After influencing the course for life (or lack thereof) on the inner planets, the aging Sun gradually slowed its pace and continues to do so. Today, it rotates once every 27 days, three times slower than it did in its infancy. The slower spin renders it much less active, though the Sun still has violent outbursts occasionally.

Exploring the Moon, Witness of Solar System Evolution

To learn about the early Sun, Saxena said, you need to look no further than the Moon, one of the most well-preserved artifacts from the young solar system.

"The reason the Moon ends up being a really useful calibrator and window into the past is that it has no annoying atmosphere and no plate tectonics resurfacing the crust," he said. "So as a result, you can say, 'Hey, if solar particles or anything else hit it, the Moon's soil should show evidence of that.'"

Apollo samples and lunar meteorites are a great starting point for probing the early solar system, but they are only small pieces in a large and mysterious puzzle. The samples are from a small region near the lunar equator, and scientists can't tell with complete certainty where on the Moon the meteorites came from, which makes it hard to place them into geological context.

Since the South Pole is home to the permanently shadowed craters where we expect to find the best-preserved material on the Moon, including frozen water, NASA is aiming to send a human expedition to the region by 2024.

Read more at Science Daily

Two new Earth-like planets discovered near Teegarden's Star

Exoplanets illustration.
An international research team led by the University of Göttingen has discovered two new Earth-like planets near one of our closest neighboring stars. Teegarden's Star is only about 12.5 light-years away from Earth and is one of the smallest known stars: its surface temperature is only about 2,700 °C, and its mass is roughly one-tenth that of the Sun. Although it is so close to us, the star wasn't discovered until 2003. The scientists observed the star for about three years. The results were published in the journal Astronomy & Astrophysics.

Their data clearly show the existence of two planets. "The two planets resemble the inner planets of our solar system," explains lead author Mathias Zechmeister of the Institute for Astrophysics at the University of Göttingen. "They are only slightly heavier than Earth and are located in the so-called habitable zone, where water can be present in liquid form."

The astronomers suspect that the two planets could be part of a larger system. "Many stars are apparently surrounded by systems with several planets," explains co-author Professor Stefan Dreizler of the University of Göttingen. Teegarden's Star is the smallest star for which researchers have so far been able to measure the mass of a planet directly. "This is a great success for the Carmenes project, which was specifically designed to search for planets around the lightest stars," says Professor Ansgar Reiners of the University of Göttingen, one of the scientific directors of the project.

Although planetary systems around similar stars are known, they have always been detected using the "transit method" -- the planets have to pass visibly in front of the star and darken it for a moment, which only happens in a very small fraction of all planetary systems. Such transits have not yet been found for the new planets. But the system is located at a special place in the sky: from Teegarden's star you can see the planets of the solar system passing in front of the Sun.

"An inhabitant of the new planets would therefore have the opportunity to view the Earth using the transit method," says Reiners. The new planets are the tenth and eleventh discovered by the team.

Carmenes is carried out by the universities of Göttingen, Hamburg, Heidelberg, and Madrid, the Max-Planck-Institut für Astronomie in Heidelberg, institutes of the Consejo Superior de Investigaciones Científicas in Barcelona, Granada, and Madrid, the Thüringer Landessternwarte, the Instituto de Astrofísica de Canarias, and the Calar Alto Observatory.

From Science Daily

The brain consumes half of a child's energy -- and that could matter for weight gain

Children at school.
Weight gain occurs when an individual's energy intake exceeds their energy expenditure -- in other words, when calories in exceed calories out. What is less well understood is the fact that, on average, nearly half of the body's energy is used by the brain during early childhood.

In a new paper published in the journal Proceedings of the National Academy of Sciences (PNAS), "A hypothesis linking the energy demand of the brain to obesity risk," co-authors Christopher Kuzawa of Northwestern University and Clancy Blair of New York University School of Medicine, propose that variation in the energy needs of brain development across kids -- in terms of the timing, intensity and duration of energy use -- could influence patterns of energy expenditure and weight gain.

"We all know that how much energy our bodies burn is an important influence on weight gain," said Kuzawa, professor of anthropology in the Weinberg College of Arts and Sciences and a faculty fellow with the Institute for Policy Research at Northwestern. "When kids are 5, their brains use almost half of their bodies' energy. And yet, we have no idea how much the brain's energy expenditure varies between kids. This is a huge hole in our understanding of energy expenditure."

"A major aim of our paper is to bring attention to this gap in understanding and to encourage researchers to measure the brain's energy use in future studies of child development, especially those focused on understanding weight gain and obesity risk."

According to the authors, another important unknown is whether programs designed to stimulate brain development through enrichment, such as preschool programs like Head Start, might influence the brain's pattern of energy use.

"We believe it plausible that increased energy expenditure by the brain could be an unanticipated benefit to early child development programs, which, of course, have many other demonstrated benefits," Kuzawa said. "That would be a great win-win."

This new hypothesis was inspired by Kuzawa and his colleagues' 2014 study showing that the brain consumes a lifetime peak of two-thirds of the body's resting energy expenditure, and almost half of total expenditure, when kids are five years old. This study also showed that ages when the brain's energy needs increase during early childhood are also ages of declining weight gain. As the energy needed for brain development declines in older children and adolescents, the rate of weight gain increases in parallel.

Read more at Science Daily

The evolution of puppy dog eyes

Dog with raised eyebrows.
Dogs have evolved new muscles around the eyes to better communicate with humans.

New research comparing the anatomy and behavior of dogs and wolves suggests dogs' facial anatomy has changed over thousands of years specifically to allow them to better communicate with humans.

In the first detailed analysis comparing the anatomy and behavior of dogs and wolves, researchers found that the facial musculature of both species was similar, except above the eyes. Dogs have a small muscle that allows them to intensely raise their inner eyebrow; wolves do not.

The authors suggest that the inner-eyebrow-raising movement triggers a nurturing response in humans because it makes the dogs' eyes appear larger and more infant-like, and also resembles a movement humans produce when they are sad.

The research team, led by comparative psychologist Dr Juliane Kaminski, at the University of Portsmouth, included a team of behavioural and anatomical experts in the UK and USA.

It is published in the journal Proceedings of the National Academy of Sciences (PNAS).

Dr Kaminski said: "The evidence is compelling that dogs developed a muscle to raise the inner eyebrow after they were domesticated from wolves.

"We also studied dogs' and wolves' behavior, and when exposed to a human for two minutes, dogs raised their inner eyebrows more and at higher intensities than wolves.

"The findings suggest that expressive eyebrows in dogs may be a result of humans' unconscious preferences that influenced selection during domestication. When dogs make the movement, it seems to elicit a strong desire in humans to look after them. This would give dogs that move their eyebrows more a selection advantage over others and reinforce the 'puppy dog eyes' trait for future generations."

Dr Kaminski's previous research showed dogs moved their eyebrows significantly more when humans were looking at them compared to when they were not looking at them.

She said: "The AU101 movement is significant in the human-dog bond because it might elicit a caring response from humans but also might create the illusion of human-like communication."

Lead anatomist Professor Anne Burrows, at Duquesne University, Pittsburgh, USA, co-author of the paper, said: "To determine whether this eyebrow movement is a result of evolution, we compared the facial anatomy and behaviour of these two species and found the muscle that allows for the eyebrow raise in dogs was, in wolves, a scant, irregular cluster of fibres.

"The raised inner eyebrow movement in dogs is driven by a muscle which doesn't consistently exist in their closest living relative, the wolf.

"This is a striking difference for species separated only 33,000 years ago and we think that the remarkably fast facial muscular changes can be directly linked to dogs' enhanced social interaction with humans."

Dr Kaminski and co-author, evolutionary psychologist Professor Bridget Waller, also at the University of Portsmouth, previously mapped the facial muscular structure of dogs, naming the movement responsible for a raised inner eyebrow the Action Unit (AU) 101.

Professor Waller said: "This movement makes a dog's eyes appear larger, giving them a childlike appearance. It could also mimic the facial movement humans make when they're sad.

"Our findings show how important faces can be in capturing our attention, and how powerful facial expression can be in social interaction."

Co-author and anatomist Adam Hartstone-Rose, at North Carolina State University, USA, said: "These muscles are so thin that you can literally see through them -- and yet the movement that they allow seems to have such a powerful effect that it appears to have been under substantial evolutionary pressure. It is really remarkable that these simple differences in facial expression may have helped define the relationship between early dogs and humans."

Co-author Rui Diogo, an anatomist at Howard University, Washington DC, USA, said: "I must admit that I was surprised to see the results myself because the gross anatomy of muscles is normally very slow to change in evolution, and this happened very fast indeed, in just some dozens of thousands of years."

Soft tissue, including muscle, doesn't tend to survive in the fossil record, making the study of this type of evolution harder.

The only dog in the study that did not have the muscle was the Siberian husky, which is among the more ancient dog breeds.

An alternative explanation for the human-dog bond could be that humans have a preference for other individuals whose eyes show visible whites, and that the intense AU101 movement exposes the white part of the dog's eyes.

Read more at Science Daily

NASA's Cassini reveals new sculpting in Saturn's rings

A false-color image mosaic shows Daphnis, one of Saturn's ring-embedded moons, and the waves it kicks up in the Keeler gap. Images collected by Cassini's close orbits in 2017 are offering new insight into the complex workings of the rings.
As NASA's Cassini dove close to Saturn in its final year, the spacecraft provided intricate detail on the workings of Saturn's complex rings, new analysis shows.

Although the mission ended in 2017, science continues to flow from the data collected. A new paper published June 13 in Science describes results from four Cassini instruments taking their closest-ever observations of the main rings.

Findings include fine details of features sculpted by masses embedded within the rings. Textures and patterns, from clumpy to strawlike, pop out of the images, raising questions about the interactions that shaped them. New maps reveal how colors, chemistry and temperature change across the rings.

Like a planet under construction inside a disk of protoplanetary material, tiny moons embedded in Saturn's rings (which are named A through G in order of their discovery) interact with the particles around them. In that way, the paper provides further evidence that the rings are a window into the astrophysical disk processes that shape our solar system.

The observations also deepen scientists' understanding of the complex Saturn system. Scientists conclude that at the outer edge of the main rings, a series of similar impact-generated streaks in the F ring have the same length and orientation, showing that they were likely caused by a flock of impactors that all struck the ring at the same time. This shows that the ring is shaped by streams of material that orbit Saturn itself rather than, for instance, by cometary debris (moving around the Sun) that happens to crash into the rings.

"These new details of how the moons are sculpting the rings in various ways provide a window into solar system formation, where you also have disks evolving under the influence of masses embedded within them," said lead author and Cassini scientist Matt Tiscareno of the SETI Institute in Mountain View, California.

Enduring Mysteries

At the same time, new puzzles have arisen and old mysteries have deepened with the latest research. The close-up ring images brought into focus three distinct textures -- clumpy, smooth and streaky -- and made it clear that these textures occur in belts with sharp boundaries. But why? In many places the belts aren't connected to any ring characteristics that scientists have yet identified.

"This tells us the way the rings look is not just a function of how much material there is," Tiscareno said. "There has to be something different about the characteristics of the particles, perhaps affecting what happens when two ring particles collide and bounce off each other. And we don't yet know what it is."

The data analyzed were gathered during the Ring Grazing Orbits (December 2016 to April 2017) and the Grand Finale (April to September 2017), when Cassini flew just above Saturn's cloud tops. As the spacecraft was running out of fuel, the mission team deliberately plunged it into the planet's atmosphere in September 2017.

Cassini's Visible and Infrared Mapping Spectrometer (VIMS) uncovered another mystery. The spectrometer, which imaged the rings in visible and near-infrared light, identified unusually weak water-ice bands in the outermost part of the A ring. That was a surprise, because the area is known to be highly reflective, which usually is a sign of less-contaminated ice and thus stronger water ice bands.

The new spectral map also sheds light on the composition of the rings. And while scientists already knew that water ice is the main component, the spectral map ruled out detectable ammonia ice and methane ice as ingredients. But it also doesn't see organic compounds -- a surprise, given the organic material Cassini has discovered flowing from the D ring into Saturn's atmosphere.

"If organics were there in large amounts -- at least in the main A, B and C rings -- we'd see them," said Phil Nicholson, a Cassini VIMS scientist at Cornell University in Ithaca, New York. "I'm not convinced yet that they are a major component of the main rings."

The research signals the start of the next era of Cassini science, said NASA's Ames Research Center's Jeff Cuzzi, who's been studying Saturn's rings since the 1970s and is the interdisciplinary scientist for rings on the Cassini mission.

"We see so much more, and closer up, and we're getting new and more interesting puzzles," Cuzzi said. "We are just settling into the next phase, which is building new, detailed models of ring evolution -- including the new revelation from Cassini data that the rings are much younger than Saturn."

The new observations give scientists an even more intimate view of the rings than they had before, and each examination reveals new complexities, said Cassini Project Scientist Linda Spilker, based at NASA's Jet Propulsion Laboratory in Pasadena, California.

"It's like turning the power up one more notch on what we could see in the rings. Everyone just got a clearer view of what's going on," Spilker said. "Getting that extra resolution answered many questions, but so many tantalizing ones remain."

Read more at Science Daily

Jun 17, 2019

The complex fate of Antarctic species in the face of a changing climate

Oxygen concentrations in both the open ocean and coastal waters have declined by 2-5% since at least the middle of the 20th century.

This is one of the most important changes occurring in an ocean becoming increasingly modified by human activities, with raised water temperatures, carbon dioxide content and nutrient inputs.

In this way, humans are already altering the abundances and distributions of marine species, and the decline in oxygen could pose a new set of threats to marine life.

Writing in Philosophical Transactions of the Royal Society B, scientists present support for the theory that marine invertebrates with larger body size are generally more sensitive to reductions in oxygen than smaller animals, and so will be more sensitive to future global climate change.

It is widely believed that the occurrence of gigantic species in polar waters is made possible by the fact that there is more oxygen dissolved in ice-cold water than in the warmer waters of temperate and tropical regions.

So as our ocean warms and oxygen decreases, it has been suggested that such oxygen limitation will have a greater effect on larger than smaller marine invertebrates and fish.

The study was conducted by John Spicer, Professor of Marine Zoology at the University of Plymouth, and Dr Simon Morley, an Ecophysiologist with the British Antarctic Survey (BAS).

They investigated how a number of different-sized amphipod species -- found in abundance in Antarctic waters, and relatives of the sandhoppers seen on temperate beaches -- performed when the oxygen in the water they were in was reduced.

Overall, there was a reduction in performance with increasing body size, supporting the theory that larger species may well be more vulnerable because of oxygen limitation.

However, the picture is a little more complex than this with evolutionary innovation -- such as the presence of oxygen binding pigments in their body fluids to enhance oxygen transport, and novel gas exchange structures in some, but not all, species -- to some extent offsetting any respiratory disadvantages of large body size.

Professor Spicer, who has spent more than 30 years examining the effect of climate change on marine organisms, said: "Over the last 50 years, the oxygen in our oceans has decreased by around 2-5% and this is already having an effect on species' ability to function. Unless they adapt, many larger marine invertebrates will either shrink in size or face extinction, which would have a profoundly negative impact on the ecosystems of which they are a part. This is obviously a major cause for concern.

"Our research also shows that some species have evolved mechanisms to compensate for reductions in oxygen, and so it is not always as simple as drawing a link between size and future survival. But it would be foolhardy to pin our hopes on such 'evolutionary rescue'. Many large species will almost certainly be the first casualties of our warming, oxygen-poor ocean."

Read more at Science Daily

100-year-old physics model replicates modern Arctic ice melt

The Arctic is melting faster than we thought it would. In fact, Arctic ice extent is at a record low. When that happens -- when a natural system behaves differently than scientists expect -- it's time to take another look at how we understand the system. University of Utah mathematician Ken Golden and atmospheric scientist Court Strong study the patterns formed by ponds of melting water atop the ice. The ponds are dark, while the ice is bright, meaning that the bigger the ponds, the darker the surface and the more solar energy it absorbs.

So, it's more than a little important to know how the ice's reflectivity, also called albedo, is changing. That's a key component in understanding the balance between solar energy coming in and energy reflected out of the Arctic. Earlier work showed that the presence or absence of melt ponds in global climate models can have a dramatic effect on long term predictions of Arctic sea ice volume.

To model the melt ponds' growth, Golden, Strong and their colleagues tweaked a nearly 100-year-old physics model, called the Ising model, that explains how a material may gain or lose magnetism by accounting for how atoms interact with each other and an applied magnetic field. In their model, they replaced the property of an atom's magnetic spin (either up or down) with the property of frozen (white) or melted (blue) sea ice.
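The swap from magnetic spins to ice states can be sketched in a few lines. The toy model below is a hedged illustration, not the authors' actual code: the function names, parameter values, and Metropolis-style update rule are my own choices for a minimal Ising-type lattice, with +1 for frozen (white) ice and -1 for melted (blue) pond. Because neighboring sites prefer to match, melted sites coarsen into connected patches rather than random speckle:

```python
import math
import random

def melt_pond_ising(n=32, coupling=1.0, field=0.0, steps=20000, seed=42):
    """Toy Ising-style lattice: +1 = frozen (white) ice, -1 = melted (blue) pond."""
    rng = random.Random(seed)
    grid = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        s = grid[i][j]
        # Sum of the four nearest neighbors (periodic boundaries).
        nb = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
              + grid[i][(j - 1) % n] + grid[i][(j + 1) % n])
        # Energy cost of flipping this site; the "field" term plays the role
        # of an overall melting/freezing bias in place of a magnetic field.
        dE = 2 * s * (coupling * nb + field)
        # Metropolis rule: always accept downhill flips, sometimes uphill ones.
        if dE <= 0 or rng.random() < math.exp(-dE):
            grid[i][j] = -s
    return grid

def pond_fraction(grid):
    """Fraction of the surface covered by melt ponds (-1 sites);
    a rough proxy for how far the albedo has dropped."""
    flat = [s for row in grid for s in row]
    return flat.count(-1) / len(flat)
```

With `coupling > 0` the melted sites clump into ponds, which is the qualitative pattern-formation behavior the paragraph describes; the real study additionally ties the dynamics to observed melt-pond size statistics.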

"The model captures the essential mechanism of pattern formation of Arctic melt ponds," the researchers write, and replicates important characteristics of the variation in pond size and geometry. This work is the first to account for the basic physics of melt ponds and to produce realistic patterns that accurately demonstrate how melt water is distributed over the sea ice surface. The geometry of the melt water patterns determines both sea ice albedo and the amount of light that penetrates the ice, which significantly impacts the ecology of the upper ocean.

Unfortunately, a model like this can't stop the ice from melting. But it can help us make better estimates of how quickly Arctic ice or permafrost is disappearing -- and better climate models help us prepare for the warmer future ahead.

From Science Daily

Cold weather increases the risk of fatal opioid overdoses

Cold weather snaps are followed by a marked increase in fatal opioid overdoses, a new study finds.

A research team led by Brandon Marshall, an associate professor of epidemiology at Brown University's School of Public Health, found a 25 percent increase in fatal opioid overdoses after periods of freezing temperatures, compared to days with an average temperature of 52 degrees Fahrenheit.

And while the researchers continue to investigate the reasons for this pattern, Marshall suggests some interventions that could reduce overdose risk regardless of the cause. These include cold weather-triggered public health messages that remind people to check on neighbors and loved ones who use opioids, or those that warn individuals who use drugs not to use alone, especially during cold weather.

"It is well known that opioids induce respiratory depression, and that's what causes a fatal overdose," Marshall said. "However, there may be a host of other risk factors that contribute to opioid overdose deaths, which could be avenues for effective interventions. Regardless of what is causing the correlation between cold weather and fatal overdoses, our findings suggest that agencies and organizations should consider scaling up harm-reduction efforts after a period of cold weather."

The findings were published in the journal Epidemiology.

The research involved a partnership between opioid researchers at Brown and the Rhode Island Department of Health (RIDOH), as well as Brown experts who study how the environment impacts human health. Marshall also serves as scientific director for Prevent Overdose R.I., an overdose-surveillance dashboard operated with RIDOH, which provided the data on overdose deaths in Rhode Island.

The research team looked at more than 3,000 opioid-related deaths in Connecticut and Rhode Island from 2014 to 2017. They compared the average temperature on the day of each death -- and up to two weeks before -- to the average temperature of three reference days in the same month. They found that an average temperature of 32 degrees three to seven days prior to the day of death was associated with a 25 percent increase in the risk of fatal overdose compared to periods with an average temperature of 52 degrees.
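The comparison described above boils down to a rate ratio between cold-exposed periods and same-month reference days. The sketch below is a simplified stand-in for the study's actual time-stratified analysis; the function name and the counts are hypothetical, chosen only so the ratio matches the reported 25 percent increase:

```python
def rate_ratio(deaths_cold, days_cold, deaths_ref, days_ref):
    """Crude overdose rate after cold spells divided by the rate on
    comparable reference days. A ratio of 1.25 corresponds to the
    25 percent increase in risk reported in the study."""
    return (deaths_cold / days_cold) / (deaths_ref / days_ref)

# Hypothetical counts for illustration: 125 deaths over 500 days following
# freezing spells vs. 100 deaths over 500 reference days.
print(rate_ratio(125, 500, 100, 500))  # -> 1.25
```

The real analysis conditions each death on reference days drawn from the same month, which controls for seasonal trends that a crude ratio like this one would miss.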

Cold snaps may contribute to increased risk of a fatal opioid overdose in several ways, the study said.

One possibility is that opioid use and exposure to cold weather could combine to create a negative biological effect, said William Goedel, a doctoral student at the School of Public Health, who spearheaded the analysis. Opioids are known to reduce breathing, and even without the effect of drugs, it is already harder to breathe in cold air. Some opioids also reduce the temperature at which the body starts to shiver, which makes it harder to regulate one's body temperature, he added.

Cold weather also changes people's behavior, which could increase the risk of overdose, Marshall said. For example, people may be more likely to use opioids alone when the weather is cold, without someone present who can administer the overdose-reversal drug naloxone. Additionally, cold weather may impact the opioid distribution network. This could potentially increase the risk of using drugs containing illicit fentanyl, or the risk of using drugs more potent than a person is accustomed to, Goedel added.

Despite the increase in overdoses in the days following cold snaps, Marshall said the team was surprised to find no direct link between low temperatures on the day of death and the risk of fatal overdose.

"One possibility is that the same-day temperature is based around the recorded day of death, which in some cases is an estimate, especially when a body isn't found for a couple of days," Goedel said. "The lack of a strong correlation with temperature on the day of death could be due to the uncertainty of when people actually died."

The findings might also reflect the cumulative effect of low temperatures on overdose risk, Goedel added.

"Thirty-two degrees on just one day is cold, but to maintain an average of 32 degrees for three or four days means there was a long time where it was quite cold."

Most of the fatal overdoses during the study occurred indoors, Marshall noted. This suggests that providing support for home heating costs or providing warm locations for people to go to during cold spells could also reduce opioid overdoses, he said, although more research is needed.

Marshall would also like to see if the linkage between cold weather and increased risk of overdose deaths is due to major storms or if lower-than-average temperatures alone increase the risk of a fatal overdose.

The researchers also examined above-average temperatures, but did not detect any clear pattern.

Read more at Science Daily

Global commodities trade and consumption place the world's primates at risk of extinction

Orangutan.
A recent study published in the peer-reviewed journal PeerJ -- the Journal of Life and Environmental Sciences -- highlights the fact that the economic benefits of commodity exports for primate habitat countries have been limited relative to the extreme environmental costs of pollution, habitat degradation, loss of biodiversity, continued food insecurity and the threat of emerging diseases.

The world's primate fauna, distributed across the Neotropics, Africa, and South and Southeast Asia, represents an important global component of the Earth's land-based biodiversity. The presence and activities of primates support a range of tropical community-wide ecological functions and services that provide vital resources to natural ecosystems, including local human populations.

Alarmingly, around 60% of primate species are now threatened with extinction and ~75% have declining populations as a result of escalating anthropogenic pressures resulting in deforestation, habitat degradation, and increased spatial conflict between an expanding human population and the natural range of primates.

The study finds that growing market demands for food and nonfood commodities from high-income nations and the global community at large are significant drivers of rapid and widespread primate habitat loss and degradation.

The global consumption of food and natural resources, along with an increasingly globalized economy, has created an expanding international market for agricultural products. Such growth is also reflected in the growing area of commodity-driven deforestation. Available evidence indicates that between 2001 and 2015, 160 million hectares of forest were lost in the tropics due to human activities, and that 50% or more of this loss was commodity driven. That is, forests were converted to agricultural fields and cattle pastures, or cleared for mineral and metal mining, fossil fuel exploration, and urbanization.

Global commodity resource extraction is predicted to more than double, from 85bn tonnes today to 186bn by the year 2050. Reversing the current trend of primate population decline and extinction due to habitat loss and degradation will therefore require a stronger global resolve to reduce the world's per capita demand for forest-risk food and nonfood commodities from primate-range regions, while at the same time implementing sustainable land-use practices that improve the standard of living for local human communities, protect local biodiversity, and mitigate climate change.

In order to avoid the impending extinction of the world's primates, the researchers suggest a number of measures, including changing global consumer habits (e.g., using less oil seed, eating less meat), creating an international environmental improvement fund to mitigate the negative effects of the forest-risk commodities trade, and assigning responsibility for environmental damage to the international corporations that control production, export, and supply chains.

Authors Alejandro Estrada, Paul A. Garber and Abhishek Chaudhary write, "Growing global consumer demands for food and non-food commodities from primate range regions are placing primate populations at risk of extinction. These increasing demands have resulted in an accelerated global expansion of agriculture and of extractive industries and in the growth of infrastructure to support these activities leading to widespread primate habitat loss and degradation."

Read more at Science Daily

Jun 16, 2019

Exercise may have different effects in the morning and evening

Researchers from the University of Copenhagen have learned that the effect of exercise may differ depending on the time of day it is performed. In mice, they demonstrate that exercise in the morning results in an increased metabolic response in skeletal muscle, while exercise later in the day increases energy expenditure for an extended period of time.

We probably all know how important a healthy circadian rhythm is. Too little sleep can have severe health consequences. But researchers are still making new discoveries confirming that the body's circadian clock affects our health.

Now, researchers from the University of Copenhagen -- in collaboration with researchers from the University of California, Irvine -- have learned that the effect of exercise may differ depending on the time of day it is performed. Studies in mice reveal that the effect of exercise performed at the beginning of the animals' dark/active phase, corresponding to our morning, differs from the effect of exercise performed at the beginning of the light/resting phase, corresponding to our evening.

'There appears to be rather significant differences between the effect of exercise performed in the morning and evening, and these differences are probably controlled by the body's circadian clock. Morning exercise initiates gene programs in the muscle cells, making them more effective and better capable of metabolising sugar and fat. Evening exercise, on the other hand, increases whole body energy expenditure for an extended period of time', says one of the researchers behind the study, Associate Professor Jonas Thue Treebak from the Novo Nordisk Foundation Center for Basic Metabolic Research.

Morning Exercise Is Not Necessarily Better than Evening Exercise

The researchers have measured a number of effects in the muscle cells, including the transcriptional response and effects on the metabolites. The results show that responses are far stronger in both areas following exercise in the morning, and that this is likely to be controlled by a central mechanism involving the protein HIF1-alpha, which directly regulates the body's circadian clock.

Morning exercise appears to increase the ability of muscle cells to metabolise sugar and fat, and this type of effect interests the researchers in relation to people who are severely overweight or have type 2 diabetes.

On the other hand, the results also show that exercise in the evening increases energy expenditure in the hours after exercise. Therefore, the researchers cannot necessarily conclude that exercise in the morning is better than exercise in the evening, Jonas Thue Treebak stresses.

Read more at Science Daily

Phantom sensations: When the sense of touch deceives

Without being aware of it, people sometimes wrongly perceive tactile sensations. A new study in the scientific journal "Current Biology" shows how healthy people can sometimes misattribute touch to the wrong side of their body, or even to a completely wrong part of the body. The study was conducted by researchers at Bielefeld University's Cluster of Excellence CITEC, the University of Hamburg, and New York University.

"The limitations of the previous explanations for how and where our brain processes touch become apparent when it comes to individuals who have had parts of their bodies amputated or suffer from neurological diseases," says Professor Dr. Tobias Heed, one of the authors of the study. His research group "Biopsychology and Cognitive Neuroscience" is part of CITEC and the Department of Psychology at Bielefeld University. "People who have had a hand or a leg amputated often report phantom sensations in these limbs. But where exactly does this false perception come from?"

To begin answering this question, Heed, working together with Dr. Stephanie Badde (New York University, USA) and Professor Dr. Brigitte Röder (University of Hamburg), studied whether phantom sensations could also be found in healthy people. "In doing so, we showed that healthy adults actually did systematically misattribute touch on the hands to the feet, and vice versa," says Heed.

The Starting Point

In the brain, neighboring neurons respond to neighboring parts of the skin. "Previously, scientists thought that our conscious perception of where a touch occurred stems from a topographical map in the brain. Following this assumption, parts of the body such as the hands, feet, or face are represented on this map. Our new findings, however, demonstrate that other characteristics of touch are also used to attribute a touch to parts of the body," says Heed. He refers to this presumed "map" as an anatomical reference system. It was previously also believed that spatial perception influences the processing of touch -- that is, where a touch takes place in spatial terms, such as to the left, in front of, or below, as Heed explains. Many previous findings were interpreted as evidence that the brain also uses this second map, which is referred to as an external reference system.

"When parts of the body are positioned on the opposite side of the body from where they usually are -- for example, when crossing your legs -- the two coordinate systems come into conflict." The external coordinate system then locates, for instance, the left leg on the right side -- which does not conform to what is stored in the brain about the side of the body the leg belongs to. "In our study, our initial goal was to sort out the role of the brain's anatomical perception as well as the impact of spatial perception," says Heed.

The Study

For the experiments, tactile stimulators were affixed to each of the study participants' hands and feet. These stimulators could generate a sensation on the skin. Using this impulse generator, the researchers then quickly touched the test subjects successively on two different parts of the body, such as on the left foot and the left hand. In the next step, the participants said or showed where they felt the first touch. This process was then repeated several hundred times on each test subject. In some instances, the study participants had to cross their feet or their hands, while at other times their limbs remained in their normal positions. "Remarkably, in 8% of all cases, subjects attributed the first touch to a part of the body that had not even been touched -- this is a kind of phantom sensation," explains Stephanie Badde, the study's lead author.

The Reasons Why


"The previous conception -- that the attributed location of touch on the body depends on 'maps' of the body -- cannot explain these new findings. We show that phantom sensations depend on three characteristics," says Tobias Heed. "The most important is the identity of the limb -- whether we're dealing with a hand or a foot. This is why a touch on one hand is often perceived on the other hand."

The second most important factor is the side of the body to which the touched limb belongs, which explains why a touch on the left foot can sometimes erroneously be felt on the left hand.

Another factor is the canonical anatomical position of the part of the body in question -- where in space the hands or the feet are usually located. The researchers demonstrated this in their experiment with crossed body parts: the left hand was positioned on the right side. If the left hand was touched, the brain sometimes misattributed the touch to the right foot -- a part of the body that belongs neither to the same side of the body nor to the same external spatial position as the part that was actually touched. "What is decisive is the normal position of the part of the body that was touched: here, the left hand elicits a response from a part of the body that is now positioned where the touched hand would normally be located," says Stephanie Badde.

Read more at Science Daily