Jan 26, 2019

Fault lines are no barrier to safe storage of CO2 below ground

Carbon dioxide emissions can be captured and securely stored in underground rocks, even if geological faults are present, research by the University of Edinburgh has confirmed.

There is minimal possibility of the gas escaping from fault lines back into the atmosphere, the study has shown.

The findings are further evidence that an emerging technology known as Carbon Capture and Storage (CCS), in which CO2 gas emissions from industry are collected and transported for underground storage, is reliable.

Such an approach can reduce emissions of CO2 and help to limit the impact of climate change. If widely adopted, CCS could help meet targets set by the 2015 UN Paris Agreement, which seeks to limit climate warming to below 2C compared with pre-industrial levels.

The latest findings, from tests on a naturally occurring CO2 reservoir, may address public concerns over the proposed long-term storage of carbon dioxide in depleted gas and oil fields.

Scientists from the Universities of Edinburgh, Freiburg, Glasgow and Heidelberg studied a natural CO2 repository in Arizona, US, where gas migrates through geological faults to the surface.

Researchers used chemical analysis to calculate the amount of gas that had escaped the underground store over almost half a million years.

They found that a very small amount of carbon dioxide escaped the site each year, well within the safe levels needed for effective storage.

The study, published in Scientific Reports, was supported by the European Union and Natural Environment Research Council.

Dr Stuart Gilfillan, of the University of Edinburgh's School of GeoSciences, who jointly led the study, said: "This shows that even sites with geological faults are robust, effective stores for CO2. This find significantly increases the number of sites around the world that may be suited to storage of this harmful greenhouse gas."

Dr Johannes Miocic, of the University of Freiburg, who jointly led the study, said: "The safety of carbon dioxide storage is crucial for successful widespread implementation of much-needed carbon capture and storage technology. Our research shows that even imperfect sites can be secure stores for hundreds of thousands of years."

From Science Daily

Rocking motion improves sleep and memory, studies in mice and people show

Asleep in a hammock.
Anyone who has ever put a small child to bed or drifted off in a gently swaying hammock will know that a rocking motion makes getting to sleep seem easier. Now, two new studies reported in Current Biology on January 24, one conducted in young adults and the other in mice, add to evidence for the broad benefits of a rocking motion during sleep. In fact, the studies in people show that rocking not only leads to better sleep, but it also boosts memory consolidation during sleep.

"Having a good night's sleep means falling asleep rapidly and then staying asleep during the whole night," says Laurence Bayer of the University of Geneva, Switzerland. "Our volunteers -- even if they were all good sleepers -- fell asleep more rapidly when rocked and had longer periods of deeper sleep associated with fewer arousals during the night. We thus show that rocking is good for sleep."

Bayer and colleagues had earlier shown that continuous rocking during a 45-minute nap helped people to fall asleep faster and sleep more soundly. In the new study, led by Bayer and Sophie Schwartz of the University of Geneva, the researchers wanted to explore the effects of rocking on sleep and its associated brain waves throughout the night.

The researchers enlisted 18 healthy young adults to undergo sleep monitoring in the lab. The first night was intended to get them used to sleeping there. They then stayed two more nights -- one sleeping on a gently rocking bed and the other sleeping on an identical bed that wasn't moving.

The data showed that participants fell asleep faster while rocking. Once asleep, they also spent more time in non-rapid eye movement sleep, slept more deeply, and woke up less.

Next, the researchers wanted to know how that better sleep influenced memory. To assess memory consolidation, participants studied word pairs. The researchers then measured their accuracy in recalling those paired words in an evening session compared to the next morning when they woke up. They found that people did better on the morning test when they were rocked during sleep.

Further studies showed that rocking affects brain oscillations during sleep. They saw that the rocking motion caused an entrainment of specific brain oscillations of non-rapid eye movement sleep (slow oscillations and spindles). As a result, the continuous rocking motion helped to synchronize neural activity in the thalamo-cortical networks of the brain, which play an important role in both sleep and memory consolidation.

The second study in mice by Konstantinos Kompotis and colleagues is the first to explore whether rocking promotes sleep in other species. And, indeed, it did. The researchers, led by Paul Franken, University of Lausanne, Switzerland, used commercial reciprocating shakers to rock the cages of mice as they slept.

While the best rocking frequency for mice was found to be four times faster than in people, the researchers' studies show that rocking reduced the time it took to fall asleep and increased sleep time in mice as it does in humans. However, the mice did not show evidence of sleeping more deeply.

Researchers had suspected that the effects of rocking on sleep were tied to rhythmic stimulation of the vestibular system, the sensory system that contributes to the sense of balance and spatial orientation. To explore this notion in the mouse, the researchers studied animals whose vestibular systems were disrupted by non-functioning otolithic organs, found in their ears. Their studies showed that mice lacking working otolithic organs experienced none of the beneficial effects of rocking during sleep.

Taken together, the two studies "provide new insights into the neurophysiological mechanisms underlying the effects of rocking stimulation on sleep," Bayer and co-author Aurore Perrault write. The findings may be relevant for the development of new approaches for treating patients with insomnia and mood disorders, as well as older people, who frequently suffer from poor sleep and memory impairments.

The researchers say it will be essential in future work to explore the precise deeper brain structures involved in the effects of rocking on sleep.

Read more at Science Daily

Jan 25, 2019

A reptile platypus from the early Triassic

Complete fossil and line drawing of Eretmorhipis carrolldongi.
No animal alive today looks quite like a duckbilled platypus, but about 250 million years ago something very similar swam the shallow seas in what is now China, finding prey by touch with a cartilaginous bill. The newly discovered marine reptile Eretmorhipis carrolldongi from the lower Triassic period is described in the journal Scientific Reports Jan. 24.

Apart from its platypus-like bill, Eretmorhipis was about 70 centimeters long with a long rigid body, small head and tiny eyes, and four flippers for swimming and steering. Bony plates ran down the animal's back.

Eretmorhipis was previously known only from partial fossils without a head, said Professor Ryosuke Motani, a paleontologist at the University of California, Davis Department of Earth and Planetary Sciences and coauthor on the paper.

"This is a very strange animal," Motani said. "When I started thinking about the biology I was really puzzled."

The two new fossils show the animal's skull had bones that would have supported a bill of cartilage. Like the modern platypus, there is a large hole in the bones in the middle of the bill. In the platypus, the bill is filled with receptors that allow it to hunt by touch in muddy streams.

In the early Triassic, the area was covered by a shallow sea, about a meter deep, over a carbonate platform extending for hundreds of miles. Eretmorhipis fossils were found at what were deeper holes, or lagoons, in the platform. There are no fossils to show what Eretmorhipis ate, but it likely fed on shrimp, worms and other small invertebrates, Motani said.

Its long, bony body means that Eretmorhipis was probably a poor swimmer, Motani said.

"It wouldn't survive in the modern world, but it didn't have any rivals at the time," he said.

Related to the dolphin-like ichthyosaurs, Eretmorhipis evolved in a world devastated by the mass extinction event at the end of the Permian period. The fossil provides more evidence of rapid evolution occurring during the early Triassic, Motani said.

Co-authors on the study are Long Cheng and Chun-bo Yan, Wuhan Centre of China Geological Survey, Wuhan; Da-yong Jiang, Peking University; Andrea Tintori, Università degli Studi di Milano, Italy; and Olivier Rieppel, The Field Museum, Chicago. The work was supported by grants from the China Geological Survey, the National Natural Science Foundation of China and the Ministry of Science and Technology.

From Science Daily

Neanderthal hunting spears could kill at a distance

This is a photo of the spear fragment from Clacton-on-Sea, England dating from 400,000 years ago.
Neanderthals have been imagined as the inferior cousins of modern humans, but a new study by archaeologists at UCL reveals for the first time that they produced weaponry advanced enough to kill at a distance.

The study, published in Scientific Reports, examined the performance of replicas of the 300,000-year-old Schöningen spears -- the oldest weapons reported in archaeological records -- to identify whether javelin throwers could use them to hit a target at a distance.

Dr Annemieke Milks (UCL Institute of Archaeology), who led the study, said: "This study is important because it adds to a growing body of evidence that Neanderthals were technologically savvy and had the ability to hunt big game through a variety of hunting strategies, not just risky close encounters. It contributes to revised views of Neanderthals as our clever and capable cousins."

The research shows that the wooden spears would have enabled Neanderthals to hunt and kill at a distance. It is a significant finding, given that previous studies suggested Neanderthals could only hunt and kill their prey at close range.

The Schöningen spears are a set of ten wooden throwing spears from the Palaeolithic Age that were excavated between 1994 and 1999 in an open-cast lignite mine in Schöningen, Germany, together with approximately 16,000 animal bones.

The Schöningen spears represent the oldest completely preserved hunting weapons of prehistoric Europe so far discovered. Besides Schöningen, a spear fragment from Clacton-on-Sea, England dating from 400,000 years ago can be found at the Natural History Museum, London.

The study was conducted with six javelin athletes who were recruited to test whether the spears could be used to hit a target at a distance. Javelin athletes were chosen for the study because they had the skill to throw at high velocity, matching the capability of a Neanderthal hunter.

Owen O'Donnell, an alumnus of UCL Institute of Archaeology, made the spear replicas by hand using metal tools. They were crafted from Norwegian spruce trees grown in Kent, UK. The surface was manipulated at the final stage with stone tools, creating a surface that accurately replicated that of a Pleistocene wooden spear. Two replicas were used, weighing 760g and 800g, which conform to ethnographic records of wooden spears.

The javelin athletes demonstrated that the target could be hit at up to 20 metres, and with enough impact to kill prey. This is double the distance that scientists previously thought the spears could be thrown, demonstrating that Neanderthals had the technological capabilities to hunt at a distance as well as at close range.

The weight of the Schöningen spears previously led scientists to believe that they would struggle to travel at significant speed. However, the study shows that the balance of weight and the speed at which the athletes could throw them produces enough kinetic energy to hit and kill a target.
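As a back-of-the-envelope illustration only (the release speeds below are assumed for the example, not measurements from the study), a thrown spear's kinetic energy is KE = ½ m v², so even modest throwing speeds give the 760 g and 800 g replicas a considerable punch:

    # Rough kinetic-energy figures for the two replica spears described above.
    # The release speeds are illustrative assumptions, not values from the paper.
    def kinetic_energy_joules(mass_kg, speed_m_s):
        """KE = 0.5 * m * v**2"""
        return 0.5 * mass_kg * speed_m_s ** 2

    for mass_kg in (0.760, 0.800):            # replica masses reported in the article
        for speed_m_s in (15.0, 20.0, 25.0):  # assumed release speeds
            ke = kinetic_energy_joules(mass_kg, speed_m_s)
            print(f"{mass_kg:.3f} kg at {speed_m_s:.0f} m/s -> {ke:.0f} J")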

Dr Matt Pope (UCL Institute of Archaeology), co-author on the paper, said: "The emergence of weaponry -- technology designed to kill -- is a critical but poorly established threshold in human evolution.

"We have forever relied on tools and have extended our capabilities through technical innovation. Understanding when we first developed the capabilities to kill at distance is therefore a dark, but important moment in our story."

Read more at Science Daily

Rapidly receding glaciers on Baffin Island reveal long-covered Arctic landscapes

An aerial view of Baffin Island in Canada.
Glacial retreat in the Canadian Arctic has uncovered landscapes that haven't been ice-free in more than 40,000 years and the region may be experiencing its warmest century in 115,000 years, new University of Colorado Boulder research finds.

The study, published today in the journal Nature Communications, uses radiocarbon dating to determine the ages of plants collected at the edges of 30 ice caps on Baffin Island, west of Greenland. The island has experienced significant summertime warming in recent decades.

"The Arctic is currently warming two to three times faster than the rest of the globe, so naturally, glaciers and ice caps are going to react faster," said Simon Pendleton, lead author and a doctoral researcher in CU Boulder's Institute of Arctic and Alpine Research (INSTAAR).

Baffin is the world's fifth largest island, dominated by deeply incised fjords separated by high-elevation, low-relief plateaus. The thin, cold plateau ice acts as a kind of natural cold storage, preserving ancient moss and lichens in their original growth position for millennia.

"We travel to the retreating ice margins, sample newly exposed plants preserved on these ancient landscapes and carbon date the plants to get a sense of when the ice last advanced over that location," Pendleton said. "Because dead plants are efficiently removed from the landscape, the radiocarbon age of rooted plants define the last time summers were as warm, on average, as those of the past century"

In August, the researchers collected 48 plant samples from 30 different Baffin ice caps, encompassing a range of elevations and exposures. They also sampled quartz from each site in order to further establish the age and ice cover history of the landscape.

Once the samples were processed and radiocarbon dated back in labs at the Institute of Arctic and Alpine Research (INSTAAR) at CU Boulder and the University of California Irvine, the researchers found that these ancient plants at all 30 ice caps have likely been continuously covered by ice for at least the past 40,000 years.
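For readers curious how a plant's measured radiocarbon content translates into an age, here is a minimal sketch using the standard conventional radiocarbon-age formula (based on the Libby mean life of 8,033 years); the real analysis involves lab-specific corrections and calibration that are ignored here:

    import math

    LIBBY_MEAN_LIFE_YEARS = 8033.0  # convention used for "conventional" radiocarbon ages

    def conventional_radiocarbon_age(fraction_modern):
        """Age in years from the fraction of 14C remaining relative to the modern standard."""
        return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_modern)

    # A sample retaining well under one percent of its original radiocarbon is
    # roughly 40,000 years old or more -- near the practical limit of the method,
    # which is why the Baffin plants are reported as "at least" that old.
    for fraction in (0.01, 0.007, 0.002):
        print(f"{fraction:.3f} of modern 14C -> about {conventional_radiocarbon_age(fraction):,.0f} years")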

"Unlike biology, which has spent the past three billion years developing schemes to avoid being impacted by climate change, glaciers have no strategy for survival," said Gifford Miller, senior author of the research and a professor of geological sciences at CU Boulder. "They're well behaved, responding directly to summer temperature. If summers warm, they immediately recede; if summers cool, they advance. This makes them one of the most reliable proxies for changes in summer temperature."

When compared against temperature data reconstructed from Baffin and Greenland ice cores, the findings suggest that modern temperatures represent the warmest century for the region in 115,000 years and that Baffin could be completely ice-free within the next few centuries.

"You'd normally expect to see different plant ages in different topographical conditions," Pendleton said. "A high elevation location might hold onto its ice longer, for example. But the magnitude of warming is so high that everything is melting everywhere now."

"We haven't seen anything as pronounced as this before," Pendleton said.

Read more at Science Daily

Static electricity could charge our electronics

These images show how the surfaces of magnesia (top block) and barium titanate (bottom block) respond when they come into contact. The resulting lattice deformations in each object contribute to the driving force behind the electric charge transfer during friction.
Unhappy with the life of your smartphone battery?

Thought so.

Help could be on the way from one of the most common, yet poorly understood, forms of power generation: static electricity.

"Nearly everyone has zapped their finger on a doorknob or seen child's hair stick to a balloon. To incorporate this energy into our electronics, we must better understand the driving forces behind it," says James Chen, PhD, assistant professor in the Department of Mechanical and Aerospace Engineering in the School of Engineering and Applied Sciences at the University at Buffalo.

Chen is a co-author of a study in the December issue of the Journal of Electrostatics that suggests the cause of this hair-raising phenomenon is tiny structural changes that occur at the surface of materials when they come into contact with each other.

The finding could ultimately help technology companies create more sustainable and longer-lasting power sources for small electronic devices.

Supported by a $400,000 National Science Foundation grant, Chen and Zayd Leseman, PhD, associate professor of mechanical and nuclear engineering at Kansas State University, are conducting research on the triboelectric effect, a phenomenon wherein one material becomes electrically charged after it contacts a different material through friction.

The triboelectric effect has been known since ancient times, but the tools for understanding and applying it have only become available recently due to the advent of nanotechnology.

"The idea our study presents directly answers this ancient mystery, and it has the potential to unify the existing theory. The numerical results are consistent with the published experimental observations," says Chen.

The research Chen and Leseman conduct is a mix of disciplines, including contact mechanics, solid mechanics, materials science, electrical engineering and manufacturing. With computer models and physical experiments, they are engineering triboelectric nanogenerators (TENGs), which are capable of controlling and harvesting static electricity.

"The friction between your fingers and your smartphone screen. The friction between your wrist and smartwatch. Even the friction between your shoe and the ground. These are great potential sources of energy that we can to tap into," Chen says. "Ultimately, this research can increase our economic security and help society by reducing our need for conventional sources of power."

Read more at Science Daily

Where is Earth's submoon?

Earth's moon
"Can moons have moons?"

This simple question -- asked by the four-year-old son of Carnegie's Juna Kollmeier -- started it all. Not long after this initial bedtime query, Kollmeier was coordinating a program at the Kavli Institute for Theoretical Physics (KITP) on the Milky Way while her one-time college classmate Sean Raymond of Université de Bordeaux was attending a parallel KITP program on the dynamics of Earth-like planets. After discussing this very simple question at a seminar, the two joined forces to solve it. Their findings are the basis of a paper published in Monthly Notices of the Royal Astronomical Society.

The duo kicked off an internet firestorm late last year when they posted a draft of their article examining the possibility of moons that orbit other moons on a preprint server for physics and astronomy manuscripts.

The online conversation obsessed over the best term to describe such phenomena with options like moonmoons and mini-moons being thrown into the mix. But nomenclature was not the point of Kollmeier and Raymond's investigation (although they do have a preference for submoons). Rather, they set out to define the physical parameters for moons that would be capable of being stably orbited by other, smaller moons.

"Planets orbit stars and moons orbit planets, so it was natural to ask if smaller moons could orbit larger ones," Raymond explained.

Their calculations show that only large moons on wide orbits from their host planets could host submoons. Tidal forces from both the planet and moon act to destabilize the orbits of submoons orbiting smaller moons or moons that are closer to their host planet.

They found that four moons in our own Solar System are theoretically capable of hosting their own satellite submoons. Jupiter's moon Callisto, Saturn's moons Titan and Iapetus, and Earth's own Moon all fit the bill of a satellite that could host its own satellite, although none have been found so far. However, they add that further calculations are needed to address possible sources of submoon instability, such as the non-uniform concentration of mass in our Moon's crust.
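As a first-order illustration of why the Moon is a plausible host (the paper's actual stability criterion hinges on long-term tidal evolution, which this simple estimate ignores), one can ask how much gravitational room the Moon has before Earth's pull takes over -- its Hill radius:

    # Hill-radius estimate for Earth's Moon: the region in which the Moon's gravity
    # dominates over Earth's. Having room is necessary but not sufficient for a
    # submoon; Kollmeier and Raymond's argument also depends on tides.
    EARTH_MASS_KG = 5.972e24
    MOON_MASS_KG = 7.342e22
    MOON_SEMI_MAJOR_AXIS_KM = 384_400.0

    def hill_radius_km(a_km, m_satellite, m_primary):
        """r_H = a * (m / (3 * M)) ** (1/3)"""
        return a_km * (m_satellite / (3.0 * m_primary)) ** (1.0 / 3.0)

    r_hill = hill_radius_km(MOON_SEMI_MAJOR_AXIS_KM, MOON_MASS_KG, EARTH_MASS_KG)
    print(f"Moon's Hill radius: about {r_hill:,.0f} km")  # roughly 60,000 km
    # Long-lived prograde orbits are typically stable only well inside this radius,
    # but that still leaves tens of thousands of kilometres of room -- which is why
    # tidal evolution, not raw space, is the limiting factor.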

"The lack of known submoons in our Solar System, even orbiting around moons that could theoretically support such objects, can offer us clues about how our own and neighboring planets formed, about which there are still many outstanding questions," Kollmeier explained.

The moons orbiting Saturn and Jupiter are thought to have been born from the disk of gas and dust that encircle gas giant planets in the later stages of their formation. Our own Moon, on the other hand, is thought to have originated in the aftermath of a giant impact between the young Earth and a Mars-sized body. The lack of stable submoons could help scientists better understand the different forces that shaped the satellites we do see.

Kollmeier added: "and, of course, this could inform ongoing efforts to understand how planetary systems evolve elsewhere and how our own Solar System fits into the thousands of others discovered by planet-hunting missions."

For example, the newly discovered possible exomoon orbiting the Jupiter-sized Kepler 1625b is the right mass and distance from its host to support a submoon, Kollmeier and Raymond found. However, the inferred tilt of its orbit might make it difficult for such an object to remain stable. Even so, detecting a submoon around an exomoon would be very difficult.

Given the excitement surrounding searches for potentially habitable exoplanets, Kollmeier and Raymond calculated that the best case scenario for life on large submoons is around massive stars. Although extremely common, small red dwarf stars are so faint and their habitable zones so close that tidal forces are very strong and submoons (and often even moons themselves) are unstable.

Finally, the authors point out that an artificial submoon may be stable and thereby serve as a time capsule or outpost. On a stable orbit around the Moon -- such as the one for NASA's proposed Lunar Gateway -- a submoon would keep humanity's treasures safe for posterity long after Earth became unsuitable for life. Kollmeier and Raymond agree that there is much more work to be done (and fun to be had) to understand submoons (or the lack thereof) as a rocky record of the history of planet-moon systems.

Sean Raymond maintains a science blog (planetplanet.net) where more details and illustrations (including a poem he wrote about the article) can be found.

Read more at Science Daily

Jan 24, 2019

Kids prefer friends who talk like they do

Children tend to prefer to be friends with other children who speak with the same local accent as they have, even if they grow up in a diverse community and are regularly exposed to a variety of accents, according to research published by the American Psychological Association.

"It's common knowledge that adults unconsciously discriminate against others based on the way they speak, but we wanted to understand when, how and why these biases develop," said lead author Melissa Paquette-Smith, PhD, of the University of California, Los Angeles.

The findings were published in the journal Developmental Psychology.

Previous research has shown that children as young as 5 years old prefer to be friends with peers who speak like them and that these preferences are so strong that they can override preferences for friends of the same race, according to Paquette-Smith. She and her co-authors wanted to extend the existing research and explore whether regular exposure to a wide variety of accents might change these preferences.

The researchers conducted three experiments with nearly 150 5- and 6-year-old English-speaking children living in the Greater Toronto Area, one of the most culturally and linguistically diverse metropolitan areas in the world. More than half of the residents there were born outside of Canada and close to 50 percent learned a language other than English from birth, the study noted.

In experiment one, 5- and 6-year-olds were shown pairs of children on a computer screen. One child in each pair spoke English with the local Canadian accent and the other spoke English with a British accent. After listening to both speakers, children were asked to pick which child they wanted as their friend. The researchers also examined whether the amount of exposure children had to different accents in everyday life influenced these preferences. Given the diversity in the community, most of the children reported moderate to very frequent contact with non-local accents, whether it was because they lived with someone in their home or had a daycare provider or teacher with a different accent.

"Even though they were regularly exposed to a variety of accents, Canadian children still preferred to be friends with peers who spoke with a Canadian accent over peers who spoke with a British accent. The amount of exposure children had to other accents in everyday life did not seem to dampen these preferences," said Paquette-Smith.

Paquette-Smith and her colleagues then wanted to see how children's friend preferences would be affected if they did the same task with children who were not native English speakers.

The second experiment used the same number of children, all of whom spoke only English, and again, most reported having medium or high exposure to non-local accents. The set-up was the same, except that instead of British children, the participants listened to the voices of children who were born and raised in Korea and who had learned English as a second language.

As in the first experiment, the children showed a preference for their Canadian-accented peers, but the effect was even greater in the second experiment, according to Paquette-Smith.

"There are a number of reasons why this may have been the case," said Paquette-Smith. "It could be that the Korean kids were less fluent in English or that the Canadian participants had a harder time understanding them, or that the British accents were simply harder to distinguish from the Canadian accents."

For the final experiment, the researchers investigated the possibility that children's ability to tell apart the two accents could have played a role in these preferences. The team predicted that the children would be better at identifying their Canadian variety of English when it was compared with a Korean accent and that they would have more difficulty distinguishing between the Canadian and British varieties of English.

The children listened to the voices of the Canadian, British and Korean speakers used in experiments one and two. After the voices were played, the experimenter asked the child, "Who talks like you? Like they grew up here?" and then the children made their selections.

"Our predictions were right, children had an easier time differentiating between the Canadian and Korean and British and Korean speakers," said Paquette-Smith. "The most difficult comparison for the children to make was between the Canadian and British speakers. We believe this is because children are better at distinguishing their local accent from a non-native accent compared with a regional accent."

Paquette-Smith cautioned that a preference for friends with similar accents does not necessarily mean that the children were biased against those with non-native accents.

Read more at Science Daily

When coral species vanish, their absence can imperil surviving corals

A wave of death sweeps over an experimental plot of corals made up of a single species. Plots of corals containing three species fared much better. Biodiversity appears to measurably make a difference in overall coral health.
Waves of annihilation have beaten coral reefs down to a fraction of what they were 40 years ago, and what's left may be facing creeping death: The effective extinction of many coral species may be weakening reef systems, siphoning life out of the corals that remain.

In the shallows off Fiji's Pacific shores, two marine researchers from the Georgia Institute of Technology assembled groups of corals for a new study, each group made up of a single species, i.e. groups without species diversity. When Cody Clements snorkeled down for the first time to check on them, his eyes instantly told him what his data would later reveal.

"One of the species had entire plots that got wiped out, and they were overgrown with algae," Clements said. "Rows of corals had tissue that was brown -- that was dead tissue. Other tissue had turned white and was in the process of dying."

36 ghastly plots

Clements, a postdoctoral researcher and the study's first author, also assembled groups of corals with a mixture of species, i.e. biodiverse groups, for comparison. In total, there were 36 single-species plots, or monocultures. Twelve additional plots contained polycultures that mixed three species.

By the end of the 16-month experiment, the monocultures had clearly fared worse. The measurably healthier growth in the polycultures showed that science can begin to quantify biodiversity's contribution to coral survival, as well as the effects of biodiversity's disappearance.

"This was a starter experiment to see if we would get an initial result, and we did," said principal investigator Mary Hay, a Regents Professor and Harry and Linda Teasley Chair in Georgia Tech's School of Biological Sciences. "So much reef death over the years has reduced coral species variety and made reefs more homogenous, but science still doesn't understand enough about how coral biodiversity helps reefs survive. We want to know more."

The results of the study appear in the February issue of the journal Nature Ecology and Evolution and were made available online on January 7, 2019. The research was funded by the National Science Foundation, by the National Institutes of Health's Fogarty International Center, and by the Teasley Endowment.

The study's insights could aid ecologists restocking crumbling reefs with corals -- which are animals. Past replenishing efforts have often deployed patches of single species that have had trouble taking hold, and the researchers believe the study should encourage replanting using biodiverse patches.

40 years' decimation

The decimation of corals Hay has witnessed in over four decades of undersea research underscores this study's importance.

"It's shocking how quickly the Caribbean reefs crashed. In the 1970s and early 1980s, reefs consisted of about 60 percent live coral cover," Hay said. "Coral cover declined dramatically through the 1990s and has remained low. It's now at about 10 percent throughout the Caribbean."

"You used to find living diverse reefs with structurally complex coral stands the size of city blocks. Now, most Caribbean reefs look more like parking lots with a few sparse corals scattered around."

84 percent loss

The fact that the decimation in the Pacific is less grim is bitter irony. About half of living coral cover disappeared there between the early 1980s and early 2000s with declines accelerating since.

"From 1992 to 2010, the Great Barrier Reef, which is arguably the best-managed reef system on Earth, lost 84 percent," Clements said. "All of this doesn't include the latest bleaching events reported so widely in the media, and they killed huge swaths of reef in the Pacific."

The 2016 bleaching event also sacked reefs off of Fiji where the researchers ran their experiment. The coral deaths have been associated with extended periods of ocean heating, which have become much more common in recent decades.

10 times more species

Still, there's hope. Pacific reefs support ten times as many coral species as Caribbean reefs, and Clements' and Hay's new study suggests that this higher biodiversity may help make these reefs more robust than the Caribbean reefs. There, many species have joined the endangered list, or are "functionally extinct," still present but in traces too small to have ecological impact.

The Caribbean's coral collapse may have been a warning shot on the dangers of species loss. Some coral species protect others from getting eaten or infected, for example.

"A handful of species may be critical for the survival of many others, and we don't yet know well enough which are most critical. If key species disappear, the consequences could be enormous," said Hay, who believes he may have already witnessed this in the Caribbean. "The decline of key species may drive the decline of others and potentially create a death spiral."

864 abrasive animals

Off Fiji's shores, Clements transported by kayak, one by one, 48 concrete tables he had built on land. He dove down to set them in place and mounted on top of them 864 jaggy corals in planters he had fashioned from the tops of plastic soda bottles.

"I scratched a lot of skin off of my fingers screwing those corals onto the tables," he said, laughing at the memory. "I drank enough saltwater through my snorkel doing it, too."

Clements laid out 18 corals on each tabletop: Three groups of monocultures filled 36 tables (12 with species A, 12 with species B, 12 with species C). The remaining 12 tabletops held polycultures with balanced A-B-C mixtures. He collected data four months into the experiment and at 16 months.
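The layout arithmetic is easy to verify (a quick sanity check of the numbers quoted above, nothing more):

    # Sanity check of the experimental layout described in the article.
    corals_per_table = 18
    monoculture_tables = 12 * 3   # 12 tables each for species A, B and C
    polyculture_tables = 12       # balanced A-B-C mixtures

    total_tables = monoculture_tables + polyculture_tables
    total_corals = total_tables * corals_per_table
    print(total_tables, total_corals)  # 48 tables and 864 corals, matching the article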

The polycultures all looked great. Only one monoculture species, Acropora millepora, had nice growth at the 16-month mark, but that species is more susceptible to disease, bleaching, predators, and storms. It may have sprinted ahead in growth in the experiment, but long-term it would probably need the help of other species to cope with its own fragility.

Read more at Science Daily

Seeing double could help resolve dispute about how fast the universe is expanding

A Hubble Space Telescope picture of a doubly imaged quasar.
The question of how quickly the universe is expanding has been bugging astronomers for almost a century. Different studies keep coming up with different answers -- which has some researchers wondering if they've overlooked a key mechanism in the machinery that drives the cosmos.

Now, by pioneering a new way to measure how quickly the cosmos is expanding, a team led by UCLA astronomers has taken a step toward resolving the debate. The group's research is published today in Monthly Notices of the Royal Astronomical Society.

At the heart of the dispute is the Hubble constant, a number that relates distances to the redshifts of galaxies -- the amount that light is stretched as it travels to Earth through the expanding universe. Estimates for the Hubble constant range from about 67 to 73 kilometers per second per megaparsec, meaning that two points in space 1 megaparsec apart (the equivalent of 3.26 million light-years) are racing away from each other at a speed between 67 and 73 kilometers per second.
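As a rough illustration of what those numbers mean (not a calculation from the study itself), the Hubble constant converts a galaxy's distance into its recession speed via v = H0 × d:

    # Recession speeds implied by the two ends of the quoted range of
    # Hubble-constant estimates (67-73 km/s per megaparsec). Illustrative only.
    MPC_IN_LIGHT_YEARS = 3.26e6  # one megaparsec is about 3.26 million light-years

    def recession_speed_km_s(distance_mpc, hubble_constant_km_s_mpc):
        """v = H0 * d for a galaxy at a distance of distance_mpc megaparsecs."""
        return hubble_constant_km_s_mpc * distance_mpc

    for h0 in (67.0, 73.0):
        v = recession_speed_km_s(100.0, h0)  # a galaxy 100 Mpc (~326 million light-years) away
        print(f"H0 = {h0} km/s/Mpc -> v = {v:.0f} km/s at 100 Mpc")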

"The Hubble constant anchors the physical scale of the universe," said Simon Birrer, a UCLA postdoctoral scholar and lead author of the study. Without a precise value for the Hubble constant, astronomers can't accurately determine the sizes of remote galaxies, the age of the universe or the expansion history of the cosmos.

Most methods for deriving the Hubble constant have two ingredients: a distance to some source of light and that light source's redshift. Looking for a light source that had not been used in other scientists' calculations, Birrer and colleagues turned to quasars, fountains of radiation that are powered by gargantuan black holes. And for their research, the scientists chose one specific subset of quasars -- those whose light has been bent by the gravity of an intervening galaxy, which produces two side-by-side images of the quasar on the sky.

Light from the two images takes different routes to Earth. When the quasar's brightness fluctuates, the two images flicker one after another, rather than at the same time. The delay in time between those two flickers, along with information about the meddling galaxy's gravitational field, can be used to trace the light's journey and deduce the distances from Earth to both the quasar and the foreground galaxy. Knowing the redshifts of the quasar and galaxy enabled the scientists to estimate how quickly the universe is expanding.
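In outline (with placeholder numbers, not values from the paper), the measured delay feeds into a "time-delay distance" that is proportional to the delay for a fixed lens model, and the Hubble constant is inversely proportional to that distance -- so a longer measured delay implies a smaller H0, all else being equal:

    # Schematic of the scaling in time-delay cosmography. For a fixed model of the
    # lensing galaxy, the inferred distance scales with the measured delay and H0
    # scales inversely with that distance. Numbers are placeholders.
    def rescaled_h0(h0_reference, delay_reference_days, delay_measured_days):
        """If the delay comes out longer than the reference value, the inferred H0
        drops by roughly the same factor (lens model held fixed)."""
        return h0_reference * (delay_reference_days / delay_measured_days)

    print(rescaled_h0(72.5, 100.0, 110.0))  # ~65.9: a 10% longer delay -> ~10% lower H0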

The UCLA team, as part of the international H0liCOW collaboration, had previously applied the technique to study quadruply imaged quasars, in which four images of a quasar appear around a foreground galaxy. But quadruple images are not nearly as common -- double-image quasars are thought to be about five times as abundant as the quadruple ones.

To demonstrate the technique, the UCLA-led team studied a doubly imaged quasar known as SDSS J1206+4332; they relied on data from the Hubble Space Telescope, the Gemini and W.M. Keck observatories, and from the Cosmological Monitoring of Gravitational Lenses, or COSMOGRAIL, network -- a program managed by Switzerland's École Polytechnique Fédérale de Lausanne that is aimed at determining the Hubble constant.

Tommaso Treu, a UCLA professor of physics and astronomy and the paper's senior author, said the researchers took images of the quasar every day for several years to precisely measure the time delay between the images. Then, to get the best estimate possible of the Hubble constant, they combined the data gathered on that quasar with data that had previously been gathered by their H0liCOW collaboration on three quadruply imaged quasars.

"The beauty of this measurement is that it's highly complementary to and independent of others," Treu said.

The UCLA-led team came up with an estimate for the Hubble constant of about 72.5 kilometers per second per megaparsec, a figure in line with what other scientists had determined in research that used distances to supernovas -- exploding stars in remote galaxies -- as the key measurement. However, both estimates are about 8 percent higher than one that relies on a faint glow from all over the sky called the cosmic microwave background, a relic from 380,000 years after the Big Bang, when light traveled freely through space for the first time.

"If there is an actual difference between those values, it means the universe is a little more complicated," Treu said.

On the other hand, Treu said, it could also be that one measurement -- or all three -- are wrong.

The researchers are now looking for more quasars to improve the precision of their Hubble constant measurement. Treu said one of the most important lessons of the new paper is that doubly imaged quasars give scientists many more useful light sources for their Hubble constant calculations. For now, though, the UCLA-led team is focusing its research on 40 quadruply imaged quasars, because of their potential to provide even more useful information than doubly imaged ones.

Read more at Science Daily

Making the Hubble's deepest images even deeper

The new version of Hubble's deep image. In dark grey you can see the new light that has been found around the galaxies in this field. That light corresponds to the brightness of more than one hundred billion suns.
To produce the deepest image of the Universe from space, a group of researchers from the Instituto de Astrofísica de Canarias (IAC), led by Alejandro S. Borlaff, used original images from the Hubble Space Telescope (HST) taken over a region of the sky called the Hubble Ultra-Deep Field (HUDF). After improving the process of combining several images, the group was able to recover a large quantity of light from the outer zones of the largest galaxies in the HUDF. Recovering this light, emitted by the stars in these outer zones, was equivalent to recovering the light from a complete galaxy ("smeared out" over the whole field), and for some galaxies this missing light shows that they have diameters almost twice as big as previously measured.

The HUDF is the result of combining hundreds of images taken with the Wide Field Camera 3 (WFC3) of the HST during over 230 hours of observation which, in 2012, yielded the deepest image of the Universe taken until then. But the method of combining the individual images was not ideally suited to detect faint extended objects. To do this, Borlaff explains: "What we have done is to go back to the archive of the original images, directly as observed by the HST, and improve the process of combination, aiming at the best image quality not only for the more distant smaller galaxies but also for the extended regions of the largest galaxies."
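The "depth" of such an image comes from combining many exposures so that random noise averages down while faint, persistent light adds up. The sketch below shows the generic idea with a simple sigma-clipped stack; it is not the IAC team's pipeline, whose improvements concern how the individual WFC3 frames are calibrated and combined.

    import numpy as np

    def sigma_clipped_stack(frames, sigma=3.0, iterations=3):
        """Combine aligned exposures pixel by pixel, ignoring outliers (cosmic rays,
        artifacts) that lie more than `sigma` standard deviations from the mean."""
        stack = np.array(frames, dtype=float)   # shape: (n_frames, ny, nx)
        clipped = stack.copy()
        for _ in range(iterations):
            mean = np.nanmean(clipped, axis=0)
            std = np.nanstd(clipped, axis=0)
            outliers = np.abs(stack - mean) > sigma * std
            clipped = np.where(outliers, np.nan, stack)
        return np.nanmean(clipped, axis=0)

    # Toy demonstration: 50 noisy frames of a faint, constant patch of sky.
    rng = np.random.default_rng(0)
    frames = [0.1 + rng.normal(0.0, 1.0, size=(64, 64)) for _ in range(50)]
    deep = sigma_clipped_stack(frames)
    print(float(np.nanmean(deep)))  # close to 0.1, with far less noise than any single frame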

The WFC3 with which the data were taken was installed by astronauts in May 2009, when the Hubble had already been in space for 19 years. This was a major challenge for the researchers because the complete instrument (telescope + camera) could not be tested on the ground, which made calibration more difficult. To overcome the problems, they analysed several thousand images of different regions on the sky, with the aim of improving the calibration of the telescope in orbit.

The image of the universe which is now the deepest "has been possible thanks to a striking improvement in the techniques of image processing which has been achieved in recent years, a field in which the group working in the IAC is at the forefront," says Borlaff.

From Science Daily

How to escape a black hole: Simulations provide new clues about powerful plasma jets

This visualization of a general-relativistic collisionless plasma simulation shows the density of positrons near the event horizon of a rotating black hole. Plasma instabilities produce island-like structures in the region of intense electric current.
Black holes are known for their voracious appetites, binging on matter with such ferocity that not even light can escape once it's swallowed up.

Less understood, though, is how black holes purge energy locked up in their rotation, jetting near-light-speed plasmas into space to opposite sides in one of the most powerful displays in the universe. These jets can extend outward for millions of light years.

New simulations led by researchers working at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have combined decades-old theories to provide new insight into the driving mechanisms of the plasma jets, which allow them to steal energy from black holes' powerful gravitational fields and propel it far from their gaping mouths.

The simulations could provide a useful comparison for high-resolution observations from the Event Horizon Telescope, an array that is designed to provide the first direct images of the regions where the plasma jets form.

The telescope will enable new views of the black hole at the center of our own Milky Way galaxy, as well as detailed views of other supermassive black holes.

"How can the energy in a black hole's rotation be extracted to make jets?" said Kyle Parfrey, who led the work on the simulations while he was an Einstein Postdoctoral Fellow affiliated with the Nuclear Science Division at Berkeley Lab. "This has been a question for a long time."

Now a senior fellow at NASA Goddard Space Flight Center in Maryland, Parfrey is the lead author of a study, published Jan. 23 in Physical Review Letters, that details the simulations research.

The simulations, for the first time, unite a theory that explains how electric currents around a black hole twist magnetic fields into forming jets, with a separate theory explaining how particles crossing through a black hole's point of no return -- the event horizon -- can appear to a distant observer to carry negative energy and lower the black hole's overall rotational energy.

It's like eating a snack that causes you to lose calories rather than gaining them. The black hole actually loses mass as a result of slurping in these "negative-energy" particles.

Computer simulations have difficulty in modeling all of the complex physics involved in plasma-jet launching, which must account for the creation of pairs of electrons and positrons, the acceleration mechanism for particles, and the emission of light in the jets.

Berkeley Lab has contributed extensively to plasma simulations over its long history. Plasma is a gas-like mixture of charged particles that is the universe's most common state of matter.

Parfrey said he realized that more complex simulations to better describe the jets would require a combination of expertise in plasma physics and the general theory of relativity.

"I thought it would be a good time to try to bring these two things together," he said.

Performed at a supercomputing center at NASA Ames Research Center in Mountain View, California, the simulations incorporate new numerical techniques that provide the first model of a collisionless plasma -- in which collisions between charged particles do not play a major role -- in the presence of a strong gravitational field associated with a black hole.

The simulations naturally produce effects known as the Blandford-Znajek mechanism, which describes the twisting magnetic fields that form jets, and a separate Penrose process that describes what happens when negative-energy particles are gulped down by the black hole.

The Penrose process, "even though it doesn't necessarily contribute that much to extracting the black hole's rotation energy," Parfrey said, "is possibly directly linked to the electric currents that twist the jets' magnetic fields."

While more detailed than some earlier models, Parfrey noted that his team's simulations are still playing catch-up with observations, and are idealized in some ways to simplify the calculations needed to perform the simulations.

The team intends to better model the process by which electron-positron pairs are created in the jets in order to study the jets' plasma distribution and their emission of radiation more realistically for comparison to observations. They also plan to broaden the scope of the simulations to include the flow of infalling matter around the black hole's event horizon, known as its accretion flow.

Read more at Science Daily

Jan 23, 2019

Star material could be building block of life

Image of the Rho Ophiuchi star formation region with IRAS16293-2422 B circled.
An organic molecule detected in the material from which a star forms could shed light on how life emerged on Earth, according to new research led by Queen Mary University of London.

The researchers report the first ever detection of glycolonitrile (HOCH2CN), a pre-biotic molecule which existed before the emergence of life, in a solar-type protostar known as IRAS16293-2422 B.

This warm and dense region contains young stars at the earliest stage of their evolution surrounded by a cocoon of dust and gas -- similar conditions to those when our Solar System formed.

Detecting pre-biotic molecules in solar-type protostars enhances our understanding of how the solar system formed as it indicates that planets created around the star could begin their existence with a supply of the chemical ingredients needed to make some form of life.

This finding, published in the journal Monthly Notices of the Royal Astronomical Society: Letters, is a significant step forward for pre-biotic astrochemistry since glycolonitrile is recognised as a key precursor towards the formation of adenine, one of the nucleobases that form both DNA and RNA in living organisms.

IRAS16293-2422 B is a well-studied protostar in the constellation of Ophiuchus, in a region of star formation known as rho Ophiuchi, about 450 light-years from Earth.

The research was also carried out with the Centro de Astrobiología in Spain, INAF-Osservatorio Astrofisico di Arcetri in Italy, the European Southern Observatory, and the Harvard-Smithsonian Center for Astrophysics in the USA.

Lead author Shaoshan Zeng, from Queen Mary University of London, said: "We have shown that this important pre-biotic molecule can be formed in the material from which stars and planets emerge, taking us a step closer to identifying the processes that may have led to the origin of life on Earth."

The researchers used data from the Atacama Large Millimeter/submillimetre Array (ALMA) telescope in Chile to uncover evidence for the presence of glycolonitrile in the material from which the star is forming -- known as the interstellar medium.

With the ALMA data, they were able to identify the chemical signatures of glycolonitrile and determine the conditions in which the molecule was found. They also followed this up by using chemical modelling to reproduce the observed data which allowed them to investigate the chemical processes that could help to understand the origin of this molecule.

This follows the earlier detection of methyl isocyanate in the same object by researchers from Queen Mary. Methyl isocyanate is what is known as an isomer of glycolonitrile -- it is made up of the same atoms but in a slightly different arrangement, meaning it has different chemical properties.

Read more at Science Daily

Planetary collision that formed the moon made life possible on Earth

A schematic depicting the formation of a Mars-sized planet (left) and its differentiation into a body with a metallic core and an overlying silicate reservoir. The sulfur-rich core expels carbon, producing silicate with a high carbon to nitrogen ratio. The moon-forming collision of such a planet with the growing Earth (right) can explain Earth's abundance of both water and major life-essential elements like carbon, nitrogen and sulfur, as well as the geochemical similarity between Earth and the moon.
Most of Earth's essential elements for life -- including most of the carbon and nitrogen in you -- probably came from another planet.

Earth most likely received the bulk of its carbon, nitrogen and other life-essential volatile elements from the planetary collision that created the moon more than 4.4 billion years ago, according to a new study by Rice University petrologists in the journal Science Advances.

"From the study of primitive meteorites, scientists have long known that Earth and other rocky planets in the inner solar system are volatile-depleted," said study co-author Rajdeep Dasgupta. "But the timing and mechanism of volatile delivery has been hotly debated. Ours is the first scenario that can explain the timing and delivery in a way that is consistent with all of the geochemical evidence."

The evidence was compiled from high-temperature, high-pressure experiments in Dasgupta's lab, which specializes in studying geochemical reactions that take place deep within a planet under intense heat and pressure.

In a series of experiments, study lead author and graduate student Damanveer Grewal gathered evidence to test a long-standing theory that Earth's volatiles arrived from a collision with an embryonic planet that had a sulfur-rich core.

The sulfur content of the donor planet's core matters because of the puzzling array of experimental evidence about the carbon, nitrogen and sulfur that exist in all parts of the Earth other than the core.

"The core doesn't interact with the rest of Earth, but everything above it, the mantle, the crust, the hydrosphere and the atmosphere, are all connected," Grewal said. "Material cycles between them."

One long-standing idea about how Earth received its volatiles was the "late veneer" theory that volatile-rich meteorites, leftover chunks of primordial matter from the outer solar system, arrived after Earth's core formed. And while the isotopic signatures of Earth's volatiles match these primordial objects, known as carbonaceous chondrites, the elemental ratio of carbon to nitrogen is off. Earth's non-core material, which geologists call the bulk silicate Earth, has about 40 parts carbon to each part nitrogen, approximately twice the 20-to-1 ratio seen in carbonaceous chondrites.

Grewal's experiments, which simulated the high pressures and temperatures during core formation, tested the idea that a sulfur-rich planetary core might exclude carbon or nitrogen, or both, leaving much larger fractions of those elements in the bulk silicate as compared to Earth. In a series of tests at a range of temperatures and pressures, Grewal examined how much carbon and nitrogen made it into the core in three scenarios: no sulfur, 10 percent sulfur and 25 percent sulfur.

"Nitrogen was largely unaffected," he said. "It remained soluble in the alloys relative to silicates, and only began to be excluded from the core under the highest sulfur concentration."

Carbon, by contrast, was considerably less soluble in alloys with intermediate sulfur concentrations, and sulfur-rich alloys took up about 10 times less carbon by weight than sulfur-free alloys.

Using this information, along with the known ratios and concentrations of elements both on Earth and in non-terrestrial bodies, Dasgupta, Grewal and Rice postdoctoral researcher Chenguang Sun designed a computer simulation to find the most likely scenario that produced Earth's volatiles. Finding the answer involved varying the starting conditions, running approximately 1 billion scenarios and comparing them against the known conditions in the solar system today.
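That search step is essentially large-scale rejection sampling: draw candidate impactor properties, predict the resulting bulk silicate chemistry, and keep only the draws consistent with what is actually measured. The skeleton below shows that structure only; the predictive function and the prior ranges are stand-in placeholders, not the experimentally derived relationships the Rice team used.

    import random

    # Placeholder model: the real mapping from impactor properties to the bulk
    # silicate Earth's carbon and nitrogen comes from the high-pressure,
    # high-temperature experiments; this toy version only shows the structure.
    def predicted_c_to_n_ratio(impactor_mass_fraction, core_sulfur_fraction):
        return 20.0 + 400.0 * core_sulfur_fraction * impactor_mass_fraction  # NOT real physics

    OBSERVED_C_TO_N = (35.0, 45.0)  # roughly "about 40 parts carbon per part nitrogen"

    def search(n_scenarios, seed=42):
        rng = random.Random(seed)
        keepers = []
        for _ in range(n_scenarios):
            impactor_mass_fraction = rng.uniform(0.05, 0.2)  # hypothetical prior range
            core_sulfur_fraction = rng.uniform(0.0, 0.3)     # hypothetical prior range
            ratio = predicted_c_to_n_ratio(impactor_mass_fraction, core_sulfur_fraction)
            if OBSERVED_C_TO_N[0] <= ratio <= OBSERVED_C_TO_N[1]:
                keepers.append((impactor_mass_fraction, core_sulfur_fraction))
        return keepers

    surviving = search(100_000)
    print(len(surviving), "of 100,000 toy scenarios survive the constraint")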

"What we found is that all the evidence -- isotopic signatures, the carbon-nitrogen ratio and the overall amounts of carbon, nitrogen and sulfur in the bulk silicate Earth -- are consistent with a moon-forming impact involving a volatile-bearing, Mars-sized planet with a sulfur-rich core," Grewal said.

Dasgupta, the principal investigator on a NASA-funded effort called CLEVER Planets that is exploring how life-essential elements might come together on distant rocky planets, said better understanding the origin of Earth's life-essential elements has implications beyond our solar system.

"This study suggests that a rocky, Earth-like planet gets more chances to acquire life-essential elements if it forms and grows from giant impacts with planets that have sampled different building blocks, perhaps from different parts of a protoplanetary disk," Dasgupta said.

"This removes some boundary conditions," he said. "It shows that life-essential volatiles can arrive at the surface layers of a planet, even if they were produced on planetary bodies that underwent core formation under very different conditions."

Dasgupta said it does not appear that Earth's bulk silicate, on its own, could have attained the life-essential volatile budgets that produced our biosphere, atmosphere and hydrosphere.

Read more at Science Daily

Climate change tipping point could be coming sooner than we think

Limpopo province in South Africa -- a semi-arid region shown to have reduced carbon uptake due to soil moisture anomalies. This negative trend is expected to continue through the 21st century.
Global carbon emissions reached a record high in 2018, rising by an estimated 3.4 percent in the U.S. alone. This trend is making scientists, government officials, and industry leaders more anxious than ever about the future of our planet. As United Nations Secretary General António Guterres said at the opening of the 24th annual U.N. climate conference on December 3, "We are in deep trouble with climate change."

A Columbia Engineering study, published today in Nature, confirms the urgency of tackling climate change. It has long been known that extreme weather events can affect the year-to-year variability in carbon uptake, and some researchers have suggested that there may be longer-term effects as well. The new study is the first to actually quantify those effects through the 21st century, and it demonstrates that wetter-than-normal years do not compensate for losses in carbon uptake during drier-than-normal years caused by events such as droughts or heatwaves.

Anthropogenic emissions of CO2 -- emissions caused by human activities -- are increasing the concentration of CO2 in the Earth's atmosphere and producing unnatural changes to the planet's climate system. The effects of these emissions on global warming are only being partially abated by the land and ocean. Currently, the ocean and terrestrial biosphere (forests, savannas, etc.) are absorbing about 50% of these releases -- explaining the bleaching of coral reefs and acidification of the ocean, as well as the increase of carbon storage in our forests.

"It is unclear, however, whether the land can continue to uptake anthropogenic emissions at the current rates," says Pierre Gentine, associate professor of earth and environmental engineering and affiliated with the Earth Institute, who led the study. "Should the land reach a maximum carbon uptake rate, global warming could accelerate, with important consequences for people and the environment. This means that we all really need to act now to avoid greater consequences of climate change."

Working with his PhD student Julia Green, Gentine wanted to understand how variability in the hydrological cycle (droughts and floods, and long-term drying trends) was affecting the capacity of the continents to trap some of the emissions of CO2. The research is particularly timely as climate scientists have predicted that extreme events will likely increase in frequency and intensity in the future, some of which we are already witnessing today, and that there will also be a change in rainfall patterns that will likely affect the ability of the Earth's vegetation to uptake carbon.

To define the amount of carbon stored in vegetation and soil, Gentine and Green analyzed net biome productivity (NBP), defined by the Intergovernmental Panel on Climate Change as the net gain or loss of carbon from a region, equal to the net ecosystem production minus the carbon lost to disturbances such as forest fires or harvesting.

The researchers used data from four Earth System Models in the GLACE-CMIP5 (Global Land Atmosphere Coupling Experiment -- Coupled Model Intercomparison Project) experiments to run a series of simulations isolating the reductions in NBP that are due strictly to changes in soil moisture. They were able to separate the effects of long-term soil moisture trends (i.e., drying) from those of short-term variability (i.e., extreme events such as floods and droughts) on the ability of the land to take up carbon.
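
Stripped to its essentials, the attribution logic is a comparison between model runs: a control run with fully interactive soil moisture against runs in which soil-moisture variability or its long-term trend has been removed, with the NBP differences assigned to each effect. The sketch below illustrates that bookkeeping on synthetic numbers only; the experiment names, values and magnitudes are placeholders, not GLACE-CMIP5 output.

```python
# Minimal sketch of the attribution logic (illustrative, not the study's code):
# compare NBP from a control run with runs in which soil-moisture variability
# and/or its long-term trend are removed; the cumulative differences attribute
# the lost land carbon uptake to each soil-moisture effect. Data are synthetic.
import numpy as np

years = np.arange(2006, 2100)
rng = np.random.default_rng(0)

# Synthetic annual NBP (PgC/yr) from three hypothetical model experiments
nbp_control = 2.0 + 0.01 * (years - 2006) + rng.normal(0, 0.8, years.size)
nbp_no_variability = nbp_control + 0.6 + rng.normal(0, 0.2, years.size)
nbp_no_trend = nbp_control + 0.4 + rng.normal(0, 0.2, years.size)

cum_control = np.cumsum(nbp_control)[-1]
loss_from_variability = np.cumsum(nbp_no_variability - nbp_control)[-1]
loss_from_trend = np.cumsum(nbp_no_trend - nbp_control)[-1]

print(f"Cumulative NBP, control run:              {cum_control:7.1f} PgC")
print(f"Uptake lost to soil-moisture variability: {loss_from_variability:7.1f} PgC")
print(f"Uptake lost to long-term drying trend:    {loss_from_trend:7.1f} PgC")
```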

"We saw that the value of NBP, in this instance a net gain of carbon on the land surface, would actually be almost twice as high if it weren't for these changes (variability and trend) in soil moisture," says Green, the paper's lead author. "This is a big deal! If soil moisture continues to reduce NBP at the current rate, and the rate of carbon uptake by the land starts to decrease by the middle of this century -- as we found in the models -- we could potentially see a large increase in the concentration of atmospheric CO2 and a corresponding rise in the effects of global warming and climate change."

Gentine and Green note that soil-moisture variability already reduces the present land carbon sink, and their results show that both variability and drying trends will reduce it further in the future. By quantifying how much these changes in soil moisture cut carbon uptake, the findings highlight the need for improved modeling of vegetation response to water stress and of land-atmosphere coupling in Earth system models, in order to constrain the future terrestrial carbon flux and better predict future climate.

"Essentially, if there were no droughts and heat waves, if there were not going to be any long-term drying over the next century, then the continents would be able to store almost twice as much carbon as they do now," says Gentine. "Because soil moisture plays such a large role in the carbon cycle, in the ability of the land to uptake carbon, it's essential that processes related to its representation in models become a top research priority."

There is still a great deal of uncertainty on how plants respond to water stress, and so Green and Gentine will continue their work on improving representations of vegetation response to soil moisture changes. They are now focusing on the tropics, a region with lots of unknowns, and the largest terrestrial carbon sink, to determine how vegetation activity is being controlled by both changes in soil moisture as well as atmospheric dryness. These findings will provide guidance on improving the representation of plant water stress in the tropics.

Read more at Science Daily

Birth of massive black holes in the early universe

A 30,000 light-year region from the Renaissance Simulation centered on a cluster of young galaxies that generate radiation (white) and metals (green) while heating the surrounding gas. A dark matter halo just outside this heated region forms three supermassive stars (inset) each over 1,000 times the mass of our sun that will quickly collapse into massive black holes and eventually supermassive black holes over billions of years.
The light released from around the first massive black holes in the universe is so intense that it is able to reach telescopes across the entire expanse of the universe. Incredibly, the light from the most distant black holes (or quasars) has been traveling to us for more than 13 billion years. However, we do not know how these monster black holes formed.

New research led by researchers from Georgia Institute of Technology, Dublin City University, Michigan State University, the University of California at San Diego, the San Diego Supercomputer Center and IBM provides a new and extremely promising avenue for solving this cosmic riddle. The team showed that when galaxies assemble extremely rapidly -- and sometimes violently -- that rapid assembly can lead to the formation of very massive black holes. In these rare galaxies, normal star formation is disrupted and black hole formation takes over.

The new study finds that massive black holes form in dense, starless regions that are growing rapidly, overturning the long-accepted belief that massive black hole formation was limited to regions bombarded by the powerful radiation of nearby galaxies. The study, reported on January 23rd in the journal Nature and supported by funding from the National Science Foundation, the European Union and NASA, also finds that massive black holes are much more common in the universe than previously thought.

The key criterion for determining where massive black holes formed during the universe's infancy is the rapid growth of pre-galactic gas clouds, the forerunners of all present-day galaxies; it means that most supermassive black holes share a common origin in this newly discovered scenario, said John Wise, an associate professor in the Center for Relativistic Astrophysics at Georgia Tech and the paper's corresponding author. Dark matter collapses into halos that are the gravitational glue for all galaxies. Early rapid growth of these halos prevented the formation of stars that would have competed with black holes for the gaseous matter flowing into the area.

"In this study, we have uncovered a totally new mechanism that sparks the formation of massive black holes in particular dark matter halos," Wise said. "Instead of just considering radiation, we need to look at how quickly the halos grow. We don't need that much physics to understand it -- just how the dark matter is distributed and how gravity will affect that. Forming a massive black hole requires being in a rare region with an intense convergence of matter."

When the research team found these black hole formation sites in the simulation, they were at first stumped, said John Regan, research fellow in the Centre for Astrophysics and Relativity in Dublin City University. The previously accepted paradigm was that massive black holes could only form when exposed to high levels of nearby radiation.

"Previous theories suggested this should only happen when the sites were exposed to high levels of star-formation killing radiation," he said. "As we delved deeper, we saw that these sites were undergoing a period of extremely rapid growth. That was the key. The violent and turbulent nature of the rapid assembly, the violent crashing together of the galaxy's foundations during the galaxy's birth prevented normal star formation and led to perfect conditions for black hole formation instead. This research shifts the previous paradigm and opens up a whole new area of research."

The earlier theory relied on intense ultraviolet radiation from a nearby galaxy to inhibit the formation of stars in the black hole-forming halo, said Michael Norman, director of the San Diego Supercomputer Center at UC San Diego and one of the work's authors. "While UV radiation is still a factor, our work has shown that it is not the dominant factor, at least in our simulations," he explained.

The research was based on the Renaissance Simulation suite, a 70-terabyte data set created on the Blue Waters supercomputer between 2011 and 2014 to help scientists understand how the universe evolved during its early years. To learn more about specific regions where massive black holes were likely to develop, the researchers examined the simulation data and found ten specific dark matter halos that should have formed stars given their masses but only contained a dense gas cloud. Using the Stampede2 supercomputer, they then re-simulated two of those halos -- each about 2,400 light-years across -- at much higher resolution to understand details of what was happening in them 270 million years after the Big Bang.
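
The selection step described above amounts to a simple filter over a halo catalogue: keep halos massive enough that star formation would normally have begun, yet which contain only a dense gas cloud and no stars. A minimal sketch of that filter, with an assumed catalogue layout and invented thresholds (the Renaissance Simulation data format and the actual cuts differ), might look like this:

```python
# Illustrative halo-selection filter (assumed catalogue fields, not the
# Renaissance Simulation data format): keep dark matter halos massive enough
# that stars "should" have formed, but which contain no star particles, only
# a dense gas cloud.
import numpy as np

# Hypothetical halo catalogue: halo mass, stellar mass, peak gas density
halos = np.array([
    # M_halo [Msun], M_stars [Msun], n_gas_max [cm^-3]
    (5.2e7, 0.0,   1.3e3),
    (8.9e7, 2.4e4, 4.0e2),
    (1.6e8, 0.0,   8.7e3),
    (3.1e7, 0.0,   9.0e1),
], dtype=[("m_halo", "f8"), ("m_stars", "f8"), ("n_gas", "f8")])

M_CRIT = 5e7    # assumed mass above which star formation would be expected
N_DENSE = 1e3   # assumed density threshold for a dense, collapsing gas cloud

candidates = halos[(halos["m_halo"] > M_CRIT) &
                   (halos["m_stars"] == 0.0) &
                   (halos["n_gas"] > N_DENSE)]
print(candidates)   # starless, rapidly collapsing halos to re-simulate
```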

"It was only in these overly-dense regions of the universe that we saw these black holes forming," Wise said. "The dark matter creates most of the gravity, and then the gas falls into that gravitational potential, where it can form stars or a massive black hole."

The Renaissance Simulations are the most comprehensive simulations of the earliest stages of the gravitational assembly of the pristine gas composed of hydrogen and helium and cold dark matter leading to the formation of the first stars and galaxies. They use a technique known as adaptive mesh refinement to zoom in on dense clumps forming stars or black holes. In addition, they cover a large enough region of the early universe to form thousands of objects -- a requirement if one is interested in rare objects, as is the case here. "The high resolution, rich physics and large sample of collapsing halos were all needed to achieve this result," said Norman.

The improved resolution of the simulation done for two candidate regions allowed the scientists to see turbulence and the inflow of gas and clumps of matter forming as the black hole precursors began to condense and spin. Their growth rate was dramatic.

"Astronomers observe supermassive black holes that have grown to a billion solar masses in 800 million years," Wise said. "Doing that required an intense convergence of mass in that region. You would expect that in regions where galaxies were forming at very early times."

The research also suggests that the halos that give birth to black holes may be more common than previously believed.

"An exciting component of this work is the discovery that these types of halos, though rare, may be common enough," said Brian O'Shea, a professor at Michigan State University. "We predict that this scenario would happen enough to be the origin of the most massive black holes that are observed, both early in the universe and in galaxies at the present day."

Future work with these simulations will look at the lifecycle of these massive black hole formation galaxies, studying the formation, growth and evolution of the first massive black holes across time. "Our next goal is to probe the further evolution of these exotic objects. Where are these black holes today? Can we detect evidence of them in the local universe or with gravitational waves?" Regan asked.

For these new answers, the research team -- and others -- may return to the simulations.

Read more at Science Daily

Jan 22, 2019

Unique camera enables researchers to see the world the way birds do

The image was taken with the specially designed camera.
Using a specially designed camera, researchers at Lund University in Sweden have succeeded for the first time in recreating how birds see colours in their surroundings. The study reveals that birds see a very different reality compared to what we see.

Human colour vision is based on three primary colours: red, green and blue. The colour vision of birds is based on the same three colours -- but also ultraviolet. Biologists at Lund have now shown that the fourth primary colour of birds, ultraviolet, means that they see the world in a completely different way. Among other things, birds see contrasts in dense forest foliage, whereas people only see a wall of green.

"What appears to be a green mess to humans are clearly distinguishable leaves for birds. No one knew about this until this study," says Dan-Eric Nilsson, professor at the Department of Biology at Lund University.

For birds, the upper sides of leaves appear much lighter in ultraviolet. From below, the leaves are very dark. In this way the three-dimensional structure of dense foliage is obvious to birds. This in turn makes it easy for them to move, find food and navigate. People, on the other hand, do not perceive ultraviolet and see the foliage in green, the primary colour in which contrast is poorest.

Dan-Eric Nilsson founded the world-leading Lund Vision Group at Lund University. The study in question is a collaboration with Cynthia Tedore and was conducted during her time as a postdoc in Lund. She is now working at the University of Hamburg.

It is the first time that researchers have succeeded in imitating bird colour vision with a high degree of precision. This was achieved with the help of a unique camera and advanced calculations. The camera was designed within the Lund Vision Group and equipped with rotating filter wheels and specially manufactured filters, which make it possible to show clearly what different animals see. In this case, the camera imitates with high accuracy the colour sensitivity of the four different types of cones in bird retinas.
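
Conceptually, rendering a "bird's view" comes down to mapping measured spectra onto four cone responses instead of three. The toy sketch below computes relative cone catches for a single surface using assumed Gaussian cone sensitivities, a flat illuminant and an invented leaf reflectance; the real avian sensitivities and the Lund camera's filter set are far more refined than this.

```python
# Toy tetrachromatic "cone catch" calculation (illustrative only): integrate a
# surface's reflectance, weighted by illumination, against four assumed
# Gaussian cone sensitivities (UV, short, medium, long wavelength).
import numpy as np

wl = np.arange(300, 701, 1.0)                      # wavelength grid, nm

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Assumed cone sensitivity peaks (roughly UV, blue, green, red)
cones = {"UV": gaussian(370, 25), "S": gaussian(445, 30),
         "M": gaussian(508, 35), "L": gaussian(565, 40)}

illumination = np.ones_like(wl)                    # flat daylight, a simplification
# Hypothetical reflectance of a leaf's upper side: green peak plus extra UV
reflectance = 0.15 + 0.4 * gaussian(550, 40) + 0.25 * gaussian(370, 30)

catches = {name: np.trapz(sens * illumination * reflectance, wl)
           for name, sens in cones.items()}
total = sum(catches.values())
for name, q in catches.items():
    print(f"{name} cone relative catch: {q / total:.2f}")
```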

"We have discovered something that is probably very important for birds, and we continue to reveal how reality appears also to other animals," says Dan-Eric Nilsson, continuing:

"We may have the notion that what we see is the reality, but it's a highly human reality. Other animals live in other realities, and we can now see through their eyes and reveal many secrets. Reality is in the eye of the beholder," he concludes.

From Science Daily

We need to rethink everything we know about global warming

Air pollution.
For a while now, the scientific community has known that global warming is caused by human-made emissions in the form of greenhouse gases, and global cooling by air pollution in the form of aerosols.

However, new research published in Science by Hebrew University of Jerusalem Professor Daniel Rosenfeld shows that the degree to which aerosols cool the earth has been grossly underestimated, necessitating a recalculation of climate change models to more accurately predict the pace of global warming.

Aerosols are tiny particles that float in the air. They can form naturally (e.g., desert dust) or artificially (e.g., smoke from coal, car exhaust). Aerosols cool our environment by enhancing cloud cover that reflects sunlight (heat) back into space.

Clouds form when rising air cools, but cloud composition is largely determined by aerosols. The more aerosol particles a shallow cloud contains, the more small water droplets it will hold. Rain happens when these droplets bind together. Since it takes longer for small droplets to bind together than it does for large droplets, aerosol-filled or "polluted" clouds contain more water, live in the sky longer (while they wait for droplets to bind and rain to fall, after which the clouds dissipate) and cover a greater area. All the while, the aerosol-laden clouds reflect more solar energy back into space, thereby cooling the Earth's overall temperature.

To what extent do aerosols cool down our environment? To date, all estimates were unreliable because it was impossible to separate the effects of rising winds which create the clouds, from the effects of aerosols which determine their composition. Until now.

Rosenfeld and his colleague Yannian Zhu from the Meteorological Institute of Shaanxi Province in China developed a new method that uses satellite images to calculate separately the effect of vertical winds and that of aerosol cloud droplet numbers. They applied this methodology to low-lying cloud cover above the world's oceans between the Equator and 40°S. With the new method, Rosenfeld and his colleagues were able to calculate aerosols' cooling effect on the Earth's energy budget more accurately, and they discovered that it is nearly twice as strong as previously thought.
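
The general idea of disentangling two covarying drivers can be illustrated in a few lines: bin cloud scenes by a proxy for vertical wind so that, within each bin, differences in reflected sunlight can be related to droplet number alone. This is only a schematic stand-in for the authors' satellite retrieval, run here on synthetic data.

```python
# Schematic illustration of separating two covarying drivers (not the authors'
# retrieval): bin scenes by an updraft proxy, then relate reflected shortwave
# to droplet number within each bin. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
updraft = rng.uniform(0.1, 2.0, n)               # vertical velocity proxy, m/s
droplets = rng.uniform(20, 400, n)               # cloud droplet number, cm^-3
# Synthetic reflected shortwave: depends on both drivers, plus noise
reflected = 30 * np.log(droplets) + 25 * updraft + rng.normal(0, 5, n)

bins = np.linspace(0.1, 2.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (updraft >= lo) & (updraft < hi)
    # Slope of reflected SW vs. log(droplet number) at roughly fixed updraft
    slope = np.polyfit(np.log(droplets[m]), reflected[m], 1)[0]
    print(f"updraft {lo:.1f}-{hi:.1f} m/s: dSW/dln(Nd) = {slope:5.1f} W/m^2")
```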

However, if this is true, then why is the earth getting warmer, not cooler? For all the global attention on climate warming, aerosol pollution rates from vehicles, agriculture and power plants are still very high. For Rosenfeld, this discrepancy might point to an even deeper and more troubling reality. "If the aerosols indeed cause a greater cooling effect than previously estimated, then the warming effect of the greenhouse gases has also been larger than we thought, enabling greenhouse gas emissions to overcome the cooling effect of aerosols and points to a greater amount of global warming than we previously thought," he shared.

The fact that our planet is getting warmer even though aerosols are cooling it down at higher rates than previously thought brings us to a Catch-22 situation: Global efforts to improve air quality by developing cleaner fuels and burning less coal could end up harming our planet by reducing the number of aerosols in the atmosphere, and by doing so, diminishing aerosols' cooling ability to offset global warming.

According to Rosenfeld, another hypothesis to explain why Earth is getting warmer even though aerosols have been cooling it down at an even greater rate is a possible warming effect of aerosols when they lodge in deep clouds, meaning those 10 kilometers or more above the Earth. Israel's Space Agency and France's National Centre for Space Studies (CNES) have teamed up to develop new satellites that will be able to investigate this deep cloud phenomenon, with Professor Rosenfeld as principal investigator.

Read more at Science Daily

Fossilized slime of 100-million-year-old hagfish shakes up vertebrate family tree

Tethymyxine tapirostrum is a 100-million-year-old, 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon, believed to be the first detailed fossil of a hagfish.
Paleontologists at the University of Chicago have discovered the first detailed fossil of a hagfish, the slimy, eel-like carrion feeders of the ocean. The 100-million-year-old fossil helps answer questions about when these ancient, jawless fish branched off the evolutionary tree from the lineage that gave rise to modern-day jawed vertebrates, including bony fish and humans.

The fossil, named Tethymyxine tapirostrum, is a 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon. It fills a 100-million-year gap in the fossil record and shows that hagfish are more closely related to the blood-sucking lamprey than to other fishes. This means that both hagfish and lampreys evolved their eel-like body shape and strange feeding systems after they branched off from the rest of the vertebrate line of ancestry about 500 million years ago.

"This is a major reorganization of the family tree of all fish and their descendants. This allows us to put an evolutionary date on unique traits that set hagfish apart from all other animals," said Tetsuto Miyashita, PhD, a Chicago Fellow in the Department of Organismal Biology and Anatomy at UChicago who led the research. The findings are published this week in the Proceedings of the National Academy of Sciences.

The slimy dead giveaway

Modern-day hagfish are known for their bizarre, nightmarish appearance and unique defense mechanism. They don't have eyes, jaws or teeth to bite with, but instead use a spiky, tongue-like apparatus to rasp flesh off dead fish and whales at the bottom of the ocean. When harassed, they can instantly turn the water around them into a cloud of slime, clogging the gills of would-be predators.

This ability to produce slime is what gave away the Tethymyxine fossil. Miyashita used an imaging technology called synchrotron scanning at Stanford University to identify chemical traces of soft tissue that were left behind in the limestone when the hagfish fossilized. These soft tissues are rarely preserved, which is why there are so few examples of ancient hagfish relatives to study.

The scanning picked up a signal for keratin, the same material that makes up fingernails in humans. Keratin, as it turns out, is a crucial part of what makes the hagfish slime defense so effective. Hagfish have a series of glands along their bodies that produce tiny packets of tightly-coiled keratin fibers, lubricated by mucus-y goo. When these packets hit seawater, the fibers explode and trap the water within, turning everything into shark-choking slop. The fibers are so strong that when dried out they resemble silk threads; they're even being studied as possible biosynthetic fibers to make clothes and other materials.

Miyashita and his colleagues found more than a hundred concentrations of keratin along the body of the fossil, meaning that the ancient hagfish probably evolved its slime defense when the seas included fearsome predators such as plesiosaurs and ichthyosaurs that we no longer see today.

"We now have a fossil that can push back the origin of the hagfish-like body plan by hundreds of millions of years," Miyashita said. "Now, the next question is how this changes our view of the relationships between all these early fish lineages."

Shaking up the vertebrate family tree

Features of the new fossil help place hagfish and their relatives on the vertebrate family tree. In the past, scientists have disagreed about where they belonged, depending on how they tackled the question. Those who rely on fossil evidence alone tend to conclude that hagfish are so primitive that they are not even vertebrates. This implies that all fishes and their vertebrate descendants had a common ancestor that -- more or less -- looked like a hagfish.

But those who work with genetic data argue that hagfish and lampreys are more closely related to each other. This suggests that modern hagfish and lampreys are the odd ones out in the family tree of vertebrates. In that case, the primitive appearance of hagfish and lampreys is deceptive, and the common ancestor of all vertebrates was probably something more conventionally fish-like.

Miyashita's work reconciles these two approaches, using physical evidence of the animal's anatomy from the fossil to come to the same conclusion as the geneticists: that the hagfish and lampreys should be grouped separately from the rest of fishes.

"In a sense, this resets the agenda of how we understand these animals," said Michael Coates, PhD, professor of organismal biology and anatomy at UChicago and a co-author of the new study. "Now we have this important corroboration that they are a group apart. Although they're still part of vertebrate biodiversity, we now have to look at hagfish and lampreys more carefully, and recognize their apparent primitiveness as a specialized condition.

Paleontologists have increasingly used sophisticated imaging techniques in the past few years, but Miyashita's research is one of a handful so far to use synchrotron scanning to identify chemical elements in a fossil. While it was crucial to detect anatomical structures in the hagfish fossil, he believes it can also be a useful tool to help scientists detect paint or glue used to embellish a fossil or even outright forge a specimen. Any attempt to spice up a fossil specimen leaves chemical fingerprints that light up like holiday decorations in a synchrotron scan.

"I'm impressed with what Tetsuto has marshaled here," Coates said. "He's maxed out all the different techniques and approaches that can be applied to this fossil to extract information from it, to understand it and to check it thoroughly."

Read more at Science Daily

Greenland ice melting four times faster than in 2003

Iceberg in Greenland.
Greenland is melting faster than scientists previously thought -- and that melting will likely lead to faster sea level rise -- thanks to the continued, accelerating warming of the Earth's atmosphere, a new study has found.

Scientists concerned about sea level rise have long focused on Greenland's southeast and northwest regions, where large glaciers stream iceberg-sized chunks of ice into the Atlantic Ocean. Those chunks float away, eventually melting. But a new study published Jan. 21 in the Proceedings of the National Academy of Sciences found that the largest sustained ice loss from early 2003 to mid-2013 came from Greenland's southwest region, which is mostly devoid of large glaciers.

"Whatever this was, it couldn't be explained by glaciers, because there aren't many there," said Michael Bevis, lead author of the paper, Ohio Eminent Scholar and a professor of geodynamics at The Ohio State University. "It had to be the surface mass -- the ice was melting inland from the coastline."

That melting, which Bevis and his co-authors believe is largely caused by global warming, means that in the southwestern part of Greenland, growing rivers of water are streaming into the ocean during summer. The key finding from their study: Southwest Greenland, which previously had not been considered a serious threat, will likely become a major future contributor to sea level rise.

"We knew we had one big problem with increasing rates of ice discharge by some large outlet glaciers," he said. "But now we recognize a second serious problem: Increasingly, large amounts of ice mass are going to leave as meltwater, as rivers that flow into the sea."

The findings could have serious implications for coastal U.S. cities, including New York and Miami, as well as island nations that are particularly vulnerable to rising sea levels.

And there is no turning back, Bevis said.

"The only thing we can do is adapt and mitigate further global warming -- it's too late for there to be no effect," he said. "This is going to cause additional sea level rise. We are watching the ice sheet hit a tipping point."

Climate scientists and glaciologists have been monitoring the Greenland ice sheet as a whole since 2002, when NASA and Germany joined forces to launch GRACE. GRACE stands for Gravity Recovery and Climate Experiment, and involves twin satellites that measure ice loss across Greenland. Data from these satellites showed that between 2002 and 2016, Greenland lost approximately 280 gigatons of ice per year, equivalent to 0.03 inches of sea level rise each year. But the rate of ice loss across the island was far from steady.
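
The quoted figures are easy to sanity-check: a gigaton of melted ice adds roughly a cubic kilometer of water to the ocean, and spreading 280 of those over the ocean's surface area (about 3.6 x 10^8 square kilometers) gives close to 0.03 inches of sea level per year.

```python
# Quick check of the quoted conversion, using standard rounded values:
# 1 Gt of ice melts to about 1 km^3 of water; spread over the ocean surface
# (~3.62e8 km^2) that gives the sea-level-rise equivalent.
ice_loss_gt = 280.0                      # Gt per year (from the study)
water_volume_km3 = ice_loss_gt * 1.0     # ~1 km^3 of water per Gt
ocean_area_km2 = 3.62e8                  # approximate global ocean area

rise_km = water_volume_km3 / ocean_area_km2
rise_mm = rise_km * 1e6
rise_inches = rise_mm / 25.4
print(f"~{rise_mm:.2f} mm/yr, i.e. ~{rise_inches:.3f} inches/yr")  # ~0.03 in/yr
```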

Bevis' team used data from GRACE and from GPS stations scattered around Greenland's coast to identify changes in ice mass. The patterns they found show an alarming trend -- by 2012, ice was being lost at nearly four times the rate that prevailed in 2003. The biggest surprise: This acceleration was focused in southwest Greenland, a part of the island that previously hadn't been known to be losing ice that rapidly.

Bevis said a natural weather phenomenon -- the North Atlantic Oscillation, which brings warmer air to West Greenland, as well as clearer skies and more solar radiation -- was building on man-made climate change to cause unprecedented levels of melting and runoff. Global atmospheric warming enhances summertime melting, especially in the southwest. The North Atlantic Oscillation is a natural -- if erratic -- cycle that causes ice to melt under normal circumstances. When combined with man-made global warming, though, the effects are supercharged.

"These oscillations have been happening forever," Bevis said. "So why only now are they causing this massive melt? It's because the atmosphere is, at its baseline, warmer. The transient warming driven by the North Atlantic Oscillation was riding on top of more sustained, global warming."

Bevis likened the melting of Greenland's ice to coral bleaching: Once the ocean's water hits a certain temperature, coral in that region begins to bleach. There have been three global coral bleaching events. The first was caused by the 1997-98 El Niño, and the other two events by the two subsequent El Niños. But El Niño cycles have been happening for thousands of years -- so why have they caused global coral bleaching only since 1997?

"What's happening is sea surface temperature in the tropics is going up; shallow water gets warmer and the air gets warmer," Bevis said. "The water temperature fluctuations driven by an El Niño are riding this global ocean warming. Because of climate change, the base temperature is already close to the critical temperature at which coral bleaches, so an El Niño pushes the temperature over the critical threshold value. And in the case of Greenland, global warming has brought summertime temperatures in a significant portion of Greenland close to the melting point, and the North Atlantic Oscillation has provided the extra push that caused large areas of ice to melt."

Before this study, scientists understood Greenland to be one of the Earth's major contributors to sea-level rise -- mostly because of its glaciers. But these new findings, Bevis said, show that scientists need to be watching the island's snowpack and ice fields more closely, especially in and near southwest Greenland.

GPS systems now in place monitor the margin of Greenland's ice sheet around most of its perimeter, but the network is very sparse in the southwest, so it will be necessary to densify it there, given these new findings.

Read more at Science Daily