Aug 9, 2019

When invasive plants take root, native animals pay the price

Imagine a new breed of pirate, able not only to sail the high seas but to exploit nearly any mode of transportation without detection. And these raiders' ambitions have little to do with amassing treasure and everything to do with hijacking ecosystems.

Today's invasive species are as tenacious and resilient as the pirates of yesteryear, and when these plunderers set foot in new locations around the world, they know how to make themselves at home. As a result, home will never be the same for many native residents.

Virginia Tech researchers have discovered that when invasive plants take root, native animals pay the price.

Jacob Barney, associate professor in the College of Agriculture and Life Sciences' School of Plant and Environmental Sciences, graduate researcher Becky Fletcher, and a team of five other doctoral students conducted the first-ever comprehensive meta-analytic review examining the ecological impacts of invasive plants by exploring how animals -- indigenous and exotic -- respond to these nonnative plants. Their study, which took place over a two-year period, is published in the journal Global Change Biology.

"Individual studies are system-specific, but we wanted to look for commonalities about how animals respond to invasive species. Our findings suggest that the impacts of invasive plants are much worse than we thought," said Barney. "Exotic animals' ability to survive on invasive plants coupled with the reduction of native animals is almost a worst-case scenario."

The team's findings underscore the negative impact of invasive plant species on native animal populations -- populations that include worms, birds, and a host of mammals and other vertebrates -- all of which serve a multitude of important ecosystem functions across a range of trophic levels. Only mollusks and arthropods were unaffected.

"We had reason to believe that native and exotic animals may respond differently to invasive plants," said Fletcher, a Kansas City native who is completing her doctorate in invasive plant ecology, and the paper's lead author. "We hypothesized that exotic plants may increase the abundance of exotic animals while reducing the abundance of native animals."

As it turns out, invasive plants had no impact on the abundance of exotic animals. The plants do not facilitate exotic animals, nor do they harm them. In essence, nonnative flora provides sufficient nourishment and other benefits to sustain nonnative animal populations. Native animals, on the other hand, are diminishing as invasive plants gain a foothold in their habitats.

Gourav Sharma, Ariel Heminger, and Cody Dickinson hold an array of beautiful, though invasive, plant species, including Queen Anne's lace, spotted knapweed, butterfly bush, tree-of-heaven, Amur honeysuckle, Amur maple, and Japanese barberry.

"Invasive species are one of the five drivers of global change. Just as human-induced phenomena, such as land use disturbance, climate change, and disease, are re-shaping our ecosystems, the same is true for invasive plants and animals," said Barney, who is also a fellow in the Fralin Life Sciences Institute and an affiliate of the Global Change Center. "Our world will witness even more invasions over time. So, we must understand the body of research because it will drive conservation efforts."

As a result of human activity, invasive plant and animal species now encircle the planet, colonizing terrestrial, aquatic, and marine environments, and suffusing every ocean and continent. In addition to their ability to displace native plants and animals, invasive species reduce wildlife habitat and alter natural processes. These environmental damages are often amplified by cascading impacts on other associated species and systems, including deforestation, storm water runoff, reduced groundwater, increased risk of wildfires, and the introduction of pathogens. Such sweeping losses also carry severe economic repercussions. While invasive insects cost the agricultural industry $13 billion in crops annually, collectively, invasive species -- plants, animals, and diseases -- cause an estimated $120 billion in damages each year in the United States alone.

A worst-case scenario feared by some researchers is invasion meltdown, which hypothesizes that once an exotic species -- plant or animal -- becomes abundant in an area, the ecosystem may change in such a way that facilitates the establishment of additional invaders. While Barney's study was not designed to test invasion meltdown, the scenario is not so far-fetched.

"In the context of biodiversity, we are worried about the impact invasive species are having on diversity and ecosystems," said Fletcher.

The researchers cite studies showing that native cardinals nesting in invasive Lonicera maackii shrubs fledged 20 percent fewer offspring. The team also discovered that animals in wet ecosystems were more strongly affected than those in dry ecosystems. Rivers, already more nutrient-rich than terrestrial systems, are subject to frequent and intense disruptions, such as flooding, that can carry debris, seeds, and vegetation to new locales.

"As a result of climate change and land-use disturbance, species homogenization is the new normal," said Barney, pointing out another challenge for researchers. "So, identifying nativity, the place a plant or animal has long existed, is becoming much harder. We need to document what is native versus exotic in every system as this will better inform our understanding of the effects of invasive plants."

This information, coupled with better taxonomic identification of the animals impacted by invasive plants, could shed light on whether invasive species are the arbiters of global change or merely the victims.

Read more at Science Daily

Fish preserve DNA 'memories' far better than humans do

We are all familiar with the common myth that fish have poor memory, but it turns out that their DNA has the capacity to hold much more memory than that of humans.

In a study published recently in the journal Nature, University of Otago researchers report that memory in the form of 'DNA methylation' is preserved between generations of fish, in contrast to humans where this is almost entirely erased.

DNA is often compared to a large book, with the words representing an instruction manual for life. DNA methylation encodes additional information that we are only starting to understand -- a little like discovering handwritten notes in the margins of the book saying which pages are the most important, or recording newly acquired information. In humans, these notes are removed at each generation but this apparently does not occur in fish.

First author of the study, University of Otago Anatomy PhD student Oscar Ortega elaborates: "Methylation sits on top of DNA and is used to control which genes are turned on and off. It also helps to define cellular identity and function. In humans and other mammals, DNA methylation is erased at each generation; however, we found that global erasure of DNA methylation memory does not occur at all in the fish we studied."

In recent years much attention has been paid to the idea that significant events such as war or famine can have a lasting effect on subsequent generations through the inheritance of altered DNA methylation patterns. While these 'transgenerational' DNA memory effects are potentially important, they are thought to be extremely rare in humans because of the DNA methylation erasure events that occur during development. However, because fish apparently lack these erasure events, it seems possible they can transmit life experience through their DNA in the form of methylation.

Dr Tim Hore, research team leader and Senior Lecturer at Otago's Department of Anatomy, says the study's findings provide new avenues for scientists to study how the memory of events in one generation can be passed on to the next.

"Mammalian biologists have searched long and hard to find reliable examples of where altered DNA methylation patterns are passed on to subsequent generations; yet only a handful have been verified in repeated studies. However, unlike humans, DNA methylation is not erased at each generation in at least some fish. So, we think intergenerational memory transfer through DNA methylation could be much more common in fish," Dr Hore says.

Also published in Nature Communications is a complementary study from the Garvan Institute (Australia), confirming the Otago observations. "It is really great to have immediate validation that our results are robust -- they used different techniques and developmental samples, but came to the same conclusions as we did," Dr Hore adds.

Read more at Science Daily

Hubble's new portrait of Jupiter

A new Hubble Space Telescope view of Jupiter, taken on June 27, 2019, reveals the giant planet's trademark Great Red Spot, and a more intense color palette in the clouds swirling in Jupiter's turbulent atmosphere than seen in previous years. The colors, and their changes, provide important clues to ongoing processes in Jupiter's atmosphere.

The bands are created by differences in the thickness and height of the ammonia ice clouds. The colorful bands, which flow in opposite directions at various latitudes, result from different atmospheric pressures. Lighter bands rise higher and have thicker clouds than the darker bands.

Among the most striking features in the image are the rich colors of the clouds moving toward the Great Red Spot, a storm rolling counterclockwise between two bands of clouds. These two cloud bands, above and below the Great Red Spot, are moving in opposite directions. The red band above and to the right (northeast) of the Great Red Spot contains clouds moving westward and around the north of the giant tempest. The white clouds to the left (southwest) of the storm are moving eastward to the south of the spot.

All of Jupiter's colorful cloud bands in this image are confined to the north and south by jet streams that remain constant, even when the bands change color. The bands are all separated by winds that can reach speeds of up to 400 miles (644 kilometers) per hour.

On the opposite side of the planet, the band of deep red color northeast of the Great Red Spot and the bright white band to the southeast of it become much fainter. The swirling filaments seen around the outer edge of the red super storm are high-altitude clouds that are being pulled in and around it.

The Great Red Spot is a towering structure shaped like a wedding cake, whose upper haze layer extends more than 3 miles (5 kilometers) higher than clouds in other areas. The gigantic structure, with a diameter slightly larger than Earth's, is a high-pressure wind system called an anticyclone that has been slowly downsizing since the 1800s. The reason for this change in size is still unknown.

A worm-shaped feature located below the Great Red Spot is a cyclone, a vortex around a low-pressure area with winds spinning in the opposite direction from the Red Spot. Researchers have observed cyclones with a wide variety of different appearances across the planet. The two white oval-shaped features are anticyclones, like small versions of the Great Red Spot.

Another interesting detail is the color of the wide band at the equator. The bright orange color may be a sign that deeper clouds are starting to clear out, emphasizing red particles in the overlying haze.

The new image was taken in visible light as part of the Outer Planets Atmospheres Legacy program, or OPAL. The program provides yearly Hubble global views of the outer planets to look for changes in their storms, winds and clouds.

Read more at Science Daily

These sharks use unique molecules to glow green

Chain catshark, Scyliorhinus retifer.
In the depths of the sea, certain shark species transform the ocean's blue light into a bright green color that only other sharks can see -- but how they biofluoresce has previously been unclear. In a study publishing August 8 in the journal iScience, researchers have identified what's responsible for the sharks' bright green hue: a previously unknown family of small-molecule metabolites. Not only is this mechanism of biofluorescence different from how most marine creatures glow, but it may also play other useful roles for the sharks, including helping them identify each other in the ocean and fight against microbial infections.

"Studying biofluorescence in the ocean is like a constantly evolving mystery novel, with new clues being provided as we move the research forward," says David Gruber, a professor at City University of New York and co-corresponding author of the study. "After we first reported that swell sharks were biofluorescent, my collaborators and I decided to dive deeper into this topic. We wanted to learn more about what their biofluorescence might mean to them."

Gruber, working with Jason Crawford, a professor at Yale University and the study's co-corresponding author, focused on two species of sharks -- the swell shark and the chain catshark. They noticed that the sharks' skin had two tones -- light and dark -- and extracted chemicals from the two skin types. What they found was a type of fluorescent molecule that was only present in the light skin.

"The exciting part of this study is the description of an entirely new form of marine biofluorescence from sharks -- one that is based on brominated tryptophan-kynurenine small-molecule metabolites," Gruber says.

These types of small-molecule metabolites are known to be fluorescent and activate pathways similar to those that, in other vertebrates, play a role in the central nervous system and immune system. But in the sharks, the novel small-molecule fluorescent variants account for the biophysical and spectral properties of their lighter skin. This mechanism is different from animals in the upper ocean, such as jellyfish and corals, that commonly use green fluorescent proteins as mechanisms to transform blue light into other colors, Gruber says.

"It's a completely different system for them to see each other that other animals cannot necessarily tap into. They have a completely different view of the world that they're in because of these biofluorescent properties that their skin exhibits and that their eyes can detect," Crawford says. "Imagine if I were bright green, but only you could see me as being bright green, but others could not."

The molecules also serve multiple other purposes, including helping the sharks identify each other in the ocean and potentially providing protection against microbial infections, Crawford says.

"It is also interesting that these biofluorescent molecules display antimicrobial properties. These catsharks live on the ocean bottom, yet we don't see any biofouling or growth, so this could help explain yet another amazing feature of shark skin," Gruber says. "This study opens new questions related to potential function of biofluorescence in central nervous system signaling, resilience to microbial infections, and photoprotection."

While the study focused on two biofluorescent shark species, Gruber and Crawford hope to more broadly explore the bioluminescent and biofluorescent properties of marine animals, which can ultimately lead to the development of new imaging techniques.

"If you can harness the abilities that marine animals have to make light, you can generate molecular systems for imaging in the lab or in medicine. Imaging is an incredibly important biomedical objective that these types of systems could help to propel into the future," Crawford says.

Read more at Science Daily

Aug 8, 2019

ALMA dives into black hole's 'sphere of influence'

What happens inside a black hole stays inside a black hole, but what happens inside a black hole's "sphere of influence" -- the innermost region of a galaxy where a black hole's gravity is the dominant force -- is of intense interest to astronomers and can help determine the mass of a black hole as well as its impact on its galactic neighborhood.

New observations with the Atacama Large Millimeter/submillimeter Array (ALMA) provide an unprecedented close-up view of a swirling disk of cold interstellar gas rotating around a supermassive black hole. This disk lies at the center of NGC 3258, a massive elliptical galaxy about 100 million light-years from Earth. Based on these observations, a team led by astronomers from Texas A&M University and the University of California, Irvine, has determined that this black hole weighs a staggering 2.25 billion solar masses, the most massive black hole measured with ALMA to date.

Though supermassive black holes can have masses that are millions to billions of times that of the Sun, they account for just a small fraction of the mass of an entire galaxy. Isolating the influence of a black hole's gravity from the stars, interstellar gas, and dark matter in the galactic center is challenging and requires highly sensitive observations on phenomenally small scales.

"Observing the orbital motion of material as close as possible to a black hole is vitally important when accurately determining the black hole's mass," said Benjamin Boizelle, a postdoctoral researcher at Texas A&M University and lead author on the study appearing in the Astrophysical Journal. "These new observations of NGC 3258 demonstrate ALMA's amazing power to map the rotation of gaseous disks around supermassive black holes in stunning detail."

Astronomers use a variety of methods to measure black hole masses. In giant elliptical galaxies, most measurements come from observations of the orbital motion of stars around the black hole, taken in visible or infrared light. Another technique, using naturally occurring water masers (radio-wavelength lasers) in gas clouds orbiting around black holes, provides higher precision, but these masers are very rare and are associated almost exclusively with spiral galaxies having smaller black holes.

During the past few years, ALMA has pioneered a new method to study black holes in giant elliptical galaxies. About 10 percent of elliptical galaxies contain regularly rotating disks of cold, dense gas at their centers. These disks contain carbon monoxide (CO) gas, which can be observed with millimeter-wavelength radio telescopes.

By using the Doppler shift of the emission from CO molecules, astronomers can measure the velocities of orbiting gas clouds, and ALMA makes it possible to resolve the very centers of galaxies where the orbital speeds are highest.
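The Doppler step is simple to illustrate. The sketch below uses the standard CO J=2-1 rest frequency (230.538 GHz); the observed frequency is a hypothetical number chosen to match the speeds quoted for this disk, not a value from the study:

```python
# Radial velocity from the Doppler shift of a CO emission line.
# Illustrative sketch: the observed frequency below is hypothetical;
# only the CO J=2-1 rest frequency is a standard value.

C_KM_S = 299_792.458          # speed of light, km/s
F_REST_CO21 = 230.538         # CO J=2-1 rest frequency, GHz

def radial_velocity(f_observed_ghz, f_rest_ghz=F_REST_CO21):
    """Non-relativistic Doppler formula: v = c * (f_rest - f_obs) / f_rest.
    Positive v means the gas is receding (redshifted)."""
    return C_KM_S * (f_rest_ghz - f_observed_ghz) / f_rest_ghz

# A line observed at 229.90 GHz corresponds to gas receding at about
# 830 km/s, i.e. roughly 3 million km/h -- the scale of the speeds
# quoted for the inner disk of NGC 3258.
v = radial_velocity(229.90)
print(f"{v:.0f} km/s  (~{v * 3600 / 1e6:.1f} million km/h)")
```

Mapping this shift pixel by pixel across the disk is what yields the rotation curve.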

"Our team has been surveying nearby elliptical galaxies with ALMA for several years to find and study disks of molecular gas rotating around giant black holes," said Aaron Barth of UC Irvine, a co-author on the study. "NGC 3258 is the best target we've found, because we're able to trace the disk's rotation closer to the black hole than in any other galaxy."

Just as the Earth orbits around the Sun faster than Pluto does because it experiences a stronger gravitational force, the inner regions of the NGC 3258 disk orbit faster than the outer parts due to the black hole's gravity. The ALMA data show that the disk's rotation speed rises from 1 million kilometers per hour at its outer edge, about 500 light-years from the black hole, to well over 3 million kilometers per hour near the disk's center at a distance of just 65 light-years from the black hole.
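The quoted speed and radius can be turned into a back-of-envelope mass via the Keplerian relation M = v^2 r / G. This sketch deliberately ignores the stellar mass inside the orbit and the disk's warp, which the team's full modeling accounts for, so it only reproduces the order of magnitude of the measured 2.25 billion solar masses:

```python
# Back-of-envelope enclosed-mass estimate from circular orbital motion:
# M = v^2 * r / G, using the article's figures (3 million km/h at a
# radius of 65 light-years). This neglects the stellar mass inside the
# orbit, so it overshoots the black-hole-only mass somewhat.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LY = 9.4607e15           # light-year, m

def enclosed_mass_solar(v_km_h, r_ly):
    """Keplerian mass (in solar masses) enclosed within radius r_ly
    for circular orbital speed v_km_h."""
    v = v_km_h * 1000.0 / 3600.0       # km/h -> m/s
    r = r_ly * LY                      # light-years -> m
    return v**2 * r / G / M_SUN

# Comes out around 3e9 solar masses: the same order as the measured
# 2.25 billion.
print(f"{enclosed_mass_solar(3.0e6, 65):.2e} solar masses")
```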

The researchers determined the black hole's mass by modeling the disk's rotation, accounting for the additional mass of the stars in the galaxy's central region and other details such as the slightly warped shape of the gaseous disk. The clear detection of rapid rotation enabled the researchers to determine the black hole's mass with a precision better than one percent, although they estimate an additional systematic 12 percent uncertainty in the measurement because the distance to NGC 3258 is not known very precisely. Even accounting for the uncertain distance, this is one of the most highly precise mass measurements for any black hole outside of the Milky Way galaxy.

Read more at Science Daily

Dark matter may be older than the Big Bang

Dark matter, which researchers believe makes up about 80% of the universe's mass, is one of the most elusive mysteries in modern physics. What exactly it is and how it came to be remains unknown, but a new Johns Hopkins University study now suggests that dark matter may have existed before the Big Bang.

The study, published August 7 in Physical Review Letters, presents a new idea of how dark matter was born and how to identify it with astronomical observations.

"The study revealed a new connection between particle physics and astronomy. If dark matter consists of new particles that were born before the Big Bang, they affect the way galaxies are distributed in the sky in a unique way. This connection may be used to reveal their identity and make conclusions about the times before the Big Bang too," says Tommi Tenkanen, a postdoctoral fellow in Physics and Astronomy at the Johns Hopkins University and the study's author.

While not much is known about its origins, astronomers have shown that dark matter plays a crucial role in the formation of galaxies and galaxy clusters. Though not directly observable, scientists know dark matter exists by its gravitational effects on how visible matter moves and is distributed in space.

For a long time, researchers believed that dark matter must be a leftover substance from the Big Bang. Researchers have long sought this kind of dark matter, but so far all experimental searches have been unsuccessful.

"If dark matter were truly a remnant of the Big Bang, then in many cases researchers should have seen a direct signal of dark matter in different particle physics experiments already," says Tenkanen.

Using a new, simple mathematical framework, the study shows that dark matter may have been produced before the Big Bang, during an era known as cosmic inflation, when space was expanding very rapidly. The rapid expansion is believed to lead to copious production of certain types of particles called scalars. So far, only one scalar particle has been discovered, the famous Higgs boson.

"We do not know what dark matter is, but if it has anything to do with any scalar particles, it may be older than the Big Bang. With the proposed mathematical scenario, we don't have to assume new types of interactions between visible and dark matter beyond gravity, which we already know is there," explains Tenkanen.

While the idea that dark matter existed before the Big Bang is not new, other theorists have not been able to come up with calculations that support the idea. The new study shows that researchers have always overlooked the simplest possible mathematical scenario for dark matter's origins, he says.

The new study also suggests a way to test the origin of dark matter by observing the signatures dark matter leaves on the distribution of matter in the universe.

Read more at Science Daily

Using lasers to visualize molecular mysteries in our atmosphere

Invisible to the human eye, molecular interactions between gases and liquids underpin much of our lives, including the absorption of oxygen molecules into our lungs, many industrial processes and the conversion of organic compounds within our atmosphere. But difficulties in measuring gas-liquid collisions have so far prevented the fundamental exploration of these processes.

Kenneth McKendrick and Matthew Costen, both at Heriot-Watt University, in Edinburgh, U.K., hope their new technique of enabling the visualization of gas molecules bouncing off a liquid surface will help climate scientists improve their predictive atmospheric models. The technique is described in The Journal of Chemical Physics, from AIP Publishing.

"The molecule of interest in our study, the hydroxyl radical, is an unstable fragment of a molecule that affects the whole of the understanding of atmospheric chemistry and things that genuinely affect climate," said McKendrick. "Some of these important OH reactions take place at the surface of liquid droplets, but we can't see surface interactions directly, so we measure the characteristics of the scattered molecules from real-time movies to infer what happened during their encounter with the liquid."

Laser sheets are the key to the technique: 10-nanosecond pulses induce a short-lived fluorescent signal from each molecule as it passes through the sheet. Laser-induced fluorescence isn't new in itself, but this was the first time laser sheets have been applied to scattering from a surface in a vacuum, with no other molecules present to interfere with the scattering from the molecular beam. This enabled the McKendrick team to capture individual frames of molecular movement, from molecular beam to liquid surface and scattering, which were compiled into movies.

Unlike previous methods of capturing gas-liquid interactions, all the characteristics needed to understand the interaction -- speed, scatter angle, rotation, etc. -- are captured within the simple movies that McKendrick describes as "intuitive." By observing the molecular film strips, McKendrick's team noted molecules scattered at a broad range of angles, similar to a ball bouncing off in all directions when thrown onto an uneven surface. This simple observation directly proved the surface of liquids is not flat.

"When you get down to the molecular level, the surface of these liquids is very rough, so much so that you can barely tell the difference between the distribution of molecules when directed down vertically onto the surface or when at an angle of 45 degrees. This finding is important for understanding the chances of different molecular processes happening at the liquid surface," said McKendrick.
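The roughness argument can be illustrated with a toy Monte Carlo. This is a hypothetical microfacet model, not the authors' simulation: each molecule reflects specularly off a local patch of surface whose tilt is drawn at random, and with a large enough tilt spread the outgoing angles are broadly distributed for both vertical and 45-degree incidence:

```python
import random
import statistics

# Toy 2D Monte Carlo of specular reflection off a rough surface.
# Hypothetical illustration (not the study's model): the local facet
# tilt is drawn from a normal distribution, and the reflected angle
# follows the specular rule about that facet's normal.

def scatter_angles(incidence_deg, tilt_sigma_deg, n=20_000, seed=1):
    """Outgoing angles (degrees from vertical) for n molecules arriving
    at incidence_deg, reflecting off facets tilted by N(0, tilt_sigma)."""
    rng = random.Random(seed)
    return [2.0 * rng.gauss(0.0, tilt_sigma_deg) - incidence_deg
            for _ in range(n)]

ROUGHNESS = 25.0   # assumed facet-tilt spread (degrees), chosen for illustration
for inc in (0.0, 45.0):
    angles = scatter_angles(inc, ROUGHNESS)
    print(f"incidence {inc:4.1f} deg -> "
          f"spread {statistics.stdev(angles):5.1f} deg")
```

In both cases the outgoing spread is about twice the facet-tilt sigma, so the two angular distributions overlap heavily, which is the qualitative behavior McKendrick describes.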

Read more at Science Daily

1-2 caffeinated drinks not linked with higher risk of migraines; 3+ may trigger them

Afflicting more than one billion adults worldwide, migraine is the third most prevalent illness in the world. In addition to severe headache, symptoms of migraine can include nausea, changes in mood, sensitivity to light and sound, as well as visual and auditory hallucinations. People who suffer from migraine report that weather patterns, sleep disturbances, hormonal changes, stress, medications and certain foods or beverages can bring on migraine attacks. However, few studies have evaluated the immediate effects of these suspected triggers.

In a study published today in the American Journal of Medicine, researchers at Beth Israel Deaconess Medical Center (BIDMC), Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health (HSPH) evaluated the role of caffeinated beverages as a potential trigger of migraine. Led by Elizabeth Mostofsky, ScD, an investigator in BIDMC's Cardiovascular Epidemiology Research Unit and a member of the Department of Epidemiology at HSPH, researchers found that, among patients who experience episodic migraine, one to two servings of caffeinated beverages were not associated with headaches on that day, but three or more servings of caffeinated beverages may be associated with higher odds of migraine headache occurrence on that day or the following day.

"While some potential triggers -- such as lack of sleep -- may only increase migraine risk, the role of caffeine is particularly complex, because it may trigger an attack but also helps control symptoms," said Mostofsky. "Caffeine's impact depends both on dose and on frequency, but because there have been few prospective studies on the immediate risk of migraine headaches following caffeinated beverage intake, there is limited evidence to formulate dietary recommendations for people with migraines."

In their prospective cohort study, Mostofsky and colleagues -- including Principal Investigator Suzanne M. Bertisch, MD, MPH, of the Division of Sleep and Circadian Disorders at Brigham and Women's Hospital, Beth Israel Deaconess Medical Center, and Harvard Medical School -- followed 98 adults with frequent episodic migraine who completed electronic diaries every morning and every evening for at least six weeks. Every day, participants reported the total servings of caffeinated coffee, tea, soda, and energy drinks they consumed, and filled out twice-daily headache reports detailing the onset, duration, intensity, and medications used for migraines since the previous diary entry. Participants also provided detailed information about other common migraine triggers, including medication use, alcoholic beverage intake, activity levels, depressive symptoms, psychological stress, sleep patterns, and menstrual cycles.

To evaluate the link between caffeinated beverage intake and migraine headache on the same day or on the following day, Mostofsky, Bertisch and colleagues used a self-matched analysis, comparing an individual participant's incidence of migraines on days with caffeinated beverage intake to that same participant's incidence of migraines on days with no caffeinated beverage intake. This self-matching eliminated the potential for factors such as sex, age, and other individual demographic, behavioral and environmental factors to confound the data. The researchers further matched headache incidence by day of the week, eliminating weekend versus week day habits that may also impact migraine occurrence.
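A minimal sketch of the self-matched idea (with made-up toy data, not the study's actual conditional-regression analysis): each participant's high-caffeine days are compared only with that same participant's zero-caffeine days falling on the same day of the week, so stable between-person differences cancel out:

```python
from collections import defaultdict

# Toy self-matched comparison. Each record is
# (participant, weekday, caffeinated servings, migraine that day?).
# The data below are invented purely to demonstrate the matching logic.
diary = [
    ("p1", "Mon", 3, True),  ("p1", "Mon", 0, False),
    ("p1", "Tue", 1, False), ("p1", "Tue", 0, False),
    ("p2", "Mon", 4, True),  ("p2", "Mon", 0, True),
    ("p2", "Fri", 2, False), ("p2", "Fri", 0, False),
]

def matched_rates(diary, threshold=3):
    """Migraine rate on high-caffeine (>= threshold servings) days vs
    zero-caffeine days, using only (participant, weekday) strata that
    contain both exposure levels."""
    strata = defaultdict(list)
    for pid, wday, servings, migraine in diary:
        strata[(pid, wday)].append((servings, migraine))
    exposed, unexposed = [], []
    for entries in strata.values():
        highs = [m for s, m in entries if s >= threshold]
        zeros = [m for s, m in entries if s == 0]
        if highs and zeros:                 # keep strata with both exposures
            exposed.extend(highs)
            unexposed.extend(zeros)
    rate = lambda xs: sum(xs) / len(xs)
    return rate(exposed), rate(unexposed)

print(matched_rates(diary))   # -> (1.0, 0.5) for this toy diary
```

Only the "Mon" strata contribute here, since the other strata lack a day with three or more servings; the actual study fits this comparison within a conditional statistical model rather than raw rates.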

Self-matching also allowed for the variations in caffeine dose across different types of beverages and preparations.

"One serving of caffeine is typically defined as eight ounces (one cup) of caffeinated coffee, six ounces of tea, a 12-ounce can of soda, or a 2-ounce can of an energy drink," said Mostofsky. "Those servings contain anywhere from 25 to 150 milligrams of caffeine, so we cannot quantify the amount of caffeine that is associated with heightened risk of migraine. However, in this self-matched analysis over only six weeks, each participant's choice and preparation of caffeinated beverages should be fairly consistent."

Overall, the researchers saw no association between one to two servings of caffeinated beverages and the odds of headaches on the same day, but they did see higher odds of same-day headaches on days with three or more servings of caffeinated beverages. However, among people who rarely consumed caffeinated beverages, even one to two servings increased the odds of having a headache that day.

"Despite the high prevalence of migraine and often debilitating symptoms, effective migraine prevention remains elusive for many patients," said Bertisch. "This study was a novel opportunity to examine the short-term effects of daily caffeinated beverage intake on the risk of migraine headaches. Interestingly, despite some patients with episodic migraine thinking they need to avoid caffeine, we found that drinking one to two servings/day was not associated with higher risk of headache. More work is needed to confirm these findings, but it is an important first step."

Read more at Science Daily

Aug 7, 2019

Strange coral spawning improving Great Barrier Reef's resilience

A phenomenon that makes coral spawn more than once a year is improving the resilience of the Great Barrier Reef.

The discovery was made by University of Queensland and CSIRO researchers investigating whether corals that split their spawning over multiple months are more successful at spreading their offspring across different reefs.

Dr Karlo Hock, from UQ's School of Biological Sciences, said coral mass spawning events are one of the most spectacular events in the oceans.

"They're incredibly beautiful," Dr Hock said.

"On Australia's Great Barrier Reef, all coral colonies typically spawn only once per year, over several nights after the full moon, as the water warms up in late spring."

Study co-author Dr Christopher Doropoulos from the CSIRO Oceans & Atmosphere said that sometimes, however, corals split their spawning over two successive months.

"This helps them synchronise their reproduction to the best environmental conditions and moon phases," he said.

"While reproductive success during split spawning may be lower than usual because it can lead to reduced fertilisation, we found that the release of eggs in two separate smaller events gives the corals a second and improved chance of finding a new home reef."

The research team brought together multi-disciplinary skills in modelling, coral biology, ecology, and oceanography, simulating the dispersal of coral larvae during these split spawning events among the more than 3,800 reefs that make up the Great Barrier Reef.

They examined whether split spawning events supply larvae to the reefs more reliably, and whether they enhance the reefs' ability to exchange larvae with one another.

UQ's Professor Peter J. Mumby said split spawning events can increase the reliability of larval supply as the reefs tend to be better connected and have more numerous, as well as more frequent, larval exchanges.

"This means that split spawning can increase the recovery potential for reefs in the region.

"A more reliable supply of coral larvae could particularly benefit reefs that have recently suffered disturbances, when coral populations need new coral recruits the most.

"This will become more important as coral reefs face increasingly unpredictable environmental conditions and disturbances."

Dr Hock said the research also revealed that the natural processes of recovery can sometimes be more resilient than originally thought.

"However, even with such mechanisms in place, coral populations can only withstand so much pressure," he said.

"It all ends up being a matter of scale: any potential benefits from split spawning might be irrelevant if we don't have enough coral on these reefs to reproduce successfully."

Read more at Science Daily

Fear of predators causes PTSD-like changes in brains of wild animals

Black-capped chickadee.
Fear can be measured in the brain, and fearful, life-threatening events can leave quantifiable, long-lasting traces in its neural circuitry, with enduring effects on behaviour, as shown most clearly in post-traumatic stress disorder (PTSD).

A new study by Western University demonstrates that the fear predators inspire can leave long-lasting traces in the neural circuitry of wild animals and induce enduringly fearful behaviour, comparable to effects seen in PTSD research.

The findings of this study, led by Western University's Liana Zanette, Scott MacDougall-Shackleton and Michael Clinchy, were published today in Scientific Reports, a Nature Research journal.

For the first time, Zanette, her students and collaborators experimentally demonstrated that the effects predator exposure has on the neural circuitry of fear in wild animals can persist beyond the period of the immediate 'fight or flight' response. The effects remained measurable more than a week later, in animals that had been exposed in the interim to natural environmental and social conditions.

"These results have important implications for biomedical researchers, mental health clinicians, and ecologists," explains Zanette, a biology professor in Western's Faculty of Science and a renowned expert on the ecology and neurobiology of fear. "Our findings support the notion that PTSD is not unnatural and that long-lasting effects of predator-induced fear, with likely consequences for fecundity and survival, are the norm in nature."

Retaining a powerful, enduring memory of a life-threatening predator encounter is clearly evolutionarily beneficial if it helps the individual avoid such events in the future. A growing number of biomedical researchers have begun to propose that PTSD is the cost of inheriting an evolutionarily primitive mechanism that prioritizes survival over quality of life.

Ecologists are recognizing that predators can affect prey numbers not just by killing prey, but also by scaring them. For example, Zanette and her collaborators have shown in a previous study that scared parents are less able to care for their young.

The long-lasting effects of fear on the brain demonstrated in this new study suggest predator exposure could impair parental behaviour for a prolonged period thereafter with greater negative effects on offspring survival than previously envisaged.

Read more at Science Daily

A long time ago, galaxies far, far away

ALMA radio telescope antennas.
Astronomers used the combined power of multiple astronomical observatories around the world and in space to discover a treasure-trove of previously unknown ancient massive galaxies. This is the first multiple discovery of its kind and such an abundance of this type of galaxy defies current models of the universe. These galaxies are also intimately connected with supermassive black holes and the distribution of dark matter.

The Hubble Space Telescope gave us unprecedented access to the previously unseen universe, but even it is blind to some of the most fundamental pieces of the cosmic puzzle. Astronomers from the Institute of Astronomy at the University of Tokyo wanted to see some things they long suspected may be out there but which Hubble could not show them. Newer generations of astronomical observatories have finally revealed what they sought.

"This is the first time that such a large population of massive galaxies was confirmed during the first 2 billion years of the 13.7-billion-year life of the universe. These were previously invisible to us," said researcher Tao Wang. "This finding contravenes current models for that period of cosmic evolution and will help to add some details, which have been missing until now."

But how can something as big as a galaxy be invisible to begin with?

"The light from these galaxies is very faint with long wavelengths invisible to our eyes and undetectable by Hubble," explained Professor Kotaro Kohno. "So we turned to the Atacama Large Millimeter/submillimeter Array (ALMA), which is ideal for viewing these kinds of things. I have a long history with that facility and so knew it would deliver good results."

Even though these galaxies were the largest of their time, the light from them is not only weak but also stretched by their immense distance. As the universe expands, light travelling through it is stretched to longer wavelengths, so visible light eventually becomes infrared. The amount of stretching allows astronomers to calculate how far away something is, which in turn tells them how long ago the light they're seeing was emitted.
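The stretching described here can be put into numbers. In a minimal sketch, using made-up wavelengths chosen for illustration rather than values from the study, the redshift z is the fractional stretch of a known emission line, and (1 + z) is the factor by which every wavelength is lengthened:

```python
# Illustrative redshift calculation; the wavelengths below are hypothetical
# examples, not measurements from the galaxies in this study.

def redshift(observed_nm: float, emitted_nm: float) -> float:
    """z = (lambda_observed - lambda_emitted) / lambda_emitted"""
    return observed_nm / emitted_nm - 1.0

# The Lyman-alpha line is emitted in the ultraviolet at 121.6 nm.
# If a distant galaxy's Lyman-alpha line were observed at 608 nm (visible red),
# its light would have been stretched by a factor of (1 + z) = 5.
z = redshift(608.0, 121.6)
print(z)  # 4.0
```

Larger z means the light has travelled farther and was emitted earlier in cosmic history; light from the first two billion years is stretched well into the infrared and submillimeter bands that ALMA observes.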

"It was tough to convince our peers these galaxies were as old as we suspected them to be. Our initial suspicions about their existence came from the Spitzer Space Telescope's infrared data," continued Wang. "But ALMA has sharp eyes and revealed details at submillimeter wavelengths, the best wavelength to peer through dust present in the early universe. Even so, it took further data from the imaginatively named Very Large Telescope in Chile to really prove we were seeing ancient massive galaxies where none had been seen before."

Another reason these galaxies appear so weak is because larger galaxies, even in the present day, tend to be shrouded in dust, which obscures them more than their smaller galactic siblings.

And what does the discovery of these massive galaxies imply?

"The more massive a galaxy, the more massive the supermassive black hole at its heart. So the study of these galaxies and their evolution will tell us more about the evolution of supermassive black holes, too," said Kohno. "Massive galaxies are also intimately connected with the distribution of invisible dark matter. This plays a role in shaping the structure and distribution of galaxies. Theoretical researchers will need to update their theories now."

What's also interesting is how these 39 galaxies are different from our own. If our solar system were inside one of them and you were to look up at the sky on a clear night, you would see something quite different to the familiar pattern of the Milky Way.

"For one thing, the night sky would appear far more majestic. The greater density of stars means there would be many more stars close by appearing larger and brighter," explained Wang. "But conversely, the large amount of dust means farther-away stars would be far less visible, so the background to these bright close stars might be a vast dark void."

As this is the first time such a population of galaxies has been discovered, the implications of their study are only now being realized. There may be many surprises yet to come.

Read more at Science Daily

Dead planets can 'broadcast' for up to a billion years

Planetary nebula with white dwarf illustration
Astronomers are planning to hunt for cores of exoplanets around white dwarf stars by 'tuning in' to the radio waves that they emit.

In new research led by the University of Warwick, scientists have determined the best candidate white dwarfs to start their search, based upon their likelihood of hosting surviving planetary cores and the strength of the radio signal that we can 'tune in' to.

Published in the Monthly Notices of the Royal Astronomical Society, the research led by Dr Dimitri Veras from the Department of Physics assesses the survivability of planets that orbit stars which have burnt all of their fuel and shed their outer layers, destroying nearby objects and removing the outer layers of planets. They have determined that the cores which result from this destruction may be detectable and could survive for long enough to be found from Earth.

The first exoplanet confirmed to exist was discovered orbiting a pulsar by co-author Professor Alexander Wolszczan from Pennsylvania State University in the 1990s, using a method that detects radio waves emitted from the star. The researchers plan to observe white dwarfs in a similar part of the electromagnetic spectrum in the hope of achieving another breakthrough.

The magnetic field between a white dwarf and an orbiting planetary core can form a unipolar inductor circuit, with the core acting as a conductor due to its metallic constituents. Radiation from that circuit is emitted as radio waves which can then be detected by radio telescopes on Earth. The effect can also be detected from Jupiter and its moon Io, which form a circuit of their own.

However, the scientists needed to determine how long those cores can survive after being stripped of their outer layers. Their modelling revealed that planetary cores can often survive for over 100 million years, and in some cases as long as a billion years.

The astronomers plan to use the results in proposals for observation time on telescopes such as Arecibo in Puerto Rico and the Green Bank Telescope in West Virginia to try to find planetary cores around white dwarfs.

Lead author Dr Dimitri Veras from the University of Warwick said: "There is a sweet spot for detecting these planetary cores: a core too close to the white dwarf would be destroyed by tidal forces, and a core too far away would not be detectable. Also, if the magnetic field is too strong, it would push the core into the white dwarf, destroying it. Hence, we should only look for planets around those white dwarfs with weaker magnetic fields at a separation between about 3 solar radii and the Mercury-Sun distance.

"Nobody has ever found just the bare core of a major planet before, nor a major planet only through monitoring magnetic signatures, nor a major planet around a white dwarf. Therefore, a discovery here would represent 'firsts' in three different senses for planetary systems."

Professor Alexander Wolszczan from Pennsylvania State University, said: "We will use the results of this work as guidelines for designs of radio searches for planetary cores around white dwarfs. Given the existing evidence for a presence of planetary debris around many of them, we think that our chances for exciting discoveries are quite good."

Read more at Science Daily

Aug 6, 2019

Climate change could shrink oyster habitat in California

Ocean acidification is bad news for shellfish, as it makes it harder for them to form their calcium-based shells. But climate change could also have multiple other impacts that make California bays less hospitable to shelled organisms like oysters, which are a key part of the food web.

Changes to water temperature and chemistry resulting from human-caused climate change could shrink the prime habitat and farming locations for oysters in California bays, according to a new study from the University of California, Davis.

The study, published today in the journal Limnology and Oceanography, shows that changes to dissolved oxygen levels, water temperature, and salinity could have an even greater impact than ocean acidification on oyster growth in estuaries and bays.

"The study demonstrates that focusing on ocean acidification alone is misguided," said UC Davis Professor Ted Grosholz, who led the study with funding from California Sea Grant. "Many climate-related stressors contribute to the projected shrinkage of the estuarine zone, where oysters and likely other shellfish would need to flourish."

WHAT'S DIFFERENT ABOUT ESTUARIES

California is home to two oyster species, whose primary habitats are partially enclosed estuaries and bays like Tomales Bay, where Grosholz and colleagues focused their research. The native Olympia oyster is an important foundation species for marine ecosystems, providing structure and food upon which other species rely. The Pacific oyster, though not native to California, is commonly farmed for food in the state, supporting a $25-million industry.

The growing problem of ocean acidification -- caused by increased carbon dioxide (CO2) in the atmosphere -- has been a subject of concern among oyster growers and biologists in California.

In the estuaries and bays where oysters grow, ocean acidification is a much more complicated process than in the open ocean. Out at sea, acidification is driven mostly by atmospheric CO2: as concentrations rise in the atmosphere, more CO2 dissolves into the water, making it more acidic.

What's different about estuaries, said Grosholz, is there are lots of other inputs that can influence acidity.

"In an estuary, you have freshwater coming down from rivers, and there are plants, macroalgae, and phytoplankton that are much more abundant and greatly influence pH," he said.

This leads to a daily cycle of changes in pH (the measure of acidity) that can far exceed the pH changes seen in the open ocean.

Estuarine organisms have evolved with this variable pH regime, which could make them more resilient to ocean acidification. But other factors important to oyster health, such as temperature, salinity, and dissolved oxygen, are also projected to change with climate change.

EXPLORING IMPACTS

Grosholz started the research project in 2014 with UC Davis researcher Ann Russell to explore the impacts of acidification in estuaries and tease out other potential climate change impacts on oyster growth and health.

From 2014 to 2017, the researchers planted juvenile oysters of both species in test beds at nine different locations in Tomales Bay. They monitored their health over one-month periods, during a variety of seasonal conditions. At the same time, they closely monitored water chemistry, temperature, and other conditions at each location, and measured the growth and mortality of the oysters.

They also conducted field experiments with oyster larvae to see how the variations in conditions affected survival and settlement -- the process by which free-floating shellfish larvae attach themselves to a hard surface.

A STRESSFUL COMBINATION OF THREATS

The study found that the lowest oyster growth and survival rates were in the parts of the bay most impacted by river runoff and inflow of upwelled ocean waters. While oyster growth and survival rates were better in the spring and summer, upwelled ocean waters low in dissolved oxygen and pH were also detrimental to the oysters.

In California, climate change is expected to lead to increased variability in precipitation, higher water temperatures, and increased upwelling. The study suggests that this combination of effects would lead to greater stress on oysters, particularly at the edges of bays that connect to rivers and the ocean.

"This means the oysters will be getting squeezed at both sides, reducing the zone of habitat they can thrive in," said Grosholz.

The study provides insight for oyster restoration projects as well as commercial oyster farmers. For example, projections for suitable oyster habitat could help determine where to site projects and farms for the best chance of success.

Read more at Science Daily

How deep space travel could affect the brain

Astronaut, background of stars illustration
Exposure to chronic, low dose radiation -- the conditions present in deep space -- causes neural and behavioral impairments in mice, researchers report in eNeuro. These results highlight the pressing need to develop safety measures to protect the brain from radiation during deep space missions as astronauts prepare to travel to Mars.

Radiation is known to disrupt signaling, among other processes, in the brain. However, previous experiments used short-term exposures at higher dose rates, which do not accurately reflect the conditions in space.

To investigate how deep space travel could affect the nervous system, Charles Limoli and colleagues at the University of California, Irvine, Stanford University, Colorado State University and the Eastern Virginia School of Medicine exposed mice to chronic, low dose radiation for six months. They found that the radiation exposure impaired cellular signaling in the hippocampus and prefrontal cortex, resulting in learning and memory impairments. They also observed increased anxiety behaviors, indicating that the radiation also impacted the amygdala.

The researchers predict that during a deep space mission, approximately one in five astronauts would experience anxiety-like behavior and one in three would experience some level of memory impairment. Additionally, astronauts may struggle with decision-making.

From Science Daily

New hormone injection aids weight loss in obese patients

The findings came from a small study in which patients lost 4.4kg on average, and the treatment led to substantial improvements in their blood glucose, with some patients' levels falling to near-normal.

Obesity is a common problem in the UK, where an estimated one in four adults is obese.

One of the most common types of weight loss surgery is a procedure known as gastric bypass surgery, which can be very effective in keeping excess weight off and improving blood sugar levels in diabetics. However, some patients decide against surgery and the procedure can cause complications such as abdominal pain, chronic nausea, vomiting and debilitating low blood sugar levels.

Previous research by Imperial College London suggested that one of the reasons why gastric bypass surgery works so well is because three specific hormones originating from the bowels are released in higher levels. This hormone combination, called 'GOP' for short, reduces appetite, causes weight loss and improves the body's ability to use the sugar absorbed from eating.

Researchers wanted to see if infusing patients with the GOP hormones glucagon-like peptide-1 (GLP-1), oxyntomodulin and peptide YY, to mimic the high levels seen after surgery, could aid weight loss and reduce high glucose levels.

Fifteen patients were given the GOP treatment for four weeks using a pump that slowly injects the GOP mixture under the skin for 12 hours a day, beginning one hour before breakfast and disconnecting after their last meal of the day. Patients also received dietetic advice on healthy eating and weight loss from a dietician.

Professor Tricia Tan, Professor of Practice (Metabolic Medicine & Endocrinology) at Imperial College London and lead author of the study, said:

"Obesity and type 2 diabetes can lead to very serious and potentially life-threatening conditions such as cancer, stroke and heart disease. There is a real need to find new medicines so we can improve and save the lives of many patients. Although this is a small study our new combination hormone treatment is promising and has shown significant improvements in patients' health in only four weeks. Compared to other methods the treatment is non-invasive and reduced glucose levels to near-normal levels in our patients."

The work, published in Diabetes Care and presented at the American Diabetes Association 79th Scientific Sessions meeting at San Francisco, took place at Imperial College London in collaboration with University of Copenhagen and University College Dublin. The treatment was trialled on patients at the National Institute for Health Research Imperial Clinical Research Facility at Hammersmith Hospital, part of Imperial College Healthcare NHS Trust.

Twenty-six obese patients with prediabetes (blood glucose that is too high but not high enough to be classified as diabetes) or with diabetes were recruited to the study at Hammersmith Hospital from July 2016 to October 2018. Fifteen patients were randomly selected to receive the hormone treatment and 11 patients were given a saline (salt water) infusion as a placebo over a four-week period. The team also recruited 21 patients who had undergone bariatric surgery and 22 patients who followed a very low-calorie diet to compare the results of GOP. All patients were given a glucose monitoring device to track their glucose levels following treatment.

In the trial, patients on the GOP treatment lost an average of 4.4kg, compared with 2.5kg for participants receiving a saline placebo. The treatment also had no side effects.

However, patients who received bariatric surgery or who followed a very low calorie diet lost significantly more weight than GOP patients. The changes in weight were 10.3kg for bariatric patients and 8.3kg for patients who followed a very low calorie diet.

Professor Tan commented: "Although the weight loss was smaller, using the GOP infusion would be preferable as it has fewer side effects than bariatric surgery. This result shows that it is possible to obtain some of the benefits of a gastric bypass operation without undergoing the surgery itself. If further trials are successful, in future we could potentially give this type of treatment to many more patients."

The team also found that GOP was capable of lowering blood glucose levels to near-normal levels, with little variation in the blood glucose. Patients who received bariatric surgery also had an overall improvement in blood glucose, but the levels were much more variable, leaving them vulnerable to low blood glucose levels.

The team aim to carry out a larger clinical trial to assess the impact of GOP on more patients over a longer period of time.

Read more at Science Daily

Maya more warlike than previously thought

The Maya of Central America were long thought to have been a kinder, gentler civilization, especially compared to the Aztecs of Mexico. At the peak of Mayan culture some 1,500 years ago, warfare appeared ritualistic, designed to extort ransom for captive royalty or to subjugate rival dynasties, with limited impact on the surrounding population.

Only later, archeologists thought, did increasing drought and climate change lead to total warfare -- cities and dynasties were wiped off the map in so-called termination events -- and the collapse of the lowland Maya civilization around 1,000 A.D. (or C.E., current era).

New evidence unearthed by a researcher from the University of California, Berkeley, and the U.S. Geological Survey calls all this into question, suggesting that the Maya engaged in scorched-earth military campaigns -- a strategy that aims to destroy anything of use, including cropland -- even at the height of their civilization, a time of prosperity and artistic sophistication.

The finding also indicates that this increase in warfare, possibly associated with climate change and resource scarcity, was not the cause of the disintegration of the lowland Maya civilization.

"These data really challenge one of the dominant theories of the collapse of the Maya," said David Wahl, a UC Berkeley adjunct assistant professor of geography and a researcher at the USGS in Menlo Park, California. "The findings overturn this idea that warfare really got intense only very late in the game."

"The revolutionary part of this is that we see how similar Mayan warfare was from early on," said archaeologist Francisco Estrada-Belli of Tulane University, Wahl's colleague. "It wasn't primarily the nobility challenging one another, taking and sacrificing captives to enhance the charisma of the captors. For the first time, we are seeing that this warfare had an impact on the general population."

Total warfare

The evidence, reported today in the journal Nature Human Behaviour, is an inch-thick layer of charcoal at the bottom of a lake, Laguna Ek'Naab, in Northern Guatemala: a sign of extensive burning of a nearby city, Witzna, and its surroundings that was unlike any other natural fire recorded in the lake's sediment.

The charcoal layer dates from between 690 and 700 A.D., right in the middle of the classic period of Mayan civilization, 250-950 A.D. The date for the layer coincides exactly with the date -- May 21, 697 A.D. -- of a "burning" campaign recorded on a stone stela, or pillar, in a rival city, Naranjo.

"This is really the first time the written record has been linked to an event in the paleo data sets in the New World," Wahl said. "In the New World, there is so little writing, and what's preserved is mostly on stone monuments. This is unique in that we were able to identify this event in the sedimentary record and point to the written record, particularly these Mayan hieroglyphs, and make the inference that this is the same event."

Wahl, a geologist who studies past climate and is first author of the study, worked with USGS colleague Lysanna Anderson and Estrada-Belli to extract 7 meters of sediment cores from the lake. Laguna Ek'Naab, which is about 100 meters across, is located at the base of the plateau where Witzna once flourished and has collected thousands of years of sediment from the city and its surrounding agricultural fields. After seeing the charcoal layer, the archaeologists examined many of Witzna's ruined monuments still standing in the jungle and found evidence of burning in all of them.

"What we see here is, it looks like they torched the entire city and, indeed, the entire watershed," Wahl said. "Then, we see this really big decrease in human activity afterwards, which suggests at least that there was a big hit to the population. We can't know if everyone was killed or if they simply migrated away, but what we can say is that human activity decreased very dramatically immediately after that event."

This one instance does not prove that the Maya engaged in total warfare throughout the 650-year classic period, Estrada-Belli said, but it does fit with increasing evidence of warlike behavior throughout that period: mass burials, fortified cities and large standing armies.

"We see destroyed cities and resettled people similar to what Rome did to Carthage or Mycenae to Troy," Estrada-Belli said.

And if total warfare was already common at the peak of Mayan lowland civilization, then it is unlikely to have been the cause of the civilization's collapse, the researchers argue.

"I think, based on this evidence, the theory that a presumed shift to total warfare was a major factor in the collapse of Classic Maya society is no longer viable," said Estrada-Belli. "We have to rethink the cause of the collapse, because we're not on the right path with warfare and climate change."

'Bahlam Jol burned for the second time'

Though Mayan civilization originated more than 4,000 years ago, the Classic period is characterized by widespread monumental architecture and urbanization exemplified by Tikal in Guatemala and Dzibanché in Mexico's Yucatan. City-states -- independent states made up of cities and their surrounding territories -- were ruled by dynasties that, archaeologists thought, established alliances and waged wars much like the city-states of Renaissance Italy, which affected the nobility without major impacts on the population.

In fact, most archaeologists believe that the incessant warfare that arose in the terminal Classic period (800-950 A.D.), presumably because of climate change, was the major cause of the decline of Mayan cities throughout present-day El Salvador, Honduras, Guatemala, Belize and Southern Mexico.

So when Wahl, Anderson and Estrada-Belli discovered the charcoal layer in 2013 in Laguna Ek'Naab -- a layer unlike anything Wahl had seen before -- they were puzzled. The scientists had obtained the lake core in order to document the changing climate in Central America, hoping to correlate climate shifts with changes in human occupation and food cultivation.

The puzzle lingered until 2016, when Estrada-Belli and co-author Alexandre Tokovinine, a Mayan epigrapher at the University of Alabama, discovered a key piece of evidence in the ruins of Witzna: an emblem glyph, or city seal, identifying Witzna as the ancient Mayan city Bahlam Jol. Searching through a database of names mentioned in Mayan hieroglyphs, Tokovinine found that very name in a "war statement" on a stela in the neighboring city-state of Naranjo, about 32 kilometers south of Bahlam Jol/Witzna.

The statement said that on the day "... 3 Ben, 16 Kasew ('Sek'), Bahlam Jol 'burned' for the second time." According to Tokovinine, the connotation of the word "burned," or puluuy in Mayan, has always been unclear, but the date 3 Ben, 16 Kasew on the Mayan calendar, or May 21, 697, clearly associates this word with total warfare and the scorched-earth destruction of Bahlam Jol/Witzna.

"The implications of this discovery extend beyond mere reinterpretation of references to burning in ancient Maya inscriptions," Tokovinine said. "We need to go back to the drawing board on the very paradigm of ancient Maya warfare as centered on taking captives and extracting tribute."

Three other references to puluuy or "burning" are mentioned in the same war statement, referencing the cities of Komkom, known today as Buenavista del Cayo; K'an Witznal, now Ucanal; and K'inchil, location unknown. These cities may also have been decimated, if the word puluuy describes the same extreme warfare in all references. The earlier burning of Bahlam Jol/Witzna mentioned on the stela may also have left evidence in the lake cores -- there are three other prominent charcoal layers in addition to the one from 697 A.D. -- but the date of the earlier burning is unknown.

Mayan archaeologists have reconstructed some of the local history, and it's known that the conquest of Bahlam Jol/Witzna was set in motion by a queen of Naranjo, Lady 6 Sky, who was trying to reestablish her dynasty after the city-state had declined and lost all its possessions. She set her seven-year-old son, Kahk Tilew, on the throne and then began military campaigns to wipe out all the rival cities that had rebelled, Estrada-Belli said.

"The punitive campaign was recorded as being waged by her son, the king, but we know it's really her," he said.

That was not the end of Bahlam Jol/Witzna, however. The city revived, to some extent, with a reduced population, as seen in the lake cores. And the emblem glyph was found on a stela erected around 800 A.D., 100 years after the city's destruction. The city was abandoned around 1,000 A.D.

"The ability to tie geologic evidence of a devastating fire to an event noted in the epigraphic record, made possible by the relatively uncommon discovery of an ancient Maya city's emblem glyph, reflects a confluence of findings nearly unheard of in the field of geoarchaeology," Wahl said.

Read more at Science Daily

Aug 5, 2019

How wildfires trap carbon for centuries to millennia

Charcoal produced by wildfires could trap carbon for hundreds of years and help mitigate climate change, according to new research published today.

The extensive and unprecedented outbreak of wildfires in the arctic and the vast amounts of CO2 they are emitting have been hitting the headlines across the world.

But a new Nature Geoscience study quantifies the important role that charcoal plays in helping to compensate for carbon emissions from fires. And the research team say that this charcoal could effectively 'lock away' a considerable amount of carbon for years to come.

In an average year, wildfires around the world burn an area equivalent to the size of India and emit more carbon dioxide to the atmosphere than global road, rail, shipping and air transport combined.

As vegetation in burned areas regrows, it draws CO2 back out of the atmosphere through photosynthesis. This is part of the normal fire-recovery cycle, which can take less than a year in grasslands or decades in fire-adapted forests.

In extreme cases, such as arctic or tropical peatlands, full recovery may not occur for centuries.

This recovery of vegetation is important because carbon that is not re-captured stays in the atmosphere and contributes to climate change.

Deforestation fires are a particularly important contributor to climate change as these result in a long-term loss of carbon to the atmosphere.

Now, a new study by researchers at Swansea University and Vrije Universiteit Amsterdam has quantified the important role that charcoal created by fires -- known as pyrogenic carbon -- plays in helping to compensate for carbon emissions.

Lead author Dr Matthew Jones, who recently joined the UEA's School of Environmental Sciences from Swansea University, said: "CO2 emitted during fires is normally sequestered again as vegetation regrows, and researchers generally consider wildfires to be carbon neutral events once full biomass recovery has occurred.

"However, in a fire some of the vegetation is not consumed by burning, but instead transformed to charcoal. This carbon-rich material can be stored in soils and oceans over very long time periods.

"We have combined field studies, satellite data, and modelling to better quantify the amount of carbon that is placed into storage by fires at the global scale."

The paper, which was co-authored by Dr Cristina Santin and Prof Stefan Doerr, from Swansea University, and Prof Guido van der Werf, of Vrije Universiteit Amsterdam, explained that, as well as emitting CO2 to the atmosphere, landscape fires also transfer a significant fraction of affected vegetation carbon to charcoal and other charred materials.

The researchers say this pyrogenic carbon needs to be considered in global fire emission models.

Dr Jones said: "Our results show that, globally, the production of pyrogenic carbon is equivalent to 12 per cent of CO2 emissions from fires and can be considered a significant buffer for landscape fire emissions.

"Climate warming is expected to increase the prevalence of wildfires in many regions, particularly in forests. This may lead to an overall increase in atmospheric CO2 emissions from wildfires, but also an increase in pyrogenic carbon storage. If vegetation is allowed to recover naturally then the emitted CO2 will be recaptured by regrowth in future decades, leaving behind an additional stock of pyrogenic carbon in soils, lakes and oceans.

"We expect any additional pyrogenic carbon to be trapped for a period of centuries to millennia, and although it will eventually return to the atmosphere as charcoal degrades, it is locked away and unable to affect our climate in the meantime.

"This brings some good news, although rising CO2 emissions caused by human activity, including deforestation and some peatland fires, continue to pose a serious threat to global climate."

There are still important questions to be answered about how a warmer, more drought-prone climate will affect the global extent of wildfires in the future. For example, will there be more fire in arctic peatlands, such as we are experiencing this summer, and what proportion of CO2 emissions will be recaptured by future vegetation regrowth?

Read more at Science Daily

Symphony of genes in animal evolution

One of the most exciting discoveries in genome research was that the last common ancestor of all multicellular animals -- which lived about 600 million years ago -- already possessed an extremely complex genome. Many of the ancestral genes can still be found in modern-day species (e.g., humans). However, it has long been unclear whether the arrangement of these genes in the genome also had a certain function. In a recent study in Nature Ecology and Evolution, the biologists led by Oleg Simakov and Ulrich Technau show that not only individual genes but also these gene arrangements in the genome have played a key role in the course of animal evolution.

Genomes store the instructions for how to build an organism. Often only individual genes are associated with certain functions. However, the genome not only defines single genes but also tells us about their arrangement on the DNA. Remarkably, many of these arrangements have been preserved from the genome of the common ancestor of sponges and humans, over 600 million years ago. Despite this, their potential function has long eluded scientists.

What gene arrangements reveal

In their current study, the team from the Department of Molecular Evolution and Development at the University of Vienna has now uncovered the first insights into this question. Using comparative genomic analyses, the researchers reconstructed evolutionarily conserved gene arrangements in animals and investigated their activity in different cell types. They could show that genes that are always present together in the genome in several species, also tend to be active in the same cells. For example, three genes that have been adjacent in several species (e.g., in sponges or cnidarians) for 600 million years are primarily active in a digestive cell type. "Cell types in animals can thus be characterized not only by individual genes but also by specific gene arrangements, and different cell types are also capable of accessing different regions in the genome," explains Oleg Simakov, evolutionary biologist at the University of Vienna. In addition, the team noted that certain cell types seem to utilize such conserved regions more than others, and thus may represent very ancestral functions.

The results show that not only gene loss and the emergence of new genes have played an important role in evolution; changes in the arrangement of genes in the genome have also made a significant contribution. "The study thus opens up a far-reaching perspective on investigating the functions of these regions in the respective cell types," concludes Simakov.

From Science Daily

Measuring distances to remote celestial objects and analyzing cosmic clouds

Researchers in Japan and the Netherlands jointly developed an innovative radio receiver, DESHIMA (Deep Spectroscopic High-redshift Mapper), and successfully obtained the first spectra and images with it. Combining the ability to detect a wide frequency range of cosmic radio waves with the ability to disperse them into different frequencies, DESHIMA demonstrated its unique power to efficiently measure the distances to the remotest objects as well as to map the distributions of various molecules in nearby cosmic clouds.

"Deshima" (or, Dejima) was a Dutch trading post in Japan built in the mid-17th century. For 200 years, Deshima was Japan's precious window to the world. Now, the two friendly nations open up another window to a new world, the vast Universe, with innovative nanotechnology.

"DESHIMA is a completely new type of astronomical instrument with which a 3D map of the early Universe can be constructed," said Akira Endo, a researcher at the Delft University of Technology and the leader of the DESHIMA project.

The uniqueness of DESHIMA is that it can disperse the wide frequency range of radio waves into different frequencies. DESHIMA's instantaneous frequency width (332 -- 377 GHz) is more than five times wider than that of the receivers used in the Atacama Large Millimeter/submillimeter Array (ALMA).
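The "more than five times wider" comparison can be checked with simple arithmetic. The ~8 GHz figure for a typical ALMA receiver's instantaneous bandwidth is an assumption for illustration; the article gives only DESHIMA's frequency range.

```python
# DESHIMA's instantaneous band vs. an assumed ~8 GHz for an ALMA receiver.
deshima_band = (332.0, 377.0)                      # GHz, from the article
deshima_width = deshima_band[1] - deshima_band[0]  # 45 GHz
alma_width = 8.0                                   # GHz, assumed typical value

print(f"DESHIMA covers {deshima_width:.0f} GHz at once, "
      f"~{deshima_width / alma_width:.1f}x wider")
```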

Dispersing cosmic radio waves into different frequencies, or spectroscopy, is an important technique for extracting information about the Universe. Since different molecules emit radio waves at different frequencies, spectroscopic observations tell us the composition of celestial objects. Also, cosmic expansion decreases the measured frequencies, and measuring the shift from the rest frequency gives us the distances to remote objects.
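The frequency-shift-to-distance idea can be sketched in a few lines. The CO(3-2) rest frequency is a standard laboratory value; the observed frequency below is purely illustrative, not a measurement from the article.

```python
# Cosmic expansion lowers an emission line's observed frequency; the
# fractional shift gives the redshift z, a proxy for distance.
f_rest = 345.796   # GHz, CO(3-2) rest frequency (laboratory value)
f_obs = 343.5      # GHz, hypothetical observed frequency

z = f_rest / f_obs - 1.0   # larger z = greater recession velocity = farther
print(f"redshift z = {z:.4f}")
```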

"There are many existing radio receivers with spectroscopic capability; however, the frequency range covered in one observation is quite limited," says Yoichi Tamura, an associate professor at Nagoya University. "On the other hand, DESHIMA achieves an ideal balance between the width of frequency range and spectroscopic performance."

Behind this unique capability is the innovative nanotechnology. The research team developed a special superconducting electric circuit, a filterbank, in which radio waves are dispersed into different frequencies, like a sorting conveyor in a fulfillment center. At the end of the "signal conveyors," sensitive Microwave Kinetic Inductance Detectors (MKID) are located and detect the dispersed signals. DESHIMA is the world's first instrument to combine these two technologies on a chip to detect radio waves from the Universe.

As its first test observation, DESHIMA was installed on a 10-m submillimeter telescope, the Atacama Submillimeter Telescope Experiment (ASTE) operated by the National Astronomical Observatory of Japan (NAOJ) in Northern Chile. The first target was the active galaxy VV 114, whose distance had already been measured at 290 million light-years. DESHIMA successfully detected the signal from carbon monoxide (CO) molecules in the galaxy at the frequency expected from the expansion of the Universe.

When astronomers try to detect radio emission from a remote object with unknown distance, usually they sweep a certain range of frequency. Using conventional radio receivers with narrow bandwidth, they need to repeat observations while slightly shifting the frequency. By contrast, the wide-band DESHIMA greatly improves the efficiency of the emission search and helps researchers to produce maps of distant galaxies.

DESHIMA's high performance has also been proven for observations of nearby molecular clouds. DESHIMA simultaneously captured and imaged the distribution of the emission signals from three molecules, CO, formyl ion (HCO+), and hydrogen cyanide (HCN) in the Orion nebula.

Read more at Science Daily

A new lens for life-searching space telescopes

The University of Arizona Richard F. Caris Mirror Laboratory is a world leader in producing the largest telescope mirrors. In fact, it is currently fabricating mirrors for the largest and most advanced Earth-based telescope: the Giant Magellan Telescope.

But there are size constraints, ranging from the mirror's own weight, which can distort images, to the size of the freeways and underpasses needed to transport finished pieces. Such giant mirrors are reaching their physical limits, and when they do, the UA aims to remain a global contributor to the art of gathering light by driving change in the way astronomers observe the stars.

"We are developing a new technology to replace mirrors in space telescopes," said UA associate professor Daniel Apai, of Steward Observatory and the Lunar and Planetary Laboratory. "If we succeed, we will be able to vastly increase the light-collecting power of telescopes, and among other science, study the atmospheres of 1,000 potentially earth-like planets for signs of life."

Apai leads the space science half of the team, while UA professor Tom Milster, of the James C. Wyant College of Optical Sciences, leads the optical design of a replicable space telescope dubbed Nautilus. The researchers intend to deploy a fleet of 35 14-meter-wide spherical telescopes, each individually more powerful than the Hubble Space Telescope.

Each unit will contain a meticulously crafted 8.5-meter diameter lens, which will be used for astronomical observations. One use particularly exciting for Apai is analyzing starlight as it filters through planetary atmospheres, a technique which could reveal chemical signatures of life.

When combined, the telescope array will be powerful enough to characterize 1,000 extrasolar planets from as far away as 1,000 light years. By contrast, even NASA's most ambitious space telescope missions are designed to study only a handful of potentially Earth-like extrasolar planets.

"Such a sample may be too small to truly understand the complexity of exo-earths," according to Apai and Milster's co-authored paper, which was published July 29 in the Astronomical Journal along with several other authors, including Steward Observatory astronomer Glenn Schneider and Alex Bixel, an astronomer and UA graduate student.

To develop Nautilus, Apai and Milster defined a goal and designed Nautilus to meet it.

"We wanted to search 1,000 potentially earth-like planets for signs of life. So, we first asked, what kinds of stars are most likely to host planets? Then, how far do we need to go in space to have 1,000 earth-like planets orbiting around them? It turned out that it's over 1,000 light years -- a great distance, but still just a small part of the galaxy," Apai said. "We then calculated the light collecting power needed, which turned out to be the equivalent of a 50-meter diameter telescope."

The Hubble mirror is 2.4 meters in diameter and the James Webb Space Telescope mirror is 6.5 meters in diameter. Both were designed for different purposes and before exoplanets were even discovered.

"Telescope mirrors collect light -- the larger the surface, the more starlight they can catch," Apai said. "But no one can build a 50-meter mirror. So we came up with Nautilus, which relies on lenses, and instead of building an impossibly huge 50-meter mirror, we plan on building a whole bunch of identical smaller lenses to collect the same amount of light."
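The equivalence between many small lenses and one huge mirror is just a comparison of collecting areas. The sketch below uses the article's figures (35 units, 8.5-meter lenses, 50-meter target) to check that the areas match.

```python
# Total collecting area of 35 identical 8.5-m lenses, expressed as the
# diameter of the single circular aperture with the same area.
import math

n_units = 35
lens_d = 8.5                                   # meters, per the article
total_area = n_units * math.pi * (lens_d / 2) ** 2

equiv_d = 2 * math.sqrt(total_area / math.pi)  # equivalent single aperture
print(f"equivalent single aperture: {equiv_d:.1f} m")  # ~50.3 m
```

In general the equivalent diameter of N identical apertures is just sqrt(N) times one aperture's diameter, which is why 35 lenses of 8.5 m land almost exactly on the 50-meter goal.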

The lenses were inspired by lighthouse lenses -- large but lightweight -- and include additional tweaks such as precision carving with diamond-tipped tools. The patented design, a hybrid between refractive and diffractive lenses, makes them more powerful and better suited to planet hunting, Milster said.

Because the lenses are lighter than mirrors, they are less expensive to launch into space and can be made quickly and cheaply using a mold. They are also less sensitive to misalignments, making telescopes built with this technology much more economical. Much like Ford did for cars, Ikea did for furniture, and SpaceX for rockets, Nautilus will use new technology, a simpler design, and lightweight components to provide cheaper and more efficient telescopes with more light-collecting power.

Nautilus telescopes also don't require any fancy observing technique.

"We don't need extremely high-contrast imaging. We don't need a separate spacecraft with a giant starshade to occult the planet host stars. We don't need to go into the infrared," Apai said. "What we do need is to collect lots of light in an efficient and cheap way."

In the last few decades, computers, electronics and data-collection instruments have all become smaller, cheaper, faster and more efficient. Mirrors, on the other hand, are an exception to this trend, as they haven't seen big cost reductions.

"Currently, mirrors are expensive because it takes years to grind, polish, coat and test," Apai said. Their weight also makes them expensive to launch. "But our Nautilus technology starts with a mold, and often it takes just hours to make a lens. We also have more control over the process, so if we make a mistake, we don't need to start all over again like you may need to with a mirror."

Additionally, risk would be distributed over many telescopes, so if something goes wrong, the mission isn't scrapped. Many telescopes remain.

"Everything is simple, cheap and replicable, and we can collect a lot of light," Apai said.

Apai and Milster have another vision if they succeed: "Using the low-cost, replicated space telescope technology, universities would be able to launch their own small, Earth- or space-observing telescopes. Instead of competing for bits of time on Hubble, they'd get their own telescope, controlled by their own teams," Apai said.

Read more at Science Daily

Aug 4, 2019

Confirmation of toasty TESS planet leads to surprising find of promising world

This diagram shows the layout of the GJ 357 system. Planet d orbits within the star’s so-called habitable zone, the orbital region where liquid water can exist on a rocky planet’s surface. If it has a dense atmosphere, which will take future studies to determine, GJ 357 d could be warm enough to permit the presence of liquid water.
A piping hot planet discovered by NASA's Transiting Exoplanet Survey Satellite (TESS) has pointed the way to additional worlds orbiting the same star, one of which is located in the star's habitable zone. If made of rock, this planet may be around twice Earth's size.

The new worlds orbit a star named GJ 357, an M-type dwarf about one-third the Sun's mass and size and about 40% cooler than our star. The system is located 31 light-years away in the constellation Hydra. In February, TESS cameras caught the star dimming slightly every 3.9 days, revealing the presence of a transiting exoplanet -- a world beyond our solar system -- that passes across the face of its star during every orbit and briefly dims the star's light.
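The depth of that slight dimming is set by the planet-to-star area ratio, roughly (R_planet / R_star)^2. The stellar and planetary radii below are assumed values for a GJ 357-like system (the article gives the star's size only as about one-third of the Sun's), so this is a sketch of the method, not a reported measurement.

```python
# Transit depth for an assumed GJ 357-like star and a planet ~22% larger
# than Earth.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371

r_star = 0.34 * R_SUN_KM      # assumed stellar radius
r_planet = 1.22 * R_EARTH_KM  # assumed planet radius

depth = (r_planet / r_star) ** 2
print(f"transit depth: {depth * 100:.2f}% dimming")   # ~0.1%
```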

"In a way, these planets were hiding in measurements made at numerous observatories over many years," said Rafael Luque, a doctoral student at the Institute of Astrophysics of the Canary Islands (IAC) on Tenerife who led the discovery team. "It took TESS to point us to an interesting star where we could uncover them."

The transits TESS observed belong to GJ 357 b, a planet about 22% larger than Earth. It orbits 11 times closer to its star than Mercury does to our Sun. This gives it an equilibrium temperature -- calculated without accounting for the additional warming effects of a possible atmosphere -- of around 490 degrees Fahrenheit (254 degrees Celsius).
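The equilibrium-temperature calculation mentioned above can be sketched directly. For a fast-rotating planet with zero albedo, T_eq = T_star * sqrt(R_star / 2a). The stellar temperature, stellar radius and orbital distance below are assumed, plausible values for GJ 357 and its planet b, not figures from the article.

```python
# Equilibrium temperature for an airless, zero-albedo planet.
import math

T_star = 3500.0              # K, assumed stellar effective temperature
R_star = 0.34 * 6.957e8      # m, assumed stellar radius (~1/3 solar)
a = 0.035 * 1.496e11         # m, assumed orbital distance of GJ 357 b

T_eq = T_star * math.sqrt(R_star / (2 * a))
print(f"T_eq ~ {T_eq:.0f} K ({T_eq - 273.15:.0f} C)")
```

With these assumed inputs the result lands near the ~254 C quoted in the article; an atmosphere would raise the actual surface temperature above this.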

"We describe GJ 357 b as a 'hot Earth,'" explains co-author Enric Pallé, an astrophysicist at the IAC and Luque's doctoral supervisor. "Although it cannot host life, it is noteworthy as the third-nearest transiting exoplanet known to date and one of the best rocky planets we have for measuring the composition of any atmosphere it may possess."

But while researchers were looking at ground-based data to confirm the existence of the hot Earth, they uncovered two additional worlds. The outermost known planet, named GJ 357 d, is especially intriguing.

"GJ 357 d is located within the outer edge of its star's habitable zone, where it receives about the same amount of stellar energy from its star as Mars does from the Sun," said co-author Diana Kossakowski at the Max Planck Institute for Astronomy in Heidelberg, Germany. "If the planet has a dense atmosphere, which will take future studies to determine, it could trap enough heat to warm the planet and allow liquid water on its surface."

Without an atmosphere, it has an equilibrium temperature of -64 F (-53 C), which would make the planet seem more glacial than habitable. The planet weighs at least 6.1 times Earth's mass, and orbits the star every 55.7 days at a distance about 20% of Earth's distance from the Sun. The planet's size and composition are unknown, but a rocky world with this mass would range from about one to two times Earth's size.

Even though TESS monitored the star for about a month, Luque's team predicts any transit would have occurred outside the TESS observing window.

GJ 357 c, the middle planet, has a mass at least 3.4 times Earth's, orbits the star every 9.1 days at a distance a bit more than twice that of the transiting planet, and has an equilibrium temperature around 260 F (127 C). TESS did not observe transits from this planet, which suggests its orbit is slightly tilted -- perhaps by less than 1 degree -- relative to the hot Earth's orbit, so it never passes across the star from our perspective.

To confirm the presence of GJ 357 b and discover its neighbors, Luque and his colleagues turned to existing ground-based measurements of the star's radial velocity, or the speed of its motion along our line of sight. An orbiting planet produces a gravitational tug on its star, which results in a small reflex motion that astronomers can detect through tiny color changes in the starlight. Astronomers have searched for planets around bright stars using radial velocity data for decades, and they often make these lengthy, precise observations publicly available for use by other astronomers.
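The size of that reflex motion can be estimated with the standard radial-velocity semi-amplitude formula for a circular, edge-on orbit with the planet far lighter than its star: K = (2*pi*G/P)^(1/3) * m_planet / m_star^(2/3). The inputs below are taken from the article's figures for GJ 357 d (star about one-third of a solar mass, planet at least 6.1 Earth masses, 55.7-day period), so treat the result as an order-of-magnitude sketch.

```python
# Stellar reflex velocity induced by a GJ 357 d-like planet.
import math

G = 6.674e-11          # gravitational constant, SI units
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg

m_star = 0.34 * M_SUN          # assumed, ~1/3 solar mass
m_planet = 6.1 * M_EARTH       # minimum mass from the article
period = 55.7 * 86400          # orbital period in seconds

K = (2 * math.pi * G / period) ** (1 / 3) * m_planet / m_star ** (2 / 3)
print(f"semi-amplitude K ~ {K:.1f} m/s")
```

The answer is only a few meters per second, which is why detecting such planets demands the long, precise radial-velocity campaigns the article describes.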

Luque's team examined ground-based data stretching back to 1998 from the European Southern Observatory and the Las Campanas Observatory in Chile, the W.M. Keck Observatory in Hawaii, and the Calar Alto Observatory in Spain, among many others.

Read more at Science Daily

TESS discovers three new planets nearby, including temperate 'sub-Neptune'

This infographic illustrates key features of the TOI 270 system, located about 73 light-years away in the southern constellation Pictor. The three known planets were discovered by NASA’s Transiting Exoplanet Survey Satellite through periodic dips in starlight caused by each orbiting world. Insets show information about the planets, including their relative sizes, and how they compare to Earth. Temperatures given for TOI 270’s planets are equilibrium temperatures, calculated without the warming effects of any possible atmospheres.
NASA's Transiting Exoplanet Survey Satellite, or TESS, has discovered three new worlds that are among the smallest, nearest exoplanets known to date. The planets orbit a star just 73 light years away and include a small, rocky super-Earth and two sub-Neptunes -- planets about half the size of our own icy giant.

The sub-Neptune furthest out from the star appears to be within a "temperate" zone, meaning that the very top of the planet's atmosphere is within a temperature range that could support some forms of life. However, scientists say the planet's atmosphere is likely a thick, ultradense heat trap that renders the planet's surface too hot to host water or life.

Nevertheless, this new planetary system, which astronomers have dubbed TOI-270, is proving to have other curious qualities. For instance, all three planets appear to be relatively close in size. In contrast, our own solar system is populated with planetary extremes, from the small, rocky worlds of Mercury, Venus, Earth, and Mars, to the much more massive Jupiter and Saturn, and the more remote ice giants of Neptune and Uranus.

There's nothing in our solar system that resembles an intermediate planet, with a size and composition somewhere between those of Earth and Neptune. But TOI-270 appears to host two such planets: both sub-Neptunes are smaller than our own Neptune and not much larger than the rocky planet in the system.

Astronomers believe TOI-270's sub-Neptunes may be a "missing link" in planetary formation, as they are of an intermediate size and could help researchers determine whether small, rocky planets like Earth and more massive, icy worlds like Neptune follow the same formation path or evolve separately.

TOI-270 is an ideal system for answering such questions, because the star itself is nearby and therefore bright, and also unusually quiet. The star is an M-dwarf, a type of star that is normally extremely active, with frequent flares and solar storms. TOI-270 appears to be an older M-dwarf that has since quieted down, giving off a steady brightness, against which scientists can measure many properties of the orbiting planets, such as their mass and atmospheric composition.

"There are a lot of little pieces of the puzzle that we can solve with this system," says Maximilian Günther, a postdoc in MIT's Kavli Institute for Astrophysics and Space Research and lead author of a study published in Nature Astronomy that details the discovery. "You can really do all the things you want to do in exoplanet science, with this system."

A planetary pattern

Günther and his colleagues detected the three new planets after looking through measurements of stellar brightness taken by TESS. The MIT-developed satellite stares at patches of the sky for 27 days at a time, monitoring thousands of stars for possible transits -- characteristic dips in brightness that could signal a planet temporarily blocking the star's light as it passes in front of it.

The team isolated several such signals from a nearby star, located 73 light years away in the southern sky. They named the star TOI-270, for the 270th "TESS Object of Interest" identified to date. The researchers used ground-based instruments to follow up on the star's activity, and confirmed that the signals are the result of three orbiting exoplanets: planet b, a rocky super-Earth with a roughly three-day orbit; planet c, a sub-Neptune with a five-day orbit; and planet d, another sub-Neptune slightly further out, with an 11-day orbit.

Günther notes that the planets seem to line up in what astronomers refer to as a "resonant chain," meaning that the ratios of their orbital periods are close to ratios of small whole numbers -- in this case, 3:5 for the inner pair and 2:1 for the outer pair -- and that the planets are therefore in "resonance" with each other. Astronomers have discovered other small stars with similarly resonant planetary configurations. And in our own solar system, the moons of Jupiter also happen to line up in resonance with each other.
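The resonance arithmetic is easy to check. The periods below are the approximate published values for TOI-270 b, c and d (assumed here; the article quotes them only as roughly three-, five- and 11-day orbits).

```python
# Period ratios of the TOI-270 chain vs. the nearest small-integer ratios.
p_b, p_c, p_d = 3.36, 5.66, 11.38   # days, approximate published values

inner = p_c / p_b    # should sit near 5/3 ~ 1.667
outer = p_d / p_c    # should sit near 2/1 = 2.0
print(f"inner pair: {inner:.2f}  (5/3 = {5/3:.2f})")
print(f"outer pair: {outer:.2f}  (2/1 = 2.00)")
```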

"For TOI-270, these planets line up like pearls on a string," Günther says. "That's a very interesting thing, because it lets us study their dynamical behavior. And you can almost expect, if there are more planets, the next one would be somewhere further out, at another integer ratio."

"An exceptional laboratory"

TOI-270's discovery initially caused a stir of excitement within the TESS science team, as it seemed, in the first analysis, that planet d might lie in the star's habitable zone, a region that would be cool enough for the planet's surface to support water, and possibly life. But the researchers soon realized that the planet's atmosphere was probably extremely thick, and would therefore generate an intense greenhouse effect, causing the planet's surface to be too hot to be habitable.

But Günther says there is a good possibility that the system hosts other planets, further out from planet d, that might well lie within the habitable zone. Planet d, with an 11-day orbit, is about 10 million kilometers out from the star. Günther says that, given that the star is small and relatively cool -- about half as hot as the sun -- its habitable zone could potentially begin at around 15 million kilometers. But whether a planet exists within this zone, and whether it is habitable, depends on a host of other parameters, such as its size, mass, and atmospheric conditions.

Fortunately, the team writes in their paper that "the host star, TOI-270, is remarkably well-suited for future habitability searches, as it is particularly quiet." The researchers plan to focus other instruments, including the upcoming James Webb Space Telescope, on TOI-270, to pin down various properties of the three planets, as well as search for additional planets in the star's habitable zone.

Read more at Science Daily