Jun 4, 2022

Astronomy team finds evidence of galactic metal shrouded in dust

A thorough understanding of galaxy evolution depends in part on an accurate measurement of the abundance of metals in the interstellar medium -- the space between stars -- but dust can impede observations in optical wavelengths. An international team of astronomers at the University of California, Irvine, Oxford University in England, and other institutions uncovered evidence of heavier elements in local galaxies -- found to be deficient in earlier studies -- by analyzing infrared data gathered during a multiyear campaign.

For a paper published recently in Nature Astronomy, the researchers examined five galaxies that are dim in visible wavelengths but trillions of times more luminous than the sun in the infrared. Interactions between these galaxies and neighboring star systems cause gas to shift around and collapse, setting up conditions for prodigious star formation.

"Studying the gas content of these galaxies with optical instruments, astronomers were convinced that they were significantly metal-poor when compared with other galaxies of similar mass," said lead author Nima Chartab, UCI postdoctoral scholar in physics & astronomy. "But when we observed emission lines of these dusty galaxies in infrared wavelengths, we were afforded a clear view of them and found no significant metal deficiency."

To determine the abundance of gas-phase metals in the interstellar medium, the astronomers sought to acquire data on the ratio of two proxy elements, oxygen and nitrogen, because infrared emissions from these elements are less obscured by galactic dust.

"We are looking for evidence of baryon cycling in which stars process elements like hydrogen and helium to produce carbon, nitrogen and oxygen," said co-author Asantha Cooray, UCI professor of physics & astronomy. "The stars eventually go supernovae and blow up and then all of that gas in the outskirts of the stars gets turned into clouds that get thrown around. The material in them is loose and diffuse but eventually through gravitational perturbations caused by other stars moving around, the gas will start to clump and collapse, leading to the formation of new stars."

Observing this process in infrared wavelengths is a challenge for astronomers because water vapor in Earth's atmosphere blocks radiation in this part of the electromagnetic spectrum, making measurements from even the highest-altitude ground telescopes -- like those at the Keck Observatory in Hawaii -- insufficient.

Part of the dataset used by the team came from the now-retired Herschel Space Observatory, but Herschel was not equipped with a spectrometer capable of reading a specific emission line that the UCI-led team needed for its study. The researchers' solution was to take to the skies -- reaching more than 45,000 feet above sea level -- in the Stratospheric Observatory for Infrared Astronomy (SOFIA), NASA's Boeing 747 equipped with a 2.5-meter telescope.

"It took us nearly three years to collect all the data in using NASA's SOFIA observatory, because these flights don't last all night; they're more in the range of 45 minutes of observing time, so the study took a lot of flight planning and coordination," said Cooray.

By analyzing infrared emissions, the researchers were able to compare the metallicity of their target ultraluminous infrared galaxies with that of less dusty galaxies of similar mass and star formation rates. Chartab explained that these new data show that ultraluminous infrared galaxies are in line with the fundamental metallicity relation determined by stellar mass, metal abundance and star formation rate.
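
The fundamental metallicity relation (FMR) mentioned here is an empirical relation tying a galaxy's gas-phase metallicity to its stellar mass and star formation rate. As a rough illustration of what "in line with the FMR" means in practice, the sketch below evaluates one widely used parameterization (Mannucci et al. 2010); the calibration actually used in the Nature Astronomy paper may differ, and the example galaxy values are hypothetical.

```python
import math

def fmr_metallicity(stellar_mass_msun: float, sfr_msun_per_yr: float) -> float:
    """Gas-phase oxygen abundance, 12 + log(O/H), predicted by the
    fundamental metallicity relation (Mannucci et al. 2010 calibration).
    Illustrative only; infrared-based studies may adopt other calibrations."""
    m = math.log10(stellar_mass_msun) - 10.0  # offset log stellar mass
    s = math.log10(sfr_msun_per_yr)           # log star formation rate
    return (8.90 + 0.37 * m - 0.14 * s
            - 0.19 * m ** 2 + 0.12 * m * s - 0.054 * s ** 2)

# Hypothetical ultraluminous infrared galaxy: 10^11 solar masses, 200 Msun/yr
predicted = fmr_metallicity(1e11, 200.0)
print(f"FMR-predicted 12 + log(O/H) = {predicted:.2f}")
```

An observed infrared-based metallicity close to the value the relation predicts is what "no significant metal deficiency" relative to the FMR would look like for such a galaxy.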

The new data further show that the underabundance of metals derived from optical emission lines is likely due to "heavy dust obscuration associated with starburst," according to the paper.

"This study is one example where it was critical for us to use this infrared wavelength to get a full understanding of what's going on in some of these galaxies," said Cooray. "When the optical observations initially came out suggesting that these galaxies had low metals, theorists went and wrote papers, there were a lot of simulations trying to explain what was going on. People thought, 'Maybe they really are low-metal galaxies,' but we found that not to be the case. Having a full view of the universe across the whole electromagnetic spectrum is really crucial, I think."

Read more at Science Daily

A 50% reduction in emissions by 2030 can be achieved. Here's how

The United States has set an ambitious goal to reduce greenhouse gas (GHG) emissions by at least 50% by 2030. Are we on track to succeed?

A new study by a team of scientists and policy analysts from across the nation suggests that there are multiple pathways to achieve this goal -- but big commitments will need to be made, immediately.

"This study should give policy makers and other energy stakeholders some level of comfort, by showing that everybody in the field is pointing in the same direction. The case for clean energy is stronger than ever before and our study shows that the 2030 emission target can be achieved," said Nikit Abhyankar, one of the study's authors and a scientist in the Electricity Markets & Policy Department at Lawrence Berkeley National Laboratory (Berkeley Lab). He notes that the most urgent actions will be to double the amount of renewable capacity built each year and transition predominately to electric vehicles within the next decade or so.

"With the right policies and infrastructure, we can reduce our emissions, while saving American consumers billions of dollars and generating new employment," he said.

Reducing GHG emissions by 50% by 2030 would put the United States on a path to limit global warming to 1.5 degrees Celsius, the target scientists say is required to avoid the worst consequences of the climate crisis.

The study, published in Science, consolidates findings from six recently published techno-economic models that simulate the U.S. energy system operations in comprehensive detail. According to the authors, the separate models all agree on four major points:
 

  • The majority of the country's greenhouse gas emissions come from power generation and transportation, so to reduce overall emissions by 50%, the electricity grid needs to run on 80% clean energy (up from today's 40%), and the majority of vehicles sold by 2030 need to be electric. Other important sources of GHG emissions reduction include electrification of buildings and industries.
  • The primary barrier to increased alternative energy use will not be cost; it will be enacting new policies. A coordinated policy response between states and the federal government will be necessary to succeed.
  • Thanks to advances in wind, solar, and energy storage technologies, powering the electric grid with renewables will not be more expensive, and electric vehicles could save every household up to $1,000 per year in net benefits.
  • A clean-energy transition would reduce air pollution, prevent up to 200,000 premature deaths, and avoid up to $800 billion in environmental and health costs through 2050. Many of the health benefits will occur in communities of color and frontline communities that are disproportionately exposed to vehicle, power plant, and industrial pollution.


"Our study provides the first detailed roadmap for how the United States can reach its 50% greenhouse gas emissions-reduction target by 2030," said lead author John Bistline, program manager in the Energy Systems and Climate Analysis Group at the Electric Power Research Institute. "This will require tripling the pace of historic carbon reductions, an ambitious but achievable target if stakeholders collaborate across all sectors. By comparing results across six independent models, we provide greater confidence about the policies and technology deployment needed to achieve near-term climate goals, laying the groundwork for an affordable, reliable, and equitable net-zero future."

According to Abhyankar, who led the development of one of the six models, "By 2030, wind and solar, coupled with energy storage, can provide the bulk of the 80% clean electricity. The findings also show that generating the remaining 20% of grid power won't require the creation of new fossil fuel generators." He noted that existing gas plants, used infrequently and combined with energy storage, hydropower, and nuclear power, are sufficient to meet demand during periods of extraordinarily low renewable energy generation or exceptionally high electricity demand. "And if the right policies are in place, the coal and gas power plants in the country that currently provide the majority of the nation's electricity would recover their initial investment, thereby avoiding risk of cost under-recovery for investors."

"Since announcing the nation's emissions reduction pledge at the 2021 United Nations climate conference, the United States has taken steps in the right direction," said Abhyankar. "But a lot still needs to happen. What we are hoping is that this study will give some level of a blueprint of how it could be done."

Read more at Science Daily

Jun 3, 2022

NASA's Davinci mission to take the plunge through massive atmosphere of Venus

In a recently published paper, NASA scientists and engineers give new details about the agency's Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging (DAVINCI) mission, which will descend through the layered Venus atmosphere to the surface of the planet in mid-2031. DAVINCI is the first mission to study Venus using both spacecraft flybys and a descent probe.

DAVINCI, a flying analytical chemistry laboratory, will measure critical aspects of Venus' massive atmosphere-climate system for the first time, many of which have been measurement goals for Venus since the early 1980s. It will also provide the first descent imaging of the mountainous highlands of Venus while mapping their rock composition and surface relief at scales not possible from orbit. The mission supports measurements of undiscovered gases present in small amounts and the deepest atmosphere, including the key ratio of hydrogen isotopes -- components of water that help reveal the history of water, either as liquid water oceans or steam within the early atmosphere.

The mission's carrier, relay and imaging spacecraft (CRIS) has two onboard instruments that will study the planet's clouds and map its highland areas during flybys of Venus. It will also drop a small descent probe with five instruments that will provide a medley of new measurements at very high precision during its descent to the hellish Venus surface.

"This ensemble of chemistry, environmental, and descent imaging data will paint a picture of the layered Venus atmosphere and how it interacts with the surface in the mountains of Alpha Regio, which is twice the size of Texas," said Jim Garvin, lead author of the paper in the Planetary Science Journal and DAVINCI principal investigator from NASA's Goddard Space Flight Center in Greenbelt, Maryland. "These measurements will allow us to evaluate historical aspects of the atmosphere as well as detect special rock types at the surface such as granites while also looking for tell-tale landscape features that could tell us about erosion or other formational processes."

DAVINCI will make use of three Venus gravity assists, which save fuel by using the planet's gravity to change the speed and/or direction of the CRIS flight system. The first two gravity assists will set CRIS up for a Venus flyby to perform remote sensing in the ultraviolet and the near infrared light, acquiring over 60 gigabits of new data about the atmosphere and surface. The third Venus gravity assist will set up the spacecraft to release the probe for entry, descent, science, and touchdown, plus follow-on transmission to Earth.

The first flyby of Venus will be six and a half months after launch, and it will take two years to get the probe into position for entry into the atmosphere over Alpha Regio under ideal lighting at "high noon," with the goal of measuring the landscapes of Venus at scales ranging from 328 feet (100 meters) down to finer than one meter. Such scales enable lander-style geologic studies in the mountains of Venus without requiring a landing.

Once the CRIS system is about two days away from Venus, the probe flight system will be released along with the titanium, three-foot (one-meter) diameter probe safely encased inside. The probe will begin to interact with the Venus upper atmosphere at about 75 miles (120 kilometers) above the surface. It will commence science observations after jettisoning its heat shield around 42 miles (67 kilometers) above the surface. With the heat shield jettisoned, the probe's inlets will ingest atmospheric gas samples for detailed chemistry measurements of the sort that have been made on Mars with the Curiosity rover. During its hour-long descent to the surface, the probe will also acquire hundreds of images as soon as it emerges under the clouds at around 100,000 feet (30,500 meters) above the local surface.

"The probe will touch-down in the Alpha Regio mountains but is not required to operate once it lands, as all of the required science data will be taken before reaching the surface." said Stephanie Getty, deputy principal investigator from Goddard. "If we survive the touchdown at about 25 miles per hour (12 meters/second), we could have up to 17-18 minutes of operations on the surface under ideal conditions."

DAVINCI is tentatively scheduled to launch in June 2029 and enter the Venusian atmosphere in June 2031.

"No previous mission within the Venus atmosphere has measured the chemistry or environments at the detail that DAVINCI's probe can do," said Garvin. "Furthermore, no previous Venus mission has descended over the tesserae highlands of Venus, and none have conducted descent imaging of the Venus surface. DAVINCI will build on what Huygens probe did at Titan and improve on what previous in situ Venus missions have done, but with 21st century capabilities and sensors."

Read more at Science Daily

How plesiosaurs swam underwater

Plesiosaurs, which first appeared about 210 million years ago, adapted to life underwater in a unique way: over the course of evolution, their front and hind legs developed into four uniform, wing-like flippers. In her thesis supervised at Ruhr-Universität Bochum and the University of Bonn, Dr. Anna Krahl investigated how they used these to move through the water. Partly by using the finite element method, which is widely used in engineering, she was able to show that it was necessary to twist the flippers in order to travel forward. She was able to reconstruct the movement sequence using bones, models and reconstructions of the muscles.

Plesiosaurs belong to a group of saurians called Sauropterygia, or paddle lizards, that re-adapted to living in the oceans. They evolved in the late Triassic 210 million years ago, lived at the same time as the dinosaurs, and became extinct at the end of the Cretaceous period. Plesiosaurs are characterized by an often extremely elongated neck with a small head -- the elasmosaurs even have the longest neck of all vertebrates. But there were also large predatory forms with a rather short neck and huge skulls. In all plesiosaurs, the neck is attached to a teardrop-shaped, hydrodynamically well adapted body with a markedly shortened tail.

Researchers have puzzled for 120 years how plesiosaurs swam


The second feature that makes plesiosaurs so unusual is their four uniform wing-like flippers. "Having the front legs transformed into wing-like flippers is relatively common in evolution, for instance in sea turtles. Never again, however, did the hind legs evolve into an almost identical-looking airfoil-like wing," explains Anna Krahl, whose doctoral thesis was supervised by Professor P. Martin Sander (Bonn) and Professor Ulrich Witzel (Bochum). Sea turtles and penguins, for example, have webbed feet. For more than 120 years, researchers in vertebrate paleontology have puzzled over how plesiosaurs might have swum with these four wings. Did they row like freshwater turtles or ducks? Did they fly underwater like sea turtles and penguins? Or did they combine underwater flight and rowing like modern-day sea lions or the pig-nosed turtle? It is also unclear whether the front and rear flippers were flapped in unison, in opposition, or out of phase.

Anna Krahl has been studying the body structure of plesiosaurs for several years. She examined the bones of the shoulder and pelvic girdle, the front and hind flippers, and the shoulder joint surfaces of the plesiosaur Cryptoclidus eurymerus from the Middle Jurassic period (about 160 million years ago) on a complete skeleton displayed in the Goldfuß Museum of the University of Bonn. Plesiosaurs have stiffened elbow, knee, hand, and ankle joints, but functioning shoulder, hip, and finger joints. "Analysis comparing them to modern-day sea turtles, and based on what is known about their swimming process, indicated that plesiosaurs were probably not able to rotate their flippers as much as would be necessary for rowing," concludes Krahl, summarizing one of her preliminary papers. Rowing is primarily a back-and-forth motion that uses water resistance to move forward. The preferred direction of flipper movement in plesiosaurs, on the other hand, was up-and-down, as used by underwater fliers to generate propulsion.

The question remained how plesiosaurs could ultimately twist their flippers to place them in a hydrodynamically favorable position and produce lift without rotating the upper arm and thigh around the longitudinal axis. "This could work by means of twisting the flippers around their long axis," says Anna Krahl. "Other vertebrates, such as the leatherback turtle, have also been shown to use this movement to generate propulsion through lift." Twisting, for example, involves bending the first finger far downward and the last finger far upward. The remaining fingers bridge these extreme positions so that the flipper tip is almost vertical without requiring any real rotation in the shoulder or wrist.

A reconstruction of the muscles of the fore- and hind flippers for Cryptoclidus using reptiles alive today showed that plesiosaurs could actively enable such flipper twisting. In addition to classical models, the researchers also made computed tomography (CT) scans of the humerus and femur of Cryptoclidus and used them to create virtual 3D models. "These digital models were the basis for calculating the forces using a method we borrowed from engineering: the finite element method, or FE," explains Anna Krahl. All the muscles and their angles of attachment on the humerus and femur were virtually reproduced in an FE computer program that can simulate physiological functional loads, for example on construction components but also on prostheses. Based on muscle force assumptions from a similar study on sea turtles, the team was able to calculate and visualize the loading on each bone.
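
To illustrate the kind of calculation the finite element method performs, here is a deliberately minimal one-dimensional sketch: a bone idealized as an axially loaded bar, split into simple two-node elements, with a single muscle-like force applied at one end. The real analysis used full 3D models of the Cryptoclidus humerus and femur built from CT scans and many reconstructed muscle forces; every number below (length, cross-section, stiffness, force) is an assumption for demonstration only.

```python
import numpy as np

# Toy 1-D finite element sketch of an axially loaded bone (illustration only).
n_elem = 10                    # number of bar elements along the bone
length = 0.30                  # bone length in metres (assumed)
area = 6.0e-4                  # cross-sectional area in m^2 (assumed)
youngs_modulus = 15e9          # bone stiffness in Pa (approximate)
muscle_force = -1500.0         # axial muscle force in N (negative = compression)

le = length / n_elem
k = youngs_modulus * area / le              # stiffness of one bar element
K = np.zeros((n_elem + 1, n_elem + 1))      # global stiffness matrix
for e in range(n_elem):                     # assemble two-node bar elements
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(n_elem + 1)
F[-1] = muscle_force                        # load applied at the distal end

u = np.zeros(n_elem + 1)                    # fix the proximal end (node 0)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])   # solve K u = F for the free nodes

strain = np.diff(u) / le
stress = youngs_modulus * strain            # negative values mean compression
print("element stresses (MPa):", np.round(stress / 1e6, 2))
```

In the same spirit, the full 3D analyses reported in the thesis compare how much compressive, tensile, bending and torsional loading each bone carries under different assumed muscle activation patterns.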

Twisting of the flippers can be proven indirectly

During a movement cycle, the limb bones are loaded by compression, tension, bending and torsion. "The FE analyses showed that the humerus and femur in the flippers are functionally loaded mainly by compression and to a much lesser extent by tensile stress," Anna Krahl explains. "This means that the plesiosaur built its bones by using as little material as necessary." This natural state can only be maintained if the muscles that twist the flippers and the muscles that wrap around the bone are included. "We can therefore indirectly prove that plesiosaurs twisted their flippers in order to swim efficiently," Anna Krahl sums up.

Read more at Science Daily

New research shows long-term personality traits influence problem-solving in zebra finches

Personality is not unique to humans. New research published in the Royal Society Open Science journal demonstrates that zebra finches have personalities, and some traits are consistent over two years of the birds' lives.

In addition to showing stable personality, the zebra finches also innovated solutions to novel foraging tasks, and in some cases their success was related to personality type.

The article was written by Lisa Barrett and Jessica Marsh, of the University of Wyoming; Neeltje Boogert, of the University of Exeter; Christopher Templeton, of Pacific University Oregon; and Sarah Benson-Amram, of the University of British Columbia, formerly of UW and the leader of UW's Animal Behavior and Cognition Lab.

The authors of the paper tested 41 zebra finches at UW from 2016-18 to measure individual differences in the birds' behavior through time.

The authors measured a host of traits -- dominance, boldness, activity, risk-taking, aggressiveness and obstinacy -- in the short term (two weeks) and the long term (two years), using standardized personality tests that had been established in the literature.

To assess boldness, for example, the authors placed a novel object in an enclosure with a bird that had been feeding and measured how long it took the bird to resume feeding in the presence of the novel object. To assess dominance, the authors recorded interactions of groups of birds at a single feeder. The authors measured obstinacy -- or docility -- while handling the birds by counting the number of escape attempts the birds made beneath a net.

"We were interested to see if personality would remain stable or if individuals would be flexible in their behavior over time," says Barrett, the lead author. "Repeating our tests over two years with the exact same birds allowed us to answer that question."

Barrett and colleagues found that not all traits were equally consistent. Of the traits they measured, many traits were consistent over two weeks, but only boldness and obstinacy were consistent over two years.
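
One way to quantify that kind of consistency is to ask whether individuals keep their rank on a trait between test rounds. The sketch below does this with a simple rank correlation on made-up boldness scores; the published study's actual statistics (for example, repeatability estimates from mixed models) may differ, and all data here are invented.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical boldness scores: latency (seconds) to resume feeding near a
# novel object, for the same 41 birds measured in year 1 and again in year 2.
year1 = rng.gamma(shape=2.0, scale=60.0, size=41)
year2 = 0.7 * year1 + rng.normal(scale=30.0, size=41)  # correlated retest

rho, p_value = spearmanr(year1, year2)
print(f"rank correlation between years: rho = {rho:.2f}, p = {p_value:.3g}")
# A significant positive correlation means individuals keep their boldness
# rank over time -- the hallmark of a stable personality trait.
```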

Next, the researchers tested whether personality related to problem-solving success on three novel tasks previously used with zebra finches.

"Since individuals vary in their personality type and in their cognitive ability, we wanted to see if these two sources of variation were linked," says Marsh, who was an undergraduate at the time she worked on the study.

The authors found that problem-solving success related to boldness, dominance and obstinacy. For example, less dominant birds were more likely to solve two of the tasks compared to their more dominant counterparts. This result provides support for the "necessity drives innovation" hypothesis, which states that less dominant individuals -- who receive fewer resources due to competition with their flock mates -- may need to innovate new ways to access food.

"In this work, we leveraged a comprehensive suite of personality tests and multiple cognitive tasks, and we carried out our work over a longer period of time than traditional tests," Benson-Amram says. "This allowed us to uncover the importance of measuring multiple traits for understanding the link between personality and problem-solving."

Read more at Science Daily

When AI is the inventor, who gets the patent?

The day is coming -- some say has already arrived -- when new inventions that benefit society are dreamt up by artificial intelligence all on its own.

It's not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI -- do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents -- laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products -- a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler's appeal against IP Australia's rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn't be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

"Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person," she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

"Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?" asks A/Prof. George.

For obvious reasons


Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of inventions and patents is that the product or idea is novel, non-obvious and useful.

"There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field," Prof. Walsh says.

"Well, what might be obvious to an AI won't be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes."

Prof. Walsh says this isn't the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic -- Halicin -- was identified in 2019 using deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

"Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there's a mixture of human and machine coming into this discovery."

Prof. Walsh says in the case of DABUS, it's not entirely clear whether the system is truly responsible for the inventions.

"There's lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result," Prof. Walsh says.

"But it's certainly the case that without the system, you wouldn't have come up with the inventions."

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new 'sui generis' form of IP law -- which they've dubbed 'AI-IP' -- that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Read more at Science Daily

Jun 2, 2022

Research shows how Gulf of Mexico escaped ancient mass extinction

An ancient bout of global warming 56 million years ago that acidified oceans and wiped out marine life had a milder effect in the Gulf of Mexico, where life was sheltered by the basin's unique geology -- according to research by the University of Texas Institute for Geophysics (UTIG).

Published in the journal Marine and Petroleum Geology, the findings not only shed light on an ancient mass extinction, but could also help scientists determine how current climate change will affect marine life and aid in efforts to find deposits of oil and gas.

And although the Gulf of Mexico is very different today, UTIG geochemist Bob Cunningham, who led the research, said that valuable lessons can be drawn about climate change today from how the Gulf was impacted in the past.

"This event known as the Paleocene-Eocene Thermal Maximum or PETM is very important to understand because it's pointing towards a very powerful, albeit brief, injection of carbon into the atmosphere that's akin to what's happening now," he said.

Cunningham and his collaborators investigated the ancient period of global warming and its impact on marine life and chemistry by studying a group of mud, sand, and limestone deposits found across the Gulf.

They sifted through rock chips brought up during oil and gas drilling and found an abundance of microfossils from radiolarians -- a type of plankton -- that had surprisingly thrived in the Gulf during the ancient global warming. They concluded that a steady supply of river sediments and circulating ocean waters had helped radiolarians and other microorganisms survive even while Earth's warming climate became more hostile to life.

"In a lot of places, the ocean was absolutely uninhabitable for anything," said UTIG biostratigrapher Marcie Purkey Phillips. "But we just don't seem to see as severe an effect in the Gulf of Mexico as has been seen elsewhere."

The reasons for that go back to geologic forces reshaping North America at the time. About 20 million years before the ancient global warming, the rise of the Rocky Mountains had redirected rivers into the northwest Gulf of Mexico -- a tectonic shift known as the Laramide uplift -- sending much of the continent's rivers through what is now Texas and Louisiana into the Gulf's deeper waters.

When global warming hit and North America became hotter and wetter, the rain-filled rivers fire-hosed nutrients and sediments into the basin, providing plenty of nutrients for phytoplankton and other food sources for the radiolarians.

The findings also confirm that the Gulf of Mexico remained connected to the Atlantic Ocean and the salinity of its waters never reached extremes -- a question that until now had remained open. According to Phillips, the presence of radiolarians alone -- which only thrive in nutrient-rich water that's no saltier than seawater today -- confirmed that the Gulf's waters did not become too salty. Cunningham added that the organic content of sediments decreased farther from the coast, a sign that deep currents driven by the Atlantic Ocean were sweeping the basin floor.

The research accurately dates closely related geologic layers in the Wilcox Group (a set of rock layers that house an important petroleum system), a feat that can aid in efforts to find undiscovered oil and gas reserves in formations that are the same age. At the same time, the findings are important for researchers investigating the effects of today's global warming because they show how the water and ecology of the Gulf changed during a very similar period of climate change long ago.

The study compiled geologic samples from 36 industry wells dotted across the Gulf of Mexico, plus a handful of scientific drilling expeditions including the 2016 UT Austin-led investigation of the Chicxulub asteroid impact, which led to the extinction of non-avian dinosaurs.

For John Snedden, a study coauthor and senior research scientist at UTIG, the study is a perfect example of industry data being used to address important scientific questions.

"The Gulf of Mexico is a tremendous natural archive of geologic history that's also very closely surveyed," he said. "We've used this very robust database to examine one of the highest thermal events in the geologic record, and I think it's given us a very nuanced view of a very important time in Earth's history."

Read more at Science Daily

A 3400-year-old city emerges from the Tigris River

A team of German and Kurdish archaeologists have uncovered a 3400-year-old Mittani Empire-era city once located on the Tigris River. The settlement emerged from the waters of the Mosul reservoir early this year as water levels fell rapidly due to extreme drought in Iraq. The extensive city with a palace and several large buildings could be ancient Zakhiku -- believed to have been an important center in the Mittani Empire (ca. 1550-1350 BC).

Bronze Age city resurfaced due to drought

Iraq is one of the countries in the world most affected by climate change. The south of the country in particular has been suffering from extreme drought for months. To prevent crops from drying out, large amounts of water have been drawn down from the Mosul reservoir -- Iraq's most important water storage -- since December. This led to the reappearance of a Bronze Age city that had been submerged decades ago without any prior archaeological investigations. It is located at Kemune in the Kurdistan Region of Iraq.

This unforeseen event put archaeologists under sudden pressure to excavate and document at least parts of this large, important city as quickly as possible before it was resubmerged. The Kurdish archaeologist Dr. Hasan Ahmed Qasim, chairman of the Kurdistan Archaeology Organization, and the German archaeologists Jun.-Prof. Dr. Ivana Puljiz, University of Freiburg, and Prof. Dr. Peter Pfälzner, University of Tübingen, spontaneously decided to undertake joint rescue excavations at Kemune. These took place in January and February 2022 in collaboration with the Directorate of Antiquities and Heritage in Duhok (Kurdistan Region of Iraq).

Fritz Thyssen Foundation supported excavations

A team for the rescue excavations was put together within days. Funding for the work was obtained at short notice from the Fritz Thyssen Foundation through the University of Freiburg. The German-Kurdish archaeological team was under immense time pressure because it was not clear when the water in the reservoir would rise again.

Massive fortification, multi-storey storage building, industrial complex

Within a short time, the researchers succeeded in largely mapping the city. In addition to a palace, which had already been documented during a short campaign in 2018, several other large buildings were uncovered -- a massive fortification with wall and towers, a monumental, multi-storey storage building and an industrial complex. The extensive urban complex dates to the time of the Empire of Mittani (approx. 1550-1350 BC), which controlled large parts of northern Mesopotamia and Syria.

"The huge magazine building is of particular importance because enormous quantities of goods must have been stored in it, probably brought from all over the region," says Puljiz. Qasim concludes, "The excavation results show that the site was an important center in the Mittani Empire."

The research team was stunned by the well-preserved state of the walls -- sometimes to a height of several meters -- despite the fact that the walls are made of sun-dried mud bricks and were under water for more than 40 years. This good preservation is due to the fact that the city was destroyed in an earthquake around 1350 BC, during which the collapsing upper parts of the walls buried the buildings.

Ceramic vessels with over 100 cuneiform tablets

Of particular interest is the discovery of five ceramic vessels that contained an archive of over 100 cuneiform tablets. They date to the Middle Assyrian period, shortly after the earthquake disaster struck the city. Some clay tablets, which may be letters, are even still in their clay envelopes. The researchers hope this discovery will provide important information about the end of the Mittani-period city and the beginning of Assyrian rule in the region. "It is close to a miracle that cuneiform tablets made of unfired clay survived so many decades under water," Pfälzner says.

Read more at Science Daily

Time crystals 'impossible' but obey quantum physics

Scientists have created the first "time-crystal" two-body system in an experiment that seems to bend the laws of physics.

It comes after the same team recently witnessed the first interaction of the new phase of matter.

Time crystals were long believed to be impossible because they are made from atoms in never-ending motion. The discovery, published in Nature Communications, shows not only that time crystals can be created, but that they have the potential to be turned into useful devices.

Time crystals are different from standard crystals -- like metals or rocks -- which are composed of atoms arranged in a regularly repeating pattern in space.

First theorised in 2012 by Nobel Laureate Frank Wilczek and identified in 2016, time crystals exhibit the bizarre property of being in constant, repeating motion in time despite no external input. Their atoms are constantly oscillating, spinning, or moving first in one direction, and then the other.

EPSRC Fellow Dr Samuli Autti, lead author from Lancaster University's Department of Physics, explained: "Everybody knows that perpetual motion machines are impossible. However, in quantum physics perpetual motion is okay as long as we keep our eyes closed. By sneaking through this crack we can make time crystals."

"It turns out putting two of them together works beautifully, even if time crystals should not exist in the first place. And we already know they also exist at room temperature."

A "two-level system" is a basic building block of a quantum computer. Time crystals could be used to build quantum devices that work at room temperature.

An international team of researchers from Lancaster University, Royal Holloway London, Landau Institute, and Aalto University in Helsinki observed time crystals by using Helium-3, a rare isotope of helium with one missing neutron. The experiment was carried out at Aalto University.

Read more at Science Daily

Study examines why the memory of fear is seared into our brains

Experiencing a frightening event is likely something you'll never forget. But why does it stay with you when other kinds of occurrences become increasingly difficult to recall with the passage of time?

A team of neuroscientists from the Tulane University School of Science and Engineering and Tufts University School of Medicine has been studying the formation of fear memories in the emotional hub of the brain -- the amygdala -- and believes it has identified a mechanism.

In a nutshell, the researchers found that the stress neurotransmitter norepinephrine, also known as noradrenaline, facilitates fear processing in the brain by stimulating a certain population of inhibitory neurons in the amygdala to generate a repetitive bursting pattern of electrical discharges. This bursting pattern of electrical activity changes the frequency of brain wave oscillation in the amygdala from a resting state to an aroused state that promotes the formation of fear memories.

Published recently in Nature Communications, the research was led by Tulane cell and molecular biology professor Jeffrey Tasker, the Catherine and Hunter Pierson Chair in Neuroscience, and his PhD student Xin Fu.

Tasker used the example of an armed robbery. "If you are held up at gunpoint, your brain secretes a bunch of the stress neurotransmitter norepinephrine, akin to an adrenaline rush," he said.

"This changes the electrical discharge pattern in specific circuits in your emotional brain, centered in the amygdala, which in turn transitions the brain to a state of heightened arousal that facilitates memory formation, fear memory, since it's scary. This is the same process, we think, that goes awry in PTSD and makes it so you cannot forget traumatic experiences."

Read more at Science Daily

Jun 1, 2022

Spaceflight: Microgravity analog culture profoundly affects microbial infection process in 3-D human tissue models

Infectious microbes have evolved sophisticated means to invade host cells, outwit the body's defenses and cause disease. While researchers have tried to puzzle out the complicated interactions between microorganisms and the host cells they infect, one facet of the disease process has often been overlooked -- the physical forces that impact host-pathogen interactions and disease outcomes.

In a new study, corresponding authors Cheryl Nickerson, Jennifer Barrila and their colleagues demonstrate that under low fluid shear force conditions that simulate those found in microgravity culture during spaceflight, the foodborne pathogen Salmonella infects 3-D models of human intestinal tissue at much higher levels, and induces unique alterations in gene expression.

This study advances previous work by the same team showing that physical forces of fluid shear acting on both the pathogen and host can transform the landscape of infection.

Understanding this subtle interplay of host and pathogen during infection is critical to ensuring astronaut health, particularly on extended space missions. Such research also sheds new light on the still largely mysterious processes of infection on earth, as low fluid shear forces are also found in certain tissues in our bodies that pathogens infect, including the intestinal tract.

While the team has extensively characterized the interaction between conventionally grown shake flask cultures of Salmonella Typhimurium and 3-D intestinal models, this study marks the first time that S. Typhimurium has been grown under the low fluid shear conditions of simulated microgravity and then used to infect a 3-D model of human intestinal epithelium co-cultured with macrophage immune cells, key cell types targeted by Salmonella during infection.

The 3-D co-culture intestinal model used in this study more faithfully replicates the structure and behavior of the same tissue within the human body and is more predictive of responses to infection, as compared with conventional laboratory cell cultures.

Results showed dramatic changes in gene expression of 3-D intestinal cells following infection with both wild-type and mutant S. Typhimurium strains grown under simulated microgravity conditions. Many of these changes occurred in genes known to be intimately involved with S. Typhimurium's prodigious ability to invade and colonize host cells and escape surveillance and destruction by the host's immune system.

"A major challenge limiting human exploration of space is the lack of a comprehensive understanding of the impact of space travel on crew health," Nickerson says. "This challenge will negatively impact both deep space exploration by professional astronauts, as well as civilians participating in the rapidly expanding commercial space market in low Earth orbit. Since microbes accompany humans wherever they travel and are essential for controlling the balance between health and disease, understanding the relationship between spaceflight, immune cell function, and microorganisms will be essential to understand infectious disease risk for humans."

Nickerson, who co-directed the new study with Jennifer Barrila, is a researcher in the Biodesign Center for Fundamental and Applied Microbiomics and is also a professor with ASU's School of Life Sciences. The research appears in the current issue of the journal Frontiers in Cellular and Infection Microbiology.

Life-altering force

Life on earth has diversified into an almost incomprehensibly vast array of forms, evolving under wildly dissimilar environmental conditions. Yet one parameter has remained constant. Throughout the 3.7-billion-year history of life on earth, all living organisms evolved under, and respond to, the pull of Earth's gravity.

For more than 20 years, Nickerson has been a pioneer in exploring the effects of the reduced microgravity environment of spaceflight on a range of pathogenic microbes and the impact on interactions with human cells and animals they infect. She and her colleagues have doggedly pursued this research in both land-based and spaceflight settings, the results of which helped lay the foundation for the rapidly growing research field, mechanobiology of infectious disease, the study of how physical forces impact infection and disease outcomes.

Among their important findings is that the low fluid shear conditions associated with the reduced gravity environment of spaceflight and spaceflight analog culture are similar to those encountered by pathogens inside the infected host, and that these conditions can induce unique changes in the ability of pathogenic microbes like Salmonella to aggressively infect host cells and exacerbate disease, a property known as virulence.

The infectious agent explored in the new study, Salmonella Typhimurium, is a bacterial pathogen responsible for gastrointestinal disease in humans and animals. Salmonella is the leading cause of death from food-borne illness in the United States. According to the CDC, Salmonella bacteria cause about 1.35 million infections, 26,500 hospitalizations, and 420 deaths in the United States each year. Foods contaminated by the bacteria are the primary source for most of these illnesses.

Salmonella infection typically causes diarrhea, fever, and stomach cramps, beginning 6 hours to 6 days after infection. Illness from the disease usually lasts 4 to 7 days. In severe cases, hospitalization may be required.

'Shear' probability?

Cells in mammalian organisms, including humans, as well as the bacterial cells that infect them, are exposed to extracellular fluid flowing over their outer surfaces. Just as a gentle downstream current will affect the pebbles in the underlying streambed differently than a raging torrent, so the force of fluid gliding over cell surfaces can cause changes to affected cells. This liquid abrasion of cell surfaces is known as fluid shear.

Since spaceflight experiments are rare and access to the space research platform is currently limited, researchers often simulate the low fluid shear conditions that microbes encounter during culture in spaceflight by growing cells in liquid growth media within a device known as a rotating wall vessel bioreactor or RWV. As the cylindrical reactor rotates, cells are maintained in suspension, gently and continuously tumbling in their surrounding culture medium. This process mimics the low fluid shear conditions of microgravity that cells experience during culture in spaceflight.

The team has also shown that this fluid shear level is relevant to conditions that microbial cells encounter in the human intestine and other tissues during infection, triggering changes in gene expression that can help some pathogens better colonize host cells and evade the immune system's efforts to destroy them.

Portrait of an intruder

The study found significant changes in both gene expression and ability to infect 3-D intestinal models by Salmonella bacteria cultured in the RWV bioreactor. These experiments involved two S. Typhimurium strains, one unaltered or wild type strain and one mutant strain.

The mutant strain was otherwise identical to the wild type but lacked an important protein known as Hfq, a major stress response regulator in Salmonella. In earlier research, Nickerson and her team discovered that Hfq acts as a master regulator of Salmonella's infection process in both spaceflight and spaceflight analog culture. They later discovered additional pathogens that also use Hfq to regulate their responses to these same conditions.

Unexpectedly, in the current study, the hfq mutant strain was still able to attach, invade into, and survive within 3-D tissue models at levels comparable to the wild type strain. In agreement with this finding, many genes responsible for Salmonella's ability to colonize human cells, including those associated with cell adherence, motility, and invasion were still activated in the mutant strain under simulated microgravity conditions, despite the removal of Hfq.

From the host perspective, the 3-D intestinal co-culture model responded to Salmonella infection by upregulating genes involved in inflammation, tissue remodeling, and wound healing at higher levels when the bacteria were grown under simulated microgravity conditions prior to use in infection studies. This was observed for both wild type and hfq mutant strains of the pathogen.

Data from this new spaceflight analog study reinforces previous findings from the team's 2006, 2008 and 2010 Space Shuttle experiments. In particular, the 2010 flight experiment conducted aboard Space Shuttle Discovery, called STL-IMMUNE, used the same wild type strain of S. Typhimurium to infect a 3-D model of human intestinal tissue made from the same epithelial cells used in the new study.

Several commonalities were observed between host cell responses to infection in the new spaceflight analog study and those previously reported when infections took place in true spaceflight during the STL-IMMUNE experiment. These results further reinforce the RWV as a predictive ground-based spaceflight analogue culture system that mimics key aspects of microbial responses to true spaceflight culture.

"During STL-IMMUNE, we discovered that infection of a human 3-D intestinal epithelial model by Salmonella during spaceflight induced key transcriptional and proteomic biosignatures that were consistent with enhanced infection by the pathogen," Barrila says. "However, due to the technical challenges of performing in-flight infections, we could not quantify whether the bacteria were actually attaching and invading into the tissue at higher levels. The use of the RWV bioreactor as a spaceflight analog culture system in our current study has been a powerful tool which allowed us to explore this experimental question at a deeper level."

Read more at Science Daily

Scaling new heights with new research showing how plants can grow at altitude

A new study has found that plant species are adapted to the altitude where they grow by 'sensing' the oxygen levels that surround them.

Altitude is an important part of plant ecology, with at least 30% of plant species diversity contained in mountains, and climate change is driving the retreat of alpine species and some crops to higher altitudes.

Research led by scientists at the University of Nottingham has identified a mechanism through which plants can sense atmospheric oxygen levels (that decrease with altitude) that will help to understand how plants live at high altitude. The work was carried out in collaboration with scientists in Spain and Ecuador and was funded by the Leverhulme Trust. Their findings have been published today in Nature.

Researchers analysed plants growing at low and high-altitude locations. The team, working in Nottingham, Ecuador and Spain was able to identify how oxygen-sensing controls the pathway of chlorophyll synthesis, permitting plants to match the levels of a key toxic chemical to surrounding oxygen levels.

Climate change is leading to the displacement of wild species and crops (for example, coffee) to higher altitudes, and this research offers new insights into the underlying genetic mechanisms controlling their ability to survive at different altitudes. This new understanding of the genetic changes plants go through at altitude could lead to approaches to help plant breeders enhance the capacity of crops to grow at higher altitudes.

The research was led by Professor Michael Holdsworth from the University of Nottingham in collaboration with Professor Karina Proaño at ESPE University in Sangolquí, Ecuador and Professor Carlos Alonso Blanco from the Spanish National Centre for Biotechnology CSIC.

Professor Holdsworth commented: "Altitude is a key component of ecology, with different altitudes subjecting plants to changing environments, some components of which are fixed by altitude and others that are not. For life at high altitude, it was previously considered that plants need to adapt to many variables, including high UV light and lower temperatures usually present at high altitude, but this study is the first time that perception of atmospheric oxygen levels has been shown to be a key determinant of altitude adaptation in plants."

He continues: "Exploring this novel finding allowed us to show that atmospheric oxygen level is the key determinant of altitude perception. We define the molecular pathway through which oxygen-sensing results in an adapted phenotype and we find that distinct species of flowering plants are adapted to absolute altitude through conserved oxygen-sensing control of chlorophyll synthesis and hypoxia gene expression. Showing that this mechanism works in diverse species provides a new paradigm for plant ecology."

Read more at Science Daily

Study suggests that most of our evolutionary trees could be wrong

New research led by scientists at the Milner Centre for Evolution at the University of Bath suggests that determining evolutionary trees of organisms by comparing anatomy rather than gene sequences is misleading. The study, published in Communications Biology, shows that we often need to overturn centuries of scholarly work that classified living things according to how they look.

Since Darwin and his contemporaries in the 19th Century, biologists have been trying to reconstruct the "family trees" of animals by carefully examining differences in their anatomy and structure (morphology).

However, with the development of rapid genetic sequencing techniques, biologists are now able to use genetic (molecular) data to help piece together evolutionary relationships for species very quickly and cheaply, often proving that organisms we once thought were closely related actually belong in completely different branches of the tree.

For the first time, scientists at Bath compared evolutionary trees based on morphology with those based on molecular data, and mapped them according to geographical location.

They found that the animals grouped together by molecular trees lived more closely together geographically than the animals grouped using the morphological trees.
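
Conceptually, the comparison works by asking which grouping of species keeps geographic neighbours together. The sketch below computes the mean within-group geographic distance for two alternative groupings; the study's real analysis is statistically far more sophisticated, and the taxa, groupings and coordinates here are invented purely for illustration.

```python
import itertools
import numpy as np

# Hypothetical (latitude, longitude) positions for five taxa
coords = {
    "aardvark": (0.0, 20.0), "elephant": (-2.0, 25.0), "manatee": (5.0, 30.0),
    "elephant_shrew": (10.0, 22.0), "armadillo": (-30.0, -60.0),
}

# Two alternative groupings of the same taxa (both invented)
molecular_groups = [["aardvark", "elephant", "manatee", "elephant_shrew"],
                    ["armadillo"]]
morphological_groups = [["aardvark", "armadillo", "elephant_shrew"],
                        ["elephant", "manatee"]]

def mean_within_group_distance(groups):
    """Average pairwise distance (plain Euclidean in degrees, for simplicity)
    between members of the same group."""
    dists = [np.hypot(coords[a][0] - coords[b][0], coords[a][1] - coords[b][1])
             for group in groups
             for a, b in itertools.combinations(group, 2)]
    return float(np.mean(dists))

print("molecular grouping    :", round(mean_within_group_distance(molecular_groups), 1))
print("morphological grouping:", round(mean_within_group_distance(morphological_groups), 1))
# A smaller value for the molecular grouping mirrors the paper's finding that
# molecular trees fit the species' geographic distribution better.
```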

Matthew Wills, Professor of Evolutionary Paleobiology at the Milner Centre for Evolution at the University of Bath, said: "It turns out that we've got lots of our evolutionary trees wrong.

"For over a hundred years, we've been classifying organisms according to how they look and are put together anatomically, but molecular data often tells us a rather different story.

"Our study proves statistically that if you build an evolutionary tree of animals based on their molecular data, it often fits much better with their geographical distribution.

"Where things live -- their biogeography -- is an important source of evolutionary evidence that was familiar to Darwin and his contemporaries.

"For example, tiny elephant shrews, aardvarks, elephants, golden moles and swimming manatees have all come from the same big branch of mammal evolution -- despite the fact that they look completely different from one another (and live in very different ways).

"Molecular trees have put them all together in a group called Afrotheria, so-called because they all come from the African continent, so the group matches the biogeography."

The study found that convergent evolution -- when a characteristic evolves separately in two genetically unrelated groups of organisms -- is much more common than biologists previously thought.

Professor Wills said: "We already have lots of famous examples of convergent evolution, such as flight evolving separately in birds, bats and insects, or complex camera eyes evolving separately in squid and humans.

"But now with molecular data, we can see that convergent evolution happens all the time -- things we thought were closely related often turn out to be far apart on the tree of life.

"People who make a living as lookalikes aren't usually related to the celebrity they're impersonating, and individuals within a family don't always look similar -- it's the same with evolutionary trees too.

"It proves that evolution just keeps on re-inventing things, coming up with a similar solution each time the problem is encountered in a different branch of the evolutionary tree.

"It means that convergent evolution has been fooling us -- even the cleverest evolutionary biologists and anatomists -- for over 100 years!"

Dr Jack Oyston, Research Associate and first author of the paper, said: "The idea that biogeography can reflect evolutionary history was a large part of what prompted Darwin to develop his theory of evolution through natural selection, so it's pretty surprising that it hadn't really been considered directly as a way of testing the accuracy of evolutionary trees in this way before now.

"What's most exciting is that we find strong statistical proof of molecular trees fitting better not just in groups like Afrotheria, but across the tree of life in birds, reptiles, insects and plants too.

Read more at Science Daily

About 3 grams a day of omega-3 fatty acids may lower blood pressure, more research needed

About 3 grams daily of omega-3 fatty acids, consumed in foods or supplements, appears to be the optimal daily dose to help lower blood pressure, according to a research review published today in the Journal of the American Heart Association, an open access, peer-reviewed journal of the American Heart Association.

The omega-3 fatty acids docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) are typically found in fatty fish, such as salmon, tuna, sardines, trout, herring and oysters. Some people also take combined DHA and EPA in supplements. While some studies suggest that consumption of omega-3 fatty acids may lower blood pressure, the optimal dosage needed to lower blood pressure has not been clear. The National Institutes of Health has established an adequate intake of omega-3 fatty acids for healthy people at 1.1-1.6 grams daily, depending on age and sex.

"According to our research, the average adult may have a modest blood pressure reduction from consuming about 3 grams a day of these fatty acids," said study author Xinzhi Li, M.D., Ph.D., assistant professor and program director of the School of Pharmacy at Macau University of Science and Technology in Macau, China.

Researchers analyzed the results of 71 clinical trials from around the world published from 1987 to 2020. The studies examined the relationship between blood pressure and the omega-3 fatty acids DHA and EPA (either individually or combined) in people aged 18 and older with or without high blood pressure or cholesterol disorders. There were nearly 5,000 participants combined, ranging in age from 22 to 86 years. Participants took dietary and/or prescription supplement sources of fatty acids for an average of 10 weeks.

The analysis found:

  • Compared to adults who did not consume EPA and DHA, those who consumed between 2 and 3 grams daily of combined DHA and EPA omega-3 fatty acids (in supplements, food or both) reduced their systolic (top number) and diastolic (bottom number) blood pressure by an average of about 2 mm Hg.
  • Consuming more than 3 grams of omega-3 fatty acids daily may have added blood pressure-lowering benefit for adults with high blood pressure or high blood lipids:
      • At 3 grams a day of omega-3s, systolic blood pressure (SBP) decreased by an average of 4.5 mm Hg for those with hypertension, and by about 2 mm Hg on average for those without.
      • At 5 grams a day of omega-3s, SBP declined by an average of nearly 4 mm Hg for those with hypertension and by less than 1 mm Hg on average for those without.
      • Similar differences were seen in people with high blood lipids and among those older than age 45.

About 4-5 ounces of Atlantic salmon provide 3 grams of omega-3 fatty acids. A typical fish oil supplement contains about 300 mg of omega-3s per pill, but doses vary widely.
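
As a back-of-the-envelope illustration of those figures (not dietary guidance), the sketch below tallies a day's combined EPA and DHA intake from the approximate amounts quoted above; the per-serving and per-capsule values are the article's rough numbers, and the example quantities are made up.

```python
# Rough tally of daily EPA+DHA intake using the approximate figures quoted above
# (not dietary advice): ~3 grams of omega-3s per 4-5 ounce serving of Atlantic
# salmon, and ~300 mg per typical fish-oil capsule (actual doses vary widely).
OMEGA3_PER_SALMON_SERVING_G = 3.0  # per 4-5 oz serving, per the article
OMEGA3_PER_CAPSULE_G = 0.3         # "about 300 mg of omega-3s per pill"

def daily_omega3_grams(salmon_servings, capsules):
    """Estimated grams of combined EPA and DHA consumed in a day."""
    return salmon_servings * OMEGA3_PER_SALMON_SERVING_G + capsules * OMEGA3_PER_CAPSULE_G

# How many typical capsules would it take to approach the ~3 g/day the review
# identifies as optimal, with no fatty fish in the diet?
for capsules in (2, 5, 10):
    print(f"{capsules} capsules/day ~ {daily_omega3_grams(0, capsules):.1f} g of omega-3s")
```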

"Most of the studies reported on fish oil supplements rather than on EPA and DHA omega-3's consumed in food, which suggests supplements may be an alternative for those who cannot eat fatty fish such as salmon regularly," Li said. "Algae supplements with EPA and DHA fatty acids are also an option for people who do not consume fish or other animal products."

The U.S. Food and Drug Administration (FDA) announced in June 2019 that it did not object to the use of certain health claims that consuming EPA and DHA omega-3 fatty acids in food or dietary supplements may reduce the risk of hypertension and coronary heart disease. However, they noted that the evidence was inconclusive and highly inconsistent.

"Our study supports the FDA guidance that EPA and DHA omega-3 fatty acids may reduce the risk of coronary heart disease by lowering high blood pressure, especially among people already diagnosed with hypertension," he said. "However, while our study may add a layer of credible evidence, it does not meet the threshold to make an authorized health claim for omega-3 fatty acids in compliance with FDA regulations."

Read more at Science Daily

May 31, 2022

Gemini North telescope helps explain why Uranus and Neptune are different colors

Astronomers may now understand why the similar planets Uranus and Neptune are different colors. Using observations from the Gemini North telescope, the NASA Infrared Telescope Facility, and the Hubble Space Telescope, researchers have developed a single atmospheric model that matches observations of both planets. The model reveals that excess haze on Uranus builds up in the planet's stagnant, sluggish atmosphere and makes it appear a lighter tone than Neptune.

Neptune and Uranus have much in common -- they have similar masses, sizes, and atmospheric compositions -- yet their appearances are notably different. At visible wavelengths Neptune has a distinctly bluer color whereas Uranus is a pale shade of cyan. Astronomers now have an explanation for why the two planets are different colors.

New research suggests that a layer of concentrated haze that exists on both planets is thicker on Uranus than a similar layer on Neptune and 'whitens' Uranus's appearance more than Neptune's. If there were no haze in the atmospheres of Neptune and Uranus, both would appear almost equally blue.

This conclusion comes from a model that an international team led by Patrick Irwin, Professor of Planetary Physics at Oxford University, developed to describe aerosol layers in the atmospheres of Neptune and Uranus. Previous investigations of these planets' upper atmospheres had focused on the appearance of the atmosphere at only specific wavelengths. However, this new model, consisting of multiple atmospheric layers, matches observations from both planets across a wide range of wavelengths. The new model also includes haze particles within deeper layers that had previously been thought to contain only clouds of methane and hydrogen sulfide ices.

"This is the first model to simultaneously fit observations of reflected sunlight from ultraviolet to near-infrared wavelengths," explained Irwin, who is the lead author of a paper presenting this result in the Journal of Geophysical Research: Planets. "It's also the first to explain the difference in visible color between Uranus and Neptune."

The team's model consists of three layers of aerosols at different heights. The key layer that affects the colors is the middle layer, which is a layer of haze particles (referred to in the paper as the Aerosol-2 layer) that is thicker on Uranus than on Neptune. The team suspects that, on both planets, methane ice condenses onto the particles in this layer, pulling the particles deeper into the atmosphere in a shower of methane snow. Because Neptune has a more active, turbulent atmosphere than Uranus does, the team believes Neptune's atmosphere is more efficient at churning up methane particles into the haze layer and producing this snow. This removes more of the haze and keeps Neptune's haze layer thinner than it is on Uranus, meaning the blue color of Neptune looks stronger.
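
A toy sketch can convey the intuition (this is not the team's radiative-transfer model): treat the observed color as a mix of a blue deep-atmosphere component, where methane absorbs red light, and a nearly neutral haze component; a thicker haze layer washes the blue out toward pale cyan. All values below are invented for illustration.

```python
# Toy illustration (not the team's radiative-transfer model): reflected color as a
# linear mix of a "deep atmosphere" component, blue because methane absorbs red
# light, and a spectrally flat, whitish haze component. A larger haze fraction pulls
# the result toward pale cyan (Uranus-like); a thinner haze leaves a deeper blue
# (Neptune-like). All numbers are invented for illustration.
DEEP_RGB = (0.10, 0.35, 0.85)  # hypothetical blue of the haze-free atmosphere
HAZE_RGB = (0.85, 0.88, 0.90)  # hypothetical nearly neutral scattering haze

def apparent_color(haze_fraction):
    """Mix haze and deep-atmosphere reflectance channel by channel."""
    return tuple(round(haze_fraction * h + (1 - haze_fraction) * d, 2)
                 for h, d in zip(HAZE_RGB, DEEP_RGB))

print("thin haze  (Neptune-like):", apparent_color(0.25))
print("thick haze (Uranus-like): ", apparent_color(0.55))
```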

"We hoped that developing this model would help us understand clouds and hazes in the ice giant atmospheres," commented Mike Wong, an astronomer at the University of California, Berkeley, and a member of the team behind this result. "Explaining the difference in color between Uranus and Neptune was an unexpected bonus!"

To create this model, Irwin's team analyzed a set of observations of the planets encompassing ultraviolet, visible, and near-infrared wavelengths (from 0.3 to 2.5 micrometers) taken with the Near-Infrared Integral Field Spectrometer (NIFS) on the Gemini North telescope near the summit of Maunakea in Hawai'i -- which is part of the international Gemini Observatory, a Program of NSF's NOIRLab -- as well as archival data from the NASA Infrared Telescope Facility, also located in Hawai'i, and the NASA/ESA Hubble Space Telescope.

The NIFS instrument on Gemini North was particularly important to this result as it is able to provide spectra -- measurements of how bright an object is at different wavelengths -- for every point in its field of view. This provided the team with detailed measurements of how reflective both planets' atmospheres are, both across the full disk of the planet and across a range of near-infrared wavelengths.

"The Gemini observatories continue to deliver new insights into the nature of our planetary neighbors," said Martin Still, Gemini Program Officer at the National Science Foundation. "In this experiment, Gemini North provided a component within a suite of ground- and space-based facilities critical to the detection and characterization of atmospheric hazes."

Read more at Science Daily

Palms at the poles: Fossil plants reveal lush southern hemisphere forests in ancient hothouse climate

For decades, paleobotanist David Greenwood has collected fossil plants from Australia -- some so well preserved it's hard to believe they're millions of years old. These fossils hold details about the ancient world in which they thrived, and Greenwood and a team of researchers including climate modeler David Hutchinson, from the University of New South Wales, and UConn Department of Geosciences paleobotanist Tammo Reichgelt, have begun the process of piecing together the evidence to see what more they could learn from the collection. Their findings are published in Paleoceanography & Paleoclimatology.

The fossils date back 55 to 40 million years ago, during the Eocene epoch. At that time, the world was much warmer and wetter, and these hothouse conditions meant there were palms at the North and South Poles, and predominantly arid landmasses like Australia were lush and green. Reichgelt and co-authors looked for evidence of differences in precipitation and plant productivity between then and now.

Since different plants thrive under specific conditions, plant fossils can indicate what kinds of environments those plants lived in.

By focusing on the morphology and taxonomic features of 12 different floras, the researchers developed a more detailed view of what the climate and productivity were like in the ancient hothouse world of the Eocene epoch.

Reichgelt explains that the morphological method relies on the fact that the leaves of angiosperms -- flowering plants -- in general have a strategy for responding to climate.

"For example, if a plant has large leaves and it is left out in the sun and doesn't get enough water, it starts to shrivel up and die because of excess evaporation," Reichgelt says. "Plants with large leaves also lose heat to its surroundings. Finding a large fossil leaf therefore means that most likely this plant was not growing in an environment that was too dry or too cold for excess evaporation or sensible heat loss to happen. These and other morphological features can be linked to the environment that we can quantify. We can compare fossils to modern floras around the world and find the closest analogy."

The second approach was taxonomic. "If you travel up a mountain, the taxonomic composition of the flora changes. Low on the mountain, there may be a deciduous forest that is dominated by maples and beeches, and as you go further up the mountain, you see more spruce and fir forest," says Reichgelt. "Finding fossils of beech and maple therefore likely means a warmer climate than if we find fossils of spruce and fir." Such climatic preferences of plant groups can be used to quantitatively reconstruct the ancient climate in which a group of plants in a fossil assemblage was growing.
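
A minimal sketch of the morphological approach is leaf-margin analysis, in which the share of flowering-plant species with smooth (entire) leaf margins rises with temperature. The linear form below follows widely cited calibrations, but the coefficients should be read as illustrative approximations and the fossil counts are invented.

```python
# Minimal sketch of the physiognomic idea described above: in leaf-margin analysis,
# the share of angiosperm species with smooth (entire) leaf margins rises with
# temperature. The coefficients below approximate widely cited calibrations but are
# illustrative, and the fossil counts are invented for the example.
def leaf_margin_mat(entire_margined_species, total_species):
    """Estimate mean annual temperature (deg C) from the share of entire-margined species."""
    proportion = entire_margined_species / total_species
    return 1.14 + 30.6 * proportion  # illustrative linear calibration

# Hypothetical Eocene flora: 28 of 40 angiosperm species have entire (toothless) margins.
print(f"estimated mean annual temperature: {leaf_margin_mat(28, 40):.1f} deg C")
```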

The results show that the Eocene climate would have been very different to Australia's modern climate. To sustain a lush green landscape, the continent required a steady supply of precipitation. Warmth means more evaporation, and more rainfall was available to move into Australia's continental interior. Higher levels of carbon dioxide in the atmosphere at the time, 1500 to 2000 parts per million, also contributed to the lushness via a process called carbon fertilization. Reichgelt explains that with the sheer abundance of CO2, plants were basically stuffing their faces.

"Southern Australia seems to have been largely forested, with primary productivity similar to seasonal forests, not unlike those here in New England today," Reichgelt says. "In the Northern Hemisphere summer today, there is a big change in the carbon cycle, because lots of carbon dioxide gets drawn down due to primary productivity in the enormous expanse of forests that exists in a large belt around 40 to 60 degrees north. In the Southern Hemisphere, no such landmass exists at those same latitudes today. But Australia during the Eocene occupied 40 degrees to 60 degrees south. And as a result, there would be a highly productive large landmass during the Southern Hemisphere summer, drawing down carbon, more so than what Australia is doing today since it is largely arid."

Hutchinson says the geological evidence suggests the climate is highly sensitive to CO2 and that this effect may be larger than what our climate models predict: "The data also suggests that polar amplification of warming was very strong, and our climate models also tend to under-represent this effect. So, if we can improve our models of the high-CO2 Eocene world, we might improve our predictions of the future."

Future projects will expand the dataset beyond Australia to ask how biosphere productivity responds to a hothouse climate on a global scale.

"We have large datasets of plant fossils that have been collected around the world, so we can apply the same methods that we use here to ask what happens to global biosphere productivity," says Reichgelt.

With carbon emissions rising, more research is going into what happens in the biosphere as photosynthetic activity and water-use efficiency increase in plants. Reichgelt explains that modern plants have not had time to adapt to today's changing CO2 conditions. However, by looking to the past, we can glean some of that information.

Read more at Science Daily

Healthy development thanks to older siblings

In a new study, a Leipzig-based team of researchers including scientists from the Helmholtz Centre for Environmental Research (UFZ), Leipzig University (UL), the MPI for Evolutionary Anthropology (MPI EVA) and the German Centre for Integrative Biodiversity Research (iDiv) used longitudinal data from the LINA (Lifestyle and environmental factors and their Influence on the Newborn Allergy risk) cohort to follow 373 German mother-child pairs from pregnancy until the children were 10 years old.

Mothers were asked to fill in three validated questionnaires, to assess their stress levels and their child's behavioural problems. First, the researchers assessed which social and environmental factors were linked to an increase in maternal stress levels during pregnancy, and the long-term consequences of maternal stress on the occurrence of child behavioural problems. Second, the researchers assessed whether the presence of siblings had a positive effect on the occurrence of child behavioural problems, by directly reducing stress levels and increasing children's psychological well-being, or by indirectly buffering the negative consequences of maternal stress.

Prenatal stress can cause behavioural problems in the child

The results of the study demonstrated that socio-environmental stressors, like the lack of sufficient social areas in the neighbourhood, were clearly linked to an increase in maternal stress levels during pregnancy. Moreover, mothers who had experienced high stress levels, like worries, loss of joy or tension, during pregnancy were also more likely to report the occurrence of behavioural problems when their children were 7, 8 or 10 years old. "These results confirm previous findings about the negative impact that even mild forms of prenatal stress might have on child behaviour, even after several years, and highlight the importance of early intervention policies that increase maternal wellbeing and reduce the risks of maternal stress already during pregnancy," explains Federica Amici (UL, MPI-EVA), one of the researchers involved in the project.

On a more positive note, the study also found a lower occurrence of behavioural problems in children with older siblings. "Children who have older brothers or sisters in their households are less likely to develop problems, which suggests that siblings are crucial to promote a healthy child development," explains Gunda Herberth (UFZ), coordinator of the LINA study.

Higher social competence thanks to older siblings?

This study further suggests that the presence of older siblings directly reduced the likelihood of developing behavioural problems, but did not modulate the negative effects of maternal stress on child behaviour. How could older siblings reduce the occurrence of behavioural problems in children? By interacting with their older siblings, children may develop better emotional, perspective-taking and problem-solving skills, which are linked to higher social competence and emotional understanding. Moreover, the presence of older siblings may provide learning opportunities for parents, who might thus develop different expectations and better parental skills.

Read more at Science Daily

Your liver is just under three years old

The liver has a unique ability to regenerate after damage. However, it was unknown whether this ability decreases as we age. International scientists led by Dr. Olaf Bergmann at the Center for Regenerative Therapies Dresden (CRTD) at TU Dresden used a technique known as retrospective radiocarbon birth dating to determine the age of the human liver. They showed that no matter the person's age, the liver is always on average less than three years old. The results demonstrate that aging does not influence liver renewal, making the liver an organ that replaces its cells equally well in young and old people.

The liver is an essential organ that takes care of clearing toxins in our bodies. Because it constantly deals with toxic substances, it is likely to be regularly injured. To overcome this, the liver has a unique capacity among organs to regenerate itself after damage. Because a lot of the body's ability to heal itself and regenerate decreases as we age, scientists were wondering if the liver's capacity to renew also diminishes with age.

The nature of liver renewal in humans also remained a mystery. The animal models provided contradictory answers. "Some studies pointed to the possibility that liver cells are long-lived while others showed a constant turnover. It was clear to us that if we want to know what happens in humans, we need to find a way to directly assess the age of human liver cells," says Dr. Olaf Bergmann, research group leader at the Center for Regenerative Therapies Dresden (CRTD) at TU Dresden.

The Human Liver Remains a Young Organ

The interdisciplinary team of biologists, physicists, mathematicians, and clinicians led by Dr. Bergmann analyzed the livers of multiple individuals who died at ages between 20 and 84 years old. Surprisingly, the team showed that the liver cells of all subjects were more or less the same age.

"No matter if you are 20 or 84, your liver stays on average just under three years old," explains Dr. Bergmann. The results show that the adjustment of liver mass to the needs of the body is tightly regulated through the constant replacement of liver cells and that this process is maintained even in older people. This ongoing liver cell replacement is important for various aspects of liver regeneration and cancer formation.

Liver Cells with More DNA Renew Less

However, not all the cells in our liver are that young. A fraction of cells can live for up to 10 years before renewing. This subpopulation of liver cells carries more DNA than the typical cells. "Most of our cells have two sets of chromosomes, but some cells accumulate more DNA as they age. In the end, such cells can carry four, eight, or even more sets of chromosomes," explains Dr. Bergmann.

"When we compared typical liver cells with the cells richer in DNA, we found fundamental differences in their renewal. Typical cells renew approximately once a year, while the cells richer in DNA can reside in the liver for up to a decade," says Dr. Bergmann. "As this fraction gradually increases with age, this could be a protective mechanism that safeguards us from accumulating harmful mutations. We need to find out if there are similar mechanisms in chronic liver disease, which in some cases can turn into cancer."

Lessons from the Nuclear Fallout

Determining the biological age of human cells is a massive technical challenge, as methods commonly used in animal models cannot be applied to humans.

Dr. Bergmann's group specializes in retrospective radiocarbon birth dating and uses the technique to assess the biological age of human tissues. Carbon is a chemical element that is ubiquitous and forms the backbone of life on Earth. Radiocarbon is a rare, naturally occurring form of carbon found in the atmosphere. Plants incorporate it through photosynthesis, in the same way as typical carbon, and pass it on to animals and humans. Radiocarbon is weakly radioactive and unstable, and these characteristics are taken advantage of in archeology to determine the age of ancient samples.

"Archeologists have used the decay of radiocarbon successfully for many years to assess the age of specimens, one example being dating of the shroud of Turin," says Dr. Bergmann. "The radioactive decay of radiocarbon is very slow. It provides enough resolution for archeologists but it is not useful for determining the age of human cells. Nevertheless, we can still take advantage of the radiocarbon in our research."

The aboveground nuclear tests carried out in the 1950s introduced massive amounts of radiocarbon into the atmosphere, into the plants, and into the animals. As a result, cells formed in this period have higher amounts of radiocarbon in their DNA.

Following the official ban of aboveground nuclear testing in 1963, the amounts of atmospheric radiocarbon started to drop and so did the amounts of radiocarbon incorporated into the animal DNA. The values of atmospheric and cellular radiocarbon correspond to each other very well.

"Even though these are negligible amounts that are not harmful, we can detect and measure them in tissue samples. By comparing the values to the levels of atmospheric radiocarbon, we can retrospectively establish the age of the cells," explains Dr. Bergmann.

Unparalleled Insights Directly From the Source


The Bergmann group also explores the mechanisms that drive the regeneration of other tissues considered as static, such as the brain or the heart. The team has previously used their expertise in retrospective radiocarbon birth dating to show that the formation of new brain and heart cells is not limited to prenatal time but continues throughout life. Currently, the group is investigating whether new human heart muscle cells can still be generated in people with chronic heart disease.

Read more at Science Daily

May 30, 2022

Scientists shine new light on role of Earth's orbit in the fate of ancient ice sheets

Scientists have finally put to bed a long-standing question over the role of Earth's orbit in driving global ice age cycles.

In a new study published today in the journal Science, the team from Cardiff University has been able to pinpoint exactly how the tilting and wobbling of the Earth as it orbits around the Sun has influenced the melting of ice sheets in the Northern Hemisphere over the past 2 million years or so.

Scientists have long been aware that the waxing and waning of massive Northern Hemisphere ice sheets results from changes in the geometry of Earth's orbit around the Sun.

There are two aspects of the Earth's orbital geometry that can influence the melting of ice sheets: obliquity and precession.

Obliquity is the angle of the Earth's tilt as it travels around the Sun and is the reason why we have different seasons.

Precession describes how the Earth wobbles as it rotates, much like a slightly off-centre spinning top. Because of this wobble, sometimes the Northern Hemisphere and sometimes the Southern Hemisphere is tilted toward the Sun when the Earth is closest to it, meaning that roughly every 10,000 years one hemisphere will have warmer summers than the other, before it switches.

Scientists have determined that over the past million years or so, the combined effects of obliquity and precession on the waxing and waning of Northern Hemisphere ice sheets have resulted, through complicated interactions within the climate system, in ice age cycles lasting approximately 100,000 years.

However, before 1 million years ago, in a period known as the early Pleistocene, the duration of ice age cycles was controlled only by obliquity and these ice age cycles were almost exactly 41,000 years long.
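
A toy sketch of how the two cycles combine (amplitudes invented, eccentricity modulation ignored) treats high-latitude summer forcing as the sum of an approximately 41,000-year obliquity cycle and an approximately 23,000-year precession cycle.

```python
# Toy sketch of orbital forcing (amplitudes invented, eccentricity modulation ignored):
# relative high-latitude northern-summer forcing as the sum of an ~41,000-year
# obliquity cycle and an ~23,000-year precession cycle.
import math

OBLIQUITY_PERIOD_YR = 41_000
PRECESSION_PERIOD_YR = 23_000

def summer_forcing(years_ago, obliquity_amp=1.0, precession_amp=0.6):
    """Relative northern-summer insolation anomaly from two idealized orbital cycles."""
    return (obliquity_amp * math.cos(2 * math.pi * years_ago / OBLIQUITY_PERIOD_YR)
            + precession_amp * math.cos(2 * math.pi * years_ago / PRECESSION_PERIOD_YR))

# Sample the combined forcing every 10,000 years over the last 120,000 years.
for t in range(0, 130_000, 10_000):
    print(f"{t:>7,} years ago: {summer_forcing(t):+.2f}")
```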

For decades, scientists have been puzzled as to why precession did not play a more important part in driving ice age cycles during this period.

In their new study, the Cardiff University team reveal new evidence suggesting that precession did actually play a role during the early Pleistocene.

Their results show that more intense summers, driven by precession, have always caused Northern Hemisphere ice sheets to melt, but before 1 million years ago, these events were less devastating and did not lead to the complete collapse of ice sheets.

Lead author of the study Professor Stephen Barker, from Cardiff University's School of Earth and Environmental Sciences, said: "Early Pleistocene ice sheets in the northern hemisphere were smaller than their more recent counterparts, and limited to higher latitudes where the effects of obliquity dominate over precession. This probably explains why it has taken so long for us to find evidence of precession forcing during the early Pleistocene.

"These findings are the culmination of a major effort, involving more than 12 years of painstaking work in the laboratory to process nearly 10,000 samples and the development of a range of new analytical approaches. Thanks to this we can finally put to rest a long-standing problem in paleoclimatology and ultimately contribute to a better understanding of Earth's climate system.

Read more at Science Daily

Researchers aim X-rays at century-old plant secretions for insight into Aboriginal Australian cultural heritage

For tens of thousands of years, Aboriginal Australians have created some of the world's most striking artworks. Today their work continues long lines of ancestral traditions, stories of the past and connections to current cultural landscapes, which is why researchers are keen on better understanding and preserving the cultural heritage within.

In particular, knowing the chemical composition of pigments and binders that Aboriginal Australian artists employ could allow archaeological scientists and art conservators to identify these materials in important cultural heritage objects. Now, researchers are turning to X-ray science to help reveal the composition of the materials used in Aboriginal Australian cultural heritage -- starting with the analysis of century-old samples of plant secretions, or exudates.

Aboriginal Australians continue to use plant exudates, such as resins and gums, to create rock and bark paintings and for practical applications, such as hafting stone points to handles. But just what these plant materials are made of is not well known.

Therefore, scientists from six universities and laboratories around the world turned to high-energy X-rays at the Stanford Synchrotron Radiation Lightsource (SSRL) at the Department of Energy's SLAC National Accelerator Laboratory and the synchrotron SOLEIL in France. The team aimed X-rays at 10 well-preserved plant exudate samples from the native Australian genera Eucalyptus, Callitris, Xanthorrhoea and Acacia. The samples had been collected more than a century ago and held in various institutions in South Australia.

The results of their study were clearer and more profound than expected.

"We got the breakthrough data we had hoped for," said Uwe Bergmann, physicist at University of Wisconsin-Madison and former SLAC scientist who develops new X-ray methods. "For the first time, we were able to see the molecular structure of a well-preserved collection of native Australian plant samples, which might allow us to discover their existence in other important cultural heritage objects."

Researchers today published their results in the Proceedings of the National Academy of Sciences.

Looking below the surface

Over time, the surface of plant exudates can change as the materials age. Even if this altered surface layer is just nanometers thick, it can still block the view of the material underneath.

"We had to see into the bulk of the material beneath this top layer or we'd have no new information about the plant exudates," SSRL Lead Scientist Dimosthenis Sokaras said.

Conventionally, molecules containing carbon and oxygen are studied with lower-energy, so-called "soft" X-rays, which cannot penetrate through the degraded surface layer. For this study, researchers instead sent high-energy X-ray photons, called "hard" X-rays, into the sample. The photons squeezed past the foggy top layer and into the sample's elemental arrangements beneath. Hard X-rays don't get stuck in the surface, whereas soft X-rays do, Sokaras said.

Once inside, the high-energy photons scattered off the plant exudate's elements and were captured by a large array of precisely aligned silicon crystals at SSRL. The crystals selected only the scattered X-rays of one specific wavelength and funneled them into a small detector, much as a kitchen sink funnels water down its drain.

Next, the team matched the energy difference between the incident and scattered photons to the energy levels of the plant exudate's carbon and oxygen, providing detailed molecular information about the unique Australian samples.
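
A minimal sketch of that bookkeeping, with rounded textbook edge energies and an arbitrary matching tolerance: the energy lost by a scattered photon is compared against the core-level excitation energies of carbon and oxygen.

```python
# Minimal sketch of the energy-loss bookkeeping behind the measurement described above:
# the difference between incident and scattered photon energies is compared with the
# core-level excitation energies of light elements. Edge energies are rounded textbook
# values; the tolerance is arbitrary and the example photon energies are hypothetical.
EDGES_EV = {"carbon K-edge": 284.0, "oxygen K-edge": 537.0}  # approximate onset energies

def identify_edge(incident_ev, scattered_ev, tolerance_ev=15.0):
    """Return the element edge(s) whose excitation energy matches the photon energy loss."""
    energy_loss = incident_ev - scattered_ev
    matches = {name: edge for name, edge in EDGES_EV.items()
               if abs(energy_loss - edge) <= tolerance_ev}
    return energy_loss, matches

# Hypothetical hard-X-ray photon energies (in eV): ~6.46 keV in, ~6.17 keV out.
loss, hits = identify_edge(6460.0, 6172.0)
print(f"energy loss {loss:.0f} eV ->", hits or "no tabulated edge nearby")
```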

A path for the future

Understanding the chemistry of each plant exudate will allow better identification and conservation of Aboriginal Australian art and tools, said Rafaella Georgiou, a physicist at Synchrotron SOLEIL.

"Now we can go ahead and study other organic materials of cultural importance using this powerful X-ray technique," she said.

Researchers hope that people who work in cultural heritage analysis will see this powerful synchrotron radiation technique as a valuable method for determining the chemistries of their samples.

Read more at Science Daily

Fjords emit as much methane as all the deep oceans globally

During heavy storms, the normally stratified layers of water in ocean fjords get mixed, which leads to oxygenation of the fjord floor. But these storm events also result in a spike in methane emissions from fjords to the atmosphere.

Researchers from the University of Gothenburg have estimated that the total emissions of this climate-warming gas are as great from fjords as from all the deep ocean areas in the world put together.

The world's fjords were created when the inland ice receded, and are a relatively rare natural feature, constituting only 0.13 per cent of all the oceans on Earth. However, according to researchers from the University of Gothenburg, emissions of methane from the surface of fjords are comparable to the emissions of this gas from the global deep oceans, which account for 84 per cent of the global sea surface area. These results were presented in an article in the science journal Limnology and Oceanography Letters.

"It's been known for some time that many fjords have anoxic environments closest to the bottom and that methane forms in the bottom sediment. Usually, only a small portion of this gas ever reaches the atmosphere because it gets broken down as it ascends through the more oxygen-rich waters closer to the surface. But in our research, we recorded large emissions of methane when the water in the fjord was mixed during storm events, for example," says Stefano Bonaglia, researcher in marine geochemistry at the Department of Marine Sciences at the University of Gothenburg.

Anoxic environments produce methane

Detecting and budgeting methane emissions to the atmosphere is essential to be able to model the future climate. Researchers estimate that methane emissions cause about 30 per cent of the greenhouse effect. The contribution of the oceans to methane emissions is budgeted as significantly smaller than from land areas. But human activity has increased eutrophication in coastal areas, and this has created larger areas of anoxic waters on the sea floor. This is particularly apparent in fjords, and although they constitute only 0.13 per cent of the global sea surface area, their methane emissions to the atmosphere are roughly as large as those from all the world's deep oceans combined.

"This is because in fjords, carbon-rich sediment is deposited from marine plants and animals as well as from materials entering the fjords from the surrounding land via streams that flow into them. As fjords are relatively protected from ocean currents, the water tends to remain stratified in layers at different temperatures and with different concentrations of salt and oxygen. The layers closest to the fjord floor are anoxic regions where methane gas forms as the material in the sediment decomposes," says Stefano Bonaglia.

Agriculture drives eutrophication


The researchers from the University of Gothenburg studied By Fjord near Uddevalla during the period 2009-2021 and conducted field studies to measure methane production in the fjord. By Fjord is hypoxic and affected by eutrophication. The Bäve River flows into the fjord, bringing with it high concentrations of nutrients from agriculture in the region. It was clear that during mixing events in the fjord, emissions of methane to the atmosphere rose. During these events, anoxic water from the bottom is lifted rapidly to the surface, taking the methane with it, which can then be emitted into the atmosphere.

1 million tonnes of methane

"The methane emissions were high, and American researchers have seen the same types of events in fjords in Canada. We estimate that emissions from all the world's fjords are of the same magnitude -- around 1 Teragram (Tg) or 1 million tonnes per year -- as the budgeted emissions from global deep oceans. This is because the distance from the bottom to the surface of a fjord is much shorter than in deep oceans. This results in more organic matter being deposited in the sediment, and not enough time for the methane to be broken down on its way up to the surface," says Stefano Bonaglia, and adds that if climate change leads to more extreme weather events, methane emissions may rise, but only up to a certain point.

Read more at Science Daily

What's in a name? Glimmers of evolution in naming babies, choosing a dog

Maverick was first used as a baby name after a television show called "Maverick" aired in the 1950s, but its popularity rose meteorically in 1986 with the release of the movie "Top Gun." Today, it is even used for baby girls.

The name Emma peaked in popularity in the late 1800s, declined precipitously through the first half of the 1900s, then shot back up to be one of the most popular names of the early 2000s. Linda peaked somewhere in the late 1940s and Daniel in the mid-1980s. But each rise in popularity was followed by an equally steep decline.

So, what's in a name -- or, at least, what's in a baby name trend? University of Michigan evolutionary biologist Mitchell Newberry has found that the more popular a name becomes, the less likely future parents are to follow suit. Same goes for popular dog breeds: Dalmatians today are a tenth as popular as they were in the 1990s.

Newberry, an assistant professor of complex systems, says examining trends in the popularity of baby names and dog breeds can be a proxy for understanding ecological and evolutionary change. The names and dog breed preferences themselves are like genes or organisms competing for scarce resources. In this case, the scarce resources are the minds of parents and dog owners. His results are published in the journal Nature Human Behaviour.

Newberry looks at frequency-dependent selection, a kind of natural selection in which the tendency to copy a certain variant depends on that variant's current frequency or popularity, regardless of its content. If people tend to copy the most common variant, then everyone ends up doing roughly the same thing. But if people become less willing to copy a variant the more popular it becomes, it leads to a greater diversity of variants.

"Think of how we use millions of different names to refer to people but we almost always use the same word to refer to baseball," Newberry said. "For words, there's pressure to conform, but my work shows that the diversity of names results from pressures against conformity."

These trends are common in biology, but difficult to quantify. What researchers do have is a complete database of the names of babies over the last 87 years.

Newberry used the Social Security Administration baby name database, itself born in 1935, to examine frequency dependence in first names in the United States. He found that when a name is at its rarest -- 1 in 10,000 births -- it tends to grow, on average, at a rate of 1.4% a year. But when a name is at its most common -- more than 1 in 100 births -- its popularity declines, on average, at 1.6% a year.
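
As a toy illustration of that frequency dependence (not Newberry's model), the sketch below lets a name's yearly growth rate slide, on a log-frequency scale, between the two figures quoted above -- +1.4% per year at 1 in 10,000 births and -1.6% per year at 1 in 100 -- and follows a rare name forward in time; the log-linear interpolation is an assumption made for illustration.

```python
# Toy simulation (not Newberry's model): a name's yearly growth rate slides between
# the two figures quoted above -- +1.4%/yr when rare (1 in 10,000 births) and
# -1.6%/yr when common (1 in 100 births) -- interpolated on a log-frequency scale,
# which is an assumption made for illustration.
import math

RARE_FREQ, RARE_GROWTH = 1e-4, 0.014       # 1 in 10,000 births -> +1.4% per year
COMMON_FREQ, COMMON_GROWTH = 1e-2, -0.016  # 1 in 100 births    -> -1.6% per year

def growth_rate(freq):
    """Per-year growth rate, log-linearly interpolated (and clamped) between the anchors."""
    t = (math.log10(freq) - math.log10(RARE_FREQ)) / (math.log10(COMMON_FREQ) - math.log10(RARE_FREQ))
    t = min(max(t, 0.0), 1.0)
    return RARE_GROWTH + t * (COMMON_GROWTH - RARE_GROWTH)

freq = RARE_FREQ  # start out as a rare name
for year in range(0, 301, 50):
    print(f"year {year:3d}: about 1 in {1 / freq:,.0f} births ({growth_rate(freq) * 100:+.2f}%/yr)")
    for _ in range(50):
        freq *= 1 + growth_rate(freq)
```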

"This is really a case study showing how boom-bust cycles by themselves can disfavor common types and promote diversity," Newberry said. "If people are always thirsting after the newest thing, then it's going to create a lot of new things. Every time a new thing is created, it's promoted, and so more rare things rise to higher frequency and you have more diversity in the population."

Using the same techniques they applied to baby names, Newberry and colleagues examined dog breed preferences using a database of purebred dog registrations from the American Kennel Club. They found boom-bust cycles in the popularity of dog breeds similar to the boom-bust cycles in baby names.

The researchers found a Greyhound boom in the 1940s and a Rottweiler boom in the 1990s. This shows what researchers call negative frequency-dependent selection, or anti-conformity: as a variant's frequency increases, selection against it becomes stronger. That means that rare dog breeds at 1 in 10,000 registrations tend to increase in popularity faster than breeds already at 1 in 10.

"Biologists basically think these frequency-dependent pressures are fundamental in determining so many things," Newberry said. "The long list includes genetic diversity, immune escape, host-pathogen dynamics, the fact that there's basically a one-to-one ratio of males and females -- and even what different populations think is sexy.

"Why do birds like long tails? Why do bamboos take so long to flower? Why do populations split into different species? All of these relate at a fundamental level to either pressures of conformity or anticonformity within populations."

Conformity is necessary within species, Newberry says. For example, scientists can alter the order of genes on a fly's chromosomes, and it does not affect the fly at all. But that doesn't happen in the wild, because when that fly mates, its genes won't pair with its mate's, and their offspring will not survive.

However, we also need anticonformity, he says. If we all had the same immune system, we would all be susceptible to exactly the same diseases. Or, Newberry says, if the same species of animal all visited the same patch of land for food, they would quickly eat themselves out of existence.

Read more at Science Daily