Jan 15, 2022

Unusual team finds gigantic planet hidden in plain sight

A UC Riverside astronomer and a group of eagle-eyed citizen scientists have discovered a giant gas planet hidden from view by typical stargazing tools.

The planet, TOI-2180 b, has the same diameter as Jupiter, but is nearly three times more massive. Researchers also believe it contains 105 times the mass of Earth in elements heavier than helium and hydrogen. Nothing quite like it exists in our solar system.

Details of the finding have been published in the Astronomical Journal and presented at the American Astronomical Society virtual press event on Jan. 13.

"TOI-2180 b is such an exciting planet to have found," said UCR astronomer Paul Dalba, who helped confirm the planet's existence. "It hits the trifecta of 1) having a several-hundred-day orbit, 2) being relatively close to Earth (379 lightyears is considered close for an exoplanet), and 3) us being able to see it transit in front of its star. It is very rare for astronomers to discover a planet that checks all three of these boxes."

Dalba also explained that the planet is special because it takes 261 days to complete a journey around its star, a relatively long time compared to many known gas giants outside our solar system. Its relative proximity to Earth and the brightness of the star it orbits also make it likely astronomers will be able to learn more about it.

In order to locate exoplanets, which orbit stars other than our sun, NASA's TESS satellite looks at one part of the sky for a month, then moves on. It is searching for dips in brightness that occur when a planet crosses in front of a star.

"The rule of thumb is that we need to see three 'dips' or transits before we believe we've found a planet," Dalba said. A single transit event could be caused by a telescope with a jitter, or a star masquerading as a planet. For these reasons, TESS isn't focused on these single transit events. However, a small group of citizen scientists is.

Looking over TESS data, Tom Jacobs, a group member and former U.S. naval officer, saw light dim from the TOI-2180 star, just once. His group alerted Dalba, who specializes in studying planets that take a long time to orbit their stars.

Using the Lick Observatory's Automated Planet Finder Telescope, Dalba and his colleagues observed the planet's gravitational tug on the star, which allowed them to calculate the mass of TOI-2180 b and estimate a range of possibilities for its orbit.
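
For readers curious about the arithmetic behind this step, the sketch below applies the standard radial-velocity mass relation. The wobble amplitude (90 m/s) and the solar-mass star are placeholder assumptions chosen only to be consistent with the roughly-three-Jupiter-mass figure quoted above, not the team's published measurements.

```python
# Minimum planet mass from a star's radial-velocity wobble, assuming a
# circular orbit and Mp << M*. Input values below are placeholders.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def planet_min_mass_mjup(K, P_days, m_star_msun, ecc=0.0):
    """Mp * sin(i) in Jupiter masses from RV semi-amplitude K (m/s)."""
    P = P_days * 86400.0
    m_star = m_star_msun * M_SUN
    m_p = (K * m_star ** (2.0 / 3.0)
           * (P / (2.0 * math.pi * G)) ** (1.0 / 3.0)
           * math.sqrt(1.0 - ecc ** 2))
    return m_p / M_JUP

# A ~90 m/s wobble with a 261-day period around a Sun-like star implies
# a planet of roughly 2.8 Jupiter masses.
print(round(planet_min_mass_mjup(K=90.0, P_days=261.0, m_star_msun=1.0), 1))
```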

Hoping to observe a second transit event, Dalba organized a campaign using 14 different telescopes across three continents in the northern hemisphere. Over the course of 11 days in August 2021, the effort resulted in 20,000 images of the TOI-2180 star, though none of them detected the planet with confidence.

However, the campaign did lead the group to estimate that TESS will see the planet transit its star again in February, when they're planning a follow-up study. Funding for Dalba's research is provided by the National Science Foundation's Astronomy and Astrophysics Postdoctoral Fellowship Program.

The citizen planet hunters' group takes publicly available data from NASA satellites like TESS and looks for single transit events. While professional astronomers use algorithms to scan a lot of data automatically, the Visual Survey Group uses a program they created to inspect telescope data by eye.

Read more at Science Daily

Your gut senses the difference between real sugar and artificial sweetener

Your taste buds may or may not be able to tell real sugar from a sugar substitute, but there are cells in your intestines that can and do distinguish between the two sweet solutions. And they can communicate the difference to your brain in milliseconds.

Not long after the sweet taste receptor was identified in the mouths of mice 20 years ago, scientists attempted to knock those taste buds out. But they were surprised to find that mice could still somehow distinguish natural sugar from artificial sweetener, and prefer it, even without a sense of taste.

The answer to this riddle lies much further down in the digestive tract, at the upper end of the gut just after the stomach, according to research led by Diego Bohórquez, an associate professor of medicine and neurobiology in the Duke University School of Medicine.

In a paper appearing Jan. 13 in Nature Neuroscience, "we've identified the cells that make us eat sugar, and they are in the gut," Bohórquez said. Infusing sugar directly into the lower intestine or colon does not have the same effect. The sensing cells are in the upper reaches of the gut, he said.

Having discovered a gut cell called the neuropod cell, Bohórquez and his research team have been pursuing this cell's critical role as a connection between what's inside the gut and its influence in the brain. The gut, he argues, talks directly to the brain, changing our eating behavior. And in the long run, these findings may lead to entirely new ways of treating diseases.

Originally termed enteroendocrine cells because of their ability to secrete hormones, specialized neuropod cells can communicate with neurons via rapid synaptic connections and are distributed throughout the lining of the upper gut. The Bohórquez research team has shown that, in addition to producing relatively slow-acting hormone signals, these cells also produce fast-acting neurotransmitter signals that reach the vagus nerve and then the brain within milliseconds.

Bohórquez said his group's latest findings further show that neuropods are sensory cells of the nervous system just like taste buds in the tongue or the retinal cone cells in the eye that help us see colors.

"These cells work just like the retinal cone cells that that are able to sense the wavelength of light," Bohórquez said. "They sense traces of sugar versus sweetener and then they release different neurotransmitters that go into different cells in the vagus nerve, and ultimately, the animal knows 'this is sugar' or 'this is sweetener.'"

Using lab-grown organoids from mouse and human cells to represent the small intestine and duodenum (upper gut), the researchers showed in a small experiment that real sugar stimulated individual neuropod cells to release glutamate as a neurotransmitter. The artificial sweetener triggered the release of a different neurotransmitter, ATP.

Using a technique called optogenetics, the scientists were then able to turn the neuropod cells on and off in the gut of a living mouse to show whether the animal's preference for real sugar was being driven by signals from the gut. The key enabling technology for the optogenetic work was a new flexible waveguide fiber developed by MIT scientists. This flexible fiber delivers light throughout the gut in a living animal to trigger a genetic response that silenced the neuropod cells. With their neuropod cells switched off, the animal no longer showed a clear preference for real sugar.

"We trust our gut with the food we eat," Bohórquez said. "Sugar has both taste and nutritive value and the gut is able to identify both."

"Many people struggle with sugar cravings, and now we have a better understanding of how the gut senses sugars (and why artificial sweeteners don't curb those cravings)," said co-first author Kelly Buchanan, a former Duke University School of Medicine student who is now an Internal Medicine resident at Massachusetts General Hospital. "We hope to target this circuit to treat diseases we see every day in the clinic."

In future work, Bohórquez said he will be showing how these cells also recognize other macronutrients. "We always talk about 'a gut sense,' and say things like 'trust your gut,' well, there's something to this," Bohórquez said.

Read more at Science Daily

Jan 14, 2022

Cosmic 'spider' found to be source of powerful gamma-rays

Using the 4.1-meter SOAR Telescope in Chile, astronomers have discovered the first example of a binary system where a star in the process of becoming a white dwarf is orbiting a neutron star that has just finished turning into a rapidly spinning pulsar. The pair, originally detected by the Fermi Gamma-ray Space Telescope, is a "missing link" in the evolution of such binary systems.

A bright, mysterious source of gamma rays has been found to be a rapidly spinning neutron star -- dubbed a millisecond pulsar -- that is orbiting a star in the process of evolving into an extremely-low-mass white dwarf. These types of binary systems are referred to by astronomers as "spiders" because the pulsar tends to "eat" the outer parts of the companion star as it turns into a white dwarf.

The duo was detected by astronomers using the 4.1-meter SOAR Telescope on Cerro Pachón in Chile, part of Cerro Tololo Inter-American Observatory (CTIO), a Program of NSF's NOIRLab.

NASA's Fermi Gamma-ray Space Telescope has been cataloging objects in the Universe that produce copious gamma rays since its launch in 2008, but not all of the sources of gamma rays that it detects have been classified. One such source, called 4FGL J1120.0-2204 by astronomers, was the second brightest gamma-ray source in the entire sky that had gone unidentified, until now.

Astronomers from the United States and Canada, led by Samuel Swihart of the US Naval Research Laboratory in Washington, D.C., used the Goodman Spectrograph on the SOAR Telescope to determine the true identity of 4FGL J1120.0-2204. The gamma-ray source, which also emits X-rays, as observed by NASA's Swift and ESA's XMM-Newton space telescopes, has been shown to be a binary system consisting of a "millisecond pulsar" that spins hundreds of times per second, and the precursor to an extremely-low-mass white dwarf. The pair are located over 2600 light-years away.

"Michigan State University's dedicated time on the SOAR Telescope, its location in the southern hemisphere and the precision and stability of the Goodman spectrograph, were all important aspects of this discovery," says Swihart.

"This is a great example of how mid-sized telescopes in general, and SOAR in particular, can be used to help characterize unusual discoveries made with other ground and space-based facilities," notes Chris Davis, NOIRLab Program Director at US National Science Foundation. "We anticipate that SOAR will play a crucial role in the follow-up of many other time-variable and multi-messenger sources over the coming decade."

The optical spectrum of the binary system measured by the Goodman spectrograph showed that light from the proto-white dwarf companion is Doppler shifted -- alternately shifted to the red and the blue -- indicating that it orbits a compact, massive neutron star every 15 hours.

"The spectra also allowed us to constrain the approximate temperature and surface gravity of the companion star," says Swihart, whose team was able to take these properties and apply them to models describing how binary star systems evolve. This allowed them to determine that the companion is the precursor to an extremely-low-mass white dwarf, with a surface temperature of 8200 °C (15,000 °F), and a mass of just 17% that of the Sun.

When a star with a mass similar to that of the Sun or less reaches the end of its life, it will run out of the hydrogen used to fuel the nuclear fusion processes in its core. For a time, helium takes over and powers the star; its core contracts and heats up while its outer layers expand, and the star evolves into a red giant that is hundreds of millions of kilometers in size. Eventually, the outer layers of this swollen star can be accreted onto a binary companion and nuclear fusion halts, leaving behind a white dwarf about the size of Earth and sizzling at temperatures exceeding 100,000 °C (180,000 °F).

The proto-white dwarf in the 4FGL J1120.0-2204 system hasn't finished evolving yet. "Currently it's bloated, and is about five times larger in radius than normal white dwarfs with similar masses," says Swihart. "It will continue cooling and contracting and, in about two billion years, it will look identical to many of the extremely low mass white dwarfs that we already know about."

Millisecond pulsars twirl hundreds of times every second. They are spun up by accreting matter from a companion, in this case from the star that became the white dwarf. Most millisecond pulsars emit gamma rays and X-rays, often when the pulsar wind, which is a stream of charged particles emanating from the rotating neutron star, collides with material emitted from a companion star.

About 80 extremely low-mass white dwarfs are known, but "this is the first precursor to an extremely low-mass white dwarf found that is likely orbiting a neutron star," says Swihart. Consequently, 4FGL J1120.0-2204 is a unique look at the tail-end of this spin-up process. All the other white dwarf-pulsar binaries that have been discovered are well past the spinning-up stage.

Read more at Science Daily

Scientists dive deep into the different effects of morning and evening exercise

It is well established that exercise improves health, and recent research has shown that exercise benefits the body in different ways, depending on the time of day. However, scientists still do not know why the timing of exercise produces these different effects. To gain a better understanding, an international team of scientists recently carried out the most comprehensive study to date of exercise performed at different times of the day.

Their research shows how the body produces different health-promoting signaling molecules in an organ-specific manner following exercise depending on the time of day. These signals have a broad impact on health, influencing sleep, memory, exercise performance, and metabolic homeostasis. Their findings were recently published in the journal Cell Metabolism.

"A better understanding of how exercise affects the body at different times of day might help us to maximize the benefits of exercise for people at risk of diseases, such as obesity and type 2 diabetes," says Professor Juleen R. Zierath from Karolinska Institutet and the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR) at the University of Copenhagen.

Using exercise to fix a faulty body clock

Almost all cells regulate their biological processes over a 24-hour cycle, known as a circadian rhythm. This means that the sensitivity of different tissues to the effects of exercise changes depending on the time of day. Earlier research has confirmed that timing exercise according to our circadian rhythm can optimize its health-promoting effects.

The team of international scientists wanted a more detailed understanding of this effect, so they carried out a range of experiments on mice that exercised either in the early morning or the late evening. Blood samples and different tissues, including brain, heart, muscle, liver, and fat, were collected and analyzed by mass spectrometry. This allowed the scientists to detect hundreds of different metabolites and hormone signaling molecules in each tissue, and to monitor how they were changed by exercising at different times of the day.

The result is an 'Atlas of Exercise Metabolism' -- a comprehensive map of exercise-induced signaling molecules present in different tissues following exercise at different times of day.

"As this is the first comprehensive study that summarizes time and exercise dependent metabolism over multiple tissues, it is of great value to generate and refine systemic models for metabolism and organ crosstalk," adds Dominik Lutter, Head of Computational Discovery Research from the Helmholtz Diabetes Center at Helmholtz Munich.

New insights include a deeper understanding of how tissues communicate with each other, and how exercise can help to 'realign' faulty circadian rhythms in specific tissues -- faulty circadian clocks have been linked to increased risks of obesity and type 2 diabetes. Finally, the study identified new exercise-induced signaling molecules in multiple tissues, which need further investigation to understand how they can individually or collectively influence health.

"Not only do we show how different tissues respond to exercise at different times of the day, but we also propose how these responses are connected to induce an orchestrated adaptation that controls systemic energy homeostasis," says Associate Professor Jonas Thue Treebak from CBMR at the University of Copenhagen, and co-first author of the publication.

A resource for future exercise research

The study has several limitations. The experiments were carried out in mice. While mice share many common genetic, physiological, and behavioral characteristics with humans, they also have important differences. For example, mice are nocturnal, and the type of exercise was limited to treadmill running, which can produce different results compared to high-intensity exercise. Finally, the impact of sex, age, and disease was not considered in the analysis.

"Despite the limitations, it's an important study that helps to direct further research that can help us better understand how exercise, if timed correctly, can help to improve health," says Assistant Professor Shogo Sato from the Department of Biology and the Center for Biological Clocks Research at Texas A&M University, and fellow co-first author.

Fellow co-first author Kenneth Dyar, Head of Metabolic Physiology from the Helmholtz Diabetes Center at Helmholtz Munich, stressed the utility of the atlas as a comprehensive resource for exercise biologists. "While our resource provides important new perspectives about energy metabolites and known signaling molecules, this is just the tip of the iceberg. We show some examples of how our data can be mined to identify new tissue and time-specific signaling molecules," he says.

Read more at Science Daily

Why do we forget? New theory proposes 'forgetting' is actually a form of learning

We create countless memories as we live our lives, but many of these we forget. Why? Counter to the general assumption that memories simply decay with time, 'forgetting' might not be a bad thing -- that is according to scientists who believe it may represent a form of learning.

The scientists behind the new theory -- outlined today in leading international journal Nature Reviews Neuroscience -- suggest that changes in our ability to access specific memories are based on environmental feedback and predictability. Rather than being a bug, forgetting may be a functional feature of the brain, allowing it to interact dynamically with the environment.

In a changing world like the one we and many other organisms live in, forgetting some memories can be beneficial as this can lead to more flexible behaviour and better decision-making. If memories were gained in circumstances that are not wholly relevant to the current environment, forgetting them can be a positive change that improves our wellbeing.

So, in effect, the scientists believe we learn to forget some memories while retaining others that are important. Forgetting of course comes at the cost of lost information, but a growing body of research indicates that, at least in some cases, forgetting is due to altered memory access rather than memory loss.

The new theory has been proposed by Dr Tomás Ryan, Associate Professor in the School of Biochemistry and Immunology and the Trinity College Institute of Neuroscience at Trinity College Dublin, and Dr Paul Frankland, Professor in the Department of Psychology at the University of Toronto and the Hospital for Sick Children in Toronto.

Both Dr Ryan and Dr Frankland are fellows of the Canadian global research organization CIFAR, which enabled this collaboration through its Child & Brain Development program, which is pursuing interdisciplinary work in this area.

Dr Ryan, whose research team is based in the Trinity Biomedical Sciences Institute (TBSI), said:

"Memories are stored in ensembles of neurons called 'engram cells' and successful recall of these memories involves the reactivation of these ensembles. The logical extension of this is that forgetting occurs when engram cells cannot be reactivated. The memories themselves are still there, but if the specific ensembles cannot be activated they can't be recalled. It's as if the memories are stored in a safe but you can't remember the code to unlock it.

"Our new theory proposes that forgetting is due to circuit remodelling that switches engram cells from an accessible to an inaccessible state. Because the rate of forgetting is impacted by environmental conditions, we propose that forgetting is actually a form of learning that alters memory accessibility in line with the environment and how predictable it is."

Read more at Science Daily

Past eight years: Warmest since modern recordkeeping began

Earth's global average surface temperature in 2021 tied with 2018 as the sixth warmest on record, according to independent analyses done by NASA and the National Oceanic and Atmospheric Administration (NOAA).

Continuing the planet's long-term warming trend, global temperatures in 2021 were 1.5 degrees Fahrenheit (0.85 degrees Celsius) above the average for NASA's baseline period, according to scientists at NASA's Goddard Institute for Space Studies (GISS) in New York. NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

Collectively, the past eight years are the warmest years since modern recordkeeping began in 1880. This annual temperature data makes up the global temperature record -- which tells scientists the planet is warming.

According to NASA's temperature record, Earth in 2021 was about 1.9 degrees Fahrenheit (or about 1.1 degrees Celsius) warmer than the late 19th century average, the start of the industrial revolution.

"Science leaves no room for doubt: Climate change is the existential threat of our time," said NASA Administrator Bill Nelson. "Eight of the top 10 warmest years on our planet occurred in the last decade, an indisputable fact that underscores the need for bold action to safeguard the future of our country -- and all of humanity. NASA's scientific research about how Earth is changing and getting warmer will guide communities throughout the world, helping humanity confront climate and mitigate its devastating effects."

This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere. The planet is already seeing the effects of global warming: Arctic sea ice is declining, sea levels are rising, wildfires are becoming more severe and animal migration patterns are shifting. Understanding how the planet is changing -- and how rapidly that change occurs -- is crucial for humanity to prepare for and adapt to a warmer world.

Weather stations, ships, and ocean buoys around the globe record the temperature at Earth's surface throughout the year. These ground-based measurements of surface temperature are validated with satellite data from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite. Scientists analyze these measurements using computer algorithms to deal with uncertainties in the data and quality control to calculate the global average surface temperature difference for every year. NASA compares that global mean temperature to its baseline period of 1951-1980. That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth's temperature.
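
The core of that calculation, stripped of the quality control and spatial weighting, is just a baseline subtraction. The sketch below is a toy version with synthetic numbers, not NASA's GISTEMP code.

```python
# Toy anomaly calculation: express each year's global mean temperature as a
# difference from the 1951-1980 baseline. The temperatures here are fake.
import numpy as np

years = np.arange(1880, 2022)
temps = 14.0 + np.random.default_rng(1).normal(0.0, 0.3, years.size)  # deg C

baseline = temps[(years >= 1951) & (years <= 1980)].mean()
anomalies = temps - baseline  # NASA's published 2021 anomaly was ~0.85 deg C
print(f"baseline = {baseline:.2f} C, 2021 anomaly = {anomalies[-1]:+.2f} C")
```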

Many factors affect the average temperature any given year, such as La Niña and El Niño climate patterns in the tropical Pacific. For example, 2021 was a La Niña year and NASA scientists estimate that it may have cooled global temperatures by about 0.06 degrees Fahrenheit (0.03 degrees Celsius) from what the average would have been.

A separate, independent analysis by NOAA also concluded that the global surface temperature for 2021 was the sixth highest since record keeping began in 1880. NOAA scientists use much of the same raw temperature data in their analysis but apply a different baseline period (1901-2000) and methodology.

"The complexity of the various analyses doesn't matter because the signals are so strong," said Gavin Schmidt, director of GISS, NASA's leading center for climate modeling and climate change research. "The trends are all the same because the trends are so large."

NASA's full dataset of global surface temperatures for 2021, as well as details of how NASA scientists conducted the analysis, are publicly available from GISS (https://data.giss.nasa.gov/gistemp).

Read more at Science Daily

Jan 13, 2022

'Slushy' magma ocean led to formation of the Moon's crust

Scientists have shown how the freezing of a 'slushy' ocean of magma may be responsible for the composition of the Moon's crust.

The scientists, from the University of Cambridge and the Ecole normale supérieure de Lyon, have proposed a new model of crystallisation, where crystals remained suspended in liquid magma over hundreds of millions of years as the lunar 'slush' froze and solidified. The results are reported in the journal Geophysical Research Letters.

Over fifty years ago, Apollo 11 astronauts collected samples from the lunar Highlands. These large, pale regions of the Moon -- visible to the naked eye -- are made up of relatively light rocks called anorthosites. Anorthosites formed early in the history of the Moon, between 4.3 and 4.5 billion years ago.

Similar anorthosites, formed through the crystallisation of magma, can be found in fossilised magma chambers on Earth. Producing the large volumes of anorthosite found on the Moon however, would have required a huge global magma ocean.

Scientists believe that the Moon formed when two protoplanets, or embryonic worlds, collided. The larger of these two protoplanets became the Earth, and the smaller became the Moon. One of the outcomes of this collision was that the Moon was very hot -- so hot that its entire mantle was molten magma, or a magma ocean.

"Since the Apollo era, it has been thought that the lunar crust was formed by light anorthite crystals floating at the surface of the liquid magma ocean, with heavier crystals solidifying at the ocean floor," said co-author Chloé Michaut from Ecole normale supérieure de Lyon. "This 'flotation' model explains how the lunar Highlands may have formed."

However, since the Apollo missions many lunar meteorites have been analysed and the surface of the Moon has been extensively studied. Lunar anorthosites appear more heterogeneous in their composition than the original Apollo samples, which contradicts a flotation scenario where the liquid ocean is the common source of all anorthosites.

The range of anorthosite ages -- over 200 million years -- is difficult to reconcile with an ocean of essentially liquid magma whose characteristic solidification time is close to 100 million years.

"Given the range of ages and compositions of the anorthosites on the Moon, and what we know about how crystals settle in solidifying magma, the lunar crust must have formed through some other mechanism," said co-author Professor Jerome Neufeld from Cambridge's Department of Applied Mathematics and Theoretical Physics.

Michaut and Neufeld developed a mathematical model to identify this mechanism.

In the low lunar gravity, the settling of crystals is difficult, particularly when the magma ocean is strongly stirred by convection. If the crystals remain suspended as a crystal slurry, then when the crystal content of the slurry exceeds a critical threshold, the slurry becomes thick and sticky, and deformation slows.

This increase in crystal content occurs most dramatically near the surface, where the slushy magma ocean is cooled, resulting in a hot, well-mixed slushy interior and a slow-moving, crystal-rich lunar 'lid'.
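
To see why a slurry "locks up" past a critical crystal fraction, consider a standard Einstein-Roscoe-type suspension viscosity law; the paper's actual rheological model may differ, so this is only an illustration of the sharp rise near the packing limit.

```python
# Relative viscosity of a crystal-bearing melt, Einstein-Roscoe form:
# eta = eta0 * (1 - phi/phi_max)^(-2.5). Near phi_max the slurry stiffens
# dramatically, which is the "thick and sticky" regime described above.
def slurry_viscosity(phi, eta0=1.0, phi_max=0.6):
    """Viscosity of a melt with crystal volume fraction phi (0 <= phi < phi_max)."""
    return eta0 * (1.0 - phi / phi_max) ** -2.5

for phi in (0.1, 0.3, 0.5, 0.59):
    print(f"crystal fraction {phi:.2f}: viscosity x{slurry_viscosity(phi):,.0f}")
```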

"We believe it's in this stagnant 'lid' that the lunar crust formed, as lightweight, anorthite-enriched melt percolated up from the convecting crystalline slurry below," said Neufeld. "We suggest that cooling of the early magma ocean drove such vigorous convection that crystals remained suspended as a slurry, much like the crystals in a slushy machine."

Enriched lunar surface rocks likely formed in magma chambers within the lid, which explains their diversity. The results suggest that the timescale of lunar crust formation is several hundred million years, which corresponds to the observed ages of the lunar anorthosites.

Read more at Science Daily

World's largest fish breeding area discovered in Antarctica

Near the Filchner Ice Shelf in the south of the Antarctic Weddell Sea, a research team has found the world's largest fish breeding area known to date. A towed camera system photographed and filmed thousands of nests of icefish of the species Neopagetopsis ionah on the seabed. The density of the nests and the size of the entire breeding area suggest a total number of about 60 million icefish breeding at the time of observation. These findings provide support for the establishment of a Marine Protected Area in the Atlantic sector of the Southern Ocean. A team led by Autun Purser from the Alfred Wegener Institute publishes its results in the current issue of the scientific journal Current Biology.

The joy was great when, in February 2021, researchers aboard the German research vessel Polarstern saw numerous fish nests on their monitors, transmitted live by their towed camera system from the seafloor of the Antarctic Weddell Sea, 420 to 535 metres below the ship. The longer the mission lasted, the more the excitement grew, finally ending in disbelief: nest followed nest. Later precise evaluation showed an average of one breeding site per three square metres, with the team even finding a maximum of one to two active nests per square metre.

The mapping of the area suggests a total extent of 240 square kilometres, which is roughly the size of the island of Malta. Extrapolated to this area size, the total number of fish nests was estimated to be about 60 million. "The idea that such a huge breeding area of icefish in the Weddell Sea was previously undiscovered is totally fascinating," says Autun Purser, deep-sea biologist at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) and lead author of the current publication. After all, the Alfred Wegener Institute has been exploring the area with its icebreaker Polarstern since the early 1980s. Until now, only individual Neopagetopsis ionah nests or small clusters of nests had been detected there.
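
The order-of-magnitude logic of that extrapolation can be reproduced in a few lines. This is a naive, uniform-density version; the published figure of about 60 million is lower than the naive result, presumably because nest density varies across the mapped area.

```python
# Naive extrapolation from the surveyed strips to the full breeding area.
surveyed_m2 = 45_600          # area covered by the fast, high-altitude survey
nests_counted = 16_160        # nests counted on that footage
density = nests_counted / surveyed_m2   # ~0.35 nests per m^2 (roughly 1 per 3 m^2)

area_m2 = 240 * 1_000_000     # 240 km^2 mapped breeding area
naive_total = density * area_m2
print(f"{density:.2f} nests/m^2 -> ~{naive_total / 1e6:.0f} million nests (naive)")
# The team's published estimate, which accounts for density variation,
# is about 60 million active nests.
```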

The unique observations were made with a so-called OFOBS, the Ocean Floor Observation and Bathymetry System, a camera sledge built to survey the seafloor of extreme environments like ice-covered seas. It is towed on a special fibre-optic and power cable, normally at a speed of about half a knot to one knot and about one and a half metres above the seafloor. "After the spectacular discovery of the many fish nests, we thought about a strategy on board to find out how large the breeding area was -- there was literally no end in sight. The nests are three quarters of a metre in diameter -- so they are much larger than the structures and creatures, some of which are only centimetres in size, that we normally detect with the OFOBS system," Autun Purser reports. "So, we were able to increase the height above ground to about three metres and the towing speed to a maximum of three knots, thus multiplying the area investigated. We covered an area of 45,600 square metres and counted an incredible 16,160 fish nests on the photo and video footage," says the AWI expert.

Based on the images, the team was able to clearly identify the round fish nests, about 15 centimetres deep and 75 centimetres in diameter, which stood out from the otherwise muddy seabed by a round central area of small stones. Several types of fish nests were distinguished: "active" nests, containing between 1,500 and 2,500 eggs and guarded in three-quarters of cases by an adult icefish of the species Neopagetopsis ionah; nests that contained only eggs; and unused nests, near which either a fish without eggs or a dead fish could be seen. The researchers mapped the distribution and density of the nests using OFOBS's longer-range but lower-resolution side-scan sonars, which recorded over 100,000 nests.

The scientists combined their results with oceanographic and biological data. The result: the breeding area corresponds spatially with the inflow of warmer deep water from the Weddell Sea onto the higher shelf. With the help of transmitter-equipped seals, the multidisciplinary team was also able to prove that the region is a popular destination for Weddell seals. Ninety per cent of the seals' diving activities took place within the region of active fish nests, where they presumably go in search of food. No wonder: the researchers calculate the biomass of the icefish colony there at 60,000 tonnes.

With its biomass, this huge breeding area is an extremely important ecosystem for the Weddell Sea and, according to current research, is likely the most spatially extensive contiguous fish breeding colony discovered worldwide to date, the experts report in their Current Biology publication.

German Federal Research Minister Bettina Stark-Watzinger said: "My congratulations to the researchers involved on their fascinating discovery. After the MOSAiC expedition, German marine and polar research has once more reaffirmed its outstanding position. German research vessels are floating environmental research laboratories. They continue to sail the polar seas and our oceans almost non-stop, serving as platforms for science aimed at generating important findings to support climate and environmental protection. Funding by the Federal Ministry of Education and Research (BMBF) provides German marine and polar research with one of the most state-of-the-art research vessel fleets worldwide. This discovery can make an important contribution towards protecting the Antarctic environment. The BMBF will continue to work towards this goal under the umbrella of the United Nations Decade of Ocean Science for Sustainable Development that runs until 2030."

For AWI Director and deep-sea biologist Prof. Antje Boetius, the current study is a sign of how urgent it is to establish marine protected areas in Antarctica. "This great discovery was enabled by a specific under-ice survey technology we developed during my ERC Grant. It shows how important it is to be able to investigate unknown ecosystems before we disturb them. Considering how little known the Antarctic Weddell Sea is, this underlines all the more the need for international efforts to establish a Marine Protected Area (MPA)," says Antje Boetius, who was not directly involved in the study. A proposal for such an MPA was prepared under the lead of the Alfred Wegener Institute and has been championed since 2016 by the European Union and its member states, as well as other supporting countries, in the international Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR).

Read more at Science Daily

Epigenetic mechanisms for parent-specific genetic activation decoded

Hereditary diseases as well as cancers and cardiovascular diseases may be associated with a phenomenon known as genomic imprinting, in which only the maternally or paternally inherited gene is active. An international research team involving scientists at the Technical University of Munich (TUM), the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin and Harvard University in Cambridge (USA) has now investigated the mechanisms responsible for the deactivation of the genes.

Our cells contain the entire genetic information from our mother and our father. From each of them we inherit 23 chromosomes that contain our DNA. Two copies of each gene are therefore present in our genome and, as a general rule, both are active. This has the advantage that defective mutations inherited from the mother or father are generally cancelled out by the other copy of the gene.

However, for around one percent of our genes, only the gene inherited from the father or mother is active, while the other is deactivated, a phenomenon known as genomic imprinting.

Approach for treating diseases

"Many genetic and epigenetic diseases are associated with genomic imprinting, such as Beckwith-Wiedemann syndrome, Angelman syndrome and Prader-Willi syndrome," explains Dr. Daniel Andergassen, the head of the Independent Junior Research Group at the Institute of Pharmacology and Toxicology at TUM. "If the healthy, deactivated gene could be reactivated, it would be theoretically possible to compensate for complications caused by the active, defective gene."

"But before developing future treatments, we need to understand the fundamentals," says Prof. Alexander Meissner, director at the MPIMG. "It has become clear in recent years that genomic imprinting is mediated by multiple molecular mechanisms."

Read lock for the gene

In genomic imprinting, either the "packaging" of the genetic material or the DNA itself is chemically modified. Instead of the genetic information being changed, the modifications block the gene from being read.

"These are so-called epigenetic mechanisms," says Andergassen. "The DNA can be seen as the hardware, and epigenetics as the software responsible for regulating the genes." Genetic regulation takes place in every cell in the body. All the cells contain the same genetic information, but depending on the organ, different genes are active.

Genetic scissors remove the "off switch"

Meissner and Andergassen, who at the beginning of the study were still conducting the research at Harvard University (USA) along with Dr. Zachary Smith, used mice to investigate which epigenetic mechanisms were behind the imprinting.

They used the molecular biology technique known as CRISPR-Cas9, which functions as "genetic scissors," removing and inserting segments of DNA. The scientists removed known epigenetic "off switches" and observed whether the deactivated gene was reactivated. With this approach, they were able to link the most important epigenetic "off switches" with imprinted genes.

Hydrocarbon molecules render genes inactive

It turns out that most of the genes are inactivated through DNA methylation, in which small hydrocarbon (methyl) groups are attached to the genetic material. Another group of genes is silenced by a set of enzymes known as Polycombs. In the placenta, an additional mechanism comes into play: In this tissue, some genes are deactivated by chemically modifying the proteins that serve as a structural scaffold for the DNA.

The small but crucial difference

Along with genomic imprinting that switches off individual genes, the researchers investigated another phenomenon. In female cells, which unlike male cells have two X chromosomes, one chromosome is entirely deactivated very early in embryonic development. This is true in almost all mammals, including humans.

"We discovered that the enzyme PRC2 plays an important role in the inactivation of the X chromosome, at least in the placenta," says Andergassen. "Once we remove this enzyme, the silent X chromosome is reactivated." The results could be significant for X-chromosome-related disease because reactivation of the silent gene could compensate for the malfunctioning active gene. In a follow-up project at TUM, Andergassen will study whether heart diseases might also be associated with epigenetics and especially with the inactive X chromosome in women. "Because our epigenetics change as we get old, it is conceivable that the X chromosome becomes active again and that the duplicate genetic activity has a negative influence," says the researcher.

Read more at Science Daily

Rare African script offers clues to the evolution of writing

The world's very first invention of writing took place over 5000 years ago in the Middle East, before it was reinvented in China and Central America. Today, almost all human activities -- from education to political systems and computer code -- rely on this technology.

But despite its impact on daily life, we know little about how writing evolved in its earliest years. With so few sites of origin, the first traces of writing are fragmentary or missing altogether.

In a study just published in Current Anthropology, a team of researchers at the Max Planck Institute for the Science of Human History in Jena, Germany, showed that writing very quickly becomes 'compressed' for efficient reading and writing.

To arrive at this insight they turned to a rare African writing system that has fascinated outsiders since the early 19th century.

"The Vai script of Liberia was created from scratch in about 1834 by eight completely illiterate men who wrote in ink made from crushed berries," says lead author Dr Piers Kelly, now at the University of New England, Australia. The Vai language had never before been written down.

According to Vai teacher Bai Leesor Sherman, the script was always taught informally from a literate teacher to a single apprentice student. It remains so successful that today it is even used to communicate pandemic health messages.

"Because of its isolation, and the way it has continued to develop up until the present day, we thought it might tell us something important about how writing evolves over short spaces of time," says Kelly.

"There's a famous hypothesis that letters evolve from pictures to abstract signs. But there are also plenty of abstract letter-shapes in early writing. We predicted, instead, that signs will start off as relatively complex and then become simpler across new generations of writers and readers."

The team scrutinised manuscripts in the Vai language from archives in Liberia, the United States, and Europe. By analysing year-by-year changes in its 200 syllabic letters, they traced the entire evolutionary history of the script from 1834 onwards. Applying computational tools for measuring visual complexity, they found that the letters really did become visually simpler with each passing year.
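
One widely used measure of letter complexity in this literature is perimetric complexity, the squared length of a glyph's ink boundary divided by its ink area; whether this exact metric matches the team's pipeline is an assumption. The sketch below scores binarized glyph images this way.

```python
# Perimetric complexity = (ink perimeter)^2 / (ink area), computed on a
# binary glyph image. Simpler, chunkier shapes score lower.
import numpy as np

def perimetric_complexity(glyph):
    """glyph: 2D boolean array, True where there is ink."""
    ink = np.asarray(glyph, dtype=bool)
    area = ink.sum()
    if area == 0:
        return 0.0
    padded = np.pad(ink, 1, constant_values=False)
    # Count ink pixels whose 4-neighbourhood includes background, per direction.
    perimeter = sum(
        (ink & ~np.roll(padded, shift, axis)[1:-1, 1:-1]).sum()
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    )
    return perimeter ** 2 / area

# A filled square is "simple"; a thin cross carries more contour per ink.
square = np.zeros((20, 20), dtype=bool); square[5:15, 5:15] = True
cross = np.zeros((20, 20), dtype=bool)
cross[9:11, 2:18] = True
cross[2:18, 9:11] = True
print(perimetric_complexity(square), perimetric_complexity(cross))  # 16.0 < ~68
```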

"The original inventors were inspired by dreams to design individual signs for each syllable of their language. One represents a pregnant woman, another is a chained slave, others are taken from traditional emblems. When these signs were applied to writing spoken syllables, then taught to new people, they became simpler, more systematic and more similar to one another," says Kelly.

This pattern of simplification can be observed over much longer time scales for ancient writing systems as well.

"Visual complexity is helpful if you're creating a new writing system. You generate more clues and greater contrasts between signs, which helps illiterate learners. This complexity later gets in the way of efficient reading and reproduction, so it fades away," says Kelly.

Read more at Science Daily

Jan 12, 2022

Rugby ball-shaped exoplanet discovered

With the help of the CHEOPS space telescope, an international team including researchers from the Universities of Bern and Geneva as well as the National Centre of Competence in Research (NCCR) PlanetS, was able to detect the deformation of an exoplanet for the first time. Due to strong tidal forces, the appearance of the planet WASP-103b resembles a rugby ball rather than a sphere.

On coasts, the tides determine the rhythm of events. At low tide, boats remain on land; at high tide, the way out to sea is cleared for them again. On Earth, the tides are mainly generated by the moon. Its gravitational pull causes an accumulation of water in the ocean region below, which is then missing in surrounding regions and thus accounts for the low tide. Although this deformation of the ocean causes striking differences in level in many places, it is hardly recognisable from space.

On the planet WASP-103b, tides are much more extreme. The planet orbits its star in just one day and is deformed by the strong tidal forces so drastically that its appearance resembles a rugby ball. This is shown by a new study involving researchers from the Universities of Bern and Geneva as well as the National Centre of Competence in Research (NCCR) PlanetS, published today in the scientific journal Astronomy & Astrophysics. This finding was made possible thanks to observations with the CHEOPS space telescope. CHEOPS is a joint mission of the European Space Agency (ESA) and Switzerland, led by the University of Bern in collaboration with the University of Geneva.

A groundbreaking measurement

The planet WASP-103b is located in the constellation Hercules, is almost twice the size of Jupiter, has one and a half times its mass and is about fifty times closer to its star than Earth is to the Sun. "Because of its great proximity to its star, we had already suspected that very large tides are caused on the planet. But, we had not yet been able to verify this," explains study co-author Yann Alibert, professor of astrophysics at the University of Bern and member of the NCCR PlanetS.

The NASA/ESA Hubble Space Telescope and NASA's Spitzer Space Telescope had already observed the planet. In combination with the high precision and pointing flexibility of CHEOPS, these observations enabled the researchers to measure the tiny signal of the tidal deformation of a planet light-years away. In doing so, they took advantage of the fact that the planet dims the light of the star slightly each time it passes in front of it. "After observing several such so-called "transits," we were able to measure the deformation. It's incredible that we were able to do this -- it's the first time such an analysis has been done," reports Babatunde Akinsanmi, a researcher at the University of Geneva, co-author of the study and NCCR PlanetS associate.
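
The baseline signal being distorted here is the ordinary transit dip, whose depth is roughly the squared ratio of planet to star radius. The numbers below are placeholders in the right neighbourhood for a hot Jupiter, not the measured WASP-103 parameters.

```python
# Fractional dimming during a transit, ignoring limb darkening:
# depth ~ (R_planet / R_star)^2. Values are illustrative placeholders.
R_JUP_IN_R_SUN = 0.10045  # Jupiter's radius expressed in solar radii

def transit_depth(r_planet_rjup, r_star_rsun):
    """Fractional flux drop when the planet crosses the star."""
    return (r_planet_rjup * R_JUP_IN_R_SUN / r_star_rsun) ** 2

# A planet twice Jupiter's size crossing a 1.4 solar-radius star dims it
# by about 2 percent; the tidal deformation is a far subtler distortion
# of the shape of this dip.
print(f"{transit_depth(2.0, 1.4):.2%}")
```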

The planet is inflated

The researchers' results not only allow conclusions to be drawn about the shape of the planet, but also about its interior. This is because the team was also able to derive a parameter called the "Love number" (named after the British mathematician Augustus E. H. Love) from the transit light curve of WASP-103b. It indicates how the mass is distributed within the planet and thus also gives clues about its inner structure. "The resistance of a material to deformation depends on its composition," explains Akinsanmi. "We can only see the tides on Earth in the oceans. The rocky part doesn't move that much. Therefore, by measuring how much the planet is deformed, we can determine how much of it is made up of rock, gas or water."

WASP-103b's Love number is similar to that of Jupiter, our Solar System's biggest gas giant. It suggests that the internal structures of WASP-103b and Jupiter are similar -- even though WASP-103b is twice as large. "In principle, we would expect a planet with 1.5 times the mass of Jupiter to be about the same size. Therefore, WASP-103b must be highly inflated due to heating by its nearby star, and perhaps other mechanisms," says Monika Lendl, professor of astronomy at the University of Geneva and co-author of the study.

Read more at Science Daily

Ancient Mesopotamian discovery transforms knowledge of early farming

Rutgers researchers have unearthed the earliest definitive evidence of broomcorn millet (Panicum miliaceum) in ancient Iraq, challenging our understanding of humanity's earliest agricultural practices. Their findings appear in the journal Scientific Reports.

"Overall, the presence of millet in ancient Iraq during this earlier time period challenges the accepted narrative of agricultural development in the region as well as our models for how ancient societies provisioned themselves," said Elise Laugier, an environmental archaeologist and National Science Foundation postdoctoral fellow in the School of Arts and Sciences at Rutgers University-New Brunswick.

Broomcorn millet is an "amazingly robust, quick-growing and versatile summer crop" that was first domesticated in East Asia, Laugier added. The researchers analyzed microscopic plant remains (phytoliths) from Khani Masi, a mid-late second millennium BCE (c. 1500-1100 BCE) site in the Kurdistan region of Iraq.

"The presence of this East Asian crop in ancient Iraq highlights the interconnected nature of Eurasia during this time, contributing to our knowledge of early food globalization," Laugier said. "Our discovery of millet and thus the evidence of summer cultivation practices also forces us to reconsider the capacity and resilience of the agricultural systems that sustained and provisioned Mesopotamia's early cities, states and empires."

The discovery of broomcorn millet in ancient Mesopotamia was surprising for environmental and historical reasons. Until now, researchers thought that millet wasn't grown in Iraq until the construction of later 1st millennium BCE imperial irrigation systems. Millet generally requires summer precipitation to grow, but Southwest Asia has a wet-winter and dry-summer climate, and agricultural production is based almost entirely on crops grown during the winter, such as wheat and barley.

Agricultural production is thought to be the basis for supporting and provisioning Mesopotamian cities, states and empires. The researchers' new evidence that crops and food were, in fact, grown in summer months means that previous studies likely vastly under-appreciated the capacities and resilience of ancient agricultural food-system societies in semi-arid ecosystems.

The new study is also part of growing archaeological research showing that, in the past, agricultural innovation was a local initiative, adopted as part of local diversification strategies long before such crops were used in imperial agricultural intensification regimes -- new information that could have an impact on how agricultural innovations move forward today.

"Although millet isn't a common or preferred food in semi-arid Southwest Asia or the United States today, it is still common in other parts of Asia and Africa," Laugier said. "Millet is a hearty, fast-growing, low-water requiring and nutritious gluten-free grain that could hold a lot of potential for increasing the resilience capacities of our semi-arid food systems. Today's agricultural innovators should consider investing in more diverse and resilient food systems, just as people did in ancient Mesopotamia."

Read more at Science Daily

Earliest human remains in eastern Africa dated to more than 230,000 years ago

The age of the oldest fossils in eastern Africa widely recognised as representing our species, Homo sapiens, has long been uncertain. Now, dating of a massive volcanic eruption in Ethiopia reveals they are much older than previously thought.

The remains -- known as Omo I -- were found in Ethiopia in the late 1960s, and scientists have been attempting to date them precisely ever since, by using the chemical fingerprints of volcanic ash layers found above and below the sediments in which the fossils were found.

An international team of scientists, led by the University of Cambridge, has reassessed the age of the Omo I remains -- and Homo sapiens as a species. Earlier attempts to date the fossils suggested they were less than 200,000 years old, but the new research shows they must be older than a colossal volcanic eruption that took place 230,000 years ago. The results are reported in the journal Nature.

The Omo I remains were found in the Omo Kibish Formation in southwestern Ethiopia, within the East African Rift valley. The region is an area of high volcanic activity, and a rich source of early human remains and artefacts such as stone tools. By dating the layers of volcanic ash above and below where archaeological and fossil materials are found, scientists identified Omo I as the earliest evidence of our species, Homo sapiens.

"Using these methods, the generally accepted age of the Omo fossils is under 200,000 years, but there's been a lot of uncertainty around this date," said Dr Céline Vidal from Cambridge's Department of Geography, the paper's lead author. "The fossils were found in a sequence, below a thick layer of volcanic ash that nobody had managed to date with radiometric techniques because the ash is too fine-grained."

As part of a four-year project led by Professor Clive Oppenheimer, Vidal and her colleagues have been attempting to date all the major volcanic eruptions in the Ethiopian Rift around the time of the emergence of Homo sapiens, a period known as the late Middle Pleistocene.

The researchers collected pumice rock samples from the volcanic deposits and ground them down to sub-millimetre size. "Each eruption has its own fingerprint -- its own evolutionary story below the surface, which is determined by the pathway the magma followed," said Vidal. "Once you've crushed the rock, you free the minerals within, and then you can date them, and identify the chemical signature of the volcanic glass that holds the minerals together."

The researchers carried out new geochemical analysis to link the fingerprint of the thick volcanic ash layer from the Kamoya Hominin Site (KHS ash) with an eruption of Shala volcano, more than 400 kilometres away. The team then dated pumice samples from the volcano to 230,000 years ago. Since the Omo I fossils were found deeper than this particular ash layer, they must be more than 230,000 years old.

"First I found there was a geochemical match, but we didn't have the age of the Shala eruption," said Vidal. "I immediately sent the samples of Shala volcano to our colleagues in Glasgow so they could measure the age of the rocks. When I received the results and found out that the oldest Homo sapiens from the region was older than previously assumed, I was really excited."

"The Omo Kibish Formation is an extensive sedimentary deposit which has been barely accessed and investigated in the past," said co-author and co-leader of the field investigation Professor Asfawossen Asrat from Addis Ababa University in Ethiopia, who is currently at BIUST in Botswana. "Our closer look into the stratigraphy of the Omo Kibish Formation, particularly the ash layers, allowed us to push the age of the oldest Homo sapiens in the region to at least 230,000 years."

"Unlike other Middle Pleistocene fossils which are thought to belong to the early stages of the Homo sapiens lineage, Omo I possesses unequivocal modern human characteristics, such as a tall and globular cranial vault and a chin," said co-author Dr Aurélien Mounier from the Musée de l'Homme in Paris. "The new date estimate, de facto, makes itthe oldest unchallenged Homo sapiens in Africa."

The researchers say that while this study shows a new minimum age for Homo sapiens in eastern Africa, it's possible that new finds and new studies may extend the age of our species even further back in time.

"We can only date humanity based on the fossils that we have, so it's impossible to say that this is the definitive age of our species," said Vidal. "The study of human evolution is always in motion: boundaries and timelines change as our understanding improves. But these fossils show just how resilient humans are: that we survived, thrived and migrated in an area that was so prone to natural disasters."

"It's probably no coincidence that our earliest ancestors lived in such a geologically active rift valley -- it collected rainfall in lakes, providing fresh water and attracting animals, and served as a natural migration corridor stretching thousands of kilometres," said Oppenheimer. "The volcanoes provided fantastic materials to make stone tools and from time to time we had to develop our cognitive skills when large eruptions transformed the landscape."

"Our forensic approach provides a new minimum age for Homo sapiens in eastern Africa, but the challenge still remains to provide a cap, a maximum age, for their emergence, which is widely believed to have taken place in this region," said co-author Professor Christine Lane, head of the Cambridge Tephra Laboratory where much of the work was carried out. "It's possible that new finds and new studies may extend the age of our species even further back in time."

Read more at Science Daily

Study challenges evolutionary theory that DNA mutations are random

A simple roadside weed may hold the key to understanding and predicting DNA mutation, according to new research from University of California, Davis, and the Max Planck Institute for Developmental Biology in Germany.

The findings, published January 12 in the journal Nature, radically change our understanding of evolution and could one day help researchers breed better crops or even help humans fight cancer.

Mutations occur when DNA is damaged and left unrepaired, creating a new variation. The scientists wanted to know whether mutation was purely random or whether something deeper was at work. What they found was unexpected.

"We always thought of mutation as basically random across the genome," said Grey Monroe, an assistant professor in the UC Davis Department of Plant Sciences who is lead author on the paper. "It turns out that mutation is very non-random and it's non-random in a way that benefits the plant. It's a totally new way of thinking about mutation."

Researchers spent three years sequencing the DNA of hundreds of Arabidopsis thaliana, or thale cress, a small, flowering weed considered the "lab rat among plants" because of its relatively small genome comprising around 120 million base pairs. Humans, by comparison, have roughly 3 billion base pairs.

"It's a model organism for genetics," Monroe said.

Lab-grown plants yield many variations

Work began at the Max Planck Institute, where researchers grew specimens in a protected lab environment, allowing plants with defects that might not have survived in nature to survive in a controlled space.

Sequencing those hundreds of Arabidopsis thaliana plants revealed more than 1 million mutations. Within those mutations, a nonrandom pattern emerged, counter to what was expected.

"At first glance, what we found seemed to contradict established theory that initial mutations are entirely random and that only natural selection determines which mutations are observed in organisms," said Detlef Weigel, scientific director at Max Planck Institute and senior author on the study.

Instead of randomness, they found patches of the genome with low mutation rates. In those patches, they were surprised to discover an over-representation of essential genes, such as those involved in cell growth and gene expression.

"These are the really important regions of the genome," Monroe said. "The areas that are the most biologically important are the ones being protected from mutation."

The areas are also sensitive to the harmful effects of new mutations. "DNA damage repair seems therefore to be particularly effective in these regions," Weigel added.

Plant evolved to protect itself

The scientists found that the way DNA was wrapped around different types of proteins was a good predictor of whether a gene would mutate or not. "It means we can predict which genes are more likely to mutate than others and it gives us a good idea of what's going on," Weigel said.
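
As a rough illustration of the kind of predictive modelling this implies -- an editorial sketch, not the authors' pipeline; the features, labels and model below are entirely synthetic -- one could fit a simple classifier relating chromatin-style features to whether a gene mutates:

```python
# Illustrative sketch only: predict whether a gene mutates from
# chromatin-related features. Features and labels are synthetic;
# this is not the study's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_genes = 5000

# Hypothetical per-gene features, e.g. histone-mark density and
# nucleosome occupancy (how DNA is wrapped around proteins).
X = rng.normal(size=(n_genes, 3))

# Synthetic ground truth: genes with "protective" chromatin mutate less.
p_mutate = 1.0 / (1.0 + np.exp(2.0 * X[:, 0] - 0.5 * X[:, 1]))
y = rng.random(n_genes) < p_mutate

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```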

The findings add a surprising twist to Charles Darwin's theory of evolution by natural selection because they reveal that the plant has evolved to protect its genes from mutation to ensure survival.

"The plant has evolved a way to protect its most important places from mutation," Weigel said. "This is exciting because we could even use these discoveries to think about how to protect human genes from mutation."

Future uses

Knowing why some regions of the genome mutate more than others could help breeders who rely on genetic variation to develop better crops. Scientists could also use the information to better predict or develop new treatments for diseases like cancer that are caused by mutation.

"Our discoveries yield a more complete account of the forces driving patterns of natural variation; they should inspire new avenues of theoretical and practical research on the role of mutation in evolution," the paper concludes.

Co-authors from UC Davis include Daniel Kliebenstein, Mariele Lensink and Marie Klein, all from the Department of Plant Sciences. Researchers from the Carnegie Institution for Science, Stanford University, Westfield State University, the University of Montpellier, Uppsala University, the College of Charleston and South Dakota State University contributed to the research.

Read more at Science Daily

Jan 11, 2022

Matter and antimatter seem to respond equally to gravity

As part of an experiment to measure -- to an extremely precise degree -- the charge-to-mass ratios of protons and antiprotons, the RIKEN-led BASE collaboration at CERN, Geneva, Switzerland, has found that, within the uncertainty of the experiment, matter and antimatter respond to gravity in the same way.

Matter and antimatter pose some of the most interesting problems in physics today. A particle and its antiparticle are essentially equivalent, except that where one carries a positive charge the other carries a negative one. However, one of the great mysteries of modern physics, known as "baryon asymmetry," is that, despite this apparent equivalence, the universe seems to be made up almost entirely of matter, with very little antimatter. Naturally, scientists around the world are trying hard to find some difference between the two that could explain why we exist.

As part of this quest, scientists have explored whether matter and antimatter interact similarly with gravity, or whether antimatter would experience gravity in a different way than matter, which would violate Einstein's weak equivalence principle. Now, the BASE collaboration has shown, within strict boundaries, that antimatter does in fact respond to gravity in the same way as matter.

The finding, published in Nature, actually came from a different experiment, which was examining the charge-to-mass ratios of protons and antiprotons, one of the other important measurements that could determine the key difference between the two.

The project involved 18 months of measurements at CERN's antimatter factory. To make them, the team confined antiprotons and negatively charged hydrogen ions, which served as a proxy for protons, in a Penning trap. In this device, a particle follows a cyclical trajectory with a frequency, close to the cyclotron frequency, that scales with the trap's magnetic-field strength and the particle's charge-to-mass ratio. By feeding antiprotons and negatively charged hydrogen ions into the trap, one at a time, the team was able to measure, under identical conditions, the cyclotron frequencies of the two particle types and so compare their charge-to-mass ratios. According to Stefan Ulmer, the leader of the project, "By doing this, we were able to obtain a result that they are essentially equivalent, to a degree four times more precise than previous measures. To this level of CPT invariance, causality and locality hold in the relativistic quantum field theories of the Standard Model."
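
The physics being exploited here is the standard Penning-trap relation (textbook material, not a formula quoted from the paper): a trapped particle's cyclotron frequency is

\[ \nu_c = \frac{1}{2\pi}\,\frac{q}{m}\,B, \]

so with the magnetic field \(B\) held fixed, the ratio of the measured frequencies of the antiproton and the hydrogen ion directly yields the ratio of their charge-to-mass ratios,

\[ \frac{\nu_c(\bar{p})}{\nu_c(\mathrm{H}^-)} = \frac{(q/m)_{\bar{p}}}{(q/m)_{\mathrm{H}^-}}. \]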

Interestingly, the group used the measurements to test a fundamental physics law known as the weak equivalence principle. According to this principle, different bodies in the same gravitational field should undergo the same acceleration in the absence of frictional forces. Because the BASE experiment was placed on the surface of the Earth, the proton and antiproton cyclotron-frequency measurements were made in the gravitational field on the Earth's surface, and any difference between the gravitational interaction of protons and antiprotons would result in a difference between the cyclotron frequencies.

By sampling the gravitational field of the Earth as the planet orbited the Sun, the scientists found that matter and antimatter respond to gravity in the same way to within three parts in 100 -- that is, the gravitational accelerations of matter and antimatter are identical to within 3%.
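
Stated as a bound (an editorial paraphrase of the 3% figure, not the collaboration's exact parameterisation), the result constrains any fractional difference between the gravitational accelerations of the proton and the antiproton:

\[ \left| \frac{a_{\bar{p}}}{a_p} - 1 \right| \lesssim 0.03. \]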

Ulmer adds that these measurements could lead to new physics. He says, "The 3% accuracy of the gravitational interaction obtained in this study is comparable to the accuracy goal of the gravitational interaction between antimatter and matter that other research groups plan to measure using free-falling anti-hydrogen atoms. If the results of our study differ from those of the other groups, it could lead to the dawn of a completely new physics."

Read more at Science Daily

Water scarcity may spur improvements at manufacturing facilities

As climate change continues and the incidence of drought rises, water is increasingly becoming scarce for manufacturing. But a new study by researchers at Penn State and UCLA suggests there is a silver lining: companies that use water may pivot to become more efficient and more eco-friendly during periods of water scarcity.

According to the researchers, just as water is an essential ingredient for life, it's also an essential ingredient for many manufacturing processes. Water is often consumed in large amounts while making many common products ranging from cars to smartphones to computer chips.

In their study, the researchers -- including Suvrat Dhanorkar and Suresh Muthulingam from Penn State and Charles Corbett from UCLA -- found that following periods of water scarcity, manufacturing facilities that use water extensively streamlined their processes to lower their toxic emissions into bodies of water such as lakes and rivers.

An added benefit the researchers found was that the changes in the processes also resulted in reduced toxic emissions into land and air. On average, the estimated reductions in toxic emissions were more than 2.5%.

Dhanorkar, associate professor of supply chain and information systems, said the study -- recently published in the journal Management Science -- is one of the first to flip the question of how industry contributes to climate change and instead ask how industry responds to climate change-induced events, such as droughts.

"Most of the prior research has been focused on how companies are negatively affecting the environment," Dhanorkar said. "We wanted to flip it and see how companies respond to climate change. It opens a new area of research that can, in the future, start to look at not just manufacturing, but also how these climate change-induced events affect innovation and other economic factors like unemployment."

For the study, the researchers gathered data from 3,092 manufacturing facilities in Texas from 2000 to 2016. They focused on Texas because the state frequently experiences droughts and periods of water scarcity, and because it produces many types of products, including food, petroleum, coal, chemicals and metal.

The data included information on the weeks of drought experienced by each facility and the total amount of toxic emissions recorded at each facility.

"We found that water scarcity can induce manufacturing facilities that rely heavily on water to improve their environmental performance by lowering toxic releases, but only when they face persistent drought," said Muthulingam, associate professor of supply chain management. "These effects also extended into the facilities reducing emissions in other ways, such as into the land and air, too."

The researchers said one explanation for the results could be that water scarcity prompts companies to become more careful about how they use water. And because water is used across many processes, companies become aware of other areas they can improve as well.

"A lot of these industries use water at different points in their processes," Dhanorkar said. "So, when there's water shortage and you're investigating how to improve water usage, it may also reveal shortcomings in other aspects of the processes that aren't related to water, as well. These companies might be learning a lot about their processes not just from a water standpoint, but more broadly."

Read more at Science Daily

Why people deceive themselves

A philosophy team from Ruhr-Universität Bochum (RUB) and the University of Antwerp analysed the role self-deception plays in everyday life and the strategies people use to deceive themselves. In the journal Philosophical Psychology, Dr. Francesco Marchi and Professor Albert Newen describe four strategies used to stabilise and shield the positive self-image. According to their theory, self-deception helps people to stay motivated in difficult situations. The article was published on 6 January 2022.

Four strategies of self-deception

"All people deceive themselves, and quite frequently at that," says Albert Newen from the RUB Institute of Philosophy II. "For instance, if a father is convinced that his son is a good student and then the son brings home bad grades, he may first say that the subject isn't that important or that the teacher didn't explain the material well." The researchers call this strategy of self-deception the reorganisation of beliefs. In their article, they describe three more frequently used strategies that come into play even earlier, to prevent unpleasant facts from reaching a person in the first place.

The first of these is selecting facts through purposeful action: people avoid places or persons that might bring problematic facts to their attention, such as the parent-teacher conference. Another strategy is to reject facts by casting doubt on the credibility of the source. As long as the father hears about his son's academic problems only indirectly and does not see the grades, he can ignore the problems. The last strategy is what Newen and Marchi call generating facts from an ambiguous state of affairs: "For instance, if the kind mathematics teacher gently suggests that the son is not coping, and the father would have expected a clear statement in case of difficulties, he may interpret the considerable kindness and the gentle description as a positive assessment of his son's abilities," Francesco Marchi elaborates on the example.

The researchers describe all four strategies as typical psychological thinking tendencies. In the short term, self-deception is neither unreasonable nor detrimental; in the medium and long term, however, it is. "These are not malicious ways of doing things, but part of the basic cognitive equipment of humans to preserve their established view of themselves and the world," says Newen. In normal times with few changes, the tendency to stick to proven views is helpful and also deeply rooted in evolution. "However, this cognitive tendency is catastrophic in times of radically new challenges that require rapid changes in behaviour," adds the Bochum researcher.

Read more at Science Daily

Successful transplant of porcine heart into adult human with end-stage heart disease

In a first-of-its-kind surgery, a 57-year-old patient with terminal heart disease received a successful transplant of a genetically-modified pig heart and is still doing well three days later. It was the only currently available option for the patient. The historic surgery was conducted by University of Maryland School of Medicine (UMSOM) faculty at the University of Maryland Medical Center (UMMC), together known as the University of Maryland Medicine.

This organ transplant demonstrated for the first time that a genetically-modified animal heart can function like a human heart without immediate rejection by the body. The patient, David Bennett, a Maryland resident, is being carefully monitored over the next days and weeks to determine whether the transplant provides lifesaving benefits. He had been deemed ineligible for a conventional heart transplant at UMMC as well as at several other leading transplant centers that reviewed his medical records.

"It was either die or do this transplant. I want to live. I know it's a shot in the dark, but it's my last choice," said Mr. Bennett, the patient, a day before the surgery was conducted. He had been hospitalized and bedridden for the past few months. "I look forward to getting out of bed after I recover."

The U.S. Food and Drug Administration granted emergency authorization for the surgery on New Year's Eve through its expanded access (compassionate use) provision. It is used when an experimental medical product, in this case the genetically-modified pig's heart, is the only option available for a patient faced with a serious or life-threatening medical condition. The authorization to proceed was granted in the hope of saving the patient's life.

"This was a breakthrough surgery and brings us one step closer to solving the organ shortage crisis. There are simply not enough donor human hearts available to meet the long list of potential recipients," said Bartley P. Griffith, MD, who surgically transplanted the pig heart into the patient. Dr. Griffith is the Thomas E. and Alice Marie Hales Distinguished Professor in Transplant Surgery at UMSOM. "We are proceeding cautiously, but we are also optimistic that this first-in-the-world surgery will provide an important new option for patients in the future."

Considered one of the world's foremost experts on transplanting animal organs, known as xenotransplantation, Muhammad M. Mohiuddin, MD, Professor of Surgery at UMSOM, joined the UMSOM faculty five years ago and established the Cardiac Xenotransplantation Program with Dr. Griffith. Dr. Mohiuddin serves as the program's Scientific/Program Director and Dr. Griffith as its Clinical Director.

"This is the culmination of years of highly complicated research to hone this technique in animals with survival times that have reached beyond nine months. The FDA used our data and data on the experimental pig to authorize the transplant in an end-stage heart disease patient who had no other treatment options," said Dr. Mohiuddin. "The successful procedure provided valuable information to help the medical community improve this potentially life-saving method in future patients."

About 110,000 Americans are currently waiting for an organ transplant, and more than 6,000 patients die each year before getting one, according to the federal government's organdonor.gov. Xenotransplantation could potentially save thousands of lives but does carry a unique set of risks, including the possibility of triggering a dangerous immune response. These responses can trigger an immediate rejection of the organ with a potentially deadly outcome to the patient.

Xenotransplants were first tried in the 1980s, but were largely abandoned after the famous case of Stephanie Fae Beauclair (known as Baby Fae) at Loma Linda University in California. The infant, born with a fatal heart condition, received a baboon heart transplant and died within a month of the procedure due to the immune system's rejection of the foreign heart. However, for many years, pig heart valves have been used successfully for replacing valves in humans.

Before consenting to receive the transplant, Mr. Bennett, the patient, was fully informed of the procedure's risks, and that the procedure was experimental with unknown risks and benefits. He had been admitted to the hospital more than six weeks earlier with a life-threatening arrhythmia and was connected to a heart-lung bypass machine, called extracorporeal membrane oxygenation (ECMO), to remain alive. In addition to not qualifying to be on the transplant list, he was also deemed ineligible for an artificial heart pump due to his arrhythmia.

Revivicor, a regenerative medicine company based in Blacksburg, VA, provided the genetically-modified pig to the xenotransplantation laboratory at UMSOM. On the morning of the transplant surgery, the surgical team, led by Dr. Griffith and Dr. Mohiuddin, removed the pig's heart and placed it in the XVIVO Heart Box, a perfusion device that keeps the heart preserved until surgery.

The physician-scientists also used a new drug along with conventional anti-rejection drugs, which are designed to suppress the immune system and prevent the body from rejecting the foreign organ. The new drug used is an experimental compound made by Kiniksa Pharmaceuticals.

"This unprecedented and historic procedure highlights the importance of translational research which lays the groundwork for patients to benefit in the future. It is the culmination of our longstanding commitment to discovery and innovation in our xenotransplantation program," said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. "Our transplant surgeon-scientists are among the most talented in the country, and are helping to bring the promise of xenotransplantation to fruition. We hope it will one day become a standard of care for patients in need of organ transplants. As has happened throughout our history, the University of Maryland School of Medicine continues to address the most complex medical and scientific problems."

Bruce Jarrell, MD, President of the University of Maryland, Baltimore, who himself is a transplant surgeon, recalled: "Dr. Griffith and I began as organ transplant surgeons when it was in its infancy. Back then, it was the dream of every transplant surgeon, myself included, to achieve xenotransplantation and it is now personally gratifying to me to see this long-sought goal clearly in view. It is a spectacular achievement."

"This is truly a historic, monumental step forward. While we have long been at the forefront of research driving progress toward the promise of xenotransplantation as a viable solution to the organ crisis, many believed this breakthrough would be well into the future," said Bert W. O'Malley, MD, President and CEO, University of Maryland Medical Center. "I couldn't be more proud to say the future is now. Our skilled team of UMMC and UMSOM physician-scientists will continue to advance and adapt medical discovery for patient care that could offer a lifeline for more patients in dire need."

Mohan Suntha, MD, MBA, President and CEO, University of Maryland Medical System, added: "The University of Maryland Medical System is committed to working with our University of Maryland School of Medicine partners to explore, research, and in many cases implement the innovations in patient care that make it possible to improve quality of life and save lives. We appreciate the tremendous courage of this live recipient, who has made an extraordinary decision to participate in this groundbreaking procedure to not only potentially extend his own life, but also for the future benefit of others."

Organs from genetically modified pigs have been the focus of much of the research in xenotransplantation, in part because of physiologic similarities between pigs, humans, and nonhuman primates. UMSOM received a $15.7 million sponsored research grant to evaluate Revivicor's genetically-modified pig UHearts™ in baboon studies.

Three genes -- responsible for rapid antibody-mediated rejection of pig organs by humans -- were "knocked out" in the donor pig. Six human genes responsible for immune acceptance of the pig heart were inserted into the genome. Lastly, one additional gene in the pig was knocked out to prevent excessive growth of the pig heart tissue, for a total of 10 unique gene edits in the donor pig.

"We are thrilled to support the world-class team of transplant surgeons led by Dr. Griffith and Dr. Mohiuddin at the University of Maryland School of Medicine," said David Ayares, PhD, Chief Scientific Officer of Revivicor, Inc. "This transplant is groundbreaking, and is another step in the investigation of xeno organs for human use."

Dr. Mohiuddin, Dr. Griffith, and their research team spent the past five years perfecting the surgical technique for transplantation of pig hearts into non-human primates. Dr. Mohiuddin's xenotransplant research experience spans over 30 years, during which time he demonstrated in peer-reviewed research that genetically-modified pig hearts can function when placed in the abdomen for as long as three years. Success was dependent on the right combination of genetic modifications to the experimental donor pig UHeart™ and anti-rejection drugs, including some experimental compounds.

"As a cardiothoracic surgeon who does lung transplants, this is an amazing moment in the history of our field. Decades of research here at Maryland and elsewhere have gone into this achievement. This has the potential to revolutionize the field of transplantation by eventually eliminating the organ shortage crisis," said Christine Lau, MD, MBA the Dr. Robert W. Buxton Professor and Chair of the Department of Surgery at UMSOM and Surgeon-in-Chief at UMMC. "This is a continuation of steps to making xenotransplantation a life-saving reality for patients in need."

Read more at Science Daily

Jan 10, 2022

Ocean physics explain cyclones on Jupiter

Hurtling around Jupiter and its 79 moons is the Juno spacecraft, a NASA-funded satellite that sends images from the largest planet in our solar system back to researchers on Earth. These photographs have given oceanographers the raw materials for a new study published today in Nature Physics that describes the rich turbulence at Jupiter's poles and the physical forces that drive the large cyclones.

Lead author Lia Siegelman, a physical oceanographer and postdoctoral scholar at Scripps Institution of Oceanography at the University of California San Diego, decided to pursue the research after noticing that the cyclones at Jupiter's pole seem to share similarities with ocean vortices she studied during her time as a PhD student. Using an array of these images and principles used in geophysical fluid dynamics, Siegelman and colleagues provided evidence for a longtime hypothesis that moist convection -- when hotter, less dense air rises -- drives these cyclones.

"When I saw the richness of the turbulence around the Jovian cyclones with all the filaments and smaller eddies, it reminded me of the turbulence you see in the ocean around eddies," said Siegelman. "These are especially evident on high-resolution satellite images of plankton blooms for example."

Siegelman says that understanding Jupiter's energy system, which operates on a scale much larger than Earth's, could also help us understand the physical mechanisms at play on our own planet by highlighting energy routes that could also exist here.

"To be able to study a planet that is so far away and find physics that apply there is fascinating," she said. "It begs the question, do these processes also hold true for our own blue dot?"

Juno is the first spacecraft to capture images of Jupiter's poles; previous satellites orbited the equatorial region of the planet, providing views of the planet's famed Red Spot. Juno is equipped with two camera systems, one for visible light images and another that captures heat signatures using the Jovian Infrared Auroral Mapper (JIRAM), an instrument supported by the Italian Space Agency.

Siegelman and colleagues analyzed an array of infrared images capturing Jupiter's north polar region, and in particular the polar vortex cluster. From the images, the researchers could calculate wind speed and direction by tracking the movement of the clouds between images. Next, the team interpreted infrared images in terms of cloud thickness. Hot regions correspond to thin clouds, where it is possible to see deeper into Jupiter's atmosphere. Cold regions represent thick cloud cover, blanketing Jupiter's atmosphere.
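
A minimal sketch of the cloud-tracking step -- an editorial illustration, not the authors' code; the image, pixel scale and frame interval below are invented placeholders -- estimates a wind vector from how far an image patch shifts between two frames:

```python
# Minimal cloud-tracking sketch: estimate a wind vector from the shift
# of an image patch between two frames via cross-correlation.
# Synthetic data and placeholder scales; not the study's pipeline.
import numpy as np
from scipy.signal import fftconvolve

def patch_displacement(frame1, frame2):
    """Return (dy, dx) such that features in frame2 sit at frame1 + (dy, dx)."""
    f1 = frame1 - frame1.mean()
    f2 = frame2 - frame2.mean()
    corr = fftconvolve(f1, f2[::-1, ::-1], mode="same")  # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return frame1.shape[0] // 2 - peak[0], frame1.shape[1] // 2 - peak[1]

# Synthetic example: a Gaussian "cloud" shifted by (3, 5) pixels.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((y - 30) ** 2 + (x - 30) ** 2) / 50.0)
frame2 = np.exp(-((y - 33) ** 2 + (x - 35) ** 2) / 50.0)

dy, dx = patch_displacement(frame1, frame2)
km_per_pixel, dt_seconds = 10.0, 600.0  # hypothetical imaging scales
speed_ms = np.hypot(dy, dx) * km_per_pixel * 1000.0 / dt_seconds
print(f"displacement: ({dy}, {dx}) px, wind speed = {speed_ms:.0f} m/s")
```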

These findings gave the researchers clues on the energy of the system. Since Jovian clouds are formed when hotter, less dense air rises, the researchers found that the rapidly rising air within clouds acts as an energy source that feeds larger scales up to the large circumpolar and polar cyclones.

Juno first arrived at the Jovian system in 2016, providing scientists with the first look at these large polar cyclones, which have a radius of about 1,000 kilometers or 620 miles. There are eight of these cyclones occurring at Jupiter's north pole, and five at its south pole. These storms have been present since that first view five years ago. Researchers are unsure how they originated or for how long they have been circulating, but they now know that moist convection is what sustains them. Researchers first hypothesized this energy transfer after observing lightning in storms on Jupiter.

Juno will continue orbiting Jupiter until 2025, providing researchers and the public alike with novel images of the planet and its extensive lunar system.

Read more at Science Daily

Astronomers identify potential clue to reionization of universe

About 400,000 years after the universe was created, a period called the "Epoch of Reionization" began.

During this time, the once hotter universe began to cool and matter clumped together, forming the first stars and galaxies. As these stars and galaxies emerged, their energy heated the surrounding environment, reionizing some of the remaining hydrogen in the universe.

The universe's reionization is well known, but determining how it happened has been tricky. To learn more, astronomers have peered beyond our Milky Way galaxy for clues. In a new study, astronomers at the University of Iowa identified a source in a suite of galaxies called Lyman continuum galaxies that may hold clues about how the universe was reionized.

In the study, the Iowa astronomers identified a black hole, a million times as bright as our sun, that may have been similar to the sources that powered the universe's reionization. That black hole, the astronomers report from observations made in February 2021 with NASA's flagship Chandra X-ray Observatory, is powerful enough to punch channels in its respective galaxy, allowing ultraviolet photons to escape and be observed.

"The implication is that outflows from black holes may be important to enable escape of the ultraviolet radiation from galaxies that reionized the intergalactic medium," says Phil Kaaret, professor and chair in the Department of Physics and Astronomy and the study's corresponding author.

"We can't yet see the sources that actually powered the universe's reionization because they are too far away," Kaaret says. "We looked at a nearby galaxy with properties similar to the galaxies that formed in the early universe. One of the primary reasons that the James Webb Space Telescope was built was to try to see the galaxies hosting the sources that actually powered the universe's reionization."

Read more at Science Daily

Archaeological dig reveals participants in California’s Gold Rush dined on salted Atlantic cod

It turns out San Francisco has been a destination for lovers of imported delicacies since its earliest Gold Rush days.

According to results published recently in the peer-reviewed Journal of Anthropological Research, an excavation at Thompson's Cove in San Francisco has shown "Atlantic cod were imported during the 1850s, likely as a (largely) deboned, dried and salted product from the East Coast of the United States." The results underscore the importance of global maritime trade in northern California during the Gold Rush.

Co-author Brittany Bingham, a doctoral student in anthropology at the University of Kansas, performed genetic analysis on 18 cod bones recovered from Thompson's Cove to determine whether they came from cod caught in the deep waters of the nearby Pacific or were shipped in packages by boat from Atlantic fisheries. Her ancient-DNA results on five specimens show Atlantic cod were imported during the earliest years of the Gold Rush.
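
As a toy illustration of the species-assignment idea -- the reference and query sequences below are invented placeholders, and real ancient-DNA work relies on curated reference databases and proper alignment tools -- one can compare a recovered fragment against candidate species and keep the closer match:

```python
# Toy species assignment: compare a query DNA fragment against reference
# sequences and report the closest match. All sequences are invented
# placeholders, not real cod genetics.
REFS = {
    "Gadus morhua (Atlantic cod)": "ATGGCACTAAGCCTCCTAATTCGAGCTGAA",
    "Gadus macrocephalus (Pacific cod)": "ATGGCACTCAGCCTCTTAATTCGAGCCGAA",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions over the overlapping length."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

query = "ATGGCACTAAGCCTCCTAATTCGAGCTGAA"  # hypothetical bone-fragment read
best = max(REFS, key=lambda name: identity(query, REFS[name]))
print(f"closest match: {best} ({identity(query, REFS[best]):.0%} identity)")
```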

Bingham said bones tend to be better preserved and more suitable for analysis than other materials left behind from the rapid surge in San Francisco's population. (In the first year of the Gold Rush, 1848-49, the area's population swelled from about 800 residents to more than 20,000.)

"Bones preserve better than other things that don't last in the archaeological record as well," she said. "You won't get a quality DNA sample from every bone -- some are burned, and soil and other factors can affect preservation, so we typically check for DNA and determine what we're looking at. But often people move bones elsewhere and maybe they're thrown in a different place than the rest of the bones, so you don't have the whole specimen to look at. That's where people like me come into play, and we'll take the one tiny piece of bone that might have been found and figure out what it actually came from."

The results of Bingham's analysis were among the first archaeological results to confirm findings from historical newspapers and invoices: The early history of San Francisco included the importation of a wide range of fish and seafood to support the population boom.

The project came about when the Musto Building, built in 1907 at Thompson's Cove -- where the city was first settled -- underwent a mandatory retrofit to make it more resilient to earthquakes, triggering a California compliance law that requires archaeological work in conjunction with construction at the site. Today, the building is home to a private social club.

Kale Bruner, who earned her doctorate in anthropology at KU, worked on the Thompson's Cove site as construction took place. Today, Bruner serves as a research associate at the Museum of the Aleutians.

"Compliance work is challenging in a lot of ways because you don't really get a lot of control over the excavations, and this case was kind of an extreme example of that -- the fieldwork conditions were overwhelming -- and I was the only archaeologist on site," Bruner said. "They were fortunately only excavating dirt in one location at a time, so I only had one piece of machinery to be watching, but we were hitting archaeologically significant material constantly. It was two years essentially of monitoring that kind of activity and documenting as rapidly as possible everything that was being uncovered."

Aside from evidence of Atlantic cod, the authors reported about 8,000 specimens or fragments of animal bone and nearly 70,000 artifacts in total. The work will yield more academic papers on the historical significance of the site.

Lead author Cyler Conrad, adjunct assistant professor of anthropology at the University of New Mexico and archaeologist with Los Alamos National Laboratory, has published other findings from work at Thompson's Cove, including evidence of a California hide and tallow trade, eating of wild game, hunting of ducks and geese, and even importation of Galapagos tortoise.

He described the Gold Rush era as exciting and chaotic, a time that in some ways mirrored the supply chain problems plaguing the world in the COVID-19 era.

"During the Gold Rush, it took many months for vessels to arrive in San Francisco, so often when you needed things is not when they would arrive, and when things would arrive, they were often not needed anymore," Conrad said. "You find these descriptions of San Francisco as this kind of muddy mess, a kind of a tent city where there were shacks built upon shacks all the way up until the shoreline, just stacked with crates and boxes. Even at Thompson's Cove, I think Kale excavated several essentially intact crates of frying pans and shovel heads. You can imagine shiploads of shovels might arrive, but maybe everyone had a shovel already or maybe it was winter, and no one was in the gold fields and you have all this material that accumulates right along the shoreline -- but that was convenient for our work."

Conrad said the work to determine the Atlantic origins of cod bones found at the site was a significant contribution to understanding maritime trade of the era, when Atlantic cod was either shipped by boat all the way around Cape Horn -- or shipped to Panama, then hauled across the isthmus, before being shipped up to the Northern California gold fields.

Read more at Science Daily

Medieval warhorses were surprisingly small in stature

Medieval warhorses are often depicted as massive and powerful beasts, but in reality many were no more than pony-sized by modern standards, a new study shows.

Horses during the period were often below 14.2 hands high, but size was clearly not everything, as historical records indicate huge sums were spent on developing and maintaining networks for the breeding, training and keeping of horses used in combat.

A team of archaeologists and historians searching for the truth about the Great Horse have found they were not always bred for size, but for success in a wide range of different functions -- including tournaments and long-distance raiding campaigns.

Researchers analysed the largest dataset of English horse bones dating between AD 300 and 1650, found at 171 separate archaeological sites.

The study, published in the International Journal of Osteoarchaeology, shows that breeding and training of warhorses was influenced by a combination of biological and cultural factors, as well as behavioural characteristics of the horses themselves such as temperament.

Depictions of medieval warhorses in films and popular media frequently portray massive mounts on the scale of Shire horses, some 17 to 18 hands high. However, the evidence suggests that horses of 16 and even 15 hands were very rare indeed, even at the height of the royal stud network during the 13th and 14th centuries, and that animals of this size would have been seen as very large by medieval people.

Researcher Helene Benkert, from the University of Exeter, said: "Neither size, nor limb bone robusticity alone, are enough to confidently identify warhorses in the archaeological record. Historic records don't give the specific criteria which defined a warhorse; it is much more likely that throughout the medieval period, at different times, different conformations of horses were desirable in response to changing battlefield tactics and cultural preferences."

The tallest Norman horse recorded was found at Trowbridge Castle, Wiltshire, estimated to be about 15hh, similar in size to small modern light riding horses. The high medieval period (1200-1350 AD) saw the first emergence of horses of around 16hh, although it was not until the post-medieval period (1500-1650 AD) that the average height of horses became significantly larger, finally approaching the sizes of modern warmblood and draft horses.
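
For readers unfamiliar with the unit: one hand is 4 inches (10.16 cm), measured at the withers, and a height like "14.2" means 14 hands 2 inches, not a decimal fraction. A small conversion helper (an editorial aid, not part of the study):

```python
# Convert a horse height in hands (e.g. "14.2" = 14 hands 2 inches) to
# centimetres. One hand = 4 inches; the digit after the point counts
# whole inches (0-3), not tenths of a hand.
def hands_to_cm(height: str) -> float:
    hands, _, inches = height.partition(".")
    total_inches = int(hands) * 4 + (int(inches) if inches else 0)
    return total_inches * 2.54

for h in ["14.2", "15", "16", "17", "18"]:
    print(f"{h} hh = {hands_to_cm(h):.1f} cm")
```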

Professor Alan Outram, from the University of Exeter, said: "High medieval destriers may have been relatively large for the time period, but were clearly still much smaller than we might expect for equivalent functions today. Selection and breeding practices in the Royal studs may have focused as much on temperament and the correct physical characteristics for warfare as they did on raw size."

Professor Oliver Creighton, the Principal Investigator for the project, commented: "The warhorse is central to our understanding of medieval English society and culture as both a symbol of status closely associated with the development of aristocratic identity and as a weapon of war famed for its mobility and shock value, changing the face of battle."

Read more at Science Daily

Fewer than 1 in 5 adults with Type 2 diabetes in the U.S. are meeting optimal heart health targets

Fewer than 1 in 5 adults with Type 2 diabetes in the U.S. are meeting targets to reduce heart disease risk. Fortunately, available therapies can help when combined with new approaches that address social determinants of health and other barriers to care, according to a new American Heart Association scientific statement published today in the Association's flagship journal Circulation. A scientific statement is an expert analysis of current research and may inform future clinical practice guidelines.

"This new scientific statement is an urgent call to action to follow the latest evidence-based approaches and to develop new best practices to advance Type 2 diabetes treatment and care and reduce CVD risk," said Joshua J. Joseph, M.D., M.P.H., FAHA, chair of the statement writing group and an assistant professor of medicine in the division of endocrinology, diabetes and metabolism at The Ohio State University College of Medicine in Columbus, Ohio. "Far too few people -- less than 20% of those with Type 2 diabetes -- are successfully managing their heart disease risk, and far too many are struggling to stop smoking and lose weight, two key CVD risk factors. Health care professionals, the health care industry and broader community organizations all have an important role to play in supporting people with Type 2 diabetes."

Type 2 diabetes is the most common form of diabetes, affecting more than 34 million people in the U.S., representing nearly 11% of the U.S. population, according to the U.S. Centers for Disease Control and Prevention's 2020 National Diabetes Statistics Report, and cardiovascular disease (CVD) is the leading cause of death and disability among people with Type 2 diabetes (T2D). Type 2 diabetes occurs when the body is unable to efficiently use the insulin it makes or when the pancreas loses its capacity to produce insulin. People with T2D often have other cardiovascular disease risk factors, including overweight or obesity, high blood pressure or high cholesterol. Adults with T2D are twice as likely to die from CVD -- including heart attacks, strokes and heart failure -- compared to adults who do not have T2D.

The new scientific statement, based on the writing group's extensive review of clinical trial results through June 2020, addresses the gap between existing evidence on how best to lower cardiovascular risk in people with T2D and the reality for people living with T2D. Targets to reduce CVD risk among people with T2D include managing blood glucose, blood pressure and cholesterol levels; increasing physical activity; healthy nutrition; obesity and weight management; not smoking; not drinking alcohol; and psychosocial care. Greater adherence to an overall healthy lifestyle among people with T2D is associated with a substantially lower risk of CVD and CVD mortality.

"In the United States, less than 1 in 5 adults with T2D not diagnosed with cardiovascular disease are meeting optimal T2D management goals of not smoking and achieving healthy levels of blood sugar, blood pressure and low-density lipoprotein (LDL) cholesterol, also known as 'bad' cholesterol," Joseph said.

A surprisingly large share -- as much as 90% -- of what determines how effectively CVD is managed alongside T2D comes down to modifiable lifestyle and societal factors. "Social determinants of health, which include health-related behaviors, socioeconomic factors, environmental factors and structural racism, have been recognized to have a profound impact on cardiovascular disease and Type 2 diabetes outcomes," he said. "People with T2D face numerous barriers to health, including access to care and equitable care, which must be considered when developing individualized care plans with our patients."

Shared decision-making between patients and health care professionals is essential for successfully managing T2D and CVD. A comprehensive diabetes care plan should be tailored to individual risks and benefits and should take into account the patient's preferences; potential cost concerns; support to effectively manage T2D and take medications as prescribed, including diabetes self-management education and support; promotion and support of healthy lifestyle choices that improve cardiovascular health, including nutrition and physical activity; and treatment for any other CVD risk factors.

"One avenue to continue to address and advance diabetes management is through breaking down the four walls of the clinic or hospital through community engagement, clinic-to-community connections and academic-community-government partnerships that may help address and support modifiable lifestyle behaviors such as physical activity, nutrition, smoking cessation and stress management," Joseph said.

The statement also highlights recent evidence on treating T2D that may spur clinicians and patients to review and update their T2D management plan to also address CVD risk factors:

New ways to control blood sugar

The American Heart Association's last scientific statement on blood sugar control was published in 2015, just as research was starting to suggest that glucose-lowering medications may also reduce the risk of heart attack, stroke, heart failure or cardiovascular death.

"Since 2015, a number of important national and international clinical trials that specifically examined new T2D medications for lowering cardiovascular disease and cardiovascular mortality risk among people with Type 2 diabetes have been completed," Joseph said. "GLP-1 (glucagon-like peptide-1) receptor agonists have been found to improve blood sugar and weight, and they have been game changers in reducing the risk of heart disease, stroke, heart failure and kidney disease." GLP-1 medications (injectable synthetic hormones such as liraglutide and semaglutide) stimulate the release of insulin to control blood sugar, and they also reduce appetite and help people feel full, which may help with weight management or weight loss.

In addition, SGLT-2 (sodium-glucose co-transporter 2) inhibitors (oral medications such as canagliflozin, dapagliflozin, ertugliflozin and empagliflozin) have also been found to be effective in reducing the risks of CVD and chronic kidney disease. SGLT-2 inhibitors spur the kidneys to dispose of excess glucose through the urine, which lowers the risk of heart failure and slows the decline in kidney function that is common among people with T2D.

"Cost may be a barrier to taking some T2D medications as prescribed; however, many of these medications are now more commonly covered by health insurance plans," Joseph said. "Another barrier is recognition by patients that these newer T2D medications are also effective in reducing the risk of heart disease, stroke, heart failure and kidney disease. Increasing public awareness about the link between CVD and T2D, and providing support, education and tools that help improve T2D and reduce CVD risk, are at the core of the Know Diabetes by Heart™ initiative from the American Heart Association and American Diabetes Association."

Personalized blood pressure control

The statement highlights that individualized approaches to treating high blood pressure are best. These approaches should consider ways to minimize the side effects of hypertension treatment and avoid potentially over-treating frail patients.

Importance of lowering cholesterol levels

Statin medications remain the first line of lipid-lowering therapy, and the Association suggests other types of medications may be considered for people unable to tolerate a statin or who aren't reaching their LDL cholesterol targets with a statin. These medications may include ezetimibe, bempedoic acid, bile acid resins, fibrates and PCSK-9 inhibitors, depending on the individual's overall health status and other health conditions.

Re-thinking aspirin use

Older adults (ages 65 years and older) with T2D are more likely than those who do not have T2D to take a daily low-dose aspirin to help prevent cardiovascular disease. However, it may be time to review whether daily low-dose aspirin is still appropriate. Recently published research suggests the increased risk of major bleeding from aspirin may outweigh the benefits, and newer, more potent antiplatelet medications may be more effective for some people.

The statement reinforces the importance of a comprehensive, multidisciplinary and individualized approach to reduce CVD risk among people with T2D. Optimal care should incorporate healthy lifestyle interventions, and medications and/or treatments including surgery that improve T2D management and support healthy weight and weight loss. Social determinants of health, structural racism and health equity are important factors that must also be considered and addressed.

Read more at Science Daily