Jun 26, 2017

Bird's eye perspective

Susana da Silva made a quilt inspired by the patterning of cells in the chick retina. The gray circle represents the high-acuity area, or rod-free zone.
Humans belong to a select club of species that enjoy crisp color vision in daylight, thanks to a small spot in the center of the retina at the back of the eye. Other club members include monkeys and apes, various fish and reptiles, and many birds, which must home in on their scurrying dinners from afar or peck at tiny seeds.

Less clear is what controls the formation of the high-acuity spot, known as the fovea in humans.

Harvard Medical School researchers have now provided the first insight into this perplexing question by studying an unusual model: chickens.

Connie Cepko, Bullard Professor of Genetics and Neuroscience at HMS, and Susana da Silva, a postdoctoral fellow in the Cepko lab, found that formation of the high-acuity area in chicks requires suppression of retinoic acid, a derivative of vitamin A known to play many important roles in embryonic development.

In addition to deepening our understanding of how humans acquired sensitive daytime vision, the findings, reported June 22 in Developmental Cell, could help regenerative medicine researchers model healthy human eyes.

If the discoveries hold true in humans, the work might also one day provide clues about how to combat macular degeneration, the leading cause of vision loss among people age 50 and older. The macula is the part of the retina where the fovea is found.

"I think it's important to understand how you build this specialized area in the retina that's responsible for any major activity you do during the day, such as reading, driving, recognizing faces and using the phone," said da Silva. "It would also be exciting if people can use what we learn from this basic developmental question to treat diseases affecting the retina."

Most of the human retina -- the photosensitive part of the eye that translates light into nerve signals and relays them to the brain -- is lined with rod cells, which allow us to see well in dim light. The fovea, however, consists almost entirely of cone cells, which respond to color and bright light.

Twenty years ago, a researcher in Cepko's lab discovered that chickens also have a rod-free zone.

Although it's not yet clear how closely the chickens' high-acuity areas match ours, Cepko believes it's a good place to start asking questions -- especially since the usual mammalian model organisms, including mice, rats, rabbits and guinea pigs, don't have anything like a fovea.

In the new studies of chick embryos, Cepko and da Silva found that the complex patterning of cells in the rod-free zone formed because of a drop in retinoic acid that occurred only in that area of the retina and only for a brief time during development.

What spurred the drop? Probing further, the researchers found that the answer lay in a shifting balance between enzymes that create and those that destroy retinoic acid.

Enzymes known as retinaldehyde dehydrogenases, or Raldhs, ordinarily make retinoic acid in the retina. But Cepko and da Silva discovered that as cones and ganglion cells formed, levels of the enzymes Cyp26a1 and Cyp26c1 surged, breaking down retinoic acid faster than Raldhs could produce it.

When retinoic acid levels fell, a protein called fibroblast growth factor 8, or Fgf8, flourished, the investigators found. Fgf8 is another well-known molecule in embryonic development that often works with retinoic acid to stimulate and pattern cell growth.

Once their work was done, the Cyp26a1 and Cyp26c1 enzymes ebbed away, allowing Raldhs to replenish retinoic acid in the rod-free zone.

Cepko and da Silva saw similar expression patterns for Raldhs and Cyp26a1 in human retinal tissue, suggesting that something similar happens in people.

"This is the first mechanism we've uncovered for how this area forms," said Cepko. "We don't know where it will lead, but it's pretty exciting."

Stem cell researchers have made remarkable progress in building so-called organoids that mimic human eyes so they can study human health and disease. But they have run into a problem that the new study may help them solve.

"People can grow these incredible little eyes from stem cells, but so far no one's been able to form a fovea," said Cepko, who is also an HMS professor of ophthalmology at Massachusetts Eye and Ear.

She believes the trouble may arise because the researchers add retinoic acid to their cell cultures.

"We're suggesting that removing retinoic acid at the right time, adding Fgf8 or otherwise manipulating these two molecules may allow them to generate a fovea," she said.

It's also possible that the research will provide a foundation for investigating why the macula is so prone to disease, which could in turn lead to new treatments.

"Macular degeneration is a major problem for the aging population, and we don't understand why that area is vulnerable," said Cepko.

But Cepko and da Silva are driven mainly by the excitement of answering questions about the retina, learning about human development and probing evolutionary relationships between species.

Read more at Science Daily

Here's Why Finding ‘Missing Link’ Black Holes Is So Hard

For decades, while astronomers have detected black holes equal in mass either to a few suns or millions of suns, the missing-link black holes in between have eluded discovery. Now, a new study suggests such intermediate-mass black holes may not exist in the modern-day universe because of the rate at which black holes grow.

Scientists think stellar-mass black holes — up to a few times the sun's mass — form when giant stars die and collapse in on themselves. Over the years, astronomers have detected a number of stellar-mass black holes in the nearby universe, and in 2010, researchers detected the first such black hole outside the local cluster of nearby galaxies known as the Local Group.

As big as stellar-mass black holes might seem, they are tiny in comparison to the so-called supermassive black holes that are millions to billions of times the sun's mass, which form the hearts of most, if not all, large galaxies. The oldest supermassive black holes found to date include one found in 2015 — with a mass of about 12 billion solar masses — that existed when the universe was only about 875 million years old. This finding and others suggest that many black holes were born in the dawn of time, back when the universe was smaller and matter was more concentrated, making it easier for them to form and grow.

Much remains uncertain about how black holes reach supermassive girth and influence the universe around them. As such, astronomers want to analyze intermediate-mass black holes of about 100 to 10,000 solar masses that they expect would serve as the middle stages between stellar-mass and supermassive black holes.

However, while astronomers have discovered a number of potential intermediate-mass black holes, the evidence remains inconclusive, said astrophysicists Tal Alexander at the Weizmann Institute of Science in Rehovot, Israel, and Ben Bar-Or at the Institute for Advanced Study in Princeton, New Jersey.

Now these researchers suggest the dearth of these missing links may be due to the rate at which black holes may grow. They detailed their findings online June 19 in the journal Nature Astronomy.

In recent years, scientists have discovered a dozen or so instances of black holes devouring stars. If black holes grew solely by consuming stars and dense, compact objects such as white dwarfs and neutron stars instead of, say, giant clouds of gas or dark matter, the researchers estimated that black holes would still grow at the relatively constant rate of one solar mass per 10,000 years. (If they could eat gas or dark matter, they could grow even faster, but the data regarding such materials in the early universe is more open to question.)

Although one solar mass per 10,000 years may not seem especially quick, it means that even a stellar-mass black hole could grow well past the intermediate-mass stage within 10 billion years. In comparison, the universe is about 13.8 billion years old.
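
That arithmetic can be checked directly. The sketch below assumes only the constant rate quoted above and a hypothetical 10-solar-mass seed; it is a back-of-envelope illustration, not the authors' model:

```python
# Back-of-envelope check: at one solar mass accreted per 10,000 years,
# a stellar-mass seed overshoots the entire intermediate-mass range
# well within the age of the universe.

GROWTH_RATE = 1 / 10_000     # solar masses accreted per year
INTERMEDIATE_MAX = 10_000    # upper end of the intermediate-mass range, in solar masses

def mass_after(seed_mass, years):
    """Final mass in solar masses, assuming a constant accretion rate."""
    return seed_mass + GROWTH_RATE * years

final = mass_after(seed_mass=10, years=10e9)   # a 10-solar-mass seed, 10 billion years
print(final)                      # about a million solar masses
print(final > INTERMEDIATE_MAX)   # True: far past the intermediate stage
```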

These findings suggest that the seeds for supermassive black holes "were created quite early on in galaxies, when things were more dense," Bar-Or told Space.com. These seeds had already exceeded the intermediate-mass stage by about 1.6 billion to 2.2 billion years after the Big Bang — "some or even most of the black holes may have passed the supermassive-black-hole mass threshold even earlier," Alexander told Space.com.

Read more at Seeker

A Massive Marine Extinction in Earth's History Was Just Discovered

Illustration of the giant shark Carcharocles megalodon, which died along with other large marine species during a newly identified extinction event.
The largest known shark that ever lived, Carcharocles megalodon, ruled the seas for over 20 million years. The enormous toothy predator, which could grow to about 60 feet long, seemed indestructible. Even now, the shark’s status is so legendary that — like a powerful celebrity — only one word is sufficient to name it: megalodon.

“Megalodon lived all around the world, during a time in which the oceans were warmer than today,” biologist and marine species specialist Catalina Pimiento said. “Our research suggests it was a cosmopolitan giant shark that was able to live in different latitudes, as ocean temperature didn’t determine its distribution. We also know it used shallow water productive areas as nurseries.”

Life appeared to be pretty good for this dominant apex predator, until disaster struck. Pimiento and an international team of researchers determined that megalodon did not die out alone. When the gigantic shark went extinct around 2.6 million years ago, so too did a third of all other large marine species. The previously unknown “Pliocene marine megafauna extinction” is described in the journal Nature Ecology & Evolution.

Pimiento conducted the research at the Paleontological Institute and Museum of the University of Zurich with her colleagues John Griffin, Christopher Clements, Daniele Silvestro, Sara Varela, Mark Uhen, and Carlos Jaramillo. The team made their determinations after a meta-analysis that looked at numerous prior studies concerning the fossil record of sharks, marine mammals, sea birds, and sea turtles.

“The work of hundreds of paleontologists over many years allowed us to characterize this extinction,” said Pimiento. “Most of these works have been catalogued in a public database: The Paleobiology Database.”

The scientists found that, in addition to megalodon, species of big sea cows and baleen whales also went extinct 2-3 million years ago. As many as 43 percent of sea turtle species, 35 percent of seabirds and 9 percent of sharks also died out at this time.

The drivers of the die-out are not precisely known, but the researchers note that violent sea level fluctuations coincided with the extinction event. Coastal habitats were significantly reduced as a result. Marine mammals that megalodon feasted on started to decline, while new competitors evolved.

The researchers analyzed a phenomenon called functional diversity, which generally concerns the range of characteristics and behaviors that organisms exhibit in communities and ecosystems. During the newly identified extinction event, 17 percent of the total diversity of ecological functions in the marine ecosystem disappeared, and 21 percent changed.

Particularly impacted were warm-blooded animals, suggesting that large, homeothermic species could be more vulnerable to extinction when major changes occur in their environments.

“Today, larger marine animals are more susceptible because they are targeted by humans,” Pimiento said.

Read more at Seeker

Laser That’s a Billion Times Brighter Than the Sun Reveals New Behavior in Light

A scientist at work in the Extreme Light Laboratory at the University of Nebraska-Lincoln, where physicists using the brightest light ever produced were able to change the way photons scatter from electrons.
Researchers at the University of Nebraska-Lincoln have generated the brightest light ever produced on Earth, and it may change the way we look at the universe -- quite literally.

Donald Umstadter, head of the university's Extreme Light Laboratory, worked with colleagues in physics and astronomy to design an experiment in which pulses of light one billion times brighter than the surface of the sun were shot into an extremely tiny space. To facilitate the experiment, the research team fired up the lab's Diocles Laser, a room-sized assembly of optical equipment that can reach a peak power output greater than all the world's power plants combined.

The trick is that the laser produces bursts of light that only last a tiny fraction of a second.

“That makes high power, equivalent to a trillion light bulbs, but only for a very short amount of time — less than a trillionth of a second,” Umstadter said. “We then concentrate that power into a tiny spot, a millionth of a meter in size. That makes high intensity or brightness.”
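
Umstadter's figures imply some striking numbers. The sketch below is rough arithmetic under stated assumptions (100 W per "light bulb", a 100-femtosecond pulse, a square focal spot), not values taken from the paper:

```python
# Rough arithmetic behind the description above. The per-bulb wattage,
# pulse duration, and square-spot approximation are assumptions.

bulbs = 1e12                  # "a trillion light bulbs"
watts_per_bulb = 100.0        # assumed ordinary incandescent bulb
peak_power = bulbs * watts_per_bulb      # 1e14 W, i.e. 100 terawatts

spot = 1e-6                   # "a millionth of a meter" focal spot
area = spot ** 2              # ~1e-12 m^2, treating the spot as a square
intensity = peak_power / area            # ~1e26 W/m^2

pulse = 1e-13                 # "less than a trillionth of a second" (assumed 100 fs)
energy = peak_power * pulse   # only ~10 joules, despite the enormous peak power
print(f"{peak_power:.0e} W over {area:.0e} m^2 -> {intensity:.0e} W/m^2, ~{energy:.0f} J per pulse")
```

The point of the last line: the extreme brightness comes from concentrating a modest amount of energy into a vanishingly short time and a vanishingly small spot.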

By aiming the laser bursts at an intersecting stream of electrons, with precision down to a millionth of a meter, the researchers were able to observe how photons behave when striking a single electron.

It turns out that, at this level of brightness, the photons misbehave rather spectacularly. The high-energy illumination essentially knocks the electrons out of their usual alignment, scattering light in a fundamentally different way. The impact rattles the electrons into a figure-eight “quiver” pattern, shooting off additional photons at different angles, shapes, and wavelengths. The phenomenon, mathematically predicted in various theories, had never before been confirmed in the laboratory.

The team's research was funded by the US Air Force, the US Department of Energy, the Department of Homeland Security's Domestic Nuclear Detection Office, and the US and Chinese national science foundations. The findings are published in the journal Nature Photonics.

The really interesting part, Umstadter said, is that the scattering of photons can actually change the way we perceive illuminated objects. Under normal conditions, when light is increased, the perceived object looks brighter, but otherwise appears the same as in lower-light conditions.

Under the ultra-bright light of the Diocles laser, however, scientists can actually see things that are otherwise invisible to the human eye.

“It is amazing,” Umstadter said. “The light's coming off at different angles, with different colors, depending on how bright it is. What it reveals for the first time is the motion of electrons oscillating in the light fields at nearly the speed of light. They oscillate in a different pattern than they do in normal light.”

As a practical matter, the new technique can generate three-dimensional images with unprecedented resolution and accuracy. The additional photons ejected from the illuminated electrons act like super-powered X-rays. Doctors could use this kind of imaging to spot tumors or microfractures that would otherwise be missed by standard X-ray machines.

The technology can also be used to map circuitry on the molecular level, which will be useful for manufacturers who are increasingly building semiconductors on the nanoscopic scale. The super x-ray properties could also be used at airport security checkpoints, to make sure that that laptop computer is really a laptop computer.

“The higher X-ray energies we produce can be used to see through thickly shielded materials, nearly a meter thickness of steel, for cargo inspection, or non-destructive testing and evaluation of critical components,” Umstadter said.

Read more at Seeker

Jun 25, 2017

How eggs got their shapes

These are average egg shapes for each of 1400 species (black dots), illustrating variation in asymmetry and ellipticity.
The evolution of the amniotic egg -- complete with membrane and shell -- was key to vertebrates leaving the oceans and colonizing the land and air. Now, 360 million years later, bird eggs come in all shapes and sizes, from the almost perfectly spherical eggs of brown hawk-owls to the tear-drop shape of sandpipers' eggs. The question is, how and why did this diversity in shape evolve?

The answer to that question may help explain how birds evolved and solve an old mystery in natural history.

An international team of scientists led by researchers at Harvard and Princeton universities, with colleagues in the UK, Israel and Singapore, took a quantitative approach to this question. Using methods and ideas from mathematics, physics and biology, they characterized the shape of eggs from about 1,400 species of birds and developed a model that explains how an egg's membrane determines its shape. Using an evolutionary framework, the researchers found that the shape of an egg correlates with flight ability, suggesting that adaptations for flight may have been critical drivers of egg-shape variation in birds.

The research is published in Science.

"Our study took a unified approach to understanding egg shape by asking three questions: how to quantify egg shape and provide a basis for comparison of shapes across species, what are the biophysical mechanisms that determine egg shape, and what are the implications of egg shape in an evolutionary and ecological setting," said senior author, L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at the John A. Paulson School of Engineering and Applied Sciences (SEAS), Professor of Organismic and Evolutionary Biology, and of Physics at Harvard. " We showed that egg shapes vary smoothly across species, that it is determined by the membrane properties rather than the shell, and finally that there is a strong correlation linking birds that have eggs that are elliptical and asymmetric with a strong flight ability, the last a real surprise."

Mahadevan is also a Core Faculty Member of the Wyss Institute of Bioinspired Engineering at Harvard University.

The researchers began by plotting the shape -- as defined by the pole-to-pole asymmetry and the ellipticity -- of some 50,000 eggs, representing 14 percent of species in 35 orders, including two extinct orders.

The researchers found that egg shape was a continuum -- with many species overlapping. The shapes ranged from almost perfectly spherical eggs to conical-shaped eggs.
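
The two axes of that continuum, asymmetry and ellipticity, can be pictured with a toy outline. The parameterization below (the `egg_half_width` function and its two knobs) is a hypothetical illustration, not the morphometric model used in the Science paper:

```python
import math

# Toy egg outline: the width profile along the long axis, with one knob for
# ellipticity (how elongated the egg is) and one for asymmetry (how much the
# widest point shifts toward one pole). Illustrative only.

def egg_half_width(x, ellipticity, asymmetry):
    """Half-width of the outline at axial position x in [-1, 1] (poles at +/-1)."""
    width = 1 / (1 + ellipticity)   # larger ellipticity = narrower egg
    return width * math.sqrt(1 - x * x) * (1 + asymmetry * x)

# A near-spherical owl-like egg vs. a pointed, sandpiper-like egg:
for name, e, a in [("owl-like", 0.1, 0.0), ("sandpiper-like", 0.6, 0.4)]:
    widths = [egg_half_width(x / 10, e, a) for x in range(-9, 10)]
    peak_x = (widths.index(max(widths)) - 9) / 10
    print(f"{name}: max half-width {max(widths):.2f} at x = {peak_x:.1f}")
```

With zero asymmetry the widest point sits at the middle of the egg; a positive asymmetry term shifts it toward one pole, producing the tear-drop profile.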

So, how is this diverse spectrum of shapes formed?

Researchers have long known that egg membranes play an important role in egg shape -- after all, if an egg shell is dissolved in a mild acid, like vinegar, the egg actually maintains its shape. But how do the properties of the membrane contribute to shape?

Think of a balloon, said Mahadevan. If a balloon is uniformly thick and made of one material, it will be spherical when inflated. But if it is not uniform, all manner of shapes can be obtained.

"Guided by observations that show that the membrane thickness varies from pole to pole, we constructed a mathematical model that considers the egg to be a pressurized elastic shell that grows and showed that we can capture the entire range of egg shapes observed in nature," said Mahadevan.

The variations of shape come from the variation in the membrane's thickness and material properties and the ratio of the differential pressure to the stretchiness of the membrane.

The next question is, how are these forms related to the function of the bird?

The researchers looked at correlations between egg shape and traits associated with the species of bird, including nest type and location, clutch size (the number of eggs laid at a time), diet and flight ability.

"We discovered that flight may influence egg shape," said lead author Mary Caswell Stoddard, Assistant Professor of Ecology and Evolutionary Biology at Princeton University and former Junior Fellow in the Harvard Society of Fellows. "To maintain sleek and streamlined bodies for flight, birds appear to lay eggs that are more asymmetric or elliptical. With these egg shapes, birds can maximize egg volume without increasing the egg's width -- this is an advantage in narrow oviducts."

So an albatross and a hummingbird, while two very different birds, may have evolved similarly shaped eggs because both are high-powered fliers.

"It's clear from our study that variation in the size and shape of bird eggs is not simply random but is instead related to differences in ecology, including the amount of calcium in the diet, and particularly the extent to which each species is designed for powerful flight" says coauthor Dr. Joseph Tobias from Imperial College, UK.

Next, the researchers hope to observe the egg laying process in real time, to compare it to and refine their model.

Read more at Science Daily

Video games can change your brain

The studies show that playing video games can change how our brains perform, and even their structure.
Scientists have collected and summarized studies looking at how video games can shape our brains and behavior. Research to date suggests that playing video games can change the brain regions responsible for attention and visuospatial skills and make them more efficient. The researchers also looked at studies exploring brain regions associated with the reward system, and how these are related to video game addiction.

Do you play video games? If so, you aren't alone. Video games are becoming more common and are increasingly enjoyed by adults. The average age of gamers has been increasing, and was estimated to be 35 in 2016. Changing technology also means that more people are exposed to video games. Many committed gamers play on desktop computers or consoles, but a new breed of casual gamers has emerged, who play on smartphones and tablets at spare moments throughout the day, like their morning commute. So, we know that video games are an increasingly common form of entertainment, but do they have any effect on our brains and behavior?

Over the years, the media have made various sensationalist claims about video games and their effect on our health and happiness. "Games have sometimes been praised or demonized, often without real data backing up those claims. Moreover, gaming is a popular activity, so everyone seems to have strong opinions on the topic," says Marc Palaus, first author on the review, recently published in Frontiers in Human Neuroscience.

Palaus and his colleagues wanted to see if any trends had emerged from the research to date concerning how video games affect the structure and activity of our brains. They collected the results from 116 scientific studies, 22 of which looked at structural changes in the brain and 100 of which looked at changes in brain functionality and/or behavior.

The studies show that playing video games can change how our brains perform, and even their structure. For example, playing video games affects our attention, and some studies found that gamers show improvements in several types of attention, such as sustained attention or selective attention. The brain regions involved in attention are also more efficient in gamers and require less activation to sustain attention on demanding tasks.

There is also evidence that video games can increase the size and efficiency of brain regions related to visuospatial skills. For example, the right hippocampus was enlarged in both long-term gamers and volunteers following a video game training program.

Video games can also be addictive, and this kind of addiction is called "Internet gaming disorder." Researchers have found functional and structural changes in the neural reward system in gaming addicts, in part by exposing them to gaming cues that cause cravings and monitoring their neural responses. These neural changes are basically the same as those seen in other addictive disorders.

Read more at Science Daily

Jun 24, 2017

Ultra-thin camera creates images without lenses

At Caltech, engineers have developed a new camera design that replaces the lenses with an ultra-thin optical phased array (OPA).
Traditional cameras -- even those on the thinnest of cell phones -- cannot be truly flat due to their optics: lenses that require a certain shape and size in order to function. At Caltech, engineers have developed a new camera design that replaces the lenses with an ultra-thin optical phased array (OPA). The OPA does computationally what lenses do using large pieces of glass: it manipulates incoming light to capture an image.

Lenses have a curve that bends the path of incoming light and focuses it onto a piece of film or, in the case of digital cameras, an image sensor. The OPA has a large array of light receivers, each of which can individually add a tightly controlled time delay (or phase shift) to the light it receives, enabling the camera to selectively look in different directions and focus on different things.

"Here, like most other things in life, timing is everything. With our new system, you can selectively look in a desired direction and at a very small part of the picture in front of you at any given time, by controlling the timing with femto-second -- quadrillionth of a second -- precision," says Ali Hajimiri, Bren Professor of Electrical Engineering and Medical Engineering in the Division of Engineering and Applied Science at Caltech, and the principal investigator of a paper describing the new camera. The paper was presented at the Optical Society of America's (OSA) Conference on Lasers and Electro-Optics (CLEO) and published online by the OSA in the OSA Technical Digest in March 2017.

"We've created a single thin layer of integrated silicon photonics that emulates the lens and sensor of a digital camera, reducing the thickness and cost of digital cameras. It can mimic a regular lens, but can switch from a fish-eye to a telephoto lens instantaneously -- with just a simple adjustment in the way the array receives light," Hajimiri says.

Phased arrays, which are used in wireless communication and radar, are collections of individual transmitters, all sending out the same signal as waves. These waves interfere with each other constructively and destructively, amplifying the signal in one direction while canceling it out elsewhere. Thus, an array can create a tightly focused beam of signal, which can be steered in different directions by staggering the timing of transmissions made at various points across the array.

A similar principle is used in reverse in an optical phased array receiver, which is the basis for the new camera. Light waves that are received by each element across the array cancel each other from all directions, except for one. In that direction, the waves amplify each other to create a focused "gaze" that can be electronically controlled.
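
The steering principle can be sketched numerically. The toy 1-D array below illustrates the general phased-array idea under simplified assumptions (8 ideal elements at half-wavelength spacing, a single row rather than the 8 by 8 grid); it is not a model of the Caltech chip:

```python
import numpy as np

# Toy 1-D phased-array receiver: each element applies a phase shift, and the
# element signals add constructively only in the steered "look" direction.

N = 8                    # number of elements (one row of an 8x8 grid)
d = 0.5                  # element spacing in wavelengths
steer = np.deg2rad(20)   # desired look direction

def array_factor(theta, steer_angle):
    """Magnitude of the summed element signals for a plane wave from direction theta."""
    n = np.arange(N)
    # geometric phase across the array, minus the applied steering phase
    phase = 2 * np.pi * d * n * (np.sin(theta) - np.sin(steer_angle))
    return abs(np.exp(1j * phase).sum())

angles = np.deg2rad(np.linspace(-90, 90, 1801))
response = [array_factor(t, steer) for t in angles]
peak = np.rad2deg(angles[int(np.argmax(response))])
print(f"peak response at {peak:.1f} degrees")   # the array "gazes" at ~20 degrees
```

Changing `steer` re-points the gaze electronically, with no moving parts, which is exactly the property the camera exploits.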

"What the camera does is similar to looking through a thin straw and scanning it across the field of view. We can form an image at an incredibly fast speed by manipulating the light instead of moving a mechanical object," says graduate student Reza Fatemi (MS '16), lead author of the OSA paper.

Last year, Hajimiri's team rolled out a one-dimensional version of the camera that was capable of detecting images in a line, such that it acted like a lensless barcode reader but with no mechanically moving parts. This year's advance was to build the first two-dimensional array capable of creating a full image. This first 2D lensless camera has an array composed of just 64 light receivers in an 8 by 8 grid. The resulting image has low resolution. But this system represents a proof of concept for a fundamental rethinking of camera technology, Hajimiri and his colleagues say.

"The applications are endless," says graduate student Behrooz Abiri (MS '12), coauthor of the OSA paper. "Even in today's smartphones, the camera is the component that limits how thin your phone can get. Once scaled up, this technology can make lenses and thick cameras obsolete. It may even have implications for astronomy by enabling ultra-light, ultra-thin enormous flat telescopes on the ground or in space."

Read more at Science Daily

A 100-year-old physics problem has been solved

This is a wave-interference and resonant energy transfer from one source to another distant source or object, pertaining to the fundamental concept of resonances.
At EPFL, researchers have challenged a fundamental law and discovered that more electromagnetic energy can be stored in wave-guiding systems than previously thought. The discovery has implications for telecommunications. Working around this law, they conceived resonant and wave-guiding systems capable of storing energy over a prolonged period while keeping a broad bandwidth. Their trick was to create asymmetric resonant or wave-guiding systems using magnetic fields.

The study, which has just been published in Science, was led by Kosmas Tsakmakidis, first at the University of Ottawa and then at EPFL's Bionanophotonic Systems Laboratory run by Hatice Altug, where the researcher is now doing post-doctoral research.

This breakthrough could have a major impact on many fields in engineering and physics. The number of potential applications is close to infinite, with telecommunications, optical detection systems and broadband energy harvesting representing just a few examples.

Casting aside reciprocity

Resonant and wave-guiding systems are present in the vast majority of optical and electronic systems. Their role is to temporarily store energy in the form of electromagnetic waves and then release it. For more than 100 years, these systems were held back by a limitation that was considered fundamental: the length of time a wave could be stored was inversely proportional to its bandwidth. This relationship was interpreted to mean that it was impossible to store large amounts of data in resonant or wave-guiding systems over a long period of time, because increasing the bandwidth meant decreasing the storage time and quality of storage.

This law was first formulated by K. S. Johnson in 1914, at Western Electric Company (the forerunner of Bell Telephone Laboratories). He introduced the concept of the Q factor, according to which a resonator can either store energy for a long time or have a broad bandwidth, but not both at the same time. Increasing the storage time meant decreasing the bandwidth, and vice versa. A small bandwidth means a limited range of frequencies (or 'colors') and therefore a limited amount of data.
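
Johnson's trade-off can be illustrated with a rough calculation. The numbers below are illustrative (a telecom-band optical carrier and a simple 1/(2π·bandwidth) decay-time estimate), not figures from the paper:

```python
import math

# The conventional time-bandwidth trade-off for a reciprocal resonator:
# storage (decay) time scales as 1 / bandwidth, and Q = f0 / bandwidth
# ties the two together, so you cannot improve both at once.

f0 = 193e12          # optical carrier frequency in Hz (telecom band, illustrative)

def storage_time(bandwidth_hz):
    """Approximate energy storage time of a resonator, tau ~ 1 / (2*pi*bandwidth)."""
    return 1 / (2 * math.pi * bandwidth_hz)

for bw in (1e9, 10e9, 100e9):     # widen the bandwidth...
    tau = storage_time(bw)
    print(f"bandwidth {bw:.0e} Hz -> Q = {f0 / bw:.0f}, storage time {tau:.2e} s")
# ...and the storage time drops in proportion: tau * bandwidth stays ~1/(2*pi).
```

The EPFL claim is precisely that non-reciprocal, magnetically biased systems are not bound by this product, beating it by a factor of 1,000 in their experiments.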

Until now, this concept had never been challenged. Physicists and engineers had always built resonant systems -- like those to produce lasers, make electronic circuits and conduct medical diagnoses -- with this constraint in mind.

But that limitation is now a thing of the past. The researchers came up with a hybrid resonant / wave-guiding system made of a magneto-optic material that, when a magnetic field is applied, is able to stop the wave and store it for a prolonged period, thereby accumulating large amounts of energy. Then when the magnetic field is switched off, the trapped pulse is released.

With such asymmetric and non-reciprocal systems, it was possible to store a wave for a very long period of time while also maintaining a large bandwidth. The conventional time-bandwidth limit was even beaten by a factor of 1,000. The scientists further showed that, theoretically, there is no upper ceiling to this limit at all in these asymmetric (non-reciprocal) systems.

"It was a moment of revelation when we discovered that these new structures did not feature any time-bandwidth restriction at all. These systems are unlike what we have all been accustomed to for decades, and possibly hundreds of years," says Tsakmakidis, the study's lead author. "Their superior wave-storage capacity performance could really be an enabler for a range of exciting applications in diverse contemporary and more traditional fields of research." Hatice Altug adds.

Medicine, the environment and telecommunications

One possible application is in the design of extremely quick and efficient all-optical buffers in telecommunication networks. The role of these buffers is to temporarily store data arriving in the form of light through optical fibers. Slowing the flow of data makes it easier to process. Up to now, the storage quality had been limited.

Read more at Science Daily

Seeker's Visual Guide to Solar Eclipses Throughout History

Solar eclipse at the ruins of Chichén Itzá.
It is hard to overestimate how important solar eclipses were to early humans. The names of several ancient Hawaiian leaders provide evidence of the significance of these dramatic celestial events: Keke-la (thin sun), Ku-ko-hu (appearing blotted), He-ma (to become faded) and Pa-le-na (not shining). Entire civilizations, such as the Aztec empire, were said to have begun and ended, in part, because of omens tied to solar eclipses, and their effect on viewers.

“The impact of solar eclipses on Mesoamerican culture and on virtually all other early civilizations cannot be overstated,” according to Bruce Masse, formerly of the University of Hawaii and Los Alamos National Laboratory.

In a paper published in the journal Vistas in Astronomy, he said that such celestial events pervade “cosmology, art iconography, chiefly symbols, architecture, time reckoning, and religious and chiefly rituals,” as well as myths and historical accounts.

Witnessing, and then surviving, an eclipse must have seemed like coming back from the dead.

The word "eclipse" comes from the Greek term ekleipsis, meaning an abandonment, a feeling shared by the Inca of South America. Worshippers of the sun god Inti, the Inca felt that their leader was angry at them whenever the moon obscured the sun. They rarely practiced human sacrifice, but a wave of killing would follow solar eclipses. The irony is that, in doing so, the leaders were desperately trying to give Inti the very thing they were supposed to value most.

While such a response would be unthinkable today, solar eclipses continue to captivate. From likely prehistoric gatherings at Stonehenge to anticipation of this year’s August 21 total solar eclipse, these incredible sky shows remain some of the solar system’s most compelling events.

Partial solar eclipse visible through rocks that form the monument Stonehenge in Wiltshire, Southwest England.
Construction of the megalith Stonehenge, located in England, began around 3100 BC. While historians still debate the monument's underlying meanings, there is consensus that astronomical alignments inspired much of its design. For example, sightlines through the monument point to sunrise at the summer solstice and sunset at the winter solstice. Some scholars believe the monument could have been used to predict eclipses by various methods. As photographer Ben Stansall shows, portions of the monument can frame certain solar eclipses.

People use protective glasses to catch a glimpse of a solar eclipse in front of the Pyramids of Giza and the Sphinx on March 20, 2015, in Giza, Egypt.
Pyramids and temples of ancient Egypt show stellar alignments, some of which are attested in inscriptions. A 25-year solar-lunar calendar, dating to 1257 BC during the reign of Ramses II, reveals how astronomers at the time were attempting to understand sun and moon cycles. How the pyramids fit into that process remains a mystery, but viewing solar eclipses at or near these monuments often provides some of the most striking visuals.

Babylonian clay tablet that records eclipses between 518 and 465 BC.
The earliest records of specific solar eclipses are found on clay tablets. The oldest known mention is a description of a total solar eclipse said to have occurred on May 3, 1375 BC. Modern assessment of the event — recorded on a clay tablet from the ancient city of Ugarit, in what is now Syria — determined that the eclipse actually happened on March 5, 1223 BC. In a paper published in Nature, authors T. De Jong and W. H. Van Soldt wrote, "This new date implies that the secular deceleration of the Earth's rotation has changed very little during the past 3,000 years."

Solar eclipse at the ruins of Chichén Itzá.
Chichén Itzá, a massive Mayan step pyramid dating to about 600 AD, is a masterpiece of astronomical special effects. On the spring equinox, light and shadows on the Temple of Kukulcán make it look as though a feathered serpent god is crawling down the side of the pyramid. From certain angles during a solar eclipse, the darkened orb can look as though it is ascending the temple’s steps.

Read more at Discovery News

Jun 22, 2017

Select memories can be erased, leaving others intact

Two Aplysia sensory neurons with synaptic contacts on the same motor neuron in culture after isolation from the nervous system of Aplysia. The motor neuron has been injected with a fluorescent molecule that blocks the activity of a specific Protein Kinase M molecule.
Different types of memories stored in the same neuron of the marine snail Aplysia can be selectively erased, according to a new study by researchers at Columbia University Medical Center (CUMC) and McGill University and published today in Current Biology.

The findings suggest that it may be possible to develop drugs to delete memories that trigger anxiety and post-traumatic stress disorder (PTSD) without affecting other important memories of past events.

During emotional or traumatic events, multiple memories can become encoded, including memories of any incidental information that is present when the event occurs. In the case of a traumatic experience, the incidental, or neutral, information can trigger anxiety attacks long after the event has occurred, say the researchers.

"The example I like to give is, if you are walking in a high-crime area and you take a shortcut through a dark alley and get mugged, and then you happen to see a mailbox nearby, you might get really nervous when you want to mail something later on," says Samuel Schacher, PhD, a professor of neuroscience in the Department of Psychiatry at CUMC and co-author of the paper. In the example, fear of dark alleys is an associative memory that provides important information -- about the danger of dark alleys -- based on a previous experience. Fear of mailboxes, however, is an incidental, non-associative memory that is not directly related to the traumatic event.

"One focus of our current research is to develop strategies to eliminate problematic non-associative memories that may become stamped on the brain during a traumatic experience without harming associative memories, which can help people make informed decisions in the future -- like not taking shortcuts through dark alleys in high-crime areas," Dr. Schacher adds.

Brains create long-term memories, in part, by increasing the strength of connections between neurons and maintaining those connections over time. Previous research suggested that increases in synaptic strength in creating associative and non-associative memories share common properties. This suggests that selectively eliminating non-associative synaptic memories would be impossible, because for any one neuron, a single mechanism would be responsible for maintaining all forms of synaptic memories.

The new study tested that hypothesis by stimulating two sensory neurons connected to a single motor neuron of the marine snail Aplysia; one sensory neuron was stimulated to induce an associative memory and the other to induce a non-associative memory.

By measuring the strength of each connection, the researchers found that the increase in the strength of each connection produced by the different stimuli was maintained by a different form of a Protein Kinase M (PKM) molecule (PKM Apl III for associative synaptic memory and PKM Apl I for non-associative). They found that each memory could be erased -- without affecting the other -- by blocking one of the PKM molecules.

In addition, they found that specific synaptic memories may also be erased by blocking the function of distinct variants of other molecules that either help produce PKMs or protect them from breaking down.

The researchers say that their results could be useful in understanding human memory because vertebrates have similar versions of the Aplysia PKM proteins that participate in the formation of long-term memories. In addition, the PKM-protecting protein KIBRA is expressed in humans, and mutations of its gene are associated with intellectual disability.

"Memory erasure has the potential to alleviate PTSD and anxiety disorders by removing the non-associative memory that causes the maladaptive physiological response," says Jiangyuan Hu, PhD, an associate research scientist in the Department of Psychiatry at CUMC and co-author of the paper. "By isolating the exact molecules that maintain non-associative memory, we may be able to develop drugs that can treat anxiety without affecting the patient's normal memory of past events."

"Our study is a 'proof of principle' that presents an opportunity for developing strategies and perhaps therapies to address anxiety," said Dr. Schacher. "For example, because memories are still likely to change immediately after recollection, a therapist may help to 'rewrite' a non-associative memory by administering a drug that inhibits the maintenance of non-associative memory."

Read more at Science Daily