Until now, little has been known about the fundamentals of the body's most common gland: the sweat gland, which is essential to controlling body temperature and allows humans to live in the world's diverse climates. Now, in a tour de force, researchers at The Rockefeller University and the Howard Hughes Medical Institute have identified, in mice, the stem cell from which sweat glands initially develop as well as stem cells that regenerate adult sweat glands.
In their study, published in Cell, the scientists devised a strategy to purify and molecularly characterize the different kinds of stem cell populations that make up the skin's complex sweat ducts and glands. With this information in hand, they studied how these different populations of stem cells behave during normal tissue homeostasis, how they respond to different types of skin injuries, and how the sweat glands differ from their close cousins, the mammary glands.
No sweat. Researchers in Elaine Fuchs's lab identified four different types of paw-skin progenitor cells that are responsible for homeostasis and wound repair. This image shows that the sweat ductal and epidermal progenitors (in red) proliferate and repair an epidermal scratch wound; the sweat gland progenitors (in blue and green) show no proliferative response to this type of wound, but instead respond to deep glandular wounds.
“Mammary gland stem cells respond to hormonal induction by greatly expanding glandular tissue to increase milk production,” explains Elaine Fuchs, Rebecca C. Lancefield Professor at Rockefeller and an investigator at the Howard Hughes Medical Institute. “In contrast, during a marathon race, sweat gland stem cells remain largely dormant, and glandular output rather than tissue expansion accounts for the 3 liters of sweat our body needs. These fascinating differences in stem cell activity and tissue production are likely at the root of why breast cancers are so frequent, while sweat gland cancers are rare.” The findings might also one day help improve treatments for burn patients and lead to topical treatments for people who sweat too much, or too little.
“For now, the study represents a baby step towards these clinical goals, but a giant leap forward in our understanding of sweat glands,” says the study’s lead author, Catherine P. Lu, a postdoctoral researcher in Fuchs’s Laboratory of Mammalian Cell Biology and Development.
Each human has millions of sweat glands, but they have rarely been studied in depth, possibly because of the difficulty of gathering enough of the tiny organs to work with in a lab, says Lu. The mouse is traditionally used as a model for human sweat gland studies, so in this project Lu and colleagues laboriously extracted sweat glands from the tiny paw pads of mice -- the only place they are found in these and most other mammals.
The research team sought to discover whether the different cells that make up the sweat gland and duct contained stem (progenitor) cells, which can help repair damaged adult glands. “We didn’t know if sweat stem cells exist at all, and if they do, where they are and how they behave,” she says. The last major studies on proliferative potential within sweat glands and sweat ducts were conducted in the early 1950s, before modern biomedical techniques for probing fundamental bioscience were available.
Fuchs’ team determined that just before birth, the nascent sweat duct forms as a downgrowth from progenitor cells in the epidermis, the same master cells that at different body sites give rise to mammary glands, hair follicles and many other epithelial appendages. As each duct grows deeper into the skin, a sweat gland emerges from its base.
Lu then led the effort to look for stem cells in the adult sweat gland. The gland is made up of two layers -- an inner layer of luminal cells that produce the sweat and an outer layer of myoepithelial cells that squeeze the duct to discharge the sweat.
Lu devised a strategy to fluorescently tag and sort the different populations of ductal and glandular cells. The Fuchs team then injected each population of purified cells into different body areas of female host recipient mice to see what the cells would do.
Interestingly, when introduced into the mammary fat pads, the sweat gland myoepithelial cells generated fluorescent sweat gland-like structures. “Each fluorescent gland had the proper polarized distribution of myoepithelial and luminal cells, and they also produced sodium potassium channel proteins that are normally expressed in adult sweat glands but not mammary glands,” Lu says.
Intriguingly, when the host mice were put through pregnancy, some of the fluorescent sweat glands began to express milk, while still retaining some sweat gland features as well. Even more surprising was that sweat gland myoepithelial cells produced epidermis when engrafted to the back skin of the mice.
“Taken together, these findings tell us that adult glandular stem cells have certain intrinsic features that enable them to remember who they are in some environments, but adopt new identities in other environments,” Fuchs says. “To test the possible clinical implications of our findings, we would need to determine how long these foreign tissues made by the stem cells will last — unless it is long-term, a short-term ‘fix’ might only be useful as a temporary bandage for regenerative medicine purposes,” Fuchs cautions.
Irrespective of whether the knowledge is yet ready for the clinic, the findings can now be used to explore the roots of some genetic disorders that affect sweat glands, as well as potential ways to treat them. “We have just laid down some critical fundamentals of sweat gland and sweat duct biology,” Lu says. “Our study not only illustrates how sweat glands develop and how their cells respond to injury, but also identifies the stem cells within the sweat glands and sweat ducts and begins to explore their potential for making tissues for the first time.”
Read more at Science Daily
Jul 21, 2012
Researchers Produce First Complete Computer Model of an Organism
In a breakthrough effort for computational biology, the world's first complete computer model of an organism has been completed, Stanford researchers reported last week in the journal Cell.
A team led by Markus Covert, assistant professor of bioengineering, used data from more than 900 scientific papers to account for every molecular interaction that takes place in the life cycle of Mycoplasma genitalium, the world's smallest free-living bacterium.
By encompassing the entirety of an organism in silico, the paper fulfills a longstanding goal for the field. Not only does the model allow researchers to address questions that aren't practical to examine otherwise, it represents a stepping-stone toward the use of computer-aided design in bioengineering and medicine.
"This achievement demonstrates a transforming approach to answering questions about fundamental biological processes," said James M. Anderson, director of the National Institutes of Health Division of Program Coordination, Planning and Strategic Initiatives. "Comprehensive computer models of entire cells have the potential to advance our understanding of cellular function and, ultimately, to inform new approaches for the diagnosis and treatment of disease."
The research was partially funded by an NIH Director's Pioneer Award from the National Institutes of Health Common Fund.
From information to understanding
Biology over the past two decades has been marked by the rise of high-throughput studies producing enormous troves of cellular information. A lack of experimental data is no longer the primary limiting factor for researchers. Instead, it's how to make sense of what they already know.
Most biological experiments, however, still take a reductionist approach to this vast array of data: knocking out a single gene and seeing what happens.
"Many of the issues we're interested in aren't single-gene problems," said Covert. "They're the complex result of hundreds or thousands of genes interacting."
This situation has resulted in a yawning gap between information and understanding that can only be addressed by "bringing all of that data into one place and seeing how it fits together," according to Stanford bioengineering graduate student and co-first author Jayodita Sanghvi.
Integrative computational models clarify data sets whose sheer size would otherwise place them outside human ken.
"You don't really understand how something works until you can reproduce it yourself," Sanghvi said.
Small is beautiful
Mycoplasma genitalium is a humble parasitic bacterium known mainly for showing up uninvited in human urogenital and respiratory tracts. But the pathogen also has the distinction of containing the smallest genome of any free-living organism -- only 525 genes, as opposed to the 4,288 of E. coli, a more traditional laboratory bacterium.
Despite the difficulty of working with this sexually transmitted parasite, the minimalism of its genome has made it the focus of several recent bioengineering efforts. Notably, these include the J. Craig Venter Institute's 2008 synthesis of the first artificial chromosome.
"The goal hasn't only been to understand M. genitalium better," said co-first author and Stanford biophysics graduate student Jonathan Karr. "It's to understand biology generally."
Even at this small scale, the quantity of data that the Stanford researchers incorporated into the virtual cell's code was enormous. The final model made use of more than 1,900 experimentally determined parameters.
To integrate these disparate data points into a unified machine, the researchers modeled individual biological processes as 28 separate "modules," each governed by its own algorithm. These modules then communicated to each other after every time step, making for a unified whole that closely matched M. genitalium's real-world behavior.
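The article does not give implementation details, but the architecture it describes -- independent process modules, each with its own algorithm, reading and writing a shared cell state once per time step -- can be sketched roughly as follows. This is a minimal illustration in Python; the module names, state variables, and update rules are hypothetical placeholders, not the model's actual 28 modules or 1,900 parameters.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CellState:
    """Shared state that every module reads and writes once per time step."""
    time_s: float = 0.0
    variables: Dict[str, float] = field(default_factory=lambda: {
        "atp": 1.0e6, "nucleotides": 5.0e5, "dna_replicated": 0.0,
    })

class Metabolism:
    """Toy stand-in for a metabolic module: produces energy and precursors."""
    def step(self, s: CellState, dt: float) -> None:
        s.variables["atp"] += 50.0 * dt
        s.variables["nucleotides"] += 10.0 * dt

class DnaReplication:
    """Toy stand-in for a replication module: consumes the nucleotide pool."""
    def step(self, s: CellState, dt: float) -> None:
        use = min(s.variables["nucleotides"], 20.0 * dt)
        s.variables["nucleotides"] -= use
        s.variables["dna_replicated"] += use

def run(modules, state, dt=1.0, t_end=3600.0):
    """Advance the cell: each module runs its own algorithm, then all modules
    'communicate' implicitly through the shared state before the next step."""
    while state.time_s < t_end:
        for m in modules:
            m.step(state, dt)
        state.time_s += dt
    return state

final = run([Metabolism(), DnaReplication()], CellState())
print(final.variables)
```

The design choice this illustrates is the one described above: no single equation governs the whole cell; instead, each sub-process is simulated with whatever algorithm suits it, and the shared state ties them together at every time step.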
Probing the silicon cell
The purely computational cell opens up procedures that would be difficult to perform in an actual organism, as well as opportunities to reexamine experimental data.
In the paper, the model is used to demonstrate a number of these approaches, including detailed investigations of DNA-binding protein dynamics and the identification of new gene functions.
The program also allowed the researchers to address aspects of cell behavior that emerge from vast numbers of interacting factors.
The researchers had noticed, for instance, that the length of individual stages in the cell cycle varied from cell to cell, while the length of the overall cycle was much more consistent. Consulting the model, the researchers hypothesized that the overall cell cycle's lack of variation was the result of a built-in negative feedback mechanism.
Cells that took longer to begin DNA replication had time to amass a large pool of free nucleotides. The actual replication step, which uses these nucleotides to form new DNA strands, then passed relatively quickly. Cells that went through the initial step quicker, on the other hand, had no nucleotide surplus. Replication ended up slowing to the rate of nucleotide production.
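The logic of that proposed feedback can be illustrated with a toy calculation (not the Stanford model itself): let the time to initiate replication vary randomly, let nucleotides accumulate during that wait, and let replication run fast only while the stockpile lasts. All numbers are arbitrary, and nucleotide production during replication is ignored for simplicity; the point is only that the total cycle length varies far less than either stage alone.

```python
import random
import statistics

def simulate_cell(n_needed=1000.0, prod_rate=1.0, max_rep_rate=5.0):
    """One cell: a variable initiation stage followed by DNA replication."""
    t_init = random.uniform(200, 600)   # time to begin replication varies widely
    pool = prod_rate * t_init           # nucleotides stockpiled while waiting
    fast = min(pool, n_needed)          # replication runs at full speed on the stockpile...
    t_rep = fast / max_rep_rate + (n_needed - fast) / prod_rate
    return t_init, t_rep                # ...then slows to the production rate

cells = [simulate_cell() for _ in range(10_000)]
inits, reps = zip(*cells)
totals = [a + b for a, b in zip(inits, reps)]
print("spread of initiation stage :", round(statistics.stdev(inits), 1))
print("spread of replication stage:", round(statistics.stdev(reps), 1))
print("spread of total cycle      :", round(statistics.stdev(totals), 1))
# A long wait builds a big stockpile and shortens replication (and vice versa),
# so the total cycle length varies much less than either stage alone.
```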
These kinds of findings remain hypotheses until they're confirmed by real-world experiments, but they promise to accelerate the process of scientific inquiry.
"If you use a model to guide your experiments, you're going to discover things faster. We've shown that time and time again," said Covert.
Read more at Science Daily
Jul 20, 2012
3-D Tumor Models Improve Drug Discovery Success Rate
Imagine millions of cancer cells organized in thousands of small divots. Hit these cells with drugs and when some cells die, you have a candidate for a cancer drug. But a review published this week in the journal Expert Opinion on Drug Discovery argues that these 2D models in fact offer very little information about a potential drug's effects in the body and may often give researchers misleading results.
"Up until the 1980s animal models were the standard for cancer drug discovery. However, with the increase in the number of compounds available for testing and the advent of high-throughput screening (HTS), the use of animals to discover cancer drugs became too costly and unethical. Consequently, 2D cell culture models have become the mainstay for drug discovery or to explore a drug's mechanism of action," says Dan LaBarbera, PhD, investigator at the University of Colorado Cancer Center and the University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences. LaBarbera is principal investigator of the recent review, on which he collaborated with Skaggs colleagues Brian Reid, PhD, and Byong Hoon Yoo, PhD.
LaBarbera cites the gap between results in 2D cells and effects in tumors themselves as a contributing factor for the declining rate of drugs passing FDA approval. In particular, only 5 percent of investigational new drugs targeting cancer make it through clinical trials, at a cost of about $800 million per drug. When you factor in the inevitable failures at various points in development, each approved drug costs an average $1.5 billion.
To increase the drug success rate, LaBarbera suggests something called the multicellular tumor spheroid (MCTS) model. In these models, instead of 2D monolayers, cancer cells are cultured as 3D spheroids. One of the advantages of the MCTS model is that when spheroids reach a critical diameter, they begin to form an outer proliferating zone, an inner quiescent zone, and a central necrotic core -- more faithfully mimicking the microenvironments of human tumors. Additionally, spheroids can be grown in the presence of compounds that mimic extracellular matrix -- the environment that surrounds and very much affects the growth and behaviors of human tumors.
Instead of indiscriminately killing cells, modern cancer drugs tend to target cells with very specific genetic mutations that turn on and off very specific growth and survival mechanisms that in turn very frequently depend on everything else going on in and around the cells. Using MCTS models, researchers can ask questions about how a drug will penetrate a tumor's heterogeneous 3D structure and how a drug will interact with the environment surrounding these tiny tumors.
"Though these MCTS models have been around since the 1970s, only recently has technology made it possible to use them in place of 2D models for the high-throughput screening used in drug discovery," LaBarbera says.
Remember those millions of cancer cells organized in independent divots that researchers hit with drugs? We're fairly tied to the technology that reads the results of these divots. But micro-technologies now allow multicellular tumor spheroids to be cultured in place of 2D cell cultures using high-throughput micro-well plates -- we can use the same drug testing machinery on these new models. Likewise, materials science technology now exists to grow cells within semipermeable membranes, helping researchers define the shape of the eventual spheres. And as futuristic as it undoubtedly sounds, magnetic cell levitation can help alleviate the problem of cells sticking to the plastic well surface, which limits spheroid growth.
The recent practicality of high-throughput MCTS screening leads LaBarbera to call today a "renaissance" for the technique.
Of course, this 3D testing is initially more expensive and more challenging. "A lot of researchers try to get cost down to pennies per well -- you can see how screening millions of compounds equals millions of dollars -- but this often leads to a higher cost down the road due to a lower success rate. Yes, it may cost more to do HTS with 3D models, but in the long run it may lead to higher success rates and so decreased costs," LaBarbera says.
LaBarbera suggests that another use of the systems biology approach made possible by 3D models like MCTS is to bridge the gap between high-volume, low-accuracy screens and more involved testing in animal models.
Read more at Science Daily
"Up until the 1980s animal models were the standard for cancer drug discovery. However, with the increase in the number of compounds available for testing and the advent of high-throughput screening (HTS), the use of animals to discover cancer drugs became too costly and unethical. Consequently, 2D cell culture models have become the mainstay for drug discovery or to explore a drug's mechanism of action," says Dan LaBarbera, PhD, investigator at the University of Colorado Cancer Center and the University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences. LaBarbera is principal investigator of the recent review, on which he collaborated with Skaggs colleagues Brian Reid, PhD, and Byong Hoon Yoo, PhD.
LaBarbera cites the gap between results in 2D cells and effects in tumors themselves as a contributing factor for the declining rate of drugs passing FDA approval. In particular, only 5 percent of investigational new drugs targeting cancer make it through clinical trials, at a cost of about $800 million per drug. When you factor in the inevitable failures at various points in development, each approved drug costs an average $1.5 billion.
To increase the drug success rate, LaBarbera suggests something called the multicellular tumor spheroid (MCTS) model. In these models, instead of 2D monolayers, cancer cells are cultured as 3D spheroids. One of the advantages of the MCTS model is that when spheroids reach a critical diameter, they begin to form an outer proliferating zone, an inner quiescent zone, and a central necrotic core -- more faithfully mimicking the microenvironments of human tumors. Additionally, spheroids can be grown in the presence of compounds that mimic extra cellular matrix -- the environment that surrounds and very much affects the growth and behaviors of human tumors.
Instead of indiscriminately killing cells, modern cancer drugs tend to target cells with very specific genetic mutations that turn on and off very specific growth and survival mechanisms that in turn very frequently depend on everything else going on in and around the cells. Using MCTS models, researchers can ask questions about how a drug will penetrate a tumor's heterogeneous 3D structure and how a drug will interact with the environment surrounding these tiny tumors.
"Though these MCTS models have been around since the 1970s, only recently has technology made it possible to use them in place of 2D models for the high-throughput screening used in drug discovery," LaBarbera says.
Remember those millions of cancer cells organized in independent divots that researchers hit with drugs? We're fairly tied to the technology that reads the results of these divots. But micro-technologies now allow multicellular tumor spheroids to be cultured in place of 2D cell cultures using high-throughput micro-well plates -- we can use the same drug testing machinery on these new models. Likewise, materials science technology now exists to grow cells within semipermeable membranes, helping researchers define the shape of the eventual spheres. And as futuristic as it undoubtedly sounds, magnetic cell levitation can help alleviate the problem of cells sticking to the plastic well surface, which limits spheroid growth.
The recent practicality of high-throughput MCTS screening leads LaBarbera to call today a "renaissance" for the technique.
Of course, this 3D testing is initially more expensive and more challenging. "A lot of researchers try to get cost down to pennies per well -- you can see how screening millions of compounds equals millions of dollars -- but this often leads to a higher cost down the road due to a lower success rate. Yes, it may cost more to do HTS with 3D models, but in the long run it may lead to higher success rates and so decreased costs," LaBarbera says.
LaBarbera suggests that another use of the systems biology approach made possible by 3D models like MCTS is to bridge the gap between high-volume, low-accuracy screens and more involved testing in animal models.
Read more at Science Daily
'Seeds' of Massive Black Holes Found at the Center of the Milky Way Galaxy
Many galaxies contain enormous amounts of molecular gas in small areas near their nuclei. Highly condensed molecular gas is a birthplace of many stars. Moreover, it is thought to be closely related to the activity of galactic nuclei. It is therefore important to investigate the physical state and chemical properties of molecular gas at galaxy centers through observation. To obtain detailed observational data, the best place to survey is the center of our own Milky Way Galaxy, the galaxy in which the solar system resides.
The research team observed emission lines at wavelengths of 0.87 mm, emitted from carbon monoxide molecules in an area of several degrees that includes the center of the Milky Way Galaxy. The ASTE 10 m telescope in the Atacama Desert (4,800 meters above sea level) of Chile was used for observation. More than 250 hours in total were spent on the prolonged observation from 2005 to 2010.
The research team compared this observation data with data of emission lines at wavelengths of 2.6 mm, emitted from carbon monoxide molecules in the same area, which were obtained using the NRO 45m Telescope. When the intensities of emission lines at different wavelengths, emitted from carbon monoxide molecules, are compared, it is possible to estimate the temperature and density of the molecular gas. In this way, the research team succeeded, for the first time ever, in drawing detailed distribution maps of "warm, dense" molecular gas -- warmer than 50 kelvin and denser than 10,000 hydrogen molecules per cubic centimeter -- at the center of the Milky Way Galaxy.
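As a rough illustration of how comparing two CO lines constrains gas temperature, here is a back-of-the-envelope Python sketch using the standard optically thin, LTE "rotation diagram" approximation. The team's actual analysis involves detailed radiative-transfer modeling, so this is only a caricature of the principle; the input intensities below are hypothetical, while the CO transition constants are standard spectroscopic catalog values.

```python
import math

K_B = 1.380649e-16    # Boltzmann constant (erg/K)
H   = 6.62607015e-27  # Planck constant (erg s)
C   = 2.99792458e10   # speed of light (cm/s)

# CO rotational transitions (standard catalog values).
LINES = {
    "1-0": dict(freq=115.2712e9, A=7.203e-8, g_u=3.0, E_u_K=5.53),   # 2.6 mm line
    "3-2": dict(freq=345.7960e9, A=2.497e-6, g_u=7.0, E_u_K=33.19),  # 0.87 mm line
}

def upper_level_column(line, W_K_kms):
    """Upper-level column density (cm^-2) in the optically thin LTE limit."""
    d = LINES[line]
    W = W_K_kms * 1.0e5   # integrated intensity: K km/s -> K cm/s
    return 8.0 * math.pi * K_B * d["freq"] ** 2 / (H * C ** 3 * d["A"]) * W

def rotation_temperature(W10_K_kms, W32_K_kms):
    """Temperature implied by the ratio of the two lines (rotation diagram)."""
    n1 = upper_level_column("1-0", W10_K_kms) / LINES["1-0"]["g_u"]
    n3 = upper_level_column("3-2", W32_K_kms) / LINES["3-2"]["g_u"]
    dE_K = LINES["3-2"]["E_u_K"] - LINES["1-0"]["E_u_K"]
    return dE_K / math.log(n1 / n3)

# Hypothetical integrated intensities (K km/s) at one map position:
print(round(rotation_temperature(W10_K_kms=100.0, W32_K_kms=500.0), 1))  # ~47 K
```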
Oka, the research team leader, said, "The results are astonishing." The "warm, dense" molecular gas in that area is concentrated in four clumps (Sgr A, L=+1.3°, L=-0.4°, L=-1.2°). Moreover, it turns out that these four gas clumps are all moving at a very fast speed of more than 100 km/s. Sgr A, one of the four gas clumps, contains "Sagittarius A*," the nucleus of the Milky Way Galaxy. Oka added, "The remaining three gas clumps are objects we discovered for the very first time. It is thought that 'Sagittarius A*' is the location of a supermassive black hole that is approximately 4 million times the mass of the sun. It can be inferred that the gas clump 'Sgr A' has a disk-shaped structure with a radius of 25 light-years and revolves around the supermassive black hole at a very fast speed."
On the other hand, the team found signs of expansion, in addition to rotation, in the remaining three gas clumps. This means that the gas clumps L=+1.3°, L=-0.4° and L=-1.2° have structures that were formed by supernova explosions that occurred within them. The gas clump L=+1.3° has the largest amount of expansion energy, equivalent to that of 200 supernova explosions. The age of the gas clumps is estimated at approximately 60,000 years. Therefore, if supernova explosions are the energy source, they must have occurred roughly once every 300 years.
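That rate follows directly from the two figures just quoted:

\[
\frac{60{,}000\ \text{years}}{200\ \text{supernova explosions}} \approx 300\ \text{years per explosion}.
\]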
The research team used the NRO 45m Telescope again to further examine the molecular gas's distribution, motion and composition to determine whether supernova explosions caused the expansion. "Observation clearly showed that the energy source of L=+1.3° is multiple supernova explosions. We detected multiple expansion structures and molecules attributed to shock waves," Oka said about the excitement when observing it. "Based on the observation of L=+1.3°, it is also natural to think that the expanding gas clumps L=-0.4° and L=-1.2° derived energy from multiple supernova explosions," Oka added.
A supernova explosion is a huge explosion that occurs when a star more massive than eight to ten times the mass of the sun ends its life. Such a high rate of supernova explosions (once per 300 years) indicates that many young, massive stars are concentrated in the gas clumps. In other words, this means that there is a massive "star cluster" in each gas clump. Based on the frequency of the supernova explosions, the team estimated the mass of the star cluster buried in L=+1.3° as more than 100,000 times the mass of the sun, which is equivalent to that of the largest star cluster found in the Milky Way Galaxy.
As just described, the star cluster is huge, but it had not been discovered until now. "The solar system is located at the edge of the Milky Way Galaxy's disk, about 30,000 light-years from the center of the Milky Way Galaxy. The huge amount of gas and dust lying between the solar system and the center of the Milky Way Galaxy prevents not only visible light, but also infrared light, from reaching Earth. Moreover, innumerable stars in the bulge and disc of the Milky Way Galaxy lie in the line of sight. Therefore, no matter how large the star cluster is, it is very difficult to see it directly at the center of the Milky Way Galaxy," Oka explained.
"Huge star clusters at the center of the Milky Way Galaxy have an important role related to formation and growth of the Milky Way Galaxy's nucleus," said Oka. According to theoretical calculations, when the density of stars at the center of star clusters increases, the stars are merged together, one after another. Then, it is expected that IMBHs with several hundred times the mass of the sun are formed. Eventually, these IMBHs and star clusters sink into the nucleus of the Milky Way Galaxy. It can be thought that the IMBHs and star clusters are then merged further, and form a massive black hole at the Milky Way Galaxy's nucleus. Alternatively, the IMBHs and star clusters could help expand an existing massive black hole.
The supermassive black hole at "Sagittarius A*," the nucleus of the Milky Way Galaxy, is thought to have grown through these processes as well. In summary, the new discovery is the finding of "cradles" of IMBHs that become "seeds" of the supermassive black hole at the nucleus.
Read more at Science Daily
River Networks On Saturn's Largest Moon, Titan, Point to a Puzzling Geologic History
New findings suggest the surface of Saturn's largest moon may have undergone a recent transformation. For many years, Titan's thick, methane- and nitrogen-rich atmosphere kept astronomers from seeing what lies beneath. Saturn's largest moon appeared through telescopes as a hazy orange orb, in contrast to other heavily cratered moons in the solar system.
In 2004, the Cassini-Huygens spacecraft -- a probe that flies by Titan as it orbits Saturn -- penetrated Titan's haze, providing scientists with their first detailed images of the surface. Radar images revealed an icy terrain carved out over millions of years by rivers of liquid methane, similar to how rivers of water have etched into Earth's rocky continents.
While images of Titan have revealed its present landscape, very little is known about its geologic past. Now researchers at MIT and the University of Tennessee at Knoxville have analyzed images of Titan's river networks and determined that in some regions, rivers have created surprisingly little erosion. The researchers say there are two possible explanations: either erosion on Titan is extremely slow, or some other recent phenomena may have wiped out older riverbeds and landforms.
"It's a surface that should have eroded much more than what we're seeing, if the river networks have been active for a long time," says Taylor Perron, the Cecil and Ida Green Assistant Professor of Geology at MIT. "It raises some very interesting questions about what has been happening on Titan in the last billion years."
A paper detailing the group's findings will appear in the Journal of Geophysical Research-Planets.
What accounts for a low crater count?
Compared to most moons in our solar system, Titan is relatively smooth, with few craters pockmarking its facade. Titan is around four billion years old, about the same age as the rest of the solar system. But judging by the number of craters, one might estimate that its surface is much younger, between 100 million and one billion years old.
What might explain this moon's low crater count? Perron says the answer may be similar to what happens on Earth.
"We don't have many impact craters on Earth," Perron says. "People flock to them because they're so few, and one explanation is that Earth's continents are always eroding or being covered with sediment. That may be the case on Titan, too."
For example, plate tectonics, erupting volcanoes, advancing glaciers and river networks have all reshaped Earth's surface over billions of years. On Titan, similar processes -- tectonic upheaval, icy lava eruptions, erosion and sedimentation by rivers -- may be at work.
But identifying which of these geological phenomena may have modified Titan's surface is a significant challenge. Images generated by the Cassini spacecraft, similar to aerial photos but with much coarser resolution, are flat, depicting terrain from a bird's-eye perspective, with no information about a landform's elevation or depth.
"It's an interesting challenge," Perron says. "It's almost like we were thrown back a few centuries, before there were many topographic maps, and we only had maps showing where the rivers are."
Charting a river's evolution
Perron and MIT graduate student Benjamin Black set out to determine the extent to which river networks may have renewed Titan's surface. The team analyzed images taken from Cassini-Huygens, and mapped 52 prominent river networks from four regions on Titan. The researchers compared the images with a model of river network evolution developed by Perron. This model depicts the evolution of a river over time, given variables such as the strength of the underlying material and the rate of flow through the river channels. As a river erodes slowly through the ice, it transforms from a long, spindly thread into a dense, treelike network of tributaries.
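The article does not reproduce the model's equations, but the class of model it belongs to can be illustrated. A common idealization of bedrock-river erosion (used here purely as an example, not as the study's actual model) is the stream-power law, in which erosion rate scales with drainage area (a stand-in for flow) and channel slope, modulated by an erodibility constant that reflects the strength of the underlying material. The Python sketch below computes the steady-state long profile this law predicts; all parameter values are arbitrary.

```python
import numpy as np

# Stream-power idealization (illustrative only): dz/dt = U - K * A**m * S**n.
# K ~ erodibility of the substrate ("strength of the underlying material"),
# A ~ drainage area, a proxy for the rate of flow through the channel,
# S ~ local channel slope, U ~ rock uplift rate.
# At steady state (dz/dt = 0):  S = (U / K)**(1/n) * A**(-m/n).

def steady_profile(n_nodes=200, dx=100.0, K=2e-5, m=0.5, n=1.0, U=1e-4):
    x = np.arange(n_nodes) * dx                  # distance from the channel head (m)
    A = (x + dx) * 1.0e3                         # drainage-area proxy (m^2), grows downstream
    S = (U / K) ** (1.0 / n) * A ** (-m / n)     # steady-state slope at each node
    seg_drop = 0.5 * (S[:-1] + S[1:]) * dx       # elevation drop across each segment
    z = np.concatenate((np.cumsum(seg_drop[::-1])[::-1], [0.0]))  # integrate up from the outlet
    return x, z, S

x, z, S = steady_profile()
# Weaker rock (larger K) or more flow (larger A) gives gentler channels:
print(round(float(z[0]), 1), round(float(S[0]), 4), round(float(S[-1]), 4))
```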
Black compared his measurements of Titan's river networks with the model, and found the moon's rivers most resembled the early stages of a typical terrestrial river's evolution. The observations indicate that rivers in some regions have caused very little erosion, and hence very little modification of Titan's surface.
"They're more on the long and spindly side," Black says. "You do see some full and branching networks, and that's tantalizing, because if we get more data, it will be interesting to know whether there really are regional differences."
Going a step further, Black compared Titan's images with recently renewed landscapes on Earth, including volcanic terrain on the island of Kauai and recently glaciated landscapes in North America. The river networks in those locations are similar in form to those on Titan, suggesting that geologic processes may have reshaped the moon's icy surface in the recent past.
Read more at Science Daily
Stone Age Tools Help to Streamline Modern Manufacturing
Innovative research by the National Physical Laboratory (NPL) and the University of Bradford used laser microscopes to explore how stone tools were used in prehistory, and the process has helped streamline surface measurement techniques for modern manufacturers.
The analysis of stone tools is a key factor in understanding early human life including social organisation and diet. Archaeologists at the University of Bradford hypothesised that reconstructing past activities was the best way to study what each tool was used for. They proposed to measure the surface structures of replica stone tools before and after they were used in different reconstructions on two natural materials -- antler and wood.
NPL conducted surface measurement investigations on the replica tools using a confocal microscope to create a map of surface structure. Richard Leach, who led the work at NPL, said: "We measured the surfaces of each tool using a confocal microscope to create a map of its surface structure. Optical measurements create 3D constructions of each surface recorded without physically contacting the surface."
The measurements taken by NPL on each tool before, during and after wear experiments revealed variations in the surfaces that can be used to predict the use of the tools. The results offered interesting insight into the breadth of future experiments necessary to provide conclusive results on the use of stone tools in prehistory.
These measurements also formed part of a development process for new instruments being used in a wider NPL project to support all aspects of manufacturing: from turbine blades to grinding machines to mobile phone screens.
NPL has produced a range of equipment which allows the manufacturing world to gain a better understanding of surface topography, without using stylus instruments -- which have proved slow for in-process applications. NPL has worked in conjunction with the International Organization for Standardization (ISO) to develop the use of optical systems to conduct areal surface measurements, including instrumentation, calibration artefacts, good practice guides and reference software.
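As a concrete example of what an areal surface measurement yields, the sketch below computes two of the basic ISO 25178 areal texture parameters -- Sa (arithmetic mean height) and Sq (root-mean-square height) -- from a height map such as a confocal microscope produces. It is a minimal illustration: a fully ISO-compliant evaluation also involves form removal and filtering, which are omitted here, and the example surface is synthetic.

```python
import numpy as np

def areal_texture_parameters(height_map_um):
    """Basic ISO 25178 areal texture parameters from a measured height map.

    height_map_um: 2D array of surface heights (micrometres), e.g. the output
    of a confocal microscope scan. Only the mean level is removed here; a
    fully ISO-compliant evaluation would also remove form and apply filtering.
    """
    z = np.asarray(height_map_um, dtype=float)
    z = z - z.mean()                      # reference the heights to the mean level
    sa = float(np.abs(z).mean())          # Sa: arithmetic mean height
    sq = float(np.sqrt((z ** 2).mean()))  # Sq: root-mean-square height
    return sa, sq

# Synthetic example surface: a gentle waviness plus fine random roughness.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
surface = 0.5 * np.sin(x / 20.0) + 0.05 * rng.standard_normal((256, 256))
print(areal_texture_parameters(surface))
```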
More accurate surface measurements allow manufacturers to revolutionise designs of existing and developing technologies, for example by controlling how the surfaces glide through air, absorb or repel water, or reflect light.
Read more at Science Daily
Jul 19, 2012
Understanding Hot Nuclear Matter That Permeated the Early Universe
A review article appearing in the July 20, 2012, issue of the journal Science describes groundbreaking discoveries that have emerged from the Relativistic Heavy Ion Collider (RHIC) at the U.S. Department of Energy's Brookhaven National Laboratory, synergies with the heavy-ion program at the Large Hadron Collider (LHC) in Europe, and the compelling questions that will drive this research forward on both sides of the Atlantic. With details that help enlighten our understanding of the hot nuclear matter that permeated the early universe, the article is a prelude to the latest findings scientists from both facilities will present at the next gathering of physicists dedicated to this research -- Quark Matter 2012, August 12-18 in Washington, D.C.
"Nuclear matter in today's universe hides inside atomic nuclei and neutron stars," begin the authors, Barbara Jacak, a physics professor at Stony Brook University and spokesperson for the PHENIX experiment at RHIC, and Berndt Mueller, a theoretical physicist at Duke University. Collisions between heavy ions at machines like RHIC, running since 2000, and more recently, the LHC, make this hidden realm accessible by recreating the extreme conditions of the early universe on a microscopic scale. The temperatures achieved in these collisions -- more than 4 trillion degrees Celsius, the hottest ever created in a laboratory -- briefly liberate the subatomic quarks and gluons that make up protons and neutrons of ordinary atomic nuclei so scientists can study their properties and interactions.
"Quarks and the gluons that hold them together are the building blocks of all the visible matter that exists in the universe today -- from stars, to planets, to people," Jacak said. "Understanding the evolution of our universe thus requires knowledge of the structure and dynamics of these particles in their purest form, a primordial 'soup' known as quark-gluon plasma (QGP)."
RHIC was the first machine to demonstrate the formation of quark-gluon plasma, and determine its unexpected properties. Instead of an ideal gas of weakly interacting quarks and gluons, the QGP discovered at RHIC behaves like a nearly frictionless liquid. This matter's extremely low viscosity (near the lowest theoretically possible), its ability to stop energetic particle jets in their tracks, and its very rapid attainment of such a high equilibrium temperature all suggest that the fluid's constituents are quite strongly interacting, or coupled.
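The "lowest theoretically possible" viscosity mentioned here refers to a conjectured lower bound on the ratio of shear viscosity to entropy density, obtained from gauge/gravity (string-theory) arguments:

\[
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B},
\]

a bound the RHIC fluid appears to come close to saturating.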
"Understanding strongly coupled or strongly correlated systems is at the intellectual forefront of multiple subfields of physics," the authors write. The findings at RHIC have unanticipated connections to several of these, including conventional plasmas, superconductors, and even some atoms at the opposite extreme of the temperature scale -- a minute fraction of a degree above absolute zero -- which also behave as a nearly perfect fluid with vanishingly low viscosity when confined within an atomic trap.
Another stunning surprise was that mathematical approaches using methods of string theory and theoretical black holes occupying extra dimensions could be used to describe some of these seemingly unrelated strongly coupled systems, including RHIC's nearly perfect liquid. "Physicists were astounded," the authors note. Although the mathematics is clear and well established, the physical reasons for the relationship are still a deep mystery.
When the LHC began its first heavy ion experiments in 2010 -- at nearly 14 times higher energy than RHIC's -- the experiments largely confirmed RHIC's pioneering findings with evidence of a strongly coupled, low-viscosity liquid, albeit at a temperature about 30 percent higher than at RHIC. With a higher energy range, the LHC offers a higher rate of rare particles, such as heavy (charm and bottom) quarks, and high-energy jets that can probe particular properties of the QGP system. RHIC can go to lower energies and collide a wide range of ions, from protons, to copper, to gold, to uranium -- and produce asymmetric collisions between two different kinds of ions. This flexibility at RHIC allows scientists to produce QGP under a wide variety of initial conditions, and thereby to distinguish intrinsic QGP properties from the influence of the initial conditions.
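For reference, the factor of "nearly 14" follows from the center-of-mass energy per colliding nucleon pair: 200 GeV at RHIC's top heavy-ion energy versus 2.76 TeV in the LHC's first lead-lead runs,

\[
\frac{2.76\ \text{TeV}}{200\ \text{GeV}} = \frac{2760}{200} \approx 13.8 .
\]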
"The two facilities are truly complementary," said Mueller, whose work on quantum chromodynamics (QCD), the theory that describes the interactions of quarks and gluons, helps guide experiments and interpret results at both facilities. "Both RHIC and the LHC are essential to advancing our understanding of the subatomic interactions that governed the early universe, and how those gave form to today's matter as they coalesced into more ordinary forms."
An essential part of the experimental and theoretical research path going forward will be a detailed exploration of the nuclear "phase diagram" -- how quark matter evolves over a range of energies, temperatures, and densities. LHC will search the highest range of energies, where the matter produced contains quarks and antiquarks in almost complete balance. But all evidence to date from both colliders suggests that RHIC is in the energy "sweet spot" for exploring the transition from ordinary matter to QGP -- analogous to the way an ordinary substance like water changes phases from ice to liquid water to gas.
Read more at Science Daily
"Nuclear matter in today's universe hides inside atomic nuclei and neutron stars," begin the authors, Barbara Jacak, a physics professor at Stony Brook University and spokesperson for the PHENIX experiment at RHIC, and Berndt Mueller, a theoretical physicist at Duke University. Collisions between heavy ions at machines like RHIC, running since 2000, and more recently, the LHC, make this hidden realm accessible by recreating the extreme conditions of the early universe on a microscopic scale. The temperatures achieved in these collisions -- more than 4 trillion degrees Celsius, the hottest ever created in a laboratory -- briefly liberate the subatomic quarks and gluons that make up protons and neutrons of ordinary atomic nuclei so scientists can study their properties and interactions.
"Quarks and the gluons that hold them together are the building blocks of all the visible matter that exists in the universe today -- from stars, to planets, to people," Jacak said. "Understanding the evolution of our universe thus requires knowledge of the structure and dynamics of these particles in their purest form, a primordial 'soup' known as quark-gluon plasma (QGP)."
RHIC was the first machine to demonstrate the formation of quark-gluon plasma, and determine its unexpected properties. Instead of an ideal gas of weakly interacting quarks and gluons, the QGP discovered at RHIC behaves like a nearly frictionless liquid. This matter's extremely low viscosity (near the lowest theoretically possible), its ability to stop energetic particle jets in their tracks, and its very rapid attainment of such a high equilibrium temperature all suggest that the fluid's constituents are quite strongly interacting, or coupled.
"Understanding strongly coupled or strongly correlated systems is at the intellectual forefront of multiple subfields of physics," the authors write. The findings at RHIC have unanticipated connections to several of these, including conventional plasmas, superconductors, and even some atoms at the opposite extreme of the temperature scale -- a minute fraction of a degree above absolute zero -- which also behave as a nearly perfect fluid with vanishingly low viscosity when confined within an atomic trap.
Another stunning surprise was that mathematical approaches using methods of string theory and theoretical black holes occupying extra dimensions could be used to describe some of these seemingly unrelated strongly coupled systems, including RHIC's nearly perfect liquid. "Physicists were astounded," the authors note. Although the mathematics is clear and well established, the physical reasons for the relationship are still a deep mystery.
When the LHC began its first heavy ion experiments in 2010 -- at nearly 14 times higher energy than RHIC's -- they largely confirmed RHIC's pioneering findings with evidence of a strongly coupled, low-viscosity liquid, albeit at a temperature about 30 percent higher than at RHIC. With a higher energy range, LHC offers a higher rate of rare particles, such as heavy (charm and bottom) quarks, and high- energy jets that can probe particular properties of the QGP system. RHIC can go to lower energies and collide a wide range of ions from protons, to copper, to gold, to uranium -- and produce asymmetric collisions between two different kinds of ions. This flexibility at RHIC allows scientists to produce QGP under a wide variety of initial conditions, and thereby to distinguish intrinsic QGP properties from the influence of the initial conditions.
"The two facilities are truly complementary," said Mueller, whose work on quantum chromodynamics (QCD), the theory that describes the interactions of quarks and gluons, helps guide experiments and interpret results at both facilities. "Both RHIC and the LHC are essential to advancing our understanding of the subatomic interactions that governed the early universe, and how those gave form to today's matter as they coalesced into more ordinary forms."
An essential part of the experimental and theoretical research path going forward will be a detailed exploration of the nuclear "phase diagram" -- how quark matter evolves over a range of energies, temperatures, and densities. LHC will search the highest range of energies, where the matter produced contains quarks and antiquarks in almost complete balance. But all evidence to date from both colliders suggests that RHIC is in the energy "sweet spot" for exploring the transition from ordinary matter to QGP -- analogous to the way an ordinary substance like water changes phases from ice to liquid water to gas.
Read more at Science Daily
Scientists Connect Seawater Chemistry With Ancient Climate Change and Evolution
Humans get most of the blame for climate change, with little attention paid to the contribution of other natural forces. Now, scientists from the University of Toronto and the University of California Santa Cruz are shedding light on one potential cause of the cooling trend of the past 45 million years that has everything to do with the chemistry of the world's oceans.
"Seawater chemistry is characterized by long phases of stability, which are interrupted by short intervals of rapid change," says Professor Ulrich Wortmann in the Department of Earth Sciences at the University of Toronto, lead author of a study to be published in Science this week. "We've established a new framework that helps us better interpret evolutionary trends and climate change over long periods of time. The study focuses on the past 130 million years, but similar interactions have likely occurred through the past 500 million years."
Wortmann and co-author Adina Paytan of the Institute of Marine Sciences at the University of California Santa Cruz point to the collision between India and Eurasia approximately 50 million years ago as one example of an interval of rapid change. This collision enhanced dissolution of the most extensive belt of water-soluble gypsum on Earth, stretching from Oman to Pakistan, and well into Western India -- remnants of which are well exposed in the Zagros mountains.
The authors suggest that the dissolution or creation of such massive gypsum deposits will change the sulfate content of the ocean, and that this will affect the amount of sulfate aerosols in the atmosphere and thus climate. "We propose that times of high sulfate concentrations in ocean water correlate with global cooling, just as times of low concentration correspond with greenhouse periods," says Paytan.
"When India and Eurasia collided, it caused dissolution of ancient salt deposits which resulted in drastic changes in seawater chemistry," Paytan continues. "This may have led to the demise of the Eocene epoch -- the warmest period of the modern-day Cenozoic era -- and the transition from a greenhouse to icehouse climate, culminating in the beginning of the rapid expansion of the Antarctic ice sheet."
The researchers combined data on past seawater sulfur composition, assembled by Paytan in 2004, with Wortmann's recent discovery of the strong link between marine sulfate concentrations and carbon and phosphorus cycling. They were able to explain the seawater sulfate isotope record as a result of massive changes to the accumulation and weathering of gypsum -- the mineral form of hydrated calcium sulfate.
Read more at Science Daily
"Seawater chemistry is characterized by long phases of stability, which are interrupted by short intervals of rapid change," says Professor Ulrich Wortmann in the Department of Earth Sciences at the University of Toronto, lead author of a study to be published in Science this week. "We've established a new framework that helps us better interpret evolutionary trends and climate change over long periods of time. The study focuses on the past 130 million years, but similar interactions have likely occurred through the past 500 million years."
Wortmann and co-author Adina Paytan of the Institute of Marine Sciences at the University of California Santa Cruz point to the collision between India and Eurasia approximately 50 million years ago as one example of an interval of rapid change. This collision enhanced dissolution of the most extensive belt of water-soluble gypsum on Earth, stretching from Oman to Pakistan, and well into Western India -- remnants of which are well exposed in the Zagros mountains.
The authors suggest that the dissolution or creation of such massive gyspum deposits will change the sulfate content of the ocean, and that this will affect the amount of sulfate aerosols in the atmosphere and thus climate. "We propose that times of high sulfate concentrations in ocean water correlate with global cooling, just as times of low concentration correspond with greenhouse periods," says Paytan.
"When India and Eurasia collided, it caused dissolution of ancient salt deposits which resulted in drastic changes in seawater chemistry," Paytan continues. "This may have led to the demise of the Eocene epoch -- the warmest period of the modern-day Cenozoic era -- and the transition from a greenhouse to icehouse climate, culminating in the beginning of the rapid expansion of the Antarctic ice sheet."
The researchers combined data of past seawater sulfur composition, assembled by Paytan in 2004, with Wortmann's recent discovery of the strong link between marine sulfate concentrations and carbon and phosphorus cycling. They were able to explain the seawater sulfate isotope record as a result of massive changes to the accumulation and weathering of gyspum -- the mineral form of hydrated calcium sulfate.
Read more at Science Daily
The Bizarre, Breathtaking Science Photos of Fritz Goro
If science seeks to uncover the truth, then photography seeks to lay that truth bare to the world.
Photographer Fritz Goro understood this sentiment well. His photographs highlight the beautiful, strange, amusing and poignant within the realm of scientific inquiry. Goro spent four decades as a photographer for LIFE magazine and Scientific American. The photos here are a selection of those featured at LIFE.com.
Goro was born in Bremen, Germany, and trained in the Bauhaus school of sculpture and design. He began a career in photojournalism, and by age 30 had become editor of the weekly Munich Illustrated. He left Germany with his wife in 1933 when Hitler came to power, came to the U.S. in 1936, and soon began freelancing for LIFE.
Scientific subjects were his passion, and according to LIFE he said he took photos of things that “more knowledgeable photographers might have considered unphotographable… I began to take pictures of things I barely understood, using techniques I’d never used before.”
Among his subjects were gestating fetuses, blood circulation in the heart, and the separation of plutonium and uranium isotopes to make the atomic bomb. His New York Times obituary hails him as the inventor of macrophotography, and quotes Gerard Piel, former science editor at LIFE, as saying, “It was (Goro’s) artistry and ingenuity that made photographs of abstractions, of the big ideas from the genetic code to plate tectonics.” Goro also developed techniques for photographing bioluminescence, holograms and lasers.
Read more and see more pictures at Wired Science
World's Oldest Known Bra Found
A 15th-century bra was recently unearthed during reconstruction work at a medieval castle. The remarkably modern-looking garment is arguably the world's oldest known brassiere.
Fiber samples taken from the linen bra date to the medieval era, so this item appears to be legit. It pushes back the known history of the modern-styled bra by possibly more than 400 years.
The bra was discovered in a waste-filled vault at Lengberg Castle, East Tyrol, Austria. The stash included more than 2,700 individual textile fragments -- parts of nicely tailored trousers, buttoned shirts, and four modern-looking bras.
Beatrix Nutz, an archeologist from the University of Innsbruck, found all of it.
The bra is known as a "longline bra"; its cups are each made from two pieces of linen sewn together vertically.
A press release from the University of Innsbruck further describes the bra as follows:
The surrounding fabric of somewhat coarser linen extends down to the bottom of the ribcage with a row of six eyelets on the left side of the body for fastening with a lace. The corresponding row of eyelets is missing. Needle-lace is sewn onto the cups and the fabric above thus decorating the cleavage. In the triangular area between the two cups there might have been additional decoration, maybe another sprang-work.
Women, sometimes at the urging of men, have been trying to cover, restrain, or elevate their breasts for ages, with corsets becoming popular in the 16th century. But before the Austrian medieval bra find, there was nothing to indicate the existence of bras with clearly visible cups before the 19th century.
Read more at Discovery News
Jul 18, 2012
How Vegetarian Dinosaurs Feasted
Plant-eating dinosaurs 150 million years ago were super speedy eaters and could efficiently remove leaves from branches before swallowing the greenery whole.
The research, published in the journal Naturwissenschaften, helps to explain how some of the largest animals on Earth managed to eat such amazing quantities of plants.
The study focused in particular on Diplodocus, which probably could have won speed eating contests today. This massive beast measured nearly 100 feet in length and weighed over 33,000 pounds.
“Diplodocus would most likely have fed predominantly on leaves, either biting them off -- much like how we use our incisors -- or raking their teeth along a branch, shearing leaves off the branch,” lead author Mark Young told Discovery News.
Co-author Paul Barrett added, “It's likely that Diplodocus fed most often on conifer leaves (and accidentally ingested small twigs and branches) and also on ferns and horsetails.”
Young, a researcher in the University of Edinburgh’s School of GeoSciences, and his team made those determinations after creating a 3D model of a complete Diplodocus skull using data from a CT scan. The model was then biomechanically analyzed using a technique called finite element analysis (FEA) to test common feeding behaviors.
FEA is widely implemented to do everything from designing airplanes to making orthopedic implants. It revealed the various stresses and strains acting on the Diplodocus skull during feeding to determine whether the skull or teeth would break under certain conditions.
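To give a feel for what finite element analysis does under the hood, here is a minimal, hypothetical sketch in Python: a one-dimensional bar fixed at one end and pulled at the other, nothing like the team's three-dimensional Diplodocus skull model, with made-up material numbers. The structure is split into simple elements, their stiffnesses are assembled into a matrix, the system is solved for displacements, and stresses are read off from the strains.

```python
# Toy 1D finite element analysis of a bar under an axial pull.
# All values are hypothetical; the study's actual model was a 3D skull mesh built from CT data.
import numpy as np

E = 10e9          # Young's modulus in Pa (made-up, roughly bone-like)
A = 1e-4          # cross-sectional area in m^2
total_len = 0.5   # bar length in m
n_elem = 10       # number of elements
L = total_len / n_elem
k = E * A / L     # axial stiffness of each two-node element

# Assemble the global stiffness matrix from the identical element matrices.
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Boundary conditions: node 0 is fixed; a 200 N pull is applied at the free end.
f = np.zeros(n_elem + 1)
f[-1] = 200.0

# Solve K u = f for the free nodes only (node 0 stays at zero displacement).
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Post-process: element strain = displacement change / element length, stress = E * strain.
# A real analysis would compare these stresses against the material's failure limit.
stress = E * np.diff(u) / L
print("tip displacement (m):", u[-1])
print("element stresses (Pa):", stress)
```

The same assemble-solve-postprocess loop, scaled up to millions of three-dimensional elements built from CT data, is what lets researchers see where a skull would be overstressed during a given bite or raking motion.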
“Using these techniques, borrowed from the worlds of engineering and medicine, we can start to examine the feeding behavior of this long-extinct animal in levels of detail that were simply impossible until recently,” Barrett, a paleontologist at the Natural History Museum in London, said.
The research shows that the skull and teeth of the toothy, iconic-looking dinosaur were biomechanically suited to biting leaves off and raking them from branches. The dinosaur did not rip bark from trees, as some living deer do today.
As for the super long neck and relatively tiny head of herbivorous dinosaurs, Young said, “There have been numerous hypotheses put forward to explain the evolution of extremely long necks in sauropods, ranging from sexual selection (females preferring longer necked males) to feeding on tall trees.”
Read more at Discovery News
Fortunetelling Verdict Raises Thorny Questions
Last week a federal judge in Alexandria, Louisiana, overturned a law banning fortunetelling on the basis that it is free speech protected by the First Amendment.
U.S. District Judge Dee Drell struck down an ordinance that had outlawed fortunetelling, astrology, palm reading, tarot, and other forms of divination on the grounds that such practices are fraudulent and inherently deceptive. The case involved a fortuneteller named Rachel Adams, who sued to overturn the law and won.
About one in seven Americans have consulted a psychic or fortuneteller, and their services are in high demand, especially during hard economic times. This curious case raises issues about the boundary between freedom of speech and fraudulent (or at least unproven) claims.
There are, of course, exceptions to free speech that go beyond yelling fire in a crowded theater. People who lie on their tax returns can be convicted of tax evasion, and those who lie in a court of law can be convicted of perjury, which under federal law is a felony. Companies, also, are legally prohibited from making false statements about their merchandise; Ford cannot claim its cars get 200 miles per gallon, and vitamin manufacturers cannot advertise that their pills cure cancer. But other cases are murkier.
Free Speech and The Right to Lie
Last month the Supreme Court ruled that Xavier Alvarez, a public official who falsely claimed that he had received the Medal of Honor, could not be prosecuted under the Stolen Valor Act, a 2006 law that made it a crime to falsely claim “to have been awarded any decoration or medal authorized by Congress for the Armed Forces of the United States.” Alvarez admitted that his statements were false, but claimed that his lies were free speech protected by the First Amendment. The Supreme Court agreed and overturned the law.
The First Amendment freedom to lie and misrepresent matters of fact was even invoked by top Wall Street credit rating companies including Standard & Poor’s, Moody’s Investors Service, and others. In the months and years leading up to the global financial crash, these companies routinely inflated the ratings of billions of dollars’ worth of investments being bought and sold. When investors and investigators demanded to know why companies that were given stellar confidence ratings one day went bankrupt the next, the agencies claimed that their investment ratings were merely “opinions” not necessarily based on truth or fact, and as such were protected by the First Amendment.
Psychics and fortunetellers try a similar strategy, often offering their services “for entertainment only,” a tacit acknowledgement that the information they provide may not be reliable. Yet the fact is that—like clients of credit rating companies—the clients of psychics often do take the advice they get seriously, making life, love, and career decisions based upon fortunetelling. If clients truly are seeking only entertainment, for the $40 to $100 per hour psychics typically charge there are far cheaper ways to be entertained.
Some fortunetellers offer readings for fun and pleasure, and for the most part it’s not palm reading per se that police are concerned about, it’s the confidence schemes, theft by deception, and fraud that often accompany fortunetelling. One common scam involves luring clients in with inexpensive readings, then convincing them that a recent misfortune is the result of a curse put on them by an enemy. The imaginary curse can be lifted but it won’t come cheap, and some victims have been robbed of tens of thousands of dollars. In one recent case a “psychic” misused the influence and trust placed in him to sexually exploit several women.
Read more at Discovery News
Dolphins May Be Math Geniuses
Dolphins may use complex nonlinear mathematics when hunting, according to a new study that suggests these brainy marine mammals could be far more skilled at math than was ever thought possible before.
Inspiration for the new study, published in the latest Proceedings of the Royal Society A, came after lead author Tim Leighton watched an episode of the Discovery Channel's "Blue Planet" series and saw dolphins blowing multiple tiny bubbles around prey as they hunted.
"I immediately got hooked, because I knew that no man-made sonar would be able to operate in such bubble water," explained Leighton, a professor of ultrasonics and underwater acoustics at the University of Southampton, where he is also an associate dean.
"These dolphins were either 'blinding' their most spectacular sensory apparatus when hunting -- which would be odd, though they still have sight to reply on -- or they have a sonar that can do what human sonar cannot…Perhaps they have something amazing," he added.
Leighton and colleagues Paul White and student Gim Hwa Chua set out to determine what the amazing ability might be. They started by modeling the types of echolocation pulses that dolphins emit. The researchers processed them using nonlinear mathematics instead of the standard way of processing sonar returns. The technique worked, and could explain how dolphins achieve hunting success with bubbles.
The math involved is complex. Essentially it relies upon sending out pulses that vary in amplitude. The first may have a value of 1 while the second is 1/3 that amplitude.
"So, provided the dolphin remembers what the ratios of the two pulses were, and can multiply the second echo by that and add the echoes together, it can make the fish 'visible' to its sonar," Leighton told Discovery News. "This is detection enhancement."
But that's not all. There must be a second stage to the hunt.
"Bubbles cause false alarms because they scatter strongly and a dolphin cannot afford to waste its energy chasing false alarms while the real fish escape," Leighton explained.
The second stage then involves subtracting the echoes from one another, ensuring the echo of the second pulse is first multiplied by three. The process, in short, therefore first entails making the fish visible to sonar by addition. The fish is then made invisible by subtraction to confirm it is a true target.
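As a rough illustration of the add-then-subtract logic described above (a simplified reading of the article, not Leighton's actual processing pipeline), the toy Python sketch below models a fish as a linear scatterer and a bubble cloud as a made-up nonlinear one, insonifies both with a full-amplitude pulse and a one-third-amplitude pulse, and then forms the scaled sum and scaled difference of the echoes.

```python
# Toy model of the two-pulse trick: a fish scatters echoes linearly, while a bubble
# cloud is modelled (purely for illustration) as scattering nonlinearly.
import numpy as np

t = np.linspace(0.0, 1.0, 500)
pulse = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)

def fish_echo(p):
    return 0.8 * p                     # linear scatterer: echo scales with the pulse

def bubble_echo(p):
    return 0.5 * p + 0.4 * p ** 2      # made-up nonlinear scatterer

for name, scatter in [("fish", fish_echo), ("bubbles", bubble_echo)]:
    e1 = scatter(pulse)                # echo of the full-amplitude pulse
    e2 = scatter(pulse / 3.0)          # echo of the one-third-amplitude pulse
    enhanced = e1 + 3.0 * e2           # stage 1: scaled addition (detection enhancement)
    residual = e1 - 3.0 * e2           # stage 2: scaled subtraction (clutter check)
    print(f"{name:7s} sum energy = {np.sum(enhanced ** 2):9.2f}   "
          f"difference energy = {np.sum(residual ** 2):9.2f}")
```

For the linear "fish" the scaled difference cancels almost exactly while the scaled sum is reinforced; for the nonlinear "bubbles" the difference does not cancel, which is what would let the second stage weed out false alarms.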
In order to confirm that dolphins use such nonlinear mathematical processing, some questions must still be answered. For example, for the technique to work, dolphins entering bubbly water would have to emit pulses at a frequency low enough that they could still hear echoes at twice that pitch.
"Until measurements are taken of wild dolphin sonar as they hunt in bubbly water, these questions will remain unanswered," Leighton said. "What we have shown is that it is not impossible to distinguish targets in bubbly water using the same sort of pulses that dolphins use."
If replicated, the sonar model may prove to be a huge benefit to humans. It might be able to detect covert circuitry, such as bugging devices hidden in walls, stones or foliage. It could also dramatically improve detection of sea mines.
"Currently, the navy uses dolphins or divers feeling with their hands in such difficult conditions as near shore bubbly water, for example in the Gulf," he said.
Read more at Discovery News
Exoplanet Neighbor is Smaller than Earth
Astronomers believe they have found a planet about two-thirds the size of Earth orbiting a star 33 light-years away, a virtual neighbor in cosmic terms.
Don't pack your suitcase yet. The planet, known as UCF-1.01, is not very hospitable, with temperatures that exceed 1,000 degrees Fahrenheit, a surface that may be volcanic or molten and little if any atmosphere.
It's not just a summer heat wave. For UCF-1.01, it's a way of life. The planet is located so close to its parent star, a red dwarf known as GJ 436, that it completes an orbit in 1.4 Earth days.
Astronomers were following up studies of a Neptune-sized world already known to be orbiting GJ 436 when they realized there may be one -- or even two -- additional planets in the system.
Confirmation, however, will require telescopes more sensitive than what is currently available.
The studies were conducted with NASA's Spitzer infrared space telescope, which recorded slight, regular dips in the amount of light coming from the star. This phenomenon could be due to a planet passing in front of its parent star, relative to the telescope's view.
NASA's Kepler space telescope uses this technique to look for so-called transiting planets. The smaller the planet, however, the more difficult the search.
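A quick back-of-the-envelope calculation shows why such a small world is so hard to catch in transit: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. The planet size below comes from the article; the radius adopted for the red dwarf GJ 436, about 0.46 times that of the sun, is an assumed value for illustration only.

```python
# Back-of-the-envelope transit depth: the fractional dip in starlight is roughly
# (planet radius / star radius) squared. The stellar radius used for GJ 436 is assumed.
R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

planet_radius_km = (2.0 / 3.0) * R_EARTH_KM   # "about two-thirds the size of Earth"
star_radius_km = 0.46 * R_SUN_KM              # assumed radius of the red dwarf GJ 436

depth = (planet_radius_km / star_radius_km) ** 2
print(f"expected transit depth: {depth:.2e} "
      f"(a {depth * 100:.3f}% dip in the star's brightness)")
```

With these numbers the expected dip is only about 0.02 percent of the star's light, far shallower than the roughly 1 percent dip a Jupiter-sized planet would produce in front of a sun-like star.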
Of Kepler's 1,800 candidate planet-hosting stars, only three have been confirmed to have worlds smaller than Earth and just one is believed to be smaller than Spitzer’s potential find.
Read more at Discovery News
Weird Ancient Spiral Galaxy Discovered
Astronomers have discovered a three-armed spiral galaxy dating back nearly 11 billion years -- much older than similarly structured objects that are common in the modern universe.
The discovery was so jarring, scientists at first didn't believe their data.
"Our first thought was that we must have the wrong distance for the galaxy," lead researcher David Law, with the University of Toronto, told Discovery News.
"Then we thought perhaps it was the human brain playing tricks on us. If you look at enough blobby, weird-looking galaxies sooner or later, like a Rorschach blob test, you start to pick out patterns whether or not they're there," Law said.
Follow-up observation, however, showed that the galaxy, known as Q2343-BX442 and located in the direction of the Pegasus constellation, was not a mass of clumpy galaxies, but indeed a regular, rotating, spiral disk galaxy, albeit a thicker and puffier rendition of modern day spiral galaxies, such as the Milky Way.
The Hubble Space Telescope was used to image the galaxy's spiral structure, but to prove the spirals were indeed rotating, the researchers used the Keck II telescope in Hawaii to study the object's internal motions.
"It was shocking," said astronomer Alice Shapley, with the University of California, Los Angeles. "At redshift 2, where we found this galaxy, there hadn't been any other spirals or any other rotating disk galaxies found. We don't know of any other samples like this."
Most galaxies dating back to redshift 2, a cosmic yardstick that equates to about 10.7 billion years ago, look lumpy and irregular, without symmetry.
"They're kind of like train wrecks," Shapley told Discovery News.
Out of a sample of 306 Hubble Space Telescope images of ancient galaxies, just one -- BX442 -- had this beautiful spiral pattern, she added.
Scientists believe the galaxy's shape is due to gravitational effects of a small intruder galaxy. If that is true, BX442's days as a spiral are numbered.
Computer simulations show BX442, a relatively large galaxy with about the same mass as the Milky Way, would last about 100 million years as a spiral structure, a relative blink in cosmic time.
"We think that we just happened to catch it at a very special time," Shapley said. "I'd say by today, it probably doesn't look like a spiral galaxy."
Other spiral galaxies, like our own Milky Way, are longer-lived.
"One of the leading mechanisms that we believe explains modern day spirals, such as the Milky Way, is what is called 'density wave theory,' which doesn't need any kind of nearby galaxy. It happens from the disk alone in isolation," Law said.
Read more at Discovery News
Jul 16, 2012
Cannabis 'Pharma Factory' Discovered
University of Saskatchewan (U of S) researchers have discovered the chemical pathway that Cannabis sativa uses to create bioactive compounds called cannabinoids, paving the way for the development of marijuana varieties that produce pharmaceuticals or cannabinoid-free industrial hemp. The research appears online in the July 16 early edition of the Proceedings of the National Academy of Sciences (PNAS).
U of S adjunct professor of biology Jon Page explains that the pathway is an unusual one, involving a specialized version of one enzyme, called hexanoyl-CoA synthetase, and another enzyme, called olivetolic acid cyclase (OAC), that has never before been seen in plants.
"What cannabis has done is take a rare fatty acid with a simple, six-carbon chain and use it as a building block to make something chemically complex and pharmacologically active," Page says.
Page led the research with PhD student Steve Gagne, who discovered OAC, and postdoctoral researcher Jake Stout, who discovered hexanoyl-CoA synthetase (reported earlier this year in The Plant Journal).
Cannabis has been cultivated for thousands of years for food, fibre, medicine and as a psychoactive drug. Cannabinoids such as delta-9-tetrahydrocannabinol, or THC, are produced on the flowers of the female plant in tiny hair-like structures called trichomes, the plant's own "chemical factories." The researchers used genomic analysis of isolated trichome cells to produce a catalog of the genes involved in cannabinoid production.
Page and his colleagues have already used the new enzymes to coax yeast to produce olivetolic acid, a key metabolic intermediate on the biochemical pathway that leads to cannabinoids.
"Now that we know the pathway, we could develop ways to produce cannabinoids with yeast or other microorganisms, which could be a valuable alternative to chemical synthesis for producing cannabinoids for the pharmaceutical industry," Page says.
There are more than 100 known cannabinoids, only a few of which have been explored for their possible medicinal uses. THC is the main psychoactive cannabinoid, responsible for the "high" sought by recreational users, as well as medicinal effects such as pain relief, nausea suppression and appetite stimulation. More than 19,000 patients in Canada are authorized to legally use marijuana to benefit from these effects, and many others use cannabinoid-containing drugs via prescription. Another important cannabinoid, cannabidiol (CBD), has anti-anxiety and neuroprotective properties.
Page explains that knowledge of the cannabinoid-making pathway could also make matters easier for Canadian farmers. Plant breeders can now look for cannabis strains that lack key parts of the cannabinoid-making pathway, which would allow for zero-THC varieties (current Canadian regulations call for no more than 0.3 per cent THC for industrial hemp, compared to 15 per cent or higher in the more potent marijuana varieties).
Although hemp cultivation in Canada dates back to the 1600s in Quebec, today industrial hemp is a niche crop, grown mostly on the Prairies. Its popularity fluctuates considerably, with about 15,700 hectares (39,000 acres) grown in 2011 according to statistics from Health Canada, which regulates the crop.
While hemp is well-known as a fibre crop for everything from textiles, rope and paper, it is more often grown in Canada for its seed. Hemp seed, which is high in omega-3 and omega-6 fatty acids, is marketed for its healthy qualities. It is used in everything from lactose-free hemp milk, breakfast cereals, snack foods and protein supplements for athletes. Hemp oil is also used in cosmetic skin care products.
Read more at Science Daily
Defining the Mechanical Mechanisms in Living Cells
If you place certain types of living cells on a microscope slide, the cells will inch across the glass, find their neighbors, and assemble themselves into a simple, if primitive, tissue. A new study at Stanford University may help explain this phenomenon -- and much more about the mechanical structure and behavior of complex living organisms.
In the paper published in the Proceedings of the National Academy of Sciences, chemical engineer Alexander Dunn, PhD, and a multidisciplinary team of researchers in biology, physiology, and chemical and mechanical engineering, were able to measure -- and to literally see -- the mechanical forces at play between and within the living cells.
There are scads of data explaining chemical signaling between cells. "And yet, one of the great roadblocks to a complete knowledge of how cells work together to form tissues, organs and, ultimately, us, is an understanding of the mechanical forces at play between and within cells," said Dunn.
Using a new force-sensing technique, Dunn and team have been able to see mechanical forces at work inside living cells to understand how cells connect to one another and how individual cells control their own shape and movement within larger tissues.
Pulling back the veil on the exact nature of this mechanism could have bearing on biological understanding ranging from how tissues and tumors form and grow, to the creation of entire complex living organisms.
Seeing the force
"Cells are really just machines. Small, incredibly complex biological machines, but machines nonetheless," said Dunn. "They rely on thousands of moving parts that give the cell shape and control of its destiny."
The mechanical parts are proteins whose exact functions often remain a mystery, but Dunn and team have helped explain the behaviors of a few.
At its most basic level, a cell is like a balloon filled with saltwater, Dunn explained. The exterior of the cell, the balloon part, is known as the membrane. Protruding through the membrane, with portions both inside and outside the cell, are certain proteins called cadherins.
Outside of the cell, cadherins bind one cell to its neighbors like Velcro. The 'herin' portion of the name, in fact, shares a Latin root with "adhere."
On the inside of the cell, cadherin is connected to long fibers of actin and myosin that stretch from membrane to nucleus to membrane again. Actin and myosin work together as the muscle of the cell, providing tension that gives the cell shape and the ability to control its own movement. Without this force, the balloon of the cell would be a shapeless, immobile blob.
Puppeteer's string
"If you watch a cell moving across a glass slide, you can see it attach itself on one side of the cell and detach on the other, which causes a contraction that allows the cell to, bit by bit, pull itself from place to place," said Dunn. "It's clearly moving itself."
While it was understood that cadherin and actin are connected to one another by other proteins known as catenins, what was not known was how, when, and where the cells might be using their muscles (actin and myosin) to tug on the Velcro (cadherin) that holds them to other cells.
This is an important problem in the development of organisms, since a cell must somehow control its shape and its attachments to other cells as it grows, divides, and migrates from one place to another within the tissue. Dunn and his colleagues have shown that the actin-catenin-cadherin structure transmits force within the cell and, further, that cadherin can convey mechanical forces from one cell to the next.
It is a form of mechanical communication, like the strings of a puppeteer. Dunn and others in the field believe that these mechanical forces may be important in conveying to a cell how to position itself within a tissue, when to reproduce and when to stop as the tissue reaches its proper size and shape.
"That is the theory, but an important piece was missing," said Dunn. "Our research shows that forces at cell-cell contacts can in fact be communicated from one cell to its neighbors. The theorized mechanical signaling mechanism is feasible."
Story within a story
How Dunn and his colleagues got to this point is a story in itself. It reads like the recipe for a witch's potion -- cultured canine kidney cells, DNA from jellyfish and spider silk, and microscopic glass needles.
To measure the force between cells, a team combining the skill of several Stanford laboratories -- headed by Professor Dunn in chemical engineering, professors William Weis and W. James Nelson in the Department of Molecular and Cellular Physiology and associate professor of mechanical engineering Beth Pruitt -- used a tiny and ingenious molecular force sensor developed by Martin Schwartz and colleagues at the University of Virginia. The sensor combines fluorescent proteins from jellyfish with a springy protein from spider silk.
The genes for the sensor are incorporated into the cell's DNA. Under illumination, the cells glow in varying colors depending on how much stretch the sensor is under. In this study, the force sensor is inserted into the cadherin molecules -- when the Velcro stretches, so does the sensor.
The team then took things a step further. By turning the activity of myosin, actin and catenin on and off, they were able to determine that these proteins are in fact linked together and are at the heart of inter- and intra-cellular mechanical force transmission.
Lastly, using glass microneedles, the team tugged at connected pairs of cells, pulling at one cell to show that force gets communicated to the other through the cadherin interface.
"At this point we now know that a cell exerts exquisite control over the balance of its internal forces and can detect force exerted from outside by its neighbors, but we still know next to nothing about how," said Dunn. "We are extremely curious to find out more."
Read more at Science Daily
Engineering Technology Reveals Eating Habits of Giant Dinosaurs
High-tech engineering methods, traditionally used to design racing cars and aeroplanes, have helped researchers to understand how plant-eating dinosaurs fed 150 million years ago.
A team of international researchers, led by the University of Bristol and the Natural History Museum, used CT scans and biomechanical modelling to show that Diplodocus -- one of the largest dinosaurs ever discovered -- had a skull adapted to strip leaves from tree branches.
The research is published today (July 16, 2012) in the natural sciences journal Naturwissenschaften.
The Diplodocus is a sauropod from the Jurassic Period and one of the longest animals to have lived on Earth, measuring over 30 metres in length and weighing around 15 tonnes.
While Diplodocus and its sauropod relatives are known to have been massive herbivores, there has been great debate about exactly how they ate such large quantities of plants. The aberrant Diplodocus, with its long snout and protruding peg-like teeth restricted to the very front of its mouth, has been at the centre of this controversy.
To solve the mystery, a 3D model of a complete Diplodocus skull was created using data from a CT scan. This model was then biomechanically analysed to test three feeding behaviours using finite element analysis (FEA).
FEA is widely used, from designing aeroplanes to orthopaedic implants. It revealed the various stresses and strains acting on the Diplodocus' skull during feeding to determine whether the skull or teeth would break under certain conditions.
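To give a flavour of what finite element analysis actually computes, here is a deliberately simplified sketch in Python: a one-dimensional elastic bar, split into linear elements, fixed at one end and pulled at the other. It is not the team's three-dimensional skull model, and every material and load value below is an invented illustration number.

```python
# Deliberately tiny finite element example -- NOT the published skull model.
# A 1D elastic bar is split into linear elements, fixed at the left end and
# pulled axially at the right end; we assemble the stiffness matrix, solve
# for nodal displacements and recover the stress in each element.
# All material and load values are invented for illustration.
import numpy as np

n_elements = 10
n_nodes = n_elements + 1
length = 1.0              # bar length in metres (assumed)
area = 1.0e-4             # cross-sectional area in m^2 (assumed)
E = 10.0e9                # Young's modulus in Pa, roughly bone-like (assumed)
tip_load = 500.0          # axial force at the free end, in newtons (assumed)

le = length / n_elements
k_local = (E * area / le) * np.array([[1.0, -1.0],
                                      [-1.0, 1.0]])

# Assemble the global stiffness matrix from identical element matrices.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elements):
    K[e:e + 2, e:e + 2] += k_local

# Load vector: force applied at the last node only.
f = np.zeros(n_nodes)
f[-1] = tip_load

# Boundary condition: node 0 is fixed, so solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Strain and stress per element, recovered from the displacements.
strain = np.diff(u) / le
stress = E * strain
print("tip displacement (m):", u[-1])
print("element stresses (Pa):", stress)
```

The same recipe of assembling element stiffness matrices, applying loads and boundary conditions, solving for displacements and then recovering stresses is what a full three-dimensional skull model does, only with vastly more elements.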
The team that made this discovery was led by Dr Emily Rayfield of Bristol University's School of Earth Sciences and Dr Paul Barrett of The Natural History Museum in London. Dr Mark Young, a former student working at both institutions, ran the analyses during his PhD.
Dr Young said: "Sauropod dinosaurs, like Diplodocus, were so weird and different from living animals that there is no animal we can compare them with. This makes understanding their feeding ecology very difficult. That's why biomechanical modelling is so important to our understanding of long-extinct animals."
Dr Paul Barrett added: "Using these techniques, borrowed from the worlds of engineering and medicine, we can start to examine the feeding behaviour of this long-extinct animal in levels of detail which were simply impossible until recently."
Numerous hypotheses of feeding behaviour have been suggested for Diplodocus since its discovery over 130 years ago. These have ranged from standard biting and combing leaves through the peg-like teeth to ripping bark from trees, similar to the behaviour of some living deer, and even plucking shellfish from rocks.
Read more at Science Daily
When Galaxies Collide: Beautiful Images of Cosmic Impacts
Galactic collisions are among the most ferocious and stunning events in our universe.
These cosmic pile-ups occur whenever galaxies become gravitationally attracted to one another. Multiple galaxies may spiral around each other for billions of years, creating odd distortions and beautiful trails of stars as they pass. Eventually, the objects crash together in a forceful embrace.
Since the first galaxies coalesced several hundred million years after the Big Bang, their collisions have been influential in shaping the history of our universe.
“As small galaxies merge, they make larger galaxies, and those will then merge to make still larger galaxies, and so on, up to and including the present-day galaxies,” said astronomer Kirk Borne of George Mason University.
Because of the vast distances between them, there's a low probability that stars within galaxies will actually hit head-on. But gravitational forces can wrest stars from their previous orbits, scrambling the shape of the galaxies involved.
Friction between diffuse gas and dust inside each galaxy raises temperatures, and interstellar material often combines into huge molecular clouds. All this mass in one place triggers prodigious star formation, with stellar birth rates increasing by a hundredfold.
The increased light from this extra star formation allows astronomers to see galaxy mergers in the distant universe, helping them learn about some of the earliest periods of cosmic history. Understanding how larger galaxies coalesce from smaller galaxies provides important information on our own cosmic origin story: the formation of the Milky Way galaxy.
The Milky Way formed through the long-ago merging of progenitor galaxies and is on course to hit our neighbor, the Andromeda galaxy, in about 4 billion years. Simulations indicate that the Earth and solar system won’t be destroyed in this collision, though the sun may be tossed into a brand new galactic region.
Along with other astronomers, Borne is a member of the Galaxy Zoo project, an online collaboration between scientists and interested citizens to sift through astronomical telescope datasets and classify galaxies and their behavior. His specialty is the Galaxy Zoo: Mergers project, which studies galactic collisions.
Their website has an online Java applet that lets anyone simulate galaxies smashing together. Just input parameters like the galaxies’ relative masses, collision speeds and angles, and then watch the results.
The applet's purpose is to find the initial conditions for different real-world galaxy collisions. Volunteers on the project have run more than 3 million simulations and found models describing 54 real-world galaxies, said astronomer John Wallin, the Galaxy Zoo: Mergers project lead.
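For a sense of what such a simulation involves, here is a rough Python sketch in the spirit of classic restricted three-body encounter models: two point-mass "galaxy centres" plus a ring of massless test stars. It is emphatically not the Galaxy Zoo: Mergers applet or its fitting machinery, and every parameter value below is invented for illustration.

```python
# Rough restricted three-body sketch of a galaxy flyby: two point-mass
# "galaxy centres" attract each other and a set of massless test stars.
# Illustration only, not the Galaxy Zoo applet; all values are arbitrary.
import numpy as np

G = 1.0                       # gravitational constant in code units
m1, m2 = 1.0, 0.5             # masses of the two galaxy centres (assumed)
soft = 0.05                   # softening length to keep forces finite

def star_accel(stars, centres):
    """Acceleration of each test star due to the two centres."""
    a = np.zeros_like(stars)
    for c, m in zip(centres, (m1, m2)):
        d = c - stars
        r2 = np.sum(d * d, axis=1) + soft**2
        a += G * m * d / r2[:, None] ** 1.5
    return a

def centre_accel(centres):
    """Mutual acceleration of the two centres."""
    d = centres[1] - centres[0]
    r3 = (d @ d + soft**2) ** 1.5
    return np.array([G * m2 * d / r3, -G * m1 * d / r3])

rng = np.random.default_rng(0)
n_stars = 500
radii = rng.uniform(0.3, 1.0, n_stars)
phi = rng.uniform(0.0, 2.0 * np.pi, n_stars)
stars = np.column_stack([radii * np.cos(phi), radii * np.sin(phi)])
vcirc = np.sqrt(G * m1 / radii)                       # circular orbits
svel = np.column_stack([-vcirc * np.sin(phi), vcirc * np.cos(phi)])

centres = np.array([[0.0, 0.0], [6.0, 3.0]])          # intruder starts far away
cvel = np.array([[0.0, 0.0], [-0.6, -0.15]])          # and falls past galaxy 1

dt, n_steps = 0.01, 4000
for _ in range(n_steps):                              # leapfrog (kick-drift-kick)
    svel += 0.5 * dt * star_accel(stars, centres)
    cvel += 0.5 * dt * centre_accel(centres)
    stars += dt * svel
    centres += dt * cvel
    svel += 0.5 * dt * star_accel(stars, centres)
    cvel += 0.5 * dt * centre_accel(centres)

final_r = np.linalg.norm(stars - centres[0], axis=1)
print("stars pulled beyond r = 2 of their home galaxy:",
      int(np.sum(final_r > 2.0)), "of", n_stars)
```

Changing the masses, approach speed and angle, which is what the applet lets volunteers do, reshapes the resulting bridges and tidal tails.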
The project's initial data was recently presented at a cosmology workshop in Santa Fe, New Mexico, and they're still looking for more help. Volunteers can participate in Merger Wars, where different simulated outcomes are pitted against one another to see which best describes actual galaxy collisions.
On the following pages, Wired takes a look at some of the most amazing images ever taken of these cosmic pile-ups.
Above: Antennae Galaxies
One of the most spectacular galactic mergers is known as the Antennae galaxies. Located about 62 million light-years away in the constellation Corvus, the object contains a jewelry case of stars that shine in the visible, infrared, and X-ray wavelengths in this image. The colliding galaxies have nearly engulfed one another since they first began to unite around 100 million years ago, triggering an intense period of star formation.
Read more and see more pictures at Wired Science
Jul 15, 2012
In Case of Alligator Attack...
Even though alligators have been around for nearly 40 million years, they're still making headlines. In the past week, two Florida teenagers have been victims of alligator attacks, but were able to fend the gators off.
At Keaton Beach near Tallahassee, 15-year-old Kaleb Towles was spearfishing with his grandfather when a 10-foot-long alligator bit him across the chest. The alligator released its grip, and later Towles and his grandfather managed to kill it.
Earlier in the week, 17-year-old Kaleb "Fred" Langdale was swimming near Moore Haven, Fla., when he lost the lower half of his right arm trying to fend off an attack from an 11-foot-long alligator. The teen's quick thinking saved his life.
During the warmer months, alligators become more active as their metabolism increases. This is also the time of year when alligators mate, making them more aggressive than they would be otherwise. As a result, alligator attacks can increase in frequency during the summer.
Given how large the alligators were in both cases, both teenagers are fortunate to have survived their attacks. As Julienne Gage reported for Discovery News, between 2000 and 2010, nearly 13 people died as a result of attacks from alligators. Although alligator attacks are less frequent than shark attacks, they're more often fatal. And because of suburban sprawl, encounters between alligators and humans have increased slightly.
So while an alligator attack might be rare, knowing what to do should that unlikely event occur could be life-saving. Growing up in Florida, I remember being told that the best way to escape an aggressive alligator on dry land was to run in a zig-zag pattern, the thinking being that an alligator would try to follow but couldn't maneuver such quick turns. But is that really true? And what's the best way to survive an attack that takes place in water?
For starters, if you encounter an alligator on dry land, just run. Zig-zagging would only slow you down, so running in a straight line is the way to go. As Amy Hunter writing for HowStuffWorks.com points out, an alligator's running speed on land tops out at 11 miles per hour. They also can't run for very long, which makes it unlikely they'll put up much of a chase.
Read more at Discovery News
Instrument for Exploring the Cosmos and the Quantum World Created
Researchers at the California Institute of Technology (Caltech) and NASA's Jet Propulsion Laboratory (JPL) have developed a new type of amplifier for boosting electrical signals. The device can be used for everything from studying stars, galaxies, and black holes to exploring the quantum world and developing quantum computers.
"This amplifier will redefine what it is possible to measure," says Jonas Zmuidzinas, Caltech's Merle Kingsley Professor of Physics, the chief technologist at JPL, and a member of the research team.
An amplifier is a device that increases the strength of a weak signal. "Amplifiers play a basic role in a wide range of scientific measurements and in electronics in general," says Peter Day, a visiting associate in physics at Caltech and a principal scientist at JPL. "For many tasks, current amplifiers are good enough. But for the most demanding applications, the shortcomings of the available technologies limit us."
Conventional transistor amplifiers -- like the ones that power your car speakers -- work for a large span of frequencies. They can also boost signals ranging from the faint to the strong, and this so-called dynamic range enables your speakers to play both the quiet and loud parts of a song. But when an extremely sensitive amplifier is needed -- for example, to boost the faint, high-frequency radio waves from distant galaxies -- transistor amplifiers tend to introduce too much noise, resulting in a signal that is more powerful but less clear.
One type of highly sensitive amplifier is a parametric amplifier, which boosts a weak input signal by using a strong signal called the pump signal. As both signals travel through the instrument, the pump signal injects energy into the weak signal, thereby amplifying it.
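To make the pump idea concrete, here is a minimal numerical sketch in Python of the generic textbook mechanism rather than the Caltech/JPL device itself: an oscillator whose stiffness is modulated ("pumped") at twice its resonance frequency amplifies a weak seed oscillation in one quadrature and de-amplifies the orthogonal one. All numbers are arbitrary.

```python
# Toy sketch of degenerate parametric amplification -- NOT a model of the
# Caltech/JPL superconducting amplifier. A harmonic oscillator's stiffness
# is modulated by a pump at twice its resonance frequency; a tiny seed
# oscillation grows or shrinks depending on its phase relative to the pump.
import numpy as np

w0 = 2 * np.pi        # resonance frequency of the oscillator (arbitrary units)
eps = 0.1             # pump modulation depth (assumed small)
dt = 1e-3
times = np.arange(0.0, 20.0, dt)

def peak_amplitude(seed_phase):
    """Integrate x'' + w0^2 * (1 + eps*cos(2*w0*t)) * x = 0 for a tiny seed
    oscillation with the given phase, using semi-implicit Euler."""
    x = 1e-3 * np.cos(seed_phase)        # tiny seed displacement
    v = -1e-3 * w0 * np.sin(seed_phase)  # matching seed velocity
    peak = abs(x)
    for t in times:
        v += -(w0**2) * (1.0 + eps * np.cos(2.0 * w0 * t)) * x * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

for phase in (np.pi / 4, -np.pi / 4):    # two orthogonal seed quadratures
    print(f"seed phase {phase:+.2f} rad -> peak {peak_amplitude(phase):.2e}")
```

That phase-sensitive growth is the energy transfer from pump to signal; the engineering challenge described below is achieving it over a wide band with large dynamic range and minimal added noise.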
About 50 years ago, Amnon Yariv, Caltech's Martin and Eileen Summerfield Professor of Applied Physics and Electrical Engineering, showed that this type of amplifier produces as little noise as possible: the only noise it must produce is the unavoidable noise caused by the jiggling of atoms and waves according to the laws of quantum mechanics. The problem with many parametric amplifiers and similarly sensitive devices, however, is that they can amplify only a narrow frequency range and often have poor dynamic range.
But the Caltech and JPL researchers say their new amplifier, which is a type of parametric amplifier, combines only the best features of other amplifiers. It operates over a frequency range more than ten times wider than other comparably sensitive amplifiers, can amplify strong signals without distortion, and introduces nearly the lowest amount of unavoidable noise. In principle, the researchers say, design improvements should be able to reduce that noise to the absolute minimum. Versions of the amplifier can be designed to work at frequencies ranging from a few gigahertz to a terahertz (1,000 GHz). For comparison, a gigahertz is about ten times the frequency of commercial FM radio signals in the U.S., which range from about 88 to 108 megahertz (1 GHz is 1,000 MHz).
"Our new amplifier has it all," Zmuidzinas says. "You get to have your cake and eat it too."
The team recently described the new instrument in the journal Nature Physics.
One of the key features of the new parametric amplifier is that it incorporates superconductors -- materials that allow an electric current to flow with zero resistance when lowered to certain temperatures. For their amplifier, the researchers are using titanium nitride (TiN) and niobium titanium nitride (NbTiN), which have just the right properties to allow the pump signal to amplify the weak signal.
Although the amplifier has a host of potential applications, the reason the researchers built the device was to help them study the universe. The team built the instrument to boost microwave signals, but the new design can be used to build amplifiers that help astronomers observe in a wide range of wavelengths, from radio waves to X rays.
For instance, the team says, the instrument can directly amplify radio signals from faint sources like distant galaxies, black holes, or other exotic cosmic objects. Boosting signals in millimeter to submillimeter wavelengths (between radio and infrared) will allow astronomers to study the cosmic microwave background -- the afterglow of the big bang -- and to peer behind the dusty clouds of galaxies to study the births of stars, or probe primeval galaxies. The team has already begun working to produce such devices for Caltech's Owens Valley Radio Observatory (OVRO) near Bishop, California, about 250 miles north of Los Angeles.
These amplifiers, Zmuidzinas says, could be incorporated into telescope arrays like the Combined Array for Research in Millimeter-wave Astronomy at OVRO, of which Caltech is a consortium member, and the Atacama Large Millimeter/submillimeter Array in Chile.
Instead of directly amplifying an astronomical signal, the instrument can be used to boost the electronic signal from a light detector in an optical, ultraviolet, or even X-ray telescope, making it easier for astronomers to tease out faint objects.
Because the instrument is so sensitive and introduces minimal noise, it can also be used to explore the quantum world. For example, Keith Schwab, a professor of applied physics at Caltech, is planning to use the amplifier to measure the behavior of tiny mechanical devices that operate at the boundary between classical physics and the strange world of quantum mechanics. The amplifier could also be used in the development of quantum computers -- which are still beyond our technological reach but should be able to solve some of science's hardest problems much more quickly than any regular computer.
Read more at Science Daily
"This amplifier will redefine what it is possible to measure," says Jonas Zmuidzinas, Caltech's Merle Kingsley Professor of Physics, the chief technologist at JPL, and a member of the research team.
An amplifier is a device that increases the strength of a weak signal. "Amplifiers play a basic role in a wide range of scientific measurements and in electronics in general," says Peter Day, a visiting associate in physics at Caltech and a principal scientist at JPL. "For many tasks, current amplifiers are good enough. But for the most demanding applications, the shortcomings of the available technologies limit us."
Conventional transistor amplifiers -- like the ones that power your car speakers -- work for a large span of frequencies. They can also boost signals ranging from the faint to the strong, and this so-called dynamic range enables your speakers to play both the quiet and loud parts of a song. But when an extremely sensitive amplifier is needed -- for example, to boost the faint, high-frequency radio waves from distant galaxies -- transistor amplifiers tend to introduce too much noise, resulting in a signal that is more powerful but less clear.
One type of highly sensitive amplifier is a parametric amplifier, which boosts a weak input signal by using a strong signal called the pump signal. As both signals travel through the instrument, the pump signal injects energy into the weak signal, therefore amplifying it.
About 50 years ago, Amnon Yariv, Caltech's Martin and Eileen Summerfield Professor of Applied Physics and Electrical Engineering, showed that this type of amplifier produces as little noise as possible: the only noise it must produce is the unavoidable noise caused by the jiggling of atoms and waves according to the laws of quantum mechanics. The problem with many parametric amplifiers and sensitive devices like it, however, is that they can only amplify a narrow frequency range and often have a poor dynamic range.
But the Caltech and JPL researchers say their new amplifier, which is a type of parametric amplifier, combines only the best features of other amplifiers. It operates over a frequency range more than ten times wider than other comparably sensitive amplifiers, can amplify strong signals without distortion, and introduces nearly the lowest amount of unavoidable noise. In principle, the researchers say, design improvements should be able to reduce that noise to the absolute minimum. Versions of the amplifier can be designed to work at frequencies ranging from a few gigahertz to a terahertz (1,000 GHz). For comparison, a gigahertz is about 10 times greater than commercial FM radio signals in the U.S., which range from about 88 to 108 megahertz (1 GHz is 1,000 MHz).
"Our new amplifier has it all," Zmuidzinas says. "You get to have your cake and eat it too."
The team recently described the new instrument in the journal Nature Physics.
One of the key features of the new parametric amplifier is that it incorporates superconductors -- materials that allow an electric current to flow with zero resistance when lowered to certain temperatures. For their amplifier, the researchers are using titanium nitride (TiN) and niobium titanium nitride (NbTiN), which have just the right properties to allow the pump signal to amplify the weak signal.
Although the amplifier has a host of potential applications, the reason the researchers built the device was to help them study the universe. The team built the instrument to boost microwave signals, but the new design can be used to build amplifiers that help astronomers observe in a wide range of wavelengths, from radio waves to X rays.
For instance, the team says, the instrument can directly amplify radio signals from faint sources like distant galaxies, black holes, or other exotic cosmic objects. Boosting signals in millimeter to submillimeter wavelengths (between radio and infrared) will allow astronomers to study the cosmic microwave background -- the afterglow of the big bang -- and to peer behind the dusty clouds of galaxies to study the births of stars, or probe primeval galaxies. The team has already begun working to produce such devices for Caltech's Owens Valley Radio Observatory (OVRO) near Bishop, California, about 250 miles north of Los Angeles.
These amplifiers, Zmuidzinas says, could be incorporated into telescope arrays like the Combined Array for Research in Millimeter-wave Astronomy at OVRO, of which Caltech is a consortium member, and the Atacama Large Millimeter/submillimeter Array in Chile.
Instead of directly amplifying an astronomical signal, the instrument can be used to boost the electronic signal from a light detector in an optical, ultraviolet, or even X-ray telescope, making it easier for astronomers to tease out faint objects.
Because the instrument is so sensitive and introduces minimal noise, it can also be used to explore the quantum world. For example, Keith Schwab, a professor of applied physics at Caltech, is planning to use the amplifier to measure the behavior of tiny mechanical devices that operate at the boundary between classical physics and the strange world of quantum mechanics. The amplifier could also be used in the development quantum computers -- which are still beyond our technological reach but should be able to solve some of science's hardest problems much more quickly than any regular computer.
Read more at Science Daily