giovedì 26 marzo 2009

Chemists Create More Efficient Palladium Fuel Cell Catalysts


ScienceDaily (Mar. 26, 2009) — Even small devices need power, and much of that juice comes from fuel cells. As these devices become even smaller, the rush is on to find more efficient ways to power them.
In the last several years, scientists have discovered that palladium, a metal, is a strong candidate for providing that initial boost that helps fuel cells go. Palladium is far cheaper than another popular fuel cell catalyst, platinum, and it’s more abundant.
But researchers have wrestled with creating palladium nanoparticles with enough active surface area to make catalysis efficient in fuel cells while preventing particles from clumping together during the chemical processes that convert a fuel source to electricity. Two Brown University chemists have found a way to overcome those challenges.
The scientists report in the online edition of the Journal of the American Chemical Society that they have produced palladium nanoparticles with about 40 percent greater surface area than commercially available palladium particles. The Brown catalysts also remain intact four times longer than what’s currently available.
“This approach is very novel. It works,” said Vismadeb Mazumder, a graduate student who joined chemistry professor Shouheng Sun on the paper. “It’s two times as active, meaning you need half the energy to catalyze. And it’s four times as stable.”
Mazumder and Sun created palladium nanoparticles 4.5 nanometers in size. They attached the nanoparticles to a carbon platform at the anode end of a direct formic acid fuel cell. The researchers then did something new: They used weakly binding amino ligands to keep the palladium nanoparticles separate and uniformly sized as they are attached to the carbon platform. By keeping the particles separate and uniform in size, they increased the available surface area on the platform and raised the efficiency of the fuel cell reaction.
“It just works better,” Sun said.
What’s also special about the ligands is that they can be “washed” from the carbon platform without jeopardizing the integrity of the separated palladium nanoparticles. This is an important step, Mazumder emphasized, because previous attempts to remove binding ingredients have caused the particles to lose their rigid sizes and clump together, which gums up the reaction.
The Brown team said that in experiments lasting 12 hours, their catalyst lost 16 percent of its surface area, compared with a 64 percent loss for commercial catalysts.
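The reported numbers can be sanity-checked with a quick back-of-the-envelope calculation (the ratio-of-losses comparison below is an illustration, not the paper's own analysis):

```python
# Back-of-the-envelope comparison of catalyst stability over a 12-hour run,
# using the loss figures reported by the Brown team.

brown_loss = 0.16        # fraction of surface area lost by the Brown catalyst
commercial_loss = 0.64   # fraction lost by the commercial catalyst

brown_retained = 1 - brown_loss            # fraction still active after 12 h
commercial_retained = 1 - commercial_loss

# Ratio of losses -- consistent with the "four times as stable" claim
print(commercial_loss / brown_loss)  # 4.0
print(brown_retained, commercial_retained)
```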
“We managed to stem the decay of our catalyst with this approach,” said Mazumder, who is in his second year in Sun’s lab. “We made high-quality palladium nanoparticles, put them efficiently on a support, then removed the stabilizers efficiently without distorting catalyst quality.”
The Brown scientists now are looking at various palladium-based catalysts with enhanced activity and stability for future fuel cell applications.
“We want to make it cheaper with analogous activity,” Mazumder said.
The research was funded by the Division of Materials Research of the National Science Foundation and a Brown seed fund.
Journal reference:
Vismadeb Mazumder and Shouheng Sun. Oleylamine-Mediated Synthesis of Pd Nanoparticles for Catalytic Formic Acid Oxidation. Journal of the American Chemical Society, 2009. DOI: 10.1021/ja9004915
Adapted from materials provided by Brown University.

lunedì 23 marzo 2009

'Cold Fusion' Rebirth? New Evidence For Existence Of Controversial Energy Source


ScienceDaily (Mar. 23, 2009) — Researchers are reporting compelling new scientific evidence for the existence of low-energy nuclear reactions (LENR), the process once called "cold fusion" that may promise a new source of energy. One group of scientists, for instance, describes what it terms the first clear visual evidence that LENR devices can produce neutrons, subatomic particles that scientists view as tell-tale signs that nuclear reactions are occurring.
Low-energy nuclear reactions could potentially provide 21st-century society a limitless and environmentally clean energy source for generating electricity, researchers say. The report, which injects new life into this controversial field, will be presented March 23 in Salt Lake City, Utah, at the American Chemical Society's 237th National Meeting.*
"Our finding is very significant," says study co-author and analytical chemist Pamela Mosier-Boss, Ph.D., of the U.S. Navy's Space and Naval Warfare Systems Center (SPAWAR) in San Diego, Calif. "To our knowledge, this is the first scientific report of the production of highly energetic neutrons from an LENR device."
The first report on "cold fusion," presented in 1989 by Martin Fleischmann and Stanley Pons, was a global scientific sensation. Fusion is the energy source of the sun and the stars. Scientists had been striving for years to tap that power on Earth to produce electricity from an abundant fuel called deuterium that can be extracted from seawater. Everyone thought that it would require a sophisticated new genre of nuclear reactors able to withstand temperatures of tens of millions of degrees Fahrenheit.
Pons and Fleischmann, however, claimed to have achieved nuclear fusion at comparatively "cold" room temperature, in a simple tabletop laboratory device termed an electrolytic cell.
But other scientists could not reproduce their results, and the whole field of research declined. A stalwart cadre of scientists persisted, however, seeking solid evidence that nuclear reactions can occur at low temperatures. One of their problems involved extreme difficulty in using conventional electronic instruments to detect the small number of neutrons produced in the process, researchers say.
In the new study, Mosier-Boss and colleagues inserted an electrode composed of nickel or gold wire into a solution of palladium chloride mixed with deuterium oxide, or "heavy water," in a process called co-deposition. A single atom of deuterium contains one neutron and one proton in its nucleus.
Researchers passed electric current through the solution, causing a reaction within seconds. The scientists then used a special plastic, CR-39, to capture and track any high-energy particles that may have been emitted during reactions, including any neutrons emitted during the fusion of deuterium atoms.
At the end of the experiment, they examined the plastic with a microscope and discovered patterns of "triple tracks," tiny clusters of three adjacent pits that appear to split apart from a single point. The researchers say that the track marks were made by subatomic particles released when neutrons smashed into the plastic. Importantly, Mosier-Boss and colleagues believe that the neutrons originated in nuclear reactions, perhaps from the combining, or fusing, of deuterium nuclei.
"People have always asked 'Where's the neutrons?'" Mosier-Boss says. "If you have fusion going on, then you have to have neutrons. We now have evidence that there are neutrons present in these LENR reactions."
They cited other evidence for nuclear reactions including X-rays, tritium (another form of hydrogen), and excess heat. Meanwhile, Mosier-Boss and colleagues are continuing to explore the phenomenon to get a better understanding of exactly how LENR works, which is key to being able to control it for practical purposes.
Mosier-Boss points out that the field currently gets very little funding and, despite its promise, researchers can't predict when, or if, LENR may emerge from the lab with practical applications. The U.S. Department of the Navy and JWK International Corporation in Annandale, Va., funded the study.
*It is among 30 papers on the topic that will be presented during a four-day symposium, "New Energy Technology," March 22-25, in conjunction with the 20th anniversary of the first description of cold fusion.
Adapted from materials provided by American Chemical Society, via EurekAlert!, a service of AAAS.

Dancing 'Adatoms' Help Chemists Understand How Water Molecules Split


ScienceDaily (Mar. 23, 2009) — Single oxygen atoms dancing on a metal oxide slab, glowing brighter here and dimmer there, have helped chemists better understand how water splits into oxygen and hydrogen. In the process, the scientists have visualized a chemical reaction that had previously only been talked about. The new work improves our understanding of the chemistry needed to generate hydrogen fuel from water or to clean contaminated water.
The scientists made the discovery while trying to determine the basics of how titanium dioxide -- a compound sometimes found in sunscreen -- breaks down water. The chemical reactions between water and oxygen are central to such varied processes as hydrogen production, the breakdown of pollutants, and solar energy conversion.
"Oxygen and water are involved in many, many reactions," said physicist Igor Lyubinetsky at the Department of Energy's Pacific Northwest National Laboratory, who reported the team's results in the March 6 issue of Physical Review Letters. "This mobility might interfere with some reactions and help others."
Bustling Bright Spots
While exploring titanium dioxide as a way to split water into its hydrogen and oxygen pieces, researchers can use a technique called scanning tunneling microscopy to watch the chemical reaction. The surface of a slab of titanium dioxide is like a corn field: rows of oxygen atoms rise from a patch of titanium atoms. The alternating oxygen and titanium rows look like stripes.
Scientists can also see some atoms and molecules that come to rest on the surface as bright spots. One such visible atom is a single oxygen atom that comes to rest on a titanium atom, called an "adatom". Chemists can only see water molecules if they drop the temperature dramatically -- at ambient temperature, the molecules move too fast for the method to pick them up.
In this work, PNNL scientists studied water's reactions with titanium dioxide at ambient temperature at EMSL, the DOE's Environmental Molecular Sciences Laboratory on the PNNL campus. Starting with a surface plated with a few oxygen adatoms, they added water -- and the adatoms started to dance.
"Suddenly, almost every adatom started to move back and forth along the titanium row," said Lyubinetsky. "From theory and previous work, we expected to see this along the row."
Remarkably, the adatoms didn't just slide up and down the stripes. They also bounced out of them and landed in others, like pogoing dancers in a mosh pit.
"We saw quite unexpected things. We thought it was very strange -- we saw adatoms jump over the rows," Lyubinetsky said. "We just couldn't explain it."
Calculating how much energy it would take for the adatoms to move by themselves, much less hop over an oxygen row, the chemists suspected the adatoms were getting help -- most likely from the invisible water molecules.
The Unseen Enabler
To make sense of the dancing adatoms, the team calculated how much energy it would take to move adatoms with the help of water molecules. If a water molecule sits down next to an adatom, one of the water's hydrogen atoms can jump to the adatom, forming two oxygen-hydrogen pairs.
These pairs are known as hydroxyls and tend to steal atoms from other molecules, including each other. One of the thieving hydroxyls can then nab the other's hydrogen atom, turning back into a water molecule. The water molecule floats off, leaving behind an adatom. Half the time, that adatom is one spot over -- which makes the original appear to have moved.
The chemists determined that water can help the adatom jump a row as well: If a water molecule and an adatom are situated on either side of a raised oxygen row, a row oxygen can serve as the middleman, handing over a hydrogen from the water molecule to the adatom. Again, two hydroxyls form, one ultimately stealing both hydrogens (with the help of the middleman) and zipping away as water. If the incoming water molecule has been stripped, the adatom appears to have hopped over.
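The two water-assisted pathways described above can be caricatured as a tiny Monte Carlo sketch. All probabilities here are made-up assumptions for illustration; they are not values from the study (the article only states that the along-row exchange moves the adatom "half the time"):

```python
import random

# Toy model of water-assisted adatom motion on TiO2(110).
# Position is tracked as (row, site): "site" moves along a titanium row,
# "row" changes when the adatom hops over a bridging-oxygen row.

P_ALONG = 0.5   # per exchange event, the adatom ends up one site over (half the time)
P_CROSS = 0.1   # assumed, rarer: cross-row hop via the oxygen-row "middleman"

def simulate(n_events, rng):
    """Follow one adatom through n_events water-exchange events."""
    row, site = 0, 0
    for _ in range(n_events):
        r = rng.random()
        if r < P_CROSS:
            row += rng.choice((-1, 1))   # pseudo-dissociative hop over a row
        elif r < P_CROSS + P_ALONG:
            site += rng.choice((-1, 1))  # apparent move along the titanium row
        # otherwise the hydrogen returns to the original hydroxyl: no net motion
    return row, site

rng = random.Random(0)
print(simulate(1000, rng))
```

The point of the sketch is only that frequent hydrogen hand-offs produce a random walk, which is what the bright spots appear to do under the microscope.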
The calculated energy required for these different scenarios fit well with the team's experimental data. When a row oxygen serves as a middleman, the process is known as "pseudo-dissociation", a reaction suggested by chemists but until now, never verified experimentally.
"We realized that only if we involved the pseudo-dissociative state of the water can we explain it," said Lyubinetsky. "Otherwise, all the calculations show there's too high a barrier, the adatom just cannot jump by itself."
Lyubinetsky points out that this shows that water itself can work as a catalyst. A catalyst is a molecule that can help a chemical reaction along and remain unchanged by the experience.
"Water is required to move the adatoms around, but like a catalyst it is not consumed in the reaction," he said. "You start with water and you end with water."
In the future, the team plans on determining if water can make the adatoms move other species and more than one space at a time. In addition, they will investigate how light affects the reaction.
This work was supported by the Department of Energy's Office of Science.
Journal reference:
Y. Du, N. A. Deskins, Z. Zhang, Z. Dohnálek, M. Dupuis, and I. Lyubinetsky. Two Pathways for Water Interaction with Oxygen Adatoms on TiO2(110). Physical Review Letters, March 6, 2009. DOI: 10.1103/PhysRevLett.102.096102
Adapted from materials provided by DOE/Pacific Northwest National Laboratory.

Shifting Sound To Light May Lead To Better Computer Chips


ScienceDaily (Mar. 23, 2009) — By reversing the process that converts electrical signals into the sound heard from a cell phone speaker, researchers may have a new tool to enhance the way computer chips, LEDs and transistors are built.
Lawrence Livermore National Laboratory scientists have for the first time converted the highest frequency sounds into light by reversing a process that converts electrical signals to sound.
Commonly used piezo-electric speakers, such as those found in a cell phone, operate at low frequencies that human ears can hear.
But by reversing that process, lead researchers Michael Armstrong, Evan Reed and Mike Howard of LLNL, along with collaborators from Los Alamos National Laboratory and Nitronex Corp., used a very high frequency sound wave - about 100 million times higher in frequency than what humans can hear - to generate light.
“This process allows us to very accurately ‘see’ the highest frequency sound waves by translating them into light,” Armstrong said.
The research appears in the March 15 edition of the journal Nature Physics.
During the last decade, pioneering experiments using sub-picosecond lasers have demonstrated the generation and detection of acoustic and shock waves in materials with terahertz (THz) frequencies. These very same experiments led to a new technique for probing the structure of semiconductor devices.
However, the recent research takes those initial experiments a step further by reversing the process, converting high-frequency sound waves into light. The researchers predicted that high frequency acoustic waves can be detected by seeing radiation emitted when the acoustic wave passes an interface between piezoelectric materials.
Very high-frequency sound waves have wavelengths approaching the atomic-length scale. Detection of these waves is challenging, but they are useful for probing materials on very small length scales.
But that’s not the only application, according to Reed.
“This technique provides a new pathway to generation of THz radiation for security, medical and other purposes,” he said. “In this application, we would utilize acoustic-based technologies to generate THz.” Security applications include explosives detection, and medical uses may include detection of skin cancer.
And the Livermore method doesn’t require any external source to detect the acoustic waves.
“Usually scientists use an external laser beam that bounces off the acoustic wave – much like radar speed detectors – to observe high frequency sound. An advantage of our technique is that it doesn’t require an external laser beam – the acoustic wave itself emits light that we detect,” Armstrong said.
Journal reference:
Michael R. Armstrong, Evan J. Reed, Ki-Yong Kim, James H. Glownia, William M. Howard, Edwin L. Piner & John C. Roberts. Observation of terahertz radiation coherently generated by acoustic waves. Nature Physics, 2009; DOI: 10.1038/nphys1219
Adapted from materials provided by DOE/Lawrence Livermore National Laboratory.

sabato 21 marzo 2009

Water Acts As Catalyst In Explosives

ScienceDaily (Mar. 20, 2009) — The most abundant material on Earth exhibits some unusual chemical properties when placed under extreme conditions.
Lawrence Livermore National Laboratory scientists have shown that water, in hot dense environments, plays an unexpected role in catalyzing complex explosive reactions. A catalyst is a compound that speeds chemical reactions without being consumed. Platinum and enzymes are common catalysts. But water rarely, if ever, acts as a catalyst under ordinary conditions.
Detonations of high explosives made up of oxygen and hydrogen produce water at thousands of kelvin and up to 100,000 atmospheres of pressure, similar to conditions in the interiors of giant planets.
While the properties of pure water at high pressures and temperatures have been studied for years, this extreme water in a reactive environment has never been studied. Until now.
Using first-principles atomistic simulations of the detonation of the high explosive PETN (pentaerythritol tetranitrate), the team discovered that in water, a hydrogen atom serving as a reducing agent and a hydroxide (OH) serving as an oxidizer act as a dynamic team that transports oxygen between reaction centers.
"This was news to us," said lead researcher Christine Wu. "This suggests that water also may catalyze reactions in other explosives and in planetary interiors."
This finding is contrary to the current view that water is simply a stable detonation product.
"Under extreme conditions, water is chemically peculiar because of its frequent dissociations," Wu said. "As you compress it to the conditions you'd find in the interior of a planet, the hydrogen of a water molecule starts to move around very fast."
In molecular dynamics simulations run on the Lab's BlueGene/L supercomputer, Wu and colleagues Larry Fried, Lin Yang, Nir Goldman and Sorin Bastea found that the hydrogen (H) atoms and hydroxide (OH) fragments of water transport oxygen from nitrogen storage to carbon fuel under PETN detonation conditions (temperatures between 3,000 kelvin and 4,200 kelvin). At both temperatures, this "extreme water" served both as an end product and as a key chemical catalyst.
For a molecular high explosive that is made up of carbon, nitrogen, oxygen and hydrogen, such as PETN, the three major gaseous products are water, carbon dioxide and molecular nitrogen.
But to date, the chemical processes leading to these stable compounds are not well understood.
The team found that nitrogen loses its oxygen mostly to hydrogen, not to carbon, even after the concentration of water reaches equilibrium. They also found that carbon atoms capture oxygen mostly from hydroxide, rather than directly from nitrogen monoxide (NO) or nitrogen dioxide (NO2). Meanwhile, water frequently dissociates into hydrogen and hydroxide and recombines.
"The water that comes out is part of the energy release mechanism," Wu said. "This catalytic mechanism is completely different from previously proposed decomposition mechanisms for PETN or similar explosives, in which water is just an end product. This new discovery could have implications for scientists studying the interiors of Uranus and Neptune where water is in an extreme form."
The research appears in the premier issue (April 2009) of the new journal Nature Chemistry.
Adapted from materials provided by DOE/Lawrence Livermore National Laboratory.

Nanotech Batteries For A New Energy Future


ScienceDaily (Mar. 20, 2009) — Researchers at the Maryland NanoCenter at the University of Maryland have developed new systems for storing electrical energy derived from alternative sources that are, in some cases, 10 times more efficient than what is commercially available.
In order to save money and energy, many people are purchasing hybrid electric cars or installing solar panels on the roofs of their homes. But both have a problem -- the technology to store the electrical power and energy is inadequate.
Battery systems that fit in cars don't hold enough energy for long driving distances, take hours to recharge, and don't deliver much power for acceleration. Renewable sources like solar and wind deliver significant power only part time, but devices to store their energy are expensive and too inefficient to deliver enough power for surge demand.
The results of their research are available in a recent issue of Nature Nanotechnology.
"Renewable energy sources like solar and wind provide time-varying, somewhat unpredictable energy supply, which must be captured and stored as electrical energy until demanded," said Gary Rubloff, director of the University of Maryland's NanoCenter. "Conventional devices to store and deliver electrical energy -- batteries and capacitors -- cannot achieve the needed combination of high energy density, high power, and fast recharge that are essential for our energy future."
Researchers working with Professor Rubloff and his collaborator, Professor Sang Bok Lee, have developed a method to significantly enhance the performance of electrical energy storage devices.
Using new processes central to nanotechnology, they create millions of identical nanostructures with shapes tailored to transport energy as electrons rapidly to and from very large surface areas where they are stored. Materials behave according to physical laws of nature. The Maryland researchers exploit unusual combinations of these behaviors (called self-assembly, self-limiting reaction, and self-alignment) to construct millions -- and ultimately billions -- of tiny, virtually identical nanostructures to receive, store, and deliver electrical energy.
"These devices exploit unique combinations of materials, processes, and structures to optimize both energy and power density -- combinations that, taken together, have real promise for building a viable next-generation technology, and around it, a vital new sector of the tech economy," Rubloff said.
"The goal for electrical energy storage systems is to simultaneously achieve high power and high energy density to enable the devices to hold large amounts of energy, to deliver that energy at high power, and to recharge rapidly (the complement to high power)," he continued.
Electrical energy storage devices fall into three categories. Batteries, particularly lithium ion, store large amounts of energy but cannot provide high power or fast recharge. Electrochemical capacitors (ECCs), also relying on electrochemical phenomena, offer higher power at the price of relatively lower energy density. In contrast, electrostatic capacitors (ESCs) operate by purely physical means, storing charge on the surfaces of two conductors. This makes them capable of high power and fast recharge, but at the price of lower energy density.
The Maryland research team's new devices are electrostatic nanocapacitors that dramatically increase the energy storage density of such devices - by a factor of 10 over that of commercially available devices - without sacrificing the high power they characteristically offer. This advance brings electrostatic devices to a performance level competitive with electrochemical capacitors and introduces a new player into the field of candidates for next-generation electrical energy storage.
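The role of surface area in an electrostatic capacitor follows directly from the textbook relations E = ½CV² and C = ε₀ε_r·A/d. The numbers below are illustrative assumptions, not device parameters from the Maryland work; they only show why a tenfold increase in effective surface area at fixed footprint yields a tenfold increase in stored energy:

```python
# Why nanostructured surface area matters for an electrostatic capacitor.
# E = 1/2 * C * V^2, with parallel-plate capacitance C = eps0 * eps_r * A / d.
# All numbers below are assumed for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitor_energy(area_m2, gap_m, eps_r, volts):
    c = EPS0 * eps_r * area_m2 / gap_m  # capacitance in farads
    return 0.5 * c * volts**2           # stored energy in joules

flat = capacitor_energy(1e-4, 10e-9, 9.0, 5.0)    # planar 1 cm^2 footprint
nano = capacitor_energy(10e-4, 10e-9, 9.0, 5.0)   # same footprint, 10x effective area

print(nano / flat)  # tenfold area -> tenfold stored energy
```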
Where will these new nanodevices appear? Lee and Rubloff emphasize that they are developing the technology for mass production as layers of devices that could look like thin panels, similar to solar panels or the flat panel displays we see everywhere, manufactured at low cost. Multiple energy storage panels would be stacked together inside a car battery system or solar panel. In the longer run, they foresee the same nanotechnologies providing new energy capture technology (solar, thermoelectric) that could be fully integrated with storage devices in manufacturing.
This advance follows soon after another accomplishment, the dramatic improvement in performance (energy and power) of electrochemical capacitors (ECCs), also known as 'supercapacitors,' by Lee's research group, published recently in the Journal of the American Chemical Society. Efforts are under way to achieve comparable advances in the energy density of lithium (Li) ion batteries, but with much higher power density.
"The University of Maryland's successes are built upon the convergence and collaboration of experts from a wide range of nanoscale science and technology areas with researchers already in the center of energy research," Rubloff said.
The Research Team
Gary Rubloff is Minta Martin Professor of Engineering in the materials science and engineering department and the Institute for Systems Research at the University of Maryland's A. James Clark School of Engineering. Sang Bok Lee is associate professor in the Department of Chemistry and Biochemistry at the College of Chemical and Life Sciences and WCU (World Class University Program) professor at KAIST (Korea Advanced Institute of Science and Technology) in Korea. Lee and Rubloff are part of a larger team developing nanotechnology solutions for energy capture, generation, and storage at Maryland. Their collaborators on electrical energy storage include Maryland professors Michael Fuhrer (physics), associate director of the Maryland Nanocenter Reza Ghodssi (electrical and computer engineering), John Cumings (materials science engineering), Ray Adomaitis (chemical and biomolecular engineering), Oded Rabin (materials science and engineering), Janice Reutt-Robey (chemistry), Robert Walker (chemistry), Chunsheng Wang (chemical and biomolecular engineering), Yu-Huang Wang (chemistry) and Ellen Williams (physics), director of the Materials Research Science and Engineering Center at the University of Maryland.
This work was partially supported by the Laboratory for Physical Sciences and by the university's Materials Research Science and Engineering Center under a grant from the National Science Foundation.
Adapted from materials provided by University of Maryland, College Park.

venerdì 20 marzo 2009

New Organic Material May Speed Internet Access; Telecom Breakthrough Mimics The Settling Snow


ScienceDaily (Mar. 21, 2009) — The next time an overnight snow begins to fall, take two bricks and place them side by side a few inches apart in your yard.
In the morning, the bricks will be covered with snow and barely discernible. The snowflakes will have filled every vacant space between and around the bricks.
What you will see, says Ivan Biaggio, resembles a phenomenon that, when it occurs at the smallest of scales on an integrated optical circuit, could hasten the day when the Internet works at superfast speeds.
Biaggio, an associate professor of physics at Lehigh University, is part of an international team of researchers that has developed an organic material with an unprecedented combination of high optical quality and a strong ability to mediate light-light interaction. The team has also engineered the integration of this material with silicon technology so it can be used in optical telecommunication devices.
A description of this material was published online in the journal Nature Photonics on March 15.
The material, which is composed of small organic molecules with high nonlinear optical susceptibilities, mimics the behavior of the snowflakes covering the bricks when it is deposited into the slot, or gap, that separates the silicon waveguides controlling the propagation of light beams on an integrated optical circuit.
Just as the snowflakes, being tiny and mobile, fill every empty space between the two bricks, Biaggio says, the molecules completely and homogeneously fill the slot between the waveguides. The slots measure only tens of nanometers wide; 1 nm is one one-billionth of a meter, or about the width of a dozen carbon atoms.
"We have been able to make thin films by combining the molecules into a material that is perfectly transparent, flat, and free of any irregularities that would affect optical properties," says Biaggio.
The slot between the waveguides is the region where most of the light guided by the silicon propagates. By filling the slot, say Biaggio and his collaborators, the molecules add an ultra-fast all-optical switching capability to silicon circuitry, creating a new ability to perform the light-to-light interactions necessary for data processing in all-optical networks.
The nanophotonic device obtained in this way, says the group, has demonstrated the best all-optical demultiplexing rate yet recorded for a silicon-organic-hybrid device.
Multiplexing is the process by which multiple signals or data streams are combined and transmitted on a single channel, thus saving expensive bandwidth. Demultiplexing is the reverse process.
In tests, the novel hybrid device was able to extract every fourth bit of a 170-gigabit-per-second telecommunications data stream and to demultiplex the stream to 42.7 gigabits per second.
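The arithmetic here is simple: extracting every fourth bit of a stream yields a tributary at one quarter of the line rate, so a 42.7 Gb/s output implies a line rate of 4 × 42.7 = 170.8 Gb/s (reported as "170" above). A minimal sketch, with made-up stream contents for illustration:

```python
# Demultiplexing arithmetic: pulling every n-th bit out of a serial stream
# yields a tributary at 1/n of the line rate.

def demultiplex(bits, n):
    """Extract every n-th bit, starting from the first."""
    return bits[::n]

line_rate_gbps = 170.8                       # 4 x 42.7 Gb/s
tributary = demultiplex(list(range(16)), 4)  # stand-in for a bit stream
print(tributary)                # [0, 4, 8, 12]
print(line_rate_gbps / 4)       # 42.7
```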
Biaggio's group is part of an international collaboration that includes scientists from the Institute of Photonics and Quantum Electronics at the University of Karlsruhe in Germany, the Photonics Research Group at Ghent University in Belgium, and the Laboratory for Organic Chemistry at the Swiss Federal Institute of Technology (ETH) in Zurich. Biaggio is affiliated with Lehigh's Center for Optical Technologies (COT). Another group member, Bweh Esembeson, earned a Ph.D. in physics from Lehigh earlier this year and is now an applications engineer with Thorlabs Inc. in New Jersey.
The silicon-organic-hybrid device and its breakthrough properties were presented for the first time as a postdeadline contribution at a meeting of the optical telecom industry last spring and at several other scientific conferences, and Biaggio's group published an article titled "A High-optical Quality Supramolecular Assembly for Third-order Integrated Nonlinear Optics" in the October 2008 issue of Advanced Materials.
A nonlinear optical answer to bandwidth demand
As Internet users demand greater bandwidth for ever faster communications, scientists and engineers are working to increase the speed at which information can be transmitted and routed along a network. They are hoping to achieve a major leap in velocity by designing circuits that rely solely on light waves to process data.
At present, data must be converted back and forth from optical signals to electrical signals for managing its progress within the optical telecommunication network. This limits the flexibility and the speed of optical telecommunication. All-optical circuits, experts say, could unleash the full potential of optical telecommunication and data processing.
All-optical circuits require nonlinear optical materials with good optical quality. A nonlinear optical response occurs in a material when the intensity of light alters the properties of the material through which light is passing, affecting, in turn, the manner in which the light propagates.
Biaggio's group is working with a small organic molecule called DDMEBT that possesses one of the strongest nonlinear optical responses yet observed for a molecule of its relatively small size. The molecule can condense from the vapor phase into a bulk material. The high, off-resonant bulk nonlinearity and large-scale homogeneity of this material, says Esembeson, represent a unique combination not often found in an organic material.
"Between high optical nonlinearity in a molecule and ability to actually fabricate a bulk plastic with excellent optical quality, there is always a compromise," he says.
The DDMEBT bulk material possesses 1,000 times the nonlinearity of silica glass. This organic material, however, is difficult to flexibly structure into nanoscale waveguides or other optical circuitry. Silicon, on the other hand, is structurally suited to the dense integration of components on photonic circuit devices. And silicon technology is mature and precise. It enables the creation of waveguides whose nanoscale flatness facilitates the control of light propagation.
"With pure silicon," says Biaggio, "you can build waveguides that enable you to control light beam propagation, but you cannot get ultrafast light-to-light interaction. Using only silicon, people have achieved a data switching rate of only 20 to 30 gigabits per second, and this is very slow.
"We need higher-speed switching to achieve a higher bit rate. Organic materials can do this, but they are not terribly good for building waveguides that control propagation of tightly confined light beams."
To combine the strengths of the DDMEBT and the silicon, Biaggio and his collaborators have fashioned silicon-organic hybrid (SOH) waveguides where silicon waveguides are covered with DDMEBT.
"We have combined the two approaches," he says. "We start from a silicon waveguide designed to guide the light between two silicon ridges . Then we use molecular beam deposition to fill the space between the ridges with the organic material [DDMEBT], creating a dense plastic with high optical quality and high nonlinearity where the light propagates.
"We combine the best of both technologies."
One of the group's singular achievements, he says, is the filling-in process.
"The key question was whether we could put the DDMEBT between the two silicon strips. There is a lot of research in this area, but no one had been able to make an organic material completely and homogeneously cover such a silicon structure, so that it spreads out and fills all the spaces. Homogeneity is necessary to prevent light scattering and losses.
"We have now achieved this by using a molecular structure that decreases inter-molecular interactions and promotes the formation of a homogeneous solid state. We then heated the molecules into the vapor phase and used a molecular beam to deposit them on top of the silicon structure. The molecules were able to homogeneously fill the nanometer-scale slot between the silicon ridges and to cover the whole structure we needed to cover.
"Our collaborators in Karlsruhe, who have state-of-the-art equipment for characterizing optical communications systems, were able to reliably switch individual bits out of a 170 gigabits per second data stream, which is impressive, but the organic material would be able to support even faster data rates"
The researchers summed up their achievements in one of their forthcoming articles: "To the best of our knowledge, this is the first time that nonlinear SOH [silicon-organic hybrid] slot waveguides were used in high-speed optical communication systems. We believe that there is still a large potential for improving the conversion efficiency and the signal quality."
Journal reference:
C. Koos, P. Vorreau, T. Vallaitis, P. Dumon, W. Bogaerts, R. Baets, B. Esembeson, I. Biaggio, T. Michinobu, F. Diederich, W. Freude & J. Leuthold. All-optical high-speed signal processing with silicon–organic hybrid slot waveguides. Nature Photonics, 2009; DOI: 10.1038/nphoton.2009.25
Adapted from materials provided by Lehigh University.

giovedì 19 marzo 2009

Nanocups Brim With Potential

ScienceDaily (Mar. 19, 2009) — Researchers at Rice University have created a metamaterial that could light the way toward high-powered optics, ultra-efficient solar cells and even cloaking devices.

Naomi Halas, an award-winning pioneer in nanophotonics, and graduate student Nikolay Mirin created a material that collects light from any direction and emits it in a single direction. The material uses very tiny, cup-shaped particles called nanocups.
In a paper in the February issue of the journal Nano Letters, co-authors Halas and Mirin explain how they isolated nanocups to create light-bending nanoparticles.
In earlier research, Mirin had been trying to make a thin gold film with nano-sized holes when it occurred to him the knocked-out bits were worth investigating. Previous work on gold nanocups gave researchers a sense of their properties, but until Mirin's revelation, nobody had found a way to lock ensembles of isolated nanocups to preserve their matching orientation.
"The truth is a lot of exciting science actually does fall in your lap by accident," said Halas, Rice's Stanley C. Moore Professor in Electrical and Computer Engineering and professor of chemistry and biomedical engineering. "The big breakthrough here was being able to lift the nanocups off of a structure and preserve their orientation. Then we could look specifically at the properties of these oriented nanostructures."
Mirin's solution involved thin layers of gold deposited from various angles onto polystyrene or latex nanoparticles that had been distributed randomly on a glass substrate. The cups that formed around the particles – and the dielectric particles themselves – were locked into an elastomer and lifted off of the substrate. "You end up with this transparent thing with structures all oriented the same way," he said.
In other words, he had a metamaterial, a substance that gets its properties from its structure and not its composition. Halas and Mirin found their new material particularly adept at capturing light from any direction and focusing it in a single direction.
Redirecting scattered light means none of it bounces off the metamaterial back into the eye of an observer. That essentially makes the material invisible. "Ideally, one should see exactly what is behind an object," said Mirin.
"The material should not only retransmit the color and brightness of what is behind, like squid or chameleons do, but also bend the light around, preserving the original phase information of the signal."
Halas said the embedded nanocups are the first true three-dimensional nanoantennas, and their light-bending properties are made possible by plasmons. Electrons inside plasmonic nanoparticles resonate with input from an outside electromagnetic source in the same way a drop of water will make ripples in a pool. The particles act the same way radio antennas do, with the ability to absorb and emit electromagnetic waves that, in this case, includes visible wavelengths.
Because nanocup ensembles can focus light in a specific direction no matter where the incident light is coming from, they make good candidates for, say, thermal solar power. A solar panel that doesn't have to track the sun yet focuses light into a beam that's always on target would save a lot of money on machinery.
Solar-generated power of all kinds would benefit, said Halas. "In solar cells, about 80 percent of the light passes right through the device. And there's a huge amount of interest in making cells as thin as possible for many reasons."
Halas said the thinner a cell gets, the more transparent it becomes. "So ways in which you can divert light into the active region of the device can be very useful. That's a direction that needs to be pursued," she said.
Using nanocup metamaterial to transmit optical signals between computer chips has potential, she said, and enhanced spectroscopy and superlenses are also viable possibilities.
"We'd like to implement these into some sort of useful device," said Halas of her team's next steps. "We would also like to make several variations. We're looking at the fundamental aspects of the geometry, how we can manipulate it, and how we can control it better.
"Probably the most interesting application is something we not only haven't thought of yet, but might not be able to conceive for quite some time."
Journal reference:
Nikolay A. Mirin and Naomi J. Halas. Light-Bending Nanoparticles. Nano Letters, 2009, 9 (3), pp 1255–1259; February 19, 2009 DOI: 10.1021/nl900208z
Adapted from materials provided by Rice University.

mercoledì 18 marzo 2009

Particle Oddball Surprises Physicists

ScienceDaily (Mar. 19, 2009) — Scientists of the Collider Detector at Fermilab (CDF) experiment at the Department of Energy's Fermi National Accelerator Laboratory announced that they have found evidence of an unexpected particle whose curious characteristics may reveal new ways that quarks can combine to form matter.

The CDF physicists have called the particle Y(4140), reflecting its measured mass of 4140 mega-electronvolts (MeV). Physicists did not predict its existence because Y(4140) appears to flout nature’s known rules for fitting quarks and antiquarks together.
“It must be trying to tell us something,” said CDF cospokesperson Jacobo Konigsberg of the University of Florida. “So far, we’re not sure what that is, but rest assured we’ll keep on listening.”
Matter as we know it comprises building blocks called quarks. Quarks fit together in various well-established ways to build other particles: mesons, made of a quark-antiquark pair, and baryons, made of three quarks. So far, it’s not clear exactly what Y(4140) is made of.
The Y(4140) particle decays into a pair of other particles, the J/psi and the phi, suggesting to physicists that it might be a composition of charm and anticharm quarks. However, the characteristics of this decay do not fit the conventional expectations for such a make-up. Other possible interpretations beyond a simple quark-antiquark structure are hybrid particles that also contain gluons, or even four-quark combinations.
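A quick back-of-the-envelope check helps show why this decay is so intriguing: using the standard reference masses for the J/psi and phi (values assumed here from standard particle listings, not from the article), the Y(4140) sits barely above the threshold for this decay.

```python
# Back-of-the-envelope kinematics of Y(4140) -> J/psi + phi.
# The J/psi and phi masses below are approximate reference values
# (an assumption for illustration, not quoted in the article).
M_Y = 4140.0        # MeV/c^2, as measured by CDF
M_JPSI = 3096.9     # MeV/c^2
M_PHI = 1019.5      # MeV/c^2

# Energy released in the decay (the Q-value)
q_value = M_Y - (M_JPSI + M_PHI)
print(f"energy released: {q_value:.1f} MeV")  # only ~24 MeV above threshold
```

The tiny energy release, compared with the thousands of MeV locked up in the particle masses, is one reason the decay pattern stood out in the data.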
The CDF scientists observed Y(4140) particles in the decay of a much more commonly produced particle containing a bottom quark, the B+ meson. Sifting through trillions of proton-antiproton collisions from Fermilab’s Tevatron, CDF scientists identified a small sampling of B+ mesons that decayed in an unexpected pattern. Further analysis showed that the B+ mesons were decaying into Y(4140).
The Y(4140) particle is the newest member of a family of particles of similar unusual characteristics observed in the last several years by experimenters at Fermilab’s Tevatron as well as at KEK laboratory in Japan and at DOE’s SLAC National Accelerator Laboratory in California.
"We congratulate CDF on the first evidence for a new unexpected Y state that decays to J/psi and phi,” said Japanese physicist Masanori Yamauchi, a cospokesperson of KEK’s Belle experiment. “This state may be related to the Y(3940) state discovered by Belle and might be another example of an exotic hadron containing charm quarks. We will try to confirm this state in our own Belle data."
Theoretical physicists are trying to decode the true nature of these exotic combinations of quarks that fall outside our current understanding of mesons and baryons. Meanwhile experimentalists happily continue to search for more such particles.
“We’re building upon our knowledge piece by piece,” said CDF cospokesperson Rob Roser of Fermilab, “and with enough pieces, we’ll understand how this puzzle fits together.”
The Y(4140) observation is the subject of an article submitted by CDF to Physical Review Letters this week. Besides announcing Y(4140), the CDF experiment collaboration is presenting more than 40 new results at the Moriond Conference on Quantum Chromodynamics in Europe this week, including the discovery of electroweak top-quark production and a new limit on the Higgs boson, in concert with experimenters from Fermilab’s DZero collaboration. Both experiments are actively pursuing a very broad program of physics, including ever-more-precise measurements of the top and bottom quarks, W and Z bosons and searches for additional new particles and forces.
"Thanks to the remarkable performance of the Tevatron, we expect to greatly increase our data sample in the next couple of years, said Konigsberg. “We’ll study better what we’ve found and hopefully make more discoveries. It's a very exciting time here at Fermilab."
Adapted from materials provided by DOE/Fermi National Accelerator Laboratory.

Atomic Fountain Clocks Are Becoming Still More Stable


ScienceDaily (Mar. 18, 2009) — Caesium fountains are more accurate than "normal" atomic caesium clocks because in fountains the caesium atoms are cooled down with the aid of laser beams and become ever slower – from a rapid velocity at room temperature to a slow "creep pace" of a few centimetres per second at a temperature close to absolute zero.
Thus, the atoms remain together for a longer time, so the physicists have considerably more time to measure the decisive property of the caesium atoms required for the "generation of time": their resonance frequency. When a maximum of atoms has changed into the excited state, the frequency of the exciting signal is measured: those approximately nine billion microwave oscillations that must elapse until exactly one second has passed.
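The "approximately nine billion" oscillations are in fact an exact number: the SI second is defined as 9,192,631,770 periods of the caesium-133 hyperfine transition. A minimal sketch of that bookkeeping:

```python
# The SI second is defined as exactly 9,192,631,770 periods of the
# caesium-133 hyperfine transition -- the "approximately nine billion
# microwave oscillations" mentioned above.
CS_FREQUENCY_HZ = 9_192_631_770

def elapsed_seconds(oscillation_count: int) -> float:
    """Convert a count of caesium oscillations into elapsed seconds."""
    return oscillation_count / CS_FREQUENCY_HZ

# Counting exactly one full set of oscillations yields one second.
print(elapsed_seconds(9_192_631_770))  # 1.0
```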
The new technique no longer uses a quartz oscillator for microwave generation, but a microwave oscillator that can be stabilized extremely well with the aid of ultra-stable lasers. For this purpose a so-called optical frequency comb is used – a technique originally developed for optical atomic clocks. These clocks use not microwave transitions but optical transitions, whose frequencies lie five orders of magnitude above microwave frequencies. Exciting these transitions precisely requires extremely low-noise laser light, generated by lasers stabilized to special high-quality resonators. For measurement, the frequency of this laser light can be converted, via the optical comb, into microwave or low-frequency oscillations from which the second pulses are finally generated.
For use with a fountain, the microwave oscillator, pre-stabilized by the highly stable laser and the optical comb, is slowly readjusted by the fountain output signal, just as the quartz oscillator formerly was. The results achieved so far show an improvement of the relative frequency instability by approximately 50%, which reduces the required measurement times by a factor of 3.2. A measurement that formerly took three days can then, for example, be performed in one day.
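The link between the instability improvement and the 3.2-fold saving in measurement time follows from how clock noise averages down. Assuming white frequency noise (the usual regime for fountain clocks), the Allan deviation falls as sigma(tau) = sigma_1/sqrt(tau), so the averaging time needed to reach a target uncertainty scales with the square of the one-second instability; the quoted 3.2-fold time reduction corresponds to an instability ratio of about 0.56. A sketch of that arithmetic:

```python
# Under white frequency noise, sigma(tau) = sigma_1 / sqrt(tau), so the
# time needed to reach a given uncertainty scales as sigma_1 squared.
def measurement_time_factor(instability_ratio: float) -> float:
    """Factor by which averaging time shrinks when sigma_1 improves
    by the given ratio (new sigma_1 / old sigma_1)."""
    return 1.0 / instability_ratio**2

# An instability ratio of ~0.56 cuts the measurement time ~3.2-fold,
# matching the "three days down to one day" example in the text.
print(round(measurement_time_factor(0.56), 1))  # 3.2
```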
The experiments show beyond doubt that the microwave oscillator stabilized by the laser no longer contributes any noise, so that the quantum projection noise limit has been reached. This noise arises from the quantum nature of the caesium atoms: in clock operation the atoms never change into the excited state with certainty, but only with a certain probability, and this leads to a noise contribution – the quantum projection noise.
The results clear the way for further improvements of the stability by increasing the number of atoms used in the fountain clock. Lower instabilities are favourable not only with regard to the required measurement times, but also allow systematic frequency-shifting effects to be investigated in closer detail. They are, therefore, also indispensable for future reductions in the overall uncertainty of the clock. This allows a fruitful interaction: while the fountains benefit from the technology of the optical clocks, the development of the latter benefits from the more exact fountain clock as an improved reference.
Journal reference:
Weyers et al. Reaching the quantum limit in a fountain clock using a microwave oscillator phase locked to an ultrastable laser. Physical Review A, 2009; 79 (3): 031803 DOI: 10.1103/PhysRevA.79.031803
Adapted from materials provided by Physikalisch-Technische Bundesanstalt (PTB).

Longest Nanowires Ever Made May Lead To Better Fuel Cells

ScienceDaily (Mar. 18, 2009) — Researchers from New York are reporting production of the longest platinum nanowires ever made — an advance that they say could speed development of fuel cells for cars, trucks, and other everyday uses. The wires, 1/50,000 the width of a human hair, are thousands of times longer than any previously made, according to a report in Nano Letters.

The creation of long platinum nanowires at the University of Rochester could soon lead to the development of commercially viable fuel cells.
Described in a paper published March 11 in the journal Nano Letters, the new wires should provide significant increases in both the longevity and efficiency of fuel cells, which have until now been used largely for such exotic purposes as powering spacecraft. Nanowire enhanced fuel cells could power many types of vehicles, helping reduce the use of petroleum fuels for transportation, according to lead author James C. M. Li, professor of mechanical engineering at the University of Rochester.
"People have been working on developing fuel cells for decades. But the technology is still not being commercialized," says Li. "Platinum is expensive, and the standard approach for using it in fuel cells is far from ideal. These nanowires are a key step toward better solutions."
The platinum nanowires produced by Li and his graduate student Jianglan Shui are roughly ten nanometers in diameter yet centimeters in length—long enough to create the first self-supporting "web" of pure platinum that can serve as an electrode in a fuel cell.
Much shorter nanowires have already been used in a variety of technologies, such as nanocomputers and nanoscale sensors. By a process known as electrospinning—a technique used to produce long, ultra-thin solid fibers—Li and Shui were able to create platinum nanowires that are thousands of times longer than any previous such wires.
"Our ultimate purpose is to make free-standing fuel cell catalysts from these nanowires," says Li.
Within a fuel cell the catalyst facilitates the reaction of hydrogen and oxygen, splitting compressed hydrogen fuel into electrons and acidic hydrogen ions. Electrons are then routed through an external circuit to supply power, while the hydrogen ions combine with electrons and oxygen to form the "waste" product, typically liquid or vaporous water.
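The reaction described above can be summarised by the textbook half-reactions of a hydrogen fuel cell (a standard summary, not taken from the Nano Letters paper):

```latex
\begin{align*}
\text{anode:}   \quad & \mathrm{H_2 \;\rightarrow\; 2\,H^+ + 2\,e^-} \\
\text{cathode:} \quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,\mathrm{e^-} \;\rightarrow\; \mathrm{H_2O}
\end{align*}
```

The platinum catalyst accelerates both half-reactions; the electrons routed through the external circuit between them are the cell's usable current.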
Platinum has been the primary material used in making fuel cell catalysts because of its ability to withstand the harsh acidic environment inside the fuel cell. Its energy efficiency is also substantially greater than that of cheaper metals like nickel.
Prior efforts in making catalysts have relied heavily on platinum nanoparticles in order to maximize the exposed surface area of platinum. The basic idea is simple: The greater the surface area, the greater the efficiency. Li cites two main problems with the nanoparticle approach, both linked to the high cost of platinum.
First, individual particles, despite being solid, can touch one another and merge through the process of surface diffusion, combining to reduce their total surface area and energy. As surface area decreases, so too does the rate of catalysis inside the fuel cell.
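The surface-area penalty of merging is easy to quantify in an idealized geometry (an illustration assumed here, not a calculation from the paper): when two equal spheres coalesce into one sphere of the same total volume, about a fifth of the exposed surface disappears.

```python
# Idealized illustration of surface loss by particle coalescence:
# n unit spheres merging into one sphere of equal total volume.
def area_after_merge(n: int = 2) -> float:
    """Surface area of the merged sphere relative to n separate spheres.

    n spheres of radius 1 have total area n * 4 * pi; the merged sphere
    has radius n**(1/3), hence area 4 * pi * n**(2/3).
    """
    return n ** (2.0 / 3.0) / n

loss_percent = (1.0 - area_after_merge(2)) * 100
print(f"area lost when two particles merge: {loss_percent:.1f}%")  # ~20.6%
```

Since catalytic activity tracks exposed surface, repeated merging events of this kind steadily degrade a nanoparticle electrode.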
Second, nanoparticles require a carbon support structure to hold them in place. Unfortunately, platinum particles do not attach particularly well to these structures, and carbon is subject to oxidization, and thus degradation. As the carbon oxidizes over time, more and more particles become dislodged and are permanently lost.
Li's nanowires avoid these problems completely.
With platinum arranged into a series of centimeter-long, flexible, and uniformly thin wires, the particles that compose them are fixed in place and need no additional support. Platinum will no longer be lost during normal fuel cell operation.
"The reason people have not come to nanowires before is that it's very hard to make them," says Li. "The parameters affecting the morphology of the wires are complex. And when they are not sufficiently long, they behave the same as nanoparticles."
One of the key challenges Li and Shui managed to overcome was reducing the formation of platinum beads along the nanowires. Without optimal conditions, instead of a relatively smooth wire, you end up with what looks more like a series of interspersed beads on a necklace. Such bunching together of platinum particles is another case of unutilized surface area.
"With platinum being so costly, it's quite important that none of it goes to waste when making a fuel cell," says Li. "We studied five variables that affect bead formation and we finally got it—nanowires that are almost bead free."
His current objective is to further optimize laboratory conditions to obtain fewer beads and even longer, more uniformly thin nanowires. "After that, we're going to make a fuel cell and demonstrate this technology," says Li.
Journal reference:
Jianglan Shui and James C. M. Li. Platinum Nanowires Produced by Electrospinning. Nano Letters, Online March 4, 2009 DOI: 10.1021/nl802910h
Adapted from materials provided by University of Rochester.

martedì 17 marzo 2009

Quantum friction: does it exist after all?


For several decades physicists have been intrigued by the idea of quantum friction — that two objects moving past each other experience a friction-like lateral force that arises from quantum fluctuations in the vacuum.

Several independent groups of physicists have previously calculated that quantum friction could arise from the Casimir force between two plates — when those plates move relative to one another. There is also some indirect experimental evidence that such a lateral force exists.
Now, however, researchers in the UK have performed detailed calculations which, they claim, show that there is no lateral force and that quantum friction therefore doesn’t exist.
In 1948 Dutch physicist Hendrik Casimir worked out that two uncharged, perfectly conducting metal plates placed in a vacuum should be attracted to one another. This force arises from the fact that, according to quantum mechanics, the energy of an electromagnetic field in a vacuum is not zero but continuously fluctuates around a certain mean value, known as the “zero–point energy”. Casimir showed that the radiation pressure of the field outside the plates will tend to be slightly greater than that between the plates and therefore the plates will experience an attractive force.
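Casimir's result for ideal, perfectly conducting parallel plates is the textbook formula P = pi^2 * hbar * c / (240 * d^4) for the attractive pressure at separation d. A small sketch evaluating it (standard physical constants assumed) shows how feeble, yet measurable, the force is at micron separations:

```python
import math

# Casimir's textbook result for ideal parallel conducting plates:
# attractive pressure P = pi^2 * hbar * c / (240 * d^4).
HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure in Pa between ideal plates separated by d metres."""
    return math.pi**2 * HBAR * C / (240 * d**4)

# At a 1-micrometre gap the pressure is around a millipascal.
print(f"{casimir_pressure(1e-6):.2e} Pa")
```

The steep 1/d^4 dependence is why the effect only matters for nanoscale and microscale devices.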

Reflection leads to friction?
Evgeny Lifshitz generalized Casimir’s prediction for real materials in 1956, and for the last thirty years researchers have also tried to calculate what happens to the Casimir force when the plates move relative to one another with uniform velocity. The electromagnetic waves, or modes, that exist between the plates bounce back and forth off the plates and this reflection will be affected by the motion of the plates.
Physicists agree that this changing reflection alters the now familiar perpendicular component of the Casimir force, but a number of researchers have also calculated that there should be a lateral force between the plates, a quantum-mechanical friction that could potentially be of great interest to engineers trying to improve the performance of ultra-small mechanical devices.
For example, John Pendry of Imperial College in London has calculated that differences between the Doppler shift of two modes reflecting off moving plates in different directions can lead to a frictional force if the reflectivities of the surfaces depend on frequency.
However, Thomas Philbin of the University of St Andrews says that trying to calculate this modified Casimir force is extremely difficult and that previous efforts to do so have not been satisfactory because they have used approximations. Working with his colleague at St Andrews, Ulf Leonhardt, he has used Lifshitz’s theory to carry out what he claims is an exact calculation and shows that there is no lateral force (arXiv:0810.3750v2, to appear in New Journal of Physics).

Thought experiment:
To illustrate the plausibility of this result in very general terms, Philbin describes a thought experiment using what is known as a "bi-anisotropic medium". This is a material in which an applied electric field or magnetic field will each induce both magnetic dipoles and electric dipoles in the material; such a medium at rest is equivalent to a certain kind of non-bi-anisotropic material in motion.
Philbin then considers building caterpillar tracks from this bi-anisotropic medium and imagines what would happen when the tracks are placed on some other arbitrary medium — as far as electromagnetism is concerned, he says, the tracks are then moving. He goes on to argue that if quantum friction existed these tracks would experience a lateral force and therefore start to actually move, and that they would continue to move indefinitely, thereby performing the impossible — extracting unlimited energy from the quantum vacuum.
“If quantum friction existed one could think about trying to manipulate and engineer it, as experimentalists are now doing with the Casimir force between non-moving objects,” says Philbin. “But since it doesn't exist this possibility disappears.”
Pendry, however, defends his work, maintaining that he derived his result using two completely different lines of argument, and that these were also backed up by independent research. In addition, he says that quantum-frictional effects have been observed experimentally, albeit indirectly, in resistance measurements of a device consisting of two field-effect transistors placed one above the other and separated by a very small insulating gap.

About the author:
Edwin Cartlidge is a science writer based in Rome

The race to build a quantum computer

Quantum computation was a highly speculative enterprise facing serious technological obstacles until a shy young physicist came along. Dave Bacon tells the story of Alexei Kitaev’s big idea.

When Russian physicist Alexei Kitaev heard from a colleague that Peter Shor, a researcher from Bell Labs in New Jersey, had discovered an algorithm for factoring numbers on a quantum computer, he got excited. Kitaev had long pondered how the — at the time — obscure field of quantum computation could be useful, and he knew that the Bell Labs result was likely to radically change our understanding of computing. This was a major result that he just had to understand.
Unfortunately for Kitaev, the research facility where he worked, the L D Landau Institute for Theoretical Physics in Chernogolovka, Russia, did not have a copy of the conference proceedings in which Shor’s result was published. Now, most researchers when confronted with a missing article would probably get on the phone to their local librarian or to a colleague at a neighbouring institution and attempt to obtain a copy of the paper. But for Kitaev, by nature rather shy, this normal mode of proceeding did not hold. Famously, when Nobel-prize winner Richard Feynman died in 1988, written on one of his office blackboards was the phrase “What I cannot create, I do not understand”. It was in this vein that Kitaev did what comes naturally to brilliant minds like himself and Feynman: he simply sat down and rederived this major new result.
The ability to reconstruct, with only the hint of a solution, Shor’s algorithm, is no doubt the mark of an amazingly sharp mind. But the standards of genius in fields like theoretical physics and computer science extend even higher: one must not just recreate prior results, but also create something totally new.
In 1997, three years after Shor’s breakthrough, Kitaev wrote a paper that made a radical new suggestion about how one could build a quantum computer (arXiv:quant-ph/9707021). At the time, researchers widely believed that building a quantum computer would require a physics and engineering tour de force, akin to replaying many decades of the computer revolution. Kitaev, contrary to the opinion of nearly everyone in the field of quantum computation at the time, proposed that building a quantum computer might require little more than finding a proper physical substrate, an analogue of the transistor for quantum computers.
The idea was a heresy of the highest order. But like many heretical prophets before him, Kitaev’s proposal soon attracted the attention of a group of dedicated followers that has been quietly pursuing his alternative path. Kitaev’s vision for what a future quantum computer might look like, then, sparked a race between his disciples and those following the traditional path towards constructing a quantum computer: a race for the soul of a future quantum computer.
In the February issue of Physics World, Dave Bacon looks at the world of quantum computing and the key figures in the quest for a working computer. Bacon looks at the physics of Alexei Kitaev, who was met with bewilderment and scepticism when he showed in 2003 that it was possible to build a many-body quantum system with an inbuilt quantum error-correcting code. Over the following six years, with the backing of a Fields Medal winner, Kitaev’s work has come to represent a key milestone in the quest for a workable machine.

About the author:
Dave Bacon is an assistant research professor in the Department of Physics and the Department of Computer Science and Engineering at the University of Washington, US

Precision Measurement Of W Boson Mass Portends Stricter Limits For Higgs Particle


ScienceDaily (Mar. 17, 2009) — Scientists of the DZero collaboration at the Department of Energy’s Fermi National Accelerator Laboratory have achieved the world’s most precise measurement of the mass of the W boson by a single experiment. Combined with other measurements, the reduced uncertainty of the W boson mass will lead to stricter bounds on the mass of the elusive Higgs boson.
The W boson is a carrier of the weak nuclear force and a key element of the Standard Model of elementary particles and forces. The particle, which is about 85 times heavier than a proton, enables radioactive beta decay and makes the sun shine. The Standard Model also predicts the existence of the Higgs boson, the origin of mass for all elementary particles.
Precision measurements of the W mass provide a window on the Higgs boson and perhaps other not-yet-observed particles. The exact value of the W mass is crucial for calculations that allow scientists to estimate the likely mass of the Higgs boson by studying its subtle quantum effects on the W boson and the top quark, an elementary particle that was discovered at Fermilab in 1995.
Scientists working on the DZero experiment now have measured the mass of the W boson with a precision of 0.05 percent. The exact mass of the particle measured by DZero is 80.401 +/- 0.044 GeV/c^2. The collaboration presented its result at the annual conference on Electroweak Interactions and Unified Theories known as Rencontres de Moriond last Sunday.
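The quoted figures are easy to cross-check; the sketch below verifies the 0.05 percent precision and the "about 85 times heavier than a proton" comparison from earlier in the article (the proton mass is assumed from standard references, not quoted by DZero):

```python
# Consistency checks on the numbers quoted in the article.
M_W = 80.401          # GeV/c^2, DZero measurement of the W boson mass
DM_W = 0.044          # GeV/c^2, quoted uncertainty
M_PROTON = 0.93827    # GeV/c^2, assumed standard reference value

relative_precision = DM_W / M_W * 100
print(f"relative precision: {relative_precision:.2f}%")    # ~0.05%

ratio = M_W / M_PROTON
print(f"W / proton mass ratio: {ratio:.1f}")               # ~86, i.e. "about 85"
```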
“This beautiful measurement illustrates the power of the Tevatron as a precision instrument and means that the stress test we have ordered for the Standard Model becomes more stressful and more revealing,” said Fermilab theorist Chris Quigg.
The DZero team determined the W mass by measuring the decay of W bosons to electrons and electron neutrinos. Performing the measurement required calibrating the DZero particle detector with an accuracy around three hundredths of one percent, an arduous task that required several years of effort from a team of scientists including students.
Since its discovery at the European laboratory CERN in 1983, many experiments at Fermilab and CERN have measured the mass of the W boson with steadily increasing precision. Now DZero achieved the best precision by the painstaking analysis of a large data sample delivered by the Tevatron particle collider at Fermilab. The consistency of the DZero result with previous results speaks to the validity of the different calibration and analysis techniques used.
“This is one of the most challenging precision measurements at the Tevatron,” said DZero co-spokesperson Dmitri Denisov of Fermilab. “It took many years of effort from our collaboration to build the 5,500-ton detector, collect and reconstruct the data, and then perform the complex analysis to improve our knowledge of this fundamental parameter of the Standard Model.”
The W mass measurement is another major result obtained by the DZero experiment this month. Less than a week ago, the DZero collaboration submitted a paper on the discovery of single top quark production at the Tevatron collider. In the last year, the collaboration has published 46 scientific papers based on measurements made with the DZero particle detector.
Adapted from materials provided by DOE/Fermi National Accelerator Laboratory.

lunedì 16 marzo 2009

Supercooled Silicon: Liquid-liquid Phase Transition In Silicon Confirmed


ScienceDaily (Mar. 16, 2009) — Using rigorous computer calculations, researchers from Carnegie Mellon University and the Carnegie Institution of Washington have established evidence that supercooled silicon experiences a liquid-liquid phase transition, where at a certain temperature two different states of liquid silicon exist.
The two states each have unique properties that could be used to develop new silicon-based materials. Furthermore, the methods developed can be applied to gain a better understanding of other materials.
The findings will be presented Friday, March 20 at the American Physical Society's March Meeting in Pittsburgh. The results also were published as an Editor's Selection in the Feb. 20 issue of Physical Review Letters.
Under normal conditions, phase transitions occur when the structure of a substance changes in response to a change in temperature and/or pressure. The most commonly thought of phase transitions are between solids, liquids and gases. However, it was recently discovered that some substances experience phase transitions within the same state, resulting in two different forms with their own individual characteristics. For example, it's thought that water has a liquid-liquid transition.
"Water and silicon share many unusual characteristics. For example, in most materials, their solid states are denser than their liquid states, but in water and silicon the opposite is true. That's why ice floats on water and solid silicon floats on liquid silicon," said Michael Widom, professor of physics at Carnegie Mellon. "The unusual volume expansion of frozen water and silicon that causes them to float is probably connected to the existence of a liquid-liquid transition."
As with water, supercooled silicon — liquid silicon cooled below its freezing point without crystallizing into a solid — has been hypothesized to undergo a liquid-liquid phase transition. Computer simulations initially predicted the existence of two liquid phases, but further simulations and experiments failed to produce the evidence needed to prove their presence.
To resolve the disparity among the prior experiments, Carnegie Mellon's Widom and Carnegie Institution of Washington postdoc Panchapakesan Ganesh, who began this work as a graduate student in Widom's lab, used rigorous first-principles calculations based on quantum mechanics to prove, for the first time, the existence of a liquid-liquid transition in silicon. First-principles calculations start from established laws of physics and make no assumptions or approximations, leaving little room for question. Such calculations provide the most accurate predictions of structural properties at high pressures and temperatures, where conducting actual experiments is nearly impossible.
Because the calculations are based on quantum mechanics, they were extremely complex and time-consuming: it took one month of computing time to determine the molecular dynamics of silicon at a single temperature and volume. The researchers applied novel methods of parallel tempering and histogram data analysis to examine nine temperatures and 12 volumes. In all, the calculations required nine CPU-years, but only one calendar year, because they ran in parallel on many computers.
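The CPU-time bookkeeping is easy to verify; a one-line check of the arithmetic (Python, for illustration):

```python
# Nine temperatures x twelve volumes, one CPU-month per (T, V) point
temperatures, volumes = 9, 12
cpu_months = temperatures * volumes
print(cpu_months / 12)  # → 9.0 CPU-years
```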
The computations revealed that a liquid-to-liquid phase transition, evidenced by the presence of a van der Waals loop, occurred when silicon was supercooled to 1200 kelvin; silicon normally freezes at 1700 kelvin. A van der Waals loop occurs when pressure grows as volume increases, marking a thermodynamically unstable situation. The instability is resolved by separation into two coexisting states of differing densities — in this case two distinct forms of liquid silicon, each with its own unique and dissimilar properties. One was high-density and highly coordinated, with metallic properties much like normal liquid silicon; the other was low-density, low-coordinated and semi-metallic, with a structure closer to that of solid silicon.
"This study shows that accurate calculations based on quantum mechanics can now answer long-standing questions about familiar and unfamiliar materials," Widom said.
The simulation methods used by the researchers are a breakthrough in their own right. The computational methods can be applied to achieve a better understanding of a wide range of elements and molecules and how they behave at extremely high temperatures. Revealing the structure and properties of different elements and compounds at previously untestable conditions could lead to the development of new materials with commercial applications. Widom, for example, is now using the tools to study metallic glass, a solid metal with the structure of a liquid that has desirable properties not found in commonly used alloys.
Adapted from materials provided by Carnegie Mellon University, via EurekAlert!, a service of AAAS.

Magnetic Properties Of Iron-based Superconductors Explored


ScienceDaily (Mar. 16, 2009) — Scientists at the Naval Research Laboratory (NRL) have proposed theoretical models to explain the normal magnetic properties in iron-based superconductors. This research was published in the December 21, 2008 issue of Nature Physics.
Their research builds on earlier research they conducted proposing a theoretical model for superconductivity in newly discovered iron-based superconductors. That earlier research was published in Physical Review Letters.
To set the stage for the NRL researchers' recent accomplishments, consider three landmark discoveries in superconducting materials over the last 50 years:
high-Tc cuprates in 1986, with critical temperatures up to 160 K;
magnesium diboride (MgB2) in 2001, at 39 K; and
iron-based superconductors in 2008, at up to 57 K.
Superconductivity in cuprates (chemical compounds containing copper oxide) is believed to originate from electron-electron interaction, magnetic or Coulomb, and is understood as so-called d-wave symmetry superconductivity.
While in conventional BCS superconductors, the superconducting order parameter is the same for all electrons, for d-wave it actually changes sign depending on the direction in which an electron moves (roughly, like cos2α). It is worth noting that in 20 years more than 100,000 papers have been published studying the high-Tc cuprates.
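The angular dependence mentioned above is easy to visualize: a d-wave order parameter proportional to cos 2α is positive along one crystal axis, negative along the perpendicular one, with nodes at 45 degrees. A minimal numerical illustration (Python; the sampled angles are arbitrary):

```python
import math

# d-wave order parameter ~ cos(2*alpha): changes sign as the direction
# of electron motion rotates by 90 degrees, with nodes at 45 degrees
for deg in (0, 30, 60, 90):
    alpha = math.radians(deg)
    print(deg, round(math.cos(2 * alpha), 2))  # → 1.0, 0.5, -0.5, -1.0
```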
Many believe that MgB2 was the next milestone in superconducting materials because its mechanism is conventional, yet its critical temperature is much higher than in any other conventional superconductor and, at that time, was second only to the cuprates. MgB2 turned out to be the first example of a multigap superconductor, in which the order parameter never changes sign but differs between different groups of electrons. This was theoretically predicted at NRL and soon confirmed by experiment. While not generating quite the level of excitement that the cuprates produced, MgB2 resulted in 4,000 publications in seven years.
The most recent breakthrough in superconductivity was the discovery of high-temperature superconductivity in iron-based materials such as LaFeAsO, BaFe2As2, and others. With iron being a strongly magnetic species, these materials immediately promised a new paradigm in the search for new superconductors. Indeed, it became increasingly clear that superconductivity here is very dissimilar to either the cuprates or MgB2, and that the strong magnetism of iron likely plays a crucial role. Within a few months of the initial discovery, two NRL scientists, Dr. Igor Mazin and Dr. Michelle Johannes of the Materials Science and Technology Division, in collaboration with two researchers at Oak Ridge (both NRL alumni), proposed that a totally new superconducting state is realized in FeAs superconductors, which they dubbed "s±": two groups of electrons sport not only different order parameters, but also different signs.
Experimental evidence soon began to accumulate in favor of the NRL researchers' proposal, which is currently considered the most likely scenario. Their paper was posted on March 19, 2008, published in Physical Review Letters on August 1, 2008, and by December 2008 had been cited in other articles and preprints more than a hundred times. These novel materials remain one of the hottest topics in physics; since their discovery, papers on the subject have appeared at a steady rate of 2.5 per day. Should this rate hold, the number of publications will surpass that on MgB2 within four years.
Superconductivity, it appears, is not the only mystery of these so-called ferropnictides. In the undoped state they demonstrate highly unusual magnetic properties: a very rare magnetic ordering pattern and a magnetic moment whose measured magnitude is highly variable. In fact, small modifications can drive the materials from a practically nonmagnetic state to magnetism as strong as that of pure iron; most shockingly, the transition temperature barely changes. Some of the systems feature two transitions, a magnetic one and a magnetically driven structural one, but they occur in counterintuitive order: the structural transition happens first and the magnetic one next. These and many other properties can hardly be explained by existing theories.
In the December 2008 Nature Physics article, Drs. Mazin and Johannes propose a highly unusual ground state. They suggest that the Fe ions are always magnetic in ferropnictides, but that the observable magnetic moment is strongly reduced or entirely suppressed by the formation of antiferromagnetic domains whose boundaries are dynamic and strongly fluctuating. Observable long-range order appears when domains are large and their walls are pinned. A structural transition without observable long-range magnetism occurs when the domain boundaries are predominantly antiphase ones, so that the x/y symmetry is broken even though there is no long-range order. The superconducting compositions, where neither long-range magnetism nor structural distortions are observed, correspond to small twin domains with dynamic boundaries. Should this conjecture be corroborated by experiment, researchers will be entering an entirely new world of magnetic excitations (topological excitations of domain boundaries) that will most likely be very important, if not instrumental, for high-Tc superconductivity in ferropnictides.
Adapted from materials provided by Naval Research Laboratory, via EurekAlert!, a service of AAAS.

Dancing 'Adatoms' Help Chemists Understand How Water Molecules Split

ScienceDaily (Mar. 16, 2009) — Single oxygen atoms dancing on a metal oxide slab, glowing brighter here and dimmer there, have helped chemists better understand how water splits into oxygen and hydrogen. In the process, the scientists have visualized a chemical reaction that had previously only been talked about. The new work improves our understanding of the chemistry needed to generate hydrogen fuel from water or to clean contaminated water.

The scientists made the discovery while trying to determine the basics of how titanium dioxide -- a compound sometimes found in sunscreen -- breaks down water. The chemical reactions between water and oxygen are central to such varied processes as hydrogen production, breaking down pollutants, and in solar energy.
"Oxygen and water are involved in many, many reactions," said physicist Igor Lyubinetsky at the Department of Energy's Pacific Northwest National Laboratory, who reported the team's results in the March 6 issue of Physical Review Letters. "This mobility might interfere with some reactions and help others."
Bustling Bright Spots
While exploring titanium dioxide as a way to split water into its hydrogen and oxygen pieces, researchers can use a technique called scanning tunneling microscopy to watch the chemical reaction. The surface of a slab of titanium dioxide is like a corn field: rows of oxygen atoms rise from a patch of titanium atoms. The alternating oxygen and titanium rows look like stripes.
Scientists can also see some atoms and molecules that come to rest on the surface as bright spots. One such visible atom is a single oxygen atom that comes to rest on a titanium atom, called an "adatom". Chemists can see water molecules only if they drop the temperature dramatically -- at ambient temperature, the molecules move too fast for the method to pick them up.
In this work, PNNL scientists studied water's reactions with titanium dioxide at ambient temperature at EMSL, the DOE's Environmental Molecular Sciences Laboratory on the PNNL campus. Starting with a surface plated with a few oxygen adatoms, they added water -- and the adatoms started to dance.
"Suddenly, almost every adatom started to move back and forth along the titanium row," said Lyubinetsky. "From theory and previous work, we expected to see this along the row."
Remarkably, the adatoms didn't just slide up and down the stripes. They also bounced out of them and landed in others, like pogoing dancers in a mosh pit.
"We saw quite unexpected things. We thought it was very strange -- we saw adatoms jump over the rows," Lyubinetsky said. "We just couldn't explain it."
Calculating how much energy it would take for the adatoms to move by themselves, much less hop over an oxygen row, the chemists suspected the adatoms were getting help -- most likely from the invisible water molecules.
The Unseen Enabler
To make sense of the dancing adatoms, the team calculated how much energy it would take to move adatoms with the help of water molecules. If a water molecule sits down next to an adatom, one of the water's hydrogen atoms can jump to the adatom, forming two oxygen-hydrogen pairs.
These pairs are known as hydroxyls and tend to steal atoms from other molecules, including each other. One of the thieving hydroxyls can then nab the other's hydrogen atom, turning back into a water molecule. The water molecule floats off, leaving behind an adatom. Half the time, that adatom is one spot over -- which makes the original appear to have moved.
The chemists determined that water can help the adatom jump a row as well: If a water molecule and an adatom are situated on either side of a raised oxygen row, a row oxygen can serve as the middleman, handing over a hydrogen from the water molecule to the adatom. Again, two hydroxyls form, one ultimately stealing both hydrogens (with the help of the middleman) and zipping away as water. If the incoming water molecule has been stripped, the adatom appears to have hopped over.
The calculated energy required for these different scenarios fit well with the team's experimental data. When a row oxygen serves as a middleman, the process is known as "pseudo-dissociation", a reaction suggested by chemists but until now, never verified experimentally.
"We realized that only if we involved the pseudo-dissociative state of the water can we explain it," said Lyubinetsky. "Otherwise, all the calculations show there's too high a barrier, the adatom just cannot jump by itself."
Lyubinetsky points out that this shows that water itself can work as a catalyst. A catalyst is a molecule that can help a chemical reaction along and remain unchanged by the experience.
"Water is required to move the adatoms around, but like a catalyst it is not consumed in the reaction," he said. "You start with water and you end with water."
In the future, the team plans on determining if water can make the adatoms move other species and more than one space at a time. In addition, they will investigate how light affects the reaction.
This work was supported by the Department of Energy's Office of Science.
Journal reference:
Y. Du, N. A. Deskins, Z. Zhang, Z. Dohnálek, M. Dupuis, and I. Lyubinetsky. Two Pathways for Water Interaction with Oxygen Adatoms on TiO2(110). Physical Review Letters, March 6, 2009. DOI: 10.1103/PhysRevLett.102.096102
Adapted from materials provided by DOE/Pacific Northwest National Laboratory.

Simple Filter Delivers Clean, Safe Drinking Water, Potentially To Millions


ScienceDaily (Mar. 16, 2009) — As an efficient, inexpensive, low-tech way to treat water, Dr. James Amburgey’s research could bring clean, safe drinking water to many millions of people.
Simplicity is the primary objective of the rapid sand filter system Amburgey is developing. “The idea is to make it as simple as possible,” he said. “All that is needed is some PVC pipe, sand and inexpensive treatment chemicals. The only way to practically deploy a system to the people of less developed countries is for it to be inexpensive and simple.”
Amburgey, an assistant professor of Civil and Environmental Engineering, specializes in drinking and recreational water treatment. He has done work in the past with slow sand filters, but his latest research with rapid sand filters is demonstrating the ability to clean water much more effectively and 30 to 50 times faster.
“One significant challenge with sand filters is in removing Cryptosporidium oocysts,” Amburgey said. “One ‘crypto’ is five microns in diameter, but the gaps between grains of sand are approximately 75 microns. So, we have to get the crypto to stick to the sand grains.”
To achieve this, Amburgey has developed a chemical pretreatment scheme based on ferric chloride and a pH buffer that is added to the water. In its natural state, Cryptosporidium is negatively charged, as are sand grains, so they repel one another. The chemical pretreatment changes the Cryptosporidium surface charge to near neutral, which eliminates the natural electrostatic repulsion and causes it to be attracted to and stick to the sand grains via van der Waals forces.
In research using a prototype of this system in his lab, Amburgey and his students have done preliminary tests on waters from local rivers, creeks and wastewater treatment plants. Their results are typically greater than 99 percent removal for Cryptosporidium-sized particles.
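Water-treatment engineers usually express removal efficiencies like this as a log removal value (LRV); a quick conversion of the figure above (Python, purely for illustration):

```python
import math

# ">99 percent removal" as a log removal value:
# LRV = -log10(fraction of particles remaining)
removal = 0.99
lrv = -math.log10(1 - removal)
print(round(lrv, 2))  # → 2.0, i.e. a "2-log" removal
```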
“A common problem in drinking water treatment facilities is that changing water quality requires changes in the chemical pretreatment dosages,” Amburgey said. “Our tests, so far, have shown that this system utilizing only a single set of chemical pretreatment dosages is effective on all waters tested to date.”
Another advantage of the system is that it can be adapted by using local sands or crushed rock that are indigenous to a particular region of the world.
Adapted from materials provided by University of North Carolina at Charlotte.

domenica 15 marzo 2009

Quantum Teleportation Between Distant Matter Qubits: First Between Atoms 1 Meter Apart


ScienceDaily (Jan. 23, 2009) — For the first time, scientists have successfully teleported information between two separate atoms in unconnected enclosures a meter apart – a significant milestone in the global quest for practical quantum information processing.
Teleportation may be nature's most mysterious form of transport: Quantum information, such as the spin of a particle or the polarization of a photon, is transferred from one place to another, without traveling through any physical medium. It has previously been achieved between photons over very large distances, between photons and ensembles of atoms, and between two nearby atoms through the intermediary action of a third. None of those, however, provides a feasible means of holding and managing quantum information over long distances.
Now a team from the Joint Quantum Institute (JQI) at the University of Maryland (UMD) and the University of Michigan has succeeded in teleporting a quantum state directly from one atom to another over a substantial distance. That capability is necessary for workable quantum information systems because they will require memory storage at both the sending and receiving ends of the transmission.
In the Jan. 23 issue of the journal Science, the scientists report that, by using their protocol, atom-to-atom teleported information can be recovered with perfect accuracy about 90% of the time – and that figure can be improved.
"Our system has the potential to form the basis for a large-scale 'quantum repeater' that can network quantum memories over vast distances," says group leader Christopher Monroe of JQI and UMD. "Moreover, our methods can be used in conjunction with quantum bit operations to create a key component needed for quantum computation." A quantum computer could perform certain tasks, such as encryption-related calculations and searches of giant databases, considerably faster than conventional machines. The effort to devise a working model is a matter of intense interest worldwide.
Teleportation works because of a remarkable quantum phenomenon, called "entanglement," which only occurs on the atomic and subatomic scale. Once two objects are put in an entangled state, their properties are inextricably entwined. Although those properties are inherently unknowable until a measurement is made, measuring either one of the objects instantly determines the characteristics of the other, no matter how far apart they are.
The JQI team set out to entangle the quantum states of two individual ytterbium ions so that information embodied in the condition of one could be teleported to the other. Each ion was isolated in a separate high-vacuum trap, suspended in an invisible cage of electromagnetic fields and surrounded by metal electrodes. [See illustrations.] The researchers identified two readily discernible ground (lowest energy) states of the ions that would serve as the alternative "bit" values of an atomic quantum bit, or qubit.
Conventional electronic bits (short for binary digits), such as those in a personal computer, are always in one of two states: off or on, 0 or 1, high or low voltage, etc. Quantum bits, however, can be in some combination, called a "superposition," of both states at the same time, like a coin that is simultaneously heads and tails – until a measurement is made. It is this phenomenon that gives quantum computation its extraordinary power.
At the start of the experimental process, each ion (designated A and B) is initialized in a given ground state. Then ion A is irradiated with a specially tailored microwave burst from one of its cage electrodes, placing the ion in some desired superposition of the two qubit states – in effect writing into memory the information to be teleported.
Immediately thereafter, both ions are excited by a picosecond (one trillionth of a second) laser pulse. The pulse duration is so short that each ion emits only a single photon as it sheds the energy gained from the laser pulse and falls back to one or the other of the two qubit ground states. Depending on which one it falls into, each ion emits a photon whose color (designated red and blue) is perfectly correlated with the two atomic qubit states. It is this entanglement between each atomic qubit and its photon that will eventually allow the atoms themselves to become entangled.
The emitted photons are captured by lenses, routed to separate strands of fiber-optic cable, and carried into opposite sides of a 50-50 beamsplitter, where each photon is equally likely to pass straight through the splitter or be reflected. On either side of the beamsplitter output are detectors that can record the arrival of a single photon.
Before reaching the beamsplitter, each photon is in a superposition of states. After encountering the beamsplitter, four color combinations are possible: blue-blue, red-red, blue-red and red-blue. In nearly all of those variations, the photons cancel each other out on one side and both end up in the same detector on the other side. But there is one – and only one – combination in which both detectors will record a photon at exactly the same time.
In that case, however, it is physically impossible to tell which ion produced which photon because it cannot be known whether the photon arriving at a detector passed through the beamsplitter or was reflected by it.
Thanks to the peculiar laws of quantum mechanics, that inherent uncertainty projects the ions into an entangled state. That is, each ion is in a correlated superposition of the two possible qubit states. The simultaneous detection of photons at the detectors does not occur often, so the laser stimulus and photon emission process has to be repeated many thousands of times per second. But when a photon appears in each detector, it is an unambiguous signature of entanglement between the ions.
When an entangled condition is identified, the scientists immediately take a measurement of ion A. The act of measurement forces it out of superposition and into a definite condition: one of the two qubit states. But because ion A's state is irreversibly tied to ion B's, the measurement of A also forces B into a complementary state. Depending on which state ion A is found in, the researchers now know precisely what kind of microwave pulse to apply to ion B in order to recover the exact information that had originally been stored in ion A. Doing so results in the accurate teleportation of the information.
What distinguishes this outcome as teleportation, rather than any other form of communication, is that no information pertaining to the original memory actually passes between ion A and ion B. Instead, the information disappears when ion A is measured and reappears when the microwave pulse is applied to ion B.
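The algebra of the final steps can be sketched numerically. The following is an idealized, noise-free toy model (Python/NumPy), not the experiment itself: the exact form of the heralded two-ion state and the particular correction pulses are assumptions drawn from standard teleportation algebra, chosen so the bookkeeping works out.

```python
import numpy as np

# Single-qubit basis states and the Pauli correction operators
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def teleport(alpha, beta, outcome):
    """Recover alpha|0> + beta|1> on ion B after measuring ion A.

    Starts from an assumed heralded two-ion state
    alpha|0_A 1_B> - beta|1_A 0_B> (an idealization of the state after
    the photon coincidence). `outcome` ('+' or '-') is the result of
    measuring ion A in the (|0> +/- |1>)/sqrt(2) basis.
    """
    psi = alpha * np.kron(ket0, ket1) - beta * np.kron(ket1, ket0)
    sign = +1 if outcome == "+" else -1
    basis_a = (ket0 + sign * ket1) / np.sqrt(2)
    # Project ion A onto the measured basis state; read off ion B's state
    state_b = np.array([np.vdot(np.kron(basis_a, ket0), psi),
                        np.vdot(np.kron(basis_a, ket1), psi)])
    state_b = state_b / np.linalg.norm(state_b)
    # Outcome-dependent correction pulse on ion B
    state_b = X @ state_b
    if outcome == "+":
        state_b = Z @ state_b
    return state_b

alpha, beta = 0.6, 0.8
target = alpha * ket0 + beta * ket1
for outcome in ("+", "-"):
    fidelity = abs(np.vdot(target, teleport(alpha, beta, outcome))) ** 2
    print(outcome, round(fidelity, 6))  # → fidelity 1.0 for both outcomes
```

In this ideal model the state is recovered perfectly for either measurement outcome; in the real experiment the heralding succeeds only rarely, and noise limits the recovered fidelity to about 90%, as the article notes.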
"One particularly attractive aspect of our method is that it combines the unique advantages of both photons and atoms," says Monroe. "Photons are ideal for transferring information fast over long distances, whereas atoms offer a valuable medium for long-lived quantum memory. The combination represents an attractive architecture for a 'quantum repeater,' that would allow quantum information to be communicated over much larger distances than can be done with just photons. Also, the teleportation of quantum information in this way could form the basis of a new type of quantum internet that could outperform any conventional type of classical network for certain tasks."
The Joint Quantum Institute is a partnership between the National Institute of Standards and Technology and UMD, with additional support from the Laboratory for Physical Sciences. The work reported in Science was supported by the Intelligence Advanced Research Projects Activity program under a U.S. Army Research Office contract, the National Science Foundation (NSF) Physics at the Information Frontier program, and the NSF Physics Frontier Center at JQI.
Adapted from materials provided by University of Maryland.