Saturday, July 25, 2009

Nanotubes Weigh A Single Atom


ScienceDaily (July 23, 2009) — How can you weigh a single atom? European researchers have built an exquisite new device that can do just that. It may ultimately allow scientists to study the progress of chemical reactions, molecule by molecule.
Carbon nanotubes are ultra-thin fibres of carbon and a nanotechnologist’s dream.
They are made from thin sheets of carbon only one atom thick – known as graphene – rolled into a tube only a few nanometres across. Even the thickest is more than a thousand times thinner than a human hair.
Interest in carbon nanotubes blossomed in the 1990s when they were found to possess impressive characteristics that make them very attractive raw materials for nanotechnology of all kinds.
“They have unique properties,” explains Professor Pertti Hakonen of Helsinki University of Technology. “They are about 1000 times stronger than steel and very good thermal conductors and good electrical conductors.”
Hakonen is coordinator of the EU-funded CARDEQ project, which is exploiting these intriguing materials to build a device sensitive enough to measure the masses of atoms and molecules.
Vibrating strings
A carbon nanotube is essentially an extremely thin, but stiff, piece of string and, like other strings, it can vibrate. As all guitar players know, heavy strings vibrate more slowly than lighter strings, so if a suspended carbon nanotube is allowed to vibrate at its natural frequency, that frequency will fall if atoms or molecules become attached to it.
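The guitar-string picture above can be made quantitative. A minimal sketch, assuming the standard first-order result that a small mass landing at a resonator's antinode shifts its natural frequency by Δf ≈ -(f0/2)(Δm/m_eff); the frequency, effective mass, and chromium example below are illustrative numbers, not the actual CARDEQ device parameters.

```python
# Added mass lowers a resonator's natural frequency. For a small mass dm
# landing at the antinode of a resonator with effective mass m_eff and
# natural frequency f0, first-order theory gives
#   df ≈ -(f0 / 2) * (dm / m_eff)
# All numbers below are illustrative, not taken from the CARDEQ device.

def frequency_shift(f0_hz, m_eff_kg, dm_kg):
    """First-order frequency shift for a point mass at the antinode."""
    return -0.5 * f0_hz * dm_kg / m_eff_kg

def added_mass(f0_hz, m_eff_kg, df_hz):
    """Invert the relation: infer the adsorbed mass from a measured shift."""
    return -2.0 * m_eff_kg * df_hz / f0_hz

AMU_KG = 1.66053906660e-27   # one dalton (atomic mass unit) in kilograms

f0 = 500e6                   # 500 MHz resonance (illustrative)
m_eff = 1e-21                # ~1 attogram effective mass (illustrative)
dm = 52 * AMU_KG             # one chromium atom, ~52 Da

df = frequency_shift(f0, m_eff, dm)
print(f"shift for one Cr atom: {df:.1f} Hz")
print(f"mass recovered from shift: {added_mass(f0, m_eff, df) / AMU_KG:.1f} Da")
```

Running the inference in reverse, as the second function does, is exactly how a measured frequency jump is turned into a mass reading.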
It sounds simple and the idea is not new. What is new is the delicate sensing system needed to detect the vibration and measure its frequency. Some nanotubes turn out to be semiconductors, depending on how the graphene sheet is wound, and it is these that offer the solution that CARDEQ has developed.
Members of the consortium have taken the approach of building a semiconducting nanotube into a transistor so that the vibration modulates the current passing through it. “The suspended nanotube is, at the same time, the vibrating element and the readout element of the transistor,” Hakonen explains.
“The idea was to run three different detector plans in parallel and then select the best one,” he says. “Now we are down to two. So we have the single electron transfer concept, which is more sensitive, and the field effect transistor concept, which is faster.”
Single atoms
Last November, CARDEQ partners in Barcelona reported that they had sensed the mass of single chromium atoms deposited on a nanotube. But Hakonen says that even smaller atoms, of argon, can now be detected, though the device is not yet stable enough for such sensitivity to be routine. “When the device is operating well, we can see a single argon atom on short time scales. But then if you measure too long the noise becomes large.”
CARDEQ is not alone in employing carbon nanotubes as mass sensors. Similar work is going on at two centres in California – Berkeley and Caltech – though each has adopted a different method of measuring the mass.
All three groups have announced they can perform mass detection on the atomic level using nanotubes, but CARDEQ researchers provided the most convincing data with a clear shift in the resonance frequency.
But a single atom is nowhere near the limit of what is possible. Hakonen is confident they can push the technology to detect the mass of a single nucleon – a proton or neutron.
“It’s a big difference,” he admits, “but typically the improvements in these devices are jump-like. It’s not like developing some well-known device where we have only small improvements from time to time. This is really front-line work and breakthroughs do occur occasionally.”
Biological molecules
If the resolution can be pared down to a single nucleon, then researchers can look forward to accurately weighing different types of molecules and atoms in real time.
It may then become possible to observe the radioactive decay of a single nucleus and to study other types of quantum mechanical phenomena.
But the real excitement would be in tracking chemical and biological reactions involving individual atoms and molecules reacting right there on the vibrating nanotube. That could have applications in molecular biology, allowing scientists to study the basic processes of life in unprecedented detail. Such practical applications are probably ten years away, Hakonen estimates.
“It will depend very much on how the technology for processing carbon nanotubes develops. I cannot predict what will happen, but I think chemical reactions in various systems, such as proteins and so on, will be the main applications in the future.”
The CARDEQ project received funding from the FET-Open strand of the EU’s Sixth Framework Programme for ICT research.
Adapted from materials provided by ICT Results.

Physicists Create First Nanoscale Mass Spectrometer

ScienceDaily (July 24, 2009) — Using devices millionths of a meter in size, physicists at the California Institute of Technology (Caltech) have developed a technique to determine the mass of a single molecule, in real time.
The mass of molecules is traditionally measured using mass spectrometry, in which samples consisting of tens of thousands of molecules are ionized, to produce charged versions of the molecules, or ions. Those ions are then directed into an electric field, where their motion, which is choreographed by both their mass and their charge, allows the determination of their so-called mass-to-charge ratio. From this, their mass can ultimately be ascertained.
The new technique, developed over 10 years of effort by Michael L. Roukes, a professor of physics, applied physics, and bioengineering at Caltech and codirector of Caltech's Kavli Nanoscience Institute, and his colleagues, simplifies and miniaturizes the process through the use of tiny nanoelectromechanical system (NEMS) resonators. The bridge-like resonators, which are 2 micrometers long and 100 nanometers wide, vibrate at a high frequency and effectively serve as the "scale" of the mass spectrometer.
"The frequency at which the resonator vibrates is directly proportional to its mass," explains research physicist Akshay Naik, the first author of a paper about the work that appears in the journal Nature Nanotechnology. Changes in the vibration frequency, then, correspond to changes in mass.
"When a protein lands on the resonator, it causes a decrease in the frequency at which the resonator vibrates and the frequency shift is proportional to the mass of the protein," Naik says.
As described in the paper, the researchers used the instrument to test a sample of the protein bovine serum albumin (BSA), which is known to have a mass of 66 kilodaltons (kDa; a dalton is a unit of mass used to describe atomic and molecular masses, with one dalton approximately equal to the mass of one hydrogen atom).
The BSA protein ions are produced in vapor form using an electrospray ionization (ESI) system. The ions are then sprayed onto the NEMS resonator, which vibrates at a frequency of 450 megahertz. "The flux of proteins reaching the NEMS is such that only one or two proteins land on the resonator in a minute," Naik says.
When a BSA protein molecule is dropped onto the resonator, the resonator's vibration frequency decreases by as much as 1.2 kilohertz, a small but readily detectable change. In contrast, the beta-amylase protein molecule, which has a mass of about 200 kDa, or roughly three times that of BSA, causes a maximum frequency shift of about 3.6 kHz.
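The two reported shifts are mutually consistent: for the same landing position, the maximum frequency shift should scale linearly with the adsorbed mass. A quick check using only the figures quoted above:

```python
# Consistency check of the reported numbers: the maximum frequency shift
# should scale linearly with the adsorbed mass (same landing position).
bsa = {"mass_kDa": 66.0, "max_shift_kHz": 1.2}
beta_amylase = {"mass_kDa": 200.0, "max_shift_kHz": 3.6}

mass_ratio = beta_amylase["mass_kDa"] / bsa["mass_kDa"]
shift_ratio = beta_amylase["max_shift_kHz"] / bsa["max_shift_kHz"]

print(f"mass ratio:  {mass_ratio:.2f}")   # 3.03
print(f"shift ratio: {shift_ratio:.2f}")  # 3.00
```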
In principle, Naik says, it should be possible to use the system to detect one-dalton differences in mass, the equivalent of a single hydrogen atom, but this will require a next generation of nanowire-based devices that are smaller and have even better noise performance.
Because the location where the protein lands on the resonator also affects the frequency shift (falling onto the center of the resonator causes a larger change than landing near the ends, for example), "we can't tell the mass with a single measurement, but needed about 500 frequency jumps in the published work," Naik says. In the future, the researchers will decouple measurements of the mass and the landing position of the molecules being sampled. This technique, which they have already prototyped, will soon enable mass spectra for complicated mixtures to be built up, molecule by molecule.
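Why a single jump is not enough can be sketched numerically. The toy model below assumes the commonly used approximation that a point mass couples through the square of the mode shape, here simplified to sin(πx) for a doubly clamped beam; the 1.2 kHz center-landing shift is borrowed from the BSA figure above for illustration, and the statistical treatment is far cruder than the published analysis.

```python
import math
import random

# Toy model of position-dependent frequency jumps: a molecule landing at
# normalized position x (0..1) on a doubly clamped beam produces a jump
# scaled by phi(x)**2 with the approximate mode shape phi(x) = sin(pi*x),
# so the jump is largest for center landings. Numbers are illustrative.

random.seed(1)

MAX_SHIFT_KHZ = 1.2  # center-landing shift for one molecule (illustrative)

def observed_jump(x):
    """Frequency jump (kHz) for a molecule landing at position x."""
    return MAX_SHIFT_KHZ * math.sin(math.pi * x) ** 2

# Record many single-molecule landing events at random positions ...
jumps = [observed_jump(random.random()) for _ in range(500)]

# ... and recover the center-landing shift (hence the mass) from the
# largest observed jumps, since some molecules do land near the center.
estimate = max(jumps)
print(f"estimated maximum shift: {estimate:.3f} kHz")
```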
Eventually, Roukes and colleagues hope to create arrays of perhaps hundreds of thousands of the NEMS mass spectrometers, working in parallel, which could determine the masses of hundreds of thousands of molecules "in an instant," Naik says.
As Roukes points out, "the next generation of instrumentation for the life sciences—especially those for systems biology, which allows us to reverse-engineer biological systems—must enable proteomic analysis with very high throughput. The potential power of our approach is that it is based on semiconductor microelectronics fabrication, which has allowed creation of perhaps mankind's most complex technology."
The other authors of the paper are graduate student Mehmet S. Hanay and staff scientist Philip Feng, from Caltech, and Wayne K. Hiebert of the National Research Council of Canada. The work was supported by the National Institutes of Health and, indirectly, by the Defense Advanced Research Projects Agency and the Space and Naval Warfare Systems Command.
Journal reference:
Towards single-molecule nanomechanical mass spectrometry. Nature Nanotechnology, July 4, 2009.
Adapted from materials provided by California Institute of Technology.

Thursday, July 23, 2009

Purer Water With Long Shelf Life Made Possible With One Atom Change To Water Purification Product

ScienceDaily (July 23, 2009) — By substituting a single atom in a molecule widely used to purify water, researchers at Sandia National Laboratories have created a far more effective decontaminant with a shelf life superior to products currently on the market.
Sandia has applied for a patent on the material, which removes bacterial, viral and other organic and inorganic contaminants from river water destined for human consumption, and from wastewater treatment plants prior to returning water to the environment.
“Human consumption of ‘challenged’ water is increasing worldwide as preferred supplies become more scarce,” said Sandia principal investigator May Nyman. “Technological advances like this may help solve problems faced by water treatment facilities in both developed and developing countries.”
The study was published in June 2009 in the journal Environmental Science & Technology (a publication of the American Chemical Society) and highlighted in the June 22 edition of Chemical & Engineering News. Sandia is working with a major producer of water treatment chemicals to explore the commercial potential of the compound.
The water-treatment reagent, known as a coagulant, is made by substituting an atom of gallium in the center of an aluminum oxide cluster — itself a commonly used coagulant in water purification, says Nyman.
The substitution isn’t performed atom by atom with nanoscopic tweezers. Instead, it uses a simple chemical process: aluminum salts are dissolved in water, gallium salts are dissolved in a sodium hydroxide solution, and the sodium hydroxide solution is then slowly added to the aluminum solution while heating.
“The substitution of a single gallium atom in that compound makes a big difference,” said Nyman. “It greatly improves the stability and effectiveness of the reagent. We’ve done side-by-side tests with a variety of commercially available products. For almost every case, ours performs best under a wide range of conditions.”
Wide-ranging conditions are inevitable, she said, when dealing with a natural water source such as a river. “You get seasonal and even daily fluctuations in pH, temperature, turbidity and water chemistry. And a river in central New Mexico has very different conditions than say, a river in Ohio.”
The Sandia coagulant attracts and binds contaminants so well because it maintains its electrostatic charge more reliably than conventional coagulants made without gallium, itself a harmless addition.
The new material also resists converting to larger, less-reactive aggregates before it is used. This means it maintains a longer shelf life, avoiding the problem faced by related commercially available products that aggregate over time.
“The chemical substitution [of a gallium atom for an aluminum atom] has been studied by Sandia’s collaborators at the University of California at Davis, but nobody has ever put this knowledge to use in an application such as removing water contaminants like microorganisms,” said Nyman.
The project was conceived and all water treatment studies were performed at Sandia, said Nyman, who worked with Sandia microbiologist Tom Stewart. Transmission electron microscope images of bacteriophages binding to the altered material were achieved at the University of New Mexico. Mass spectroscopy of the alumina clusters in solution was performed at UC Davis.
The work was sponsored by Sandia’s Laboratory Directed Research Development office.
Adapted from materials provided by DOE/Sandia National Laboratories.

Ytterbium's Broken Symmetry: Largest Parity Violations Ever Observed In An Atom

ScienceDaily (July 22, 2009) — Ytterbium was discovered in 1878, but until it recently became useful in atomic clocks, the soft metal rarely made the news. Now ytterbium has a new claim to scientific fame. Measurements with ytterbium-174, an isotope with 70 protons and 104 neutrons, have shown the largest effects of parity violation in an atom ever observed – a hundred times larger than the most precise measurements made so far, with the element cesium.
“Parity” assumes that, on the atomic scale, nature behaves identically when left and right are reversed: interactions that are otherwise the same but whose spatial configurations are switched, as if seen in a mirror, ought to be indistinguishable. Sounds like common sense but, remarkably, this isn’t always the case.
“It’s the weak force that allows parity violation,” says Dmitry Budker, who led the research team. Budker is a member of the Nuclear Science Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley.
Of the four forces of nature – strong, electromagnetic, weak, and gravitational – the extremely short-range weak force was the last to be discovered. Neutrinos, having no electric charge, are immune to electromagnetism and only interact through the weak force. The weak force also has the startling ability to change the flavor of quarks, and to change protons into neutrons and vice versa.
Violating parity – neutrons and the weak force
Protons on their own last forever, apparently, but a free neutron falls apart in about 15 minutes; it turns into a proton by emitting an electron and an antineutrino, a process called beta decay. What makes beta decay possible is the weak force.
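The "about 15 minutes" above is the mean lifetime, which is not the same as the half-life. A short worked example, assuming the commonly quoted free-neutron mean lifetime of roughly 880 seconds:

```python
import math

# Free-neutron beta decay: with mean lifetime tau, the fraction of
# neutrons surviving after time t is  N(t)/N0 = exp(-t / tau).
TAU_S = 880.0  # approximate free-neutron mean lifetime, ~15 minutes

def surviving_fraction(t_s):
    return math.exp(-t_s / TAU_S)

# The half-life is shorter than the mean lifetime by a factor of ln(2).
half_life = TAU_S * math.log(2)
print(f"half-life: {half_life / 60:.1f} min")  # ~10.2 min
print(f"fraction left after 15 min: {surviving_fraction(15 * 60):.3f}")
```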
Scientists long assumed that nature, on the atomic scale, was symmetrical. It would look the same not only if left and right were reversed but also if the electrical charges of particles involved in an interaction were reversed, or even if the whole process ran backwards in time. Charge conjugation is written C, parity P, and time T; nature was thought to be C invariant, P invariant, and T invariant.
In 1957 researchers realized that the weak force didn’t play by the rules. When certain kinds of nuclei such as cobalt-60 are placed in a magnetic field to polarize them – line them up – and then allowed to undergo beta decay, they are more likely to emit electrons from their south poles than from their north poles.
This was the first demonstration of parity violation. Before the 1957 cobalt-60 experiment, renowned physicist Richard Feynman had said that if P violation were true – which he doubted – something long thought impossible would be possible after all: “There would be a way to distinguish right from left.”
It’s now apparent that many atoms exhibit parity violation, although it is not easy to detect. P violation has been measured with the greatest accuracy in cesium atoms, which have 55 protons and 78 neutrons in the nucleus, by using optical methods to observe the effect when atomic electrons are excited to higher energy levels.
The Berkeley researchers designed their own apparatus to detect the much larger parity violation predicted for ytterbium. In their experiment, ytterbium metal is heated to 500 degrees Celsius to produce a beam of atoms, which is sent through a chamber where magnetic and electric fields are oriented at right angles to each other. Inside the chamber the ytterbium atoms are hit by a laser beam, tuned to excite some of their electrons to higher energy states via a “forbidden” (highly unlikely) transition. The electrons then relax to lower energies along different pathways.
Weak interactions between the electron and the nucleus – plus weak interactions within the nucleus of the atom – act to mix some of the electron energy states together, making a small contribution to the forbidden transition. But other, more ordinary electromagnetic processes, which involve apparatus imperfections, also mix the states and blur the signal. The purpose of the chamber’s magnetic and electric fields is to amplify the parity-violation effect and to remove or identify these spurious electromagnetic effects.
Upon analyzing their data, the researchers found a clear signal for atomic parity violations, 100 times larger than the similar signal for cesium. With refinements to their experiment, the strength and clarity of the ytterbium signal promise significant advances in the study of weak forces in the nucleus.
Watching the weak force at work
The Budker group’s experiments are expected to expose how the weak charge changes in different isotopes of ytterbium, whose nuclei have the same number of protons but different numbers of neutrons, and will reveal how weak currents flow within these nuclei.
The results will also help explain how the neutrons in the nuclei of heavy atoms are distributed, including whether a “skin” of neutrons surrounds the protons in the center, as suggested by many nuclear models.
“The neutron skin is very hard to detect with charged probes, such as by electron scattering,” says Budker, “because the protons with their large electric charge dominate the interaction.”
He adds, “At a small level, the measured atomic parity violation effect depends on how the neutrons are distributed within the nucleus – specifically, their mean square radius. The mean square radius of the protons is well known, but this will be the first evidence of its kind for neutron distribution.”
Measurements of parity violation in ytterbium may also reveal “anapole moments” in the outer shell of neutrons in the nucleus (valence neutrons). As predicted by the Russian physicist Yakov Zel’dovich, these electric currents are induced by the weak interaction and circulate within the nucleus like the currents inside the toroidal winding of a tokamak; they have been observed in the valence protons of cesium but not yet in valence neutrons.
Eventually the experiments will lead to sensitive tests of the Standard Model – the theory that, although known to be incomplete, still best describes the interactions of all the subatomic particles so far observed.
“So far, the most precise data about the Standard Model has come from high-energy colliders,” says Budker. “The carriers of the weak force, the W and Z bosons, were discovered at CERN by colliding protons and antiprotons, a ‘high-momentum-transfer’ regime. Atomic parity violation tests of the Standard Model are very different – they’re in the low-momentum-transfer regime and are complementary to high-energy tests.”
Since 1957, when Zel’dovich first suggested seeking parity violation in atoms by optical means, researchers have come ever closer to learning how the weak force works in atoms. Parity violation has been detected in many atoms, and its predicted effects, such as anapole moments in the valence protons of cesium, have been seen with ever-increasing clarity. With their new experimental techniques and the observation of a large atomic parity violation in ytterbium, Dmitry Budker and his colleagues have achieved a new landmark, moving closer to fundamental revelations about our asymmetric universe on the atomic scale.
Journal reference:
K. Tsigutkin, D. Dounas-Frazer, A. Family, J. E. Stalnaker, V. V. Yashchuk, and D. Budker. Observation of a large atomic parity violation in ytterbium. Physical Review Letters, in press.
Adapted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Quantum Measurements: Common Sense Is Not Enough, Physicists Show

ScienceDaily (July 23, 2009) — In contrast to classical physics, quantum physics predicts that the properties of a quantum mechanical system depend on the measurement context, i.e. on whether or not other measurements are carried out on the system. A team of physicists from Innsbruck, Austria, led by Christian Roos and Rainer Blatt, has for the first time proven in a comprehensive experiment that it is not possible to explain quantum phenomena in non-contextual terms.
The scientists report on their findings in the current issue of Nature.
Quantum mechanics describes the physical state of light and matter and formulates concepts that totally contradict the classical conception we have of nature. Thus, physicists have tried to explain non-causal phenomena in quantum mechanics by classical models of hidden variables, thereby excluding randomness, which is omnipresent in quantum theory. In 1967, however, the physicists Simon Kochen and Ernst Specker proved that measurements have to be contextual when explaining quantum phenomena by hidden variables. This means that the result of one measurement depends on which other measurements are performed simultaneously.
Interestingly, the simultaneous measurements here are compatible and do not disturb each other. The physicists led by Christian Roos and Rainer Blatt from the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences and the University of Innsbruck have now been able to prove this proposition and rule out non-contextual explanations of quantum theory experimentally. In a series of measurements on a quantum system consisting of two ions they have shown that the measurement of a certain property is dependent on other measurements of the system.
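The contradiction at the heart of such tests can be made concrete with the Peres-Mermin square, the table of two-qubit observables on which contextuality tests of this kind (including Cabello's proposal mentioned below) are based. The sketch below, which assumes NumPy is available, simply verifies the operator identities that make a non-contextual assignment of values impossible.

```python
import numpy as np

# The Peres-Mermin square of two-qubit observables. Each of the nine
# operators squares to the identity, so each measurement yields +1 or -1.
# Every row of operators multiplies to +I, and so do the first two
# columns, but the third column multiplies to -I.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Y), np.kron(Y, I2), np.kron(Y, Y)],
    [np.kron(X, Y), np.kron(Y, X), np.kron(Z, Z)],
]

I4 = np.eye(4)
for row in square:                  # each row multiplies to +I
    assert np.allclose(row[0] @ row[1] @ row[2], I4)
for c in range(3):                  # columns multiply to +I, +I, -I
    col = square[0][c] @ square[1][c] @ square[2][c]
    assert np.allclose(col, I4 if c < 2 else -I4)

# A non-contextual model would pre-assign each observable a fixed value
# of +1 or -1. Multiplying all nine assigned values row by row forces the
# overall product to be +1, but multiplying them column by column forces
# it to be -1: a contradiction, so no such assignment exists.
print("row products: +I, +I, +I; column products: +I, +I, -I")
```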
Technological headstart
The experiment was carried out by the PhD students Gerhard Kirchmair and Florian Zähringer as well as Rene Gerritsma, a Dutch postdoc at the IQOQI. The scientists trapped a pair of laser-cooled calcium ions in an electromagnetic trap and carried out a series of measurements. "For this experiment we used techniques we had previously designed for building a quantum computer. We had to concatenate up to six quantum gates for this experiment," explains Christian Roos. "We were able to do this because it is only recently that we have been able to perform a quantum gate with high fidelity."
Only last year, a team of scientists led by Rainer Blatt realized an almost error-free quantum gate with a fidelity of 99%. With this technological headstart, the scientists have now provided the first comprehensive experimental proof that the observed phenomena cannot be described by non-contextual models with hidden variables. The result is independent of the quantum state – it was tested in ten different states. Possible measurement disturbances could be ruled out by the experimental physicists with the help of theoreticians Otfried Gühne and Matthias Kleinmann from the group led by Prof. Hans Briegel at the IQOQI in Innsbruck.
Randomness cannot be excluded
As early as 1935, Albert Einstein, Boris Podolsky and Nathan Rosen questioned whether quantum mechanics is complete in the sense of a realistic physical theory – a criticism now well known in the scientific world as the EPR paradox. In the mid-1960s, John Bell showed that quantum theory cannot be both a realistic and a local theory, which has since been proven experimentally. Kochen and Specker's results exclude other theoretical models, but until now it was difficult to provide a convincing experimental proof. Following a proposal by the Spaniard Adán Cabello, the Innsbruck scientists have now successfully proven this point and produced unambiguous experimental results. The physicists are supported by the Austrian Science Fund (FWF), the European Union, the Federation of Austrian Industry Tyrol, and the Intelligence Advanced Research Projects Activity (IARPA).
Adapted from materials provided by University of Innsbruck, via EurekAlert!, a service of AAAS.

Wednesday, July 22, 2009

'Lab On A Chip' To Give Growers Real-time Glimpse Into Water Stress In Plants

ScienceDaily (July 22, 2009) — Fifteen years ago, when Alan Lakso first sought to enlist Cornell's nanofabrication laboratory to develop a tiny sensor that would measure water stress in grapevines, the horticultural sciences professor ended up back at the drawing board.
It wasn't until Abraham Stroock, associate professor of chemical engineering, had a breakthrough of his own that Lakso's vision began to take shape. Stroock's lab recently developed a synthetic tree that mimics the flow of water inside plants using a slab of hydrogel with nanometer-scale pores. At last Lakso had access to the technology to move forward.
The device is an embedded microsensor capable of measuring real-time water stress in living plants. In theory, the sensor will help vintners strike the precise balance between drought and overwatering -- both of which diminish the quality of wine grapes.
"To manage for optimum stress," said Lakso, a researcher at the New York State Agricultural Experiment Station in Geneva, "we need to monitor ... exactly what's going on in the vine."
With Vinay Pagay, a graduate student with degrees in computer engineering and viticulture, the team is working at the Cornell Nanofabrication Facility in Ithaca to develop 4-inch-diameter silicon wafer prototypes, each containing approximately 100 microsensors. They have also begun collaborating with Infotonics, a firm in Canandaigua, N.Y., that specializes in microelectromechanical systems (MEMS), to plan commercialization of the sensors. The partnership applies cutting-edge engineering to practical agricultural concerns.
The team hopes to design a sensor that will transmit field readings wirelessly to a central server; the data will then be summarized online for the grower. The concept has already received attention from E. & J. Gallo Winery in California as well as researchers and industry leaders from Australia, Spain and Italy. "It's not just for the big growers," Lakso said. "We hope the micro-manufacturing will provide low-cost sensors for small growers as well."
Looking ahead, the team is pursuing alternative sensors that could enhance research in fields from food science to forestry. They have begun development of a "multi-use sensor" that redirects water flow inside the plant through a shunt. In this case, the sensor could measure the flow of water and mineral nutrients through the plant, in addition to water stress. Pagay described it as "a lab on a chip."
Beyond winemaking, the technology has implications for manufacturing, food processing and electronics. Team member Taryn Bauerle, assistant professor of horticulture, described how such sensors could be implanted throughout trees in a forest ecosystem to measure water use and nutrient flow on a large scale with unprecedented accuracy. "All of these [researchers'] brains are coming together," she said. "There's no limit to where we can take this type of technology."
Adapted from materials provided by Cornell University.

Lighting Revolution Forecast By Top Scientist

ScienceDaily (July 22, 2009) — New developments in a substance which emits brilliant light could lead to a revolution in lighting for the home and office in five years, claims a leading UK materials scientist, Professor Colin Humphreys of Cambridge University. The source of the huge potential he foresees, gallium nitride (GaN), is already used for some lighting applications such as camera flashes, bicycle lights, mobile phones and interior lighting for buses, trains and planes.
But making it possible to use GaN for home and office lighting is the Holy Grail. If achieved, it could reduce the typical electricity consumption for lighting of a developed country by around 75% while delivering major cuts in carbon dioxide emissions from power stations, and preserving fossil fuel reserves.
‘GaN LEDs have a very exciting future,’ says Professor Humphreys. ‘In particular, they are incredibly long-lasting. A GaN LED can burn for 100,000 hours – one hundred times longer than a conventional light bulb. In practice this means it only needs replacing after 60 years of normal household use. Also, unlike the energy-saving compact fluorescent lights now in use, GaN LEDs don’t contain mercury, so disposal is not such an environmental headache.’
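The two lifetime figures quoted are easy to reconcile; spreading 100,000 hours of burn time over 60 years implies a plausible daily usage:

```python
# Quick check of the lifetime claim: 100,000 hours over 60 years
# corresponds to a few hours of use per day.
LIFETIME_HOURS = 100_000
YEARS = 60
hours_per_day = LIFETIME_HOURS / (YEARS * 365)
print(f"implied usage: {hours_per_day:.1f} hours per day")  # ~4.6
```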
But to unlock these benefits, important barriers need to be tackled by scientists. GaN LEDs are too expensive to manufacture for wide-scale deployment in homes and workplaces. And the harsh quality of the light produced is another limiting factor. At the Cambridge Centre for Gallium Nitride, where Professor Humphreys leads the research, a detailed new theory that explains the mystery of why GaN emits light so strongly has recently been developed in collaboration with Professor Phil Dawson of Manchester University.
‘Such understanding is vital to improving GaN lighting’s quality and efficiency,’ says Professor Humphreys. ‘Our centre is also working on an innovative technique for growing GaN on six-inch diameter silicon wafers, rather than the sapphire wafers used to date. This could deliver a tenfold reduction in manufacturing costs and so help GaN lighting penetrate new markets.’ Another of the centre’s projects is investigating how GaN lighting could be made to mimic sunlight, which could have important benefits for sufferers of Seasonal Affective Disorder (SAD).
‘GaN lighting should start making its mark in homes and offices within about five years,’ predicts Professor Humphreys. ‘That won’t just be good news for the environment – it will also benefit consumers in terms of convenience, electricity bills and quality of life.’
Looking further ahead, the possibilities for GaN light appear wide-ranging. Currently, GaN LEDs are phosphor-coated to transform the light from blue into white. But there could be scope to remove the coating and incorporate mini LEDs, each producing a different colour, in the overall ‘light bulb’. Together the mini LEDs would produce white light, but people in the home or office could alter the precise balance, for example to a bluish light, to suit their mood. ‘This and other applications, for example in healthcare for detecting tumours, and water treatment for developing countries, might be achievable in 10 years,’ says Professor Humphreys.
Adapted from materials provided by AlphaGalileo Foundation, via AlphaGalileo.

Testing Relativity, Black Holes And Strange Attractors In The Laboratory

ScienceDaily (July 22, 2009) — Even Albert Einstein might have been impressed. His theory of general relativity, which describes how the gravity of a massive object, such as a star, can curve space and time, has been successfully used to predict such astronomical observations as the bending of starlight by the sun, small shifts in the orbit of the planet Mercury and the phenomenon known as gravitational lensing. Now, however, it may soon be possible to study the effects of general relativity in bench-top laboratory experiments.
Xiang Zhang, a faculty scientist with the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and a professor at the University of California, Berkeley, led a study which determined that the interactions of light and matter with spacetime, as predicted by general relativity, can be studied using the new breed of artificial optical materials that feature extraordinary abilities to bend light and other forms of electromagnetic radiation.
"We propose a link between the newly emerged field of artificial optical materials to that of celestial mechanics, thus opening a new possibility to investigate astronomical phenomena in a table-top laboratory setting," says Zhang. "We have introduced a new class of specially designed optical media that can mimic the periodic, quasi-periodic and chaotic motions observed in celestial objects that have been subjected to complex gravitational fields."
A paper describing this work is now available on-line in the journal Nature Physics. The paper is titled: "Mimicking Celestial Mechanics in Metamaterials." Co-authoring it with Zhang were his post-doctoral students Dentcho Genov and Shuang Zhang.
Zhang, a principal investigator with Berkeley Lab's Materials Sciences Division and director of UC Berkeley's Nano-scale Science and Engineering Center, has been one of the pioneers in the creation of artificial optical materials. Last year, he and his research group made headlines when they fashioned unique metamaterials, composites of metals and dielectrics, that were able to bend light backwards, a property known as negative refraction that is unprecedented in nature. More recently, he and his group fashioned a "carpet cloak" from nanostructured silicon that concealed the presence of objects placed under it from optical detection. These efforts not only suggested that true invisibility materials are within reach, Zhang said, but also represented a major step towards transformation optics that would "open the door to manipulating light at will."
Now he and his research group have demonstrated that a new class of metamaterials called "continuous-index photon traps" or CIPTs can serve as broadband and radiation-free "perfect" optical cavities. As such, CIPTs can control, slow and trap light in a manner similar to such celestial phenomena as black holes, strange attractors and gravitational lenses. This equivalence between the motion of celestial bodies in curved spacetime and the propagation of light in optical metamaterials engineered in a laboratory is referred to as the "optical-mechanical analogy."
Zhang says that such specially designed metamaterials can be valuable tools for studying the motion of massive celestial bodies in gravitational potentials under a controlled laboratory environment. Observations of such celestial phenomena by astronomers can sometimes take a century of waiting.
"If we twist our optical metamaterial space into new coordinates, the light that travels in straight lines in real space will be curved in the twisted space of our transformational optics," says Zhang. "This is very similar to what happens to starlight when it moves through a gravitational potential and experiences curved spacetime. This analogue between classical electromagnetism and general relativity may enable us to use optical metamaterials to study relativity phenomena such as gravitational lensing."
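As a rough illustration of this analogy, the toy ray trace below integrates the standard ray equation d/ds(n dr/ds) = ∇n through a smooth, radially peaked refractive-index profile. The profile and launch parameters are assumptions for the sketch, not the paper's actual metamaterial design; the point is simply that a graded index deflects a passing ray toward the high-index region, much as a gravitational potential bends starlight.

```python
import math

def n(x, y, n0=1.0, dn=0.2, sigma=0.5):
    # Radially symmetric index profile peaking at the origin -- a toy
    # stand-in for the graded media of transformation optics.
    return n0 + dn * math.exp(-(x * x + y * y) / (2 * sigma * sigma))

def grad_n(x, y, h=1e-6):
    # Numerical gradient of the index profile.
    gx = (n(x + h, y) - n(x - h, y)) / (2 * h)
    gy = (n(x, y + h) - n(x, y - h)) / (2 * h)
    return gx, gy

def trace_ray(x0, y0, ds=1e-3, steps=8000):
    # Euler-integrate the ray equation d/ds (n * dr/ds) = grad n,
    # where p = n * (unit tangent).
    x, y = x0, y0
    px, py = n(x0, y0), 0.0   # launch the ray travelling in +x
    for _ in range(steps):
        gx, gy = grad_n(x, y)
        px += gx * ds
        py += gy * ds
        norm = math.hypot(px, py)
        x += (px / norm) * ds
        y += (py / norm) * ds
    return x, y

# A ray launched at height y = 0.8 is pulled toward the high-index
# region near the origin -- the optical analogue of light deflected
# by a massive body.
xf, yf = trace_ray(-4.0, 0.8)
print(round(xf, 2), round(yf, 2))
```

The same machinery with a different index profile would mimic a different "gravitational potential", which is the flexibility the metamaterial approach promises.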
In their demonstration studies, the team used a composite of air and the dielectric gallium indium arsenide phosphide (GaInAsP). This material operates in the infrared spectral range and features a high refractive index with low absorption.
In their paper, Zhang and his coauthors cite as a particularly intriguing prospect for applying artificial optical materials to the optical-mechanical analogy the study of the phenomenon known as chaos. The onset of chaos in dynamic systems is one of the most fascinating problems in science and is observed in areas as diverse as molecular motion, population dynamics and optics. In particular, a planet around a star can undergo chaotic motion if a perturbation, such as another large planet, is present. However, owing to the large spatial distances between the celestial bodies, and the long periods involved in the study of their dynamics, the direct observation of chaotic planetary motion has been a challenge. The use of the optical-mechanical analogy may enable such studies to be accomplished in a bench-top laboratory setting on demand.
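The sensitivity to initial conditions that makes chaotic planetary motion so hard to observe directly can be demonstrated with any chaotic system. The logistic map below is a generic textbook stand-in (not the optical system from the paper): two trajectories launched one part in a million apart become completely different within a few dozen iterations.

```python
# Two trajectories of the logistic map x -> r*x*(1-x) at r = 4, a
# parameter value where the map is fully chaotic.
def logistic_orbit(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)   # perturbed by one part in a million

# After 40 iterations the two orbits bear no resemblance to each
# other: the largest gap over the final ten steps is of order one.
gap = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
print(round(gap, 3))
```

In a bench-top metamaterial analogue, a laser can be relaunched with slightly different conditions on demand, which is precisely what astronomers cannot do with planets.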
"Unlike astronomers, we will not have to wait 100 years to get experimental results," Zhang says.
This research was supported by the U.S. Army Research Office and by the National Science Foundation, which funds the UC Berkeley Nano-scale Science and Engineering Center.
Adapted from materials provided by DOE/Lawrence Berkeley National Laboratory.

New Blue Light Nanocrystals Could Help Mitigate Global Warming

ScienceDaily (July 22, 2009) — Berkeley Lab researchers have produced non-toxic magnesium oxide nanocrystals that efficiently emit blue light and could also play a role in long-term storage of carbon dioxide, a potential means of tempering the effects of global warming.
In its bulk form, magnesium oxide is a cheap, white mineral used in applications ranging from insulating cables and crucibles to preventing sweaty-palmed rock climbers from losing their grip. Using an organometallic chemical synthesis route, scientists at the Molecular Foundry have created nanocrystals of magnesium oxide whose size can be adjusted within just a few nanometers. And unlike their bulk counterpart, the nanocrystals glow blue when exposed to ultraviolet light.
Current routes for generating these alkaline earth metal oxide nanocrystals require processing at high temperatures, which causes uncontrolled growth or fusing of particles to one another, not a desirable outcome when the properties you seek are size-dependent. On the other hand, vapor phase techniques, which provide size precision, are time- and cost-intensive, and leave the nanocrystals attached to a substrate.
“We’ve discovered a fundamentally new, unconventional mechanism for nicely controlling the size of these nanocrystals, and realized we had an intriguing and surprising candidate for optical applications,” said Delia Milliron, Facility Director of the Inorganic Nanostructures Facility at Berkeley Lab’s nanoscience research center, the Molecular Foundry. “This efficient, bright blue luminescence could be an inexpensive, attractive alternative in applications such as bio-imaging or solid-state lighting.”
Unlike conventional incandescent or fluorescent bulbs, solid-state lighting makes use of light-emitting semiconductor materials; in general, red, green and blue emitting materials are combined to create white light. However, efficient blue light emitters are difficult to produce, suggesting these magnesium oxide nanocrystals could be a bright candidate for lighting that consumes less energy and has a longer lifespan.
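For context, a photon's energy follows from E = hc/λ, so blue photons carry more energy than red ones. The 460 nm wavelength below is an assumed, typical value for blue emission; the article does not state the nanocrystals' exact emission wavelength.

```python
# Convert a wavelength to photon energy via E = h*c / lambda.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

# A typical blue photon (~460 nm, assumed) carries about 2.7 eV,
# noticeably more than a red photon at 650 nm (~1.9 eV).
print(round(photon_energy_ev(460), 2))
print(round(photon_energy_ev(650), 2))
```

The larger energy gap needed for blue emission is one reason efficient blue emitters are harder to engineer than red or green ones.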
These minute materials do more than glow, however. Along with their promising optical behavior, these magnesium oxide nanocrystals will be a subject of study in an entirely different field of research: Berkeley Lab’s Energy Frontier Research Center (EFRC) for Nanoscale Control of Geologic CO2, designed to “establish the scientific foundations for the geological storage of carbon dioxide.”
Experts say carbon dioxide capture and storage will be vital to achieving significant cuts in greenhouse gas emissions, but the success of this technology hinges on sealing geochemical reservoirs deep below the earth’s surface without allowing gases or fluids to escape. If properly stored, the captured carbon dioxide pumped underground forms carbonate minerals with the surrounding rock by reacting with nanoparticles of magnesium oxide and other mineral oxides.
“These nanocrystals will serve as a test system for modeling the kinetics of dissolution and mineralization in a simulated fluid-rock reservoir, allowing us to probe a key pathway in carbon dioxide sequestration,” said Jeff Urban, a staff scientist in the Inorganic Nanostructures Facility at the Molecular Foundry who is also a member of the EFRC research team. “The geological minerals that fix magnesium into a stable carbonate are compositionally complex, but our nanocrystals will provide a simple model to mimic this intricate process.”
Hoi Ri Moon, a post-doctoral researcher at the Foundry working with Milliron and Urban, noted her team’s direct synthesis method could also be helpful for already-established purposes. “As a user facility that provides support to nanoscience researchers around the world, we would like to pursue studies with other scientists who could use our nanocrystals as ‘feedstock’ for catalysis, another application for which magnesium oxide thin films are commonly used,” said Moon.
“Size-controlled synthesis and optical properties of monodisperse colloidal magnesium oxide nanocrystals,” by Hoi Ri Moon, Jeffrey J. Urban and Delia J. Milliron, appears in Angewandte Chemie International Edition and is available online.
Work at the Molecular Foundry was supported by the Office of Basic Energy Sciences within the DOE Office of Science.
Adapted from materials provided by DOE/Lawrence Berkeley National Laboratory.

martedì 21 luglio 2009

CERN (LHC): Preshower... what is it?

One way the elusive Higgs boson might decay is into high-energy photons and detecting them is one of the ECAL’s main jobs. However, short-lived particles called neutral pions, also produced in collisions, can inadvertently mimic high-energy photons when they decay into two closely-spaced lower energy photons that the ECAL picks up together.
In the endcap regions, where the angle between the two emerging photons from the decay of a neutral pion is likely to be small enough to cause this problem, a preshower detector sits in front of the ECAL to prevent such false signals. The preshower has a much finer granularity than the ECAL with detector strips 2 mm wide, compared to the 3 cm-wide ECAL crystals, and can see each of the pion-produced particles as a separate photon.
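A quick kinematics sketch shows why the finer granularity matters. For a neutral pion of energy E, the minimum opening angle between its two decay photons is θ = 2·arcsin(m/E). The pion energy and flight distance below are illustrative assumptions, not figures from the text.

```python
import math

M_PI0 = 0.1350  # neutral pion mass in GeV (approximate)

def min_opening_angle(e_pion_gev):
    # Symmetric decay gives the minimum angle between the two photons:
    # theta_min = 2 * arcsin(m / E)
    return 2 * math.asin(M_PI0 / e_pion_gev)

# Illustrative case: a 50 GeV pion decaying about 3 m from the
# detector face.
theta = min_opening_angle(50.0)
separation_cm = 3.0 * theta * 100  # small-angle separation at 3 m, in cm

# ~1.6 cm: several 2 mm preshower strips apart, yet inside a single
# 3 cm ECAL crystal -- so the ECAL alone would see one "photon".
print(round(separation_cm, 2))
```

Note that the higher the pion energy, the smaller the opening angle, which is why the fake-photon problem is worst for energetic pions.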
The preshower is made of two planes of lead followed by silicon sensors, similar to those used in the tracker. When a photon passes through the lead layer it causes an electromagnetic shower, containing electron-positron pairs, which the silicon sensors then detect and measure. From this we get a measure of the photon’s energy, whilst having two detector layers gives us two measurements, allowing us to pinpoint the particle’s position.
When seemingly high-energy photons are then found in the ECAL we can extrapolate their paths back to the centre of the collision and look for their “hits” in the preshower along the way, adding the energy deposited there to the total energy from the ECAL, and deducing if they really were individual high-energy photons or photon pairs.
Each endcap preshower uses 8 square metres of silicon (a material chosen for its accuracy, compactness, radiation tolerance, and ease of handling). The silicon sensors, each measuring about 6.3cm x 6.3cm x 0.3mm and divided into 32 strips, are arranged in a grid in the endcaps to form an approximately circular shape covering most of the area of the crystal endcap. For optimum performance during the lifetime of the experiment (at least ten years), as in the tracker, the silicon detectors must be kept at a temperature of between -10°C and -15°C. However, the nearby ECAL is very sensitive and must be kept within precisely 0.1°C of its (higher) optimum temperature. The preshower must therefore be cold on the inside but warm on the outside, achieved using both heating and cooling systems.
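The sensor figures quoted above imply a sensor count we can check with simple arithmetic. This ignores gaps and the circular layout, so it is only a rough estimate.

```python
# How many 6.3 cm x 6.3 cm sensors does it take to tile 8 square
# metres of silicon per endcap?
sensor_area_m2 = 0.063 * 0.063
sensors_per_endcap = 8.0 / sensor_area_m2
print(int(sensors_per_endcap))   # roughly 2000 sensors per endcap

# With 32 strips per sensor, that is on the order of 64,000
# readout strips per endcap.
strips_per_endcap = sensors_per_endcap * 32
print(int(strips_per_endcap))
```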
The complete preshower system forms a disc, about 2.5m in circumference with a 50cm diameter hole in the middle (where the beam pipe passes through). This disc is only 20cm thick, yet manages to squeeze in two layers of lead, two layers of sensors (and their electronics), as well as the cooling and heating systems - another example of the “compact” nature of CMS.

domenica 19 luglio 2009

Molecules containing an odd number of electrons are much more conductive at low bias voltages.

ScienceDaily (July 19, 2009) — Researchers from Graz University of Technology, Humboldt University in Berlin, M.I.T., Montan University in Leoben and Georgia Institute of Technology report an important advance in the understanding of electrical conduction through single molecules.
Minimum size, maximum efficiency: The use of molecules as elements in electronic circuits shows great potential. One of the central challenges up until now has been that most molecules only start to conduct once a large voltage has been applied. An international research team with the participation of the Graz University of Technology has shown that molecules containing an odd number of electrons are much more conductive at low bias voltages. These fundamental findings in the highly dynamic research field of nanotechnology open up a diverse array of possible applications: More efficient microchips and components with considerably increased storage densities are conceivable.
One electron instead of two: Most stable molecules have a closed-shell configuration with an even number of electrons. Molecules with an odd number of electrons tend to be harder for chemists to synthesize, but they conduct much better at low bias voltages. Although using an odd rather than an even number of electrons may seem simple, it is a fundamental realization in the field of nanotechnology, because as a result metal elements in molecular electronic circuits can now be replaced by single molecules. “This brings us a considerable step closer to the ultimate miniaturization of electronic components”, explains Egbert Zojer from the Institute for Solid State Physics of the Graz University of Technology.
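Single-molecule conduction is conventionally discussed in the Landauer picture, in which a junction's conductance is the conductance quantum G0 = 2e²/h times a transmission probability. Both the framework and the 1% transmission figure below are illustrative assumptions for scale, not numbers from the Nano Letters paper.

```python
# Landauer picture: G = G0 * T, with G0 = 2e^2/h the conductance
# quantum and T the electron transmission probability.
E = 1.602176634e-19  # elementary charge, C
H = 6.62607015e-34   # Planck constant, J*s

G0 = 2 * E**2 / H            # conductance quantum, in siemens
print(round(G0 * 1e6, 2))    # about 77.48 microsiemens

# A hypothetical molecule transmitting 1% of incident electrons:
g_molecule = 0.01 * G0
resistance_megaohm = 1 / g_molecule / 1e6
print(round(resistance_megaohm, 2))  # about 1.29 megaohm
```

On this scale, "conducting well at low bias" means pushing the transmission toward 1, i.e. toward a resistance of only ~12.9 kilohm for a single molecule.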
Molecules instead of metal
The motivation for this basic research is the vision of circuits that consist of only a few molecules. “If it is possible to get molecular components to completely assume the functions of a circuit’s various elements, this would open up a wide array of possible applications, the full potential of which will only become apparent over time. In our work we show a path to realizing the highly electrically conductive elements”, says Zojer of the discovery's far-reaching consequences.
Specific new perspectives are opened up in the field of molecular electronics, sensor technology or the development of bio-compatible interfaces between inorganic and organic materials: The latter refers to the contact with biological systems such as human cells, for instance, which can be connected to electronic circuits in a bio-compatible fashion via the conductive molecules.
Journal reference:
Georg Heimel, Egbert Zojer, Lorenz Romaner, Jean-Luc Brédas and Francesco Stellacci. Doping Molecular Wires. Nano Letters, Vol.9, Issue 7 (2009)
Adapted from materials provided by TU Graz.

venerdì 17 luglio 2009

Why Does Water Expand When It Cools? A New Explanation

Most of us, when we take our first science classes, learn that when things cool down, they shrink. (When they heat up, we learn, they usually expand.) However, water seems to be the exception to the rule: below about 4 °C, instead of shrinking as it cools, this common liquid actually expands. In order to explain this phenomenon, some scientists have adopted the “mixture” model, which holds that low-density, ice-like components come to dominate upon cooling. Masakazu Matsumoto, at the Nagoya University Research Center for Materials Science in Japan, has a different idea. He describes his findings in Physical Review Letters: “Why Does Water Expand When It Cools?”
“Theoreticians often describe how an ice-like local structure emerges in super-cooled liquid water upon cooling, and how the growth of such heterogeneous low-density domains causes the density anomalies,” Matsumoto says. “Such an explanation is easy to imagine and looks plausible. Experimentalists tend to believe the theoretician’s beautiful and simple model, and interpret their data based on it.”
However, the heterogeneity required by this mixture model has never been convincingly demonstrated experimentally. Matsumoto set out to model super-cooled water and see if he could discover the mechanism behind the expansion of water under conditions that should make it shrink. In a previous work (M. Matsumoto, A. Baba, and I. Ohmine, J. Chem. Phys. 127, 134504 (2007)), Matsumoto offered a new method of analyzing the hydrogen bonds found in super-cooled liquid water. “I found that the structure of supercooled water can be tessellated into a variety of polyhedron-like structures, vitrites,” he says. “I thought the issue would be a good chance to test my method.”
“Water is a network-forming matter. You can imagine the structure of the network as a kitchen sponge,” Matsumoto continues. “The sponge structure is originally a kind of foam, but the membranes are lost and only the beams - the bonds - remain. In both the network of water and the kitchen sponge, four bonds meet at a point, or node, to form a three-dimensionally connected random network. As Plateau pointed out in the 19th century, the four beams of a foam cross at a node at the regular tetrahedral angle - Maraldi's angle - similar to water’s hydrogen bond network.”
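Maraldi's angle, mentioned above, can be computed directly: it is the angle between bonds radiating from a node to the vertices of a regular tetrahedron, arccos(-1/3).

```python
import math

# The regular tetrahedral angle (Maraldi's angle) referenced in the
# text: the angle between two bonds meeting at a tetrahedral node.
theta = math.degrees(math.acos(-1.0 / 3.0))
print(round(theta, 2))  # 109.47 degrees
```

It is deviations of the hydrogen-bond angles from this 109.47° value that Matsumoto identifies as the source of thermal contraction.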
Matsumoto used computer simulation to look at three ways to change the volume of the foam cells: extension of the bonds, a change in the containing angle between the bonds, and a change in network topology. “By discriminating the three contributions, the mechanism became very clear. One contributes to thermal expansion, another one contributes to thermal contraction, and the last one does not. Density maximum is a result of these competing contributions,” he explains.
“I found that the thermal volume contraction is due to the deviation of bond angles from the regular tetrahedral angle,” Matsumoto says. He also applied his former idea of vitrites to classify local structures. “Any kind of local structure shrinks when bond angle is distorted from the regular tetrahedral angle. In other words, local structural variety is not the principal factor contributing to the thermal contraction. Water shrinks homogeneously by thermal angular distortion, regardless of local structural variety.”
Right now, though, reproducing the results of Matsumoto’s simulation experimentally is a rather difficult task. “It is still very difficult to observe microscopic heterogeneity by experiments.” He hopes, though, that his simulation will at least get theoreticians and experimentalists thinking about alternatives to the idea of an ice-like, low-density domain growing in liquid water through cooling. “My finding will affect the interpretation of experimental data on super-cooled water, as well as water in the vicinity of walls, solutes and biomolecules.”
Moving forward, Matsumoto hopes to use computer simulation to tackle water polyamorphism. “There are several materials which exhibit liquid-liquid coexistence. The most apparent case is observed in phosphorus, and tetrahedral network materials such as water, silicon, silica and germanium are supposed to be the case, too,” he says. “By computer simulations, many people have also reproduced the liquid-liquid coexistence. However, nobody has ever explained how and why two liquid phases of a single component can share an interface.”
It appears that water is much more interesting than many of us ever could have imagined.
More information:
Masakazu Matsumoto, “Why Does Water Expand When It Cools?” Physical Review Letters (2009).
M. Matsumoto, A. Baba, and I. Ohmine, “Network Motif of Water.” Journal of Chemical Physics 127, 134504 (2007).

New Method To Encapsulate Substances In Nanospheres

ScienceDaily (July 17, 2009) — A group of researchers at the Catalan Institute of Nanoscience and Nanotechnology (CIN2), belonging to the Catalan Institute of Nanotechnology and the Spanish National Research Council (CSIC) located at the UAB Research Park, together with the UAB Department of Chemistry, has developed and patented a method for obtaining minute organometallic capsules ranging from micrometric to nanometric sizes. These capsules hold substances in nanospheres that possess intrinsic metal properties, such as magnetism, fluorescence or conductivity, which could be useful in applications such as radiodiagnostics, electronics or sensors.
Encapsulating substances and then controlling when and how much is released is one of the most recently developed strategies in the fields of chemistry, medicine, material science and environmental technologies. This strategy pursues the idea of the "magic bullet", which has been discussed for a long time, especially in the field of medicine: being able to transport therapeutic substances to the specific place where they are needed.
Until now this technique was possible with liposomes (commonly used in cosmetics), dendrimers (polymeric macromolecules) or polymeric organic particles. In these cases, the capsules are formed by organic molecules. However, encapsulating substances within metal-containing particles had not been achieved until now.
And that is precisely what the CIN2 and UAB researchers have now achieved. They have developed and patented a method to obtain minute organometallic capsules (i.e. formed by a partially organic, partially metallic material) ranging from micrometric to nanometric sizes. The incorporation of metal means the nanospheres possess intrinsic metal properties, such as magnetism, fluorescence or conductivity, which can be useful in medical applications such as radiodiagnostics, as well as in electronics or sensors.
The authors of this method are Daniel Maspoch, Inhar Imaz, and Daniel Ruiz-Molina, researchers of the NanoStructured Functional Materials (NanoSFun) group at CIN2, and Jordi Hernando, researcher at the UAB Department of Chemistry. Their names are all included in the article which will be published in the journal Angewandte Chemie International Edition, and which can be found online as one of the journal's highlights.
Efficient and easily scalable method
The method allows for the creation of micro- and nanospheres by joining two units: an organic or binding molecule, which acts as an "adhesive", and a metal ion. Generally, the organic molecule shares an electron pair with a metal ion, which gives the two a tendency to join. Described simply, the method consists of mixing a solution made up of metal ions, organic molecules and the active principle which is to be encapsulated. When the solution is shaken, either mechanically or with ultrasound, the metal ions join the organic molecules to form spheres, capturing within them the active principle present in the solution. The system is therefore relatively simple and does not present any particular problems for use at an industrial level.
"This simplicity however does not mean that it cannot be used for a variety of purposes. Depending on the composition of the mixture, its concentration, how fast and how long it is shaken, and the speed at which each of the components is added, the size of the nanospheres can be varied, as can characteristics such as the fluorescence or porosity. All these factors can be controlled and varied depending on which application is needed. Thus, porosity is relevant in nanospheres which are programmed to release the substance they contain through the capsule's pores," scientists explain.
In many other cases, however, the substance is released during the degradation of the nanosphere, which "disintegrates" at a specific moment (which can also be programmed) and liberates its contents. The units forming the nanosphere (metal ion and organic molecule) can also be changed depending on the type of application desired. Thus, a hypothetical application could be a sphere containing gadolinium, which would enable it to be used as a contrast agent in radiodiagnostics while at the same time transporting the active principle directly to the cells which need to be treated, thanks to the incorporation of an antibody which would detect target cells.
The possibilities are almost unlimited, and the selection of molecules will depend not only on the application but also on the stability expected of the sphere. In the article the researchers detail the results achieved with spheres formed with zinc which, according to laboratory tests, remain stable when stored in alcohol for five or six months. This period is reduced to a few days when they are stored in water or blood. Scientists explain that they are nevertheless working on making them more stable.
The advantage of encapsulation over conventional drug administration is that it limits side effects by selectively releasing the drug in the specific area where the treatment is needed. Therefore, the amount of drug required is reduced and the necessary levels of the drug are maintained for a longer period of time. This strategy is already being applied to treatments for cancer and lung diseases. It is calculated that in the United States alone, this market was worth approximately 117 billion dollars in the year 2000, a figure which is expected to rise to 366 billion dollars in 2010.
Adapted from materials provided by Catalan Institute of Nanoscience and Nanotechnology.

Solar Power: New SunCatcher Power System Ready For Commercial Production In 2010

ScienceDaily (July 17, 2009) — Stirling Energy Systems (SES) and Tessera Solar recently unveiled four newly designed solar power collection dishes at Sandia National Laboratories’ National Solar Thermal Test Facility (NSTTF). Called SunCatchers™, the new dishes have a refined design that will be used in commercial-scale deployments of the units beginning in 2010.
“The four new dishes are the next-generation model of the original SunCatcher system. Six first-generation SunCatchers built over the past several years at the NSTTF have been producing up to 150 kW [kilowatts] of grid-ready electrical power during the day,” says Chuck Andraka, the lead Sandia project engineer. “Every part of the new system has been upgraded to allow for a high rate of production and cost reduction.”
Sandia’s concentrating solar-thermal power (CSP) team has been working closely with SES over the past five years to improve the system design and operation.
The modular CSP SunCatcher uses precision mirrors attached to a parabolic dish to focus the sun’s rays onto a receiver, which transmits the heat to a Stirling engine. The engine is a sealed system filled with hydrogen. As the gas heats and cools, its pressure rises and falls. The change in pressure drives the piston inside the engine, producing mechanical power, which in turn drives a generator and makes electricity.
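The heat-to-work conversion just described can be sketched with the ideal Stirling cycle: two isothermal strokes linked by perfect regeneration, whose efficiency is bounded by the Carnot limit 1 - Tc/Th. The temperatures, gas quantity and compression ratio below are illustrative assumptions; the article gives no operating figures.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def carnot_efficiency(t_hot, t_cold):
    # Upper bound on any heat engine operating between these
    # temperatures, which an ideal Stirling cycle attains.
    return 1 - t_cold / t_hot

def stirling_work_per_cycle(n_mol, t_hot, t_cold, compression_ratio):
    # Net work of an ideal Stirling cycle: isothermal expansion at
    # t_hot, isothermal compression at t_cold, with perfect
    # regeneration between them: W = n*R*(Th - Tc)*ln(Vmax/Vmin)
    return n_mol * R * (t_hot - t_cold) * math.log(compression_ratio)

# Illustrative numbers only: a 720 C hot end and 50 C cold end.
print(round(carnot_efficiency(993.0, 323.0), 3))   # efficiency bound
print(round(stirling_work_per_cycle(0.1, 993.0, 323.0, 2.0), 1))  # J/cycle
```

Multiplying the work per cycle by the engine's cycle rate gives the mechanical power available to drive the generator.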
The new SunCatcher is about 5,000 pounds lighter than the original, is round instead of rectangular to allow for more efficient use of steel, has improved optics, and consists of 60 percent fewer engine parts. The revised design also has fewer mirrors - 40 instead of 80. The reflective mirrors are formed into a parabolic shape from stamped sheet metal, similar to the hood of a car, using automobile manufacturing techniques. The improvements will result in high-volume production, cost reductions, and easier maintenance.
Among Sandia’s contributions to the new design was development of a tool to determine how well the mirrors work in less than 10 seconds, something that took the earlier design one hour.
“The new design of the SunCatcher represents more than a decade of innovative engineering and validation testing, making it ready for commercialization,” says Steve Cowman, Stirling Energy Systems CEO. “By utilizing the automotive supply chain to manufacture the SunCatcher, we’re leveraging the talents of an industry that has refined high-volume production through an assembly line process. More than 90 percent of the SunCatcher components will be manufactured in North America.”
In addition to improved manufacturability and easy maintenance, the new SunCatcher minimizes both cost and land use and has numerous environmental advantages, Andraka says.
“They have the lowest water use of any thermal electric generating technology, require minimal grading and trenching, require no excavation for foundations, and will not produce greenhouse gas emissions while converting sunlight into electricity,” he says.
Tessera Solar, the developer and operator of large-scale solar projects using the SunCatcher technology and sister company of SES, is building a 60-unit plant generating 1.5 MW (megawatts) by the end of the year either in Arizona or California. One megawatt powers about 800 homes. The proprietary solar dish technology will then be deployed to develop two of the world’s largest solar generating plants in Southern California with San Diego Gas & Electric in the Imperial Valley and Southern California Edison in the Mojave Desert, in addition to the recently announced project with CPS Energy in West Texas. The projects are expected to produce 1,000 MW by the end of 2012.
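The plant figures quoted above can be sanity-checked with simple arithmetic: 60 dishes producing 1.5 MW implies 25 kW per dish, and at the article's 800-homes-per-megawatt rule of thumb the plant would supply about 1,200 homes.

```python
# Back-of-envelope check of the figures quoted in the article.
plant_mw = 1.5
units = 60
kw_per_dish = plant_mw * 1000 / units
print(kw_per_dish)   # 25.0 kW per SunCatcher dish

homes_per_mw = 800   # the article's rule of thumb
homes_served = int(plant_mw * homes_per_mw)
print(homes_served)  # about 1200 homes
```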
Last year one of the original SunCatchers set a new solar-to-grid system conversion efficiency record by achieving a 31.25 percent net efficiency rate, toppling the old 1984 record of 29.4 percent.
Adapted from materials provided by Sandia National Laboratories.

Controlling The Electronic Surface Properties Of A Material

ScienceDaily (July 16, 2009) — It's commonly accepted that the electrical resistance of a given material, like its density or color, is a fixed property that cannot be adjusted. However, Dr Meike Stöhr and her collaborators have now succeeded in developing a new method to selectively tune surface properties such as resistance.
The interdisciplinary team of physicists and chemists has developed a substance which, after heating on a copper surface, forms a two-dimensional network with nanometer-sized pores. The interaction of this network with the electron gas on the metal surface leads to the following effect: the electrons underneath the network are pushed into the pores to form small bunches of electrons called quantum dots.
Great potential for materials research
By varying parameters such as the height and diameter of the pores, it becomes possible to selectively tune the properties of the material. Further possibilities arise from the ability to fill the pores with different molecules. This allows direct access to the properties of the material which depend on the electronic structure, such as conductivity, reflectivity and surface catalytic properties. This will lead to the emergence of new materials with adjustable electronic properties.
The underlying physical mechanisms can best be understood by comparing the electron gas with waves in water. Waves on a water surface are reflected by any obstacle they meet. If the obstacle on the surface resembles a honeycomb structure, standing waves are set up in each cell of the honeycomb. This leads to a wave pattern reproducing the honeycomb structure at the same size and shape. “Applying this analogy to the electron gas, we see that the interaction of the network structure with the electron gas on the metal surface confines the electrons, giving rise to a characteristic electron wave structure in the new material,” says Stöhr.
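The size-dependence of those confined electron states can be illustrated with the simplest textbook model, an infinite square well, where the ground-state energy scales as 1/L². Real surface-state quantum dots are more subtle, so this is only a sketch of the scaling, with pore sizes assumed for illustration.

```python
# Ground-state energy of an electron confined in an infinite 1D well
# of width L: E_1 = h^2 / (8 m L^2). A minimal stand-in for the
# quantum dots formed in the network's pores.
H = 6.62607015e-34     # Planck constant, J*s
ME = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19   # joules per electronvolt

def ground_state_ev(l_nm):
    l = l_nm * 1e-9
    return H**2 / (8 * ME * l**2) / EV

# Shrinking the pore raises the confinement energy as 1/L^2:
print(round(ground_state_ev(2.0), 3))  # ~0.094 eV for a 2 nm well
print(round(ground_state_ev(1.0), 3))  # ~0.376 eV for a 1 nm well
```

This 1/L² scaling is why tuning the pore diameter gives direct control over the electronic levels, and hence over properties that depend on them.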
These pore networks are good candidates for new metamaterials. These are man-made materials which, due to their periodic architecture, have specific optical and electronic properties not found in nature. These properties can be tuned by changing the properties of their component materials. In the case of pore networks, it is the electronic surface properties which can be tuned by careful selection of the nano-pores.
The University of Basel and the Paul Scherrer Institute are long-term partners of the Swiss Nanoscience Institute (SNI), which is co-financed by the Canton of Aargau. The SNI also includes the Nationaler Forschungsschwerpunkt Nanowissenschaften, founded in 2001, and the Argovia-Netzwerk, founded in 2006. A key partner in this project was the Swiss Light Source of the Paul Scherrer Institute.
Journal reference:
Jorge Lobo-Checa, Manfred Matena, Kathrin Müller, Jan Hugo Dil, Fabian Meier, Lutz H. Gade, Thomas A. Jung, Meike Stöhr. Band Formation from Coupled Quantum Dots Formed by a Nanoporous Network on a Copper Surface. Science, 2009; 325 (5938): 300 DOI: 10.1126/science.1175141
Adapted from materials provided by Paul Scherrer Institut (PSI), via AlphaGalileo.

giovedì 16 luglio 2009

Anton Zeilinger: From Quantum Puzzles to Quantum Information Technology

The conceptual challenges raised by quantum physics have given rise to a number of experiments on individual quantum particles which have now reached an advanced experimental stage. Experiments with entangled photons not only confirm the nonlocality of Nature, they have recently given rise to new concepts in quantum information. Of these, quantum teleportation and quantum cryptography have left the shielded environment of the laboratory and are now feasible over distances of many kilometers. In the future, such experiments might even involve photon sources on satellites. The most recent concept of one-way quantum computation realized with entangled photons implements a completely new paradigm in which the computation, instead of proceeding via operations on some input state, is performed through successive measurements on a sufficiently complex entangled state. The technological realization of future quantum computation systems hinges on whether decoherence can be overcome. Detailed studies with macromolecules indicate that decoherence might not be as severe as often feared.
Photons have various significant advantages. They can easily be prepared in a variety of different quantum states, including entangled ones, with very high purity. Photonic states can also easily be manipulated. Furthermore, photons are the only type of qubit that can be transported over kilometer distances, so photons have become the backbone of quantum communication protocols. Distances of the order of 100 kilometers have been possible so far, and quantum communication via satellites appears to be technically feasible in principle. The use of photonic qubits as the main information carrier in quantum computers has thus far been very limited, essentially because of the unavailability of significant nonlinearities at the single-photon level. This has changed because of (a) the identification of an effective nonlinearity due to the measurement process and (b) the observation that the randomness of the individual quantum event can be overcome by active feed-forward in a cluster-state quantum system. All-photonic systems, where both communication and computation are performed by photonic qubits, would be very desirable, as the transfer of quantum information between different physical implementations of qubits would then not be necessary. I will review some recent results and possibilities for the future.
The quantum world is full of paradoxes, of which the most well known is Schrödinger's cat. There have been a number of attempts in the history of quantum physics to somehow bypass its conceptual problems, witness for example Albert Einstein's position. Not least because all these attempts have turned out not to be very fruitful, the only productive approach is to accept quantum phenomena and ask what the message of the quantum really is. John Archibald Wheeler formulated this in his far-reaching questions. It turns out that, very naturally, the referent of quantum physics is not reality per se but, as Niels Bohr said, "what can be said about the world", or in modern words, information. Thus, if information is the most fundamental notion in quantum physics, a very natural understanding of phenomena like quantum decoherence or quantum teleportation emerges. Quantum entanglement is then nothing else than the property of subsystems of a composite quantum system to carry information jointly, independent of space and time, and the randomness of individual quantum events is a consequence of the finiteness of information. The quantum is then a reflection of the fact that all we can do is make statements about the world, expressed in a discrete number of bits. The universe is participatory at least in the sense that the experimentalist, by choosing the measurement apparatus, defines out of a set of mutually complementary observables which possible property of a system can manifest itself as reality; the randomness of individual events stems from the finiteness of information. A number of experiments will be reviewed underlining these views. These include an entangled-photon delayed-choice experiment in which the decision whether a photon that has passed a double slit did so as a particle or as a wave is delayed not only until after its passage through the double-slit assembly but even until after it has already been registered.
Thus, while the observed facts, i.e. the events registered by the detectors, are not changed, our physical picture changes depending on our choice of what to measure. Another experiment discussed is the observation of quantum interference of fullerenes so hot that they are not at all decoupled from the environment. Interference is still observed because the photons emitted by the fullerenes do not carry path information into the environment. The criterion for the observation of interference is simply whether or not path information is available anywhere in the universe, independent of whether or not an observer cares to read that information out. Finally, an experiment on the teleportation of an entangled photon demonstrates that the decision whether or not two photons are entangled can again be made at a time long after these photons have already been observed. More precisely, the quantum state we assign to two photons for a time period before they were registered depends on our future choice of whether or not to implement the Bell-state measurement that entangles them. This experiment lends support to the idea that the quantum state is just a representation of our knowledge, and that this knowledge changes when an observation is made. The reduction of the wave packet is then just a reflection of the fact that the representation of our information has to change whenever the information itself changes as a consequence of an observation. In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. The question of why nature appears quantized is then simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity; witness, for example, the beginning of the Gospel according to John: "In the beginning was the Word".

Anton Zeilinger
Professor of Physics
Institut für Experimentalphysik
University of Vienna
1090 Wien
Tel. +43-1-4277-51201
Fax. +43-1-4277-9512

mercoledì 15 luglio 2009

Capturing Images In Non-traditional Way

ScienceDaily (July 14, 2009) — New research in imaging may lead to advancements for the Air Force in data encryption and wide-area photography with high resolution.
Lead researcher Dr. Jason W. Fleischer of Princeton University and his team used a special optical device called a nonlinear crystal, rather than an ordinary lens, to capture an image. Every image is made up of a collection of light waves, and a lens bends (refracts) the waves towards a detector. In contrast, in the nonlinear material, these waves "talk" to each other and interact, generating new waves and distorting themselves in the process.
"The mixing is a form of physical (vs. numerical) encryption, but it would be useless if the process could not be reversed. Our algorithm provides a way of undoing the image and thus recovering the original signal. If the signal itself is encrypted from the beginning, then our method would provide another layer of protection," he said.
The reversing algorithm also allows the researchers to capture information that is lost in other imaging systems. Experimentally, the method relies on imaging both the intensity and the travel direction of the waves. This is done by taking a standard photograph of the object alone and then one with the object plus an added plane wave. The result, called a hologram, is then fed into the numerical code.
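A toy numerical sketch of the idea (my illustration, not the authors' actual algorithm): interfering the object wave with a tilted plane wave produces a hologram whose fringes encode the object's phase, which a simple Fourier-domain step can then recover.

```python
import numpy as np

# Off-axis holography sketch: the phase of an object wave O is encoded by
# interference with a tilted plane-wave reference R, then recovered digitally.
N = 1024
x = np.arange(N) / N
phase = 2.0 * np.sin(2 * np.pi * 3 * x)        # the "unknown" object phase
O = np.exp(1j * phase)                         # object wave (unit amplitude)
R = np.exp(1j * 2 * np.pi * 100 * x)           # added plane wave (100 fringes)

I_obj = np.abs(O) ** 2                         # photo of the object alone
I_holo = np.abs(O + R) ** 2                    # photo with the plane wave added

# Subtracting the two photos (and the reference intensity |R|^2 = 1) leaves
# the cross term O*conj(R) + conj(O)*R, which carries the phase information.
cross = I_holo - I_obj - 1.0
F = np.fft.fft(cross)
side = np.zeros_like(F)
side[50:150] = F[50:150]                       # keep only the +100-fringe sideband
recovered = np.conj(np.fft.ifft(side) * np.conj(R))  # demodulate back to O

err = np.max(np.abs(np.angle(recovered) - phase))
print(f"max phase error: {err:.2e} rad")
```

A plain intensity photograph discards the phase entirely; the added plane wave is what shifts the phase information onto measurable fringes, which is why the method needs both exposures.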
The researchers obtained photos of various objects using the image-capturing equipment, and in every instance the images had a wide field of view with high resolution. They used an Air Force resolution chart, which is designed to check the quality of imaging systems.
Imaging applications include optical systems that maintain their field of view as they zoom, sharper microscopes, improved lithography, and dynamical imaging of 3D objects.
Fleischer and his team are now searching for new materials to increase the level of wave mixing for stronger, faster interactions at lower light levels.
Fleischer noted, "Light travels nearly instantaneously from one end of the crystal to the other, but it takes about a second for it to respond nonlinearly. It takes less than 10 seconds to capture multiple pictures of the output and another minute or so of computer time to put them together and run the code backwards to re-construct the images."
In the future, the multiple pictures may be taken simultaneously and reconstructed faster than the current processing time it takes on a normal computer.
This research is funded by AFOSR.
Adapted from materials provided by Air Force Office of Scientific Research.

martedì 14 luglio 2009

Forgotten news: Einstein's spooky action acts at 10,000 times the speed of light.


Satellite view of Geneva region where the experiment was performed.
A spooky effect that could in theory connect particles at the opposite ends of the universe has been measured and found to exert its unsettling influence more than 10,000 times faster than the speed of light.
The effect, once described by Albert Einstein as 'spooky action at a distance', underpins quantum teleportation, a Star Trek-like ability, as well as the next generation of encryption methods and superfast quantum computers.
Now, by making measurements in two Swiss villages separated by 11 miles, Daniel Salart, a doctoral student working in the team of Prof Nicolas Gisin at the University of Geneva, has run detailed measurements and concluded that if this spooky action really exists, it must act faster than light. The new work lays down a lower speed limit of 10,000 times the speed of light. Quantum weirdness still rules, OK. This study in the journal Nature suggests that a physical signalling mechanism connecting the villages is deeply implausible, because of the well-known limit of the speed of light. Yet the effect is real nonetheless, and rests on the peculiar properties of the subatomic world.
These are described by quantum mechanics, which is routinely called strange, bizarre or counter-intuitive because the mathematics of the theory makes predictions that seem to run counter to our own experiences, a feature famously summed up by the late physicist John Bell.
Yet experiment after experiment backs them up. And the Swiss "Bell experiment" by Salart supports this wacky worldview once again.
The origins of this new experiment published in the journal Nature rest, in part, on Bell's ideas and an intellectual dispute between Albert Einstein, who hated quantum theory's unsettling take on reality, and Niels Bohr, the Danish father of atomic physics.
In 1935, Einstein outlined one such perplexing feature in a thought experiment with his colleagues Boris Podolsky and Nathan Rosen.
They first noted that quantum theory applied not only to single atoms but also to molecules made of many atoms. So, for example, a molecule containing two atoms could be described by a single mathematical expression called a wave function.
Einstein realised that if you separated these atoms, even by a vast distance, they would still be described by the same wave function. In the jargon, they were "entangled", as if their fate was connected in some way.
This may not sound so special: after all, anyone with a cell phone can achieve something similar, talking to someone on the other side of the planet with ease. The difference is that even if entangled particles are separated by billions of light-years, the fate of one instantly affects the fate of all its partners.
Einstein famously dismissed even the theoretical possibility of entanglement as "spooky action-at-a-distance".
But the reality of entanglement was first demonstrated by French scientists in 1982, notably by Alain Aspect, using light emitted by atoms driven by lasers to create pairs of entangled photons.
In the experiment, each pair was split up and the two photons sent off in opposite directions towards devices that measured their properties.
According to standard physics, the devices should show a certain degree of similarity in the properties of the two entangled photons. The precise amount should, however, be limited by the finite speed of light: roughly speaking, the photons should not have enough time to "compare notes" with each other.
The French team found, however, that the entangled photons were far more similar than expected on the basis of communication at the speed of light. In fact, the results showed that the photons were somehow communicating instantaneously - as if they were not really separated at all.
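The "more similar than expected" statement has a standard quantitative form, the CHSH version of Bell's inequality: any local, classical account limits the correlation sum S to 2, while quantum mechanics predicts up to 2√2 ≈ 2.83. A minimal sketch using the textbook optimal angles (not figures from the article itself):

```python
import numpy as np

# Quantum-mechanical correlation for polarization-entangled photon pairs
# measured with analyzers at angles a and b (radians): E(a, b) = -cos(2(a-b)).
def E(a, b):
    return -np.cos(2 * (a - b))

# Standard CHSH settings: 0 and 45 degrees on one side,
# 22.5 and 67.5 degrees on the other.
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.3f}")  # 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```

Aspect-type experiments measure exactly this kind of correlation sum and find values above 2, which is what rules out "communication at the speed of light" as an explanation.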
Then Dr Charles Bennett of IBM and others theorised that entanglement can create a "quantum phone line" able to "teleport" the details (the quantum state) of one particle to another over an arbitrary distance without knowing that state. This opened up the possibility that a transporter could transmit atomic data, even people, and also opened new opportunities for computing.
Tests have all but ruled out a classical (that is, non-quantum) explanation for these correlations between entangled photons in terms of waves or particles moving between them, but the lingering possibility remains that a first event could influence a second one if the means of influence acts faster than the speed of light.
To look for this, Salart and colleagues entangled photon pairs using a source in Geneva, then passed them through fibre-optic cables of exactly equal length to the villages of Jussy and Satigny, which lie east and west of Lake Geneva respectively.
There, the photons' entanglement was checked by an identical pair of instruments, which revealed consistent entanglement, and the effects of the Earth's rotation were taken into account. The team concludes that any signal passing between the entangled photons is, if not instantaneous, travelling at least ten thousand times faster than light.
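The arithmetic behind the bound is simple. Assuming a residual timing uncertainty of a few nanoseconds between the two detection events (the 6 ns below is an illustrative figure, not the paper's exact value), any hidden signal would have had to cross the 18 km gap fast enough to fit inside that window:

```python
# Back-of-the-envelope version of the bound. Only the 18 km separation comes
# from the article; the timing uncertainty is an illustrative assumption.
c = 299_792_458.0       # speed of light, m/s
d = 18_000.0            # separation between the two villages, m
dt = 6e-9               # assumed residual timing uncertainty: 6 ns

v_min = d / dt          # minimum speed of a hypothetical hidden signal
print(f"v_min / c = {v_min / c:.0f}")  # on the order of 10,000
```

Tightening the timing uncertainty directly raises the bound, which is why the equal-length fibres and the correction for the Earth's rotation matter so much.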
So the effect is real but, if one wanted to explain it by a transmission mechanism with waves and particles, therein lies madness. Dr Terence Rudolph of Imperial College, London, remarks that "any theory that tries to explain quantum entanglement... will need to be very spooky - spookier, perhaps, than quantum mechanics itself".
Note that Einstein's ban on faster-than-light communication remains intact: while the photons compare notes instantaneously, the contents of those notes are beyond our control, and so can't be used to transmit any useful messages.
In an attempt to rule out any kind of communication between entangled particles, physicists from the University of Geneva have sent two entangled photons traveling to different towns located 18 km apart – the longest distance for this type of quantum measurement. The distance enabled the physicists to completely finish performing their quantum measurements at each detector before any information could have time to travel between the two towns.
Many other experiments have observed quantum nonlocality – the “spooky action at a distance” that occurs between two entangled particles – also known as a violation of Bell inequalities. But, as physicists Daniel Salart, et al., explain in a recent issue of Physical Review Letters, these Bell tests might not have gone far enough. If quantum measurements aren’t finished until after a mass has moved (as the team assumes here), then the Bell violations in previous tests might merely have been due to some type of classical communication between particles unknown to today’s physics.
In their experiment, the physicists sent pairs of entangled photons from Geneva through optical fibers leading to interferometers in two other Swiss towns: Satigny and Jussy, located 8.2 and 10.7 km away, respectively. The distance between the interferometers in Satigny and Jussy was 18 km.
With this large distance between the interferometers, the physicists could perform a more complete quantum measurement than has previously been done. Somewhat surprisingly, physicists have never decided exactly when a quantum measurement is finished (when the “collapse” occurs, if there is any).
Different interpretations of quantum mechanics lead to different answers. The most common view is that a quantum measurement is finished as soon as the photons are absorbed by detectors. Previous experiments have been set up to allow enough distance between particle detectors to prohibit communication under this view. But there are also other views of when the measurement is finished, including “when the result is secured in a classical system,” “when the information is in the environment,” or even that it is never over – a view that leads to the many worlds interpretation.
The Swiss team followed a view proposed independently by Penrose and Diosi, which assumes a connection between quantum measurements and gravity and requires a macroscopic mass to be moved. In this view, the measurement takes more time than a photon needs to be absorbed by a detector. The significance of the Swiss test is that it is the first “space-like separated” Bell test under the Penrose-Diosi assumption. “There is quite a large community of physicists that speculates on possible connections between quantum gravity and the measurement problem,” said coauthor Hugo Zbinden. “The advantage of the Penrose-Diosi model is that it is testable using today's technology.”
In the physicists’ experiment, the detection of each photon by a single-photon detector triggers a voltage applied to a piezoelectric actuator. The actuator expands, which in turn causes a tiny gold-surfaced mirror to move. By measuring the mirror displacement, the researchers could confirm that, under the assumed gravity-quantum connection, the quantum measurement had been successfully finished. All of the steps – from photon detection to mirror movement – take about 7.1 microseconds, significantly less than the 60 microseconds it would take a photon to cover the 18 km between interferometers. So measurements made simultaneously at the two interferometers could not have been influenced by anything traveling at – or even a few times faster than – the speed of light.
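The space-like separation claim rests on a one-line timing comparison, using the two numbers quoted above:

```python
# Timing check behind the "space-like separation" claim: light needs about
# 60 microseconds to cross the 18 km between interferometers, while the whole
# detection chain (photon absorption to mirror displacement) takes 7.1 us.
c = 299_792_458.0     # speed of light, m/s
d = 18_000.0          # distance between the interferometers, m
t_light = d / c       # light travel time between the sites, s
t_chain = 7.1e-6      # detection-to-mirror time quoted in the article, s

print(f"light travel time: {t_light * 1e6:.1f} us; chain: {t_chain * 1e6:.1f} us")
```

Because the full measurement chain completes well inside the light travel time, even the Penrose-Diosi definition of "measurement finished" leaves no room for a light-speed influence between the two sites.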
“The significance of our experiment lies entirely in achieving space-like separation, even under the assumption that a quantum measurement is only finished after a macroscopic mass has moved, as in the Penrose-Diosi model,” Zbinden explained.
Altogether, the experiment closes a loophole by ruling out any kind of communication between two entangled particles separated by a distance, provided the collapse happens only after a mass has moved. By spatially separating the entangled photons, the test once again confirms the nonlocal nature of quantum correlations.
More information: Salart, D.; Baas, A.; van Houwelingen, J. A. W.; Gisin, N.; and Zbinden, H. “Spacelike Separation in a Bell Test Assuming Gravitationally Induced Collapses.” Physical Review Letters 100, 220404 (2008).

lunedì 13 luglio 2009

CERN, LHC: ATLAS 'til beam

“We start to run eight weeks before the day of first beam,” says Christophe Clement, Run Coordinator. Originally planned for the first of August, the start of the run is now foreseen in early September. The first four weeks resemble the spring’s slice runs. “We test various functionalities,” he explains, “make sure everything is running as it should.” This time, the muon system starts the run, joined by the calorimeters, then the inner and forward detectors. The fourth week is devoted to the high level trigger.

“Then we start running continuously,” says Christophe. ATLAS will run without beam for four weeks. Then, we’ll see more beam splash events. And after a few more weeks, with luck, the very first LHC collisions. This week, Run Coordination makes its call for shifters during the start-up period.

Before those shifts begin, much work has been scheduled in the cavern. The muon system will see one new TGC chamber and a few EE chambers. Access structures are also to be installed, and the cabling will be better organised. LUCID is set to receive its LUMAT electronics (luminosity and monitor trigger cards that also serve for readout), moved from May to July. “There were some repairs on PMTs and fixing some gas leaks too,” says Marzio Nessi, Technical Coordinator. The evaporative cooling distribution racks can expect an upgrade to boost the detail with which the system can be controlled and to increase its reliability; the extra weeks before start-up bought this project time to finish, as some hardware arrived late and a few unexpected problems arose during assembly. Carbon dioxide flushes out the Transition Radiation Tracker, but some of it may be lost in the detector, “escaping the ID front plates and entering the muon detector,” according to Marzio. Although even this leakage is well within the safety range, the safety team is adding extraction pipes for the excess carbon dioxide to the lower part of the detector.
“This is part of the consolidation plan and of an effort to minimize unnecessary risks,” he says. And, as usual, the cavern will receive a final cleaning during the first half of August to get rid of any loose magnetic material, and forward shielding will be installed. New octagonal shielding has been added this year, to be placed around the previous forward shielding.

The rest of the work concerns how ATLAS will handle data. The detector control system, DCS, has just been upgraded to its final version. Christophe notes: “It’s never final because they keep improving the control system all the time, but at least there shouldn’t be major patches unless there is a serious problem.” The slice tests were a bit nerve-wracking, since the systems were integrating with new software for the first time. However, come August, the only all-new version will be the high level trigger (HLT). One aspect under improvement is the trigger menus, which list the criteria on which collision or cosmic events are recorded.

The front-end and read-out electronics subsystems will face high-rate tests before beam, inundated with “fake” randomly generated triggers to beef up the real data from cosmic rays. In particular, each community needs to ensure that its subsystem can handle a data rate of up to 50 kilohertz. ATLAS is also still undergoing clock tests. Most systems have the high reliability required for these tests, but new readout drivers have recently been installed for the Cathode Strip Chambers (CSCs) of the muon system; the CSCs have been left out for now but will join later, during the summer. “Some systems are completely immune, and some systems are not,” Christophe says. The central Level 1 trigger team has found ways to recover or reset smoothly when the LHC’s clock stops ticking.

For Run Coordination, one of the most critical aspects of preparing for beam is the combined run. “We find various problems which are not seen when subsystems run standalone,” says Christophe.
And what seems an acceptable reliability for a subsystem on its own is not enough for the 12-system ATLAS detector: “Even if a subsystem breaks the run every day or second day, it’s six to twelve breaks a day for ATLAS.” But with four weeks set aside to get rid of any lingering difficulties with running ATLAS all together, the detector should be fully prepared for beam when it arrives.
Katie McAlpine
ATLAS e-News