Monday, June 29, 2009

Yale: the first rudimentary solid-state quantum processor in history created

SOURCE

ScienceDaily (June 29, 2009) — A team led by Yale University researchers has created the first rudimentary solid-state quantum processor, taking another step toward the ultimate dream of building a quantum computer.
They also used the two-qubit superconducting chip to successfully run elementary algorithms, such as a simple search, demonstrating quantum information processing with a solid-state device for the first time. Their findings will appear in Nature's advance online publication June 28.
"Our processor can perform only a few very simple quantum tasks, which have been demonstrated before with single nuclei, atoms and photons," said Robert Schoelkopf, the William A. Norton Professor of Applied Physics & Physics at Yale. "But this is the first time they've been possible in an all-electronic device that looks and feels much more like a regular microprocessor."
Working with a group of theoretical physicists led by Steven Girvin, the Eugene Higgins Professor of Physics & Applied Physics, the team manufactured two artificial atoms, or qubits ("quantum bits"). While each qubit is actually made up of a billion aluminum atoms, it acts like a single atom that can occupy two different energy states. These states are akin to the "1" and "0" or "on" and "off" states of regular bits employed by conventional computers. Because of the counterintuitive laws of quantum mechanics, however, scientists can effectively place qubits in a "superposition" of multiple states at the same time, allowing for greater information storage and processing power.
For example, imagine having four phone numbers, including one for a friend, but not knowing which number belonged to that friend. You would typically have to try two to three numbers before you dialed the right one. A quantum processor, on the other hand, can find the right number in only one try.
"Instead of having to place a phone call to one number, then another number, you use quantum mechanics to speed up the process," Schoelkopf said. "It's like being able to place one phone call that simultaneously tests all four numbers, but only goes through to the right one."
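Schoelkopf's four-phone-numbers analogy is Grover's search on two qubits, where a single oracle query suffices. A minimal classical simulation of the amplitude arithmetic (illustrative only, not the Yale chip's actual pulse sequence) shows the marked entry emerging with certainty after one query:

```python
import numpy as np

# Toy 2-qubit Grover search over 4 "phone numbers" (indices 0-3).
# One oracle call plus one diffusion step finds the marked index with
# probability 1, versus 2.25 classical tries on average.

def grover_4(marked: int) -> int:
    n = 4
    state = np.full(n, 1 / np.sqrt(n))         # uniform superposition
    state[marked] *= -1                        # oracle: flip marked amplitude
    state = 2 * state.mean() - state           # diffusion: invert about mean
    return int(np.argmax(np.abs(state) ** 2))  # measure: most likely outcome

assert all(grover_4(k) == k for k in range(4))
```

For four items the final amplitude of the marked index is exactly 1, which is why the quantum "phone call" goes through on the first try.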
These sorts of computations, though simple, have not been possible using solid-state qubits until now in part because scientists could not get the qubits to last long enough. While the first qubits of a decade ago were able to maintain specific quantum states for about a nanosecond, Schoelkopf and his team are now able to maintain theirs for a microsecond—a thousand times longer, which is enough to run the simple algorithms. To perform their operations, the qubits communicate with one another using a "quantum bus"—photons that transmit information through wires connecting the qubits—previously developed by the Yale group.
The key that made the two-qubit processor possible was getting the qubits to switch "on" and "off" abruptly, so that they exchanged information quickly and only when the researchers wanted them to, said Leonardo DiCarlo, a postdoctoral associate in applied physics at Yale's School of Engineering & Applied Science and lead author of the paper.
Next, the team will work to increase the amount of time the qubits maintain their quantum states so they can run more complex algorithms. They will also work to connect more qubits to the quantum bus. The processing power increases exponentially with each qubit added, Schoelkopf said, so the potential for more advanced quantum computing is enormous. But he cautions it will still be some time before quantum computers are being used to solve complex problems.
"We're still far away from building a practical quantum computer, but this is a major step forward."
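The exponential-scaling claim above can be made concrete: an n-qubit register is described by 2**n complex amplitudes, so each added qubit doubles the state space a classical machine would have to track.

```python
# Exponential scaling in concrete terms: every added qubit doubles
# the number of amplitudes needed to describe the register.
for n in (2, 10, 50):
    print(f"{n} qubits -> {2**n} amplitudes")
```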
Authors of the paper include Leonardo DiCarlo, Jerry M. Chow, Lev S. Bishop, Blake Johnson, David Schuster, Luigi Frunzio, Steven Girvin and Robert Schoelkopf (all of Yale University), Jay M. Gambetta (University of Waterloo), Johannes Majer (Atominstitut der Österreichischen Universitäten) and Alexandre Blais (Université de Sherbrooke).
Citation: 10.1038/nature08121
Adapted from materials provided by Yale University.

Sunday, June 28, 2009

Scientists Reach Milestone In Study Of Emergent Magnetism

SOURCE

ScienceDaily (June 28, 2009) — Scientists at the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago have reached a milestone in the study of emergent magnetism.
Studying simple metallic chromium, the joint UC-Argonne team has discovered a pressure-driven quantum critical regime and has achieved the first direct measurement of a "naked" quantum singularity in an elemental magnet. The team was led by University of Chicago scientist Rafael Jaramillo, working in the group of Thomas Rosenbaum, and Argonne scientist Yejun Feng of the Advanced Photon Source.
The sophisticated spin and charge order in chromium is often used as a stand-in for understanding similar phenomena in more complex materials, such as correlated oxides proximate to a quantum critical point.
"Chromium is a simple metallic crystal that exhibits a sophisticated form of antiferromagnetism," said Jaramillo. "The goal was to find a simple system."
Quantum criticality describes a continuous phase transition that is driven by quantum mechanical fluctuations, and is thought to underlie several enigmatic problems in condensed matter physics including high-temperature superconductivity. However, achieving a continuous quantum phase transition in a simple magnet has proved to be a challenging goal, as the critical behavior in all systems studied to date has been obscured by competing phenomena. The discovery of a "naked" transition in simple chromium metal therefore paves the way for a more detailed understanding of magnetic quantum criticality.
Like many elements, chromium has been studied extensively for decades, which makes a discovery of this magnitude in an element particularly important.
"It's not often that you find out something new in an element," Feng said.
The pressure scale and experimental techniques required to measure quantum criticality in chromium necessitated extensive technical development at both Argonne and the University of Chicago. The resulting techniques for high-precision measurement of condensed matter systems at high pressure, developed for use at Sector 4 of the Advanced Photon Source, now approach a level of precision and control comparable to more conventional techniques such as varying magnetic field and temperature.
This work is reported in the May 21 issue of the journal Nature.
Funding for this research was provided by the National Science Foundation Division of Materials Research and the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences.
Adapted from materials provided by DOE/Argonne National Laboratory.

Saturday, June 27, 2009

A scientist of Indian origin has created new ’superatoms’ with magnetic properties for the first time.


London, June 27 (ANI): A team of researchers led by a scientist of Indian origin has created new ’superatoms’ with magnetic properties for the first time, a breakthrough that could be used to make “spintronic devices”, faster computer processors and denser memory storage.
According to a report in New Scientist, the research was led by Shiv Khanna from Virginia Commonwealth University.
Superatoms were discovered in the 1980s when Walter Knight and colleagues at the University of California, Berkeley, found that groups of sodium atoms can share electrons amongst themselves.
The electrons form a collective “supershell” that coats the cluster.
Until now, clusters that copy the magnetic properties of other elements have proved more difficult to design.
Magnetism is caused by the spin of an atom’s electrons, which are arranged in shells, or orbitals, around the atom’s nucleus.
Their net spin determines the strength of the atom’s magnetic “moment,” and because they tend to occur in pairs that cancel each other out, it is the atom’s unpaired electrons that contribute to its magnetic moment.
Unpaired electrons, however, will make an atom, or a superatom, more likely to react with others in an attempt to fill its orbitals and become stable.
As a result, stability and magnetism have long been thought to be mutually exclusive.
A team led by Shiv Khanna at Virginia Commonwealth University has come up with a way around the problem.
Khanna’s team worked out that encapsulating an atom of vanadium in a cage of eight caesium atoms would create a stable supershell of electrons around the entire cluster.
This would prevent the vanadium atom’s unpaired electrons from reacting with other atoms, maintaining its magnetism.
The arrangement would yield a magnetic moment of five Bohr magnetons, which is the same as an atom of manganese.
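The five-Bohr-magneton figure follows from spin-only counting: by Hund's rule, electrons singly occupy the five d orbitals before pairing, and each unpaired electron contributes roughly one Bohr magneton to the moment. A small sketch of that counting (illustrative only; real atomic moments also carry orbital contributions):

```python
# Spin-only estimate of a magnetic moment in Bohr magnetons: count the
# unpaired electrons in a d subshell (5 orbitals, Hund's rule filling).

def unpaired_d_electrons(n_d: int) -> int:
    """Unpaired electrons for n_d electrons in a d subshell."""
    assert 0 <= n_d <= 10
    return n_d if n_d <= 5 else 10 - n_d

# Mn ([Ar] 3d5 4s2): five unpaired d electrons -> ~5 Bohr magnetons,
# the same spin moment the VCs8 superatom is calculated to carry.
print(unpaired_d_electrons(5))
# A free V atom ([Ar] 3d3 4s2) has only 3 unpaired d electrons:
print(unpaired_d_electrons(3))
```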
“What we have done is expand the range of possible magnetic materials,” said Khanna.
Khanna’s magnetic superatoms are only calculations at this point, but he has funding from the Department of Energy to make them a reality.
He hopes the clusters can be used to give researchers a new dimension of control in designing new materials.
For example, stable magnetic clusters could one day be used in new “spintronic” devices, which compute or store information using magnetic moments rather than simply electrical charge.
Encoding data in this way means the devices can be far smaller than those used to make conventional electronic components, potentially providing an overall boost in computing power. (ANI)

A Higgs boson without the mess.

SOURCE

Particle physicists at CERN’s Large Hadron Collider (LHC) hope to discover the Higgs boson amid the froth of particles born from proton-proton collisions. Results in the 19 June Physical Review Letters show that there may be a way to cut through some of that froth. An experiment at Fermilab’s proton-antiproton collider in Illinois has identified a rare process that produces matter from the intense field of the strong nuclear force but leaves the proton and antiproton intact. There’s a chance the same basic interaction could give LHC physicists a cleaner look at the Higgs.
A proton is always surrounded by a swarm of ghostly virtual photons and gluons associated with the fields of the electromagnetic and strong nuclear forces. Researchers have predicted that when two protons (or a proton and an antiproton) fly past one another at close range, within about a proton’s diameter, these virtual particle clouds may occasionally interact to create new, real (not virtual) particles. The original protons would merely lose some momentum and separate from the beam. Such an “exclusive” reaction–where the original particles don’t break apart–gives unusually clean data because there are so few particles to detect.
In the new experiment, researchers were looking for signs that the interaction of virtual gluons had generated short-lived particles including the Χc ("Chi-c") and J/ψ mesons, which are charm-anticharm quark pairs that decay into muons and antimuons. The Χc reaction would be especially rare because it requires protons to donate two gluons each, a requirement that also makes detailed predictions challenging, says Fermilab's Mike Albrow, a member of the Collider Detector at Fermilab (CDF) collaboration.
In 2007, CDF researchers observed hints of exclusive, virtual gluon reactions in the form of high-energy photons radiating from colliding protons and antiprotons. Now the team has sifted through nearly 500 muon-antimuon pairs, identifying 65 that must have come from the decay of the Χc–very close to the rate predicted in 2005 by a team at Durham University in England. Because the Χc has similar particle properties to the much heavier Higgs boson, the same basic reaction should produce the Higgs at the higher collision energies provided by the LHC, says Albrow. “It’s the strongest evidence that the Higgs boson must be produced this way, if it does exist.”
Based on the rate of Χc production, Albrow estimates LHC collisions could produce 100 to 1000 Higgs bosons per year in each of the accelerator’s two largest particle detectors, ATLAS and CMS. “Even a few dozen events per year would enable you to measure the [Higgs's] mass, spin, and other properties,” he says. That’s why ATLAS and CMS teams are reviewing proposals to add detectors to look for exclusive Higgs events.
But not everyone is so optimistic that these events would be detectable in significant numbers. “It looks hard, but one should never say never,” says Joseph Incandela of the University of California, Santa Barbara, deputy physics coordinator for CMS. Incandela points out that once the LHC is operating at full capacity, every crossing of its twin proton beams is expected to yield about 20 collisions, throwing up other particles that may obscure exclusive reactions. But he says there are scenarios such as supersymmetry, a proposed extension to the standard model (the textbook theory of particle physics) in which there could be multiple Higgs bosons. In those situations, Albrow adds, exclusive reactions might be the only ones clean enough to distinguish the different Higgs particles.
by JR Minkel
JR Minkel is a freelance science writer in Nashville, Tennessee. His first book, Instant Egghead Guide: The Universe, comes out in July.
This story was first published in Physical Review Focus and is copyright American Physical Society. Reprinted with permission.
For more information on exclusive events, see the CERN Courier.

The Australian National University has developed a breakthrough approach to generate quantum entanglement.

SOURCE

Faster computers, more secure information transfer, smarter sensors – all technologies that could benefit from a new, simpler way of manipulating light to convey much more information using fewer light beams and resources.
A research team based at The Australian National University has developed a breakthrough approach to generate quantum entanglement, where information is coded in the physical relationship between two objects. Their findings are published in the journal Nature Photonics this week.
Research leader Dr Jiri Janousek from the ARC Centre of Excellence for Quantum-Atom Optics at ANU said that the discovery is based on existing optical technology, but uses it in such a way that far fewer components and light beams are required.
“Light beams produced from lasers can be used to convey information via a process known as quantum entanglement,” Dr Janousek said. “Basically this means that at the miniscule scale of the quantum world, information about one object – a stream of photons, for example – can convey information about another object to which it is linked. We can use quantum entanglement, or optical entanglement, to convey information.”
“Until now, the amount of information that could be conveyed using optical entanglement was limited by levels of complexity. The ability to scale up information transfer is hampered by the fact that you need to increase the number of nonclassical light sources, splitters and receivers each time you want to add another channel of information. This means that multi-channelling has been consigned to the too-hard basket – too many parts for too little effect.”
Dr Janousek said that the group’s discovery about mode manipulation in light meant that only one light source and one receiver are required to generate optical entanglement, meaning that this approach could be scaled up much more simply to convey many times more information channels.
“There are only ten labs in the world that would be able to do this kind of research, and we were the first to find a solution to this particular problem,” Dr Janousek said. “This finding is one more piece in the puzzle towards the future realization of quantum computers, which would be many times faster and more powerful than existing computers. But in the medium timeframe this discovery could assist in the development of quantum technologies – things like quantum communication and information processing.”
The research team included scientists from ANU, the Laboratoire Kastler Brossel in Paris and the Australian Defence Force Academy. The project was funded by the Australian Research Council, with support from ANU, CNRS, the École Normale Supérieure, and the European Commission’s Seventh Framework Programme for Research.
Read the paper in Nature Photonics: http://www.nature.com/nphoton/journal/vaop/ncurrent/full/nphoton.2009.97.html
Contact: Penny Cox, Communications Officer, Tel: 02 6125 3549, Email: Penny.Cox@anu.edu.au
Source: The Australian National University ANU

Thursday, June 25, 2009

A Cambridge University-led research team has discovered a technique to safely handle and transport white phosphorus.

SOURCE

For centuries it has been known for its violent combustion upon contact with air - but this week a Cambridge-led team of researchers reveals that it has tamed one of the most hazardous chemical substances.
Their work could also result in an array of hazardous chemicals being handled and transported more safely in future.
The substance in question is white phosphorus, a feedstock for the preparation of many useful chemicals such as weed killers, insecticides and fertiliser.
White phosphorus is also infamous for its propensity to burst into flame. For this reason it is often used in military campaigns to create smokescreens to mask movement from the enemy, as well as an incendiary in bombs, artillery and mortars.
This research, published this week in the journal Science, was carried out by a team consisting of Prasenjit Mal, Boris Breiner and senior author Jonathan Nitschke at the University of Cambridge's Department of Chemistry, together with Kari Rissanen from the University of Jyvaskyla in Finland.
The team created a 'container molecule' to stabilise white phosphorus indefinitely. This renders it safe until such time as a signal agent, benzene, is applied to release it.
The practical implications of the research are impressive: the technique of 'caging' individual molecules of the substance allows it to be manipulated and stored with greater safety, and has the potential to be used to tame other dangerous chemicals.
Dr Nitschke says: "It is foreseeable that our technique might be used to clean up a white phosphorus spill, either as part of an industrial accident or in a war zone. In addition to its ability to inflict grievous harm while burning, white phosphorus is very toxic and poses a major environmental hazard."
Source: University of Cambridge (news : web)

Monday, June 22, 2009

New method to detect quantum mechanical effects in ordinary objects

SOURCE

Scanning electron micrograph of a superconducting qubit in close proximity to a nanomechanical resonator. The nanoresonator is the bilayer (silicon nitride/aluminum) beam spanning the length of the trench in the center of the image; the qubit is the aluminum island located to the left of the nanoresonator. An aluminum electrode, located adjacent to the nanoresonator on the right, is used to actuate and sense the nanoresonator's motion. Credit: Electron beam lithography was performed by Richard Muller at JPL. Nanoresonator etch was performed by Junho Suh in the Roukes Lab. Image taken by Junho Suh.
At the quantum level, the atoms that make up matter and the photons that make up light behave in a number of seemingly bizarre ways. Particles can exist in "superposition," in more than one state at the same time (as long as we don't look), a situation that permitted Schrödinger's famed cat to be simultaneously alive and dead; matter can be "entangled" -- Albert Einstein called it "spooky action at a distance" -- such that one thing influences another thing, regardless of how far apart the two are.
Previously, scientists have successfully measured entanglement in photons and in small collections of just a few atoms. But physicists have long wondered whether larger collections of atoms--those that form objects with sizes closer to what we are familiar with in our day-to-day life--also exhibit quantum effects.
"Atoms and photons are intrinsically quantum mechanical, so it's no surprise if they behave in quantum mechanical ways. The question is, do these larger collections of atoms do this as well," says Matt LaHaye, a postdoctoral research scientist working in the laboratory of Michael L. Roukes, a professor of physics, applied physics, and bioengineering at the California Institute of Technology (Caltech) and codirector of Caltech's Kavli Institute.
"It'd be weird to think of ordinary matter behaving in a quantum way, but there's no reason it shouldn't," says Keith Schwab, an associate professor of applied physics at Caltech, and a collaborator of Roukes and LaHaye. "If single particles are quantum mechanical, then collections of particles should also be quantum mechanical. And if that's not the case--if the quantum mechanical behavior breaks down--that means there's some kind of new physics going on that we don't understand."
The tricky part, however, is devising an experiment that can detect quantum mechanical behavior in such ordinary objects—without, for example, those effects being interfered with or even destroyed by the experiment itself.
Now, however, LaHaye, Schwab, Roukes, and their colleagues have developed a new tool that meets such fastidious demands and that can be used to search for quantum effects in an ordinary object. The researchers describe their work in the latest issue of the journal Nature.
In their experiment, the Caltech scientists used microfabrication techniques to create a very tiny nanoelectromechanical system (NEMS) resonator, a silicon-nitride beam—just 2 micrometers long, 0.2 micrometers wide, and weighing 40 billionths of a milligram—that can resonate, or flex back and forth, at a high frequency when a voltage is applied.
A small distance (300 nanometers, or 300 billionths of a meter) from the resonator, the scientists fabricated a second nanoscale device known as a single-Cooper-pair box, or superconducting "qubit"; a qubit is the basic unit of quantum information.
The superconducting qubit is essentially an island formed between two insulating barriers across which a set of paired electrons can travel. In the Caltech experiments, the qubit has only two quantized energy states: the ground state and an excited state. This energy state can be controlled by applying microwave radiation, which creates an electric field.
Because the NEMS resonator and the qubit are fabricated so closely together, their behavior is tightly linked; this allows the NEMS resonator to be used as a probe for the energy quantization of the qubit. "When the qubit is excited, the NEMS bridge vibrates at a higher frequency than it does when the qubit is in the ground state," LaHaye says.
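The state-dependent frequency shift LaHaye describes can be caricatured as a pull of the resonator frequency up or down depending on the qubit's state. The numbers below are invented for illustration and are not from the paper:

```python
# Toy model of qubit readout via a nanoresonator frequency shift:
# the resonator frequency is pulled by +chi or -chi depending on
# whether the coupled qubit is excited or in its ground state.
# f0 and chi are hypothetical values, chosen only for illustration.

f0 = 60e6      # bare nanoresonator frequency, Hz (hypothetical)
chi = 120e3    # state-dependent frequency pull, Hz (hypothetical)

def resonator_freq(qubit_excited: bool) -> float:
    """Resonator frequency conditioned on the qubit state."""
    return f0 + chi if qubit_excited else f0 - chi

shift = resonator_freq(True) - resonator_freq(False)
print(f"frequency difference between qubit states: {shift/1e3:.0f} kHz")
```

Tracking the resonator's vibration frequency then reads out which energy state the qubit occupies, which is the probe role the article describes.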
One of the most exciting aspects of this work is that this same coupling should also enable measurements to observe the discrete energy levels of the vibrating resonator that are predicted by quantum mechanics, the scientists say. This will require that the present experiment be turned around (so to speak), with the qubit used to probe the NEMS resonator. This could also make possible demonstrations of nanomechanical quantum superpositions and Einstein's spooky "action at a distance," or entanglement.
"Quantum jumps are, perhaps, the archetypal signature of behavior governed by quantum effects," says Roukes. "To see these requires us to engineer a special kind of interaction between our measurement apparatus and the object being measured. Matt's results establish a practical and really intriguing way to make this happen."
More information: The paper, "Nanomechanical measurements of a superconducting qubit," was published in the June 18 issue of Nature.
Source: California Institute of Technology (news : web)

Unlike Rubber Bands, Molecular Bonds May Not Break Faster When Pulled

SOURCE

ScienceDaily (June 22, 2009) — From balloons to rubber bands, things always break faster when stretched. Or do they? University of Illinois scientists studying chemical bonds now have shown this isn't always the case, and their results may have profound implications for the stability of proteins to mechanical stress and the design of new high-tech polymers.
"Our findings contradict the intuitive notion that molecules are like rubber bands in that when we pull on a chemical bond, it should always break faster," said chemistry professor Roman Boulatov, who led the study. "When we stretch a sulfur-sulfur bond, for example, how fast it breaks depends on how the nearby atoms move."
The findings also contradict the conventional interpretation of experimental results obtained by other researchers studying the fragmentation rate of certain proteins containing sulfur-sulfur bonds when stretched with a microscopic force probe. In those experiments, as the force increased, the proteins fragmented faster, leading the researchers to conclude that as the sulfur-sulfur bond was stretched, it reacted faster and broke faster.
"Our experiments suggest a different conclusion," Boulatov said. "We believe the acceleration of the fragmentation was caused by a change in the protein's structure as it was stretched, and had little or nothing to do with increased reactivity of a stretched sulfur-sulfur bond."
In their experiments, the researchers use stiff stilbene as a molecular force probe to generate well-defined forces on molecules atom by atom.
The probe allows reaction rates to be measured as a function of the restoring force. Similar to the force that develops when a rubber band is stretched, the molecular restoring force contains information about how much the molecule was distorted, and in what direction.
In previous work, when Boulatov's team pulled on carbon-carbon bonds with the same force they would later apply to sulfur-sulfur bonds, they found the carbon-carbon bonds broke a million times faster than when no force was applied.
"Because the sulfur-sulfur bond is much weaker than the carbon-carbon bond, you might think it would be much more sensitive to being pulled on," Boulatov said. "We found, however, that the sulfur-sulfur bond does not break any faster when pulled."
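The "rubber band" intuition being tested here is the Bell (force-tilted Arrhenius) picture, in which a pulling force F lowers the rupture barrier by F·Δx and so accelerates breaking exponentially. A rough sketch with a hypothetical transition-state elongation Δx reproduces the million-fold carbon-carbon scale; the paper's point is that the sulfur-sulfur bond defies this simple picture:

```python
import math

# Bell model of force-accelerated bond rupture:
#   k(F) = k0 * exp(F * dx / (kB * T))
# dx (transition-state elongation) below is hypothetical, chosen only
# to illustrate the magnitude of the C-C acceleration.

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # room temperature, K

def rate_acceleration(force_pN: float, dx_nm: float) -> float:
    """Factor by which the Bell model predicts rupture speeds up."""
    return math.exp(force_pN * 1e-12 * dx_nm * 1e-9 / (kB * T))

# A pull of ~570 pN over dx ~ 0.1 nm gives roughly a 10^6-fold speed-up:
print(f"{rate_acceleration(570, 0.1):.2e}")
```

The S-S result shows that when neighboring groups absorb the distortion, the effective Δx along the reaction coordinate can vanish, leaving the rate force-independent.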
Boulatov and his team report their findings in a paper accepted for publication in Angewandte Chemie, and posted on the journal's Web site.
"When we pulled on the sulfur-sulfur bond, the nearby methylene groups prevented the rest of the molecule from relaxing," Boulatov said, "thus eliminating the driving force for the sulfur-sulfur bond to break any faster."
Chemists must bear in mind that even in simple chemical reactions, such as a single bond dissociation, "we must take into account other structural changes in the molecule," Boulatov said. "The elongation alone, which occurs when a bond is stretched, does not represent the full picture of what happens when the reaction occurs."
The good news, Boulatov said, is that not every polymer that is stretched will break faster. "We might be able to design polymers, for example, that would resist fragmentation under modest mechanical stresses," he said, "or will not break along the stretched direction, but in some other desired direction."
With Boulatov, co-authors of the paper are graduate student and lead author Timothy Kucharski, research associate Qing-Zheng Yang, postdoctoral researcher Yancong Tian, and graduate students Zhen Huang, Nicholas Rubin and Carlos Concepcion.
Funding was provided by the National Science Foundation, the U.S. Air Force Office of Scientific Research, the American Chemical Society Petroleum Research Fund, and the U. of I.
Adapted from materials provided by University of Illinois at Urbana-Champaign.

Chemists Form World's Smallest Droplet Of Acid

SOURCE

ScienceDaily (June 22, 2009) — Exactly four water molecules and one hydrogen chloride molecule are necessary to form the smallest droplet of acid. This was the result of work by the groups of Prof. Dr. Martina Havenith (physical chemistry) and Prof. Dr. Dominik Marx (theoretical chemistry) within the research group FOR 618. They carried out experiments at ultracold temperatures close to absolute zero, using infrared laser spectroscopy to monitor the molecules.
This was accompanied by theoretical ab initio simulations. According to their calculations, the reaction at these extremely cold temperatures is only possible if the molecules aggregate one after the other.
Chemistry at ultracold temperatures in space
If you put a classical acid, for example hydrogen chloride, in water, the acid molecules will preferentially lose a proton (H+). Thereby the pH value of the solution is decreased and the solution becomes acidic. In particular, so-called hydronium ions (H3O+) are formed from protonated water molecules. The hydronium ion is an important ingredient in many chemical reactions. Despite the fact that this is one of the most fundamental reactions, it was not clear until now how many water molecules are actually required to form a charge-separated negative Cl- ion and a positive H3O+ ion. “Whereas we all know acids from our daily life, we have now been able to observe acid formation on a molecular level for the first time. We will need this knowledge to understand chemical processes on nanoscopic structures, on small particles and on surfaces,” explains Prof. Havenith-Newen. This indicates that there is a rich chemistry even at very low temperatures, a fundamental basis for reactions within stratospheric clouds or in interstellar media. Previously, it had been unclear whether reactions with only a few water molecules can take place at these ultracold temperatures.
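The bulk-scale counterpart of this proton transfer is ordinary acid arithmetic: pH is the negative base-10 logarithm of the hydronium concentration. A quick illustration, assuming a fully dissociated strong acid and example concentrations in mol/L:

```python
import math

# pH from the hydronium ion concentration produced when an acid like
# HCl donates its proton to water: HCl + H2O -> H3O+ + Cl-.

def pH(hydronium_molar: float) -> float:
    """pH = -log10 of the H3O+ concentration in mol/L."""
    return -math.log10(hydronium_molar)

print(pH(1e-7))   # neutral water: pH 7
print(pH(0.01))   # 0.01 M HCl, fully dissociated: pH 2
```

The experiment in the article probes the molecular limit of exactly this process: how few water molecules still suffice for the proton to actually leave the HCl.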
Ultracold trap
For their experiments, the researchers successively embedded hydrogen chloride as well as single water molecules in a special ultracold trap. They used nanodroplets of superfluid helium, which have a temperature of less than -272.8 °C. Molecules are first cooled down before they have a chance to aggregate. Superfluidity is a special property of helium which means that the embedded molecules are still free to rotate before they are frozen, thereby allowing monitoring with unsurpassed precision. Captured in such a way, it is possible to obtain the chemical fingerprint of the acid – its infrared spectrum. By combining trapping with high-resolution IR laser spectroscopy and theoretical calculations, the chemists demonstrated that exactly four water molecules are required to form the smallest droplet of acid: (H3O)+(H2O)3Cl-.
Important: One molecule after the other
After these results, the researchers were left with the question of how this reaction can take place at ultracold temperatures near absolute zero. “Usually, activation of chemical reactions requires the input of energy, just as cooking at home needs a hot plate or a gas flame,” explains Prof. Marx. “But how should this be possible at a few Kelvin, close to absolute zero?” The calculations, in combination with experiment, showed that the reaction is only possible through a successive aggregation process. Instead of putting together four water molecules and an HCl molecule simultaneously at the outset and then waiting for a dissociation process to occur, they found in their simulations that when the water molecules are added step by step, a proton is transferred exactly upon adding the fourth water molecule. A hydronium ion then immediately forms with one of the four added water molecules. This unusual mechanism is called “aggregation-induced dissociation”. “We suspect that such aggregation-induced reactions can explain chemical transformations under ultracold conditions, such as those found on small ice particles in clouds and in interstellar media,” explains Prof. Marx.
The work described here is part of the research unit FOR 618, “Understanding the Aggregation of Small Molecules with Precise Methods - Interplay between Experiment and Theory” (co-ordinator: Prof. Dr. Wolfram Sander, Faculty of Chemistry and Biochemistry), which is funded by the German Research Foundation (DFG) and has just been extended for three more years after a successful evaluation.
Journal reference:
Anna Gutberlet, et al. Aggregation-Induced Dissociation of HCl(H2O)4 Below 1 K: The Smallest Droplet of Acid. Science 324, 1545 (2009). DOI: 10.1126/science.1171753
Adapted from materials provided by Ruhr-Universitaet-Bochum, via AlphaGalileo.

sabato 20 giugno 2009

'Look Mom No Electricity': Transmitting Information with Chemistry


Burning an infofuse transmits a sequence of pulses of light, in which information is encoded using different wavelengths (determined by various metallic salts) and the order of the pattern. Image credit: Samuel W. Thomas III, et al. ©2009 PNAS.
(PhysOrg.com) -- While information technology is generally thought to require electrons or photons for transmitting information, scientists have recently demonstrated a third method of transmission: chemical reactions. Based on a flammable “infofuse,” the new system combines information technology and chemistry into a new area the researchers call "infochemistry."
In the study, led by George Whitesides of Harvard University, with other coauthors from Harvard, Tufts University, and DARPA, the scientists explain that their system transmits information in the form of coded pulses of light generated entirely by chemical reactions, without electricity. The system is self-powered, with power generated by combustion. Its power density is higher than that of electrochemical batteries, and it has the advantage of not discharging over time.
As Whitesides explained to PhysOrg.com, the significance of the study is that it “demonstrates direct chemical to binary encoding, and transmission of information at a useful bit rate, without batteries.” The researchers hope that their prototype will one day make it possible to make systems that transmit useful information in circumstances in which electronics and batteries do not work, such as harsh environments and under water.
As the scientists explain, the system consists of a strip, or fuse, of combustible material (nitrocellulose) about 1 mm wide. When ignited, a yellow-orange flame moves along the infofuse. To encode information, the scientists patterned the fuse with various metallic salts, applied with a desktop inkjet printer or a micropipettor. With their different emission wavelengths, the salts created distinct emission lines in different regions of the spectrum, much as the colors of fireworks are made: blue (copper), green (barium), yellow (sodium), red (lithium, strontium, calcium), or near-infrared (potassium, rubidium, cesium).
The infofuse, which burns at about 3-4 cm/sec depending on thickness and pattern spacing, is then read by a detector, such as a color CCD camera or fiber optic cable coupled to a spectrometer. The distance between the detector and burning infofuse was typically 2 m, but the detector could still detect a signal up to 30 m away in daylight.
By coding letters of the alphabet using patterns of metallic salts, the scientists transmitted the phrase, “LOOK MOM NO ELECTRICITY” on a single infofuse using the new technique. As the scientists explain, light pulses have several controllable variables that can be used to represent different letters and symbols. In addition to emission wavelength, other variables include pulse duration, time between pulses, and emission intensity. Using combinations of three alkali metals, the researchers demonstrated how to encode 40 different characters by varying some of these parameters.
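As a rough illustration of the encoding capacity described above, here is a hypothetical Python sketch. The actual codebook and pulse format used in the paper are not reproduced here; the subset-of-metals scheme and the 40-character alphabet below are assumptions for illustration. With three alkali metals, each pulse can carry any non-empty combination of the three emission lines (7 possibilities), so a pair of pulses yields 49 combinations, comfortably covering 40 characters:

```python
from itertools import combinations, product

# The three alkali metals assumed for encoding (near-IR emitters in the article).
METALS = ("Li", "Rb", "Cs")

# Each light pulse carries a non-empty subset of the three metals:
# 2**3 - 1 = 7 distinguishable pulses.  A character is a pair of pulses,
# giving 7 * 7 = 49 combinations, enough for 40 characters.
PULSES = [frozenset(c) for r in (1, 2, 3) for c in combinations(METALS, r)]

# Hypothetical 40-character alphabet (the paper's actual table is not shown).
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,!"

CODEBOOK = dict(zip(ALPHABET, product(PULSES, PULSES)))
DECODE = {v: k for k, v in CODEBOOK.items()}

def encode(message):
    """Turn a message into a flat stream of pulses (two per character)."""
    return [pulse for ch in message.upper() for pulse in CODEBOOK[ch]]

def decode(pulses):
    """Re-pair the pulse stream and look each pair up in the codebook."""
    return "".join(DECODE[pair] for pair in zip(pulses[0::2], pulses[1::2]))

msg = "LOOK MOM NO ELECTRICITY"
assert decode(encode(msg)) == msg
```

Under this assumed scheme, the 23-character test phrase becomes a stream of 46 pulses; the achievable bit rate then depends on the quoted 3-4 cm/s burn rate and how densely the salt pattern is printed.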
“It needs a flame, but it does not need additional batteries or power, or auxiliary devices, to convert a chemical signal to a digital one,” Whitesides said. “The power needed to generate the light is produced by chemistry directly, not by drawing power from a battery.”
Although the current infofuses convert energy into light with only 1% of the efficiency of a battery-powered LED, they generate 10 times more energy per weight than an alkaline battery. In general, integrating information technology and chemistry could have certain advantages, possibly leading to systems that operate beyond binary schemes by using a variety of parameters that allow each information unit to carry more than one bit. Also, since infochemistry is bound not by the principles of electronics (such as fixed circuitry) but by the principles of chemistry, new systems could lead to novel architectures.
The scientists hope that further improvements to their system could lead to lightweight, portable, self-powered systems that can transmit information and integrate with modern information technologies. Applications could include environmental sensing and transmitting the data optically over a distance. The system could also be used for message transmission in search-and-rescue type applications.
More information: Samuel W. Thomas III, et al. "Infochemistry and infofuses for the chemical storage and transmission of coded information." Proceedings of the National Academy of Sciences 106 (23), 9147-9150 (2009).

venerdì 19 giugno 2009

Scientist Finds Plumber's Wonderland On Graphene

SOURCE

ScienceDaily (June 18, 2009) — Engineers from the University of Pennsylvania, Sandia National Laboratories and Rice University have demonstrated the formation of interconnected carbon nanostructures on a graphene substrate. The simple assembly process, which heats few-layer graphene sheets to sublimation using an electric current, may eventually lead to a new paradigm for building integrated carbon-based devices.
Curvy nanostructures such as carbon nanotubes and fullerenes have extraordinary properties but are extremely challenging to pick up, handle and assemble into devices after synthesis. Penn materials scientist Ju Li and Sandia scientist Jianyu Huang have come up with a novel idea: construct curvy nanostructures directly integrated on graphene. They take advantage of the fact that graphene, an atomically thin two-dimensional sheet, bends easily once open edges have been cut into it; these edges can then fuse permanently with other open edges, like a plumber connecting metal fittings.
The "knife" and "welding torch" used in the experiments, which were performed inside an electron microscope, was electrical current from a Nanofactory scanning probe, generating up to 2000°C of heat. Upon applying the electrical current to few-layer graphene, they observed the in situ creation of many interconnected, curved carbon nanostructures, such as "fractional nanotube"-like graphene bi-layer edges, or BLEs; BLE rings on graphene equivalent to "anti quantum-dots"; and nanotube-BLE assembly connecting multiple layers of graphene.
Remarkably, researchers observed that more than 99 percent of the graphene edges formed during sublimation were curved BLEs rather than flat monolayer edges, indicating that BLEs are the stable edges in graphene, in agreement with predictions based on symmetry considerations and energetic calculations. Theory also predicts these BLEs, or "fractional nanotubes," possess novel properties of their own and may find applications in devices.
Li and Huang observed the creation of these interconnected carbon nanostructures using the heat of electric current and a high-resolution transmission electron microscope. The current, once passed through the graphene layers, improved the crystalline quality and surface cleanness of the graphene as well, both important for device fabrication.
The sublimation of few-layer graphene, such as a 10-layer stack, is advantageous over the sublimation of monolayers. In few-layer graphene, layers spontaneously fuse together forming nanostructures on top of one or two electrically conductive, extended, graphene sheets.
During heating, both the flat graphene sheets and the self-wrapping nanostructures that form, like bilayer edges and nanotubes, have unique electronic properties important for device applications. The biggest obstacle for engineers has been wresting control of the structure and assembly of these nanostructures to best exploit the properties of carbon. The discovery of these self-assembled carbon nanostructures may circumvent that hurdle and lead to a new approach to graphene-based electronic devices.
Researchers induced the sublimation of multilayer graphene by Joule heating, making it thermodynamically favorable for the carbon atoms at the edge of the material to escape into the gas phase, leaving freshly exposed edges on the solid graphene. The remaining graphene edges curl and often weld together to form BLEs. Researchers attribute this behavior to the system's drive to reduce capillary energy (the dangling bonds on the open edges of monolayer graphene) at the cost of increased bending energy.
"This study demonstrates it is possible to make and integrate curved nanostructures directly on flat graphene, which is extended and electrically conducting," said Li, associate professor in the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science. "Furthermore, it demonstrates that multiple graphene sheets can be intentionally interconnected. And the quality of the plumbing is exceptionally high, better than anything people have used for electrical contacts with carbon nanotubes so far. We are currently investigating the fundamental properties of graphene bi-layer edges, BLE rings and nanotube-BLE junctions."
Short movies of the fabrication of these nanostructures can be viewed at http://www.youtube.com/user/MaterialsTheory.
The study is published in the current issue of the journal Proceedings of the National Academy of Sciences. The study was performed by Li and Liang Qi of Penn, Jian Yu Huang and Ping Lu of the Center for Integrated Nanotechnologies at Sandia and Feng Ding and Boris I. Yakobson of the Department of Mechanical Engineering and Materials Science at Rice.
It was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Honda Research Institute, the Department of Energy and the Office of Naval Research.
Adapted from materials provided by University of Pennsylvania.

Most Efficient And Stable Source Of Pure White Light Ever Achieved

SOURCE

ScienceDaily (June 19, 2009) — Researchers are reporting the first use of a fundamentally new approach in the quest to snare the Holy Grail of the lighting industry: An LED (light-emitting diode) — those ultra-efficient, long-lived light sources — that emits pure white light. The new approach yielded what the scientists describe as the most efficient and stable source of pure white light ever achieved. The advance could speed the development of this next-generation technology for improved lighting of homes, offices, displays, and other applications, they say.
Soo Young Park and colleagues note that white LEDs show promise as a brighter, longer-lasting and more energy-efficient light source than conventional lighting, such as incandescent and fluorescent lights, which they may replace in the future. But scientists have had difficulty producing white LEDs that are suitable for practical use. Existing technologies produce tinted shades of white light, require complex components, and become unstable over time.
The researchers describe development of a new, simpler white LED that is the first to achieve stable white light emissions using a single molecule. Their specially engineered molecule combines two light-emitting materials, one orange and one blue, which together produce white light over the entire visible range. In laboratory studies, the scientists showed that light production from an LED using the new molecule was highly efficient and had excellent color stability and reproducibility, features that make it a practical white light source.
Journal reference:
Park et al. A White-Light-Emitting Molecule: Frustrated Energy Transfer between Constituent Emitting Centers. Journal of the American Chemical Society, 2009. DOI: 10.1021/ja902533f
Adapted from materials provided by American Chemical Society.

venerdì 12 giugno 2009

ATLAS (LHC,CERN) e-News: It’s like déjà-vu all over again

SOURCE

In the movie Groundhog Day, a reporter is forced to re-live a day (February 2) over and over again. Perhaps the coming LHC re-startup would feel like Groundhog Day to some science reporters. CERN therefore has no plans to invite the media here for the first beams this autumn. The first collisions at 450 on 450 GeV (hopefully a short time later) will also be kept low key. What is not fully decided is how to handle the first high-energy collisions and the first events from ATLAS and the other experiments.

The images from last September (both control room scenes and the splash events) made a vivid impression on the news media and the public. This time ATLAS will not allow reporters within the Control Room, but the new Visitor Center allows a view of the activities, and new high-quality cameras (mounted on the back wall) can provide the video footage the media wants. The ATLAS website had more than two million hits during the September 2008 activities.

Rumors are swirling that Tom Hanks may return to CERN to "turn on" the LHC, but of course that depends on the schedules of Hanks and the LHC. There are several stages in the startup process that could be defined as "turn-on".

Four members of the ATLAS outreach group have been travelling to Sony Pictures (near Hollywood) to help develop a 15-minute CERN/ATLAS "extra" that will be added to the DVD of Angels & Demons, to be released in November. We have also requested a copy of the five-minute segment near the beginning of the movie that has some spectacular cinematography of ATLAS. Have you seen our ATLAS webpages on antimatter at http://atlas.ch/angels-demons/ and the new exhibition in the Globe? Finally, there have been at least 60 very successful lectures worldwide discussing the science of the movie and the science of the LHC.

In other news, design work is in progress to replace the aging posters in the Bldg. 40 café.
The idea is to make the café an appealing place for physicists to chat and sip coffee, as well as an interesting place to bring visitors to show them the basics of ATLAS.

If you haven't seen modern pop-up books, you will be surprised by the complexity they can portray. ATLAS will soon have its own dramatic pop-up book, a production led by Emma Sanders. It portrays both the detector and the physics.

Work is in progress on an animation in the style of the ATLAS animations Episodes I and II. This new project is a joint effort with CMS to portray the physics of the LHC. Two professional animators are working on the project, and a senior person at Pixar has been consulting periodically. The completion date is approximately October 2009. Thought is also being given to an update of the 12-year-old ATLAS movie. All of our videos are on YouTube.

Both CERN and ATLAS are working on new or updated communication plans. In the case of CERN, a draft plan has been circulated amongst the experiments. The ATLAS plan is in progress and should be completed in June.

These are just the highlights of our activities. You can catch up on the whole series of outreach efforts at the next ATLAS overview week.
Michael Barnett
ATLAS Outreach

ATLAS (LHC,CERN) e-News: Category 1, on shift


SOURCE

To get any physics out of ATLAS, we must manage the data that will course through its cables, from the shifts in the control room to data distribution and software. Christophe Clement, Run Coordinator of ATLAS, describes this important work as less visible since it doesn't directly result in papers. "And there's a lot of it," he adds. "Nevertheless, this is the work that really makes you feel you are carrying out an experiment which has to do with reality."

Control room tasks make up only about 13 per cent of the operation activities, according to Steinar Stapnes, who deals with Operation Task overall planning in ATLAS almost daily, yet they are essential. "Any failure in coverage can have bad consequences, perhaps for hardware and certainly for data taking," he says.

Each institute needs to take its turn on shift; this critical work cannot be compensated for with other contributions that are easier to accomplish remotely or require less diligent attention. For this reason, the Operation Task Planning group has split the operation tasks into two categories: 1 and 2.

Category 1 tasks are the real-time operation and monitoring of detector performance, and the first line of defence when problems arise, carried out by shifters in the control room and the experts who are called in at any time of the day or night, should something go wrong. "We make sure that these very important tasks are well-covered. Everybody should feel responsible for them," says Steinar.

He also emphasizes the tradition in particle physics of making sure that graduate students and post-docs get time in the driver's seat. "For most young people, it's incredibly interesting, educational, and rewarding to get the experience of being part of the team that operates the detector," says Steinar.

Anything beyond the shifts will be Category 2.
This includes data acquisition and core software development and maintenance, databases, calibrations, managing data distribution through the Grid, recalibration of data as the detectors are better understood, and software tuning. Category 2 also comprises other tasks associated with processing the data for analysis and those related to longer-term hardware and software maintenance at Point 1.

Along with the special designation for shifts and on-call time, the scheduling system has changed. Run Coordination wants to foster a team spirit among the shifters, bringing groups together multiple times over the course of a week. Christophe explained that in this team-based system, shifters "get to know the other crew members better, make new contacts, and become more confident with the operation of their sub-detector. Basically work more as a crew." "It's not something new in some sense; other experiments have done similar things," says Steinar, "but it is different with ATLAS because of a larger crew and a large collaboration."

Since collaborators may only work shifts for a maximum of six consecutive days, Run Coordination tried to make a schedule with eight-day blocks, each shifter taking one day off during the block. However, this was in the end deemed too rigid, both for people travelling to CERN to do shifts and for CERN residents. This resulted in a spontaneously generated version with three- and four-day blocks. An unintended consequence was that visiting physicists were inclined to take two blocks in succession, resulting in seven consecutive shifts. CERN safety regulations must be respected, so Run Coordination adjusted the system to allow shifters to choose two consecutive three-day shifts.

This system is also designed to respect the experts who are on-call. In this pattern, those who work over the weekend have at least a Friday's worth of experience. This way, the teams are more likely to be able to handle problems without calling in an expert. Christophe notes that on the first day, shifters tend to do some re-learning, but during the rest of a block, "you've done this yesterday, and you know what the problem is and how to fix it."

Also, those who take the night shift must have recent experience on a day or evening shift, to avoid exhausting the on-call experts with midnight questions and visits to the Control Room. Between the category definitions and the block scheduling for shifts, the running of the ATLAS detector should be smooth and effective, with each institution carrying its weight on the front lines.

The system starts up in Week 26 (28 June to 4 July). For more information or to book a shift, check the webpage.
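The scheduling constraints quoted above (blocks of three or four days, and the safety limit of at most six consecutive shift days) are simple enough to capture in a short validity check. This is a sketch under those stated rules; the helper names are our own invention, not part of any ATLAS tool:

```python
def consecutive_runs(days):
    """Given a sorted list of shift-day numbers, return the lengths of
    each run of consecutive days."""
    runs, run = [], 1
    for prev, cur in zip(days, days[1:]):
        if cur == prev + 1:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    return runs

def schedule_ok(days, max_consecutive=6):
    """Safety rule from the article: at most six consecutive shift days."""
    return all(r <= max_consecutive for r in consecutive_runs(sorted(days)))

# Two back-to-back three-day blocks (six days) are allowed...
assert schedule_ok([1, 2, 3, 4, 5, 6])
# ...but stacking blocks into seven consecutive days is not.
assert not schedule_ok([1, 2, 3, 4, 5, 6, 7])
```

With this check, two consecutive three-day blocks pass, while the seven-day pattern that prompted Run Coordination's adjustment is flagged.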

Katie McAlpine
ATLAS e-News

ATLAS (LHC,CERN) e-News: Cosmic shakedown


For the last seven weeks, ATLAS has been going through the motions, practicing and problem-solving in preparation for beam. Like a choir rehearsing ahead of a concert, the first five weeks – the so-called 'slice weeks' – each focused on running different combinations of parts of the detector, to inspect how each performed and cooperated.

Week 1, commencing April 13th, was all about the SCT, Pixels, and Beam Condition Monitor.
Week 2 dealt with the Tile and Liquid Argon calorimeters, the L1 Calo trigger system, and some High Level Trigger (HLT) algorithms specific to the calorimeters. The third week concentrated on the suite of muon sub-systems. Week 4 took things up a notch, combining all of the above and focusing on testing HLT algorithms as much as possible with cosmics. Finally, week 5 was the first time that the forward detectors – LUCID and the Zero Degree Calorimeter – had been run together and integrated into the ATLAS data stream. Look out for articles reporting in more detail on these final two weeks in the next issue of e-News (June 15th).

Due to ongoing work on the cooling for the silicon detectors, the SCT and Pixel detectors could not be switched on, and so week 1 involved only the off-detector part of the Pixel and SCT electronics, with Monte Carlo data being 'plugged' into their readout devices. The rest of the slice weeks were full cosmic runs.

The muon detectors with the largest surface areas can detect 1000 cosmic particles per second, but in LHC conditions, up to 100,000 'interesting' events will be sent to the HLT per second. To simulate the stress on the HLT, 'fake' random triggers were layered on top of the cosmic ones. “We’ve tested so far up to 80 kilohertz. The bulk of this is fake triggers, with about 1 kilohertz of real cosmic muons,” says Run Coordinator Christophe Clement.
“The HLT can filter out the fake triggers, run algorithms on the real cosmic ones, and then write maybe 200 interesting ones per second to Tier 0.”

According to Christophe, the slice weeks have been pretty successful, particularly considering how much the landscape has changed since the detectors were last run together in Autumn 2008: the Detector Control System (DCS) which monitors the hardware has been upgraded, bits of the detectors have been replaced and repaired, the online software and HLT software have both been upgraded, and the whole detector has been opened and is now almost closed. “I think we can say now that we have upgraded all software for Data Acquisition and Trigger as well as the Detector Control System, and we’re more or less at the same level that we were last year in terms of stability,” says Christophe, “and then we start to push more at the trigger rates.”

Stability tests, where the system is left to run unhampered for extended periods, were performed on the weekends of the slice weeks. “It’s like a test program for a plane,” explains Christophe. “They’re going to do all possible things: fly it into a storm, try to land it when it’s snowing. That’s what we do during the week. Then on the Friday evening, we say ‘OK, now we’ll just try to fly straight for the weekend, without touching anything, and see if it works for a long flight’.” Most of the sub-systems were able to run well for extended periods, although there were some unexpected instabilities at high rate. This week and next, experts from the calorimeters and L1 Calo will meet in Geneva to try to get to the bottom of those problems.

Weekend tests were also run using a simulated beam schedule, to give groups a better sense of how they will need to work during beam time – stopping and starting and reconfiguring between LHC fills.
“We were quite positively surprised at how well we were able to do this,” says Christophe, reporting crude data-taking efficiency calculations of 91 per cent during the long muon weekend. Ignoring a glitch on the general power grid one Sunday morning at 3 a.m., the figure for the calorimeter weekend would have been 97 per cent.

Since the slice weeks, more tests have been done with the muon system and the TRT, as ATLAS works towards a two-week combined magnet run, due to begin on June 22nd. “For these two weeks, we hope to run cosmics with less debugging,” says Christophe. “The weeks we had so far were really to try to address technical issues. Hopefully [the combined run] will be much smoother, and we can calculate our data-taking efficiency much better.”

The data taken so far will be analysed for weeks to come, and used to perfect the calibration, alignment, and synchronisation of ATLAS. Over 40 million cosmic events were triggered in the muon slice week alone, and there is now a big push to bring trigger timings for different parts of the detector and different types of triggers into alignment. When real collision particles start shooting through the different layers of the detector, all the electronics must be synchronised and shouting their findings in unison, a choir hitting its notes in time. “We’re fine-tuning this now,” says Christophe. “If we can get everything within 25 to 50 nanoseconds on the cosmics before the beam, it would be a great success. And then we can improve with the collisions.”
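The trigger figures quoted by Christophe imply a large online rejection factor, which can be sanity-checked in a few lines (rates taken from the article; treating them as steady averages is an assumption made here for illustration):

```python
# Trigger rates quoted in the article (events per second).
input_rate = 80_000   # total tested rate, mostly "fake" random triggers
cosmic_rate = 1_000   # real cosmic muons within that total
output_rate = 200     # "interesting" events written to Tier 0

# The High Level Trigger must reject all but one in ~400 incoming events,
rejection = input_rate / output_rate
# and only about 1.25% of its input is real cosmic data.
cosmic_fraction = cosmic_rate / input_rate

assert rejection == 400.0
assert cosmic_fraction == 0.0125
```

The arithmetic underlines why the fake triggers were layered on: the real cosmic rate alone is nowhere near the stress level the HLT will face with beam.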
Ceri Perkins
ATLAS e-News

The element 112, discovered at the GSI (Centre for Heavy Ion Research in Darmstadt)


ScienceDaily (June 12, 2009) — The element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Centre for Heavy Ion Research) in Darmstadt, has been officially recognized as a new element by the International Union of Pure and Applied Chemistry (IUPAC). IUPAC confirmed the recognition of element 112 in an official letter to the head of the discovering team, Professor Sigurd Hofmann. The letter furthermore asks the discoverers to propose a name for the new element.
Their suggestion will be submitted within the next few weeks. In about six months, after the proposed name has been thoroughly assessed by IUPAC, the element will receive its official name. The new element is approximately 277 times heavier than hydrogen, making it the heaviest element in the periodic table.
“We are delighted that now the sixth element – and thus all of the elements discovered at GSI during the past 30 years – has been officially recognized. During the next few weeks, the scientists of the discovering team will deliberate on a name for the new element,” says Sigurd Hofmann. Twenty-one scientists from Germany, Finland, Russia and Slovakia were involved in the experiments surrounding the discovery of the new element 112.
As early as 1996, Professor Sigurd Hofmann’s international team created the first atom of element 112 with the accelerator at GSI. In 2002, they were able to produce another atom. Subsequent accelerator experiments at the Japanese RIKEN accelerator facility produced more atoms of element 112, unequivocally confirming GSI’s discovery.
To produce element 112 atoms, scientists accelerate charged zinc atoms – zinc ions for short – with the help of the 120 m long particle accelerator at GSI and “fire” them onto a lead target. The zinc and lead nuclei merge in a nuclear fusion to form the nucleus of the new element. Its so-called atomic number 112, hence the provisional name “element 112”, is the sum of the atomic numbers of the two initial elements: zinc has the atomic number 30 and lead the atomic number 82. An element’s atomic number indicates the number of protons in its nucleus. The neutrons that are also located in the nucleus have no effect on the classification of the element. It is the 112 electrons, which orbit the nucleus, that determine the new element’s chemical properties.
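The atomic-number bookkeeping in the paragraph above is easy to verify directly. The mass figure also works out if one assumes beams of the isotope 70Zn on a 208Pb target with one neutron evaporated after fusion, an assumption consistent with the "277 times heavier than hydrogen" quoted earlier but not stated in this article:

```python
# Proton counts (atomic numbers) of the fusion partners named in the article.
Z = {"Zn": 30, "Pb": 82}

# In fusion the proton counts simply add; neutrons do not change which
# element results, only which isotope of it.
element_number = Z["Zn"] + Z["Pb"]
assert element_number == 112  # the provisional "element 112"

# Mass bookkeeping, assuming 70Zn + 208Pb with one neutron evaporated
# from the compound nucleus (illustrative assumption):
mass_number = 70 + 208 - 1
assert mass_number == 277
```

The same addition rule explains the names of the earlier GSI elements: each sits in the periodic table at the slot fixed by the summed proton counts of its fusion partners.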
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, carrying the atomic numbers 107 to 112. GSI has already named its officially recognized elements 107 to 111: element 107 is called Bohrium, element 108 Hassium, element 109 Meitnerium, element 110 Darmstadtium, and element 111 is named Roentgenium.
Adapted from materials provided by GSI Helmholtzzentrum für Schwerionenforschung.

martedì 9 giugno 2009

Many Worlds Interpretation (Hugh Everett's Theory) MWI: Testability and Objections

Testability:
Despite the name "interpretation", the MWI is a variant of quantum theory that differs from others. Experimentally, the difference lies in its relation to collapse theories; there seems to be no experiment distinguishing the MWI from other no-collapse theories such as Bohmian mechanics or from other variants of the MWI.
The collapse leads to effects that are, in principle, observable; these effects do not exist if the MWI is the correct theory. To observe the collapse we would need a super-technology that allows "undoing" a quantum experiment, including a reversal of the detection process by macroscopic devices. See Lockwood 1989 (p. 223), Vaidman 1998 (p. 257), and other proposals in Deutsch 1986. These proposals are all for gedanken experiments that cannot be performed with current or any foreseeable future technology. Indeed, in these experiments an interference of different worlds has to be observed. Worlds are different when at least one macroscopic object is in macroscopically distinguishable states. Thus, what is needed is an interference experiment with a macroscopic body. Today there are interference experiments with larger and larger objects (e.g., fullerene molecules C60), but these objects are still not large enough to be considered "macroscopic". Such experiments can only refine the constraints on the boundary where the collapse might take place. A decisive experiment should involve the interference of states which differ in a macroscopic number of degrees of freedom: an impossible task for today's technology.
The collapse mechanism seems to be in contradiction with basic physical principles such as relativistic covariance, but nevertheless, some ingenious concrete proposals have been made (see Pearle 1986 and the entry on collapse theories). These proposals (and Weissman's 1999 non-linear MW idea) have additional observable effects, such as a tiny energy non-conservation, that were tested in several experiments. The effects were not found and some (but not all!) of these models have been ruled out.
In most no-collapse interpretations, the evolution of the quantum state of the Universe is the same. Still, one might imagine that there is an experiment distinguishing the MWI from another no-collapse interpretation based on a difference in the correspondence between the formalism and experience (the results of experiments).
An apparent candidate for such an experiment is a setup proposed in Englert et al. 1992 in which a Bohmian world is different from the worlds of the MWI (see also Aharonov and Vaidman 1996). In this example, the Bohmian trajectory of a particle in the past is contrary to the records of seemingly good measuring devices (such trajectories were named surrealistic). However, at present, there are no memory records that can determine unambiguously (without deduction from a particular theory) the particle trajectory in the past. Thus, this difference does not lead to an experimental way of distinguishing between the MWI and Bohmian mechanics. I believe that no other experiment can distinguish between the MWI and other no-collapse theories either, except for some perhaps exotic modifications, e.g., Bohmian mechanics with initial particle position distribution deviating from the quantum distribution. There are other opinions about the possibility of testing the MWI. It has frequently been claimed, e.g. by De Witt 1970, that the MWI is in principle indistinguishable from the ideal collapse theory. On the other hand, Plaga 1997 claims to have a realistic proposal for testing the MWI, and Page 2000 argues that certain cosmological observations might support the MWI.
Objections to the MWI:
Some of the objections to the MWI follow from misinterpretations due to the multitude of various MWIs. The terminology of the MWI can be confusing: "world" is "universe" in Deutsch 1996, while "universe" is "multiverse", etc. There are two very different approaches with the same name "The Many-Minds Interpretation (MMI)". The Albert and Loewer 1988 MMI mentioned above should not be confused with
Lockwood's 1996 MMI (which resembles the approach of Zeh 1981). The latter is much closer to the MWI as it is presented here, see Sec. 17 of Vaidman 1998. Further, the MWI in the Heisenberg representation (Deutsch 2001) differs significantly from the MWI presented in the Schrödinger representation (used here). The MWI presented here is very close to Everett's original proposal, but in the entry on Everett's relative state formulation of quantum mechanics, as well as in his book Barrett 1999, Barrett uses the name "MWI" for the splitting worlds view publicized by De Witt 1970. This approach has been justly criticized: it has both some kind of collapse (an irreversible splitting of worlds in a preferred basis) and the multitude of worlds. Now I consider the main objections in detail.
Ockham's Razor:
It seems that the majority of the opponents of the MWI reject it because, for them, introducing a very large number of worlds that we do not see is an extreme violation of Ockham's principle: "Entities are not to be multiplied beyond necessity". However, in judging physical theories one could reasonably argue that one should not multiply physical laws beyond necessity either (such a version of Ockham's Razor has been applied in the past), and in this respect the MWI is the most economical theory. Indeed, it has all the laws of the standard quantum theory, but without the collapse postulate, the most problematic of physical laws. The MWI is also more economical than Bohmian mechanics, which has in addition the ontology of the particle trajectories and the laws which give their evolution. Tipler 1986 (p. 208) has presented an effective analogy with the criticism of Copernican theory on the grounds of Ockham's razor.
One might also consider a possible philosophical advantage of the plurality of worlds in the MWI, similar to that claimed by realists about possible worlds, such as Lewis 1986 (see the discussion of the analogy between the MWI and Lewis's theory by Skyrms 1976). However, the analogy is not complete: Lewis' theory considers all logically possible worlds, many more than all the worlds incorporated in the quantum state of the Universe.

sabato 6 giugno 2009

Manipulating light on a chip for quantum technologies

SOURCE

An artist's impression of the on-chip quantum metrology experiment (making ultraprecise measurements on chip) Photo by Will Amery, University of Bristol.
(PhysOrg.com) -- A team of physicists and engineers at Bristol University has demonstrated exquisite control of single particles of light — photons — on a silicon chip to make a major advance towards long-sought-after quantum technologies, including super-powerful quantum computers and ultra-precise measurements.
The Bristol Centre for Quantum Photonics has demonstrated precise control of four photons using a microscopic metal electrode lithographically patterned onto a silicon chip.
The photons propagate in silica waveguides — much like in optical fibres — patterned on a silicon chip, and are manipulated with the electrode, resulting in a high-performance miniaturized device.
“We have been able to generate and manipulate entangled states of photons on a silicon chip,” said PhD student Jonathan Matthews, who together with Alberto Politi performed the experiments. “These entangled states are responsible for famously ‘weird’ behaviour arising in quantum mechanics, but are also at the heart of powerful quantum technologies.”
“This precise manipulation is a very exciting development for fundamental science as well as for future quantum technologies,” said Prof Jeremy O’Brien, Director of the Centre for Quantum Photonics, who led the research.
The team reports its results in the latest issue of Nature Photonics [June 2009], a sister journal of the leading science journal Nature, and in a Postdeadline Paper at 'The International Quantum Electronics Conference (IQEC)' on June 4 in Baltimore, USA [IQEC Postdeadline Papers].
Quantum technologies with photons
Quantum technologies aim to exploit the unique properties of quantum mechanics, the physics theory that explains how the world works at microscopic scales.
For example, a quantum computer relies on the fact that quantum particles, such as photons, can exist in a “superposition” of two states at the same time — in stark contrast to the transistors in a PC, which can only be in the state “0” or “1”.
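The scaling behind this contrast can be sketched in a few lines of Python (an illustrative toy model, not tied to any specific hardware): n qubits prepared in superposition carry 2**n amplitudes at once, whereas a classical n-bit register holds just one value.

```python
import math
from itertools import product

def equal_superposition(n):
    """Return the 2**n amplitudes of n qubits, each in (|0> + |1>)/sqrt(2)."""
    amp = (1 / math.sqrt(2)) ** n  # every basis state gets the same amplitude
    return {"".join(bits): amp for bits in product("01", repeat=n)}

# Two qubits span four basis states |00>, |01>, |10>, |11> simultaneously,
# each with amplitude 0.5; the squared amplitudes (probabilities) sum to 1.
state = equal_superposition(2)
```

With n = 2 the dictionary holds four amplitudes; at n = 20 it would already hold over a million, which is the exponential headroom quantum algorithms try to exploit.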
Photons are an excellent choice for quantum technologies because they are relatively noise-free; information can be moved around at the speed of light; and manipulating single photons is easy.
Making two photons “talk” to each other to generate the all-important entangled states is much harder, but Professor O’Brien and his colleagues at the University of Queensland demonstrated this in a quantum logic gate back in 2003 [Nature 426, 264 (2003)].
Last year, the Centre for Quantum Photonics at Bristol showed how such interactions between photons could be realised on a silicon chip, pointing the way to advanced quantum technologies based on photons [Science 320, 646 (2008)].
Photons are also required to “talk” to each other to realise the ultra-precise measurements that harness the laws of quantum mechanics. In 2007 Professor O’Brien and his Japanese collaborators reported such a quantum metrology measurement with four photons [Science 316, 726 (2007)].
Manipulating photons on a silicon chip
“Despite these impressive advances, the ability to manipulate photons on a chip has been missing,” said Mr Politi. “For the last several years the Centre for Quantum Photonics has been working towards building fully functional quantum circuits on a chip to solve these problems,” added Prof O’Brien.
The team coupled photons into and out of the chip, fabricated at CIP Technologies, using optical fibres. Application of a voltage across the metal electrode changed the temperature of the silica waveguide directly beneath it, thereby changing the path that the photons travelled. By measuring the output of the device they confirmed high-performance manipulation of photons in the chip.
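The voltage-controlled routing described above can be pictured as a phase shift in one arm of a Mach-Zehnder interferometer. The sketch below assumes an idealized lossless, balanced interferometer; the actual voltage-to-phase calibration of the Bristol device is not given in the article.

```python
import math

def mzi_output_probability(phase):
    """Probability that a photon exits the 'bright' port of a balanced,
    lossless Mach-Zehnder interferometer, given the relative phase
    (in radians) imprinted on one arm by the heated waveguide."""
    return math.cos(phase / 2) ** 2

# Sweeping the phase (i.e. the electrode voltage) re-routes the photon:
p_zero = mzi_output_probability(0)            # photon always in the bright port
p_pi = mzi_output_probability(math.pi)        # photon switched to the other port
p_half = mzi_output_probability(math.pi / 2)  # photon split 50/50 between ports
```

Measuring the output statistics while sweeping the phase is, in essence, how the team confirmed high-performance manipulation of photons in the chip.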
The researchers proved that one of the strangest phenomena of the quantum world, namely “quantum entanglement”, was achieved on-chip with up to four photons. Quantum entanglement of two particles means that the state of either of the particles is not defined, but only their collective state, and results in an instantaneous linking of the particles.
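A minimal toy model of the two-photon case: in the Bell state (|00> + |11>)/sqrt(2), neither particle alone has a definite value, yet sampled joint outcomes always agree. This pure-Python sketch (hypothetical helper names, not the team's code) just applies the Born rule to the joint state.

```python
import math
import random

# Bell state (|00> + |11>)/sqrt(2): amplitudes over the two-photon basis.
bell = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}

def sample(state):
    """Draw one joint measurement outcome with Born-rule probabilities."""
    r, acc = random.random(), 0.0
    for outcome, amp in state.items():
        acc += amp * amp
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding at the top end

# Each photon alone is a 50/50 coin flip, but the pair is perfectly correlated:
results = [sample(bell) for _ in range(1000)]
# only "00" and "11" ever occur -- never "01" or "10"
```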
This on-chip entanglement has important applications in quantum metrology and the team demonstrated an ultra-precise measurement in this way.
“As well as quantum computing and quantum metrology, on-chip photonic quantum circuits could have important applications in quantum communication, since they can be easily integrated with optical fibres to send photons between remote locations,” said Alberto Politi.
“The really exciting thing about this result is that it will enable the development of reconfigurable and adaptive quantum circuits for photons. This opens up all kinds of possibilities,” said Prof O’Brien.
A commentary on the work that appeared in the same issue [Nature Photonics 3, 317 (2009)] described it as “an important step in the quest for quantum computation” and concluded: “The most exciting thing about this work is its potential for scalability. The small size of the [device] means that far greater complexity is possible than with large-scale optics.”
The other co-author of the paper is Dr André Stefanov, formerly a Research fellow in the Centre for Quantum Photonics, and now at the Federal Office of Metrology METAS, Switzerland.
Provided by University of Bristol (news : web)

'Colossal' Magnetic Effect Under Pressure


The structure models for F-type and A-type magnetic ordering in manganite in response to pressure. The arrows inside orbitals indicate the spin direction of d electrons.
(PhysOrg.com) -- Millions of people today carry around pocket-sized music players capable of holding thousands of songs, thanks to the discovery 20 years ago of a phenomenon known as the “giant magnetoresistance effect,” which made it possible to pack more data onto smaller and smaller hard drives. Now scientists are on the trail of another phenomenon, called the “colossal magnetoresistance effect” (CMR), which is up to a thousand times more powerful and could trigger another revolution in computing technology.
Understanding, and ultimately controlling, this effect and the intricate coupling between electricity and magnetism in these materials remains a challenge, however, because of competing interactions in manganites, the materials in which CMR was discovered. In the June 12, 2009, issue of the journal Physical Review Letters, a team of researchers reports new progress in using high pressure techniques to unravel the subtleties of this coupling.
To study the magnetic properties of manganites, a form of manganese oxide, the research team, led by Yang Ding of the Carnegie Institution’s High Pressure Synergetic Center (HPSync), applied techniques called x-ray magnetic circular dichroism (XMCD) and angular-dispersive diffraction at the Advanced Photon Source (APS) of Argonne National Laboratory in Illinois. High pressure XMCD is a newly developed technique that uses high-brilliance circularly polarized x-rays to probe the magnetic state of a material under pressures of many hundreds of thousands of atmospheres inside a diamond anvil cell.
The discovery of CMR in manganite compounds has already made manganites invaluable components in technological applications. An example is magnetic tunneling junctions in soon-to-be marketed magnetic random access memory (MRAM), where the tunneling of electrical current between two thin layers of manganite material separated by an electrical insulator depends on the relative orientation of magnetization in the manganite layers. Unlike conventional RAM, MRAM could yield instant-on computers. However, no current theories can fully explain the rich physics, including CMR effects, seen in manganites.
“The challenge is that there are competing interactions in manganites among the electrons that determine magnetic properties,” said Ding. “And the properties are also affected by external stimuli, such as, temperature, pressure, magnetic field, and chemical doping.”
“Pressure has a unique ability to tune the electron interactions in a clean and theoretically transparent manner,” he added. “It is a direct and effective means for manipulating the behavior of electrons and could provide valuable information on the magnetic and electronic properties of manganite systems. But of all the effects, pressure effects have been the least explored.”
The researchers found that when a manganite was subjected to conditions above 230,000 times atmospheric pressure it underwent a transition in which its magnetic ordering changed from a ferromagnetic type (electron spins aligned) to an antiferromagnetic type (electron spins opposed). This transition was accompanied by a non-uniform structural distortion called the Jahn-Teller effect.
“It is quite interesting to observe that uniform compression leads to a non-uniform structural change in a manganite, which was not predicted by theory,” said Ding. “Working with Michel van Veenendaal’s theoretical group at APS, we found that the predominant effect of pressure on this material is to increase the strength of an interaction known as superexchange relative to another known as the double exchange interaction. A consequence of this is that the overall ferromagnetic interactions in the system occur in a plane (two dimensions) rather than in three dimensions, which produces a non-uniform redistribution of electrons. This leads to the structural distortion.”
Another intriguing response of manganite to high pressure revealed by the experiments is that the magnetic transition did not occur throughout the sample at the same time. Instead, it spread incrementally.
“The results imply that even at ambient conditions, the manganite might already have two separate magnetic phases at the nanometer scale, with pressure favoring the growth of the antiferro-magnetic phase at the expense of the ferromagnetic phase,” said coauthor Daniel Haskel, a physicist at Argonne’s APS. “Manipulating phase separation at the nanoscale level is at the very core of nanotechnology and manganites provide an excellent playground to pursue this objective”.
“This work not only displays another interesting emergent phenomenon arising from the interplay between charge, spin, orbital and lattice in a strongly correlated electron system,” commented coauthor Dr. Ho-kwang Mao of Carnegie’s Geophysical Laboratory, Director of HPSync, “but it also manifests the role of pressure in magnetism studies of dense matter.”
More information: Pressure-induced magnetic transition in manganite (La0.75Ca0.25MnO3), Yang Ding, Daniel Haskel, Yuan-Chieh Tseng, Eiji Kaneshita, Michel van Veenendaal, John Mitchell, Stanislav V. Sinogeikin, Vitali Prakapenka, and Ho-kwang Mao, Physical Review Letters, June 2009.
Provided by Carnegie Institution

venerdì 5 giugno 2009

Lasers Are Making Solar Cells Competitive

SOURCE

ScienceDaily (June 4, 2009) — Solar electricity has a bright future: It is renewable and available in unlimited quantities, and it does not produce any gases detrimental to the climate. Its only drawback right now is the price: the electric power currently being produced by solar cells in northern Europe must be subsidized if it is to compete against the household electricity generated by traditional power plants. At "Laser 2009" in Munich, June 15 to 18, Fraunhofer researchers will be demonstrating how laser technology can contribute to optimizing the manufacturing costs and efficiency of solar cells.
Cell phones, computers, MP3 players, kitchen stoves, and irons all have one thing in common: They need electricity. And in the future, more and more cars will also be fuelled by electric power. If the latest forecast from the World Energy Council WEC can be believed, global electricity requirements will double in the next 40 years. At the same time, prices for the dwindling resources of petroleum and natural gas are climbing.
“Rising energy prices are making alternative energy sources increasingly cost-effective. Sometime in the coming years, renewable energy sources, such as solar energy, will be competitive, even without subsidization,” explains Dr. Arnold Gillner, head of the microtechnology department at the Fraunhofer Institute for Laser Technology in Aachen, Germany. “Experts predict that grid parity will be achieved in a few years. This means that the costs and opportunities in the grid will be equal for solar electricity and conventionally generated household electricity.” Together with his team at the Fraunhofer Institute for Laser Technology ILT in Aachen, this researcher is developing technologies now that will allow faster, better, and cheaper production of solar cells in the future. “Lasers work quickly, precisely, and without contact. In other words, they are an ideal tool for manufacturing fragile solar cells. In fact, lasers are already being used in production today, but there is still considerable room for process optimization.” In addition to gradually improving the manufacturing technology, the physicists and engineers in Aachen are working with solar cell developers - for example, at the Fraunhofer Institute for Solar Energy Systems ISE in Freiburg - on new engineering and design alternatives.
New production technologies allow new design alternatives
At “Laser 2009” in Munich, the researchers will be demonstrating how lasers can drill holes into silicon cells at breathtaking speed: The ILT laser system drills more than 3,000 holes within one second. Because it is not possible to move the laser source at this speed, the experts have developed optimized manufacturing systems which guide and focus the light beam at the required points. “We are currently experimenting with various laser sources and optical systems,” Gillner explains. “Our goal is to increase the performance to 10,000 holes a second. This is the speed that must be reached in order to drill 10,000 to 20,000 holes into a wafer within the cycle time of the production machines.”
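A quick sanity check of the quoted figures (illustrative arithmetic only): at the targeted rate, a wafer needing the full 20,000 holes is finished in two seconds, while today's demonstrated rate would take several times longer.

```python
# Throughput implied by the numbers quoted above (illustrative only).
holes_per_wafer = 20_000  # upper end of the quoted 10,000-20,000 range
target_rate = 10_000      # holes per second the ILT team is aiming for
current_rate = 3_000      # holes per second demonstrated today

time_at_target = holes_per_wafer / target_rate  # seconds per wafer at the goal
time_today = holes_per_wafer / current_rate     # seconds per wafer right now
```

At the target rate the drilling step fits a one-to-two-second machine cycle; at the current rate it would dominate the cycle time, which is why the speed-up matters.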
The tiny holes in the wafer - their diameter is only 50 micrometers - open up undreamt-of possibilities for the solar cell developers. “Previously, the electrical contacts were arranged on the top of the cells. The holes make it possible to move the contacts to the back, with the advantage that the electrodes, which currently act as a dark grid to absorb light, disappear. And so the energy yield increases. The goal is a degree of efficiency of 20 percent in industrially produced emitter wrap-through (EWT) cells, with a yield of one-third more than classic silicon cells,” Gillner explains. The design principle itself remains unchanged: In the semi-conductor layer, light particles, or photons, produce negative electrons and positive holes, each of which then wanders to the oppositely poled electrodes. The contacts for anodes and cathodes in the EWT cells are all on the back, there is no shading caused by the electrodes, and the degree of efficiency increases. With this technique, it may one day be possible to use unpurified “dirty” silicon to manufacture solar cells that have poorer electrical properties, but that are cheaper.
Drilling holes into silicon cells is only one of many laser applications in solar cell manufacturing. In the EU project Solasys – Next Generation Solar Cell and Module Laser Processing Systems – an international research team is currently developing new technologies that will allow production to be optimized in the future. ILT in Aachen is coordinating the six million euro project. “We are working on new methods that make the doping of semiconductors, the drilling and the surface structuring of silicon, the edge isolation of the cells, and the soldering of the modules more economical,” project coordinator Gillner explains. For example, “selective laser soldering” makes it possible to reduce rejection rates and improve the quality of the contacting, and so cut manufacturing costs. Until now, the electrodes have been mechanically pressed onto the cells and then heated in an oven. “But silicon cells often break during this process,” Gillner notes. “Breakage is a primary cost factor in production.” With “selective laser soldering,” by contrast, the contacts are pressed onto the cells with compressed air and then soldered with the laser. The mechanical stress approaches zero and the temperature can be precisely regulated. The result: Optimal contacts and almost no rejects.
Laser technology means more efficient thin film cells
Laser technology is also helping to optimize the manufacture of thin film solar cells. The extremely thin film packages made of semiconducting oxide, amorphous silicon, and metal that are deposited onto the glass panels still have a market share of only ten percent. But as Gillner points out, “This could be higher, because thin film solar cells can be used anywhere that non-transparent glass panels can be mounted, for example, on house facades or sound-insulating walls. But the degrees of efficiency are comparatively low at five to eight percent, and the production costs are comparatively high.” The laser researchers are working to improve these costs. Until now, the manufacturers have used mechanical methods or solid-state lasers in the nanosecond range in order to structure the active layers on the glass panels. In order to produce electric connections between the semiconductor and the metal, grooves only a few micrometers wide must be created. At the Fraunhofer-Gesellschaft booth at “Laser 2009” the ILT researchers will be demonstrating a 400-watt ultrashort pulse laser that processes thin-film solar modules ten times faster than conventional diode-pumped solid-state lasers. “The ultrashort pulse laser is an ideal tool for ablating thin layers: It works very precisely, does not heat the material and, working with a pulse frequency of 80 MHz, can process a 2-by-3 meter glass panel in under two minutes,” Gillner reports. “The technology is still very new, and high-performance scanning systems and optical systems adapted to the process must be developed first. In the medium term, however, this technology will be able to reduce production costs.”
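A back-of-the-envelope check of the quoted panel figures (sketch only; the scan strategy and groove pitch are not given in the article):

```python
# Processing rate implied by "a 2-by-3 meter glass panel in under two minutes".
panel_area_m2 = 2 * 3    # square meters per panel
process_time_s = 2 * 60  # upper bound: two minutes, in seconds

rate_m2_per_s = panel_area_m2 / process_time_s  # ultrashort-pulse laser rate
conventional_rate = rate_m2_per_s / 10          # "ten times" slower solid-state laser
```

Even at the conservative two-minute bound, the new source structures at least 0.05 square meters of active layer per second, an order of magnitude beyond the nanosecond solid-state lasers it would replace.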
The rise of laser technology in solar technology is just taking off, and it still has a long way to go. “Lasers simplify and optimize the manufacture of classic silicon and thin-film cells, and they allow the development of new design alternatives,” Gillner continues. “And so laser technology is making an important contribution towards allowing renewable energy sources to penetrate further into the energy market.”
Adapted from materials provided by Fraunhofer-Gesellschaft.