Physics, Chemistry & Nanotechnologies News & Press - A Blog by F. Intilla (WWW.OLOSCIENCE.COM)<br /><br />2015-10-03 - Research team claims to have directly sampled electric-field vacuum fluctuations. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-PXXlYUmfrug/VhAko0-OQxI/AAAAAAAADFM/rSoJvFG0c3c/s1600/physicistssu.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="192" src="http://3.bp.blogspot.com/-PXXlYUmfrug/VhAko0-OQxI/AAAAAAAADFM/rSoJvFG0c3c/s320/physicistssu.jpg" width="320" /></a></div><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-10-team-sampled-electric-field-vacuum-fluctuations.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">---------------------</div>(Phys.org)—A team of researchers working at the University of Konstanz in Germany claims to have directly sampled electric-field vacuum fluctuations, which would be a first. In their paper published in the journal <i>Science</i>, the team describes their experiment and the part of it that they say constitutes the first direct measurement of vacuum fluctuations. <br />Theoretical physicists believe that empty space is not empty at all; instead, it is filled with quantum particles that pop in and out of existence, creating what are known as electric-field vacuum fluctuations. 
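The size of these fluctuations depends on how finely space and time are sampled. The abstract below states that the measured field variance is inversely proportional to the sampled four-dimensional space-time volume; a dimensional-analysis sketch of that statement (prefactors omitted; this is not the paper's exact expression) is:

```latex
\langle \hat{E}^{2} \rangle_{\mathrm{vac}} \;\propto\; \frac{\hbar c}{\varepsilon_{0}\, V_{4}},
\qquad V_{4} = \Delta V \cdot c\,\Delta t
```

Here \(\Delta V\) is the spatial volume set by the laser focus and \(\Delta t\) the pulse duration, so tighter focusing and shorter pulses make the vacuum contribution larger relative to other noise sources.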
Prior research has measured such fluctuations <i>indirectly</i>, but no one, until now, has claimed to measure them directly.<br />The experiment involved using a long pulse of light to study a shorter pulse of light by firing both through a crystal at the same time. The long pulse was horizontally polarized, the shorter pulse vertically polarized. In such an arrangement, the optical properties of the crystal depend on the electric field inside it, which in turn changes the polarization of the beams that pass through it and emerge on the other side. The researchers adjusted the relative timing of the two pulses to map out fluctuations in the electric field. To rule out noise contributed by the pulses themselves, they also ran the experiment with the probe pulse alone. When this was repeated many times, the polarization still varied slightly, which they attributed to vacuum fluctuations. To check this interpretation, the team varied the width and duration of the pulses while holding the number of photons in a given beam constant. Shot noise should then have stayed constant as the pulse grew in size, but it did not, an excess the team attributes to electric-field vacuum fluctuations.<br />Not everyone is convinced: on reading the paper, many in the field were quick to point out that the variations could just as easily have come from something else. Clearly more work will have to be done before the team's claims are accepted by the physics community. <div class="news-relevant"><b>Explore further:</b> <a href="http://phys.org/news/2011-12-physicists-darkness-breakthrough-discovery.html" itemprop="relatedLink">Physicists’ ‘light from darkness’ breakthrough named a top 2011 discovery</a> </div><b>More information:</b> "Direct Sampling of Electric-Field Vacuum Fluctuations." <i>Science</i>. 
DOI: 10.1126/science.aac9788<br /><b>ABSTRACT</b> <br />The ground state of quantum systems is characterized by zero-point motion. Those vacuum fluctuations are generally deemed an elusive phenomenon that manifests itself only indirectly. Here, we report direct detection of the vacuum fluctuations of electromagnetic radiation in free space. The ground-state electric field variance is found to be inversely proportional to the four-dimensional space-time volume sampled electro-optically with tightly focused few-femtosecond laser pulses. Sub-cycle temporal readout and nonlinear coupling far from resonance provide signals from purely virtual photons without amplification. Our findings enable an extreme time-domain approach to quantum physics with nondestructive access to the quantum state of light. Operating at multi-terahertz frequencies, such techniques might also allow time-resolved studies of intrinsic fluctuations of elementary excitations in condensed matter.<br /><br />Posted by Fausto Intilla.<br /><br />2015-05-27 - Physicists solve quantum tunneling mystery. <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-R51K1aFmSA8/VWXfqgDuJ_I/AAAAAAAAC9g/R_QkGbcldgE/s1600/2-physicistsso.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="180" src="http://3.bp.blogspot.com/-R51K1aFmSA8/VWXfqgDuJ_I/AAAAAAAAC9g/R_QkGbcldgE/s320/2-physicistsso.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Professor Anatoli Kheifets' theory has ultrafast physics wrapped up. 
Credit: Stuart Hay, ANU</td></tr></tbody></table><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-05-physicists-quantum-tunneling-mystery.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">---------------------</div>An international team of scientists studying ultrafast physics have solved a mystery of quantum mechanics, and found that quantum tunneling is an instantaneous process.<br />The new theory could lead to faster and smaller <a class="textTag" href="http://phys.org/tags/electronic+components/" rel="tag">electronic components</a>, for which <a class="textTag" href="http://phys.org/tags/quantum+tunneling/" rel="tag">quantum tunneling</a> is a significant factor. It will also lead to a better understanding of diverse areas such as electron microscopy, nuclear fusion and DNA mutations.<br />"Timescales this short have never been explored before. It's an entirely new world," said one of the international team, Professor Anatoli Kheifets, from The Australian National University (ANU).<br />"We have modelled the most delicate processes of nature very accurately."<br />At very small scales <a class="textTag" href="http://phys.org/tags/quantum+physics/" rel="tag">quantum physics</a> shows that particles such as electrons have wave-like properties - their exact position is not well defined. This means they can occasionally sneak through apparently impenetrable barriers, a phenomenon called quantum tunneling.<br />Quantum tunneling plays a role in a number of phenomena, such as <a class="textTag" href="http://phys.org/tags/nuclear+fusion/" rel="tag">nuclear fusion</a> in the sun, scanning tunneling microscopy, and flash memory for computers. However, the leakage of particles also limits the miniaturisation of electronic components.<br />Professor Kheifets and Dr. 
Igor Ivanov, from the ANU Research School of Physics and Engineering, are members of a team which studied ultrafast experiments at the attosecond scale (10<sup>-18</sup> seconds), a field that has developed in the last 15 years.<br />Until their work, a number of attosecond phenomena could not be adequately explained, such as the time delay when a photon ionised an atom.<br />"At that timescale the time an electron takes to quantum tunnel out of an atom was thought to be significant. But the mathematics says the time during tunneling is imaginary - a complex number - which we realised meant it must be an instantaneous process," said Professor Kheifets.<br />"A very interesting paradox arises, because electron velocity during tunneling may become greater than the speed of light. However, this does not contradict the special theory of relativity, as the tunneling velocity is also imaginary," said Dr Ivanov, who recently took up a position at the Center for Relativistic Laser Science in Korea.<br />The team's calculations, which were made using the Raijin supercomputer, revealed that the delay in photoionisation originates not from quantum tunneling but from the electric field of the nucleus attracting the escaping electron.<br />The results give an accurate calibration for future attosecond-scale research, said Professor Kheifets.<br />"It's a good reference point for future experiments, such as studying proteins unfolding, or speeding up electrons in microchips," he said.<br /><div class="news-relevant"><b>Explore further:</b> <a href="http://phys.org/news/2014-06-long-range-tunneling-quantum-particles.html#inlRlv" itemprop="relatedLink">Long-range tunneling of quantum particles</a> </div><b>More information:</b> Interpreting attoclock measurements of tunnelling times, <i>Nature Physics</i> (2015) <a data-doi="1" href="http://dx.doi.org/10.1038/nphys3340" target="_blank">DOI: 10.1038/nphys3340</a><br /><br />Posted by Fausto 
Intilla.<br /><br />2015-05-27 - Physicists simulate charged Majorana particles for the first time. <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-DEHZZLUsKi0/VWXdXV8mrhI/AAAAAAAAC9E/Sa4COOglyyk/s1600/experimentsi.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="192" src="http://4.bp.blogspot.com/-DEHZZLUsKi0/VWXdXV8mrhI/AAAAAAAAC9E/Sa4COOglyyk/s320/experimentsi.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Alexander Szameit of the University of Jena (Germany) and his team developed a photonic set-up that can simulate non-physical processes in a laboratory. Credit: Jan-Peter Kasper/FSU </td></tr></tbody></table><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-05-physicists-simulate-majorana-particles.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">---------------------</div>Physicists at the University of Jena have simulated, for the first time, charged Majorana particles—elementary particles that are not supposed to exist. In the latest issue of the science journal <i>Optica</i> they explain their approach: Professor Dr. Alexander Szameit and his team developed a photonic set-up consisting of complex waveguide circuits engraved in a glass chip, which enables them to simulate charged Majorana particles and thus to conduct physical experiments with them.<br />Jena (Germany), March 1938: the Italian <a class="textTag" href="http://phys.org/tags/elementary+particle/" rel="tag">elementary particle</a> physicist Ettore Majorana boarded a mail ship in Naples, heading for Palermo. 
But he either never arrived there, or he left the city straight away: since that day there has been no trace of the exceptional scientist, and to this day his mysterious disappearance remains unresolved. Since then, Majorana, a pupil of the Nobel Prize winner Enrico Fermi, has more or less been forgotten. What the scientific world does remember, though, is a theory about nuclear forces that he developed, and a very particular elementary particle.<br />"This particle named after Majorana, the so-called Majoranon, has some amazing characteristics," says the physicist Professor Dr. Alexander Szameit of the Friedrich Schiller University Jena. "Characteristics that are not supposed to exist in our real world." Majorana <a class="textTag" href="http://phys.org/tags/particles/" rel="tag">particles</a> are, for instance, their own antiparticles: internally, they combine completely opposing characteristics - such as opposing charges and spins. If they were to exist, they would annihilate themselves immediately. "Therefore, Majoranons are of an entirely theoretical nature and cannot be measured in experiments."<br />Together with colleagues from Austria, India, and Singapore, Alexander Szameit and his team nevertheless succeeded in realizing the seemingly impossible. In <i>Optica</i> they explain their approach: Szameit and his team developed a photonic set-up that consists of complex waveguide circuits engraved in a <a class="textTag" href="http://phys.org/tags/glass+chip/" rel="tag">glass chip</a>, which enables them to simulate charged Majorana particles and thus allows them to conduct <a class="textTag" href="http://phys.org/tags/physical+experiments/" rel="tag">physical experiments</a>.<br />"At the same time we send two rays of light through parallel-running waveguide lattices, which carry the opposing characteristics separately," explains Dr. Robert Keil, the first author of the study. 
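The readout principle - two separately evolved component waves added coherently into one measurable light distribution - can be caricatured numerically. A toy sketch (not the paper's actual waveguide-lattice model; the Gaussian envelope and phase factors are invented for illustration):

```python
import numpy as np

# Toy model: two wave packets with opposite phase windings stand in for
# the two components with "opposing characteristics" (invented numbers).
x = np.linspace(-10, 10, 1001)
envelope = np.exp(-x**2 / 4)                 # common wave-packet envelope
psi_plus = envelope * np.exp(+1j * 2 * x)    # component carrying "charge +"
psi_minus = envelope * np.exp(-1j * 2 * x)   # component carrying "charge -"

# Coherent superposition: the measurable "Majoranon" light distribution.
majoranon = (psi_plus + psi_minus) / np.sqrt(2)
intensity = np.abs(majoranon) ** 2           # one "photograph" of the state
```

Either component alone gives a smooth intensity profile (peak 1 here); only the coherent sum shows interference fringes (peak 2), and a sequence of such snapshots is the "film" Keil describes.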
After evolution through the lattices, the two waves interfere and form an optical Majoranon, which can be measured as a light distribution. In this way the scientists create an image that captures the effect like a photograph - in this case, the state of a Majoranon at a defined moment in time. "With the help of many such single images the particles can be observed as in a film and their behaviour analyzed," says Keil.<br />This model allows the Jena scientists to enter completely unknown scientific territory, as Alexander Szameit stresses. "Now, it is possible for us to gain access to phenomena that so far have only been described in exotic theories." With the help of this system, one can conduct experiments in which conservation of charge - one of the pillars of modern physics - can easily be suspended. "Our results show that one can simulate non-physical processes in a laboratory and thus make practical use of exotic characteristics of particles that are impossible to observe in nature." Szameit foresees one particularly promising application of simulated Majoranons in a new generation of quantum computers. "With this approach, much higher computing capacities than are possible at the moment can be achieved."<br /><b>Explore further:</b> <a href="http://phys.org/news/2015-05-monopoly-aluminium-broken.html#inlRlv" itemprop="relatedLink">Quantum scientists break aluminium 'monopoly' (Update)</a><br /><b>More information:</b> Keil R. et al. Optical simulation of charge conservation violation and Majorana dynamics. <i>Optica</i>, Vol. 2, Issue 5, pp. 454-459 (2015), <a data-doi="1" href="http://dx.doi.org/10.1364/OPTICA.2.000454" target="_blank">DOI: 10.1364/OPTICA.2.000454</a><br /><br />Posted by Fausto Intilla.<br /><br />2015-05-27 - How spacetime is built by quantum entanglement. 
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-7DFNbHFo4uk/VWXbxnZuUHI/AAAAAAAAC84/nv5pFvX8QVo/s1600/howspacetime.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="191" src="http://4.bp.blogspot.com/-7DFNbHFo4uk/VWXbxnZuUHI/AAAAAAAAC84/nv5pFvX8QVo/s320/howspacetime.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The mathematical formula derived by Ooguri and his collaborators shows how local data in the extra dimensions of the gravitational theory, depicted by the red point, are expressed in terms of quantum entanglement, depicted by the blue domes. Credit: (c) 2015 Jennifer Lin et al. </td></tr></tbody></table><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-05-spacetime-built-quantum-entanglement.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">---------------------</div>A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. 
The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in <i>Physical Review Letters</i> as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields."<br />Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies <a class="textTag" href="http://phys.org/tags/general+relativity/" rel="tag">general relativity</a> and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales.<br />The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive.<br />Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute <a class="textTag" href="http://phys.org/tags/energy+density/" rel="tag">energy density</a>, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside of your body by looking at X-ray images on two-dimensional sheets. 
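The best-known concrete example of such a surface-to-volume dictionary is the Ryu-Takayanagi formula from the holographic literature this work builds on (background knowledge, not a result of the new paper): the entanglement entropy S of a region A of the surface theory equals the area of the minimal surface \(\gamma_A\) hanging into the volume, in units of Newton's constant:

```latex
S(A) \;=\; \frac{\operatorname{Area}(\gamma_A)}{4 G_N}
```

Relations of this kind are what let entanglement data on the two-dimensional surface encode geometric quantities, and here the energy density, in the emergent volume.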
This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators.<br />Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory. <br />"It was known that quantum entanglement is related to deep issues in the unification of general relativity and <a class="textTag" href="http://phys.org/tags/quantum+mechanics/" rel="tag">quantum mechanics</a>, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between <a class="textTag" href="http://phys.org/tags/quantum+entanglement/" rel="tag">quantum entanglement</a> and the microscopic structure of spacetime by explicit calculations. The interface between <a class="textTag" href="http://phys.org/tags/quantum+gravity/" rel="tag">quantum gravity</a> and information science is becoming increasingly important for both fields. 
I myself am collaborating with information scientists to pursue this line of research further."<br /><div class="news-relevant"><b>Explore further:</b> <a href="http://phys.org/news/2015-04-universe-hologram.html#inlRlv" itemprop="relatedLink">Is the universe a hologram?</a> </div><b>More information:</b> Locality of Gravitational Systems from Entanglement of Conformal Field Theories, <i>Physical Review Letters</i>, 2015.<br /><br />Posted by Fausto Intilla.<br /><br />2015-05-27 - Experiment confirms quantum theory weirdness. <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-0weNYG0_vAk/VWXZGKIzo0I/AAAAAAAAC8g/UuKrLFMXQzk/s1600/experimentco.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="180" src="http://1.bp.blogspot.com/-0weNYG0_vAk/VWXZGKIzo0I/AAAAAAAAC8g/UuKrLFMXQzk/s320/experimentco.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Associate Professor Andrew Truscott (L) with PhD student Roman Khakimov.</td></tr></tbody></table><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-05-quantum-theory-weirdness.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">---------------------</div>The bizarre nature of reality as laid out by quantum theory has survived another test, with scientists performing a famous experiment and proving that reality does not exist until it is measured.<br />Physicists at The Australian National University (ANU) have conducted John Wheeler's delayed-choice <a class="textTag" href="http://phys.org/tags/thought+experiment/" rel="tag">thought experiment</a>, which involves a 
moving object that is given the choice to act like a particle or a wave. Wheeler's experiment then asks - at which point does the object decide?<br />Common sense says the object is either wave-like or particle-like, independent of how we measure it. But <a class="textTag" href="http://phys.org/tags/quantum+physics/" rel="tag">quantum physics</a> predicts that whether you observe wave-like behavior (interference) or particle behavior (no interference) depends only on how the object is actually measured at the end of its journey. This is exactly what the ANU team found.<br />"It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it," said Associate Professor Andrew Truscott from the ANU Research School of Physics and Engineering.<br />Despite the apparent weirdness, the results confirm the validity of <a class="textTag" href="http://phys.org/tags/quantum+theory/" rel="tag">quantum theory</a>, which governs the world of the very small, and has enabled the development of many technologies such as LEDs, lasers and computer chips.<br />The ANU team not only succeeded in building the experiment, which seemed nearly impossible when it was proposed in 1978, but reversed Wheeler's original concept of light beams being bounced by mirrors, and instead used <a class="textTag" href="http://phys.org/tags/atoms/" rel="tag">atoms</a> scattered by laser light.<br />"Quantum physics' predictions about interference seem odd enough when applied to light, which seems more like a wave, but to have done the experiment with atoms, which are complicated things that have mass and interact with electric fields and so on, adds to the weirdness," said Roman Khakimov, PhD student at the Research School of Physics and Engineering.<br />Professor Truscott's team first trapped a collection of <a class="textTag" href="http://phys.org/tags/helium+atoms/" rel="tag">helium atoms</a> in a suspended state known as a Bose-Einstein condensate, and then 
ejected them until there was only a single atom left.<br />The single atom was then dropped through a pair of counter-propagating laser beams, which formed a grating pattern that acted as crossroads in the same way a solid grating would scatter light.<br />A second light grating to recombine the paths was randomly added, which led to constructive or <a class="textTag" href="http://phys.org/tags/destructive+interference/" rel="tag">destructive interference</a> as if the atom had travelled both paths. When the second light grating was not added, no interference was observed as if the atom chose only one path.<br />However, the random number determining whether the grating was added was only generated after the atom had passed through the crossroads.<br />If one chooses to believe that the atom really did take a particular path or paths then one has to accept that a future measurement is affecting the atom's past, said Truscott.<br />"The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behavior was brought into existence," he said.<br /><div class="news-relevant"><b>Explore further:</b> <a href="http://phys.org/news/2015-05-quantum-cats.html#inlRlv" itemprop="relatedLink">Squeezed quantum cats</a> </div><b>More information:</b> "Wheeler's delayed-choice gedanken experiment with a single atom" <i>Nature Physics</i> (2015) <a data-doi="1" href="http://dx.doi.org/10.1038/nphys3343" target="_blank">DOI: 10.1038/nphys3343</a>.<br /><br />Posted by Fausto Intilla.<br /><br />2015-05-27 - Quantum computer emulated by a classical system. 
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-oIZjEASM180/VWXV103KkpI/AAAAAAAAC8U/jldczWp-aQo/s1600/3-quantumcompu.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="192" src="http://1.bp.blogspot.com/-oIZjEASM180/VWXV103KkpI/AAAAAAAAC8U/jldczWp-aQo/s320/3-quantumcompu.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Drs. Granville Ott (left) and Brian La Cour (center) with student Michael Starkey (right) beside their prototype quantum emulation device. Credit: Applied Research Laboratories, The University of Texas at Austin </td></tr></tbody></table><div style="text-align: center;">Source: <a href="http://phys.org/news/2015-05-quantum-emulated-classical.html"><span style="color: yellow;">Phys.org</span></a></div><div style="text-align: center;">--------------------</div>Quantum computers are inherently different from their classical counterparts because they involve quantum phenomena, such as superposition and entanglement, which do not exist in classical digital computers. But in a new paper, physicists have shown that a classical analog computer can be used to emulate a quantum computer, along with quantum superposition and entanglement, with the result that the fully classical system behaves like a true quantum computer.<br />Physicist Brian La Cour and electrical engineer Granville Ott at Applied Research Laboratories, The University of Texas at Austin (ARL:UT), have published a paper on the classical emulation of a quantum computer in a recent issue of the <i>New Journal of Physics</i>. 
Besides having fundamental interest, using classical systems to emulate quantum computers could have practical advantages, since such quantum emulation devices would be easier to build and more robust to decoherence compared with true quantum computers.<br />"We hope that this work removes some of the mystery and 'weirdness' associated with quantum computing by providing a concrete, classical analog," La Cour told <i>Phys.org</i>. "The insights gained should help develop exciting new technology in both classical analog computing and true quantum computing."<br />As La Cour and Ott explain, quantum computers have been <i>simulated</i> in the past using software on a classical computer, but these simulations are merely numerical representations of the quantum computer's operations. In contrast, <i>emulating</i> a quantum computer involves physically representing the qubit structure and displaying actual quantum behavior. One key quantum behavior that can be emulated, but not simulated, is parallelism. Parallelism allows for multiple operations on the data to be performed simultaneously—a trait that arises from <a class="textTag" href="http://phys.org/tags/quantum+superposition/" rel="tag">quantum superposition</a> and entanglement, and enables quantum computers to operate at very fast speeds.<br />To emulate a quantum computer, the physicists' approach uses electronic signals to represent qubits, in which a qubit's state is encoded in the amplitudes and frequencies of the signals in a complex mathematical way. Although the scientists use electronic signals, they explain that any kind of signal, such as acoustic and electromagnetic waves, would also work.<br />Even though this classical system emulates quantum phenomena and behaves like a quantum computer, the scientists emphasize that it is still considered to be classical and not quantum. "This is an important point," La Cour explained. 
"Superposition is a property of waves adding coherently, a phenomenon that is exhibited by many classical systems, including ours. <br />"Entanglement is a more subtle issue," he continued, describing entanglement as a "purely mathematical property of waves." <br />"Since our classical signals are described by the same mathematics as a true quantum system, they can exhibit these same properties."<br />He added that this kind of entanglement does not violate Bell's inequality, which is a widely used way to test for entanglement. <br />"Entanglement as a statistical phenomenon, as exhibited by such things as violations of Bell's inequality, is rather a different beast," La Cour explained. "We believe that, by adding an emulation of quantum noise to the signal, our device would be capable of exhibiting this type of entanglement as well, as described in <a href="http://link.springer.com/article/10.1007%2Fs10701-014-9829-6">another recent publication</a>."<br />In the current paper, La Cour and Ott describe how their system can be constructed using basic analog electronic components, and that the biggest challenge is to fit a large number of these components on a single integrated circuit in order to represent as many qubits as possible. Considering that today's best semiconductor technology can fit more than a billion transistors on an integrated circuit, the scientists estimate that this transistor density corresponds to about 30 qubits. An increase in transistor density of a factor of 1000, which according to Moore's law may be achieved in the next 20 to 30 years, would correspond to 40 qubits.<br />This 40-qubit limit is also enforced by a second, more fundamental restriction, which arises from the bandwidth of the signal. The scientists estimate that a signal duration of a reasonable 10 seconds can accommodate 40 qubits; increasing the duration to 10 hours would only increase this to 50 qubits, and a one-year duration would only accommodate 60 qubits. 
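The sequence of figures quoted here follows from exponential scaling: each added qubit doubles the dimension of the state space, and hence the signal duration required at fixed bandwidth. A sketch under that assumption, anchored to the article's 40-qubit, 10-second baseline (the function and baseline constants are illustrative, not from the paper):

```python
import math

def max_qubits(duration_s, base_qubits=40, base_duration_s=10.0):
    """Qubits that fit in a signal of the given duration, assuming each
    extra qubit doubles the required duration at fixed bandwidth."""
    return base_qubits + math.log2(duration_s / base_duration_s)

print(round(max_qubits(10 * 3600)))                 # 52 for 10 hours (article: ~50)
print(round(max_qubits(365.25 * 86400)))            # 62 for one year (article: ~60)
print(round(max_qubits(13.77e9 * 365.25 * 86400)))  # 95 for the age of the universe
```

The model lands within a couple of qubits of the article's 10-hour and one-year figures and reproduces the 95-qubit age-of-the-universe estimate, which is why only logarithmic gains are available from longer signals.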
Due to this scaling behavior, the physicists even calculated that a signal duration of the approximate age of the universe (13.77 billion years) could accommodate about 95 qubits, while that of the Planck time scale (10<sup>-43</sup> seconds) would correspond to 176 qubits.<br />Considering that thousands of qubits are needed for some complex <a class="textTag" href="http://phys.org/tags/quantum+computing/" rel="tag">quantum computing</a> tasks, such as certain encryption techniques, this scheme clearly faces some insurmountable limits. Nevertheless, the scientists note that 40 qubits is still sufficient for some low-qubit applications, such as quantum simulations. Because the quantum emulation device offers practical advantages over quantum computers and performance advantages over most classical computers, it could one day prove very useful. For now, the next step will be building the device.<br />"Efforts are currently underway to build a two-qubit prototype device capable of demonstrating <a class="textTag" href="http://phys.org/tags/entanglement/" rel="tag">entanglement</a>," La Cour said. "The enclosed photo [see above] shows the current quantum emulation device as a lovely assortment of breadboarded electronics put together by one of my students, Mr. Michael Starkey. We are hoping to get future funding to support the development of an actual chip. Leveraging quantum parallelism, we believe that a coprocessor with as few as 10 <a class="textTag" href="http://phys.org/tags/qubits/" rel="tag">qubits</a> could rival the performance of a modern Intel Core at certain computational tasks. Fault tolerance is another important issue that we are studying. 
Due to the similarities in mathematical structure, we believe the same quantum error correction algorithms used to make quantum computers fault tolerant could be used for our quantum emulation device as well."<br /><br /><b>A new approach to on-chip quantum computing.</b> (2014-10-02) <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-XMqA5muMubI/VC1s7V0a9vI/AAAAAAAACss/5UytPGGA2sg/s1600/7-anewapproach.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-XMqA5muMubI/VC1s7V0a9vI/AAAAAAAACss/5UytPGGA2sg/s1600/7-anewapproach.jpg" height="212" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Cross-polarized pump photons (red and blue) interact in the micro-ring resonator to directly generate cross-polarized correlated photons (green and yellow). Credit: Lucia Caspani </td></tr></tbody></table><div style="text-align: center;">Source:<b> <a href="http://phys.org/news/2014-10-approach-on-chip-quantum.html"><span style="color: yellow;">Phys.org</span></a></b></div><div style="text-align: center;">----------------------</div>Commercial devices capable of encrypting information in unbreakable codes exist today, thanks to recent quantum optics advances, especially the generation of photon pairs—tiny entangled particles of light. Now, an international team of researchers led by Professor Roberto Morandotti of INRS-EMT in Canada is introducing a new method to achieve a different type of photon pair source that fits into the tiny space of a computer chip. 
<br />The team's method, which generates "mixed up" photon pairs from devices that are less than one square millimeter in area, could form the core of the next generation of quantum <a class="textTag" href="http://phys.org/tags/optical+communication/" rel="tag">optical communication</a> and computing technology. The research will be presented at The Optical Society's (OSA) 98th Annual Meeting, Frontiers in Optics, being held Oct. 19-23 in Tucson, Arizona, USA.<br />One of the properties of light exploited within <a class="textTag" href="http://phys.org/tags/quantum+optics/" rel="tag">quantum optics</a> is "<a class="textTag" href="http://phys.org/tags/photon+polarization/" rel="tag">photon polarization</a>," which is essentially the direction in which the electric field associated with the photon oscillates. The research team set out to find a way to directly "mix up," or cross-polarize, the photons via a nonlinear optical process on a chip.<br />"While several efforts have been devoted to develop on-chip sources of polarization-entangled photons, the process typically used to generate these photons only allows the generation of photons with the same polarization as the laser beam used to pump the device—either both horizontal or vertical—after which entanglement can be achieved by accurately mixing these states. Now, we have found a way to directly generate cross-polarized photon pairs," says Lucia Caspani, a postdoctoral fellow at INRS-EMT and co-author of the Frontiers in Optics paper.<br />To generate the cross-polarized photons, Caspani and colleagues used two different laser beams at different wavelengths—one vertically polarized and another horizontally polarized. 
The approach, however, came with a potential pitfall: the classical process between the two pump beams could destroy the <a class="textTag" href="http://phys.org/tags/photons/" rel="tag">photons</a>' fragile quantum state.<br />To address this challenge, the team, which also includes researchers from RMIT University in Australia and City University of Hong Kong, pioneered a new approach based on a micro-ring resonator—a tiny optical cavity with a diameter on the order of tens to hundreds of micrometers—that operates in such a way that energy conservation constraints suppress classical effects while amplifying quantum processes.<br />While a similar suppression of classical effects has been observed in gas vapors and complex micro-structured fibers, this is the first time it has been reported on a chip, thus opening a clear route for building scalable integrated devices.<br />"Our approach opens the door to directly mixing different polarizations on a chip," Caspani points out. "At very low power, our device directly generates photon pairs with orthogonal polarizations, which can be exploited for quantum communication and computing protocols."<br />The fabrication process of the chip is also compatible with that currently used for electronic chips. "It enables a future coexistence of our device with standard integrated circuits," says Caspani, which is a fundamental requirement for the widespread adoption of optical quantum technologies.<br /><br /><section><b>Explore further:</b> <a href="http://phys.org/news/2014-09-charm-nist-detectors-reveal-entangled.html#inlRlv" itemprop="relatedLink">Three's a charm: NIST detectors reveal entangled photon triplets</a> </section><b>More information:</b> Presentation FTu2A.2, "Direct Generation of Orthogonally Polarized Photon Pairs via Spontaneous Non-Degenerate FWM on a Chip," takes place Tuesday, Oct. 21 at 11 a.m. MST at the Arizona Ballroom, Salon 8 at the JW Marriott Tucson Starr Pass Resort in Tucson. 
<a href="http://www.frontiersinoptics.org/">www.frontiersinoptics.org</a><br /><footer class="post-floor clearfix"><div class="post-copyright"><b>Provided by</b> <a class="textTag" href="http://phys.org/partners/optical-society-of-america/" rel="news">Optical Society of America</a> </div></footer><br /><b>The future of information search!</b> (2014-09-12)<div class="separator" style="clear: both; text-align: center;"><img border="0" src="http://1.bp.blogspot.com/-TMMCsJrqiPw/VBPXkggQaXI/AAAAAAAACmM/KNYuMRXxH8c/s1600/Logo%2BEMN.png" height="286" width="320" /></div><div class="Predefinito" style="text-align: justify;"><span lang="IT" style="font-size: 12.0pt; line-height: 115%;"><br /></span></div><div class="Predefinito" style="text-align: justify;"><span lang="IT" style="font-size: 12.0pt; line-height: 115%;"><a href="http://www.earthmapnews.com/"><span style="color: yellow;">Earthmapnews.com</span></a> is the first and only website in the world that allows you to find all daily information regarding your country (State/Country) published mostly by local newspapers, as simply and quickly as possible! Thanks to its graphical interface, you will no longer need to do a long and exhausting search on Google to read the most interesting articles relating to your home country (or any other country you may be interested in); with a simple click on the selected country, you will find all the daily local news in the twinkling of an eye … in the local language! (and in English, of course). Today, thanks to Spreng's triangle, we know that there is a mutual dependence between energy, time and information. 
Speeding up the process of information dissemination means, ultimately, having more time and energy to focus our attention on little-known contexts just waiting to be better understood and processed. As John D. Barrow has already pointed out (in one of his most popular books, <i>“Impossibility: The Limits of Science and the Science of Limits”</i>): <i>“If we have enough time, too much information is not useful, since we can abandon ourselves into a random search, proceeding by trial and error. But if our time is limited, then we need to know the right method to get things done faster, and this knowledge requires a great deal of information. According to Alvin Weinberg, this means that time is likely to become our most important resource. Ultimately, the value of energy and information lies in the greater freedom that they give us to better distribute our time”</i>. And ensuring that your valuable time is not wasted on unnecessary activities is exactly the purpose of the website </span><span lang="IT"><a href="http://www.earthmapnews.com/"><span style="font-size: 12pt; line-height: 115%; text-decoration: none;"><span style="color: yellow;">www.earthmapnews.com</span></span></a></span><span lang="IT" style="color: windowtext; font-size: 12.0pt; line-height: 115%;">! </span></div><br /><b>Quantum Mechanics: Collapse Theories.</b> (2013-11-09) 
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-j6WHaEbJwAI/Un83l904_8I/AAAAAAAACjU/oY8zbhPbJqs/s1600/Quantum-Gravity.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="http://4.bp.blogspot.com/-j6WHaEbJwAI/Un83l904_8I/AAAAAAAACjU/oY8zbhPbJqs/s320/Quantum-Gravity.jpg" width="320" /></a></div><div style="text-align: center;">Source:</div><div style="text-align: center;"><a href="http://plato.stanford.edu/entries/qm-collapse/"><span style="color: yellow;">Stanford Encyclopedia of Philosophy</span></a></div><div style="text-align: center;">-----------------------------------------------</div>Quantum mechanics, with its revolutionary implications, has posed innumerable problems to philosophers of science. In particular, it has suggested reconsidering basic concepts such as the existence of a world that is, at least to some extent, independent of the observer, the possibility of getting reliable and objective knowledge about it, and the possibility of taking (under appropriate circumstances) certain properties to be objectively possessed by physical systems. It has also raised many other questions which are well known to those involved in the debate on the interpretation of this pillar of modern science. One can argue that most of the problems are not only due to the intrinsic revolutionary nature of the phenomena which have led to the development of the theory. They are also related to the fact that, in its standard formulation and interpretation, quantum mechanics is a theory which is excellent (in fact it has met with a success unprecedented in the history of science) in telling us everything about <em>what we observe</em>, but it meets with serious difficulties in telling us <em>what is</em>. 
We are making here specific reference to the central problem of the theory, usually referred to as <em>the measurement problem</em>, or, with a more appropriate term, as the <em>macro-objectification problem</em>. It is just one of the many attempts to overcome the difficulties posed by this problem that has led to the development of <em>Collapse Theories</em>, i.e., to the <em>Dynamical Reduction Program</em> (DRP). As we shall see, this approach consists in accepting that the dynamical equation of the standard theory should be modified by the addition of stochastic and nonlinear terms. The nice fact is that the resulting theory is capable, on the basis of a single dynamics which is assumed to govern all natural processes, of accounting at the same time for all well-established facts about microscopic systems as described by the standard theory as well as for the so-called postulate of wave packet reduction (WPR). As is well known, such a postulate is assumed in the standard scheme just in order to guarantee that <em>measurements have outcomes</em> but, as we shall discuss below, it meets with insurmountable difficulties if one takes the measurement itself to be a process governed by the linear laws of the theory. Finally, the collapse theories account in a completely satisfactory way for the classical behavior of macroscopic systems. <br />Two specifications are necessary in order to make clear from the beginning what the limitations and the merits of the program are. The only satisfactory explicit models of this type (which are essentially variations and refinements of the one proposed in the references Ghirardi, Rimini, and Weber (1985, 1986), and usually referred to as the GRW theory) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. 
Moreover, the problem of building satisfactory relativistic generalizations of these models has encountered serious mathematical difficulties due to the appearance of intractable divergences. Only very recently, some important steps we will discuss in what follows have led to the first satisfactory formulations of genuinely relativistically invariant theories inducing reductions. More importantly, the debate raised by these attempts and by claims that the desired generalization is impossible to achieve has elucidated some crucial points and has made clear that there is no reason of principle preventing one from reaching this goal.<br />In spite of their phenomenological character, we think that Collapse Theories have a remarkable relevance, since they have made clear that there are new ways to overcome the difficulties of the formalism, to <em>close the circle</em> in the precise sense defined by Abner Shimony (1989), ways which until a few years ago were considered impracticable, and which, on the contrary, have been shown to be perfectly viable. Moreover, they have allowed a clear identification of the formal features which should characterize any unified theory of micro and macro processes. Last but not least, Collapse theories qualify themselves as rival theories of quantum mechanics and one can easily identify some of their physical implications which, in principle, would allow crucial tests discriminating between the two. This possibility, for the moment, seems to require experiments which go beyond the present technological possibilities. However, two aspects of the problem have to be taken into account: due to the remarkable improvements in dealing with mesoscopic systems a crucial test of GRW might become feasible, and the model suggests the kind of physical processes in which a violation of the linear nature of the formalism might occur. 
Accordingly, even though the experimental investigations might very well turn out not to confirm the proposed new dynamical features of natural processes, they might lead, in the end, to extremely relevant discoveries.<br /><!--Entry Contents--> <br /><ul><li><a href="http://plato.stanford.edu/entries/qm-collapse/#GenCon">1. General Considerations</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#ForConSke">2. The Formalism: A Concise Sketch</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#MacObjPro">3. The Macro-Objectification Problem</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#BirColThe">4. The Birth of Collapse Theories</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#OriColMod">5. The Original Collapse Model</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#ConSpoLocModCSL">6. The Continuous Spontaneous Localization Model (CSL)</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#SimVerCSL">7. A Simplified Version of CSL</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#SomRemAboColThe">8. Some remarks about Collapse Theories</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#RelDynRedMod">9. Relativistic Dynamical Reduction Models</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#ColTheDefPer">10. Collapse Theories and Definite Perceptions</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#IntThePriOnt">11. The Interpretation of the Theory and its Primitive Ontologies </a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#ProTaiWavFun">12. The Problem of the Tails of the Wave Function</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#13StaColModRecPosAboThe">13. 
The Status of Collapse Models and Recent Positions about them</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#Sum">Summary</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#Bib">Bibliography</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#Aca">Academic Tools</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#Oth">Other Internet Resources</a></li><li><a href="http://plato.stanford.edu/entries/qm-collapse/#Rel">Related Entries</a></li></ul><!--Entry Contents--> <br /><hr /><h2><a href="http://www.blogger.com/null" name="GenCon">1. General Considerations</a></h2>As stated already, a very natural question which all scientists who are concerned about the meaning and the value of science have to face, is whether one can develop a coherent worldview that can accommodate our knowledge concerning natural phenomena as it is embodied in our best theories. Such a program meets serious difficulties with quantum mechanics, essentially because of two formal aspects of the theory which are common to all of its versions, from the original nonrelativistic formulations of the 1920s, to the quantum field theories of recent years: the linear nature of the state space and of the evolution equation, i.e., the validity of the superposition principle and the related phenomenon of entanglement, which, in Schrödinger's words: <br /><blockquote>is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought (Schrödinger, 1935, p. 
807).</blockquote>These two formal features have embarrassing consequences, since they imply <br /><ul><li>objective chance in natural processes, i.e., the nonepistemic nature of quantum probabilities;</li><li>objective indefiniteness of physical properties both at the micro and macro level;</li><li>objective entanglement between spatially separated and non-interacting constituents of a composite system, entailing a sort of holism and a precise kind of nonlocality.</li></ul>For the sake of generality, we shall first of all present a very concise sketch of ‘the rules of the game’.<br /><h2><a href="http://www.blogger.com/null" name="ForConSke">2. The Formalism: A Concise Sketch</a></h2>Let us recall the axiomatic structure of quantum theory: <br /><ol><li>States of physical systems are associated with normalized vectors in a Hilbert space, a complex, infinite-dimensional, complete and separable linear vector space equipped with a scalar product. Linearity implies that the superposition principle holds: if |<em>f</em>> is a state and |<em>g</em>> is a state, then (for <em>a</em> and <em>b</em> arbitrary complex numbers) also<br /><blockquote>|<em>K</em>> = <em>a</em>|<em>f</em>> + <em>b</em>|<em>g</em>></blockquote>is a state. Moreover, the state evolution is linear, i.e., it preserves superpositions: if |<em>f</em>,<em>t</em>> and |<em>g</em>,<em>t</em>> are the states obtained by evolving the states |<em>f</em>,0> and |<em>g</em>,0>, respectively, from the initial time <em>t</em>=0 to the time <em>t</em>, then <em>a</em>|<em>f</em>,<em>t</em>> + <em>b</em>|<em>g</em>,<em>t</em>> is the state obtained by the evolution of <em>a</em>|<em>f</em>,0> + <em>b</em>|<em>g</em>,0>. 
Finally, the completeness assumption is made, i.e., that the knowledge of its statevector represents, in principle, the most accurate information one can have about the state of an individual physical system.</li><li> The observable quantities are represented by self-adjoint operators B on the Hilbert space. The associated eigenvalue equations B|<em>b</em><sub><em>k</em></sub>> = <em>b</em><sub><em>k</em></sub>|<em>b</em><sub><em>k</em></sub>> and the corresponding eigenmanifolds (the linear manifolds spanned by the eigenvectors associated to a given eigenvalue, also called eigenspaces) play a basic role for the predictive content of the theory. In fact:<br /><ol type="i"><li>The eigenvalues <em>b</em><sub><em>k</em></sub> of an operator B represent the only possible outcomes in a measurement of the corresponding observable.</li><li>The square of the norm (i.e., the length) of the projection of the normalized vector (i.e., of length 1) describing the state of the system onto the eigenmanifold associated to a given eigenvalue gives the probability of obtaining the corresponding eigenvalue as the outcome of the measurement. In particular, it is useful to recall that when one is interested in the probability of finding a particle at a given place, one has to resort to the so-called configuration space representation of the statevector. In such a case the statevector becomes a square-integrable function of the position variables of the particles of the system, whose modulus squared yields the probability density for the outcomes of position measurements.</li></ol></li></ol>We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general incompatible) measurement processes. 
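The probability prescription in points (i) and (ii) can be made concrete with a minimal sketch in plain Python (the two-level state and its amplitudes are hypothetical values chosen purely for illustration): writing the state in the eigenbasis of the measured observable, each outcome probability is the squared norm of the projection of the statevector onto the corresponding eigenmanifold.

```python
import math

# A hypothetical two-level system, written directly in the eigenbasis
# {|b1>, |b2>} of the measured observable B (amplitudes chosen arbitrarily).
a, b = complex(3 / 5), complex(4 / 5)   # |a|^2 + |b|^2 = 1
psi = [a, b]                            # the normalized statevector

def outcome_probability(state, eigenmanifold):
    """Squared norm of the projection of `state` onto the eigenmanifold,
    given here as the list of basis indices spanning it."""
    return sum(abs(state[i]) ** 2 for i in eigenmanifold)

p1 = outcome_probability(psi, [0])  # probability of outcome b1
p2 = outcome_probability(psi, [1])  # probability of outcome b2

assert math.isclose(p1 + p2, 1.0)   # the outcome probabilities sum to 1
print(p1, p2)                       # approximately 0.36 and 0.64
```

The same prescription covers degenerate eigenvalues: an eigenmanifold spanned by several eigenvectors is simply a list with more than one basis index.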
Only if a state belongs already before the act of measurement to an eigenmanifold of the observable which is going to be measured, can one predict the outcome with certainty. In all other cases—if the completeness assumption is made—one has objective nonepistemic probabilities for different outcomes.<br />The orthodox position gives a very simple answer to the question: what determines the outcome when different outcomes are possible? Nothing—the theory is complete and, as a consequence, it is illegitimate to raise any question about possessed properties referring to observables for which different outcomes have non-vanishing probabilities of being obtained. Correspondingly, the referents of the theory are the results of measurement procedures. These are to be described in classical terms and involve in general mutually exclusive physical conditions.<br />As regards the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against requiring too many properties to be actually possessed by physical systems. However—with Einstein—one can adopt as a sufficient condition for the existence of an objective individual property that one be able (without in any way disturbing the system) to predict with certainty the outcome of a measurement. This implies that, whenever the overall statevector factorizes into the product of a state of the Hilbert space of the physical system <em>S</em> and of the rest of the world, <em>S</em> does possess some properties (actually a complete set of properties, i.e., those associated to a maximal set of commuting observables).<br />Before concluding this section we must add some comments about the measurement process. Quantum theory was created to deal with microscopic phenomena. In order to obtain information about them one must be able to establish strict correlations between the states of the microscopic systems and the states of objects we can perceive. 
Within the formalism, this is described by considering appropriate micro-macro interactions. The fact that when the measurement is completed one can make statements about the outcome is accounted for by the already mentioned WPR postulate (Dirac 1948): <em>a measurement always causes a system to jump into an eigenstate of the observed quantity</em>. Correspondingly, the statevector of the apparatus also ‘jumps’ into the manifold associated to the recorded outcome.<br /><h2><a href="http://www.blogger.com/null" name="MacObjPro">3. The Macro-Objectification Problem</a></h2>In this section we shall clarify why the formalism we have just presented gives rise to the measurement or macro-objectification problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme. Then we shall discuss more recent results (Bassi and Ghirardi 2000), which relax von Neumann's assumptions.<br />Let us begin by recalling the basic points of the standard argument:<br /><blockquote>Suppose that a microsystem <em>S</em>, just before the measurement of an observable <em>B</em>, is in the eigenstate |<em>b</em><sub><em>j</em></sub>> of the corresponding operator. The apparatus (a macrosystem) used to gain information about <em>B</em> is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property—e.g., its pointer points at 0 on a scale. Since the apparatus <em>A</em> is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector |<em>A</em><sub>0</sub>>. 
One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state |<em>b</em><sub><em>j</em></sub>> it ends up in a final configuration |<em>A</em><sub><em>j</em></sub>>, which is macroscopically distinguishable from the initial one and from the other configurations |<em>A</em><sub><em>k</em></sub>> in which it would end up if triggered by a different eigenstate |<em>b</em><sub><em>k</em></sub>>. Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as: <ol><li>(<em>initial state</em>): |<em>b</em><sub><em>k</em></sub>>|<em>A</em><sub>0</sub>> <br /> (<em>final state</em>): |<em>b</em><sub><em>k</em></sub>>|<em>A</em><sub><em>k</em></sub>></li></ol>Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:<br /><ol start="2"><li>(<em>initial state</em>): (<em>a</em>|<em>b</em><sub><em>k</em></sub>> + <em>b</em>|<em>b</em><sub><em>j</em></sub>>)|<em>A</em><sub>0</sub>> <br /> (<em>final state</em>): (<em>a</em>|<em>b</em><sub><em>k</em></sub>>|<em>A</em><sub><em>k</em></sub>>+ <em>b</em>|<em>b</em><sub><em>j</em></sub>>|<em>A</em><sub><em>j</em></sub>>).</li></ol></blockquote>Some remarks about this are in order:<br /><ul><li>The scheme is highly idealized, both because it takes for granted that one can prepare the apparatus in a precise state, which is impossible since we cannot have control over all its degrees of freedom, and because it assumes that the apparatus registers the outcome without altering the state of the measured system. 
However, as we shall discuss below, these assumptions are by no means essential to derive the embarrassing conclusion we have to face, i.e., that the final state is a linear superposition of two states corresponding to two macroscopically different states of the apparatus. Since we know that the + representing linear superpositions cannot be replaced by the logical alternative <em>either … or</em>, the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously?</li><li>As already mentioned, the standard solution to this problem is given by the WPR postulate: in a measurement process reduction occurs: the final state is not the one appearing at the right hand side of equation (2) but, since macro-objectification takes place, it is <br /><br /><ol start="3"><li>either |<em>b</em><sub><em>k</em></sub>>|<em>A</em><sub><em>k</em></sub>> or |<em>b</em><sub><em>j</em></sub>>|<em>A</em><sub><em>j</em></sub>> with probabilities |<em>a</em>|<sup>2</sup> and |<em>b</em>|<sup>2</sup>, respectively.</li></ol></li></ul>Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons:<br /><ol><li>It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory).</li><li>Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use J.S. 
Bell's words, there is nothing in the theory fixing such a borderline and the <em>split</em> between the two above types of processes is fundamentally <em>shifty</em>. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or <em>easy solvers</em> (Bell 1990) of the measurement problem have rejected the criticism of the <em>heretics</em> (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein's criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?), and so on. Also other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities.</li></ol>It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. On the contrary, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann's model. This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d'Espagnat (1971), Shimony (1974), Brown (1986) and Busch and Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. 
The assumptions of the theorem are:<br /><ol type="i"><li>that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states;</li><li>that one has a sufficiently reliable way of ‘measuring’ such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process;</li><li>that all natural processes obey the linear laws of the theory.</li></ol>From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage.<br /><h2><a href="http://www.blogger.com/null" name="BirColThe">4. The Birth of Collapse Theories</a></h2>The debate on the macro-objectification problem continued for many years after the early days of quantum mechanics. In the early 1950s an important step was taken by D. Bohm who presented (Bohm 1952) a mathematically precise deterministic completion of quantum mechanics (see the entry on Bohmian Mechanics). In the area of Collapse Theories, one should mention the contribution by Bohm and Bub (1966), which was based on the interaction of the statevector with Wiener-Siegel hidden variables. 
But let us come to Collapse Theories in the sense currently attached to this expression. <br />Various investigations during the 1970s can be considered as preliminary steps for the subsequent developments. In the years 1970-1973 L. Fonda, A. Rimini, T. Weber and G.C. Ghirardi were seriously concerned with quantum decay processes and in particular with the possibility of deriving, within a quantum context, the exponential decay law (Fonda, Ghirardi, Rimini, and Weber 1973; Fonda, Ghirardi, and Rimini et al. 1978). Some features of this approach are extremely relevant for the DRP. Let us list them:<br /><ul><li>One deals with individual physical systems;</li><li>The statevector is supposed to undergo random processes at random times, inducing sudden changes driving it either within the linear manifold of the unstable state or within the one of the decay products;</li><li>To make the treatment quite general (the apparatus does not know which kind of unstable system it is testing) one is led to identify the random processes with localization processes of the relative coordinates of the decay fragments. Such an assumption, combined with the peculiar resonant dynamics characterizing an unstable system, yields, completely in general, the desired result. The ‘relative position basis’ is the preferred basis of this theory;</li><li>Analogous ideas have been applied to measurement processes (Fonda, Ghirardi, and Rimini 1973);</li><li>The final equation for the evolution at the ensemble level is of the quantum dynamical semigroup type and has a structure extremely similar to the final one of the GRW theory.</li></ul>Obviously, in these papers the reduction processes which are involved were not assumed to be ‘spontaneous and fundamental’ natural processes, but due to system-environment interactions. Accordingly, these attempts did not represent original proposals for solving the macro-objectification problem but they have paved the way for the elaboration of the GRW theory. 
<br />At about the same time, P. Pearle (1976, 1979), and subsequently N. Gisin (1984) and others, entertained the idea of accounting for the reduction process in terms of a stochastic differential equation. These authors were really looking for a new dynamical equation and for a solution to the macro-objectification problem. Unfortunately, they were unable to give any precise suggestion about how to identify the states to which the dynamical equation should lead. Indeed, these states were assumed to depend on the particular measurement process one was considering. Without a clear indication on this point there was no way to identify a mechanism whose effect could be negligible for microsystems but extremely relevant for the macroscopic ones. N. Gisin subsequently gave an interesting (though not uncontroversial) argument (Gisin 1989) that nonlinear modifications of the standard equation without stochasticity are unacceptable since they imply the possibility of sending superluminal signals. Soon afterwards, G. C. Ghirardi and R. Grassi (1991) showed that stochastic modifications without nonlinearity can at most induce ensemble and not individual reductions, i.e., they do not guarantee that the state vector of each individual physical system is driven into a manifold corresponding to definite properties.<br /><h2><a href="http://www.blogger.com/null" name="OriColMod">5. The Original Collapse Model</a></h2>As already mentioned, the Collapse Theory (Ghirardi, Rimini, and Weber 1986) we are going to describe amounts to accepting a modification of the standard evolution law of the theory such that microprocesses and macroprocesses are governed by a single dynamics. Such a dynamics must imply that the micro-macro interaction in a measurement process leads to WPR.
Bearing this in mind, recall that the characteristic feature distinguishing quantum evolution from WPR is that, while Schrödinger's equation is linear and deterministic (at the wave function level), WPR is nonlinear and stochastic. It is then natural to consider, as was suggested for the first time in the above quoted papers by P. Pearle, the possibility of nonlinear and stochastic modifications of the standard Schrödinger dynamics. However, the initial attempts to implement this idea were unsatisfactory for various reasons. The first, which we have already discussed, concerns the choice of the preferred basis: if one wants to have a universal mechanism leading to reductions, to which linear manifolds should the reduction mechanism drive the statevector? Or, equivalently, which of the (generally) incompatible ‘potentialities’ of the standard theory should we choose to make actual? The second, referred to as the trigger problem by Pearle (1989), is the problem of how the reduction mechanism can become more and more effective in going from the micro to the macro domain. The solution to this problem constitutes the central feature of the Collapse Theories of the GRW type. To discuss these points, let us briefly review the first consistent Collapse model (Ghirardi, Rimini, and Weber 1985) to appear in the literature.<br />Within such a model, originally referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by noting that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein has stressed, this is a crucial point which has to be faced by anybody aiming to take a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’ (Born, 1971, p. 223). 
Accordingly, QMSL considers the possibility of spontaneous processes, which are assumed to occur instantaneously and at the microscopic level, which tend to suppress the linear superpositions of differently localized states. The required trigger mechanism must then follow consistently.<br />The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about the above assumptions; in particular one has to make explicit HOW the process works, i.e., which modifications of the wave function are induced by the localizations, WHERE it occurs, i.e., what determines the occurrence of a localization at a certain position rather than at another one, and finally WHEN, i.e., at what times, it occurs. The answers to these questions are as follows.<br />Let us consider a system of <em>N</em> distinguishable particles and let us denote by <em>F</em>(<strong><em>q</em></strong><sub>1</sub>, <strong><em>q</em></strong><sub>2</sub>, … , <strong><em>q</em></strong><em><sub>N</sub></em>) the coordinate representation (wave function) of the state vector (we disregard spin variables since hittings are assumed not to act on them).<br /><ol type="a"><li>The answer to the question HOW is then: if a hitting occurs for the <em>i</em>-th particle at point <strong><em>x</em></strong>, the wave function is instantaneously multiplied by a Gaussian function (appropriately normalized) <blockquote><em>G</em>(<strong><em>q</em></strong><sub><em>i</em></sub>, <strong><em>x</em></strong>) = <em>K</em> exp[−{1/(2 <em>d</em><sup>2</sup>)}(<strong><em>q</em></strong><sub><em>i</em></sub> −<strong><em>x</em></strong>)<sup>2</sup>],</blockquote>where <em>d</em> represents the localization accuracy. 
Let us denote as<br /><blockquote><em>L</em><sub><em>i</em></sub>(<strong><em>q</em></strong><sub>1</sub>, <strong><em>q</em></strong><sub>2</sub>, … , <strong><em>q</em></strong><sub><em>N</em></sub> ; <strong><em>x</em></strong>) = <em>F</em>(<strong><em>q</em></strong><sub>1</sub>, <strong><em>q</em></strong><sub>2</sub>, … , <strong><em>q</em></strong><sub><em>N</em></sub>) <em>G</em>(<strong><em>q</em></strong><sub><em>i</em></sub>, <strong><em>x</em></strong>) </blockquote>the wave function immediately after the localization, as yet unnormalized.</li><li>As concerns the specification of WHERE the localization occurs, it is assumed that the probability density <em>P</em>(<strong><em>x</em></strong>) of its taking place at the point <strong><em>x</em></strong> is given by the square of the norm of the state <em>L</em><sub><em>i</em></sub> (the length, or to be more precise, the integral of the modulus squared of the function <em>L</em><sub><em>i</em></sub> over the 3<em>N</em>-dimensional space). This implies that hittings occur with higher probability at those places where, in the standard quantum description, there is a higher probability of finding the particle. Note that the above prescription introduces nonlinear and stochastic elements in the dynamics. 
The constant <em>K</em> appearing in the expression of <em>G</em>(<strong><em>q</em></strong><sub><em>i</em></sub>,<strong><em> x</em></strong>) is chosen in such a way that the integral of <em>P</em>(<strong><em>x</em></strong>) over the whole space equals 1.</li><li>Finally, the question WHEN is answered by assuming that the hittings occur at randomly distributed times, according to a Poisson distribution, with mean frequency <em>f</em>.</li></ol>It is straightforward to convince oneself that the hitting process leads, when it occurs, to the suppression of the linear superpositions of states in which the same particle is well localized at different positions separated by a distance greater than <em>d</em>. As a simple example we can consider a single particle whose wavefunction is different from zero only in two small and far apart regions <em>h</em> and <em>t</em>. Suppose that a localization occurs around <em>h</em>; the state after the hitting is then appreciably different from zero only in a region around <em>h</em> itself. A completely analogous argument holds for the case in which the hitting takes place around <em>t</em>. As concerns points which are far from both <em>h</em> and <em>t</em>, one easily sees that the probability density for such hittings, according to the multiplication rule determining <em>L</em><sub><em>i</em></sub>, turns out to be practically zero, and moreover, that if such a hitting were to occur, the normalized wave function of the system would remain almost unchanged.<br />We can now discuss the most important feature of the theory, i.e., the Trigger Mechanism.
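Before doing so, it may help to see the HOW/WHERE/WHEN prescription at work numerically. The sketch below is a minimal one-dimensional illustration, not part of the model itself: the grid, the packet positions and widths, and the value of <em>d</em> are arbitrary choices made here for the example. It applies a single hitting to a superposition of two far-apart packets (the regions <em>h</em> and <em>t</em> of the text) and shows that one branch survives while the other is suppressed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1D discretization (arbitrary units; choices made for this sketch).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
d = 1.0  # localization accuracy

# Superposition of two narrow packets around h = -5 and t = +5.
psi = np.exp(-(x + 5.0) ** 2 / (2 * 0.2 ** 2)) + np.exp(-(x - 5.0) ** 2 / (2 * 0.2 ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

# WHERE: the hitting centre x_c is drawn with probability density P(x_c),
# the squared norm of psi multiplied by the Gaussian G centred at x_c.
weights = np.array(
    [np.sum(np.abs(psi * np.exp(-(x - c) ** 2 / (2 * d ** 2))) ** 2) * dx for c in x]
)
x_c = rng.choice(x, p=weights / weights.sum())

# HOW: multiply by G(q, x_c) and renormalize.
psi_after = psi * np.exp(-(x - x_c) ** 2 / (2 * d ** 2))
psi_after /= np.sqrt(np.sum(np.abs(psi_after) ** 2) * dx)

# Weight left in each region after the hitting: one branch dominates.
w_left = np.sum(np.abs(psi_after[x < 0]) ** 2) * dx
w_right = np.sum(np.abs(psi_after[x >= 0]) ** 2) * dx
```

Since the packets sit where |<em>psi</em>|<sup>2</sup> is concentrated, <em>x_c</em> is sampled near one of them, and the other branch is suppressed by a factor of order exp[−(10)<sup>2</sup>/2<em>d</em><sup>2</sup>]. The WHEN part is not simulated here: hitting times would be drawn from a Poisson process with the mean frequency <em>f</em> of the text.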
To understand the way in which the spontaneous localization mechanism is enhanced by increasing the number of particles which are in far apart spatial regions (as compared to <em>d</em>), one can consider, for simplicity, the superposition |<em>S</em>>, with equal weights, of two macroscopic pointer states |<em>H</em>> and |<em>T</em>>, corresponding to two different pointer positions <em>H</em> and <em>T</em>, respectively. Taking into account that the pointer is ‘almost rigid’ and contains a macroscopic number <em>N</em> of microscopic constituents, the state can be written, in obvious notation, as:<br /><ol start="4"><li>|<em>S</em>> = [|1 near <em>h</em><sub>1</sub>>… |<em>N</em> near <em>h</em><sub><em>N</em></sub>> + |1 near <em>t</em><sub>1</sub>> … |<em>N</em> near <em>t</em><sub><em>N</em></sub>>],</li></ol>where <em>h</em><sub><em>i</em></sub> is near <em>H</em>, and <em>t</em><sub><em>i</em></sub> is near <em>T</em>. The states appearing in the first term on the right-hand side of equation (4) have coordinate representations which are different from zero only when their arguments (1,…,<em>N</em>) are all near <em>H</em>, while those of the second term are different from zero only when they are all near <em>T</em>. It is now evident that if any of the particles (say, the <em>i</em>-th particle) undergoes a hitting process, e.g., near the point <em>h</em><sub><em>i</em></sub>, the multiplication prescription leads practically to the suppression of the second term in (4). Thus any spontaneous localization of any of the constituents amounts to a localization of the pointer. The hitting frequency is therefore effectively amplified proportionally to the number of constituents. Notice that, for simplicity, the argument makes reference to an almost rigid body, i.e., to one for which all particles are around <em>H</em> in one of the states of the superposition and around <em>T</em> in the other.
It should however be obvious that what really matters in amplifying the reductions is the number of particles which are in different positions in the two states appearing in the superposition itself.<br />Under these premises we can now proceed to choose the parameters <em>d</em> and <em>f</em> of the theory, i.e., the localization accuracy and the mean localization frequency. The argument just given allows one to understand how one can choose the parameters in such a way that the quantum predictions for microscopic systems remain fully valid while the embarrassing macroscopic superpositions in measurement-like situations are suppressed in very short times. Accordingly, as a consequence of the unified dynamics governing all physical processes, individual macroscopic objects acquire definite macroscopic properties. The choice suggested in the GRW-model is:<br /><ol start="5"><li><em>f</em> = 10<sup>−16</sup> s<sup>−1</sup> <br /><em>d</em> = 10<sup>−5</sup> cm</li></ol>It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every 10<sup>−7</sup> seconds. With reference to the challenging version of the macro-objectification problem presented by Schrödinger with the famous example of his cat, J.S. Bell comments (1987, p. 44): [within QMSL] <em>the cat is not both dead and alive for more than a split second</em>. Besides the extremely low frequency of the hittings for microscopic systems, the fact that the localization width is large compared to the dimensions of atoms (so that even when a localization occurs it does very little violence to the internal economy of an atom) also plays an important role in guaranteeing that no violation of well-tested quantum mechanical predictions is implied by the modified dynamics.<br />Some remarks are appropriate.
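As a preliminary check, the two timescales just quoted follow directly from the chosen parameters. In the sketch below, the figure of roughly 10<sup>23</sup> nucleons for a macroscopic pointer is an illustrative assumption of the example; <em>f</em> is the GRW value above.

```python
# Back-of-the-envelope check of the GRW timescales quoted in the text.
f = 1e-16  # mean hitting frequency per constituent, in s^-1 (GRW value)
N = 1e23   # nucleons in a macroscopic pointer (illustrative assumption)
SECONDS_PER_YEAR = 3.156e7

# A single microscopic particle waits on average 1/f seconds between hittings:
t_micro_years = (1.0 / f) / SECONDS_PER_YEAR  # roughly 3 x 10^8 years

# For an almost-rigid macroscopic body, a hitting of any one of its N
# constituents localizes the whole object, so the effective frequency is N * f:
t_macro_seconds = 1.0 / (N * f)  # 10^-7 s
```

This is the trigger mechanism in numbers: the amplification by the particle number turns a once-per-hundred-million-years event into a once-per-tenth-of-a-microsecond one.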
First of all, QMSL, being precisely formulated, allows one to locate precisely the ‘split’ between micro and macro, reversible and irreversible, quantum and classical. The transition between the two types of ‘regimes’ is governed by the number of particles which are well localized at positions further apart than 10<sup>−5</sup> cm in the two states whose coherence is going to be dynamically suppressed. Second, the model is, in principle, testable against quantum mechanics. As a matter of fact, an essential part of the program consists in proving that its predictions do not contradict any already established fact about microsystems and macrosystems.<br /><h2><a href="http://www.blogger.com/null" name="ConSpoLocModCSL">6. The Continuous Spontaneous Localization Model (CSL)</a></h2>The model just presented (QMSL) has a serious drawback: it does not allow one to deal with systems containing identical constituents because it does not respect the symmetry or antisymmetry requirements for such particles. A quite natural idea to overcome this difficulty would be that of relating the hitting process not to the individual particles but to the particle number density averaged over an appropriate volume. This can be done by introducing a new phenomenological parameter in the theory, which, however, can be eliminated by an appropriate limiting procedure (see below).<br />Another way to overcome this problem derives from injecting the physically appropriate principles of the GRW model within the original approach of P. Pearle.
This line of thought has led to a quite elegant formulation of a dynamical reduction model, usually referred to as CSL (Pearle 1989; Ghirardi, Pearle, and Rimini 1990) in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector).<br />We will not enter into the rather technical details of this interesting development of the original GRW proposal, since the basic ideas and physical implications are precisely the same as those of the original formulation. Actually, one could argue that the above idea of tackling the problem of identical particles by considering the average particle number within an appropriate volume is correct. In fact it has been proved (Ghirardi, Pearle, and Rimini 1990) that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version of it.<br /><h2><a href="http://www.blogger.com/null" name="SimVerCSL">7. A Simplified Version of CSL</a></h2>With the aim of understanding the physical implications of the CSL model, such as the rate of suppression of coherence, we make now some simplifying assumptions. First, we assume that we are dealing with only one kind of particles (e.g., the nucleons), secondly, we disregard the standard Schrödinger term in the evolution and, finally, we divide the whole space in cells of volume <em>d</em><sup>3</sup>. We denote by |<em>n</em><sub>1</sub>, <em>n</em><sub>2</sub>, … > a Fock state in which there are <em>n<sub>i</sub></em> particles in cell <em>i</em>, and we consider a superposition of two states |<em>n</em><sub>1</sub>, <em>n</em><sub>2</sub>, … > and |<em>m</em><sub>1</sub>, <em>m</em><sub>2</sub>, … > which differ in the occupation numbers of the various cells of the universe. 
With these assumptions it is quite easy to prove that the rate of suppression of the coherence between the two states (so that the final state is one of the two and not their superposition) is governed by the quantity: <br /><ol start="6"><li>exp{−<em>f</em> [(<em>n</em><sub>1</sub> − <em>m</em><sub>1</sub>)<sup>2</sup> + (<em>n</em><sub>2</sub> − <em>m</em><sub>2</sub>)<sup>2</sup> + …]<em>t</em>},</li></ol>the sum being extended to all cells in the universe. Apart from differences relating to the identity of the constituents, the overall physics is quite similar to that implied by QMSL. <br />Equation 6 offers the opportunity of discussing the possibility of relating the suppression of coherence to gravitational effects. In fact, with reference to this equation we notice that the worst case scenario (from the point of view of the time necessary to suppress coherence) is the one corresponding to the superposition of two states for which the occupation numbers of the individual cells differ only by one unit. Indeed, in this case the amplifying effect of taking the square of the differences disappears. Let us then raise the question: how many nucleons (at worst) should occupy different cells, in order for the given superposition to be dynamically suppressed within the time which characterizes human perceptual processes? Since such a time is of the order of 10<sup>−2</sup> sec and <em>f</em> = 10<sup>−16</sup> sec<sup>−1</sup>, the number of displaced nucleons must be of the order of 10<sup>18</sup>, which corresponds, to a remarkable accuracy, to a Planck mass. This figure seems to point in the same direction as Penrose's attempts to relate reduction mechanisms to quantum gravitational effects (Penrose 1989).<br />Obviously, the model theory we are discussing implies various further physical effects which deserve to be discussed since they might allow a test of the theory with respect to standard quantum mechanics. 
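Before turning to these experimental tests, the worst-case estimate just sketched can be made explicit. From equation (6), with occupation numbers differing by one unit in <em>n</em> cells, coherence decays as exp(−<em>f n t</em>), so setting <em>f n t</em> ≈ 1 with <em>t</em> a perception time gives the number of displaced nucleons. The nucleon and Planck masses in the sketch are standard values inserted here for the comparison.

```python
# Worst-case CSL decoherence estimate from equation (6): occupation numbers
# differing by one unit in n cells give a decay factor exp(-f * n * t).
# Coherence is suppressed within a perception time t when f * n * t ~ 1.
f = 1e-16            # s^-1, the GRW/CSL hitting frequency
t_perception = 1e-2  # s, typical human perception time

n_displaced = 1.0 / (f * t_perception)  # about 10^18 displaced nucleons

# Mass of the displaced nucleons, compared with the Planck mass
# (standard values, inserted here for the comparison):
m_nucleon = 1.67e-27  # kg
m_planck = 2.18e-8    # kg
mass = n_displaced * m_nucleon  # about 1.7e-9 kg
ratio = m_planck / mass         # within about an order of magnitude of 1
```

The displaced mass comes out within roughly an order of magnitude of the Planck mass, which is the coincidence the text connects to Penrose's gravity-based reduction proposals.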
For a review, see (Bassi and Ghirardi 2001; Adler 2007). We briefly list the most promising types of experiments which in the future might allow such a crucial test.<br /><ol type="a"><li>Effects in superconducting devices. A detailed analysis has been presented in (Ghirardi and Rimini 1990). As shown there and as follows from estimates about possible effects for superconducting devices (Rae 1990; Gallis and Fleming 1990; Rimini 1995), and for the excitation of atoms (Squires 1991), it turns out not to be possible, with present technology, to perform clear-cut experiments allowing one to discriminate the model from standard quantum mechanics (Benatti et al. 1995).</li><li>Loss of coherence in diffraction experiments with macromolecules. The group of Arndt and Zeilinger in Vienna has performed several diffraction experiments involving macromolecules. The most well known include C<sub>60</sub> (720 nucleons) (Arndt et al. 1999), C<sub>70</sub> (840 nucleons) (Hackermueller et al. 2004) and C<sub>30</sub>H<sub>12</sub>F<sub>30</sub>N<sub>2</sub>O<sub>4</sub> (1030 nucleons) (Gerlich et al. 2007). These experiments aim at testing the validity of the superposition principle towards the macroscopic scale. The challenge is very exciting and near-future technology will probably allow one to perform interference experiments with molecules much bigger than those already employed. So far, the experimental results are compatible both with standard quantum predictions and with those of collapse models, so they do not represent decisive tests of these models.</li><li>Loss of coherence in opto-mechanical interferometers. Very recently, an interesting proposal of testing the superposition principle by resorting to an experimental set-up involving a (mesoscopic) mirror has been advanced (Marshall et al. 2003). This stimulating proposal has led a group of scientists directly interested in Collapse Theories (Bassi et al.
2005) to check whether the proposed experiment might be a crucial one for testing dynamical reduction models versus quantum mechanics. The rigorous conclusion has been that this is not the case: in the devised situation the GRW and CSL theories have implications which agree with those of the standard theory, the main reason being that the separation between the (average) positions of the superposed states is much smaller than the localization accuracy of GRW, so that the localization processes become ineffective.</li><li>Spontaneous X-ray emission from Germanium. Collapse models not only forbid macroscopic superpositions from being stable, they also exhibit several other features which are forbidden by the standard theory. One of these is the spontaneous emission of radiation from otherwise stable systems, like atoms. While the standard theory predicts that such systems—if not excited—do not emit radiation, collapse models allow for radiation to be produced. The emission rate has been computed both for free charged particles (Fu 1997) and for hydrogenic atoms (Adler et al. 2007). The theoretical predictions are compatible with current experimental data (Fu 1997), so that even these experiments do not represent decisive tests of collapse models. However, their importance lies in the fact that—so far—they provide the strongest upper bounds on the collapse parameters (Adler et al. 2007).</li></ol><h2><a href="http://www.blogger.com/null" name="SomRemAboColThe">8. Some remarks about Collapse Theories</a></h2>A. Pais famously recalls in his biography of Einstein: <br /><blockquote>We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it (Pais 1982, p.
5).</blockquote>In the context of Einstein's remarks in <em>Albert Einstein, Philosopher-Scientist</em> (Schilpp 1949), we can regard this reference to the moon as an extreme example of ‘a fact that belongs entirely within the sphere of macroscopic concepts’, as is also a mark on a strip of paper that is used to register the outcome of a decay experiment, so that <br /><blockquote>as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously […] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program (p. 671).</blockquote>However, <br /><blockquote>the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone (p. 674).</blockquote>One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground, between giving up a ‘classical description in space and time’ altogether (the moon is not there when nobody looks), and requiring that it be applicable also at the microscopic level (as within some kind of ‘hidden variables’ theory). 
It would seem that the pursuit of ‘realism’ for Einstein was more a program that had been very successful than an a priori commitment, and that in principle he would have accepted attempts requiring a radical change in our classical conceptions concerning microsystems, provided they would nevertheless allow one to take a macrorealist position matching our definite perceptions at this scale.<br />In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’, it has no definite spin in any direction, and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place (see, however, the remarks in Section 11). In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, in spite of the fact that the DRP program is not adding ‘hidden variables’ to the theory, it implies that the moon is definitely there even if no sentient being has ever looked at it. In the words of J. S. Bell, the DRP<br /><blockquote>allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms (Bell 1986, p. 364).</blockquote>Such a program, as we have seen, is implemented by assuming only the existence of wave functions, and by proposing a unified dynamics that governs both microscopic processes and ‘measurements’. As regards the latter, no vague definitions are needed.
The new dynamical equations govern the unfolding of any physical process, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, of no practical importance and no source of embarrassment.<br />We have not yet analyzed the implications about locality, but since in the DRP program no hidden variables are introduced, the situation can be no worse than in ordinary quantum mechanics: <em>‘by adding mathematical precision to the jumps in the wave function’</em>, the GRW theory <em>‘simply makes precise the action at a distance of ordinary quantum mechanics’</em> (Bell 1987, p. 46). Indeed, a detailed investigation of the locality properties of the theory becomes possible as shown by Bell himself (Bell 1987, p. 47). Moreover, as will become clear when we discuss the interpretation of the theory in terms of mass density, the QMSL and CSL theories lead in a natural way to an account of the behaviour of macroscopic objects corresponding to our definite perceptions of them, the main objective of Einstein's requirements.<br />The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp:<br /><blockquote>The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level (Stapp 1989, p. 157).</blockquote><h2><a href="http://www.blogger.com/null" name="RelDynRedMod">9. Relativistic Dynamical Reduction Models</a></h2>As soon as the GRW proposal appeared and attracted the attention of J.S. Bell it also stimulated him to look at it from the point of view of relativity theory.
As he stated subsequently (Bell 1989a):<br /><blockquote>When I saw this theory first, I thought that I could blow it out of the water, by showing that it was <em> grossly </em> in violation of Lorentz invariance. That's connected with the problem of ‘quantum entanglement’, the EPR paradox.</blockquote>Actually, he had already investigated this point by studying the effect on the theory of a transformation mimicking a nonrelativistic approximation of a Lorentz transformation and he arrived (Bell 1987) at a surprising conclusion:<br /><blockquote>… the model is as Lorentz invariant as it could be in its nonrelativistic version. It takes away the ground of my fear that any exact formulation of quantum mechanics must conflict with fundamental Lorentz invariance.</blockquote>What Bell had actually proved in a rather complicated way by resorting to a two-times formulation of the Schrödinger equation is that the model violates locality by violating outcome independence and not, as deterministic hidden variable theories do, parameter independence.<br />Indeed, with reference to this point we recall that, as is well known, (Suppes and Zanotti 1976; van Fraassen 1982; Jarrett 1984; Shimony 1983; see also the entry on <a href="http://plato.stanford.edu/entries/bell-theorem/">Bell's Theorem</a>), Bell's locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony's terminology, parameter independence and outcome independence. In view of the experimental violation of Bell's inequality, one has to give up either or both of these assumptions. The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. 
Actually, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which specify completely the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in Ghirardi and Grassi (1994, 1996) it has been proved that it is impossible to build a <em>genuinely</em> relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence. Here we use the term <em>genuinely invariant</em> to denote a theory for which there is no (hidden) preferred reference frame. On the other hand, if locality is violated only by the occurrence of outcome dependence then faster-than-light signaling cannot be achieved (Eberhard 1978; Ghirardi, Rimini, and Weber 1980; Ghirardi, Grassi, Rimini, and Weber 1988). A few years after the just-mentioned proof by Bell, it was shown in complete generality (Ghirardi, Grassi, Butterfield, and Fleming 1993; Butterfield et al. 1993) that the GRW and CSL theories, just as standard quantum mechanics, exhibit only outcome dependence. This is to some extent encouraging and shows that there are no reasons of principle making unviable the project of building a relativistically invariant DRM.<br />Let us be more specific about this crucial problem. P. Pearle was the first to propose (Pearle 1990) a relativistic generalization of CSL to a quantum field theory describing a fermion field coupled to a meson scalar field, enriched with the introduction of stochastic and nonlinear terms. A quite detailed discussion of this proposal was presented in (Ghirardi et al. 1990a), where it was shown that the theory enjoys all the properties which are necessary in order to meet the relativistic constraints. Pearle's approach requires the precise formulation of the idea of stochastic Lorentz invariance.
The proposal can be summarized in the following terms: <br />One considers a fermion field coupled to a meson field and puts forward the idea of inducing localizations for the fermions through their coupling to the mesons and a stochastic dynamical reduction mechanism acting on the meson variables. In practice, one considers Heisenberg evolution equations for the coupled fields and a Tomonaga-Schwinger CSL-type evolution equation, with a skew-hermitian coupling to a c-number stochastic potential, for the state vector. This approach has been systematically investigated by Ghirardi, Grassi, and Pearle (1990a, 1990b), to which we refer the reader for a detailed discussion. Here we limit ourselves to stressing that, under certain approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white-noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume of the energy of the meson field is infinite, owing to the fact that infinitely many mesons are created. This point has also been lucidly discussed by Bell (1989b) in the talk he delivered at Trieste on the occasion of the 25th anniversary of the International Centre for Theoretical Physics. This talk appeared under the title <em>The Trieste Lecture of John Stewart Bell</em>, edited by A. Bassi and G.C. Ghirardi. For these reasons one cannot consider this a satisfactory example of a relativistic reduction model.<br />In the years following the just-mentioned attempts there has been a flourishing of research aimed at obtaining the desired result. Let us briefly comment on these efforts.<br />As already mentioned, the source of the divergences is the assumption of point interactions between the quantum field operators in the dynamical equation for the statevector, or, equivalently, the white character of the stochastic noise.<br /> Having this aspect in mind P. Pearle (1999), L.
Diosi (1990) and A. Bassi and G.C. Ghirardi (2002) reconsidered the problem from the beginning by investigating nonrelativistic theories with nonwhite Gaussian noises. The problem turns out to be very difficult from the mathematical point of view, but steps forward have been made. In recent years, a precise formulation of the nonwhite generalization (Bassi and Ferialdi 2009) of the so-called QMUPL model, which represents a simplified version of GRW and CSL, has been proposed. Moreover, a perturbative approach for the CSL model has been worked out (Adler and Bassi 2007, 2008). Further work is necessary. The program is very interesting at the nonrelativistic level; however, it is not yet clear whether it will lead to a real step forward in the development of relativistic theories of spontaneous collapse. <br />In the same spirit, Nicrosini and Rimini (Nicrosini 2003) tried to smear out the point interactions, without success: in their approach, a preferred reference frame had to be chosen in order to circumvent the nonintegrability of the Tomonaga-Schwinger equation. <br />Other interesting and quite different approaches have also been suggested. Among them we mention the one by Dove and Squires (Dove 1996), based on discrete rather than continuous stochastic processes, and those by Dowker and Herbauts (Dowker 2004a) and Dowker and Henson (Dowker 2004b), formulated on a discrete space-time.<br />Before going on we consider it important to call attention to the fact that in precisely the same years similar attempts to obtain a relativistic generalization of the other existing ‘exact’ theory, i.e., Bohmian Mechanics, were under way, and that they too encountered some difficulties. Relevant steps are represented by a paper (Dürr 1999) resorting to a preferred spacetime slicing, by the investigations of Goldstein and Tumulka (Goldstein 2003), and by those of other scientists (Berndl 1996).
However, we must recognize that none of these attempts has led to a theory without observers which, like Bohmian mechanics, is fully satisfactory from the relativistic point of view, precisely because they are not <em>genuinely Lorentz invariant</em> in the sense we have made precise above. Mention should also be made of the attempt by Dewdney and Horton (Dewdney 2001) to build a relativistically invariant model based on particle trajectories. <br />Let us come back to the relativistic DRP. Some important changes have occurred quite recently. Tumulka (2006a) succeeded in proposing a relativistic version of the GRW theory for N non-interacting distinguishable particles, based on the consideration of a multi-time wavefunction whose evolution is governed by Dirac-like equations; it adopts as its Primitive Ontology (see the next section) the one which attaches a primary role to the space-time points at which spontaneous localizations occur, as originally suggested by Bell (1987). To my knowledge this represents the first proposal of a relativistic dynamical reduction mechanism which satisfies all relativistic requirements. In particular, it is divergence free and foliation independent. However, it can deal only with systems containing a fixed number of noninteracting fermions. <br />At this point explicit mention should be made of the most recent steps concerning our problem. D. Bedingham (2011), following closely the original proposal by Pearle (1990) of a quantum field theory inducing reductions based on a Tomonaga-Schwinger equation, has worked out an analogous model which, however, overcomes the difficulties of the original one.
In fact, Bedingham has circumvented the crucial problems deriving from point interactions by (paying the price of) introducing, besides the fields characterizing the Quantum Field Theories he is interested in, an auxiliary relativistic field that amounts to a smearing of the interactions whilst preserving Lorentz invariance and frame independence. Adopting this point of view, and taking advantage also of the proposal by Ghirardi (2000) concerning the appropriate way to define objective properties at any space-time point <em>x</em>, he has been able to work out a fully satisfactory and consistent relativistic scheme for almost all quantum field theories in which reduction processes may occur.<br /> In view of these last results by Tumulka and Bedingham, and taking into account the interesting investigations concerning relativistic Bohmian-like theories, the conclusions that Tumulka has drawn concerning the status of attempts to account for the macro-objectification process from a relativistic perspective are well-founded:<br /><blockquote>A somewhat surprising feature of the present situation is that we seem to arrive at the following alternative: Bohmian mechanics shows that one can explain quantum mechanics, exactly and completely, if one is willing to pay with using a preferred slicing of spacetime; our model suggests that one should be able to avoid a preferred slicing of spacetime if one is willing to pay with a certain deviation from quantum mechanics, </blockquote>a conclusion that he has rephrased and reinforced in (Tumulka 2006c):<br /><blockquote> Thus, with the presently available models we have the alternative: either the conventional understanding of relativity is not right, or quantum mechanics is not exact. </blockquote>Very recently, a thorough and illuminating discussion of the important approach by Tumulka has been presented by Tim Maudlin (2011) in the third revised edition of his book <em>Quantum Non-Locality and Relativity</em>.
Tumulka's position is perfectly consistent with the present ideas concerning the attempts to transform relativistic standard quantum mechanics into an ‘exact’ theory in the sense which has been made precise by J. Bell. Since the only unified, mathematically precise and formally consistent formulations of the quantum description of natural processes are Bohmian mechanics and GRW-like theories, if one chooses the first alternative one has to accept the existence of a preferred reference frame, while in the second case one is not led to such a drastic change of position with respect to relativistic concepts but must accept that the ensuing theory—even though only in a presently non-testable manner—disagrees with the predictions of quantum mechanics and acquires the status of a rival theory with respect to it.<br />In spite of the fact that the situation is, to some extent, still open and requires further investigations, it has to be recognized that the efforts which have been spent on such a program have made possible a better understanding of some crucial points and have thrown light on some important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance (Ghirardi, Grassi, and Pearle 1990a). Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. 
However, in the case of macroscopic systems, the impossibility of attributing local properties to them (or, equivalently, the ambiguity associated with such properties) lasts only for time intervals of the order of those necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the considered macroscopic event. Finally, recent investigations (Ghirardi and Grassi 1994, 1996; Ghirardi 1996, 2000) have shown that the very formal structure of the theory is such that it does not allow one, even conceptually, to establish cause-effect relations between space-like separated events.<br />Accordingly, in concluding this section, we stress that the question of whether a relativistic dynamical reduction program can find a satisfactory formulation seems to admit a positive answer.<br />A last comment. Recently, a paper by Conway and Kochen (Conway 2006) has been published which has raised a lot of interest. A few words about it are in order, to clarify possible misunderstandings. The first and most important aim of the paper is the derivation of what the authors have called <em>The Free Will Theorem</em>, putting forward the provocative idea that if human beings are free to make their choices about the measurements they will perform on one of a pair of far-away entangled particles, then one must admit that the elementary particles involved in the experiment also have free will. One might make several comments on this statement. What matters for us here is that the authors claim that their theorem implies, as a byproduct, the impossibility of elaborating a relativistically invariant dynamical reduction model.
A lively debate has arisen; we refer the reader to the papers by Adler (2006), Bassi and Ghirardi (Bassi 2007), and Tumulka (2007), in which it is proved that the conclusion drawn by Conway and Kochen is not pertinent to the problem. The above authors have since replied (Conway et al. 2007) to all the criticisms raised in the just-mentioned papers. However, Goldstein et al. (2010) have made clear why the argument of Conway and Kochen is not pertinent. We may conclude that nothing in principle forbids a relativistic generalization of the GRW theory, and, actually, as repeatedly stressed previously, there are many elements which indicate that this is actually feasible.<br /><h2><a href="http://www.blogger.com/null" name="ColTheDefPer">10. Collapse Theories and Definite Perceptions</a></h2>Some authors (Albert and Vaidman 1989; Albert 1990, 1992) have raised an interesting objection concerning the emergence of definite perceptions within Collapse Theories. The objection is based on the fact that one can easily imagine situations leading to definite perceptions which nevertheless do not involve the displacement of a large number of particles up to the stage of the perception itself. These cases would then constitute actual measurement situations which cannot be described by the GRW theory, contrary to what happens in the idealized (according to the authors) situations considered in many presentations of it, i.e., those involving the displacement of some sort of pointer. To be more specific, the above papers consider a ‘measurement-like’ process whose output is the emission of a burst of a few photons, triggered by the position at which a particle hits a screen. This can easily be devised by considering, e.g., a Stern-Gerlach set-up in which the path followed by the microsystem, according to the value of its spin component, hits a fluorescent screen and excites a small number of atoms, which subsequently decay, emitting a small number of photons.
The argument goes as follows: if one triggers the apparatus with a superposition of two spin states, since only a few atoms are excited, since the excitations involve displacements which are smaller than the characteristic localization distance of GRW, since GRW does not induce reductions on photon states and, finally, since the photon states immediately overlap, there is no way for the spontaneous localization mechanism to become effective in suppressing the ensuing superposition of the states ‘photons emerging from point <em>A</em> of the screen’ and ‘photons emerging from point <em>B</em> of the screen’. On the other hand, since the visual perception threshold is quite low (about 6-7 photons), there is no doubt that the naked eye of a human observer is sufficient to detect whether the luminous spot on the screen is at <em>A</em> or at <em>B</em>. The conclusion follows: in the case under consideration no dynamical reduction can take place and as a consequence no measurement is over, no outcome is definite, up to the moment in which a conscious observer perceives the spot.<br />Aicardi et al. (1991) have presented a detailed answer to this criticism. The crucial points of the argument are the following: it is agreed that in the case considered the superposition persists for long times (actually the superposition must persist, since, the system under consideration being microscopic, one could perform interference experiments which everybody would expect to confirm quantum mechanics). However, to deal in the appropriate and correct way with such a criticism, one has to consider all the systems which enter into play (electron, screen, photons and brain) and the universal dynamics governing all relevant physical processes. 
A simple estimate of the number of ions involved in the visual perception mechanism makes it perfectly plausible that, in the process, a sufficient number of particles are displaced by a sufficient spatial amount to satisfy the conditions under which, according to the GRW theory, the suppression of the superposition of the two nervous signals will take place within the time scale of perception.<br />To avoid misunderstandings, this analysis by no means amounts to attributing a special role to the conscious observer or to perception. The observer's brain is simply the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such, it is the only place where the reduction can, and actually must, take place according to the theory. It is extremely important to stress that if in place of the eye of a human being one puts in front of the photon beams a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as one of these devices, if no other such device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that the GRW theory requires a conscious observer in order for measurements to have a definite outcome.<br />A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that GRW allows a physicalist explanation of conscious perception.
It is only pointed out that, as far as the purely physical aspects of the process are concerned, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are satisfied. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman.<br /><h2><a href="http://www.blogger.com/null" name="IntThePriOnt">11. The Interpretation of the Theory and its Primitive Ontologies </a></h2>As stressed in the opening sentences of this contribution, the most serious problem of standard quantum mechanics lies in its being extremely successful in telling us about <em>what we observe</em>, but being basically silent on <em>what is</em>. This specific feature is closely related to the probabilistic interpretation of the statevector, combined with the assumption that the theory is complete. Notice that what is under discussion is the probabilistic interpretation, not the probabilistic character, of the theory. Collapse theories also have a fundamentally stochastic character, but, due to their most specific feature, i.e., that of driving the statevector of any individual physical system into appropriate and physically meaningful manifolds, they allow for a different interpretation. One could even say (if one wants to avoid having them too, like the standard theory, speak only of <em>what we find</em>) that they <em>require</em> a different interpretation, one that accounts for our perceptions at the appropriate, i.e., macroscopic, level.<br />We must admit that this opinion is not universally shared. According to various authors, the ‘rules of the game’ embodied in the precise formulation of the GRW and CSL theories represent all there is to say about them.
However, this cannot be the whole story: stricter and more precise requirements than the purely formal ones must be imposed for a theory to be taken seriously as a fundamental description of natural processes (an opinion shared by J. Bell). This demand to go beyond the purely formal aspects of a theoretical scheme has been denoted as (the necessity of specifying) the Primitive Ontology (PO) of the theory in an extremely interesting recent paper (Allori <em>et al</em>. 2007, Other Internet Resources). The fundamental requisite of the PO is that it should make absolutely precise what the theory is fundamentally about.<br />This is not a new problem; as already mentioned, it was raised by J. Bell from his first presentation of the GRW theory onward. Let me summarize the terms of the debate. Given that the wavefunction of a many-particle system lives in a (high-dimensional) configuration space, which is not endowed with a direct physical meaning connected to our experience of the world around us, Bell wanted to identify the ‘local beables’ of the theory, the quantities on which one could base a description of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, he (Bell 1987, p. 45) suggested that the ‘GRW jumps’, which we called ‘hittings’, could play this role. In fact, they occur at precise times at precise positions in three-dimensional space. Following (Allori <em>et al</em>. 2007, Other Internet Resources) we will denote this position concerning the PO of the GRW theory as the ‘flashes ontology.’<br />However, later, Bell himself suggested that the most natural interpretation of the wavefunction in the context of a collapse theory would be that it describes the ‘density […] of stuff’ in the 3N-dimensional configuration space (Bell 1990, p. 30), the natural mathematical framework for describing a system of <em>N</em> particles. Allori <em>et al</em>.
(2007, Other Internet Resources) have appropriately pointed out that this position amounts to avoiding commitment about the PO of the theory and, consequently, to leaving vague the precise and meaningful connections it permits to be established between the mathematical description of the unfolding of physical processes and our perception of them.<br />The interpretation which, in the opinion of the present writer, is most appropriate for collapse theories has been proposed in a series of papers (Ghirardi, Grassi and Benatti 1995; Ghirardi 1997a, 1997b) and has been referred to in Allori <em>et al</em>. 2007 (Other Internet Resources) as ‘the mass density ontology’. Let us briefly describe it.<br />First of all, various investigations (Pearle and Squires 1994) had made clear that QMSL and CSL needed a modification: the characteristic localization frequency of the elementary constituents of matter had to be made proportional to the mass characterizing the particle under consideration. In particular, the original frequency for the hitting processes, <em>f</em> = 10<sup>−16</sup> sec<sup>−1</sup>, is the one characterizing the nucleons, while, e.g., electrons would suffer hittings with a frequency reduced by about 2000 times. Unfortunately we have no space to discuss here the physical reasons which make this choice appropriate; we refer the reader to the above paper, as well as to the recent detailed analysis by Peruzzi and Rimini (2000). With this modification, what the nonlinear dynamics strives to make ‘objectively definite’ is the mass distribution in the whole universe. Second, a deep critical reconsideration (Ghirardi, Grassi, and Benatti 1995) has made evident how the concept of ‘distance’ that characterizes the Hilbert space is inappropriate for accounting for the similarity or difference between macroscopic situations.
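The mass-proportional hitting frequency just described is simple arithmetic, and can be sketched as follows (the particle masses are standard values, not taken from the source; the frequency scaling rule itself is the one stated above):

```python
# Illustrative sketch: in the mass-proportional version of QMSL/CSL, the
# hitting frequency of a particle scales as f = f_nucleon * (m / m_nucleon).
# f_nucleon is the original GRW parameter quoted in the text; the masses
# below are standard particle-physics values.

F_NUCLEON = 1e-16        # hitting frequency for a nucleon, in s^-1
M_NUCLEON_MEV = 938.9    # average nucleon mass, MeV/c^2
M_ELECTRON_MEV = 0.511   # electron mass, MeV/c^2

def hitting_frequency(mass_mev):
    """Localization frequency proportional to the particle's mass."""
    return F_NUCLEON * mass_mev / M_NUCLEON_MEV

f_electron = hitting_frequency(M_ELECTRON_MEV)
suppression = F_NUCLEON / f_electron   # how much rarer electron hittings are

print(f"electron hitting frequency: {f_electron:.2e} s^-1")
print(f"reduced by a factor of about {suppression:.0f}")  # the 'about 2000' of the text
```

The mass ratio is about 1837, which is the origin of the "reduced by about 2000 times" figure quoted in the text.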
Just to give a convincing example, consider three states |<em>h</em>>, |<em>h</em>*> and |<em>t</em>> of a macrosystem (say, a massive macroscopic bulk of matter), the first corresponding to its being located here, the second to its having the same location but with one of its atoms (or molecules) in a state orthogonal to the corresponding state in |<em>h</em>>, and the third having exactly the same internal state as the first but being differently located (there). Then, despite the fact that the first two states are indistinguishable from each other at the macrolevel, while the first and the third correspond to completely different and directly perceivable situations, the Hilbert space distance between |<em>h</em>> and |<em>h</em>*> is equal to that between |<em>h</em>> and |<em>t</em>>.<br />When the localization frequency is related to the mass of the constituents, then, in complete generality (i.e., even when one is dealing with a body which is not almost rigid, such as a gas or a cloud), the mechanism leading to the suppression of the superpositions of macroscopically different states is fundamentally governed by the integral of the squared differences of the mass densities associated with the two superposed states. Actually, in the original paper (Ghirardi, Grassi and Benatti 1995) the mass density at a point was identified with its average over the characteristic volume of the theory, i.e., 10<sup>−15</sup> cm<sup>3</sup> around that point. It is however easy to convince oneself that there is no need to do so (Ghirardi 2007) and that the mass density at any point, directly identified by the statevector (see below), is the appropriate quantity on which to base an appropriate ontology.
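The contrast between the two notions of distance can be checked numerically in a toy model (not from the source: a 1-D grid, Gaussian single-particle states, and an assumed particle number N are all illustrative choices). The Hilbert distance treats |<em>h</em>>, |<em>h</em>*> and |<em>t</em>> as equally far apart, while the integrated squared mass-density difference separates the macroscopically distinct pair by many orders of magnitude:

```python
# Toy numerical illustration: Hilbert-space distance cannot distinguish
# "macroscopically identical" from "macroscopically different" product states,
# while the mass-density distance can.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)   # 1-D grid (arbitrary units)
dx = x[1] - x[0]
N = 1000                             # assumed number of unit-mass particles

def g0(x, c):
    """Normalized Gaussian ground state centred at c."""
    psi = np.exp(-0.5 * (x - c) ** 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

def g1(x, c):
    """First excited (Hermite) state at c, orthogonal to g0 at the same centre."""
    psi = (x - c) * np.exp(-0.5 * (x - c) ** 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

# |h> = all N particles 'here' (c=0); |h*> = same but one particle excited;
# |t> = all N particles 'there' (c=10).  For normalized product states the
# Hilbert distance is sqrt(2 - 2*Re<A|B>), with <A|B> a product of overlaps.
def hilbert_distance(overlaps):
    return np.sqrt(max(0.0, 2.0 - 2.0 * np.prod(overlaps)))

o_h_hstar = [np.sum(g0(x, 0) * g1(x, 0)) * dx] + [1.0] * (N - 1)  # one orthogonal factor
o_h_t = [np.sum(g0(x, 0) * g0(x, 10)) * dx] * N                   # all tiny overlaps

d_hilb_hstar = hilbert_distance(o_h_hstar)
d_hilb_t = hilbert_distance(o_h_t)

# Mass densities, and the distance as the integral of the squared difference.
m_h = N * g0(x, 0) ** 2
m_hstar = (N - 1) * g0(x, 0) ** 2 + g1(x, 0) ** 2
m_t = N * g0(x, 10) ** 2

def density_distance(m1, m2):
    return np.sum((m1 - m2) ** 2) * dx

d_mass_hstar = density_distance(m_h, m_hstar)
d_mass_t = density_distance(m_h, m_t)

print(d_hilb_hstar, d_hilb_t)   # both ~ sqrt(2): Hilbert distance is blind
print(d_mass_hstar, d_mass_t)   # tiny vs. huge: the macro difference shows up
```

Both Hilbert distances come out equal to √2 (the states are mutually orthogonal), whereas the mass-density distance between |<em>h</em>> and |<em>h</em>*> is of order one and that between |<em>h</em>> and |<em>t</em>> grows as N², exactly the behaviour the text appeals to.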
Accordingly, we take the following attitude: what the theory is about, what is real ‘out there’ at a given space point <strong><em>x</em></strong>, is just a field, i.e., a variable <em>m(<strong>x</strong>,t)</em> given by the expectation value of the mass density operator <em>M</em>(<strong><em>x</em></strong>) at <strong><em>x</em></strong>, obtained by multiplying the mass of any kind of particle times the number density operator for the considered type of particle and summing over all possible types of particles which can be present:<br /><ol start="7"><li><em>m</em>(<strong><em>x</em></strong>,<em>t</em>) = ⟨<em>F</em>,<em>t</em>|<em>M</em>(<strong><em>x</em></strong>)|<em>F</em>,<em>t</em>⟩; <br /><em>M</em>(<strong><em>x</em></strong>) = Sum<sub>(<em>k</em>)</sub><em>m</em><sub>(<em>k</em>)</sub><em>a*</em><sub>(<em>k</em>)</sub>(<strong><em>x</em></strong>)<em>a</em><sub>(<em>k</em>)</sub>(<strong><em>x</em></strong>).</li></ol>Here |<em>F</em>,<em>t</em>⟩ is the statevector characterizing the system at the given time, and <em>a*</em><sub>(<em>k</em>)</sub>(<strong><em>x</em></strong>) and <em>a</em><sub>(<em>k</em>)</sub>(<strong><em>x</em></strong>) are the creation and annihilation operators for a particle of type <em>k</em> at point <strong><em>x</em></strong>. It is obvious that within standard quantum mechanics such a function cannot be endowed with any objective physical meaning, due to the occurrence of linear superpositions which give rise to values that do not correspond to what we find in a measurement process or what we perceive. In the case of the GRW or CSL theories, if one considers only the states allowed by the dynamics, one can give a description of the world in terms of <em>m</em>(<strong><em>x</em></strong>,<em>t</em>), i.e., one recovers a physically meaningful account of physical reality in the usual 3-dimensional space and time.
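For readability, definition (7) can be transcribed into standard LaTeX notation (this is only a renotation of the formula above, not an addition to it):

```latex
% Equation (7): the mass density field as the expectation value of the
% mass density operator on the statevector |F,t>.
\begin{align}
m(\mathbf{x},t) &= \langle F,t \,|\, M(\mathbf{x}) \,|\, F,t \rangle, \\
M(\mathbf{x})   &= \sum_{k} m_{k}\, a^{\dagger}_{k}(\mathbf{x})\, a_{k}(\mathbf{x}),
\end{align}
```

where \(m_k\) is the mass of particles of type \(k\), and \(a^{\dagger}_{k}(\mathbf{x})\), \(a_{k}(\mathbf{x})\) create and annihilate a particle of type \(k\) at the point \(\mathbf{x}\).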
To illustrate this crucial point we consider, first of all, the embarrassing situation of a macroscopic object in a superposition of two differently located position states. We have then simply to recall that in a collapse model relating reductions to mass density differences, the dynamics suppresses in extremely short times the embarrassing superpositions of such states, recovering the mass distribution corresponding to our perceptions. Let us come now to a microsystem, and let us consider the equal-weight superposition of two states |<em>h</em>> and |<em>t</em>> describing a microscopic particle in two different locations. Such a state gives rise to a mass distribution corresponding to 1/2 of the mass of the particle in each of the two considered space regions. This seems, at first sight, to contradict what is revealed by any measurement process. But in such a case the theory implies that whenever one tries to locate the particle one will always find it in a definite position, i.e., one and only one of the Geiger counters which might be triggered by the passage of the particle will fire, just because a superposition of ‘a counter which has fired’ and ‘one which has not fired’ is dynamically forbidden.<br />This analysis shows that one can consider at all levels (the micro and the macroscopic ones) the field <em>m</em>(<strong>x</strong>,<em>t</em>) as accounting for ‘what is out there’, as originally suggested by Schrödinger with his realistic interpretation of the square of the wave function of a particle as representing the ‘fuzzy’ character of the mass (or charge) of the particle. Obviously, within standard quantum mechanics such a position cannot be maintained because ‘wavepackets diffuse, and with the passage of time become infinitely extended … but however far the wavefunction has extended, the reaction of a detector … remains spotty’, as appropriately remarked in (Bell 1990).
As we hope to have made clear, the picture is radically different when one takes into account the new dynamics, which succeeds perfectly in reconciling the spread features of the wavefunction with the sharp features of the detection process.<br />It is also extremely important to stress that, by resorting to the quantity (7), one can define an appropriate ‘distance’ between two states as the integral over the whole 3-dimensional space of the square of the difference of <em>m</em>(<strong><em>x</em></strong>,<em>t</em>) for the two given states, a quantity which turns out to be perfectly appropriate to ground the concept of macroscopically similar or distinguishable Hilbert space states. In turn, this distance can be used as a basis to define a sensible psychophysical correspondence within the theory.<br /><h2><a href="http://www.blogger.com/null" name="ProTaiWavFun">12. The Problem of the Tails of the Wave Function</a></h2>In recent years, there has been a lively debate around a problem which has its origin, according to some of the authors who have raised it, in the fact that the localization processes, even though they correspond to multiplying the wave function by a Gaussian and thus lead to wave functions strongly peaked around the position of the hitting, nevertheless allow the final wavefunction to be different from zero over the whole of space. The first criticism of this kind was raised by A. Shimony (1990) and can be summarized by his sentence,<br /><blockquote>one should not tolerate tails in wave functions which are so broad that their different parts can be discriminated by the senses, even if very low probability amplitude is assigned to them.</blockquote>After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated with a wave function which is different from zero over the whole of space.
If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding the pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken as unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem.<br />Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular from assuming that its modulus squared gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require abandoning the probabilistic interpretation and replacing it either with the ‘flash ontology’ or with the ‘mass density ontology’ discussed above.<br />Before entering into a detailed discussion of this subtle point we need to bring the problem into better focus. We cannot avoid making two remarks. Suppose one adopts, for the moment, the conventional quantum position. We agree that, within such a framework, the fact that wave functions never have strictly compact spatial support can be considered puzzling. However, this is an unavoidable problem arising directly from the mathematical features (spreading of wave functions) and from the probabilistic interpretation of the theory, and not at all a problem peculiar to the dynamical reduction models. Indeed, the fact that, e.g., the wave function of the centre of mass of a pointer or of a table does not have compact support has never been taken to be a problem for standard quantum mechanics. When, e.g., the wave function of a table is extremely well peaked around a given point in space, it has always been accepted that it describes a table located at that position, and that this corresponds in some way to our perception of it.
It is obviously true that, for the given wave function, the quantum rules entail that if a measurement were performed the table could be found (with an extremely small probability) kilometers away, but this <em>is not</em> the measurement or the macro-objectification problem of the standard theory. The latter concerns a completely different situation, i.e., that in which one is confronted with a superposition, with comparable weights, of two macroscopically separated wave functions, both of which possess tails (i.e., have non-compact support) but are appreciably different from zero only in far-away narrow intervals. This is the really embarrassing situation which conventional quantum mechanics is unable to make understandable. To which perception of the position of the pointer (of the table) does this wave function correspond?<br />The implications of adopting the QMSL theory for this problem should be obvious. Within GRW, superpositions of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations, are dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval). Correspondingly, the possibility arises of attributing to the system the property of being in a definite place, and thus of accounting for our definite perception of it.
Summarizing, we stress once more that the criticism about the tails, as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden, is exclusively motivated by uncritically committing oneself to the probabilistic interpretation of the theory, even as concerns the psycho-physical correspondence: when this position is taken, states assigning non-vanishing probabilities to different outcomes of position measurements should correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models can a wave function have compact support, taking such a position leads to the conclusion that it is the Hilbert space description of physical systems itself which has to be given up.<br />It ought to be stressed that there is nothing in the GRW theory which would make the choice of functions with compact support problematic for the purpose of the localizations, but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even if it has compact support at a given time, will instantaneously spread, acquiring a tail extending over the whole of space. If one sticks to the probabilistic interpretation and one accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided.<br />The solution to the tails problem can only derive from abandoning the probabilistic interpretation completely and adopting a more physical and realistic interpretation, one relating ‘what is out there’ to, e.g., the mass density distribution over the whole universe. In this connection, the following example will be instructive (Ghirardi, Grassi and Benatti 1995). Take a massive sphere of normal density and mass of about 1 kg. 
Classically, the mass of this body would be totally concentrated within the radius of the sphere, call it <em>r</em>. In QMSL, after the extremely short time interval in which the collapse dynamics leads to a ‘regime’ situation, and if one considers a sphere with radius <em>r</em> + 10<sup>−5</sup> cm, the integral of the mass density over the rest of space turns out to be an incredibly small fraction (of the order of one part in 10<sup>10<sup>15</sup></sup>) of the mass of a single proton. In such conditions, it seems quite legitimate to claim that the macroscopic body is localised within the sphere.<br />However, even this quite reasonable position has been questioned: it has been claimed (Lewis 1997) that the very existence of the tails implies that the enumeration principle (i.e., the fact that the claim ‘particle 1 is within this box & particle 2 is within this box & … & particle <em>n</em> is within this box & no other particle is within this box’ implies the claim ‘there are <em>n</em> particles within this box’) does not hold if one takes seriously the mass density interpretation of collapse theories. This paper has given rise to a long debate which it would be inappropriate to reproduce here. We refer the reader to the following papers: Ghirardi and Bassi (1999), Clifton and Monton (1999a, 1999b), Bassi and Ghirardi (1999, 2001). Various arguments have been presented for and against the criticism by Lewis.<br />We conclude this brief analysis by stressing once more that, in the opinion of the present writer, all the disagreements and misunderstandings concerning this problem originate in the fact that the authors who find difficulties in the proposed mass density interpretation of the Collapse Theories have not fully accepted the idea that the probabilistic interpretation of the wave function must be abandoned. 
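The mass density ontology invoked in this example can be stated compactly. As a sketch (the notation here is ours, following the definition given by Ghirardi, Grassi and Benatti 1995), for an <em>N</em>-particle system with wave function Ψ one takes as ‘what is out there’ the function

```latex
% Mass density at the point x at time t, for an N-particle system
% with particle masses m_i and wave function \Psi:
m(\mathbf{x},t) \;=\; \sum_{i=1}^{N} m_i
  \int d^3x_1 \cdots d^3x_N \,
  \delta^{(3)}(\mathbf{x}-\mathbf{x}_i)\,
  \bigl|\Psi(\mathbf{x}_1,\dots,\mathbf{x}_N,t)\bigr|^2 .
```

On this reading, the statement that the sphere is located within the radius <em>r</em> is a statement about where <em>m</em>(<b>x</b>,<em>t</em>) is overwhelmingly concentrated, not a probabilistic statement about outcomes of position measurements; the tails contribute to <em>m</em>(<b>x</b>,<em>t</em>) only the utterly negligible amount indicated above.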
For a recent reconsideration of the problem we refer the reader to the paper by Lewis (2003).<br /><h2><a href="http://www.blogger.com/null" name="13StaColModRecPosAboThe"> 13. The Status of Collapse Models and Recent Positions about them</a></h2>We recall that, as stated in Section 3, the macro-objectification problem has been at the centre of the liveliest and most challenging debate to arise from the quantum view of natural processes. According to the majority of those who adhere to the orthodox position, such a problem does not deserve particular attention: classical concepts are a logical prerequisite for the very formulation of quantum mechanics and, consequently, the measurement process itself, the dividing line between the quantum and the classical world, cannot and must not be investigated, but simply accepted. This position has been lucidly summarized by J. Bell himself (1981):<br /><blockquote> Making a virtue of necessity and influenced by positivistic and instrumentalist philosophies, many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one—if not actually immoral then certainly unprofessional.</blockquote>The situation has seen many changes in the course of time, and the necessity of making a clear distinction between what is quantum and what is classical has given rise to many proposals for ‘easy solutions’ to the problem which are based on the possibility, <em>for all practical purposes</em> (FAPP), of locating the splitting between these two faces of reality at different levels.<br />Then came Bohmian mechanics, a theory which has made clear, in a lucid and perfectly consistent way, that there is no reason of principle requiring a dichotomic description of the world. 
A universal dynamical principle governs all physical processes and, even though it completely agrees with standard quantum predictions, it implies wave-packet reduction in micro-macro interactions and the classical behaviour of classical objects.<br />As we have mentioned, the other consistent proposal, at the nonrelativistic level, of a conceptually satisfactory solution of the macro-objectification problem is represented by the Collapse Theories which are the subject of these pages. Contrary to Bohmian mechanics, they are rival theories of quantum mechanics, since they make different predictions (even though these are quite difficult to detect) concerning various physical processes.<br />Let us now analyze some of the recent critical positions concerning the two approaches just mentioned (in what follows I will take advantage of the nice analysis in a paper which I have been asked to referee and whose author is unknown to me). Various physicists have criticized Bohm's approach on the grounds that, being empirically indistinguishable from quantum mechanics, such an approach is an example of ‘bad science’ or of ‘a degenerate research program’. Needless to say, I do not consider such criticisms appropriate; the conceptual advantages and the internal consistency of the approach render it an extremely appealing theoretical scheme (incidentally, one should not forget that it was precisely the critical investigation of this theory that led Bell to derive his famous and conceptually extremely relevant inequality).<br />This being the situation, one would think that theories like the GRW model would be exempt from an analogous charge, since they actually are (in principle) empirically different from the standard theory. For instance, they disagree with the standard theory in forbidding the occurrence of macroscopic massive entangled states. In spite of this, they have been the object of an analogous attack by the adherents to the ‘new orthodoxy’ (Bub 1997; Joos et al. 
1996; Zurek, 1993), who point out that environment-induced decoherence shows that, FAPP, collapse theories are simply phenomenological accounts of the reduced state to which one has to resort since one has no control over the degrees of freedom of the environment. When one takes such a position, one is claiming that, essentially, GRW cannot be taken as a fundamental description of nature, mainly because it suffers from the limitation of being empirically indistinguishable from the standard theory, provided such a theory is correctly applied taking into account the actual physical situation. Also in this case, and even at the level at which such an analysis is performed, the practical indistinguishability from the standard approach should not be regarded as a sufficient reason not to take collapse models seriously. In fact, there are many well-known and compelling reasons (see, e.g., Bassi 2000; Adler 2003) to prefer a logically consistent unified theory to one which makes sense only due to the alleged <em>practical</em> impossibility of detecting the superpositions of macroscopically distinguishable states. At any rate, in principle, such theories can be tested against the standard one.<br />But this is not the whole story. 
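The scale of these in-principle differences can be sketched with the standard GRW parameter values (the numbers below are the commonly quoted order-of-magnitude choices, recalled here for orientation, not derived):

```latex
% Standard GRW parameters (order of magnitude):
\lambda \simeq 10^{-16}\ \mathrm{s}^{-1}
  \quad\text{(single-particle localization rate)},
\qquad
a = 1/\sqrt{\alpha} \simeq 10^{-5}\ \mathrm{cm}
  \quad\text{(localization width)}.
% For a superposition of macroscopically separated states of a body
% containing N nucleons, the effective collapse rate is amplified:
\lambda_{\mathrm{macro}} \simeq N\lambda,
\qquad
N \simeq 10^{23} \;\Rightarrow\;
\lambda_{\mathrm{macro}} \simeq 10^{7}\ \mathrm{s}^{-1}.
```

Thus a macroscopic superposition is suppressed within about 10<sup>−7</sup> s, while an isolated microsystem would undergo a spontaneous localization, on average, only once in hundreds of millions of years, which is why the deviations from standard quantum mechanics are so hard to exhibit in practice.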
Another criticism, aimed at denying the potential interest of collapse theories, makes reference to the fact that within any such theory the ensuing dynamics for the statistical operator can be considered as the reduced dynamics deriving from a unitary (and, consequently, essentially standard quantum) dynamics for the states of an enlarged Hilbert space of a composite quantum system <em>S+E</em> involving, besides the physical system <em>S</em> of interest, an ancilla <em>E</em> whose degrees of freedom are completely inaccessible: due to the quantum dynamical semigroup nature of the evolution equation for the statistical operator, any GRW-like model can always be seen as a phenomenological model deriving from a standard quantum evolution on a larger Hilbert space. In this way, the unitary deterministic evolution characterizing quantum mechanics would be fully restored.<br />Such a critical attitude, however, completely fails to grasp, and indeed purposefully ignores, the most important feature of collapse theories: they deal with individual quantum systems, not with statistical ensembles, and they yield a perfectly satisfactory description matching our perceptions concerning <em>individual macroscopic systems</em>. Invoking an inaccessible ancilla to account for the nonlinear and stochastic character of GRW-type theories is once more a purely verbal way of avoiding facing the really puzzling aspects of the quantum description of macroscopic systems. 
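For definiteness, the quantum-dynamical-semigroup structure at issue can be displayed explicitly; as a sketch, in the single-particle case the GRW evolution of the statistical operator takes the Lindblad form

```latex
% GRW master equation for the statistical operator (single particle),
% with Gaussian localization operators L_x of width 1/sqrt(alpha):
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
  \;-\; \lambda\!\left(\rho - \int d^3x\; L_{\mathbf{x}}\,\rho\,L_{\mathbf{x}}\right),
\qquad
L_{\mathbf{x}} \;=\; \left(\frac{\alpha}{\pi}\right)^{3/4}
  e^{-\alpha(\hat{\mathbf{q}}-\mathbf{x})^{2}/2},
% normalization: \int d^3x\, L_x^2 = 1.
```

It is precisely this semigroup form which admits a formal dilation to a unitary evolution on a larger Hilbert space; but the dilation concerns only the ensemble-level description and is silent about what happens to individual systems.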
This is not the only negative aspect of such a position; considering it legitimate to introduce inaccessible entities into the theory, given that there are infinitely many possible and inequivalent ways of doing so, really amounts to embarking on a ‘degenerate research program’.<br />Other reasons for ignoring the dynamical reduction program have been put forward recently by the community of scientists involved in the interesting and exciting field of quantum information. We will not spend much time analyzing and discussing this new position on the foundational issues which motivated the elaboration of collapse theories. The crucial fact is that, from this perspective, one takes the theory not to be about something real ‘occurring out there’ in a real world, but simply about information. This point is made extremely explicit in a recent paper (Zeilinger 2005):<br /><blockquote> information is the most basic notion of quantum mechanics, and it is information about possible measurement results that is represented in the quantum state. Measurement results are nothing more than states of the classical apparatus used by the experimentalist. The quantum system then is nothing other than the consistently constructed referent of the information represented in the quantum state.</blockquote>It is clear that if one takes such a position almost every motivation to be worried by the measurement problem disappears, and with it the reason to work out what Bell has denoted as ‘an exact version of quantum mechanics’. The most appropriate reply to this type of criticism is to recall that J. Bell (1990) included ‘information’ among the words which must have no place in a formulation with any pretension to physical precision. 
In particular he has stressed that one cannot even mention information unless one has given a precise answer to the following two questions: <em>Whose information?</em> and <em>Information about what?</em><br />A much more serious attitude is to call attention, as many authors do, to the fact that, since collapse theories are rivals of standard quantum mechanics, they lead to the identification of experimental situations which would allow, in principle, crucial tests to discriminate between the two. As we have discussed above, such tests do not at present seem readily feasible, but the analysis we have performed shows that they are not completely out of reach, and they will become feasible as soon as certain technological improvements in handling mesoscopic systems become available.<br /><h2><a href="http://www.blogger.com/null" name="Sum">Summary</a></h2>We hope to have succeeded in giving a clear picture of the ideas, the implications, the achievements and the problems of the DRP. We conclude by stressing once more our position with respect to the Collapse Theories. Their interest derives entirely from the fact that they have given some hints about a possible way out of the difficulties characterizing standard quantum mechanics, by proving that explicit and precise models can be worked out which agree with all known predictions of the theory and nevertheless make it possible, on the basis of a universal dynamics governing all natural processes, to overcome in a mathematically clean and precise way the basic problems of the standard theory. In particular, the Collapse Models show how one can work out a theory that makes it perfectly legitimate to take a macrorealistic position about natural processes, without contradicting any of the experimentally tested predictions of standard quantum mechanics. 
Finally, they might give precise hints about where to look in order to exhibit, experimentally, possible violations of the superposition principle.<br /><h2><a href="http://www.blogger.com/null" name="Bib">Bibliography</a></h2><ul class="hanging"><li>Adler, S., 2003, “Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson”, <em>Studies in History and Philosophy of Modern Physics</em>, 34: 135.</li><li>Adler, S., 2007, “Lower and Upper Bounds on CSL Parameters from Latent Image Formation and IGM Heating”, <em>Journal of Physics</em>, A40: 2935.</li><li>Adler, S. and Bassi, A., 2007, “Collapse models with non-white noises”, <em>Journal of Physics</em>, A40: 15083.</li><li>–––, 2008, “Collapse models with non-white noises II”, <em>Journal of Physics</em>, A41: 395308.</li><li>Adler, S. and Ramazanoglu, F.M., 2007, “Photon emission rate from atomic systems in the CSL model”, <em>Journal of Physics</em>, A40: 13395.</li><li>Aicardi, F., Borsellino, A., Ghirardi, G.C., and Grassi, R., 1991, “Dynamic models for state-vector reduction—Do they ensure that measurements have outcomes?”, <em>Foundations of Physics Letters</em>, 4: 109.</li><li>Albert, D.Z., 1990, “On the Collapse of the Wave Function”, in <em>Sixty-Two Years of Uncertainty</em>, A. Miller (ed.), Plenum, New York.</li><li>–––, 1992, <em>Quantum Mechanics and Experience</em>, Harvard University Press, Cambridge, Mass.</li><li>Albert, D.Z. and Vaidman, L., 1989, “On a proposed postulate of state reduction”, <em>Physics Letters</em>, A139: 1.</li><li>Arndt, M., Nairz, O., Voss-Andreae, J., van der Zouw, G. and Zeilinger, A., 1999, “Wave-particle duality of C60 molecules”, <em>Nature</em>, 401: 680.</li><li>Bassi, A. 
and Ferialdi, L., 2009, “Non-Markovian quantum trajectories: An exact result”, <em>Physical Review Letters</em>, 103: 050403.</li><li>–––, 2009, “Non-Markovian dynamics for a free quantum particle subject to spontaneous collapse in space: general solution and main properties”, <em>Physical Review</em>, A80: 012116.</li><li>Bassi, A. and Ghirardi, G.C., 1999, “More about dynamical reduction and the enumeration principle”, <em>British Journal for the Philosophy of Science</em>, 50: 719.</li><li>–––, 2000, “A general argument against the universal validity of the superposition principle”, <em>Physics Letters</em>, A275: 373.</li><li>–––, 2001, “Counting marbles: Reply to Clifton and Monton”, <em>British Journal for the Philosophy of Science</em>, 52: 125.</li><li>–––, 2002, “Dynamical reduction models with general Gaussian noises”, <em>Physical Review A</em>, 65: 042114.</li><li>–––, 2003, “Dynamical Reduction Models”, <em>Physics Reports</em>, 379: 257.</li><li>–––, 2007, “The Conway-Kochen argument and relativistic GRW models”, to appear in <em>Foundations of Physics</em>. Also quant-ph/0610209.</li><li>Bassi, A., Ippoliti, E. and Adler, S., 2005, “Towards Quantum Superpositions of a Mirror: an Exact Open Systems Analysis”, <em>Physical Review Letters</em>, 94: 030401.</li><li>Bedingham, D., 2011, “Relativistic state reduction dynamics”, <em>Foundations of Physics</em>, 41: 686.</li><li>Bell, J.S., 1981, “Bertlmann's socks and the nature of reality”, <em>Journal de Physique</em>, Colloque C2, suppl. au numero 3, Tome 42: 41.</li><li>–––, 1986, “Six possible worlds of quantum mechanics”, in <em>Proceedings of the Nobel Symposium 65: Possible Worlds in Arts and Sciences</em>, de Gruyter, New York.</li><li>–––, 1987, “Are there quantum jumps?”, in <em>Schrödinger—Centenary Celebration of a Polymath</em>, C.W. 
Kilmister (ed.), Cambridge University Press, Cambridge.</li><li>–––, 1989a, “Towards an Exact Quantum Mechanics”, in <em>Themes in Contemporary Physics II</em>, S. Deser, R.J. Finkelstein (eds.), World Scientific, Singapore.</li><li>–––, 1989b, “The Trieste Lecture of John Stewart Bell”, <em>Journal of Physics</em>, A40: 2919.</li><li>–––, 1990, “Against ‘measurement’”, in <em>Sixty-Two Years of Uncertainty</em>, A. Miller (ed.), Plenum, New York.</li><li>Benatti, F., Ghirardi, G.C., and Grassi, R., 1995, “Quantum Mechanics with Spontaneous Localization and Experiments”, in <em>Advances in Quantum Phenomena</em>, E. Beltrametti et al. (eds), Plenum, New York.</li><li>Berndl, K., Dürr, D., Goldstein, S., Zanghi, N., 1996, “Nonlocality, Lorentz Invariance, and Bohmian Quantum Theory”, <em>Physical Review</em>, A53: 2062.</li><li>Bohm, D., 1952, “A suggested interpretation of the quantum theory in terms of hidden variables. I & II”, <em>Physical Review</em>, 85: 166, <em>ibid</em>., 85: 180.</li><li>Bohm, D. and Bub, J., 1966, “A proposed solution of the measurement problem in quantum mechanics by a hidden variable theory”, <em>Reviews of Modern Physics</em>, 38: 453.</li><li>Born, M., 1971, <em>The Born-Einstein Letters</em>, Walker and Co., New York.</li><li>Brown, H.R., 1986, “The insolubility proof of the quantum measurement problem”, <em>Foundations of Physics</em>, 16: 857.</li><li>Bub, J., 1997, <em>Interpreting the Quantum World</em>, Cambridge University Press, Cambridge.</li><li>Busch, P. and Shimony, A., 1996, “Insolubility of the quantum measurement problem for unsharp observables”, <em>Studies in History and Philosophy of Modern Physics</em>, 27B: 397.</li><li>Butterfield, J., Fleming, G.N., Ghirardi, G.C., and Grassi, R., 1993, “Parameter dependence in dynamical models for state-vector reduction”, <em>International Journal of Theoretical Physics</em>, 32: 2287.</li><li>Clifton, R. 
and Monton, B., 1999a, “Losing your marbles in wavefunction collapse theories”, <em>British Journal for the Philosophy of Science</em>, 50: 697.</li><li>–––, 1999b, “Counting marbles with ‘accessible’ mass density: A reply to Bassi and Ghirardi”, <em>British Journal for the Philosophy of Science</em>, 51: 155.</li><li>Conway, J. and Kochen, S., 2006, “The Free Will Theorem”, to appear in <em>Foundations of Physics</em>. Also quant-ph/0604079.</li><li>–––, 2006b, “On Adler's Conway Kochen Twin Argument”, quant-ph/0610147, to appear in <em>Foundations of Physics</em>.</li><li>–––, 2007, “Reply to Comments of Bassi, Ghirardi and Tumulka on the Free Will Theorem”, quant-ph/0701016, to appear in <em>Foundations of Physics</em>.</li><li>Dowker, F. and Herbauts, I., 2004a, “Simulating Causal Collapse Models”, <em>Classical and Quantum Gravity</em>, 21: 2936.</li><li>–––, 2004b, “A Spontaneous Collapse Model on a Lattice”, <em>Journal of Statistical Physics</em>, 115: 1394.</li><li>d'Espagnat, B., 1971, <em>Conceptual Foundations of Quantum Mechanics</em>, W.A. Benjamin, Reading, Mass.</li><li>Dirac, P.A.M., 1948, <em>The Principles of Quantum Mechanics</em>, Clarendon Press, Oxford.</li><li>Dewdney, C. 
and Horton, G., 2001, “A non-local, Lorentz-invariant, hidden-variable interpretation of relativistic quantum mechanics based on particle trajectories”, <em>Journal of Physics A</em>, 34: 9871.</li><li>Diosi, L., 1990, “Relativistic theory for continuous measurement of quantum fields”, <em>Physical Review A</em>, 42: 5086.</li><li>Dürr, D., Goldstein, S., Münch-Berndl, K., Zanghi, N., 1999, “Hypersurface Bohm—Dirac models”, <em>Physical Review</em>, A60: 2729.</li><li>Eberhard, P., 1978, “Bell's theorem and different concepts of locality”, <em>Nuovo Cimento</em>, 46B: 392.</li><li>Fine, A., 1970, “Insolubility of the quantum measurement problem”, <em>Physical Review</em>, D2: 2783.</li><li>Fonda, L., Ghirardi, G.C., and Rimini A., 1973, “Evolution of quantum systems subject to random measurements”, <em>Nuovo Cimento</em>, 18B: 1.</li><li>–––, 1978, “Decay theory of unstable quantum systems”, <em>Reports on Progress in Physics</em>, 41: 587.</li><li>Fonda, L., Ghirardi, G.C., Rimini, A., and Weber, T., 1973, “Quantum foundations of exponential decay law”, <em>Nuovo Cimento</em>, 15A: 689.</li><li>Fu, Q., 1997, “Spontaneous radiation of free electrons in a nonrelativistic collapse model”, <em>Physical Review</em>, A56: 1806.</li><li>Gallis, M.R. 
and Fleming, G.N., 1990, “Environmental and spontaneous localization”, <em>Physical Review</em>, A42: 38.</li><li>Gerlich, S., Hackermüller, L., Hornberger, K., Stibor, A., Ulbricht, H., Gring, M., Goldfarb, F., Savas, T., Müri, M., Mayor, M. and Arndt, M., 2007, “A Kapitza-Dirac-Talbot-Lau interferometer for highly polarizable molecules”, <em>Nature Physics</em>, 3: 711.</li><li>Ghirardi, G.C., 1996, “Properties and events in a relativistic context: Revisiting the dynamical reduction program”, <em>Foundations of Physics Letters</em>, 9: 313.</li><li>–––, 1997a, “Quantum Dynamical Reduction and Reality: Replacing Probability Densities with Densities in Real Space”, <em>Erkenntnis</em>, 45: 349.</li><li>–––, 1997b, “Macroscopic Reality and the Dynamical Reduction Program”, in <em>Structures and Norms in Science</em>, M.L. Dalla Chiara (ed.), Kluwer, Dordrecht.</li><li>–––, 2000, “Local measurements of nonlocal observables and the relativistic reduction process”, <em>Foundations of Physics</em>, 30: 1337.</li><li>–––, 2007, “Some reflections inspired by my research activity in quantum mechanics”, <em>Journal of Physics A</em>, 40: 2891.</li><li>Ghirardi, G.C. and Bassi, A., 1999, “Do dynamical reduction models imply that arithmetic does not apply to ordinary macroscopic objects?”, <em>British Journal for the Philosophy of Science</em>, 50: 49.</li><li>Ghirardi, G.C. and Grassi, R., 1991, “Dynamical Reduction Models: some General Remarks”, in <em>Nuovi Problemi della Logica e della Filosofia della Scienza</em>, D. Costantini et al. (eds), Editrice Clueb, Bologna.</li><li>–––, 1994, “Outcome predictions and property attribution—The EPR argument reconsidered”, <em>Studies in History and Philosophy of Science</em>, 25: 397.</li><li>–––, 1996, “Bohm's Theory versus Dynamical Reduction”, in <em>Bohmian Mechanics and Quantum Theory: an Appraisal</em>, J. Cushing et al. 
(eds), Kluwer, Dordrecht.</li><li>Ghirardi, G.C., Grassi, R., and Benatti, F., 1995, “Describing the macroscopic world—Closing the circle within the dynamical reduction program”, <em>Foundations of Physics</em>, 25: 5.</li><li>Ghirardi, G.C., Grassi, R., Butterfield, J., and Fleming, G.N., 1993, “Parameter dependence and outcome dependence in dynamic models for state-vector reduction”, <em>Foundations of Physics</em>, 23: 341.</li><li>Ghirardi, G.C., Grassi, R., and Pearle, P., 1990a, “Relativistic dynamic reduction models—General framework and examples”, <em>Foundations of Physics</em>, 20: 1271.</li><li>–––, 1990b, “Relativistic Dynamical Reduction Models and Nonlocality”, in <em>Symposium on the Foundations of Modern Physics 1990</em>, P. Lahti and P. Mittelstaedt (eds), World Scientific, Singapore.</li><li>Ghirardi, G.C., Grassi, R., Rimini, A., and Weber, T., 1988, “Experiments of the Einstein-Podolsky-Rosen type involving CP-violation do not allow faster-than-light communication between distant observers”, <em>Europhysics Letters</em>, 6: 95.</li><li>Ghirardi, G.C., Pearle, P., and Rimini, A., 1990, “Markov-processes in Hilbert-space and continuous spontaneous localization of systems of identical particles”, <em>Physical Review</em>, A42: 78.</li><li>Ghirardi, G.C. and Rimini, A., 1990, “Old and New Ideas in the Theory of Quantum Measurement”, in <em>Sixty-Two Years of Uncertainty</em>, A. Miller (ed.), Plenum, New York .</li><li>Ghirardi, G.C., Rimini, A., and Weber, T., 1980, “A general argument against superluminal transmission through the quantum-mechanical measurement process”, <em>Lettere al Nuovo Cimento</em>, 27: 293.</li><li>–––, 1985, “A Model for a Unified Quantum Description of Macroscopic and Microscopic Systems”, in <em>Quantum Probability and Applications</em>, L. Accardi et al. 
(eds), Springer, Berlin.</li><li>–––, 1986, “Unified dynamics for microscopic and macroscopic systems”, <em>Physical Review</em>, D34: 470.</li><li>Gisin, N., 1984, “Quantum measurements and stochastic processes”, <em>Physical Review Letters</em>, 52: 1657, and “Reply”, <em>ibid</em>., 53: 1776.</li><li>–––, 1989, “Stochastic quantum dynamics and relativity”, <em>Helvetica Physica Acta</em>, 62: 363.</li><li>Goldstein, S. and Tumulka, R., 2003, “Opposite arrows of time can reconcile relativity and nonlocality”, <em>Classical and Quantum Gravity</em>, 20: 557.</li><li>Goldstein, S., Tausk, D.V., Tumulka, R., and Zanghi, N., 2010, “What does the Free Will Theorem Actually Prove?”, <em>Notices of the American Mathematical Society</em>, 57: 1451.</li><li>Gottfried, K., 2000, “Does Quantum Mechanics Carry the Seeds of its own Destruction?”, in <em>Quantum Reflections</em>, D. Amati et al. (eds), Cambridge University Press, Cambridge.</li><li>Hackermüller, L., Hornberger, K., Brezger, B., Zeilinger, A. and Arndt, M., 2004, “Decoherence of matter waves by thermal emission of radiation”, <em>Nature</em>, 427: 711.</li><li>Jarrett, J.P., 1984, “On the physical significance of the locality conditions in the Bell arguments”, <em>Nous</em>, 18: 569.</li><li>Joos, E., Zeh, H.D., Kiefer, C., Giulini, D., Kupsch, J., and Stamatescu, I.-O., 1996, <em>Decoherence and the Appearance of a Classical World</em>, Springer, Berlin.</li><li>Lewis, P., 1997, “Quantum mechanics, orthogonality and counting”, <em>British Journal for the Philosophy of Science</em>, 48: 313.</li><li>–––, 2003, “Four strategies for dealing with the counting anomaly in spontaneous collapse theories of quantum mechanics”, <em>International Studies in the Philosophy of Science</em>, 17: 137.</li><li>Marshall, W., Simon, C., Penrose, R. 
and Bouwmeester, D., 2003, “Towards quantum superpositions of a mirror”, <em>Physical Review Letters</em>, 91: 130401.</li><li>Maudlin, T., 2011, <em>Quantum Non-Locality and Relativity</em>, Wiley-Blackwell.</li><li>Nicrosini, O. and Rimini, A., 2003, “Relativistic spontaneous localization: a proposal”, <em>Foundations of Physics</em>, 33: 1061.</li><li>Pais, A., 1982, <em>Subtle is the Lord</em>, Oxford University Press, Oxford.</li><li>Pearle, P., 1976, “Reduction of statevector by a nonlinear Schrödinger equation”, <em>Physical Review</em>, D13: 857.</li><li>–––, 1979, “Toward explaining why events occur”, <em>International Journal of Theoretical Physics</em>, 18: 489.</li><li>–––, 1989, “Combining stochastic dynamical state-vector reduction with spontaneous localization”, <em>Physical Review</em>, A39: 2277.</li><li>–––, 1990, “Toward a Relativistic Theory of Statevector Reduction”, in <em>Sixty-Two Years of Uncertainty</em>, A. Miller (ed.), Plenum, New York.</li><li>–––, 1999, “Collapse Models”, in <em>Open Systems and Measurement in Relativistic Quantum Theory</em>, H.P. Breuer and F. Petruccione (eds.), Springer, Berlin.</li><li>–––, 1999b, “Relativistic Collapse Model With Tachyonic Features”, <em>Physical Review</em>, A59: 80.</li><li>Pearle, P. and Squires, E., 1994, “Bound-state excitation, nucleon decay experiments, and models of wave-function collapse”, <em>Physical Review Letters</em>, 73: 1.</li><li>Penrose, R., 1989, <em>The Emperor's New Mind</em>, Oxford University Press, Oxford.</li><li>Peruzzi, G. and Rimini, A., 2000, “Compoundation invariance and Bohmian mechanics”, <em>Foundations of Physics</em>, 30: 1445.</li><li>Rae, A.I.M., 1990, “Can GRW theory be tested by experiments on SQUIDs?”, <em>Journal of Physics</em>, A23: 57.</li><li>Rimini, A., 1995, “Spontaneous Localization and Superconductivity”, in <em>Advances in Quantum Phenomena</em>, E. Beltrametti et al. 
(eds.), Plenum, New York.</li><li>Schrödinger, E., 1935, “Die gegenwärtige Situation in der Quantenmechanik”, <em>Naturwissenschaften</em>, 23: 807.</li><li>Schilpp, P.A. (ed.), 1949, <em>Albert Einstein: Philosopher-Scientist</em>, Tudor, New York.</li><li>Shimony, A., 1974, “Approximate measurement in quantum-mechanics. 2”, <em>Physical Review</em>, D9: 2321.</li><li>–––, 1983, “Controllable and uncontrollable non-locality”, in <em>Proceedings of the International Symposium on the Foundations of Quantum Mechanics</em>, S. Kamefuchi et al. (eds), Physical Society of Japan, Tokyo.</li><li>–––, 1989, “Search for a worldview which can accommodate our knowledge of microphysics”, in <em>Philosophical Consequences of Quantum Theory</em>, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.</li><li>–––, 1990, “Desiderata for modified quantum dynamics”, in <em>PSA 1990</em>, Volume 2, A. Fine, M. Forbes and L. Wessels (eds), Philosophy of Science Association, East Lansing, Michigan.</li><li>Squires, E., 1991, “Wave-function collapse and ultraviolet photons”, <em>Physics Letters</em>, A 158: 431.</li><li>Stapp, H.P., 1989, “Quantum nonlocality and the description of nature”, in <em>Philosophical Consequences of Quantum Theory</em>, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.</li><li>Suppes, P. and Zanotti, M., 1976, “On the determinism of hidden variables theories with strict correlation and conditional statistical independence of observables”, in <em>Logic and Probability in Quantum Mechanics</em>, P. 
Suppes (ed.), Reidel, Dordrecht.</li><li>Tumulka, R., 2006a, “A Relativistic Version of the Ghirardi-Rimini-Weber Model”, <em>Journal of Statistical Physics</em>, 125: 821.</li><li>–––, 2006b, “On Spontaneous Wave Function Collapse and Quantum Field Theory”, <em>Proceedings of the Royal Society, London</em>, A462: 1897.</li><li>–––, 2006c, “Collapse and Relativity”, in <em>Quantum Mechanics: Are there Quantum Jumps? and On the Present Status of Quantum Mechanics</em>, A. Bassi, D. Dürr, T. Weber and N. Zanghi (eds), AIP Conference Proceedings 844, American Institute of Physics.</li><li>–––, 2007, “Comment on The Free Will Theorem”, to appear in <em>Foundations of Physics</em>. Also quant-ph/0611283.</li><li>van Fraassen, B., 1982, “The Charybdis of Realism: Epistemological Implications of Bell's Inequality”, <em>Synthese</em>, 52: 25.</li><li>Zeilinger, A., 2005, “The message of the quantum”, <em>Nature</em>, 438: 743.</li><li>Zurek, W.H., 1993, “Decoherence—A reply to comments”, <em>Physics Today</em>, 46: ???.</li></ul>Computer simulations reveal universal increase in electrical conductivity. 
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-GgiEEn8TXh0/Ugh2aNbJPSI/AAAAAAAACYA/UCSjgNgA414/s1600/3-computersimu.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://1.bp.blogspot.com/-GgiEEn8TXh0/Ugh2aNbJPSI/AAAAAAAACYA/UCSjgNgA414/s320/3-computersimu.jpg" width="226" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">This image shows a simulated correlation function for thermally excited charge pairs in a strong electric field. The lattice simulations provide access to atomic-scale details, giving new insights into the universal increase of electric conductivity predicted by Onsager in 1934. Credit: London Centre for Nanotechnology.<br /> </td></tr></tbody></table><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2013-08-simulations-reveal-universal-electrical.html"><span style="color: yellow;"><strong>Phys.org</strong></span></a></div><div style="text-align: center;"><strong>-----------------------</strong></div><strong>Computer simulations have revealed how the electrical conductivity of many materials increases with a strong electrical field in a universal way. This development could have significant implications for practical systems in electrochemistry, biochemistry, electrical engineering and beyond. </strong><br /><strong>The study, published in <i>Nature Materials</i>, investigated the </strong><a class="textTag" href="http://phys.org/tags/electrical+conductivity/" rel="tag"><strong>electrical conductivity</strong></a><strong> of a solid electrolyte, a system of positive and negative atoms on a </strong><a class="textTag" href="http://phys.org/tags/crystal+lattice/" rel="tag"><strong>crystal lattice</strong></a><strong>. 
The behaviour of this system is an indicator of the universal behaviour occurring within a broad range of materials from pure water to conducting glasses and </strong><a class="textTag" href="http://phys.org/tags/biological+molecules/" rel="tag"><strong>biological molecules</strong></a><strong>.</strong><br /><strong>Electrical conductivity, a measure of how strongly a given material conducts the flow of electric current, is generally understood in terms of Ohm's law, which states that the conductivity is independent of the magnitude of an applied electric field, i.e. the voltage per metre.</strong><br /><strong>This law is widely obeyed in weak applied fields, which means that most material samples can be ascribed a definite </strong><a class="textTag" href="http://phys.org/tags/electrical+resistance/" rel="tag"><strong>electrical resistance</strong></a><strong>, measured in Ohms.</strong><br /><strong>However, at strong electric fields, many materials show a departure from Ohm's law, whereby the conductivity increases rapidly with increasing field. The reason for this is that new current-carrying charges within the material are liberated by the electric field, thus increasing the conductivity.</strong><br /><strong>Remarkably, for a large class of materials, the form of the conductivity increase is universal - it doesn't depend on the material involved, but instead is the same for a wide range of dissimilar materials.</strong><br /><strong>The universality was first comprehended in 1934 by the future Nobel Laureate Lars Onsager, who derived a theory for the conductivity increase in electrolytes like acetic acid, where it is called the "second Wien effect". 
Onsager's theory has recently been applied to a wide variety of systems, including biochemical conductors, glasses, ion-exchange membranes, semiconductors, solar cell materials and "magnetic monopoles" in spin ice.</strong><br /><strong>Researchers at the London Centre for Nanotechnology (LCN), the Max Planck Institute for Complex Systems in Dresden, Germany and the University of Lyon, France, succeeded for the first time in using computer simulations to look at the second Wien effect. The study, by Vojtech Kaiser, Steve Bramwell, Peter Holdsworth and Roderich Moessner, reveals new details of the universal effect that will help interpret a wide variety of experiments.</strong><br /><strong>Professor Steve Bramwell of the LCN said: "Onsager's Wien effect is of practical importance and contains beautiful physics: with </strong><a class="textTag" href="http://phys.org/tags/computer+simulations/" rel="tag"><strong>computer simulations</strong></a><strong> we can finally explore and expose its secrets at the atomic scale.</strong><br /><strong>"As modern science and technology increasingly explores high electric fields, the new details of high field conduction revealed by these simulations will have increasing importance."</strong>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-78666743969998713112013-08-11T22:36:00.002-07:002013-08-11T22:36:29.948-07:00Quantum Field Theory: What is QFT? 
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-9KBLfmtrrNI/Ughxpg5KviI/AAAAAAAACXY/1drO1obiDJY/s1600/fdiagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-9KBLfmtrrNI/Ughxpg5KviI/AAAAAAAACXY/1drO1obiDJY/s1600/fdiagram.png" /></a></div><div style="text-align: center;"><strong>Source:</strong></div><div style="text-align: center;"> <a href="http://plato.stanford.edu/entries/quantum-field-theory/"><span style="color: yellow;">Stanford Encyclopedia of Philosophy</span></a> </div><div style="text-align: center;">--------------------------------------------------------</div>Quantum Field Theory (QFT) is the mathematical and conceptual framework for contemporary elementary particle physics. In a rather informal sense QFT is the extension of quantum mechanics (QM), dealing with particles, over to fields, i.e. systems with an infinite number of degrees of freedom. (See the entry on <a href="http://plato.stanford.edu/entries/qm/">quantum mechanics</a>.) In the last few years QFT has become a more widely discussed topic in philosophy of science, with questions ranging from methodology and semantics to ontology. QFT taken seriously in its metaphysical implications seems to give a picture of the world which is at variance with central classical conceptions of particles and fields, and even with some features of QM.<br />The following sketches how QFT describes fundamental physics and what the status of QFT is among other theories of physics. Since there is a strong emphasis on those aspects of the theory that are particularly important for interpretive inquiries, it does not replace an introduction to QFT as such. One main group of target readers is philosophers who want to get a first impression of some issues that may be of interest for their own work; another is physicists who are interested in a philosophical view of QFT. 
<br /><!-- Entry Contents --> <br /><ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Intro">1. What is QFT?</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#BasStrStaFor">2. The Basic Structure of the Conventional Formulation</a> <ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#LagForQFT">2.1 The Lagrangian Formulation of QFT</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Int">2.2 Interaction</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#GauInv">2.3 Gauge Invariance</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#EffFieTheRen">2.4 Effective Field Theories and Renormalization</a></li></ul></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#BeyStaMod">3. Beyond the Standard Model</a> <ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#QuantumGravity">3.1 Quantum Gravity</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#StrThe">3.2 String Theory</a></li></ul></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#AltApp">4. Axiomatic Reformulations of QFT</a> <ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#DefStaForQFT">4.1 Deficiencies of the Conventional Formulation of QFT</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#AlgPoiVie">4.2 Algebraic Approaches to QFT</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#BasIdeAQF">4.3 Basic Ideas of AQFT</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#AQFPhi">4.4 AQFT and the Philosopher</a></li></ul></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#PhiIss">5. 
Philosophical Issues</a> <ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Ont">5.1 Setting the Stage: Candidate Ontologies</a> <ul><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Part">5.1.1 The Particle Interpretation</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Field">5.1.2 The Field Interpretation</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#OSR">5.1.3 Ontic Structural Realism</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Trope">5.1.4 Trope Ontology</a></li></ul></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Wigner">5.2 Did Wigner Define the Particle Concept?</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#NonLoc">5.3 Non-Localizability Theorems</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#InRep">5.4 Inequivalent Representations</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#SymHeuObj">5.5 The Role of Symmetries</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#TakSto">5.6 Taking Stock: Where do we Stand?</a></li></ul></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Bib">Bibliography</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Aca">Academic Tools</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Oth">Other Internet Resources</a></li><li><a href="http://plato.stanford.edu/entries/quantum-field-theory/#Rel">Related Entries</a></li></ul><!-- Entry Contents --> <br /><hr /><h2><a href="http://www.blogger.com/null" name="Intro">1. What is QFT?</a></h2>In contrast to many other physical theories there is no canonical definition of what QFT is. Instead one can formulate a number of totally different explications, all of which have their merits and limits. 
One reason for this diversity is the fact that QFT has grown successively in a very complex way. Another reason is that the interpretation of QFT is particularly obscure, so that even the spectrum of options is not clear. Possibly the best and most comprehensive understanding of QFT is gained by dwelling on its relation to other physical theories, foremost with respect to QM, but also with respect to classical electrodynamics, Special Relativity Theory (SRT) and Solid State Physics or more generally Statistical Physics. However, the connection between QFT and these theories is also complex and cannot be neatly described step by step.<br />If one thinks of QM as the modern theory of one particle (or, perhaps, a very few particles), one can then think of QFT as an extension of QM for analysis of systems with many particles—and therefore with a large number of degrees of freedom. In this respect going from QM to QFT is not inevitable but rather beneficial for pragmatic reasons. However, a general threshold is crossed when it comes to fields, like the electromagnetic field, which are not merely difficult but impossible to deal with in the frame of QM. Thus the transition from QM to QFT allows treatment of both particles and fields within a uniform theoretical framework. (As an aside, focusing on the number of particles, or degrees of freedom respectively, explains why the famous renormalization group methods can be applied in QFT as well as in Statistical Physics. The reason is simply that both disciplines study systems with a large or an infinite number of degrees of freedom, either because one deals with fields, as does QFT, or because one studies the thermodynamic limit, a very useful artifice in Statistical Physics.) Moreover, issues regarding the number of particles under consideration yield yet another reason why we need to extend QM. 
Neither QM nor its immediate relativistic extension with the Klein-Gordon and Dirac equations can describe systems with a variable number of particles. However, this is obviously essential for a theory that is supposed to describe scattering processes, where particles of one kind are destroyed while others are created.<br />One gets a very different kind of access to what QFT is when focusing on its relation to QM and SRT. One can say that QFT results from the successful reconciliation of QM and SRT. In order to understand the initial problem one has to realize that QM is not only in a <em>potential</em> conflict with SRT (more precisely, with the locality postulate of SRT) because of the famous EPR correlations of entangled quantum systems. There is also a manifest contradiction between QM and SRT on the level of the dynamics. The Schrödinger equation, i.e. the fundamental law for the temporal evolution of the quantum mechanical state function, cannot possibly obey the relativistic requirement that all physical laws of nature be invariant under Lorentz transformations. The Klein-Gordon and Dirac equations, resulting from the search for relativistic analogues of the Schrödinger equation in the 1920s, do respect the requirement of Lorentz invariance. Nevertheless, ultimately they are not satisfactory because they do not permit a description of fields in a principled quantum-mechanical way.<br />Fortunately, for various phenomena it is legitimate to neglect the postulates of SRT, namely when the relevant velocities are small in relation to the speed of light and when the kinetic energies of the particles are small compared to their mass energies <em>mc</em><sup>2</sup>. And this is the reason why non-relativistic QM, although it cannot be the correct theory in the end, has its empirical successes. 
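A quick back-of-the-envelope estimate (added here as an illustration, not from the text) shows just how small the relativistic correction is in ordinary atomic physics. In the Bohr model the electron in hydrogen moves at v/c of the order of the fine-structure constant α:

```python
import math

# Check of the non-relativistic condition for the hydrogen ground-state
# electron, where v/c ~ alpha in the Bohr model (illustrative estimate):
alpha = 1 / 137.036   # fine-structure constant
beta = alpha          # v/c for the electron in hydrogen
gamma = 1 / math.sqrt(1 - beta ** 2)

# Fractional relativistic correction (~ beta**2 / 2 for small beta):
correction = gamma - 1
print(f"v/c = {beta:.5f}, gamma - 1 = {correction:.2e}")
# The correction is of order 1e-5, which is why non-relativistic QM
# describes ordinary atomic physics so well.
```

At accelerator energies, by contrast, β approaches 1 and this correction becomes of order unity, which is why scattering experiments force the transition to a relativistic framework.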
But it can never be the appropriate framework for electromagnetic phenomena because electrodynamics, which prominently encompasses a description of the behavior of light, is already relativistically invariant and therefore incompatible with QM. Scattering experiments are another context in which QM fails. Since the involved particles are often accelerated almost up to the speed of light, relativistic effects can no longer be neglected. For that reason scattering experiments can only be correctly grasped by QFT.<br />Unfortunately, the catchy characterization of QFT as the successful merging of QM and SRT has its limits. On the one hand, as already mentioned above, there is also a relativistic QM, with the Klein-Gordon and the Dirac equation among its most famous results. On the other hand, and this may come as a surprise, it is possible to formulate a non-relativistic version of QFT (see Bain 2011). The nature of QFT thus cannot simply be that it reconciles QM with the requirement of relativistic invariance. Consequently, for a discriminating criterion it is more appropriate to say that only QFT, and not QM, allows describing systems with an infinite number of degrees of freedom, i.e. fields (and systems in the thermodynamic limit). According to this line of reasoning, QM would be the modern (as opposed to classical) theory of particles and QFT the modern theory of particles <em>and</em> fields. Unfortunately, however, and this shall be the last turn, even this gloss is not untarnished. There is a widely discussed no-go theorem by Malament (1996) with the following proposed interpretation: Even the quantum mechanics of one single particle can only be consonant with the locality principle of special relativity theory in the framework of a field theory, such as QFT. 
Hence, ultimately, the two characterizations of QFT (on the one hand, as the quantum physical description of systems with an infinite number of degrees of freedom; on the other hand, as the only way of reconciling QM with special relativity theory) are intimately connected with one another. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-5DPozTPxrRg/Ugh0RhEagUI/AAAAAAAACXw/QAx3AuUl-yQ/s1600/cm-qm-qft.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="158" src="http://3.bp.blogspot.com/-5DPozTPxrRg/Ugh0RhEagUI/AAAAAAAACXw/QAx3AuUl-yQ/s320/cm-qm-qft.png" width="320" /></a></div><strong>Figure 1.</strong> <br />The diagram depicts the relations between different theories, where Non-Relativistic Quantum Field Theory is not a historical theory but rather an ex post construction that is illuminating for conceptual purposes. Theoretically, [(i), (ii), (iii)], [(ii), (i), (iii)] and [(ii), (iii), (i)] are three possible ways to get from Classical Mechanics to Relativistic Quantum Field Theory. But note that this is meant as a conceptual decomposition; history did not take all of these steps separately. On the one hand, by good luck, so to say, classical electrodynamics is relativistically invariant already, so that its successful quantization leads directly to Relativistic Quantum Field Theory. On the other hand, some would argue (e.g. Malament 1996) that the only way to reconcile QM and SRT is in terms of a field theory, so that (ii) and (iii) would coincide. Note that the steps (i), (ii) and (iii), i.e. quantization, transition to an infinite number of degrees of freedom, and reconciliation with SRT, are all ontologically relevant. In other words, by these steps the nature of the physical entities the theories talk about may change fundamentally. 
See Huggett 2003 for an alternative three-dimensional “map of theories”.<br /><strong>Further Reading on QFT and Philosophy of QFT</strong>. Mandl and Shaw (2010), Peskin and Schroeder (1995), Weinberg (1995) and Weinberg (1996) are standard textbooks on QFT. Teller (1995) and Auyang (1995) are the first systematic monographs on the philosophy of QFT. Brown and Harré (1988), Cao (1999) and Kuhlmann et al. (2002) are anthologies with contributions by physicists and philosophers (of physics); the last has a focus on ontological issues. The literature on the philosophy of QFT has increased significantly in the last decade. Besides a number of separate papers there are two new monographs, Cao (2010) and Kuhlmann (2010), and one special issue (May 2011) of <em>Studies in History and Philosophy of Modern Physics</em>. Bain (2011), Huggett (2000) and Ruetsche (2002) provide article-length discussions on a number of issues in the philosophy of QFT.<br />See also the following supplementary document: <br /><blockquote><a href="http://plato.stanford.edu/entries/quantum-field-theory/qft-history.html">The History of QFT</a>. </blockquote><h2><a href="http://www.blogger.com/null" name="BasStrStaFor">2. The Basic Structure of the Conventional Formulation</a></h2><h3><a href="http://www.blogger.com/null" name="LagForQFT">2.1 The Lagrangian Formulation of QFT</a></h3>The crucial step towards <em>quantum</em> field theory is in some respects analogous to the corresponding quantization in quantum mechanics, namely by imposing commutation relations, which leads to operator valued quantum fields. The starting point is the classical Lagrangian formulation of mechanics, which is a so-called analytical formulation as opposed to the standard version of Newtonian mechanics. 
A generalized notion of momentum (the <em>conjugate</em> or <em>canonical</em> momentum) is defined by setting <em>p</em> = ∂<em>L</em>/∂<em>q̇</em>, where <em>L</em> is the Lagrange function <em>L</em> = <em>T</em> − <em>V</em> (<em>T</em> is the kinetic energy and <em>V</em> the potential) and <em>q̇</em> ≡ <em>dq</em>/<em>dt</em>. This definition can be motivated by looking at the special case of a Lagrange function with a potential <em>V</em> which depends only on the position so that (using Cartesian coordinates) ∂<em>L</em>/∂<em>ẋ</em> = (∂/∂<em>ẋ</em>)(<em>m</em><em>ẋ</em><sup>2</sup>/2) = <em>m</em><em>ẋ</em> = <em>p</em><sub><em>x</em></sub>. Under these conditions the generalized momentum coincides with the usual mechanical momentum. In classical Lagrangian <em>field</em> theory one associates with the given field φ a second field, namely the conjugate field<br /><blockquote>(3.1) π = ∂<span class="scriptuc">L</span>/∂<em>φ̇</em> </blockquote>where <span class="scriptuc">L</span> is a Lagrangian density. The field φ and its conjugate field π are the direct analogues of the canonical coordinate <em>q</em> and the generalized (canonical or conjugate) momentum <em>p</em> in classical mechanics of point particles.<br />In both cases, QM and QFT, requiring that the canonical variables satisfy certain commutation relations implies that the basic quantities become operator valued. From a physical point of view this shift implies a restriction of possible measurement values for physical quantities, some (but not all) of which can now take their values only in discrete steps. 
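Definition (3.1) can be illustrated with a standard textbook case not spelled out above, the free Klein-Gordon field:

```latex
\mathcal{L} \;=\; \tfrac{1}{2}\,\dot{\varphi}^{\,2}
            \;-\; \tfrac{1}{2}\,(\nabla\varphi)^{2}
            \;-\; \tfrac{1}{2}\,m^{2}\varphi^{2},
\qquad
\pi \;=\; \frac{\partial \mathcal{L}}{\partial \dot{\varphi}} \;=\; \dot{\varphi},
```

in exact analogy to point mechanics, where <em>L</em> = <em>m</em><em>ẋ</em><sup>2</sup>/2 − <em>V</em>(<em>x</em>) yields <em>p</em> = ∂<em>L</em>/∂<em>ẋ</em> = <em>m</em><em>ẋ</em>.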
In QFT the canonical commutation relations for a field φ and the corresponding conjugate field π are<br /><blockquote><table><tbody><tr><td>(3.2) </td><td>[φ(<strong>x</strong>,<em>t</em>), π(<strong>y</strong>,<em>t</em>)]</td><td>= </td><td>iδ<sup>3</sup>(<strong>x</strong> − <strong>y</strong>)</td></tr><tr><td></td><td>[φ(<strong>x</strong>,<em>t</em>), φ(<strong>y</strong>,<em>t</em>)]</td><td>= </td><td>[π(<strong>x</strong>,<em>t</em>), π(<strong>y</strong>,<em>t</em>)] = 0</td></tr></tbody></table></blockquote>which are equal-time commutation relations, i.e., the commutators always refer to fields at the same time. It is not obvious that the equal-time commutation relations are Lorentz invariant but one can formulate a manifestly covariant form of the canonical commutation relations. If the field to be quantized is not a bosonic field, like the Klein-Gordon field or the electromagnetic field, but a fermionic field, like the Dirac field for electrons, one has to use anticommutation relations.<br />While there are close analogies between quantization in QM and in QFT there are also important differences. Whereas the commutation relations in QM refer to a quantum object with three degrees of freedom, so that one has a set of 15 equations (nine relations of the form [<em>q<sub>i</sub></em>, <em>p<sub>j</sub></em>], plus three each among the <em>q<sub>i</sub></em> and among the <em>p<sub>j</sub></em>), the commutation relations in QFT do in fact comprise an infinite number of equations, namely for each of the infinitely many space-time 4-tuples (<strong>x</strong>,<em>t</em>) there is a new set of commutation relations. This infinite number of degrees of freedom embodies the field character of QFT.<br />It is important to realize that the operator valued field φ(<strong>x</strong>,<em>t</em>) in QFT is <em>not</em> analogous to the wavefunction ψ(<strong>x</strong>,<em>t</em>) in QM, i.e., the quantum mechanical state in its position representation. While the wavefunction in QM is acted upon by observables/operators, in QFT it is the (operator valued) field itself which acts on the space of states. 
In a certain sense the single particle wave functions have been transformed, via their reinterpretation as operator valued quantum fields, into observables. This step is sometimes called ‘second quantization’ because the single particle wave equations in relativistic QM already came about by a quantization procedure, e.g., in the case of the Klein-Gordon equation by replacing position and momentum by the corresponding quantum mechanical operators. Afterwards the solutions to these single particle wave equations, which are states in relativistic QM, are considered as classical fields, which can be subjected to the canonical quantization procedure of QFT. The term ‘second quantization’ has often been criticized, partly because it blurs the important fact that the single particle wave function φ in relativistic QM and the operator valued quantum field φ are fundamentally different kinds of entities despite their connection in the context of discovery.<br />In conclusion, it must be emphasized that both in QM and QFT states <em>and</em> observables are equally important. However, to some extent their roles are switched. While states in QM can have a concrete spatio-temporal meaning in terms of probabilities for position measurements, in QFT states are abstract entities and it is the quantum field operators that seem to allow for a spatio-temporal interpretation. See the section on the field interpretation of QFT for a critical discussion.<br /><h3><a href="http://www.blogger.com/null" name="Int">2.2 Interaction</a></h3>Up to this point, the aim was to develop a free field theory. Doing so not only neglects interaction with other particles (fields); it is even unrealistic for a single free particle, because it interacts with the field that it generates itself. For the description of interactions—such as scattering in particle colliders—we need certain extensions and modifications of the formalism. 
The immediate contact between scattering experiments and QFT is given by the scattering or S-matrix which contains all the relevant predictive information about, e.g., scattering cross sections. In order to calculate the S-matrix the interaction Hamiltonian is needed. The Hamiltonian can in turn be derived from the Lagrangian density by means of a Legendre transformation.<br />In order to discuss interactions one introduces a new representation, the <em>interaction picture</em>, which is an alternative to the Schrödinger and the Heisenberg picture. For the interaction picture one splits up the Hamiltonian, which is the generator of time-translations, into two parts <em>H</em> = <em>H</em><sub>0</sub> + <em>H<sub>int</sub></em>, where <em>H</em><sub>0</sub> describes the free system, i.e., without interaction, and gets absorbed in the definition of the fields and <em>H<sub>int</sub></em> is the interaction part of the Hamiltonian, or, for short, the ‘interaction Hamiltonian’. Using the interaction picture is advantageous because the equations of motion as well as, under certain conditions, the commutation relations are the same for interacting fields as for free fields. Therefore, various results that were established for free fields can still be used in the case of interacting fields. The central instrument for the description of interaction is again the S-matrix, which expresses the connection between in and out states by specifying the transition amplitudes. In QED, for instance, a state |<em>in</em>⟩ describes one particular configuration of electrons, positrons and photons, i.e., it describes how many of these particles there are and which momenta, spins and polarizations they have before the interaction. The S-matrix supplies the probability that this state goes over to a particular |<em>out</em>⟩ state, e.g., that a particular counter responds after the interaction. 
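The role of the interaction Hamiltonian in building up the S-matrix can be sketched with the standard time-ordered (Dyson) expansion, added here as textbook orientation:

```latex
S \;=\; T\exp\!\Big(-i\!\int d^{4}x\,\mathcal{H}_{\mathrm{int}}(x)\Big)
  \;=\; \sum_{n=0}^{\infty}\frac{(-i)^{n}}{n!}
        \int d^{4}x_{1}\cdots d^{4}x_{n}\;
        T\big\{\mathcal{H}_{\mathrm{int}}(x_{1})\cdots
               \mathcal{H}_{\mathrm{int}}(x_{n})\big\},
\qquad
P(\mathit{in}\to\mathit{out}) \;=\; \big|\langle \mathit{out}\,|\,S\,|\,\mathit{in}\rangle\big|^{2},
```

where <em>T</em> denotes time ordering. Evaluating this series order by order is what generates the familiar Feynman-diagram expansion of perturbation theory.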
Such probabilities can be checked in experiments.<br />The canonical formalism of QFT as introduced in the previous section is only applicable in the case of free fields since the inclusion of interaction leads to infinities (see the historical part). For this reason perturbation theory makes up a large part of most publications on QFT. The importance of perturbative methods is understandable when one realizes that they establish the immediate contact between theory and experiment. Although the techniques of perturbation theory have become ever more sophisticated, it is somewhat disturbing that perturbative methods cannot be avoided even in principle. One reason for this unease is that perturbation theory is felt to be rather a matter of (highly sophisticated) craftsmanship than of understanding nature. Accordingly, the corpus of perturbative methods plays a small role in the philosophical investigations of QFT. What does matter, however, is in which sense the consideration of interaction affects the general framework of QFT. An overview of perturbation theory is given in section 4.1 (“Perturbation Theory—Philosophy and Examples”) of Peskin & Schroeder (1995).<br /><h3><a href="http://www.blogger.com/null" name="GauInv">2.3 Gauge Invariance</a></h3>Some theories are distinguished by being <em>gauge invariant</em>, which means that <em>gauge transformations</em> of certain terms do not change any observable quantities. Requiring gauge invariance provides an elegant and systematic way of introducing terms for interacting fields. Moreover, gauge invariance plays an important role in selecting theories. The prime example of an intrinsically gauge invariant theory is electrodynamics. 
In the potential formulation of Maxwell's equations one introduces the vector potential <strong>A</strong> and the scalar potential φ, which are linked to the magnetic field <strong>B</strong>(<strong>x</strong>,<em>t</em>) and the electric field <strong>E</strong>(<strong>x</strong>,<em>t</em>) by<br /><blockquote><table><tbody><tr><td>(3.3) </td><td><strong>B</strong></td><td>= </td><td>∇ × <strong>A</strong></td></tr><tr><td></td><td><strong>E</strong></td><td>= </td><td>−(∂<strong>A</strong>/∂<em>t</em>) − ∇φ</td></tr></tbody></table></blockquote>or covariantly<br /><blockquote>(3.4) <em>F</em><sup>μν</sup> = ∂<sup>μ</sup> <em>A</em><sup>ν</sup> − ∂<sup>ν</sup> <em>A</em><sup>μ</sup> </blockquote>where <em>F</em><sup>μν</sup> is the electromagnetic field tensor and <em>A</em><sup>μ</sup> = (φ, <strong>A</strong>) the 4-vector potential. The important point in the present context is that given the identification (3.3), or (3.4), there remains a certain flexibility or freedom in the choice of <strong>A</strong> and φ, or <em>A</em><sup>μ</sup>. In order to see that, consider the so-called <em>gauge transformations</em><br /><blockquote><table><tbody><tr><td>(3.5) </td><td><strong>A</strong></td><td>→</td><td><strong>A</strong> − ∇χ</td></tr><tr><td></td><td>φ</td><td>→</td><td>φ + ∂χ/∂<em>t</em></td></tr></tbody></table></blockquote>or covariantly<br /><blockquote>(3.6) <em>A</em><sup>μ</sup> → <em>A</em><sup>μ</sup> + ∂<sup>μ</sup>χ </blockquote>where χ is a scalar function (of space and time or of space-time) which can be chosen arbitrarily. Inserting the transformed potential(s) into equation(s) (3.3), or (3.4), one can see that the electric field <strong>E</strong> and the magnetic field <strong>B</strong>, or covariantly the electromagnetic field tensor <em>F</em><sup>μν</sup>, are not affected by a gauge transformation of the potential(s). 
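This invariance can also be verified numerically. The following sketch (an illustration, not from the text; the potentials and the gauge function chi are hypothetical choices) computes E and B with central finite differences before and after the transformation A → A − ∇χ, φ → φ + ∂χ/∂t:

```python
import math

h = 1e-6  # finite-difference step

# Hypothetical smooth potentials, chosen only for illustration:
def A(t, x, y, z):
    return (y * z + math.sin(t), x * x - t * z, x * y * z)

def phi(t, x, y, z):
    return x * y - t * t * z

def d(f, i, p):
    """Central finite-difference partial derivative of scalar f wrt argument i."""
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

def E_B(Af, phif, p):
    """E = -dA/dt - grad(phi) and B = curl(A), as in equations (3.3)."""
    comp = [lambda *q, j=j: Af(*q)[j] for j in range(3)]  # A_x, A_y, A_z
    B = (d(comp[2], 2, p) - d(comp[1], 3, p),
         d(comp[0], 3, p) - d(comp[2], 1, p),
         d(comp[1], 1, p) - d(comp[0], 2, p))
    E = tuple(-d(comp[j], 0, p) - d(phif, j + 1, p) for j in range(3))
    return E, B

# Gauge function chi = t*x*y + cos(z).  Transform the potentials:
#   A -> A - grad(chi),  phi -> phi + dchi/dt
# with grad(chi) = (t*y, t*x, -sin(z)) and dchi/dt = x*y taken analytically.
def A_gauged(t, x, y, z):
    ax, ay, az = A(t, x, y, z)
    return (ax - t * y, ay - t * x, az + math.sin(z))

def phi_gauged(t, x, y, z):
    return phi(t, x, y, z) + x * y

p = (0.3, 0.7, -0.2, 1.1)  # an arbitrary space-time point (t, x, y, z)
E1, B1 = E_B(A, phi, p)
E2, B2 = E_B(A_gauged, phi_gauged, p)
unchanged = all(abs(a - b) < 1e-6 for a, b in zip(E1 + B1, E2 + B2))
print("E and B unchanged by the gauge transformation:", unchanged)
```

The same check passes for any smooth choice of chi, since ∇ × ∇χ = 0 and the two time/space derivatives of χ cancel between E's two terms.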
Since only the electric field <strong>E</strong> and the magnetic field <strong>B</strong>, and quantities constructed from them, are observable, whereas the vector potential itself is not, nothing physical seems to be changed by a gauge transformation because it leaves <strong>E</strong> and <strong>B</strong> unaltered. Note that gauge invariance is a kind of symmetry that does not come about by space-time transformations.<br />In order to link the notion of gauge invariance to the Lagrangian formulation of QFT one needs a more general form of gauge transformations which applies to the field operator φ and which is supplied by<br /><blockquote><table><tbody><tr><td>(3.7) </td><td>φ</td><td>→</td><td>e<sup>−<em>i</em>Λ</sup>φ</td></tr><tr><td></td><td>φ<sup>*</sup></td><td>→</td><td>e<sup><em>i</em>Λ</sup>φ<sup>*</sup></td></tr></tbody></table></blockquote>where Λ is an arbitrary real constant. Equations (3.7) describe a <em>global gauge transformation</em> whereas a <em>local gauge transformation</em> <br /><blockquote><table><tbody><tr><td>(3.8) </td><td>φ(<em>x</em>)</td><td>→</td><td>e<sup>−<em>i</em>α(<em>x</em>)</sup>φ(<em>x</em>)</td></tr></tbody></table></blockquote>varies with <em>x</em>. <br />It turned out that requiring invariance under local gauge transformations supplies a systematic way for finding the equations describing fundamental interactions. For instance, starting with the Lagrangian for a free electron, the requirement of local gauge invariance can only be fulfilled by introducing additional terms, namely those for the electromagnetic field. Gauge invariance can be captured by certain symmetry groups: U(1) for electromagnetic, SU(2)⊗U(1) for electroweak and SU(3) for strong interaction. This is an important basis for unification programs, as is the analogy to general relativity where a local gauge symmetry is associated with the gravitational field. Moreover, it turned out that only gauge invariant quantum field theories are renormalizable. 
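How local gauge invariance generates interaction terms can be made explicit for the QED case; the following is a standard textbook sketch added for concreteness. One replaces the ordinary derivative in the free Dirac Lagrangian by a covariant derivative containing a new field <em>A</em><sub>μ</sub>:

```latex
\mathcal{L}_{\mathrm{free}} \;=\; \bar{\psi}\,(i\gamma^{\mu}\partial_{\mu} - m)\,\psi
\;\;\longrightarrow\;\;
\mathcal{L} \;=\; \bar{\psi}\,(i\gamma^{\mu}D_{\mu} - m)\,\psi,
\qquad
D_{\mu} \;=\; \partial_{\mu} + ieA_{\mu},
```

```latex
\psi(x) \,\to\, e^{-i\alpha(x)}\psi(x),
\qquad
A_{\mu} \,\to\, A_{\mu} + \tfrac{1}{e}\,\partial_{\mu}\alpha(x)
\quad\Longrightarrow\quad
D_{\mu}\psi \,\to\, e^{-i\alpha(x)}\,D_{\mu}\psi .
```

Since <em>D</em><sub>μ</sub>ψ transforms exactly like ψ itself, the new Lagrangian is locally gauge invariant, and the price of that invariance is the extra term −<em>e</em>ψ̄γ<sup>μ</sup>ψ<em>A</em><sub>μ</sub>, i.e., precisely the coupling of the electron to the electromagnetic field.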
All this can be taken to show that a mathematically rich theory, with surplus structures, can be very valuable in the construction of theories.<br />Auyang (1995) emphasizes the general conceptual significance of invariance principles; Redhead (2002) and Martin (2002) focus specifically on gauge symmetries. Healey (2007) and Lyre (2004 and 2012) discuss the ontological significance of gauge theories, among other things concerning the Aharonov-Bohm effect and ontic structural realism.<br /><h3><a href="http://www.blogger.com/null" name="EffFieTheRen">2.4 Effective Field Theories and Renormalization</a></h3>In the 1970s a program emerged in which the theories of the standard model of elementary particle physics are considered as effective field theories (EFTs) which have a common quantum field theoretical framework. EFTs describe relevant phenomena only in a certain domain since the Lagrangian contains only those terms that describe particles which are relevant for the respective range of energy. EFTs are therefore inherently approximative and change with the range of energy considered. Influences from higher energy processes contribute to average values but they cannot be described in detail. This procedure has no severe consequences since the details of low-energy theories are largely decoupled from higher energy processes. Both domains are only connected by altered coupling constants, and the renormalization group describes how the coupling constants depend on the energy.<br />The main idea of EFTs is that theories, i.e., in particular the Lagrangians, depend on the energy of the phenomena which are analysed. The physics changes by switching to a different energy scale, e.g., new particles can be created if a certain energy threshold is exceeded.
The dependence of theories on the energy scale distinguishes QFT from, e.g., Newton's theory of gravitation, where the same law applies to an apple as well as to the moon. Nevertheless, laws from different energy scales are not completely independent of each other. A central aspect of these considerations is the effect of higher energy processes on the low-energy scale.<br />Against this background a new attitude towards renormalization developed in the 1970s, which revitalizes earlier ideas that divergences result from neglecting unknown processes at higher energies. Low-energy behavior is thus affected by higher energy processes. Since higher energies correspond to smaller distances, this dependence is to be expected from an atomistic point of view. According to the reductionist program the dynamics of constituents on the microlevel should determine processes on the macrolevel, i.e., here the low-energy processes. However, as, for instance, hydrodynamics shows, in practice theories from different levels are not quite as closely connected, because a law which is applicable on the macrolevel can be largely independent of microlevel details. For this reason analogies with statistical mechanics play an important role in the discussion about EFTs. The basic idea of this new story about renormalization is that the influences of higher energy processes are localizable in a few structural properties which can be captured by an adjustment of parameters. “In this picture, the presence of infinities in quantum field theory is neither a disaster, nor an asset. It is simply a reminder of a practical limitation—we do not know what happens at distances much smaller than those we can look at directly” (Georgi 1989: 456). This new attitude supports the view that renormalization is the appropriate answer to the change of fundamental interactions when QFT is applied to processes on different energy scales.
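The energy dependence of coupling constants mentioned above can be made concrete with the textbook one-loop formula for the running of the QED fine-structure constant. This is a standard result quoted for illustration, not taken from this text; only the electron loop is included, so the value at the Z mass differs from the full Standard Model one.

```python
import math

# One-loop running of the QED coupling with a single electron loop:
#   alpha_eff(Q) = alpha / (1 - (alpha / (3*pi)) * ln(Q^2 / m_e^2))
M_E = 0.000511   # electron mass in GeV
M_Z = 91.19      # Z-boson mass in GeV

def alpha_eff(Q, alpha=1 / 137.036, m=M_E):
    return alpha / (1 - (alpha / (3 * math.pi)) * math.log(Q**2 / m**2))

alpha_low = 1 / 137.036
alpha_high = alpha_eff(M_Z)
assert alpha_high > alpha_low   # the effective charge grows with energy
print(1 / alpha_high)           # ~134.5 (vs. ~128 when all SM particles contribute)
```

The same renormalization-group logic, with more elaborate beta functions, connects the couplings of an EFT at one energy scale to those at another.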
The price one has to pay is that EFTs are only valid in a limited domain and should be considered as approximations to better theories on higher energy scales. This prompts the important question whether there is a last fundamental theory in this tower of EFTs which supersede each other with rising energies. Some people conjecture that this deeper theory could be a string theory, i.e., a theory which is not a field theory any more. Or should one ultimately expect from physics theories that they are only valid as approximations and in a limited domain? Hartmann (2001) and Castellani (2002) discuss the fate of reductionism vis-à-vis EFTs. Wallace (2011) and Fraser (2011) discuss what the successful application of renormalization methods in quantum statistical mechanics means for their role in QFT, reaching very different conclusions.<br /><h2><a href="http://www.blogger.com/null" name="BeyStaMod">3. Beyond the Standard Model</a></h2>The “standard model of elementary particle physics” is sometimes used almost synonymously with QFT. However, there is a crucial difference. While the standard model is a theory with a fixed ontology (understood in a prephilosophical sense), i.e. three fundamental forces and a certain number of elementary particles, QFT is rather a frame, the applicability of which is open. Thus while quantum chromodynamics (or ‘QCD’) is a <em>part</em> of the standard model, it is an <em>instance</em> of a quantum field theory, or “<em>a</em> quantum field theory” for short, and not a part of QFT. This section deals with only some particularly important proposals that go beyond the standard model, but which do not necessarily break up the basic framework of QFT. <br /><h3><a href="http://www.blogger.com/null" name="QuantumGravity">3.1 Quantum Gravity</a></h3>The standard model of particle physics covers the electromagnetic, the weak and the strong interaction. However, the fourth fundamental force in nature, gravitation, has defied quantization so far.
Although numerous attempts have been made in the last 80 years, and in particular very recently, there is no commonly accepted solution up to the present day. One basic problem is that the mass, length and time scales quantum gravity theories are dealing with are so extremely small that it is almost impossible to test the different proposals.<br />The most important extant versions of quantum gravity theories are canonical quantum gravity, loop theory and string theory. Canonical quantum gravity approaches leave the basic structure of QFT untouched and <em>just</em> extend the realm of QFT by quantizing gravity. Other approaches try to reconcile quantum theory and general relativity theory not by supplementing the reach of QFT but rather by changing QFT itself. String theory, for instance, proposes a completely new view concerning the most fundamental building blocks: It does not merely incorporate gravitation but it formulates a new theory that describes all four interactions in a unified way, namely in terms of strings (see next subsection).<br />While quantum gravity theories are very complicated and even more remote from classical thinking than QM, SRT and GRT, it is not so difficult to see why gravitation is far more difficult to deal with than the other three forces. Electromagnetic, weak and strong force all act in a given space-time. In contrast, gravitation is, according to GRT, not an interaction that takes place <em>in</em> space-time; rather, gravitational forces are identified with the curvature of space-time itself. Thus quantizing gravitation could amount to quantizing space-time, and it is not at all clear what that could mean. One controversial proposal is to deprive space-time of its fundamental status by showing how it “emerges” in some non-spatio-temporal theory. The “emergence” of space-time then means that there are certain derived terms in the new theory that have some formal features commonly associated with space-time.
See Kiefer (2007) for physical details, Rickles (2008) for an accessible and conceptually reflected introduction to quantum gravity and Wüthrich (2005) for a philosophical evaluation of the alleged need to quantize the gravitational field. Also, see the entry on <a href="http://plato.stanford.edu/entries/quantum-gravity/">quantum gravity</a>. <br /><h3><a href="http://www.blogger.com/null" name="StrThe">3.2 String Theory</a></h3>String theory is one of the most promising candidates for bridging the gap between QFT and general relativity theory by supplying a unified theory of all natural forces, including gravitation. The basic idea of string theory is not to take particles as fundamental objects but strings that are very small but extended in one dimension. This assumption has the pivotal consequence that strings interact over an extended distance and not at a point. This difference between string theory and standard QFT is essential because it is the reason why string theory also encompasses the gravitational force, which is very difficult to deal with in the framework of QFT.<br />It is so hard to reconcile gravitation with QFT because the typical length scale of the gravitational force is very small, namely the Planck scale, so that the quantum field theoretical assumption of point-like interaction leads to untreatable infinities. To put it another way, gravitation becomes significant (in particular in comparison to the strong interaction) exactly where QFT is most severely endangered by infinite quantities. The extended interaction of strings brings it about that such infinities can be avoided. In contrast to the entities in standard quantum physics, strings are not characterized by quantum numbers but only by their geometrical and dynamical properties. Nevertheless, “macroscopically” strings look like quantum particles with quantum numbers.
A basic geometrical distinction is the one between open strings, i.e., strings with two ends, and closed strings, which are like bracelets. The central dynamical property of strings is their mode of excitation, i.e., how they vibrate.<br />Reservations about string theory are mostly due to the lack of testability, since it seems that there are no empirical consequences which could be tested by the methods which are, at least up to now, available to us. The reason for this “problem” is that the length scale of strings is on average the same as that of quantum gravity, namely the Planck length of approximately 10<sup>−33</sup> centimeters, which lies far beyond the reach of feasible particle experiments. But there are also other peculiar features of string theory which might be hard to swallow. One of them is the fact that string theory implies that space-time has 10, 11 or even 26 dimensions. In order to explain the appearance of only four space-time dimensions, string theory assumes that the other dimensions are somehow folded away or “compactified” so that they are no longer visible. An intuitive idea can be gained by thinking of a macaroni: a tube, i.e., a two-dimensional piece of pasta rolled up, which from a distance looks like a one-dimensional string.<br />Despite the problems of string theory, physicists have not abandoned this project, partly because many think that, among the numerous alternative proposals for reconciling quantum physics and general relativity theory, string theory is still the best candidate, with “loop quantum gravity” as its strongest rival (see the entry on <a href="http://plato.stanford.edu/entries/quantum-gravity/">quantum gravity</a>). Correspondingly, string theory has also received some attention within the philosophy of physics community in recent years.
Probably the first philosophical investigation of string theory is Weingard (2001) in Callender & Huggett (2001), an anthology with further related articles. Dawid (2003) (see Other Internet Resources below) argues that string theory has significant consequences for the philosophical debate about realism, namely that it speaks against the plausibility of anti-realistic positions. Also see Dawid (2009). Johansson and Matsubara (2011) assess string theory from various methodological perspectives, reaching conclusions in disagreement with Dawid (2009). Standard introductory monographs on string theory are Polchinski (2000) and Kaku (1999). Greene (1999) is a very successful popular introduction. An interactive website with a nice elementary introduction is ‘Stringtheory.com’ (see the Other Internet Resources section below).<br /><h2><a href="http://www.blogger.com/null" name="AltApp">4. Axiomatic Reformulations of QFT</a></h2><h3><a href="http://www.blogger.com/null" name="DefStaForQFT">4.1 Deficiencies of the Conventional Formulation of QFT</a></h3>From the 1930s onwards the problem of infinities as well as the potentially heuristic status of the Lagrangian formulation of QFT stimulated the search for reformulations in a concise and eventually axiomatic manner. A number of further aspects intensified the unease about the standard formulation of QFT. The first one is that quantities like total charge, total energy or total momentum of a field are unobservable since their measurement would have to take place in the whole universe. Accordingly, quantities which refer to infinitely extended regions of space-time should not appear among the observables of the theory, as they do in the standard formulation of QFT. Another problematic feature of standard QFT is the idea that QFT is about field values at points of space-time.
The mathematical aspect of the problem is that a field at a point, <span class="nw">φ(<em>x</em>),</span> is not an operator in a Hilbert space. The physical counterpart of the problem is that it would require an infinite amount of energy to measure a field at a point of space-time. One way to handle this situation—and one of the starting points for axiomatic reformulations of QFT—is not to consider fields at a point but instead fields which are smeared out in the vicinity of that point using certain functions, so-called test functions. The result is a smeared field φ(<em>f</em>) = ∫ φ(<em>x</em>)<em>f</em>(<em>x</em>)<em>dx</em> with supp(<em>f</em>) ⊂ <span class="scriptuc">O</span>, where supp(<em>f</em>) is the support of the test function <em>f</em> and <span class="scriptuc">O</span> is a bounded open region in Minkowski space-time.<br />The third important problem for standard QFT which prompted reformulations is the existence of <b>inequivalent representations</b>. In the context of quantum mechanics, Schrödinger, Dirac, Jordan and von Neumann realized that Heisenberg's matrix mechanics and Schrödinger's wave mechanics are just two (unitarily) equivalent representations of the same underlying abstract structure, i.e., an abstract Hilbert space <span class="scriptuc">H</span> and linear operators acting on this space. In other words, we are merely dealing with two different ways of representing the same physical reality, and it is possible to switch between these different representations by means of a unitary transformation, i.e. an operation that is analogous to an innocuous rotation of the frame of reference. <em>Representations</em> of some given algebra or group are sets of mathematical objects, like numbers, rotations or more abstract transformations (e.g. differential operators) together with a binary operation (e.g.
addition or multiplication) that combines any two elements of the algebra or group, such that the structure of the algebra or group to be represented is preserved. This means that the combination of any two elements in the representation space, say <em>a</em> and <em>b</em>, leads to a third element which corresponds to the element that results when you combine the elements corresponding to <em>a</em> and <em>b</em> in the algebra or group that is represented. In 1931 von Neumann gave a detailed proof (of a conjecture by Stone) that the canonical commutation relations (CCRs) for position coordinates and their conjugate momentum coordinates in configuration space fix the representation of these two sets of operators in Hilbert space up to unitary equivalence (von Neumann's uniqueness theorem). This means that the specification of the purely algebraic CCRs suffices to describe a particular physical system. <br />In quantum <em>field</em> theory, however, von Neumann's uniqueness theorem loses its validity since here one is dealing with an infinite number of degrees of freedom. Now one is confronted with a multitude of <em>inequivalent</em> irreducible representations of the CCRs and it is not obvious what this means physically and how one should cope with it. Since the troublesome inequivalent representations of the CCRs that arise in QFT are all <em>irreducible</em>, their inequivalence is not due to the fact that some are reducible while others are not (a representation is <em>reducible</em> if there is an invariant subrepresentation, i.e. a subset which already represents the CCRs on its own). Since inequivalent irreducible representations (IIRs, for short) seem to describe different physical states of affairs, it is no longer legitimate to simply choose the most convenient representation, as one chooses the most convenient frame of reference.
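The CCRs that von Neumann's theorem concerns can be checked symbolically in the familiar Schrödinger representation, where position acts by multiplication and momentum by differentiation. A minimal sympy sketch (the symbol hbar is introduced here only for illustration):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
psi = sp.Function('psi')(x)   # an arbitrary wave function

# Schrödinger representation of the CCR:
#   X acts as multiplication by x, P as -i*hbar * d/dx.
X = lambda p: x * p
P = lambda p: -sp.I * hbar * sp.diff(p, x)

# The commutator [X, P] applied to psi yields i*hbar*psi,
# i.e. [X, P] = i*hbar * identity.
commutator = sp.expand(X(P(psi)) - P(X(psi)))
assert sp.simplify(commutator - sp.I * hbar * psi) == 0
```

Von Neumann's theorem says that, for finitely many degrees of freedom, every (regular) irreducible representation of these relations is unitarily equivalent to this one; it is exactly this uniqueness that fails once infinitely many degrees of freedom are involved.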
The acuteness of this problem is not immediately clear, since prima facie it is possible that all but one of the IIRs are physically irrelevant, i.e. mathematical artefacts of a redundant formalism. However, although apparently this applies to most of the available IIRs, it seems that a number of irreducible representations of the CCRs remain that are inequivalent <em>and</em> physically relevant. <br /><h3><a href="http://www.blogger.com/null" name="AlgPoiVie">4.2 Algebraic Approaches to QFT</a></h3>According to the algebraic point of view, <em>algebras</em> of observables rather than observables themselves in a particular representation should be taken as the basic entities in the mathematical description of quantum physics, thereby avoiding the above-mentioned problems from the outset. In standard QM the algebraic point of view in terms of <em>C</em>*-algebras makes no notable difference to the usual Hilbert space formulation since both formalisms are equivalent. However, in QFT this is no longer the case since the infinite number of degrees of freedom leads to unitarily <em>inequivalent</em> irreducible representations of a <em>C</em>*-algebra. Thus sticking to the usual Hilbert space formulation tacitly implies choosing one particular representation. The notion of <em>C</em>*-algebras, introduced abstractly by Gelfand and Neumark in 1943 and named this way by Segal in 1947, generalizes the notion of the algebra <span class="scriptuc">B</span>(<span class="scriptuc">H</span>) of all bounded operators on a Hilbert space <span class="scriptuc">H</span>, which is also the most important example of a <em>C</em>*-algebra. In fact, it can be shown that any <em>C</em>*-algebra is isomorphic to a (norm-closed, self-adjoint) algebra of bounded operators on a Hilbert space. The boundedness (and self-adjointness) of the operators is the reason why <em>C</em>*-algebras are considered as ideal for representing physical observables.
The 'C' indicates that one is dealing with a complex vector space and the '*' refers to the operation that maps an element <em>A</em> of an algebra to its <em>involution</em> (or adjoint) <em>A</em>*, which generalizes complex conjugation from numbers to operators. This involution is needed in order to define the crucial norm property of <em>C</em>*-algebras, which is of central importance for the proof of the above isomorphism claim. <br />Another point where algebraic formulations are advantageous derives from the fact that two quantum fields are physically equivalent when they generate the same algebras of local observables. Such equivalent quantum field theories belong to the same so-called Borchers class, which entails that they lead to the same <em>S</em>-matrix. As Haag (1996) stresses, fields are only an instrument in order to “coordinatize” observables, more precisely: sets of observables, with respect to different finite space-time regions. The choice of a particular field system is to a certain degree conventional, namely as long as it belongs to the same Borchers class. Thus it is more appropriate to consider these algebras, rather than quantum fields, as the fundamental entities in QFT.<br />A prominent attempt to axiomatize QFT is Wightman's field axiomatics from the early 1950s. Wightman imposed axioms on polynomial algebras <span class="scriptuc">P</span>(<span class="scriptuc">O</span>) of smeared fields, i.e., sums of products of smeared fields in finite space-time regions <span class="scriptuc">O</span>. A crucial point of this approach is replacing the mapping <em>x</em> → φ(<em>x</em>) by <span class="scriptuc">O</span> → <span class="scriptuc">P</span>(<span class="scriptuc">O</span>). While the usage of unbounded field operators makes Wightman's approach mathematically cumbersome, <b>Algebraic Quantum Field Theory (AQFT)</b>—arguably the most successful attempt to reformulate QFT axiomatically—employs only bounded operators.
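The smearing construction φ(f) = ∫ φ(x)f(x)dx on which Wightman's polynomial algebras are built can be illustrated numerically. The sketch below uses a classical stand-in for the field and a smooth, compactly supported "bump" test function (a hypothetical one-dimensional example; the operator-valued case is formally analogous):

```python
import numpy as np

def phi(x):
    # Classical stand-in for a field configuration (illustrative choice).
    return np.cos(3 * x)

def f(x):
    # Smooth bump test function with compact support supp(f) = [-1, 1].
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def smeared(a, b, n=200001):
    # phi(f) = integral of phi(x) * f(x) dx, computed over a region [a, b].
    x = np.linspace(a, b, n)
    return np.sum(phi(x) * f(x)) * (x[1] - x[0])

# Because supp(f) is compact, the value does not depend on the region O
# as long as O contains supp(f).
assert np.isclose(smeared(-1.5, 1.5), smeared(-3.0, 3.0), atol=1e-6)
```

This region-independence is precisely what lets one associate the smeared field with a bounded open region O rather than with a point.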
AQFT originated in the late 1950s with the work of Haag and quickly advanced in collaboration with Araki and Kastler. AQFT itself exists in two versions, concrete AQFT (Haag-Araki) and abstract AQFT (Haag-Kastler, 1964). The concrete approach uses von Neumann algebras (or <em>W</em>*-algebras), the abstract one <em>C</em>*-algebras. The adjective ‘abstract’ refers to the fact that in this approach the algebras are characterized in an abstract fashion and not by explicitly using operators on a Hilbert space. In standard QFT, the CCRs together with the field equations can be used for the same purpose, i.e., an abstract characterization. One common aim of these axiomatizations of QFT is avoiding the usual approximations of standard QFT. However, trying to do this in a strictly axiomatic way, one only gets ‘reformulations’ which are not as rich as standard QFT. As Haag (1996) concedes, the “algebraic approach […] has given us a frame and a language not a theory”.<br /><h3><a href="http://www.blogger.com/null" name="BasIdeAQF">4.3 Basic Ideas of AQFT</a></h3>One of the crucial ideas of AQFT is taking so-called <em>nets of algebras</em> as basic for the mathematical description of a quantum physical system. A decade earlier, Segal (1947) had used a single <em>C</em>*-algebra—generated by all bounded operators—and dismissed the availability of inequivalent representations as irrelevant to physics. Against this approach Haag argued that inequivalent representations can be understood physically by realizing that the important physical information in a quantum field theory is not contained in individual algebras but in the net of algebras, i.e. in the mapping <span class="scriptuc">O</span> → <span class="scriptuc">A</span>(<span class="scriptuc">O</span>) from finite space-time regions to algebras of local observables. The crucial point is that it is <em>not</em> necessary to specify observables explicitly in order to fix physically meaningful quantities.
The very way in which algebras of local observables are linked to space-time regions is sufficient to supply observables with physical significance. It is the partition of the algebra <span class="scriptuc">A</span><sub><em>loc</em></sub> of <em>all</em> local observables into subalgebras which contains physical information about the observables, i.e., it is the net structure of algebras which matters.<br />Physically the most important notion of AQFT is the principle of <em>locality</em>, which has an external as well as an internal aspect. The external aspect is the fact that AQFT considers only observables connected with finite regions of space-time and not global observables like the total charge or the total energy momentum vector, which refer to infinite space-time regions. This approach was motivated by the operationalistic view that QFT is a statistical theory about local measurement outcomes, with all the experimental information coming from measurements in finite space-time regions. Accordingly, everything is expressed in terms of <em>local algebras</em> of observables. The internal aspect of locality is that there is a constraint on the observables of such local algebras: All observables of a local algebra connected with a space-time region <span class="scriptuc">O</span> are required to commute with all observables of another algebra which is associated with a space-time region <span class="scriptuc">O</span>′ that is space-like separated from <span class="scriptuc">O</span>. This principle of (Einstein) <em>causality</em> is the main relativistic ingredient of AQFT.<br />The basic structure upon which the assumptions or conditions of AQFT are imposed consists of local observables, i.e., self-adjoint elements in local (non-commutative) von Neumann algebras, and physical states, which are identified as positive, linear, normalized functionals which map elements of local algebras to real numbers.
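These defining properties of a state (positivity, linearity, normalization) can be checked in a deliberately simple finite-dimensional toy model, where the "algebra" is just the 2×2 complex matrices and every state takes the form ω(A) = Tr(ρA) for a density matrix ρ. (The local algebras of AQFT are infinite-dimensional von Neumann algebras, but the defining conditions are the same.)

```python
import numpy as np

# Toy algebra: all 2x2 complex matrices. A state omega assigns a number
# to each element; here omega(A) = Tr(rho @ A) for a fixed density matrix.
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])   # self-adjoint, positive semidefinite, trace 1

def omega(A):
    return np.trace(rho @ A)

I2 = np.eye(2)
A = np.array([[1.0, 2.0], [2.0, -1.0]])   # a self-adjoint "observable"
B = np.array([[0.0, 1.0], [1.0, 3.0]])

assert np.isclose(omega(I2), 1.0)                             # normalized
assert np.isclose(omega(2 * A + B), 2 * omega(A) + omega(B))  # linear
assert omega(A.conj().T @ A).real >= 0                        # positive on A*A
```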
States can thus be understood as assignments of expectation values to observables. One can group the assumptions of AQFT into relativistic axioms, such as locality and covariance, general physical assumptions, like isotony and the spectrum condition, and finally technical assumptions which are closely related to the mathematical formulation.<br />As a reformulation of QFT, AQFT is expected to reproduce the main phenomena of QFT, in particular properties which are characteristic of it being a field theory, like the existence of antiparticles, internal quantum numbers, the relation of spin and statistics, etc. That this aim could not be achieved on a purely axiomatic basis is partly due to the fact that the connection between the respective key concepts of AQFT and QFT, i.e., observables and quantum fields, is not sufficiently clear. It turned out that the main link between observable algebras and quantum fields is provided by <em>superselection rules</em>, which put restrictions on the set of all observables and allow for classification schemes in terms of permanent or essential properties.<br />Introductions to AQFT are provided by the monographs Haag (1996) and Horuzhy (1990) as well as the overview articles Haag & Kastler (1964), Roberts (1990) and Buchholz (1998). Streater & Wightman (1964) is an early pioneering monograph on axiomatic QFT. Bratteli & Robinson (1979) emphasize mathematical aspects.<br /><h3><a href="http://www.blogger.com/null" name="AQFPhi">4.4 AQFT and the Philosopher</a></h3>In recent years, QFT has received a lot of attention in the philosophy of physics. Most philosophers who engage in that debate rest their considerations on AQFT; for instance, see Baker (2009), Baker & Halvorson (2010), Earman & Fraser (2006), Fraser (2008, 2009, 2011), Halvorson & Müger (2007), Kronz & Lupher (2005), Kuhlmann (2010a, 2010b), Lupher (2010), Rédei & Valente (2010) and Ruetsche (2002, 2003, 2006, 2011).
While most philosophers of physics who are skeptical about this approach remained largely silent, Wallace (2006, 2011) launched an eloquent attack on the predominance of AQFT in foundational studies of QFT. To be sure, Wallace emphasizes, his critique is not directed against the use of algebraic methods, e.g. when studying inequivalent representations. Rather, he aims at AQFT as a physical theory, regarded as a rival to conventional QFT (CQFT). In his evaluation, viewed from the 21st century, one has to state that CQFT succeeded, while AQFT failed, so that “to be lured away from the Standard Model by [AQFT] is sheer madness” (Wallace 2011: 124). So what may justify this drastic conclusion? On the one hand, Wallace points out that the problem of ultraviolet divergences, which initiated the search for alternative approaches in the 1950s, was eventually solved in CQFT via renormalization group techniques. On the other hand, AQFT never succeeded in finding realistic interacting quantum field theories in four dimensions (such as QED) that fit into its framework. <br />Fraser (2009, 2011) is most actively engaged in defending AQFT against Wallace's assault. She argues (2009) that consistency plays a central role in choosing between different formulations of QFT, since they do not differ in their respective empirical success, and AQFT fares better in this respect. Moreover, Fraser (2011) questions Wallace's crucial point in defense of CQFT, namely that the empirically successful application of renormalization group techniques in QFT removes all doubts about CQFT: The fact that renormalization in condensed matter physics and QFT are formally similar does not license Wallace's claim that there are also physical similarities concerning the freezing out of degrees of freedom at very small length scales.
And if that physical analogy cannot be sustained, then the empirical success of renormalization in CQFT leaves the physical reasons for this success in the dark, in contrast to the case of condensed matter physics, where the physical basis for the empirical success of renormalization is intelligible, namely the fact that matter is discrete at atomic length scales. As a consequence, despite the formal analogy with renormalization in condensed matter physics, the empirical success of renormalization in CQFT does not, as Wallace claims, discredit the idea of working with arbitrarily small regions of spacetime, as is done in AQFT. <br />Kuhlmann (2010b) also advocates AQFT as the prime object for foundational studies, focusing on ontological considerations. He argues that for matters of ontology AQFT is to be preferred over CQFT because, like ontology itself, AQFT strives for a clear separation of fundamental and derived entities and a parsimonious selection of basic assumptions. CQFT, on the other hand, is a grown formalism that is very good for calculations but obscures foundational issues. Moreover, Kuhlmann contends that AQFT and CQFT should not be regarded as rival research programs. Nowadays at the very least, AQFT is not meant to replace CQFT, despite the “kill it or cure it” slogan (Streater and Wightman 1964: 1, cited by Wallace 2011: 117). AQFT is suited and designed to illuminate the basic structure of QFT, but it is not and never will be the appropriate framework for the working physicist. <br /><h2><a href="http://www.blogger.com/null" name="PhiIss">5. Philosophical Issues</a></h2><h3><a href="http://www.blogger.com/null" name="Ont">5.1 Setting the Stage: Candidate Ontologies</a></h3>Ontology is concerned with the most general features, entities and structures of being. One can pursue ontology in a very general sense or with respect to a particular theory or a particular part or aspect of the world.
With respect to the ontology of QFT one is tempted to more or less dismiss ontological inquiries and to adopt the following straightforward view. There are two groups of fundamental fermionic matter constituents, two groups of bosonic force carriers and four (including gravitation) kinds of interactions. As satisfying as this answer might first appear, the ontological questions are, in a sense, not even touched. Saying that, for instance, the down quark is a fundamental constituent of our material world is the starting point rather than the end of the (philosophical) search for an ontology of QFT. The main question is what kind of entity, e.g., the down quark is. The answer does not depend on whether we think of down quarks or muon neutrinos, since the features sought are much more general than those which constitute the difference between down quarks and muon neutrinos. The relevant questions are of a different type. What are particles at all? Can quantum particles be legitimately understood as particles any more, even in the broadest sense, when we take, e.g., their localization properties into account? How can one spell out what a field is, and can “quantum fields” in fact be understood as fields? Could it be more appropriate not to think of, e.g., quarks, as the most fundamental entities at all, but rather of properties or processes or events?<br /><h4><a href="http://www.blogger.com/null" name="Part">5.1.1 The Particle Interpretation</a></h4>Many of the creators of QFT can be found in one of the two camps regarding the question whether particles or fields should be given priority in understanding QFT. While Dirac, the later Heisenberg, Feynman, and Wheeler opted in favor of particles, Pauli, the early Heisenberg, Tomonaga and Schwinger put fields first (see Landsman 1996).
Today, there are a number of arguments which prepare the ground for a proper discussion beyond mere preferences.<br /><h5>5.1.1.1 The Particle Concept</h5>It seems almost impossible to talk about elementary <em>particle</em> physics, or QFT more generally, without thinking of particles which are accelerated and scattered in colliders. Nevertheless, it is this very interpretation which is confronted with the most fully developed counter-arguments. There still is the option to say that our classical concept of a particle is too narrow and that we have to loosen some of its constraints. After all, even in classical corpuscular theories of matter the concept of an (elementary) particle is not as unproblematic as one might expect. For instance, if the whole charge of a particle was contracted to a point, an infinite amount of energy would be stored in this particle since the repulsive forces become infinitely large when two charges with the same sign are brought together. The so-called <em>self energy</em> of a point particle is infinite.<br />Probably the most immediate trait of particles is their <em>discreteness</em>. Particles are countable or ‘aggregable’ entities in contrast to a liquid or a mass. Obviously this characteristic alone cannot constitute a sufficient condition for being a particle since there are other things which are countable as well without being particles, e.g., money or maxima and minima of the standing wave of a vibrating string. It seems that one also needs <em>individuality</em>, i.e., it must be possible to say that it is this or that particle which has been counted in order to account for the fundamental difference between ups and downs in a wave pattern and particles. 
Teller (1995) discusses a specific conception of individuality, <em>primitive thisness</em>, as well as other possible features of the particle concept, in comparison to classical concepts of fields and waves and to the concept of field quanta, which is the basis for the interpretation that Teller advocates. A critical discussion of Teller's reasoning can be found in Seibt (2002). Moreover, there is an extensive debate on the individuality of quantum objects in quantum mechanical systems of ‘identical particles’. Since this discussion concerns QM in the first place, and not QFT, any further details shall be omitted here. French and Krause (2006) offer a detailed analysis of the historical, philosophical and mathematical aspects of the connection between quantum statistics, identity and individuality. See Dieks and Lubberdink (2011) for a critical assessment of the debate. Also consult the entry on <a href="http://plato.stanford.edu/entries/qt-idind/">quantum theory: identity and individuality</a>. <br />There is still another feature which is commonly taken to be pivotal for the particle concept, namely that particles are localizable in space. While it is already clear from classical physics that the requirement of <em>localizability</em> need not refer to point-like localization, we will see that even localizability in an arbitrarily large but still finite region can be a strong condition for quantum particles. Bain (2011) argues that the classical notions of localizability and countability are inappropriate requirements for particles if one is considering a relativistic theory such as QFT. <br />Finally, there are some potential ingredients of the particle concept which are explicitly opposed to the corresponding (and therefore opposite) features of the field concept. Whereas it is a core characteristic of a field that it is a system with an infinite <em>number of degrees of freedom</em>, the very opposite holds for particles. 
A particle can for instance be referred to by the specification of the coordinates <strong>x</strong>(<em>t</em>) that pertain, e.g., to its center of mass—presupposing impenetrability. A further feature of the particle concept is connected to the last point and again explicitly in opposition to the field concept. In a pure particle ontology the interaction between remote particles can only be understood as an <em>action at a distance</em>. In contrast to that, in a field ontology, or a combined ontology of particles and fields, <em>local action</em> is implemented by mediating fields. Finally, classical particles are massive and impenetrable, again in contrast to (classical) fields.<br /><h5>5.1.1.2 Why QFT Seems to be About Particles</h5>The easiest way to quantize the electromagnetic (or: radiation) field consists of two steps. First, one Fourier analyses the vector potential of the classical field into normal modes (using periodic boundary conditions) corresponding to an infinite but denumerable number of degrees of freedom. Second, since each mode is described independently by a harmonic oscillator equation, one can apply the harmonic oscillator treatment from non-relativistic quantum mechanics to each single mode. 
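The decoupling achieved by the first step can be illustrated numerically. The sketch below uses a hypothetical one-dimensional scalar field on a periodic lattice (a deliberate simplification of the vector potential, chosen for brevity): under a discrete Fourier transform, the spatial coupling between neighboring lattice points becomes diagonal, so each mode <em>k</em> evolves as an independent harmonic oscillator with its own frequency ω<sub><em>k</em></sub>.

```python
import numpy as np

# Toy illustration of step 1 (Fourier analysis into normal modes) on an
# assumed 1-d periodic lattice -- not the full electromagnetic field.
N = 32
rng = np.random.default_rng(0)
phi = rng.standard_normal(N)  # an arbitrary field configuration

# Discrete Laplacian with periodic boundary conditions (nearest-neighbor coupling)
lap = np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)

# In Fourier space the Laplacian acts diagonally: mode k is simply multiplied
# by -omega_k^2, with the lattice dispersion omega_k^2 = 4 sin^2(pi k / N).
k = np.arange(N)
omega_sq = 4 * np.sin(np.pi * k / N) ** 2
assert np.allclose(np.fft.fft(lap), -omega_sq * np.fft.fft(phi))
```

Since the modes decouple in this way, each obeys a harmonic oscillator equation of motion, and the second step of the recipe quantizes every mode independently.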
The result for the Hamiltonian of the radiation field is<br /><blockquote><table cellspacing="0"><tbody><tr><td>(2.1) </td><td><em>H</em><sub>rad</sub> = </td><td align="center">∑<sub><small><strong>k</strong></small></sub></td><td align="center">∑<sub><small><em>r</em></small></sub></td><td>ℏω<sub><strong>k</strong></sub></td><td>(</td><td><em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>)·<em>a<sub>r</sub></em>(<strong>k</strong>) + 1/2</td><td>),</td></tr></tbody></table></blockquote>where <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>) and <em>a<sub>r</sub></em>(<strong>k</strong>) are operators which satisfy the following commutation relations<br /><blockquote><table><tbody><tr><td>(2.2) </td><td>[<em>a<sub>r</sub></em>(<strong>k</strong>), <em>a<sub>s</sub></em><sup>†</sup>(<strong>k</strong>′)]</td><td> = </td><td>δ<sub><em>rs</em></sub>δ<sub><strong>kk′</strong></sub></td></tr><tr><td></td><td>[<em>a<sub>r</sub></em>(<strong>k</strong>), <em>a<sub>s</sub></em>(<strong>k</strong>′)]</td><td> = </td><td>[<em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>), <em>a<sub>s</sub></em><sup>†</sup>(<strong>k</strong>′)] = 0.</td></tr></tbody></table></blockquote>with the index <em>r</em> labeling the polarisation. These commutation relations imply that one is dealing with a bosonic field. <br />The operators <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>) and <em>a<sub>r</sub></em>(<strong>k</strong>) have interesting physical interpretations as so-called particle creation and annihilation operators. In order to see this, one has to examine the eigenvalues of the operators<br /><blockquote>(2.3) <em>N<sub>r</sub></em>(<strong>k</strong>) = <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>)·<em>a<sub>r</sub></em>(<strong>k</strong>) </blockquote>which are the essential parts in <em>H</em><sub>rad</sub>. 
Due to the commutation relations (2.2) one finds that the eigenvalues of <em>N<sub>r</sub></em>(<strong>k</strong>) are the integers <em>n<sub>r</sub></em>(<strong>k</strong>) = 0, 1, 2, … and the corresponding eigenfunctions (up to a normalisation factor) are <br /><blockquote>(2.4) |<em>n<sub>r</sub></em>(<strong>k</strong>)⟩ = [<em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>)]<sup><em>n<sub>r</sub></em>(<strong>k</strong>)</sup>|0⟩ </blockquote>where the right hand side means that <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>) operates <em>n<sub>r</sub></em>(<strong>k</strong>) times on |0⟩, the state vector of the vacuum with no photons present. The interpretation of these results is parallel to that of the harmonic oscillator. <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>) is interpreted as the <em>creation operator</em> of a photon with momentum ℏ<strong>k</strong> and energy ℏω<sub><strong>k</strong></sub> (and a polarisation which depends on <em>r</em> and <strong>k</strong>). That is, equation (2.4) can be understood in the following way. One gets a state with <em>n<sub>r</sub></em>(<strong>k</strong>) photons of momentum ℏ<strong>k</strong> and energy ℏω<sub><strong>k</strong></sub> when the creation operator <em>a<sub>r</sub></em><sup>†</sup>(<strong>k</strong>) operates <em>n<sub>r</sub></em>(<strong>k</strong>) times on the vacuum state |0⟩. Accordingly, <em>N<sub>r</sub></em>(<strong>k</strong>) is called the <em>number operator</em> and <em>n<sub>r</sub></em>(<strong>k</strong>) the ‘occupation number’ of the mode that is specified by <strong>k</strong> and <em>r</em>, i.e., this mode is occupied by <em>n<sub>r</sub></em>(<strong>k</strong>) photons. Note that Pauli's exclusion principle is not violated since it only applies to fermions and not to bosons like photons. 
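The ladder-operator algebra just described can be checked concretely with finite matrix truncations of a single mode (a standard numerical sketch of my own; the exact operators act on an infinite-dimensional Fock space, so the commutation relation only holds below the chosen cut-off):

```python
import numpy as np

dim = 8  # truncation of the infinite-dimensional Fock space of one mode (k, r)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation operator a
a_dag = a.conj().T                            # creation operator a-dagger

# The number operator N = a† a has eigenvalues 0, 1, 2, ... (occupation numbers)
N = a_dag @ a
assert np.allclose(np.diag(N), np.arange(dim))

# Bosonic commutation relation [a, a†] = 1 (exact except at the cut-off)
comm = a @ a_dag - a_dag @ a
assert np.allclose(np.diag(comm)[:-1], 1.0)

# a† acting on the vacuum |0> creates a one-quantum state |1>,
# and a lowers the occupation number back by one: a(a†|0>) = |0>
vac = np.zeros(dim)
vac[0] = 1.0
one = a_dag @ vac
assert np.allclose(a @ one, vac)
```

The point of the sketch is only that discreteness and countability fall out of the algebra itself; whether that suffices for a particle interpretation is exactly what is at issue in the following.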
The corresponding interpretation for the <em>annihilation operator</em> <em>a<sub>r</sub></em>(<strong>k</strong>) is parallel: When it operates on a state with a given number of photons this number is lowered by one.<br />It is a widespread view that these results complete “the justification for interpreting <em>N</em>(<em>k</em>) as the number operator, and hence for the particle interpretation of the quantized theory” (Ryder 1996: 131). This is a rash judgement, however. For instance, the question of localizability is not even touched, while it is certain that this is a pivotal criterion for something to be a particle. All that is established so far is that certain mathematical quantities in the formalism are discrete. However, countability is merely one feature of particles and not yet conclusive evidence for a <b>particle interpretation of QFT</b>. It is not clear at this stage whether we are in fact dealing with particles or with fundamentally different objects which only have this one feature of discreteness in common with particles.<br />Teller (1995) argues that the Fock space or “occupation number” representation does support a particle ontology in terms of <b>field quanta</b> since these can be counted or aggregated, although not numbered. The degree of excitation of a certain mode of the underlying field determines the number of objects, i.e. the particles in the sense of quanta. Labels for individual particles like in the Schrödinger many-particle formalism do not occur any more, which is the crucial deviation from the classical notion of particles. However, despite this deviation, says Teller, quanta should be regarded as particles: Besides their countability, another fact that supports seeing quanta as particles is that they have the same energies as classical particles. 
Teller has been criticized for drawing unduly far-reaching ontological conclusions from one particular representation, in particular since the Fock space representation cannot be appropriate in general because it is only valid for free particles (see, e.g., Fraser 2008). In order to avoid this problem, Bain (2000) proposes an alternative quanta interpretation that rests on the notion of asymptotically free states in scattering theory. For a further discussion of the quanta interpretation see the subsection on inequivalent representations below.<br />The vacuum state |0⟩ is the energy ground state, i.e., the eigenstate of the energy operator with the lowest eigenvalue. It is a remarkable result in ordinary non-relativistic QM that the ground state energy of, e.g., the harmonic oscillator is <em>not</em> zero, in contrast to its analogue in classical mechanics. In addition to this, the relativistic <b>vacuum of QFT</b> has the even more striking feature that the expectation values for various quantities do not vanish, which prompts the question of what it is that has these values or gives rise to them if the vacuum is taken to be the state with no particles present. If particles were the basic objects of QFT, how can it be that there are physical phenomena even if nothing is there according to this very ontology? Finally, studies of QFT in curved space-time indicate that the existence of a particle number operator might be a contingent property of the flat Minkowski space-time, because Poincaré symmetry is used to pick out a preferred representation of the canonical commutation relations, which is equivalent to picking out a preferred vacuum state (see Wald 1994).<br />Before exploring whether other (potentially) necessary requirements for the applicability of the particle concept are fulfilled let us see what the alternatives are. Proceeding this way makes it easier to evaluate the force of the following arguments in a more balanced manner. 
<br /><h4><a href="http://www.blogger.com/null" name="Field">5.1.2 The Field Interpretation</a></h4>Since various arguments seem to speak against a particle interpretation, the allegedly only alternative, namely a field interpretation, is often taken to be the appropriate ontology of QFT. So let us see what a physical field is and why QFT may be interpreted in this sense. A classical point particle can be described by its position <strong>x</strong>(<em>t</em>) and its momentum <strong>p</strong>(<em>t</em>), which change as the time <em>t</em> progresses. So there are six degrees of freedom for the motion of a point particle corresponding to the three coordinates of the particle's position and three more coordinates for its momentum. In the case of a classical field one has an independent value for each single point <strong>x</strong> in space, where this specification changes as time progresses. The field value φ can be a scalar quantity, like temperature, a vectorial one as for the electromagnetic field, or a tensor, such as the stress tensor for a crystal. A field is therefore specified by a time-dependent mapping from each point of space to a field value φ(<strong>x</strong>,<em>t</em>). Thus a field is a system with an infinite number of degrees of freedom, which may be restrained by some field equations. Whereas the intuitive notion of a field is that it is something transient and fundamentally different from matter, it can be shown that it is possible to ascribe energy and momentum to a pure field even in the absence of matter. This somewhat surprising fact shows how gradual the distinction between fields and matter can be.<br />The transition from a classical field theory to a quantum field theory is characterized by the occurrence of <em>operator-valued</em> quantum fields φ̂(<strong>x</strong>,<em>t</em>), and corresponding conjugate fields, for both of which certain canonical commutation relations hold. 
Thus there is an obvious formal analogy between classical and quantum fields: in both cases field values are attached to space-time points, where these values are specified by real numbers in the case of classical fields and operators in the case of quantum fields. That is, the mapping <strong>x</strong> ↦ φ̂(<strong>x</strong>,<em>t</em>) in QFT is analogous to the classical mapping <strong>x</strong> ↦ φ(<strong>x</strong>,<em>t</em>). Due to this formal analogy it appears to be beyond any doubt that QFT is a field theory. <br />But is a systematic association of certain mathematical terms with all points in space-time really enough to establish a field theory in a proper physical sense? Is it not essential for a physical field theory that some kind of real physical <em>properties</em> are allocated to space-time points? This requirement seems not fulfilled in QFT, however. Teller (1995: ch. 5) argues that the expression <em>quantum field</em> is only justified on a “perverse reading” of the notion of a field, since no definite physical values whatsoever are assigned to space-time points. Instead, quantum field operators represent the whole spectrum of possible values so that they rather have the status of observables (Teller: “determinables”) or general solutions. Only a specific <em>configuration</em>, i.e. an ascription of definite values to the field observables at all points in space, can count as a proper physical field. <br />There are at least four proposals for a field interpretation of QFT, all of which respect the fact that the operator-valuedness of quantum fields impedes their direct reading as physical fields.<br />(i) Teller (1995) argues that definite physical quantities emerge when not only the quantum field operators but also the state of the system is taken into account. 
More specifically, for a given state |ψ⟩ one can calculate the expectation values ⟨ψ|φ(<em>x</em>)|ψ⟩ which yields an ascription of definite physical values to all points x in space and thus a <em>configuration</em> of the operator-valued quantum field that may be seen as a proper physical field. The main problem with proposal (i), and possibly with (ii), too, is that an expectation value is the average value of a whole sequence of measurements, so that it does not qualify as the physical property of any actual single field system, no matter whether this property is a pre-existing (or categorical) value or a propensity (or disposition). <br />(ii) The vacuum expectation value or <b>VEV interpretation</b>, advocated by Wayne (2002), exploits a theorem by Wightman (1956). According to this reconstruction theorem all the information that is encoded in quantum field operators can be equivalently described by an infinite hierarchy of <em>n</em>-point vacuum expectation values, namely the expectation values of all products of quantum field operators at <em>n</em> (in general different) space-time points, calculated for the vacuum state. Since this collection of vacuum expectation values comprises only definite physical values it qualifies as a proper field configuration, and, Wayne argues, due to Wightman's theorem, so does the equivalent set of quantum field operators. Thus, and this is the upshot of Wayne's argument, an ascription of quantum field operators to all space-time points does by itself constitute a field configuration, namely for the vacuum state, even if this is not the actual state.<br />But this is also a problem for the VEV interpretation: While it shows nicely that much more information is encoded in the quantum field operators than just unspecifically what could be measured, it still does not yield anything like an <em>actual</em> field configuration. 
While this last requirement is likely to be too strong in a quantum theoretical context anyway, the next proposal may come at least somewhat closer to it.<br />(iii) In recent years the term <b>wave functional interpretation</b> has been established as the name for the default field interpretation of QFT. Correspondingly, it is the most widely discussed extant proposal; see, e.g., Huggett (2003), Halvorson and Müger (2007), Baker (2009) and Lupher (2010). In effect, it is not very different from proposal (i), and with further assumptions for (i) even identical. However, proposal (iii) phrases things differently and in a very appealing way. The basic idea is that quantized fields should be interpreted completely analogously to quantized one-particle states, just as both result analogously from imposing canonical commutation relations on the non-operator-valued classical quantities. In the case of a quantum mechanical particle, its state can be described by a wave function ψ(x), which maps positions to probability amplitudes, where |ψ(<em>x</em>)|<sup>2</sup> can be interpreted as the probability for the particle to be measured at position <em>x</em>. For a field, the analogue of positions are classical field configurations φ(<em>x</em>), i.e. assignments of field values to points in space. And so, the analogy continues, just as a quantum particle is described by a wave function that maps positions to probabilities (or rather probability amplitudes) for the particle to be measured at <em>x</em>, quantum fields can be understood in terms of <em>wave functionals</em> ψ[φ(<em>x</em>)] that map functions to numbers, namely classical field configurations φ(<em>x</em>) to probability amplitudes, where |ψ[φ(<em>x</em>)]|<sup>2</sup> can be interpreted as the probability for a given quantum field system to be found in configuration φ(<em>x</em>) when measured. 
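A minimal numerical sketch of this picture, assuming (as a toy example of my own, not drawn from the literature) a free scalar field discretized on a small periodic lattice: the ground-state wave functional is then a Gaussian over classical configurations φ(<em>x</em>), and |ψ[φ]|<sup>2</sup> assigns every configuration a non-zero probability, with the ‘empty’ configuration φ = 0 the most likely one.

```python
import numpy as np

# Hypothetical toy model: free scalar field on N lattice sites, mass m.
N, m = 16, 1.0
# K encodes the field equation (-Laplacian + m^2) with periodic boundaries
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 + m**2
    K[i, (i + 1) % N] = -1.0
    K[i, (i - 1) % N] = -1.0

# Ground-state wave functional of the free field is Gaussian:
# psi[phi] ~ exp(-1/2 phi^T Omega phi), with Omega = sqrt(K)
evals, evecs = np.linalg.eigh(K)
Omega = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

def prob_density(phi):
    """|psi[phi]|^2 (up to normalisation): probability density of measuring
    the classical field configuration phi."""
    return np.exp(-phi @ Omega @ phi)

# The 'empty' configuration phi = 0 is the most probable, yet every
# configuration has non-zero probability (vacuum fluctuations).
flat = np.zeros(N)
bump = np.exp(-0.5 * (np.arange(N) - N / 2) ** 2)
assert prob_density(flat) > prob_density(bump) > 0.0
```

The design point is that the state assigns amplitudes to whole classical field configurations rather than to particle positions; in practice such configuration-space distributions are sampled by Monte Carlo methods rather than written down explicitly.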
Thus just as a quantum state in ordinary single-particle QM can be interpreted as a superposition of classical localized particle states, the state of a quantum field system, so says the wave functional approach, can be interpreted as a superposition of classical field configurations. And what superpositions mean depends on one's general interpretation of quantum probabilities (collapse with propensities, Bohmian hidden variables, branching Everettian many-worlds,…). In practice, however, QFT is hardly ever represented in wave functional space because usually there is little interest in measuring field configurations. Rather, one tries to measure ‘particle’ states and therefore works in Fock space. <br />(iv) For a modification of proposal (iii), indicated in Baker (2009: sec. 5) and explicitly formulated as an alternative interpretation by Lupher (2010), see the end of the section “Non-Localizability Theorems” below. <br /><h4><a href="http://www.blogger.com/null" name="OSR">5.1.3 Ontic Structural Realism</a></h4>The multitude of problems for particle as well as field interpretations prompted a number of alternative ontological approaches to QFT. Auyang (1995) and Dieks (2002) propose different versions of event ontologies. Seibt (2002) and Hättich (2004) defend process-ontological accounts of QFT, which are scrutinized in Kuhlmann (2002, 2010a: ch. 10). In recent years, however, ontic structural realism (OSR) has become the most fashionable ontological framework for modern physics. While so far the vast majority of studies concentrates on ordinary QM and General Relativity Theory, it seems to be commonly believed among advocates of OSR that their case is even stronger regarding QFT, in light of the paramount significance of symmetry groups (also see below)—hence the name <em>group structural realism</em> (Roberts 2010). Explicit arguments are few and far between, however. 
<br />One of the rare arguments in favor of OSR that deal specifically with QFT is due to Kantorovich (2003), who opts for a Platonic version of OSR; a position that is otherwise not very popular among OSRists. Kantorovich argues that directly after the big bang “the world was baryon-free, whereas the symmetry of grand unification existed as an abstract structure” (p. 673). Cao (1997b) points out that the best ontological access to QFT is gained by concentrating on structural properties rather than on any particular category of entities. Cao (2010) advocates a “constructive structural realism” on the basis of a detailed conceptual investigation of the formation of quantum chromodynamics. However, Kuhlmann (2011) shows that Cao's position has little to do with what is usually taken to be ontic structural realism, and that it is not even clear whether it should at least be rated as an epistemic variant of structural realism. <br />Lyre (2004) argues that the central significance of gauge theories in modern physics supports structural realism, and offers a case study concerning the <em>U</em>(1) gauge symmetry group, which characterizes QED. Recently Lyre (2012) has been advocating an intermediate form of OSR, which he calls “Extended OSR (ExtOSR)”, according to which there are not only relational structural properties but also structurally derived intrinsic properties, namely the invariants of structure: mass, spin, and charge. Lyre claims that only ExtOSR is in a position to account for gauge theories. Moreover, it can make sense of zero-value properties, such as the zero mass of photons. See the Section 4.2 (OSR and Quantum Field Theory) in the SEP entry on <a href="http://plato.stanford.edu/entries/structural-realism/">structural realism</a>. 
<br /><h4><a href="http://www.blogger.com/null" name="Trope">5.1.4 Trope Ontology</a></h4>Kuhlmann (2010a) proposes a <b>Dispositional Trope Ontology (DTO)</b> as the most appropriate ontological reading of the basic structure of QFT, in particular in its algebraic formulation, AQFT. The term ‘trope’ refers to a conception of properties that breaks with tradition by regarding properties as particulars rather than repeatables (or ‘universals’). This new conception of properties permits analyzing objects as pure bundles of properties/tropes without excluding the possibility of having different objects with (qualitatively but not numerically) exactly the same properties. One of Kuhlmann's crucial points is that (A)QFT speaks in favor of a bundle conception of objects because the net structure of observable algebras alone (see section “Basic Ideas of AQFT” above) encodes the fundamental features of a given quantum field theory, e.g. its charge structure.<br />In the DTO approach, the essential properties/tropes of a trope bundle are then identified with the defining characteristics of a superselection sector, such as different kinds of charges, mass and spin. Since these properties cannot change by any state transition they guarantee the object's identity over time. Superselection sectors are inequivalent irreducible representations of the algebra of all quasi-local observables. While the essential properties/tropes of an object are permanent, its non-essential ones may change. Since we are dealing with quantum physical systems many properties are dispositions (or propensities); hence the name <em>dispositional</em> trope ontology.<br />A trope bundle is not individuated via spatio-temporal co-localization but because of the particularity of its constitutive tropes. Morganti (2009) also advocates a trope-ontological reading of QFT, which refers directly to the classification scheme of the Standard Model. 
<br /><h3><a href="http://www.blogger.com/null" name="wigner">5.2 Did Wigner Define the Particle Concept?</a></h3>Wigner's (1939) famous analysis of the Poincaré group is often assumed to provide a definition of elementary particles. The main idea of Wigner's approach is the supposition that each irreducible (projective) representation of the relevant space-time symmetry group yields the state space of one kind of elementary physical system, where the prime example is an elementary particle which has the more restrictive property of being structureless. The physical justification for linking up irreducible representations with elementary systems is the requirement that “there must be no relativistically invariant distinction between the various states of the system” (Newton & Wigner 1949). In other words the state space of an elementary system shall have no internal structure with respect to relativistic transformations. Put more technically, the state space of an elementary system must not contain any relativistically invariant subspaces, i.e., it must be the state space of an irreducible representation of the relevant invariance group. If the state space of an elementary system had relativistically invariant subspaces then it would be appropriate to associate these subspaces with elementary systems. The requirement that a state space has to be relativistically invariant means that starting from any of its states it must be possible to get to all the other states by superposition of those states which result from relativistic transformations of the state one started with. The main part of Wigner's analysis consists in finding and classifying all the irreducible representations of the Poincaré group. Doing that involves finding relativistically invariant quantities that serve to classify the irreducible representations. 
Wigner's pioneering identification of types of particles with irreducible unitary representations of the Poincaré group has been exemplary until the present, as is emphasized, e.g., in Buchholz (1994). For an alternative perspective focusing on “Wigner's legacy” for ontic structural realism see Roberts (2011).<br />Regarding the question whether Wigner has supplied a definition of particles, one must say that although Wigner has in fact found a highly valuable and fruitful <em>classification</em> of particles, his analysis does not contribute very much to the question of what a particle is and whether a given theory can be interpreted in terms of particles. What Wigner has given is rather a conditional answer. <em>If</em> relativistic quantum mechanics can be interpreted in terms of particles, <em>then</em> the possible types of particles and their invariant properties can be determined via an analysis of the irreducible unitary representations of the Poincaré group. However, the question whether, and if so in what sense, at least relativistic quantum mechanics can be interpreted as a particle theory at all is not addressed in Wigner's analysis. For this reason the discussion of the particle interpretation of QFT is not finished with Wigner's analysis, as one might be tempted to say. For instance, the pivotal question of the localizability of particle states, to be discussed below, is still open. Moreover, once interactions are included, Wigner's classification is no longer applicable (see Bain 2000). Kuhlmann (2010a: sec. 8.1.2) offers an accessible introduction to Wigner's analysis and discusses its interpretive relevance.<br /><h3><a href="http://www.blogger.com/null" name="NonLoc">5.3 Non-Localizability Theorems</a></h3>The observed ‘particle traces’, e.g., on photographic plates of bubble chambers, seem to be a clear indication for the existence of particles. 
However, the theory which has been built on the basis of these scattering experiments, QFT, turns out to have considerable problems accounting for the observed ‘particle trajectories’. Not only are sharp trajectories excluded by Heisenberg's uncertainty relations for position and momentum coordinates, which already hold in non-relativistic quantum mechanics. More advanced examinations in AQFT show that ‘quantum particles’ which behave according to the principles of relativity theory cannot be localized in any bounded region of space-time, no matter how large, a result which excludes even tube-like trajectories. It thus appears to be impossible that our world is composed of particles when we assume that localizability is a necessary ingredient of the particle concept. So far there is no single unquestioned argument against the possibility of a particle interpretation of QFT but the problems are piling up. Reeh & Schlieder, Hegerfeldt, Malament and Redhead all obtained mathematical results, or formalized their interpretation, which prove that certain sets of assumptions, which are taken to be essential for the particle concept, lead to contradictions.<br />The <b>Reeh-Schlieder theorem</b> (1961) is a central result in AQFT. It asserts that acting on the vacuum state Ω with elements of the von Neumann observable algebra <em>R</em>(<em>O</em>) for an open space-time region <em>O</em>, one can approximate as closely as one likes any state in Hilbert space <span class="scriptuc">H</span>, in particular one that is very different from the vacuum in some space-like separated region <em>O</em>′. The Reeh-Schlieder theorem thus exploits long-distance correlations of the vacuum. Or one can express the result by saying that local measurements do not allow for a distinction between an N-particle state and the vacuum state. 
Redhead's (1995a) take on the Reeh-Schlieder theorem is that local measurements can never decide whether one observes an N-particle state, since a projection operator <em>P</em><sub>Ψ</sub> which corresponds to an N-particle state Ψ can never be an element of a local algebra <em>R</em>(<em>O</em>). Clifton & Halvorson (2001) discuss what this means for the issue of entanglement. Halvorson (2001) shows that an alternative “Newton-Wigner” localization scheme fails to evade the problem of localization posed by the Reeh-Schlieder theorem. <br /><b>Malament</b> (1996) formulates a <b>no-go theorem</b> to the effect that a relativistic quantum theory of a fixed number of particles predicts a zero probability for finding a particle in any spatial set, provided four conditions are satisfied, namely concerning translation covariance, energy, localizability and locality. The <em>localizability condition</em> is the essential ingredient of the particle concept: A particle—in contrast to a field—cannot be found in two disjoint spatial sets at the same time. The <em>locality condition</em> is the main relativistic part of Malament's assumptions. It requires that the statistics for measurements in one space-time region must not depend on whether or not a measurement has been performed in a space-like related second space-time region. Malament's proof has the weight of a no-go theorem provided that we accept his four conditions as natural assumptions for a particle interpretation. A relativistic quantum theory of a fixed number of particles, satisfying in particular the localizability and the locality condition, has to assume a world devoid of particles (or at least a world in which particles can never be detected) in order not to contradict itself. 
Malament's no-go theorem thus seems to show that there is no middle ground between QM and QFT, i.e., no theory which deals with a fixed number of particles (as in QM) and which is relativistic (as in QFT) without running into the localizability problem of the no-go theorem. One is forced towards QFT which, as Malament is convinced, can only be understood as a field theory. Nevertheless, whether or not a particle interpretation of QFT is in fact ruled out by Malament's result is a point of debate. At least prima facie Malament's no-go theorem alone cannot supply a final answer since it assumes a fixed number of particles, an assumption that is not valid in the case of QFT.<br />The results about non-localizability which have been explored above may appear not very astonishing in the light of the following facts about ordinary QM: Quantum mechanical wave functions (in position representation) are usually smeared out over all ℜ<sup>3</sup>, so that everywhere in space there is a non-vanishing probability for finding a particle. This is even the case arbitrarily soon after a sharp position measurement due to the instantaneous spreading of wave packets over all space. Note, however, that ordinary QM is non-relativistic. A conflict with special relativity theory (SRT) would thus not be very surprising although it is not yet clear whether the above-mentioned quantum mechanical phenomena can actually be exploited to allow for superluminal signalling. QFT, on the other hand, has been designed to be in accordance with SRT. The local behavior of phenomena is one of the leading principles upon which the theory was built. This makes non-localizability within the formalism of QFT a much more severe problem for a particle interpretation.<br />Malament's reasoning has come under attack in Fleming & Butterfield (1999) and Busch (1999). Both argue to the effect that there are <b>alternatives to Malament's conclusion</b>. 
The main line of thought in both criticisms is that Malament's ‘mathematical result’ might just as well be interpreted as evidence that the assumed concept of a sharp localization operator is flawed and has to be modified either by allowing for unsharp localization (Busch 1999) or for so-called “hyperplane dependent localization” (Fleming & Butterfield 1999). In Saunders (1995) a different conclusion from Malament's (as well as from similar) results is drawn. Rather than granting Malament's four conditions and deriving a problem for a particle interpretation, Saunders takes Malament's proof as further evidence that one cannot hold on to all four conditions. According to Saunders it is the localizability condition which might not be a natural and necessary requirement on second thought. Stressing that “relativity requires the language of events, not of things” Saunders argues that the localizability condition loses its plausibility when it is applied to events: It makes no sense to postulate that the same event cannot occur at two disjoint spatial sets at the same time. One can only require that the same <em>kind</em> of event not occur at both places. For Saunders the particle interpretation as such is not at stake in Malament's argument. The question is rather whether QFT speaks about things at all. Saunders considers Malament's result to give a negative answer to this question. Halvorson & Clifton (2002) is a kind of meta-paper on Malament's theorem. Various objections to the choice of Malament's assumptions and his conclusion are considered and rebutted. Moreover, Halvorson and Clifton establish two further no-go theorems which strengthen Malament's theorem by weakening its tacit assumptions and showing that the general conclusion still holds. One thing seems to be clear. 
Since Malament's ‘mathematical result’ appears to allow for various different conclusions, it cannot be taken as conclusive evidence against the tenability of a particle interpretation of QFT, and the same applies to Redhead's interpretation of the Reeh-Schlieder theorem. For a more detailed exposition and comparison of the Reeh-Schlieder theorem and Malament's theorem see Kuhlmann (2010a: sec. 8.3).<br />Does the <b>field interpretation</b> also suffer from problems concerning non-localizability? In the section “Deficiencies of the Conventional Formulation of QFT” we already saw that, strictly speaking, field operators cannot be defined at points but need to be smeared out in the (finite and arbitrarily small) vicinity of points, giving rise to smeared field operators <span class="overstrike">φ<span class="up">ˆ</span></span>(<em>f</em>), which represent the weighted average field value in the respective region. This procedure leads to operator-valued distributions instead of operator-valued fields. The lack of field operators at points appears to be analogous to the lack of position operators in QFT, which troubles the particle interpretation. However, for position operators there is no remedy analogous to that for field operators: while even unsharply localized particle positions do not exist in QFT (see Halvorson and Clifton 2002, theorem 2), the existence of smeared field operators demonstrates that there are at least approximately point-like field operators. On this basis Lupher (2010) proposes a “modified field ontology”.<br /><h3><a href="http://www.blogger.com/null" name="InRep">5.4 Inequivalent Representations</a></h3>The occurrence of inequivalent representations is a grave obstacle to interpreting QFT, one that is increasingly rated as the single most important problem and that has no counterpart whatsoever in standard QM. 
As we saw in the section “Deficiencies of the Conventional Formulation of QFT”, the quantization of a theory with an infinite number of degrees of freedom, such as a field theory, leads to unitarily <em>inequivalent</em> representations (UIR) of the canonical commutation relations. It is highly controversial what the availability of UIRs means. One possible stance is to dismiss them as mathematical artifacts with no physical relevance. Ruetsche (2002) calls this “Hilbert Space Conservatism”. On the one hand, this view fits well with the fact that UIRs are hardly even mentioned in standard textbooks on QFT. On the other hand, this cannot be the last word because UIRs undoubtedly do real work in physics, e.g. in quantum statistical mechanics (see Ruetsche 2003) and in particular when it comes to spontaneous symmetry breaking. <br />The coexistence of UIRs can be readily understood by looking at ferromagnetism (see Ruetsche 2006). At high temperatures the atomic dipoles in ferromagnetic substances fluctuate randomly. Below a certain temperature the atomic dipoles tend to align with each other in some direction. Since the basic laws governing this phenomenon are rotationally symmetric, no direction is preferred. Thus once the dipoles have “chosen” one particular direction, the symmetry is broken. Since there is a different ground state for each direction of magnetization, one needs different Hilbert spaces—each containing a unique ground state—in order to describe symmetry-breaking systems. Correspondingly, one has to employ inequivalent representations. <br />One important interpretive issue where UIRs play a crucial role is the <b>Unruh effect</b>: a uniformly accelerated observer in a Minkowski vacuum should detect a thermal bath of particles, the so-called Rindler quanta (Unruh 1976, Unruh & Wald 1984). A mere change of the reference frame thus seems to bring particles into being. 
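For reference, the thermal bath registered by the accelerated observer has the standard Unruh temperature, proportional to the proper acceleration <em>a</em>:

```latex
% Unruh temperature for an observer with proper acceleration a:
T_{\mathrm{Unruh}} \;=\; \frac{\hbar\, a}{2\pi\, c\, k_B}
% An acceleration of a ~ 2.5 x 10^{20} m/s^2 is needed for a bath
% of only about 1 K, which is why the effect is so hard to detect.
```

The minute size of the effect for realistic accelerations explains why the debate about the reality of Rindler quanta has remained largely theoretical.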
Since the very existence of the basic entities of an ontology should be invariant under transformations of the reference frame, the Unruh effect constitutes a severe challenge to a particle interpretation of QFT. Teller (1995: 110-113) tries to dispel this problem by pointing out that while the Minkowski vacuum has the definite value zero for the Minkowski number operator, the particle number is indefinite for the Rindler number operator, since one has a superposition of Rindler quanta states. This means that there are only propensities for detecting different numbers of Rindler quanta but no actual quanta. However, this move is problematic since it seems to suggest that quantum physical propensities in general need not be taken as fully real.<br />Clifton and Halvorson (2001b) argue, contra Teller, that it is inappropriate to give priority to either the Minkowski or the Rindler perspective. Both are needed for a complete picture. Both the Minkowski and the Rindler representation are true descriptions of the world, namely in terms of objective propensities. Arageorgis, Earman and Ruetsche (2003) argue that Minkowski and Rindler (or Fulling) quantization do <em>not</em> constitute a satisfactory case of physically relevant UIRs. First, there are good reasons to doubt that the Rindler vacuum is a physically realizable state. Second, the authors argue, the unitary inequivalence in question merely stems from the fact that one representation is reducible and the other one irreducible: The restriction of the Minkowski vacuum to a Rindler wedge, i.e. what the Minkowski observer says about the Rindler wedge, leads to a mixed state (a thermodynamic KMS state) and therefore a reducible representation, whereas the Rindler vacuum is a pure state and thus corresponds to an irreducible representation. 
Therefore, the Unruh effect does not cause distress for the particle interpretation—which the authors regard as fighting a losing battle anyhow—because Rindler quanta are not real and the unitary inequivalence of the representations in question has nothing specific to do with conflicting particle ascriptions. <br />The occurrence of UIRs is also at the core of an analysis by Fraser (2008). She restricts her analysis to inertial observers but compares the particle notion for free and interacting systems. Fraser argues, first, that the representations for free and interacting systems are unavoidably unitarily inequivalent, and second, that the representation for an interacting system does not have the minimal properties that are needed for any particle interpretation—e.g. Teller's (1995) quanta version—namely the countability condition (quanta are aggregable) and a relativistic energy condition. Note that for Fraser's negative conclusion about the tenability of the particle (or quanta) interpretation for QFT there is no need to assume localizability. <br />Bain (2000) offers a diverging assessment of the fact that only asymptotically free states, i.e. states very long before or after a scattering interaction, have a Fock representation that allows for an interpretation in terms of countable quanta. For Bain, the occurrence of UIRs without a particle (or quanta) interpretation for intervening times, i.e. close to scattering experiments, is irrelevant because the data that are collected from those experiments always refer to systems with negligible interactions. Bain concludes that although the inclusion of interactions does in fact lead to the breakdown of the alleged duality of particles and fields it does not undermine the notion of particles (or fields) as such. 
<br />Fraser (2008) rates this as an unsuccessful “last ditch” attempt to save a quanta interpretation of QFT because it is ad hoc and cannot even show that at least something similar to the free field total number operator exists for finite times, i.e. between the asymptotically free states. Moreover, Fraser (2008) points out that, contrary to what some authors suggest, the main source of the impossibility of interpreting interacting systems in terms of particles is <em>not</em> that many-particle states are inappropriately described in the Fock representation if one deals with interacting fields but rather that QFT obeys special relativity theory (also see Earman and Fraser (2006) on Haag's theorem). As Fraser concludes, “[F]or a free system, special relativity and the linear field equation conspire to produce a quanta interpretation.” In his reply Bain (2011) points out that the reason why there is no total number operator in interacting relativistic quantum field theories is that this would require an absolute space-time structure, which in turn is not an appropriate requirement. <br />Baker (2009) points out that the main arguments against the particle interpretation—concerning non-localizability (e.g. Malament 1996) and failure for interacting systems (Fraser 2008)—may also be directed against the wave functional version of the field interpretation (see field interpretation (iii) above). Mathematically, Baker's crucial point is that wave functional space is unitarily equivalent to Fock space, so that arguments against the particle interpretation that attack the choice of the Fock representation may carry over to the wave functional interpretation. First, a Minkowski and a Rindler observer may also detect different field configurations. 
Second, if the Fock space representation is not apt to describe interacting systems, then the unitarily equivalent wave functional representation is in no better situation: Interacting fields are unitarily inequivalent to free fields, too. <br />It is difficult to say how the availability of UIRs should be interpreted in general. Clifton and Halvorson (2001b) propose seeing this as a form of complementarity. Ruetsche (2003) advocates a “Swiss army approach”, according to which the availability of UIRs shows that physical possibilities in different degrees must be included in our ontology. However, both proposals are as yet too sketchy and await further elaboration. <br /><h3><a href="http://www.blogger.com/null" name="SymHeuObj">5.5 The Role of Symmetries</a></h3>Symmetries play a central role in QFT. In order to characterize a particular symmetry one has to specify transformations T and features that remain unchanged during these transformations: invariants I. Symmetries are thus pairs {T, I}. The basic idea is that the transformations change elements of the mathematical description (the Lagrangians for instance) whereas the empirical content of the theory is unchanged. There are space-time transformations and so-called internal transformations. Whereas space-time symmetries are universal, i.e., they are valid for all interactions, internal symmetries characterize special sorts of interaction (electromagnetic, weak or strong interaction). Symmetry transformations define properties of particles/quantum fields that are conserved if the symmetry is not broken. Each invariance of a system gives rise to a conservation law: if a system is invariant under translations, linear momentum is conserved; if it is invariant under rotations, angular momentum is conserved. 
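The translation and rotation examples are instances of Noether's theorem, which in field-theoretic form (sketched here for a single field φ, assuming the Lagrangian density is strictly invariant) reads:

```latex
% Noether's theorem: if the Lagrangian density L is invariant under
% a continuous transformation \phi \to \phi + \epsilon\,\delta\phi,
% then the current
j^{\mu} \;=\; \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,\delta\phi
% is conserved for fields satisfying the equations of motion:
\partial_{\mu}\, j^{\mu} \;=\; 0 .
```

Space-time translations yield the conserved energy-momentum in this way, while internal symmetries yield conserved charges such as electric charge or baryon number.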
Internal transformations, such as gauge transformations, are connected with more abstract properties.<br />Symmetries are not only defined for Lagrangians but they can also be found in empirical data and phenomenological descriptions. Symmetries can thus bridge the gap between descriptions which are close to empirical results (‘phenomenology’) and the more abstract general theory, which is one of the most important reasons for their heuristic force. If a conservation law is found, one has some knowledge about the system even if details of the dynamics are unknown. The analysis of many high-energy collision experiments led to the assumption of special conservation laws for abstract properties like baryon number or strangeness. Evaluating experiments in this way allowed for a classification of particles. This phenomenological classification was good enough to predict new particles which could be found in the experiments. Gaps in the classification could be filled even if the dynamics of the theory (for example the Lagrangian of strong interaction) was still unknown. As the history of QFT for strong interaction shows, symmetries found in the phenomenological description often lead to valuable constraints for the construction of the dynamical equations. Arguments from group theory played a decisive role in the unification of fundamental interactions. In addition, symmetries bring about substantial technical advantages. For example, by using gauge transformations one can bring the Lagrangian into a form which makes it easy to prove the renormalizability of the theory. See also the entry on <a href="http://plato.stanford.edu/entries/symmetry-breaking/">symmetry and symmetry breaking</a>.<br />In many cases symmetries are not only heuristically useful but supply some sort of ‘justification’ by being used at the beginning of a chain of explanation. To a remarkable degree the present theories of elementary particle interactions can be understood by deduction from general principles. 
Among these principles, symmetry requirements play a crucial role in determining the Lagrangian. For example, the only Lorentz-invariant and gauge-invariant renormalizable Lagrangian for photons and electrons is precisely the original Dirac Lagrangian. In this way symmetry arguments acquire an explanatory power and help to minimize the unexplained basic assumptions of a theory. Heisenberg concludes that in order “to find the way to a real understanding of the spectrum of particles it will therefore be necessary to look for the fundamental symmetries and not for the fundamental particles.” (Blum <em>et al</em>. 1995: 507).<br />Since symmetry operations change the perspective of an observer but not the physics, an analysis of the relevant symmetry group can yield very general information about those entities which are unchanged by transformations. Such an invariance under a symmetry group is a necessary (but not sufficient) requirement for something to belong to the ontology of the considered physical theory. Hermann Weyl promoted the idea that objectivity is associated with invariance (see, e.g., his authoritative work Weyl 1952: 132). Auyang (1995) stresses the connection between properties of physically relevant symmetry groups and ontological questions. Kosso argues that symmetries help to separate objective facts from the conventions of description; see his article in Brading & Castellani (2003), an anthology containing numerous further philosophical studies about symmetries in physics.<br />Symmetries are typical examples of structures that show more continuity in scientific change than assumptions about objects. For that reason structural realists consider structures as “the best candidate for what is ‘true’ about a physical theory” (Redhead 1999: 34). Physical objects such as electrons are then taken to be akin to fictions that, in the end, should not be taken seriously. 
In the epistemic variant of structural realism, structure is all we can know about nature, whereas the objects related by these structures may exist but are not accessible to us. For the extreme ontic structural realist there is nothing but structures in the world (Ladyman 1998).<br /><h3><a href="http://www.blogger.com/null" name="TakSto">5.6 Taking Stock: Where do we Stand?</a></h3>A <b>particle interpretation</b> of QFT gives the most intuitive answer to what happens in particle scattering experiments and to why we seem to detect particle trajectories. Moreover, it would explain most naturally why particle talk appears almost unavoidable. However, the particle interpretation in particular is troubled by numerous serious problems. There are no-go theorems to the effect that, in a relativistic setting, quantum “particle” states cannot be localized in any finite region of space-time no matter how large it is. Besides localizability, another core requirement for the particle concept that seems to be violated in QFT is countability. First, many take the Unruh effect to indicate that the particle number is observer- or context-dependent. And second, interacting quantum field theories cannot be interpreted in terms of particles because their representations are unitarily inequivalent to Fock space (Haag's theorem), which is the only known way to represent countable entities in systems with an infinite number of degrees of freedom. <br />At first sight the <b>field interpretation</b> seems to be much better off, considering that a field is not a localized entity and that it may vary continuously—so the requirements of localizability and countability do not apply. Accordingly, the field interpretation is often taken to be implied by the failure of the particle interpretation. However, on closer scrutiny the field interpretation itself is not above reproach. 
To begin with, since “quantum fields” are operator-valued, it is not clear in which sense QFT should be describing physical fields, i.e. as ascribing physical properties to points in space. In order to get determinate physical properties, or even just probabilities, one needs a quantum state. However, since quantum states as such are not spatio-temporally defined, it is questionable whether field values calculated with their help can still be viewed as local properties. The second serious challenge is that the arguably strongest field interpretation—the wave functional version—may be beset by problems similar to those of the particle interpretation, since wave functional space is unitarily equivalent to Fock space. <br />The occurrence of <b>unitarily inequivalent representations (UIRs)</b>, which first seemed to cause problems specifically for the particle interpretation but which appears to carry over to the field interpretation, may well be a severe obstacle to any ontological interpretation of QFT. However, it is controversial whether the two most prominent examples, namely the Unruh effect and Haag's theorem, really do cause the contended problems in the first place. Thus one of the crucial tasks for the philosophy of QFT is further clarifying the ontological significance of UIRs. <br />The two remaining contestants approach QFT in a way that breaks more radically with traditional ontologies than any of the proposed particle and field interpretations. <b>Ontic Structural Realism (OSR)</b> takes the paramount significance of symmetry groups to indicate that symmetry structures as such have an ontological primacy over objects. 
However, since most OSRists are decidedly against Platonism, it is not altogether clear how symmetry structures could be ontologically prior to objects if they only exist in concrete realizations, namely in those objects that exhibit these symmetries.<br /><b>Dispositional Trope Ontology (DTO)</b> deprives both particles and fields of their fundamental status, and proposes an ontology whose basic elements are properties understood as particulars, called ‘tropes’. One of the advantages of the DTO approach is its great generality concerning the nature of objects, which it analyzes as bundles of (partly dispositional) properties/tropes: DTO is flexible enough to encompass both particle-like and field-like features without being committed to either a particle or a field ontology. <br />In conclusion, one has to recall that one reason why the ontological interpretation of QFT is so difficult is the fact that it is exceptionally unclear which parts of the formalism should be taken to represent anything physical in the first place. And it looks as if that problem will persist for quite some time. <br /><h2><a href="http://www.blogger.com/null" name="Bib">Bibliography</a></h2><ul class="hanging"><li>Auyang, S. Y., 1995, <em>How is Quantum Field Theory Possible?</em>, Oxford-New York: Oxford University Press. </li><li>Bain, J., 2000, “Against particle/field duality: Asymptotic particle states and interpolating fields in interacting QFT (or: Who’s afraid of Haag’s theorem?)”, <em>Erkenntnis</em>, 53: 375–406.</li><li>–––, 2011, “Quantum field theories in classical spacetimes and particles”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 98–106.</li><li>Baker, D. J., 2009, “Against field interpretations of quantum field theory”, <em>British Journal for the Philosophy of Science</em>, 60: 585–609.</li><li>Baker, D.J. and H. Halvorson, 2010, “Antimatter”, <em>British Journal for the Philosophy of Science</em>, 61: 93–121.</li><li>Born, M., with W. Heisenberg, and P. 
Jordan, 1926, “Zur Quantenmechanik II”, <em>Zeitschrift für Physik</em>, 35: 557–615.</li><li>Brading, K. and E. Castellani (eds.), 2003, <em>Symmetries in Physics: Philosophical Reflections</em>, Cambridge: Cambridge University Press.</li><li>Bratteli, O. and D. W. Robinson, 1979, <em>Operator Algebras and Quantum Statistical Mechanics 1: <em>C</em><sup>*</sup> and <em>W</em><sup>*</sup>-Algebras, Symmetry Groups, Decomposition of States</em>, New York et al.: Springer.</li><li>Brown, H. R. and R. Harré (eds.), 1988, <em>Philosophical Foundations of Quantum Field Theory</em>, Oxford: Clarendon Press.</li><li>Buchholz, D., 1994, “On the manifestations of particles,” in R. N. Sen and A. Gersten, eds., <em>Mathematical Physics Towards the 21st Century</em>, Beer-Sheva: Ben-Gurion University Press. </li><li>–––, 1998, “Current trends in axiomatic quantum field theory,” in P. Breitenlohner and D. Maison, eds., <em>Quantum Field Theory. Proceedings of the Ringberg Workshop 1998</em>, pp. 43-64, Berlin-Heidelberg: Springer.</li><li>Busch, P., 1999, “Unsharp localization and causality in relativistic quantum theory,” <em>Journal of Physics A: Mathematical and General</em>, 32: 6535.</li><li>Butterfield, J. and H. Halvorson (eds.), 2004, <em>Quantum Entanglements — Selected Papers — Rob Clifton</em>, Oxford: Oxford University Press. </li><li>Butterfield, J. and C. Pagonis (eds.), 1999, <em>From Physics to Philosophy</em>, Cambridge: Cambridge University Press.</li><li>Callender, C. and N. Huggett (eds.), 2001, <em>Physics Meets Philosophy at the Planck Scale</em>, Cambridge: Cambridge University Press. </li><li>Cao, T. Y., 1997a, <em>Conceptual Developments of 20th Century Field Theories</em>, Cambridge: Cambridge University Press. </li><li>–––, 1997b, “Introduction: Conceptual issues in QFT,” in Cao 1997a, pp. 1-27.</li><li>–––, (ed.), 1999, <em>Conceptual Foundations of Quantum Field Theories</em>, Cambridge: Cambridge University Press. 
</li><li>–––, 2010, <em>From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism</em>, Cambridge: Cambridge University Press.</li><li>Castellani, E., 2002, “Reductionism, emergence, and effective field theories,” <em>Studies in History and Philosophy of Modern Physics</em>, 33: 251-267.</li><li>Clifton, R. (ed.), 1996, <em>Perspectives on Quantum Reality: Non-Relativistic, Relativistic, and Field-Theoretic</em>, Dordrecht et al.: Kluwer.</li><li>Clifton, R. and H. Halvorson, 2001, “Entanglement and open systems in algebraic quantum field theory,” <em>Studies in History and Philosophy of Modern Physics</em>, 32: 1-31; reprinted in Butterfield & Halvorson 2004.</li><li>Davies, P. (ed.), 1989, <em>The New Physics</em>, Cambridge: Cambridge University Press.</li><li>Dawid, R., 2009, “On the conflicting assessments of string theory”, <em>Philosophy of Science</em>, 76: 984–996.</li><li>Dieks, D., 2002, “Events and covariance in the interpretation of quantum field theory,” in Kuhlmann <em>et al</em>. 2002, pp. 215-234.</li><li>Dieks, D. and A. Lubberdink, 2011, “How classical particles emerge from the quantum world”, <em>Foundations of Physics</em>, 41: 1051–1064.</li><li>Dirac, P. A. M., 1927, “The quantum theory of emission and absorption of radiation,” <em>Proceedings of the Royal Society of London</em>, A 114: 243-256.</li><li>Earman, J., 2011, “The Unruh effect for philosophers”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 81–97.</li><li>Earman, J. and D. Fraser, 2006, “Haag’s theorem and its implications for the foundations of quantum field theory”, <em>Erkenntnis</em>, 64: 305–344.</li><li>Fleming, G. N. and J. Butterfield, 1999, “Strange positions,” in Butterfield & Pagonis 1999, pp. 
108-165.</li><li>Fraser, D., 2008, “The fate of ‘particles’ in quantum field theories with interactions”, <em>Studies in History and Philosophy of Modern Physics</em>, 39: 841–59.</li><li>–––, 2009, “Quantum field theory: Underdetermination, inconsistency, and idealization”, <em>Philosophy of Science</em>, 76: 536–567.</li><li>–––, 2011, “How to take particle physics seriously: A further defence of axiomatic quantum field theory”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 126–135.</li><li>Georgi, H., 1989, “Effective quantum field theories,” in Davies 1989, pp. 446-457.</li><li>Greene, B., 1999, <em>The Elegant Universe. Superstrings, Hidden Dimensions and the Quest for the Ultimate Theory</em>, New York: W. W. Norton and Company.</li><li>Haag, R., 1996, <em>Local Quantum Physics: Fields, Particles, Algebras</em>, 2nd edition, Berlin et al.: Springer.</li><li>Haag, R. and D. Kastler, 1964, “An algebraic approach to quantum field theory,” <em>Journal of Mathematical Physics</em>, 5: 848-861.</li><li>Halvorson, H., 2001, “Reeh-Schlieder defeats Newton-Wigner: On alternative localization schemes in relativistic quantum field theory”, <em>Philosophy of Science</em>, 68: 111–133.</li><li>Halvorson, H. and R. Clifton, 2002, “No place for particles in relativistic quantum theories?” <em>Philosophy of Science</em>, 69: 1-28; reprinted in Butterfield and Halvorson 2004 and in Kuhlmann <em>et al</em>. 2002.</li><li>Halvorson, H. and M. 
Müger, 2007, “Algebraic quantum field theory (with an appendix by Michael Müger)”, in <em>Handbook of the Philosophy of Physics — Part A</em>, Jeremy Butterfield and John Earman (eds.), Amsterdam: Elsevier, 731–922.</li><li>Hartmann, S., 2001, “Effective field theories, reductionism, and explanation,” <em>Studies in History and Philosophy of Modern Physics</em>, 32: 267-304.</li><li>Hättich, F., 2004, <em>Quantum Processes — A Whiteheadian Interpretation of Quantum Field Theory</em>, Münster: agenda Verlag.</li><li>Healey, R., 2007, <em>Gauging What’s Real: The Conceptual Foundations of Contemporary Gauge Theories</em>, Oxford: Oxford University Press.</li><li>Heisenberg, W. and W. Pauli, 1929, “Zur Quantendynamik der Wellenfelder,” <em>Zeitschrift für Physik</em>, 56: 1-61.</li><li>Hoddeson, L., with L. Brown, M. Riordan, and M. Dresden (eds.), 1997, <em>The Rise of the Standard Model: A History of Particle Physics from 1964 to 1979</em>, Cambridge: Cambridge University Press.</li><li>Horuzhy, S. S., 1990, <em>Introduction to Algebraic Quantum Field Theory</em>, 1st edition, Dordrecht et al.: Kluwer.</li><li>Huggett, N., 2000, “Philosophical foundations of quantum field theory”, <em>The British Journal for the Philosophy of Science</em>, 51: 617–637.</li><li>–––, 2003, “Philosophical foundations of quantum field theory”, in <em>Philosophy of Science Today</em>, P. Clark and K. Hawley, eds., Oxford: Clarendon Press, 617–37.</li><li>Johansson, L. G. and K. Matsubara, 2011, “String theory and general methodology: A mutual evaluation”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 199–210.</li><li>Kaku, M., 1999, <em>Introduction to Superstrings and M-Theory</em>, New York: Springer. </li><li>Kantorovich, A., 2003, “The priority of internal symmetries in particle physics”, <em>Studies in History and Philosophy of Modern Physics</em>, 34: 651–675.</li><li>Kastler, D. 
(ed.), 1990, <em>The Algebraic Theory of Superselection Sectors: Introduction and Recent Results</em>, Singapore et al.: World Scientific.</li><li>Kiefer, C., 2007, <em>Quantum Gravity</em>, 2nd edition, Oxford: Oxford University Press.</li><li>Kronz, F. and T. Lupher, 2005, “Unitarily inequivalent representations in algebraic quantum theory”, <em>International Journal of Theoretical Physics</em>, 44: 1239–1258.</li><li>Kuhlmann, M., 2010a, <em>The Ultimate Constituents of the Material World – In Search of an Ontology for Fundamental Physics</em>, Frankfurt: ontos Verlag.</li><li>–––, 2010b, “Why conceptual rigour matters to philosophy: On the ontological significance of algebraic quantum field theory”, <em>Foundations of Physics</em>, 40: 1625–1637.</li><li>–––, 2011, “Review of <em>From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism</em> by T. Y. Cao”, <em>Notre Dame Philosophical Reviews</em>, <a href="http://ndpr.nd.edu/news/25552-from-current-algebra-to-quantum-chromodynamics-a-case-for-structural-realism/" target="other">available online</a>.</li><li>Kuhlmann, M. with H. Lyre and A. Wayne (eds.), 2002, <em>Ontological Aspects of Quantum Field Theory</em>, London: World Scientific Publishing.</li><li>Ladyman, J., 1998, “What is structural realism?” <em>Studies in History and Philosophy of Science</em>, 29: 409-424.</li><li>Landsman, N. 
P., 1996, “Local quantum physics,” <em>Studies in History and Philosophy of Modern Physics</em>, 27: 511-525.</li><li>Lupher, T., 2010, “Not particles, not quite fields: An ontology for quantum field theory”, <em>Humana Mente</em>, 13: 155–173.</li><li>Lyre, H., 2004, “Holism and structuralism in U(1) gauge theory,” <em>Studies in History and Philosophy of Modern Physics</em>, 35/4: 643-670.</li><li>–––, 2012, “Structural invariants, structural kinds, structural laws”, in <em>Probabilities, Laws, and Structures</em>, Dordrecht: Springer, 179–191.</li><li>Malament, D., 1996, “In defense of dogma: Why there cannot be a relativistic quantum mechanics of (localizable) particles,” in Clifton 1996, pp. 1-10.</li><li>Mandl, F. and G. Shaw, 2010, <em>Quantum Field Theory</em>, Chichester (UK): John Wiley & Sons, second ed.</li><li>Martin, C. A., 2002, “Gauge principles, gauge arguments and the logic of nature,” <em>Philosophy of Science</em>, 69/3: 221-234.</li><li>Morganti, M., 2009, “Tropes and physics”, <em>Grazer Philosophische Studien</em>, 78: 185–205.</li><li>Newton, T. D. and E. P. Wigner, 1949, “Localized states for elementary particles,” <em>Reviews of Modern Physics</em>, 21/3: 400-406.</li><li>Peskin, M. E. and D. V. Schroeder, 1995, <em>Introduction to Quantum Field Theory</em>, Cambridge (MA): Perseus Books.</li><li>Polchinski, J., 2000, <em>String Theory</em>, 2 volumes, Cambridge: Cambridge University Press. </li><li>Redhead, M. L. G., 1995a, “More ado about nothing,” <em>Foundations of Physics</em>, 25: 123-137.</li><li>–––, 1995b, “The vacuum in relativistic quantum field theory,” in Hull <em>et al</em>. 1994 (vol. 2), pp. 88-89.</li><li>–––, 1999, “Quantum field theory and the philosopher,” in Cao 1999, pp. 34-40.</li><li>–––, 2002, “The interpretation of gauge symmetry,” in Kuhlmann <em>et al</em>. 2002, pp. 281-301.</li><li>Reeh, H. and S. 
Schlieder, 1961, “Bemerkungen zur Unitäräquivalenz von Lorentzinvarianten Feldern,” <em>Nuovo Cimento</em>, 22: 1051-1068.</li><li>Rickles, D., 2008, “Quantum gravity: A primer for philosophers”, in <em>The Ashgate Companion to Contemporary Philosophy of Physics</em>, Dean Rickles (ed.), Aldershot: Ashgate, 262–382.</li><li>Roberts, B. W., 2011, “Group structural realism”, <em>The British Journal for the Philosophy of Science</em>, 62: 47–69.</li><li>Roberts, J. E., 1990, “Lectures on algebraic quantum field theory,” in Kastler 1990, pp. 1-112.</li><li>Ruetsche, L., 2002, “Interpreting quantum field theory”, <em>Philosophy of Science</em>, 69: 348–378.</li><li>–––, 2003, “A matter of degree: Putting unitary equivalence to work,” <em>Philosophy of Science</em>, 70/5: 1329-1342.</li><li>–––, 2006, “Johnny's so long at the ferromagnet”, <em>Philosophy of Science</em>, 73: 473–486.</li><li>–––, 2011, “Why be normal?”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 107–115.</li><li>Ryder, L. H., 1996, <em>Quantum Field Theory</em>, 2nd edition, Cambridge: Cambridge University Press.</li><li>Saunders, S., 1995, “A dissolution of the problem of locality,” in Hull, M. F. D., Forbes, M., and Burian, R. M., eds., 1995, <em>Proceedings of the Biennial Meeting of the Philosophy of Science Association: PSA 1994</em>, East Lansing, MI: Philosophy of Science Association, vol. 2, pp. 88-98.</li><li>Saunders, S. and H. R. Brown (eds.), 1991, <em>The Philosophy of Vacuum</em>, Oxford: Clarendon Press.</li><li>Schweber, S. S., 1994, <em>QED and the Men Who Made It</em>, Princeton: Princeton University Press. </li><li>Segal, I. E., 1947, “Postulates for general quantum mechanics,” <em>Annals of Mathematics</em>, 48/4: 930-948.</li><li>Seibt, J., 2002, “The matrix of ontological thinking: Heuristic preliminaries for an ontology of QFT,” in Kuhlmann <em>et al</em>. 2002, pp. 53-97.</li><li>Streater, R. F. and A. S. 
Wightman, 1964, <em>PCT, Spin and Statistics, and all that</em>, New York: Benjamin. </li><li>Teller, P., 1995, <em>An Interpretive Introduction to Quantum Field Theory</em>, Princeton: Princeton University Press.</li><li>Unruh, W. G., 1976, “Notes on black hole evaporation,” <em>Physical Review D</em>, 14: 870-92.</li><li>Unruh, W. G. and R. M. Wald, 1984, “What happens when an accelerating observer detects a Rindler particle?” <em>Physical Review D</em>, 29: 1047-1056.</li><li>Wallace, D., 2006, “In defence of naiveté: The conceptual status of Lagrangian quantum field theory”, <em>Synthese</em>, 151: 33–80.</li><li>–––, 2011, “Taking particle physics seriously: A critique of the algebraic approach to quantum field theory”, <em>Studies in History and Philosophy of Modern Physics</em>, 42: 116–125.</li><li>Wayne, Andrew, 2002, “A naive view of the quantum field”, in Kuhlmann et al. 2002, 127–133.</li><li>–––, 2008, “A trope-bundle ontology for field theory”, in <em>The Ontology of Spacetime II</em>, Dennis Dieks (ed.), Amsterdam: Elsevier, 1–15.</li><li>Weinberg, S., 1995, <em>The Quantum Theory of Fields – Foundations</em> (Volume 1), Cambridge: Cambridge University Press.</li><li>–––, 1996, <em>The Quantum Theory of Fields – Modern Applications</em> (Volume 2), Cambridge: Cambridge University Press.</li><li>Weingard, R., 2001, “A philosopher looks at string theory,” in Callender & Huggett 2001, pp. 138-151.</li><li>Weyl, H., 1952, <em>Symmetry</em>, Princeton: Princeton University Press. </li><li>Wightman, A. S., 1956, “Quantum field theory in terms of vacuum expectation values”, <em>Physical Review</em>, 101: 860–66.</li><li>Wigner, E. 
P., 1939, “On unitary representations of the inhomogeneous Lorentz group,” <em>Annals of Mathematics</em>, 40: 149-204.</li></ul><h2><a href="http://www.blogger.com/null" name="Aca">Academic Tools</a></h2><blockquote><table><tbody><tr><td valign="top"><img alt="sep man icon" src="http://plato.stanford.edu/symbols/sepman-icon.jpg" /></td><td><a href="http://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=quantum-field-theory" target="other">How to cite this entry</a>.</td></tr><tr><td valign="top"><img alt="sep man icon" src="http://plato.stanford.edu/symbols/sepman-icon.jpg" /></td><td><a href="https://leibniz.stanford.edu/friends/preview/quantum-field-theory/" target="other">Preview the PDF version of this entry</a> at the <a href="https://leibniz.stanford.edu/friends/" target="other">Friends of the SEP Society</a>.</td></tr><tr><td valign="top"><img alt="inpho icon" src="http://plato.stanford.edu/symbols/inpho.png" /></td><td><a href="https://inpho.cogs.indiana.edu/entity?sep=quantum-field-theory&redirect=True" target="other">Look up this entry topic</a> at the <a href="https://inpho.cogs.indiana.edu/" target="other">Indiana Philosophy Ontology Project</a> (InPhO).</td></tr><tr><td valign="top"><img alt="phil papers icon" src="http://plato.stanford.edu/symbols/pp.gif" /></td><td><a href="http://philpapers.org/sep/quantum-field-theory/" target="other">Enhanced bibliography for this entry</a> at <a href="http://philpapers.org/" target="other">PhilPapers</a>, with links to its database.</td></tr></tbody></table></blockquote><h2><a href="http://www.blogger.com/null" name="Oth">Other Internet Resources</a></h2><ul><li>Dawid, R., 2003, <a href="http://philsci-archive.pitt.edu/archive/00001240/" target="other">Realism in the Age of String Theory</a></li><li><a href="http://superstringtheory.com/basics/" target="other">String Theory Basics</a>, the official string theory web site</li><li><a 
href="http://www.rotman.uwo.ca/resources/video-audio/philosophy-of-quantum-field-theory-conference/" target="other">Philosophy of Quantum Field Theory Conference</a>, video-recorded talks and discussions of the 2009 conference on the philosophy of quantum field theory at the University of Western Ontario.</li></ul>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-8872764467697864542013-06-03T00:51:00.001-07:002013-06-03T00:51:07.796-07:00Andrew Hodges: Can quantum computing solve classically unsolvable problems?<div class="separator" style="clear: both; text-align: center;"><a href="http://arxiv.org/ftp/quant-ph/papers/0512/0512248.pdf"><img border="0" height="258" src="http://1.bp.blogspot.com/-gCqDaCbjDlg/UaxKoND3rGI/AAAAAAAACKs/ek_WFB056rE/s400/ad.png" width="400" /></a></div><br />Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-66908904920375363132013-05-13T00:37:00.000-07:002013-05-13T00:37:06.303-07:00Graphene joins the race to redefine the ampere.<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-f4oUZTrao2o/UZCXKzZ9IWI/AAAAAAAACFM/5ZK4fllSOX4/s1600/graphene.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-f4oUZTrao2o/UZCXKzZ9IWI/AAAAAAAACFM/5ZK4fllSOX4/s200/graphene.jpg" width="200" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2013-05-graphene-redefine-ampere.html"><strong><span style="color: yellow;">Phys.org</span></strong></a></div><div style="text-align: center;"><strong>----------------------</strong></div><strong>A new joint innovation by the National Physical Laboratory (NPL) and the University of Cambridge could pave the way for redefining the ampere in terms of fundamental constants of physics. 
The world's first graphene single-electron pump (SEP), described in a paper today in <i>Nature Nanotechnology</i>, provides the speed of electron flow needed to create a new standard for electrical current based on electron charge. </strong><br /><strong>The </strong><a class="textTag" href="http://phys.org/tags/international+system+of+units/" rel="tag"><strong>international system of units</strong></a><strong> (SI) comprises seven base units (the metre, kilogram, second, </strong><a class="textTag" href="http://phys.org/tags/kelvin/" rel="tag"><strong>kelvin</strong></a><strong>, ampere, mole and candela). Ideally these should be stable over time and universally reproducible. This requires definitions based on fundamental constants of nature which are the same wherever you measure them.</strong><br /><strong>The present definition of the ampere, however, is vulnerable to drift and instability. Its stability falls short of the accuracy needed for present, and certainly future, electrical measurement. The highest global measurement authority, the Conférence Générale des Poids et Mesures, has proposed that the ampere be re-defined in terms of the electron charge.</strong><br /><strong>The frontrunner in this race to redefine the ampere is the single-electron pump (SEP). SEPs create a flow of individual electrons by shuttling them into a quantum dot – a particle holding pen – and emitting them one at a time and at a well-defined rate. The paper published today describes how a graphene SEP has been successfully produced and characterised for the first time, and confirms its properties are extremely well suited to this application.</strong><br /><strong>A good SEP pumps precisely one electron at a time to ensure accuracy, and pumps them quickly to generate a sufficiently large current. Up to now the development of a practical electron pump has been a two-horse race. 
Tuneable barrier pumps use traditional semiconductors and have the advantage of speed, while the hybrid turnstile utilises </strong><a class="textTag" href="http://phys.org/tags/superconductivity/" rel="tag"><strong>superconductivity</strong></a><strong> and has the advantage that many can be put in parallel. Traditional metallic pumps, thought to be not worth pursuing, have been given a new lease of life by fabricating them out of the world's most famous super-material, graphene.</strong><br /><strong>Previous metallic SEPs made of aluminium are very accurate, but pump electrons too slowly for making a practical current standard. Graphene's unique semimetallic two-dimensional structure has just the right properties to let electrons on and off the quantum dot very quickly, creating a fast enough </strong><a class="textTag" href="http://phys.org/tags/electron+flow/" rel="tag"><strong>electron flow</strong></a><strong> – at near-gigahertz frequency – to create a current standard. The Achilles heel of metallic pumps, slow pumping speed, has thus been overcome by exploiting the unique properties of graphene. </strong><br /><strong>The scientists at NPL and Cambridge still need to optimise the material and make more accurate measurements, but today's paper marks a major step forward on the road towards using graphene to redefine the ampere.</strong><br /><strong>The realisation of the ampere is currently derived indirectly from resistance or voltage, which can be realised separately using the quantum Hall effect and the Josephson effect. A fundamental definition of the ampere would allow a direct realisation that National Measurement Institutes around the world could adopt. This would shorten the chain for calibrating current-measuring equipment, saving time and money for industries billing for electricity and using ionising radiation for cancer treatment.</strong><br /><strong>Current, voltage and resistance are directly correlated. 
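As a rough numerical sketch of how all three electrical quantities tie back to the electron charge e and Planck's constant h (the constant values are the now-exact revised-SI figures; the 1 GHz pump frequency is only the order of magnitude quoted above for graphene SEPs, not a measured result):

```python
# Hedged sketch, not from the article: the quantum-metrology relations
# linking current, resistance and voltage to e and h.
e = 1.602176634e-19   # elementary charge, C (exact in the revised SI)
h = 6.62607015e-34    # Planck constant, J*s (exact in the revised SI)

f_pump = 1e9          # assumed single-electron pump frequency, Hz (~1 GHz)
I = e * f_pump        # pumped current: one electron per cycle -> I = e * f

R_K = h / e**2        # von Klitzing constant (quantum Hall resistance), ohm
K_J = 2 * e / h       # Josephson constant, Hz per volt

print(f"SEP current at 1 GHz: {I:.3e} A")
print(f"von Klitzing constant: {R_K:.3f} ohm")
print(f"Josephson constant: {K_J:.6e} Hz/V")
```

This also shows why gigahertz operation matters: even at 1 GHz a single pump delivers only about 0.16 nA, so the slower metallic pumps mentioned above produce currents far too small for a practical standard.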
Because we measure resistance and voltage based on </strong><a class="textTag" href="http://phys.org/tags/fundamental+constants/" rel="tag"><strong>fundamental constants</strong></a><strong> – the electron charge and Planck's constant – being able to measure current would also allow us to confirm the universality of these constants on which many precise measurements rely.</strong><br /><strong>Graphene is not the last word in creating an ampere standard. NPL and others are investigating various methods of defining current based on </strong><a class="textTag" href="http://phys.org/tags/electron+charge/" rel="tag"><strong>electron charge</strong></a><strong>. But today's paper suggests graphene SEPs could hold the answer. Also, any redefinition will have to wait until the kilogram has been redefined. This definition, due to be decided soon, will fix the value of the electron charge, on which any electron-based definition of the ampere will depend.</strong><br /><strong>Today's paper will also have important implications beyond measurement. SEPs operating at high frequency and with high accuracy can be used to make </strong><a class="textTag" href="http://phys.org/tags/electrons/" rel="tag"><strong>electrons</strong></a><strong> collide and form entangled electron pairs. Entanglement is believed to be a fundamental resource for quantum computing, and for answering fundamental questions in quantum mechanics.</strong><br /><strong>Malcolm Connolly, a research associate based in the Semiconductor Physics group at Cambridge, says: "This paper describes how we have successfully produced the first graphene single-electron pump. We have work to do before we can use this research to redefine the ampere, but this is a major step towards that goal. We have shown that graphene outperforms other materials used to make this style of SEP. It is robust, easier to produce, and operates at higher frequency. 
Graphene is constantly revealing exciting new applications and as our understanding of the material advances rapidly, we seem able to do more and more with it."</strong>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-61197000254927771212012-12-21T02:05:00.002-08:002012-12-21T02:05:15.149-08:00From Umezawa to Vitiello: Quantum Field Theory of Brain States.<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-5KyVefA58-Q/UNQze6q_IxI/AAAAAAAABpA/LBAvVmzy7Kk/s1600/untitled.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-5KyVefA58-Q/UNQze6q_IxI/AAAAAAAABpA/LBAvVmzy7Kk/s1600/untitled.png" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://plato.stanford.edu/entries/qt-consciousness/#4.3"><strong><span style="color: yellow;">Stanford Edu</span></strong></a></div><div style="text-align: center;">-----------------------------</div><strong>In the 1960s, Ricciardi and Umezawa (1967) suggested using the formalism of quantum field theory to describe brain states, with particular emphasis on memory. The basic idea is to conceive of memory states in terms of states of many-particle systems, as inequivalent representations of vacuum states of quantum fields.<sup>[<a href="http://plato.stanford.edu/entries/qt-consciousness/notes.html#11" name="note-11">11</a>]</sup> This proposal has gone through several refinements (e.g., Stuart <em>et al.</em> 1978, 1979; Jibu and Yasue 1995). Major recent progress has been achieved by including effects of dissipation, chaos, and quantum noise (Vitiello 1995; Pessa and Vitiello 2003). 
For readable nontechnical accounts of the approach in its present form, embedded in quantum field theory as of today, see Vitiello (2001, 2002).</strong><br /><strong>Quantum field theory (see the entry on </strong><a href="http://plato.stanford.edu/entries/quantum-field-theory/"><strong>quantum field theory</strong></a><strong>) yields infinitely many representations of the commutation relations, which are inequivalent to the Schrödinger representation of standard quantum mechanics. Such inequivalent representations can be generated by spontaneous symmetry breaking (see the entry on </strong><a href="http://plato.stanford.edu/entries/symmetry-breaking/"><strong>symmetry and symmetry breaking</strong></a><strong>), occurring when the ground state (or the vacuum state) of a system is not invariant under the full group of transformations providing the conservation laws for the system. If symmetry breaks down, collective modes are generated (so-called Nambu-Goldstone boson modes), which propagate over the system and introduce long-range correlations in it.</strong><br /><strong>These correlations are responsible for the emergence of ordered patterns. Unlike in thermal systems, a large number of bosons can be condensed in an ordered state in a highly stable fashion. Roughly speaking, this provides a quantum field theoretical derivation of ordered states in many-body systems described in terms of statistical physics. In the proposal by Umezawa these dynamically ordered states represent coherent activity in neuronal assemblies.</strong><br /><strong>The <em>activation</em> of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. 
According to Umezawa, coherent neuronal assemblies correlated to such memory states are regarded as vacuum states; their activation leads to excited states with a finite lifetime and enables a conscious recollection of the content encoded in the vacuum (ground) state. The stability of such states and the role of external stimuli have been investigated in detail by Stuart <em>et al.</em> (1978, 1979).</strong><br /><strong>A decisive further step in developing the approach has been achieved by taking <em>dissipation</em> into account. Dissipation is possible when the interaction of a system with its environment is considered. Vitiello (1995) describes how the system-environment interaction causes a doubling of the collective modes of the system in its environment. This yields infinitely many differently coded vacuum states, offering the possibility of many memory contents without overprinting. Moreover, dissipation leads to finite lifetimes of the vacuum states, thus representing temporally limited rather than unlimited memory (Alfinito and Vitiello 2000; Alfinito <em>et al.</em> 2001). Finally, dissipation generates a genuine arrow of time for the system, and its interaction with the environment induces entanglement. In a recent contribution, Pessa and Vitiello (2003) have addressed additional effects of chaos and quantum noise.</strong><br /><strong>The majority of presentations of this approach do <em>not</em> consistently distinguish between mental states and material states. This suggests reducibility of mental activity to brain activity, within </strong><a href="http://plato.stanford.edu/entries/qt-consciousness/#A"><strong>scenario (A)</strong></a><strong> of Sec. 2, as an underlying assumption. In this sense, Umezawa's proposal addresses the brain as a many-particle system as a whole, where the “particles” are more or less neurons. 
In the language of </strong><a href="http://plato.stanford.edu/entries/qt-consciousness/#3.1"><strong>Section 3.1</strong></a><strong>, this refers to the level of neuronal assemblies, which has the benefit that this is the level which <em>directly</em> correlates with mental activity. Another merit of the quantum field theory approach is that it avoids the restrictions of standard quantum mechanics in a formally sound way.</strong><br /><strong>Conceptually, however, it contains ambiguities demanding clarification, e.g., concerning the continuous confusion of mental and material states (and their properties). If mental states were the primary objects of reference, the quantum field theoretical treatment would be metaphorical in the sense of </strong><a href="http://plato.stanford.edu/entries/qt-consciousness/#4.1"><strong>Section 4.1</strong></a><strong>. That this is not the case has recently been clarified by Freeman and Vitiello (2008): the model “describes the brain, not mental states.”</strong><br /><strong>For a description of brain states, it remains to be specified how this is backed up by the results of contemporary neurobiology. In recent publications (see, e.g., Freeman and Vitiello 2006, 2008), potential neurobiologically relevant observables such as electric and magnetic field amplitudes or neurotransmitter concentration have been discussed. These observables are purely classical, so that neurons, glia cells, “and other physiological units are <em>not</em> quantum objects in the many-body model of brain” (Freeman and Vitiello 2008).</strong><br /><strong> This leads to the conclusion that the application of quantum field theory in the model serves the purpose of explaining how and why classical behavior emerges at the level of brain activity considered. The relevant brain states themselves are decidedly viewed as classical states. 
Similar to a classical thermodynamical description arising from quantum statistical mechanics, the idea is to identify different regimes of stable behavior (phases, attractors) and transitions between them. This way, quantum field theory provides formal elements from which a standard classical description of brain activity can be inferred, and this is its main role in the model.</strong> Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-60346411436390442442012-10-16T05:45:00.004-07:002012-10-16T05:45:59.540-07:00Magnetic nanoparticles used to control thousands of cells simultaneously.<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-QSYhljv2lSI/UH1WDcDJ4ZI/AAAAAAAABWA/huIGugVBHzk/s1600/uclaengineer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-QSYhljv2lSI/UH1WDcDJ4ZI/AAAAAAAABWA/huIGugVBHzk/s1600/uclaengineer.jpg" /></a></div><div style="text-align: center;"><strong>Source: <a href="http://phys.org/news/2012-10-magnetic-nanoparticles-thousands-cells-simultaneously.html"><span style="color: yellow;">Phys.org</span></a></strong></div><div style="text-align: center;"><strong>--------------------------</strong></div><strong>Using clusters of tiny magnetic particles about 1,000 times smaller than the width of a human hair, researchers from the UCLA Henry Samueli School of Engineering and Applied Science have shown that they can manipulate how thousands of cells divide, morph and develop finger-like extensions.</strong><br /><strong>This new tool could be used in developmental biology to understand how tissues develop, or in cancer research to uncover how cancer cells move and invade surrounding tissues, the researchers said. The UCLA team's findings were published online Oct. 14 in the journal Nature Methods. 
A cell can be considered a complex biological machine that receives an assortment of "inputs" and produces specific "outputs," such as growth, movement, division or the production of molecules. Beyond the type of input, cells are extremely sensitive to the location of an input, partly because cells perform "spatial multiplexing," reusing the same basic biochemical signals for different functions at different locations within the cell. Understanding this localization of signals is particularly challenging because scientists lack tools with sufficient resolution and control to function inside the miniature environment of a cell. And any usable tool would have to be able to perturb many cells with similar characteristics simultaneously to achieve an accurate distribution of responses, since the responses of individual cells can vary. To address this problem, an interdisciplinary UCLA team that included associate professor of bioengineering Dino Di Carlo, postdoctoral scholar Peter Tseng and professor of electrical engineering Jack Judy developed a platform to precisely manipulate magnetic nanoparticles inside uniformly shaped cells. These nanoparticles produced a local mechanical signal and yielded distinct responses from the cells.<br />By determining the responses of thousands of single cells with the same shape to local nanoparticle-induced stimuli, the researchers were able to perform an automated averaging of the cells' response. To achieve this platform, the team first had to overcome the challenge of moving such small particles (each measuring 100 nanometers) through the viscous interior of a cell once the cells engulfed them. Using ferromagnetic technologies, which enable magnetic materials to switch "on" and "off," the team developed an approach to embed a grid of small ferromagnetic blocks within a microfabricated glass slide and to precisely place individual cells in proximity to these blocks with a pattern of proteins that adhere to cells. 
When an external magnetic field is applied to this system, the ferromagnetic blocks are switched "on" and can therefore pull the nanoparticles within the cells in specific directions and uniformly align them. The researchers could then shape and control the forces in thousands of cells at the same time. Using this platform, the team showed that the cells responded to this local force in several ways, including in the way they divided. When cells go through the process of replication to create two cells, the axis of division depends on the shape of the cell and the anchoring points by which the cell holds on to the surface. The researchers found that the force induced by the nanoparticles could change the axis of cell division such that the cells instead divided along the direction of force. The researchers said this sensitivity to force may shed light on the intricate forming and stretching of tissues during embryonic development. Besides directing the axis of division, they found that nanoparticle-induced local force also led to the activation of a biological program in which cells generate filopodia, which are finger-like, actin-rich extensions that cells often use to find sites to adhere to and which aid in movement.<br />Di Carlo, the principal investigator on the research, envisions that the technique can apply beyond the control of mechanical stimuli in cells. "Nanoparticles can be coated with a variety of molecules that are important in cell signaling," he said. "We should now have a tool to quantitatively investigate how the precise location of molecules in a cell produces a specific behavior. This is a key missing piece in our tool-set for understanding cell programs and for engineering cells to perform useful functions." 
More information: www.nature.com/nmeth/journal/vaop/ncurrent/abs/nmeth.2210.html www.biomicrofluidics.com/ Provided by University of California, Los Angeles.</strong> Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-59651291024339394472012-10-15T08:05:00.004-07:002012-10-15T08:05:36.693-07:00Graphene researchers make a layer cake with atomic precision. <div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-UcKszMyPWdM/UHwlmtyQQcI/AAAAAAAABTo/2XVgs8_g4Ks/s1600/graphene.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-UcKszMyPWdM/UHwlmtyQQcI/AAAAAAAABTo/2XVgs8_g4Ks/s1600/graphene.jpg" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2012-10-graphene-layer-cake-atomic-precision.html"><strong><span style="color: yellow;">Phys.org</span></strong></a></div><div style="text-align: center;"><strong>--------------------------</strong></div><strong>Graphene and associated one-atom-thick crystals offer the possibility of a vast range of new materials and devices by stacking individual atomic layers on top of each other, new research from the University of Manchester shows.<br />In a report published in Nature Physics, a group led by Dr Leonid Ponomarenko and Nobel prize-winner Professor Andre Geim has assembled individual atomic layers on top of each other in a desired sequence. The team used individual one-atom-thick crystals to construct a multilayer cake that works as a nanoscale electric transformer. Graphene, isolated for the first time at The University of Manchester in 2004, has the potential to revolutionise diverse applications from smartphones and ultrafast broadband to drug delivery and computer chips. 
It has the potential to replace existing materials, such as silicon, but the Manchester researchers believe it could truly find its place with new devices and materials yet to be invented. In the nanoscale transformer, electrons moving in one metallic layer pull electrons in the second metallic layer by using their local electric fields. To operate on this principle, the metallic layers need to be insulated electrically from each other but separated by no more than a few interatomic distances, a giant leap beyond existing nanotechnologies. These new structures could pave the way for a range of complex electronic and photonic devices that no existing material could support, including novel architectures for transistors and detectors. The scientists used graphene as a one-atom-thick conductive plane while just four atomic layers of boron nitride served as an electrical insulator.<br />The researchers started by extracting individual atomic planes from bulk graphite and boron nitride by using the same technique that led to the Nobel Prize for graphene, a single atomic layer of carbon. Then, they used advanced nanotechnology to mechanically assemble the crystallites one by one, in a Lego style, into a crystal with the desired sequence of planes. The nano-transformer was assembled by Dr Roman Gorbachev, of The University of Manchester, who described the required skills. He said: "Every Russian and many in the West know The Tale of the Clockwork Steel Flea. "It could only be seen through the most powerful microscope but still danced and even had tiny horseshoes. Our atomic-scale Lego perhaps is the next step of craftsmanship". Professor Geim added: "The work proves that complex devices with various functionalities can be constructed plane by plane with atomic precision. "There is a whole library of atomically-thin materials. By combining them, it is possible to create principally new materials that don't exist in nature. 
This avenue promises to become even more exciting than graphene itself." More information: 'Strong Coulomb drag and broken symmetry in double-layer graphene', by L Ponomarenko, R Gorbachev and A Geim, Nature Physics, 2012.Journal reference: Nature Physics Provided by University of Manchester.</strong>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-84886977327566217812012-10-15T07:59:00.006-07:002012-10-15T07:59:59.745-07:00Accelerators can search for signs of Planck-scale gravity.<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-KwgetiwGrwg/UHwjm25jnMI/AAAAAAAABTg/o50kyAE43j8/s1600/planckscalegravitytest.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-KwgetiwGrwg/UHwjm25jnMI/AAAAAAAABTg/o50kyAE43j8/s1600/planckscalegravitytest.jpg" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2012-10-planck-scale-gravity.html"><strong><span style="color: yellow;">Phys.org</span></strong></a></div><div style="text-align: center;"><strong>--------------------------</strong></div><strong>Although quantum theory can explain three of the four forces in nature, scientists currently rely on general relativity to explain the fourth force, gravity. However, no one is quite sure of how gravity works at very short distances, in particular the shortest distance of all: the Planck length, or 10^-35 m. So far, the smallest distance accessible in experiments is about 10^-19 m at the LHC. </strong><br /><strong>Now in a new paper published in Physical Review Letters, physicist Vahagn Gharibyan of Deutsches Elektronen-Synchrotron (DESY) in Hamburg, Germany, has proposed a test of quantum gravity that can reach a sensitivity of 10^-31 m down to the Planck length, depending on the energy of the particle accelerator. 
As Gharibyan explains, several models of quantum gravity predict that empty space near the Planck length may behave like a crystal in the sense that the space is refractive (light is bent due to "gravitons," the hypothetical particles that mediate gravity) and has birefringence/chirality (the light's bending degree also depends on the light's polarization). In quantum gravity, both refractivity and birefringence are energy-dependent: the higher the photon energy, the stronger the photon-graviton interaction and the more bending. This correlation is the opposite of what happens when photons interact with electromagnetic fields or matter, where these effects are suppressed by photon energy. The predicted correlation also differs from what happens according to Newtonian gravity and Einstein's general relativity, where any bending of light is independent of the light's energy. "If one describes gravity at the quantum level, the bending of light by gravitation becomes energy-dependent – unlike in Newtonian gravity or Einstein's general relativity," Gharibyan told Phys.org. "The higher the energy of the photons, the larger the bending, or the stronger the photon-graviton interaction should be."</strong><br /><strong>Gharibyan suggests that this bending of light according to quantum gravity models may be studied using high-energy accelerator beams that probe the vacuum symmetry of empty space at small scales. Accelerators could use high-energy Compton scattering, in which a photon that scatters off another moving particle acquires energy, causing a change in its momentum. The proposed experiments could detect how the effects of quantum gravity change the photon's energy-momentum relation compared with what would be expected on a normal scale. For these experiments, the beam energy is vital in determining the sensitivity to small-scale effects. 
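The energy dependence described above can be made concrete with a small numerical sketch. The linear parametrization below, v(E) ≈ c(1 − ξ·E/E_Planck), is a generic form often used in quantum-gravity phenomenology; it is an illustrative assumption for this post, not a formula taken from Gharibyan's paper, and the coefficient `xi` and function names are hypothetical.

```python
# Illustrative sketch (assumption, not Gharibyan's own formula): a common
# linear Planck-scale dispersion parametrization, v(E) ~ c * (1 - xi*E/E_Planck).
E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
HBAR_C_GEV_M = 1.97327e-16  # hbar*c in GeV*m, for energy <-> length conversion

def fractional_speed_shift(photon_energy_gev, xi=1.0):
    """Fractional deviation of the photon speed from c, first order in E/E_Planck."""
    return xi * photon_energy_gev / E_PLANCK_GEV

def probed_length_m(energy_gev):
    """Length scale directly resolved by a particle of the given energy (natural units)."""
    return HBAR_C_GEV_M / energy_gev

# Photons at a 6 GeV machine (PETRA-III class) vs. a 250 GeV machine (ILC class):
for e in (6.0, 250.0):
    print(f"{e:6.1f} GeV: dv/c ~ {fractional_speed_shift(e):.1e}, "
          f"direct length scale ~ {probed_length_m(e):.1e} m")
```

Note that even at 250 GeV the directly resolved length scale (ħc/E ≈ 8x10^-19 m) is enormously coarser than the Planck length, which is why the proposal relies on averaged, cumulative effects of the "space grains" rather than on resolving them individually, as the crystal analogy below explains.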
Gharibyan estimates that a 6 GeV energy lepton accelerator, such as PETRA-III at DESY, could test space birefringence down to 10^-31 m. Future accelerators that could achieve energies of up to 250 GeV, such as the proposed International Linear Collider (ILC), could test birefringence all the way down to the Planck length. For probing refractivity, Gharibyan estimates that a 6 GeV machine would have a sensitivity down to 10^-27 m, while a 250 GeV machine could reach about 10^-31 m. As Gharibyan explains, probing Planck-scale gravity in this way is somewhat similar to investigating nanoscale crystal structures.<br />"Conventional crystals have cell sizes around tens of nanometers and are transparent to, or do not interact with, photons with much larger (m or mm) wavelengths," Gharibyan said. "In order to investigate crystal cells/structures, one needs photons with compatible nm wavelength: X-rays. However, visible light with wavelengths 1000 times more than the crystal cell can still feel the averaged influence of the cells: the light could be reflected singly or doubly. Comparing this to the Planck-length crystal, we don't have photons with a Planck wavelength or that huge energy. Instead, we are able to feel the averaged effects of Planck crystal cells – or space grains – by using much [relatively] lower-energy photons." In fact, as Gharibyan has found, there are already experimental hints of gravitons. "This work presents evidence for quantum gravity interactions by applying the developed method to gamma rays faster than light, which I found earlier in data from the largest US and German electron accelerators," he said. "The absence of any starlight deflection in the cosmic vacuum hints that Earth's gravitons should be considered responsible for the observed bending of the accelerators' gamma rays." 
Gharibyan found that data from the now-closed 26.5 GeV Hadron-Electron Ring Accelerator (HERA) at DESY yielded a Planck cell size of 2.6x10^-28 m, and data from the mothballed 45.6 GeV Stanford Linear Collider (SLC) at Stanford University in the US yielded a space grain size of 3.5x10^-30 m. While these results provide some hints of Planck-scale gravity, neither of these experiments was designed specifically as a tool to test gravity, so Gharibyan warns that uncontrolled parts of the setups could mimic the observed effects.<br />If Gharibyan's newly proposed experiments are performed, they would provide the first direct measurements of space near or even at the Planck scale, and by doing so, offer a closer glimpse of gravity in this enigmatic regime. More information: Vahagn Gharibyan. "Testing Planck-Scale Gravity with Accelerators." Physical Review Letters 109, 141103 (2012). DOI: 10.1103/PhysRevLett.109.141103 Vahagn Gharibyan. "Possible observation of photon speed energy dependence." Physics Letters B 611, 231-238 (2005). 
DOI: 10.1016/j.physletb.2005.02.053</strong>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-64264704150084301932012-10-15T01:49:00.004-07:002012-10-15T01:49:58.796-07:00Quantum oscillator responds to pressure.<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-wtuORiR3a_E/UHvNYRz8THI/AAAAAAAABS4/jtXVgPBr0Oo/s1600/quantumoscil.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="303" src="http://3.bp.blogspot.com/-wtuORiR3a_E/UHvNYRz8THI/AAAAAAAABS4/jtXVgPBr0Oo/s320/quantumoscil.jpg" width="320" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2012-10-quantum-oscillator-pressure.html"><strong><span style="color: yellow;">Phys.org</span></strong></a></div><div style="text-align: center;"><strong>----------------------</strong></div><strong>In the far future, superconducting quantum bits might serve as components of high-performance computers. Already today, however, they help researchers better understand the structure of solids, as a team from the Karlsruhe Institute of Technology reports in the journal Science. By means of Josephson junctions, they measured the oscillations of individual atoms "tunneling" between two positions. This means that the atoms oscillated quantum mechanically. Deformation of the specimen even changed the frequency. </strong><br /><strong>"We are now able to directly control the frequencies of individual tunneling atoms in the solid," say Alexey Ustinov and Georg Weiß, Professors at the Physikalisches Institut of KIT and members of the Center for Functional Nanostructures CFN. Metaphorically speaking, the researchers so far have been confronted with a closed box. From inside, different clattering noises could be heard. 
Now, it is not only possible to measure the individual objects contained, but also to change their physical properties in a controlled manner. The specimen used for this purpose consists of a superconducting ring interrupted by a nanometer-thick non-conductor, a so-called Josephson junction. The qubit formed in this way can be switched very precisely between two quantum states. "Interestingly, such a Josephson qubit couples to the other atomic quantum systems in the non-conductor," explains Ustinov. "And we measure their tunneling frequencies via this coupling." At temperatures slightly above absolute zero, most sources of noise in the material are switched off. The only remaining noise is produced by atoms of the material when they jump between two equivalent positions. "These frequency spectra of atom jumps can be measured very precisely with the Josephson junction," says Ustinov. "Metaphorically speaking, we have a microscope for the quantum mechanics of individual atoms." In the experiment performed, 41 jumping atoms were counted and their frequency spectra were measured while the specimen was bent slightly with a piezo element. Georg Weiß explains: "The atomic distances are changed only slightly, while the frequencies of the tunneling atoms change strongly." So far, only the sum of all tunneling atoms could be measured. The technology to separately switch atomic tunneling systems only emerged a few years ago. The new method developed at KIT to control atomic quantum systems might provide valuable insights into how qubits can be made fit for application. However, the method is also suited for studying the materials of conventional electronic components, such as transistors, and establishing the basis for further miniaturization. 
More information: DOI: 10.1126/science.1226487 Provided by Karlsruhe Institute of Technology.</strong>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-37288884319399479252012-10-13T05:36:00.002-07:002012-10-13T05:36:14.105-07:00Physicists propose method to determine if the universe is a simulation. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-4Nzwni-AwL0/UHlesA30mvI/AAAAAAAABSI/GzvrXyULFoM/s1600/600px-hubbleultradeepfieldwithscalecomparison.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-4Nzwni-AwL0/UHlesA30mvI/AAAAAAAABSI/GzvrXyULFoM/s1600/600px-hubbleultradeepfieldwithscalecomparison.jpg" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://phys.org/news/2012-10-real-physicists-method-universe-simulation.html"><strong><span style="color: yellow;">Phys.org</span></strong></a></div><div style="text-align: center;"><strong>-------------------------</strong></div><div style="text-align: left;"><strong>(Phys.org)—A common theme of science fiction movies and books is the idea that we're all living in a simulated universe—that nothing is actually real. This is no trivial pursuit: some of the greatest minds in history, from Plato to Descartes, have pondered the possibility. Yet none were able to offer proof that such an idea is even possible. Now, a team of physicists working at the University of Bonn have come up with a possible means of providing us with the evidence we are looking for; namely, a measurable way to show that our universe is indeed simulated. They have written a paper describing their idea and have uploaded it to the preprint server arXiv.<br />The team's idea is based on work being done by other scientists who are actively engaged in trying to create simulations of our universe, at least as we understand it. 
Thus far, such work has shown that to create a simulation of reality, there has to be a three-dimensional framework to represent real-world objects and processes. With computerized simulations, it's necessary to create a lattice to account for the distances between virtual objects and to simulate the progression of time. The German team suggests such a lattice could be created based on quantum chromodynamics—theories that describe the nuclear forces that bind subatomic particles. To find evidence that we exist in a simulated world would mean discovering the existence of an underlying lattice construct by finding its end points or edges. In a simulated universe a lattice would, by its nature, impose a limit on the amount of energy that particles could carry. This means that if our universe is indeed simulated, there ought to be a means of finding that limit. In the observable universe there is a way to measure the energy of quantum particles and to calculate their cutoff point, because their energy is dispersed through interactions with the cosmic microwave background, and this could be done using current technology. Calculating the cutoff, the researchers suggest, could give credence to the idea that the universe is actually a simulation. 
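As a rough illustration of what such an energy cutoff implies (a back-of-envelope natural-units conversion of my own, not a calculation from the paper): the bound on the inverse lattice spacing derived in the preprint, b^-1 ≳ 10^11 GeV, can be turned into a length via b = ħc/E.

```python
# Back-of-envelope conversion (not from the paper itself): turn the bound on
# the inverse lattice spacing, b^-1 >~ 1e11 GeV, into metres via b = hbar*c / E.
HBAR_C_GEV_M = 1.97327e-16  # hbar*c expressed in GeV*metres

def length_from_energy_gev(energy_gev):
    """Length scale corresponding to an energy scale in natural units."""
    return HBAR_C_GEV_M / energy_gev

b_max = length_from_energy_gev(1e11)   # upper bound on the lattice spacing
planck_length = 1.6e-35                # metres, for comparison
print(f"lattice spacing b <~ {b_max:.1e} m")
print(f"i.e. roughly {b_max / planck_length:.1e} Planck lengths")
```

So even the most stringent bound quoted in the abstract still allows a hypothetical lattice about eight orders of magnitude coarser than the Planck scale.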
Of course, any conclusions resulting from such work would be limited by the possibility that everything we think we understand about quantum chromodynamics, or simulations for that matter, could be flawed.</strong></div><div style="text-align: left;"><strong></strong> </div><div style="text-align: left;"><strong>More information: Constraints on the Universe as a Numerical Simulation, arXiv:1210.1847 [hep-ph] arxiv.org/abs/1210.1847 </strong></div><div style="text-align: left;"><strong></strong> </div><div style="text-align: left;"><strong>Abstract:</strong></div><div style="text-align: left;"><strong>Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. 
The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.</strong></div><div style="text-align: left;"><strong>Journal reference: arXiv</strong></div>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-22739676791064662632012-10-11T03:46:00.002-07:002012-10-11T03:46:16.584-07:00Nanoparticles: Making Gold Economical for Sensing.<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-IV1wZYxerDs/UHaiznShhrI/AAAAAAAABRg/xJgwlcXTXRY/s1600/121010150806.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-IV1wZYxerDs/UHaiznShhrI/AAAAAAAABRg/xJgwlcXTXRY/s320/121010150806.jpg" width="250" /></a></div><div style="text-align: center;"><strong>Source: </strong><a href="http://www.sciencedaily.com/releases/2012/10/121010150806.htm"><strong><span style="color: yellow;">ScienceDaily</span></strong></a></div><div style="text-align: center;"><strong>-------------------------------</strong></div><div style="text-align: left;"><strong><span class="date">ScienceDaily (Oct. 10, 2012)</span> — Gold nanocluster arrays developed at A*STAR are well suited for commercial applications of a high-performance sensing technique.</strong></div><strong>Cancer, food pathogens and biosecurity threats can all be detected using a sensing technique called surface enhanced Raman spectroscopy (SERS). To meet ever-increasing demands in sensitivity, however, signals from molecules of these agents require massive enhancement, and current SERS sensors require optimization. 
An A*STAR-led research team recently fabricated a remarkably regular array of closely packed gold nanoparticle clusters that will improve SERS sensors.</strong><br /><strong>So-called 'Raman scattering' occurs when molecules scatter light at wavelengths not present in the incident light. These molecules can be detected with SERS sensors by bringing them into contact with a nanostructured metal surface, illuminated by a laser at a particular wavelength. An ideal sensor surface should have: dense packing of metal nanostructures, commonly gold or silver, to intensify Raman scattering; a regular arrangement to produce repeatable signal levels; economical construction; and robustness to sustain sensing performance over time.</strong><br /><strong>Few of the many existing approaches succeed in all categories. However, Fung Ling Yap and Sivashankar Krishnamoorthy at the A*STAR Institute of Materials Research and Engineering, Singapore, and co-workers produced closely packed nanocluster arrays of gold that incorporate the most desirable aspects for fabrication and sensing. In addition to flat surfaces, they also succeeded in coating fiber-optic tips with similarly dense nanocluster arrays (see image), which is a particularly promising development for remote-sensing applications, such as hazardous waste monitoring.</strong><br /><strong>The researchers self-assembled their arrays by using surfaces coated with self-formed polymer nanoparticles, to which smaller gold nanoparticles spontaneously attached to form clusters. "It was surprising to reliably attain feature separations of less than 10 nanometers, at high yield, across macroscopic areas using simple processes such as coating and adsorption," notes Krishnamoorthy.</strong><br /><strong>By varying the size and density of the polymer features, Krishnamoorthy, Yap and co-workers tuned the cluster size and density to maximize SERS enhancements. 
Their technique is also efficient: less than 10 milligrams of the polymer and 100 milligrams of gold nanoparticles are needed to coat an entire 100 millimeter diameter wafer, or approximately 200 fiber tips. Both the polymer and the nanoparticles can be mass-produced at low cost. By virtue of being entirely 'self-assembled', the technique does not require specialized equipment or a custom-built clean room, so it is well suited to low-cost commercial implementation.</strong><br /><strong>"We have filed patent applications for the work in Singapore, the USA and China," says Krishnamoorthy. "The arrays are close to commercial exploitation as disposable sensor chips for use in portable SERS sensors, in collaboration with industry."</strong><br /><br />Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-85975460573784573542012-09-29T07:22:00.000-07:002013-02-12T00:08:18.666-08:00"The synchro energy project, beyond the holographic universe"; e-book, pp.188, 5 USD.<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-HRyEk-RWRzE/UGcDiQ7IpyI/AAAAAAAABJU/KXMkfAdVrys/s1600/978-88-488-0650-3%255B1%255D.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-HRyEk-RWRzE/UGcDiQ7IpyI/AAAAAAAABJU/KXMkfAdVrys/s320/978-88-488-0650-3%255B1%255D.jpg" width="218" /></a></div><div class="separator" style="clear: both; text-align: center;">--------------------------------------</div><div class="separator" style="clear: both; text-align: center;"> </div><form action="https://www.paypal.com/cgi-bin/webscr" method="post"><input name="cmd" type="hidden" value="_s-xclick" /><input name="hosted_button_id" type="hidden" value="KX624RT47UEB8" /><input alt="PayPal - The safer, easier way to pay online!" 
border="0" name="submit" src="https://www.paypalobjects.com/en_US/CH/i/btn/btn_buynowCC_LG.gif" type="image" /><img alt="" border="0" height="1" src="https://www.paypalobjects.com/en_US/i/scr/pixel.gif" width="1" /><br /><br /><strong>This volume includes, for the first time, a description of the first experimental results of the Synchro Energy Project, a project created approximately two years ago that brings together the concepts of </strong><a href="http://en.wikipedia.org/wiki/Synchronicity" target=""><strong>Synchronicity</strong></a><strong> (Jungian), </strong><a href="http://en.wikipedia.org/wiki/Non_Locality" target=""><strong>Non-locality</strong></a><strong> and </strong><a href="http://en.wikipedia.org/wiki/Wave_function_collapse" target=""><strong>wave function collapse</strong></a><strong>. The experiments were carried out in a small research laboratory in Switzerland (near the University of Lausanne) with the collaboration of: Patrick Reiner (theoretical physicist, PhD), Jean-Michel Bonnet (electronic engineer, PhD) and Christine Duval (neuropsychologist and physiologist). </strong><strong>The volume also presents and explains, for the first time, the theoretical and experimental basis supporting the "Principle of Quantum Compensation of Subconscious Nucleuses</strong><strong>". It will thus offer the careful reader some excellent points for reflection on an area that has so far remained completely unexplored within the field of research on the interaction between psyche and matter.</strong> <br /><br /><a href="http://ja.wikipedia.org/wiki/%E5%88%A9%E7%94%A8%E8%80%85:Fausto_Intilla" target=""><strong>Fausto Intilla</strong></a><strong>, inventor and scientific popularizer, is of Italian origin but lives and works in Switzerland (Ticino County). In publishing, he made his debut in 1995 with “Journey beyond this life” (ed. 
Nuovi Autori, Milano), a captivating science fiction story which testifies to the author's many-sided talents. His most recent books are: "</strong><a href="http://www.ibs.it/code/9788848805285/intilla-fausto/dio-mc2-oltre-l-universo-olografico.html" target=""><strong>Dio=mc2. Oltre l'Universo Olografico</strong></a><strong>" and "</strong><a href="http://www.ibs.it/code/9788848805339/intilla-fausto/funzione-onda-della.html?shop=4159" target=""><strong>La funzione d'onda della Realtà</strong></a><strong>". English books by Fausto Intilla: </strong><strong>"</strong><strong>The Synchro Energy Project, beyond the Holographic Universe</strong><strong>"</strong><strong>. In the field of inventions, however, his name is linked to the </strong><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN/5505026" target=""><strong>“Tree Structure”</strong></a><strong>, one of the most popular anti-seismic structures for bridges and viaducts, patented in Japan and in the United States (see: </strong><a href="http://www.uspto.gov/" target=""><strong>www.uspto.gov</strong></a><strong>). His e-mail address is: </strong><a href="mailto:f.intilla@bluewin.ch" target=""><strong>f.intilla@bluewin.ch</strong></a> <br /><br /><strong>Intilla is also the creator of the “Principle of Quantum Compensation of Subconscious Nucleuses”. His research on subconscious nucleuses and the experiments he proposed to verify this Principle have been taken into consideration by several research groups in both Europe and the United States; one of these is the renowned P.E.A.R. laboratory (</strong><a href="http://www.princeton.edu/~pear/" target=""><strong>Princeton Engineering Anomalies Research</strong></a><strong>) situated in New Jersey, USA. 
The research in this science by </strong><a href="http://en.wikipedia.org/wiki/Roger_D._Nelson" target=""><strong>Dr. Roger D. Nelson</strong></a><strong> and colleagues, after the recent closure of the PEAR laboratory, was transferred here: </strong><a href="http://www.icrl.org/" target=""><strong>ICRL</strong></a><strong>. In this Institute, for several years, the research has been directed mainly toward the "</strong><a href="http://noosphere.princeton.edu/" target=""><strong>Global Consciousness Project</strong></a><strong>".</strong> </form>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-30527199607797943992012-02-28T05:02:00.000-08:002012-02-28T05:02:02.920-08:00Quantum Microphone Captures Extremely Weak Sound.<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-n4b8t_tAfk8/T0zPlInalUI/AAAAAAAABBU/S0dbIlCACY8/s1600/sound.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-n4b8t_tAfk8/T0zPlInalUI/AAAAAAAABBU/S0dbIlCACY8/s1600/sound.jpg" /></a></div><div style="text-align: center;"><em><span style="font-size: xx-small;">A "quantum microphone" based on a Single Electron Transistor (SET) detects sound waves on a chip surface, so-called Surface Acoustic Waves (SAW). The waves make the charge of the atoms underneath the quantum microphone oscillate. Since the quantum microphone is an extremely sensitive charge detector, very low sound levels can be detected. (The size of the waves is exaggerated in the picture). (Credit: Philip Krantz, Chalmers)</span></em></div><div style="text-align: center;">Source: <a href="http://www.sciencedaily.com/releases/2012/02/120227093954.htm"><span style="color: yellow;">Science Daily</span></a></div><div style="text-align: center;">-------------------------------------</div><div style="text-align: left;"><span class="date">ScienceDaily (Feb. 
27, 2012)</span> — Scientists from Chalmers University of Technology have demonstrated a new kind of detector for sound at the level of quietness of quantum mechanics. The result offers prospects of a new class of quantum hybrid circuits that mix acoustic elements with electrical ones, and may help illuminate new phenomena of quantum physics. </div>The results have been published in<em> Nature Physics.</em><br />The "quantum microphone" is based on a single electron transistor, that is, a transistor where the current passes one electron at a time. The acoustic waves studied by the research team propagate over the surface of a crystalline microchip, and resemble the ripples formed on a pond when a pebble is thrown into it. The wavelength of the sound is a mere 3 micrometers, but the detector is even smaller, and capable of rapidly sensing the acoustic waves as they pass by.<br />On the chip surface, the researchers have fabricated a three-millimeter-long echo chamber, and even though the speed of sound on the crystal is ten times higher than in air, the detector shows how sound pulses reflect back and forth between the walls of the chamber, thereby verifying the acoustic nature of the wave.<br />The detector is sensitive to waves with peak heights of a few percent of a proton diameter, levels so quiet that sound can be governed by quantum law rather than classical mechanics, much in the same way as light.<br />"The experiment is done on classical acoustic waves, but it shows that we have everything in place to begin studies of proper quantum-acoustics, and nobody has attempted that before," says Martin Gustafsson, PhD student and first author of the article.<br />Apart from the extreme quietness, the pitch of the waves is too high for us to hear: The frequency of almost 1 gigahertz is 21 octaves above one-lined A. 
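The "21 octaves" figure can be verified with two lines of arithmetic, taking one-lined (concert) A as 440 Hz and recalling that each octave doubles the frequency:

```python
# Sanity check of the article's claim: ~1 GHz is about 21 octaves above
# one-lined (concert) A at 440 Hz, since each octave doubles the frequency.
import math

A4_HZ = 440.0
f_21_octaves = A4_HZ * 2 ** 21            # frequency 21 octaves above A4, in Hz
octaves_to_1ghz = math.log2(1e9 / A4_HZ)  # exact octave count up to 1 GHz

print(f"440 Hz * 2^21 = {f_21_octaves / 1e9:.3f} GHz")     # ~0.923 GHz
print(f"octaves from A4 to 1 GHz: {octaves_to_1ghz:.1f}")  # ~21.1
```

Doubling 440 Hz twenty-one times gives about 0.92 GHz, consistent with the "almost 1 gigahertz" quoted above.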
The new detector is the most sensitive in the world for such high-frequency sound.<br /><div style="text-align: center;"><br /></div>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-66209554324794349912012-02-27T00:35:00.002-08:002012-02-27T00:35:33.898-08:00Replacing Electricity With Light: First Physical 'Metatronic' Circuit Created.<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-9KWPcJvgAaA/T0s_sM1V8LI/AAAAAAAABBE/8ULvEVTGSEY/s1600/luce.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-9KWPcJvgAaA/T0s_sM1V8LI/AAAAAAAABBE/8ULvEVTGSEY/s320/luce.jpg" width="271" /></a></div><div style="text-align: center;"><em><span style="font-size: xx-small;">Figure A. When the plane of the electric field is in line with the nanorods the circuit is wired in parallel. Figure B. When the plane of the electric field crosses both the nanorods and the gaps the circuit is wired in series. (Credit: Image courtesy of University of Pennsylvania).</span></em></div><div style="text-align: center;">Source: <a href="http://www.sciencedaily.com/releases/2012/02/120223183809.htm"><span style="color: yellow;">Science Daily</span></a></div><div style="text-align: center;">----------------------------------</div><div style="text-align: left;"><span class="date">ScienceDaily (Feb. 23, 2012)</span> — The technological world of the 21<sup>st</sup> century owes a tremendous amount to advances in electrical engineering, specifically, the ability to finely control the flow of electrical charges using increasingly small and complicated circuits. 
And while those electrical advances continue to race ahead, researchers at the University of Pennsylvania are pushing circuitry forward in a different way, by replacing electricity with light.</div><div style="text-align: left;"><br /></div>"Looking at the success of electronics over the last century, I have always wondered why we should be limited to electric current in making circuits," said Nader Engheta, professor in the electrical and systems engineering department of Penn's School of Engineering and Applied Science. "If we moved to shorter wavelengths in the electromagnetic spectrum -- like light -- we could make things smaller, faster and more efficient."<br />Different arrangements and combinations of electronic circuits have different functions, ranging from simple light switches to complex supercomputers. These circuits are in turn built of different arrangements of circuit elements, like resistors, inductors and capacitors, which manipulate the flow of electrons in a circuit in mathematically precise ways. And because both electric circuits and optics follow Maxwell's equations -- the fundamental formulas that describe the behavior of electromagnetic fields -- Engheta's dream of building circuits with light wasn't just the stuff of imagination. In 2005, he and his students published a theoretical paper outlining how optical circuit elements could work.<br />Now, he and his group at Penn have made this dream a reality, creating the first physical demonstration of "lumped" optical circuit elements. 
This represents a milestone in a nascent field of science and engineering Engheta has dubbed "metatronics."<br />Engheta's research, which was conducted with members of his group in the electrical and systems engineering department, Yong Sun, Brian Edwards and Andrea Alù, was published in the journal <em>Nature Materials.</em><br />In electronics, the "lumped" designation refers to elements that can be treated as a black box, something that turns a given input into a perfectly predictable output without an engineer having to worry about what exactly is going on inside the element every time he or she is designing a circuit.<br />"Optics has always had its own analogs of elements, things like lenses, waveguides and gratings," Engheta said, "but they were never lumped. Those elements are all much larger than the wavelength of light because that's all that could be easily built in the old days. For electronics, the lumped circuit elements were always much smaller than the wavelength of operation, which is in the radio or microwave frequency range."<br />Nanotechnology has now opened that possibility for lumped optical circuit elements, allowing construction of structures that have dimensions measured in nanometers. In this experiment's case, the structure was comb-like arrays of rectangular nanorods made of silicon nitride.<br />The "meta" in "metatronics" refers to metamaterials, the relatively new field of research where nanoscale patterns and structures embedded in materials allow them to manipulate waves in ways that were previously impossible. Here, the cross-sections of the nanorods and the gaps between them form a pattern that replicates the function of resistors, inductors and capacitors, three of the most basic circuit elements, but at optical wavelengths.<br />"If we have the optical version of those lumped elements in our repertoire, we can actually make designs similar to what we do in electronics but now for operation with light," Engheta said. 
"We can build a circuit with light."<br />In their experiment, the researchers illuminated the nanorods with an optical signal, a wave of light in the mid-infrared range. They then used spectroscopy to measure the wave as it passed through the comb. Repeating the experiment using nanorods with nine different combinations of widths and heights, the researchers showed that the optical "current" and optical "voltage" were altered by the optical resistors, inductors and capacitors with parameters corresponding to those differences in size.<br />"A section of the nanorod acts as both an inductor and resistor, and the air gap acts as a capacitor," Engheta said.<br />Beyond changing the dimensions and the material the nanorods are made of, the function of these optical circuits can be altered by changing the orientation of the light, giving metatronic circuits access to configurations that would be impossible in traditional electronics.<br />This is because a light wave has a polarization: the electric field oscillating in the wave has a definable orientation in space. In metatronics, it is that electric field that interacts with and is changed by the elements, so changing the field's orientation can be like rewiring an electric circuit.<br />When the plane of the field is in line with the nanorods, as in Figure A, the circuit is wired in parallel and the current passes through the elements simultaneously. When the plane of the electric field crosses both the nanorods and the gaps, as in Figure B, the circuit is wired in series and the current passes through the elements sequentially.<br />"The orientation gives us two different circuits, which is why we call this 'stereo-circuitry,'" Engheta said.
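The series/parallel distinction that the polarization selects can be illustrated with ordinary circuit algebra: in series, impedances add directly; in parallel, admittances (reciprocal impedances) add. The sketch below is only a conventional-electronics analogy with hypothetical component values and drive frequency, not the actual nanorod parameters from the Penn experiment.

```python
# Analogy sketch (hypothetical values): the two polarizations of a
# metatronic circuit select between the classic series and parallel
# combination rules for the same three elements.

import cmath

def z_resistor(R):
    return complex(R, 0.0)

def z_inductor(L, omega):
    return 1j * omega * L          # Z_L = j*omega*L

def z_capacitor(C, omega):
    return 1.0 / (1j * omega * C)  # Z_C = 1/(j*omega*C)

def z_series(*zs):
    return sum(zs)                 # series: impedances add

def z_parallel(*zs):
    return 1.0 / sum(1.0 / z for z in zs)  # parallel: admittances add

omega = 2 * cmath.pi * 1e3              # 1 kHz drive (arbitrary)
elements = (z_resistor(100.0),          # 100 ohm
            z_inductor(10e-3, omega),   # 10 mH
            z_capacitor(1e-6, omega))   # 1 uF

print("series:  ", z_series(*elements))    # ~ (100 - 96.3j) ohm
print("parallel:", z_parallel(*elements))  # a different complex impedance
```

Rotating the light's polarization in a metatronic circuit is thus analogous to swapping `z_series` for `z_parallel` without touching the components themselves.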
"We could even have the wave hit the rods obliquely," he added, "and get something we don't have in regular electronics: a circuit that's neither in series nor in parallel but a mixture of the two."<br />This principle could be taken to an even higher level of complexity by building nanorod arrays in three dimensions. An optical signal hitting such a structure's top would encounter a different circuit than a signal hitting its side. Building off their success with basic optical elements, Engheta and his group are laying the foundation for this kind of complex metatronics.<br />"Another reason for success in electronics has to do with its modularity," he said. "We can make an infinite number of circuits depending on how we arrange different circuit elements, just like we can arrange the alphabet into different words, sentences and paragraphs.<br />"We're now working on designs for more complicated optical elements," Engheta said. "We're on a quest to build these new letters one by one."<br />This work was supported in part by the U.S. Air Force Office of Scientific Research.<br />Andrea Alù is now an assistant professor at the University of Texas at Austin.<br />Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-65673491107167526502012-02-27T00:22:00.000-08:002012-02-27T00:22:18.296-08:00Scientists Score New Victory Over Quantum Uncertainty.<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-xSmMZWBvhHs/T0s8bqa0S5I/AAAAAAAABA8/0ZQ8hpxjk6o/s1600/prof+c.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-xSmMZWBvhHs/T0s8bqa0S5I/AAAAAAAABA8/0ZQ8hpxjk6o/s1600/prof+c.jpg" /></a></div><div style="text-align: center;"><em><span style="font-size: xx-small;">Michael Chapman, a professor in the School of Physics at Georgia Tech, poses with optical equipment in his laboratory.
Chapman’s research team is exploring squeezed states using atoms of Bose-Einstein condensates. (Credit: Gary Meek; image courtesy of Georgia Institute of Technology, Research Communications)</span></em></div><div style="text-align: center;">Source: <a href="http://www.sciencedaily.com/releases/2012/02/120226153510.htm"><span style="color: yellow;">Science Daily</span></a></div><div style="text-align: center;">-------------------------------</div><div style="text-align: left;"><span class="date">ScienceDaily (Feb. 26, 2012)</span> — Most people attempt to reduce the little uncertainties of life by carrying umbrellas on cloudy days, purchasing automobile insurance or hiring inspectors to evaluate homes they might consider purchasing. For scientists, reducing uncertainty is a no less important goal, though in the weird realm of quantum physics, the term has a more specific meaning.</div>For scientists working in quantum physics, the Heisenberg Uncertainty Principle says that measurements of properties such as the momentum of an object and its exact position cannot be simultaneously specified with arbitrary accuracy. As a result, there must be some uncertainty in either the exact position of the object, or its exact momentum. The amount of uncertainty can be determined, and is often represented graphically by a circle showing the area within which the measurement actually lies.<br />Over the past few decades, scientists have learned to cheat a bit on the Uncertainty Principle through a process called "squeezing," which has the effect of changing how the uncertainty is shown graphically. Changing the circle to an ellipse and ultimately to almost a line allows one component of the complementary measurements -- the momentum or the position, in the case of an object -- to be specified more precisely than would otherwise be possible.
The actual area of uncertainty remains unchanged, but is represented by a different shape that serves to improve accuracy in measuring one property.<br />This squeezing has been done in measuring properties of photons and atoms, and can be important for certain high-precision measurements needed by atomic clocks and the magnetometers used to create magnetic resonance imaging views of structures deep inside the body. For the military, squeezing more accuracy could improve the detection of enemy submarines attempting to hide underwater or improve the accuracy of atom-based inertial guidance instruments.<br />Now physicists at the Georgia Institute of Technology have added another measurement to the list of those that can be squeezed. In a paper appearing online February 26 in the journal <em>Nature Physics</em>, they report squeezing a property called the nematic tensor, which is used to describe the rubidium atoms in Bose-Einstein condensates, a unique form of matter in which all atoms have the same quantum state. The research was sponsored by the National Science Foundation (NSF).<br />"What is new about our work is that we have probably achieved the highest level of atom squeezing reported so far, and the more squeezing you get, the better," said Michael Chapman, a professor in Georgia Tech's School of Physics. "We are also squeezing something other than what people have squeezed before."<br />Scientists have been squeezing the spin states of atoms for 15 years, but only for atoms that have just two relevant quantum states -- known as spin-½ systems. In collections of those atoms, the spin states of the individual atoms can be added together to get a collective angular momentum that describes the entire system of atoms.<br />In the Bose-Einstein condensate atoms being studied by Chapman's group, the atoms have three quantum states, and their collective spin totals zero -- not very helpful for describing the system.
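The circle-to-ellipse picture can be made concrete with a minimal numerical sketch of a squeezed Gaussian state, in natural units with ħ = 1. The squeezing parameter r below is an illustrative value, not a figure from the Georgia Tech experiment: one quadrature's uncertainty shrinks by e^(−r), the conjugate one grows by e^(+r), and their product (the "area") stays pinned at the Heisenberg minimum.

```python
import math

hbar = 1.0                      # natural units
r = 1.2                         # squeezing parameter (illustrative value)

# Minimum-uncertainty "circle": both quadratures equal sqrt(hbar/2).
dx0 = dp0 = math.sqrt(hbar / 2)

# Squeezing deforms the circle into an ellipse: one quadrature shrinks
# while the conjugate quadrature stretches by the same factor.
dx = dx0 * math.exp(-r)         # squeezed: known more precisely
dp = dp0 * math.exp(+r)         # anti-squeezed: known less precisely

print(f"unsqueezed: dx = dp = {dx0:.4f}")
print(f"squeezed:   dx = {dx:.4f}, dp = {dp:.4f}")
print(f"area dx*dp = {dx * dp:.4f}  (Heisenberg bound hbar/2 = {hbar / 2})")
```

Squeezing therefore buys precision in the quadrature you care about at the cost of the one you don't, exactly as the graphical description above suggests.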
Chapman and graduate students Chris Hamley, Corey Gerving, Thai Hoang and Eva Bookjans therefore learned to squeeze a more complex measure that describes their system of spin-1 atoms: the nematic tensor, also known as the quadrupole.<br />Nematicity is a measure of alignment that is important in describing liquid crystals, exotic magnetic materials and some high-temperature superconductors.<br />"We don't have a spin vector pointing in a particular direction, but there is still some residual information in where this collection of atoms is pointing," Chapman explained. "That next higher-order description is the quadrupole, or nematic tensor. Squeezing this actually works quite well, and we get a large degree of improvement, so we think it is relatively promising."<br />Experimentally, the squeezing is created by entangling some of the atoms, which takes away their independence. Chapman's group accomplishes this by colliding atoms in their ensemble of some 40,000 rubidium atoms.<br />"After they collide, the state of one atom is connected to that of the other atom, so they have been entangled in that way," he said. "This entanglement creates the squeezing."<br />Reducing uncertainty in measuring atoms could have important implications for precise magnetic measurements. The next step will be to determine experimentally whether the technique can improve the measurement of magnetic fields.<br />"In principle, this should be a straightforward experiment, but it turns out that the biggest challenge is that magnetic fields in the laboratory fluctuate due to environmental factors such as the effects of devices such as computer monitors," Chapman said. "If we had a noiseless laboratory, we could measure the magnetic field both with and without squeezed states to demonstrate the enhanced precision.
But in our current lab environment, our measurements would be affected by outside noise, not the limitations of the atomic sensors we are using."<br />The new squeezed property could also have applications in quantum information systems, which can store information in the spin of atoms and their nematic tensor.<br />"There are a lot of things you can do with quantum entanglement, and improving the accuracy of measurements is one of them," Chapman added. "We still have to obey Heisenberg's Uncertainty Principle, but we do have the ability to manipulate it."Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com0tag:blogger.com,1999:blog-8784745434426267484.post-80765232576892344562010-05-16T09:01:00.000-07:002010-05-16T09:04:08.575-07:00Quantum Dynamics of Matter Waves Reveal Exotic Multibody Collisions.<div align="center"><a href="http://4.bp.blogspot.com/_-LKF2JK_r2s/S_AXFofP-0I/AAAAAAAAAyc/A2xxvQ3HsN8/s1600/100514094836.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 300px; DISPLAY: block; HEIGHT: 240px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5471898932790426434" border="0" alt="" src="http://4.bp.blogspot.com/_-LKF2JK_r2s/S_AXFofP-0I/AAAAAAAAAyc/A2xxvQ3HsN8/s320/100514094836.jpg" /></a><strong> Source:<span style="color:#ffff66;"> </span></strong><a href="http://www.sciencedaily.com/releases/2010/05/100514094836.htm"><strong><span style="color:#ffff66;">ScienceDaily</span></strong></a></div><div align="center"><strong>-------------------------</strong></div><div align="left"><strong>ScienceDaily (May 16, 2010) — At extremely low temperatures, atoms can aggregate into so-called Bose-Einstein condensates, forming coherent, laser-like matter waves. Due to interactions between the atoms, fundamental quantum dynamics emerge and give rise to periodic collapses and revivals of the matter wave field. 
</strong></div><div align="left"><strong>A group of scientists led by Professor Immanuel Bloch (Chair of Experimental Physics at the Ludwig-Maximilians-Universität München (LMU) and Director of the Quantum Many Body Systems Division at the Max Planck Institute of Quantum Optics in Garching) has now succeeded in taking a glance 'behind the scenes' of atomic interactions, revealing the complex structure of these quantum dynamics. By generating thousands of miniature BECs ordered in an optical lattice, the researchers were able to observe a large number of collapse and revival cycles over long periods of time.<br />The research is published in the journal <em>Nature</em>.<br />The experimental results imply that the atoms not only interact pairwise -- as typically assumed -- but also perform exotic collisions involving three, four or more atoms at the same time. On the one hand, these results have fundamental importance for the understanding of quantum many-body systems. On the other hand, they pave the way for the generation of new exotic states of matter, based on such multi-body interactions.<br />The experiment starts by cooling a dilute cloud of hundreds of thousands of atoms to temperatures close to absolute zero, approximately -273 degrees Celsius. At these temperatures the atoms form a so-called Bose-Einstein condensate (BEC), a quantum phase in which all particles occupy the same quantum state. Now an optical lattice is superimposed on the BEC: This is a kind of artificial crystal made of light with periodically arranged bright and dark areas, generated by the superposition of standing laser light waves from different directions. This lattice can be viewed as an 'egg carton' on which the atoms are distributed. Whereas in a real egg carton each site is occupied by either a single egg or no egg, the number of atoms sitting at each lattice site is determined by the laws of quantum mechanics: Depending on the lattice height (i.e. 
the intensity of the laser beam) the single lattice sites can be occupied by zero, one, two, three or more atoms at the same time.<br />The use of those "atom number superposition states" is the key to the novel measurement principle developed by the researchers. The dynamics of an atom number state can be compared to the dynamics of a swinging pendulum. Just as pendulums of different lengths are characterized by different oscillation frequencies, so are the states of different atom numbers. "However, these frequencies are modified by inter-atomic collisions. If only pairwise interactions between atoms were present, the pendulums representing the individual atom number states would swing synchronously and their oscillation frequencies would be exact multiples of the pendulum frequency for two interacting atoms," explains Sebastian Will, a graduate student on the experiment.<br />Using a tricky experimental set-up, the physicists were able to track the evolution of the different superimposed oscillations over time. Interference patterns periodically became visible and disappeared, again and again. From their intensity and periodicity, the physicists found unambiguous evidence that the frequencies are actually not simple multiples of the two-body case. "This really caught us by surprise. We became aware that a more complex mechanism must be at work," Sebastian Will recalls. "Due to their ultralow temperature, the atoms occupy the energetically lowest possible quantum state at each lattice site. Nevertheless, Heisenberg's uncertainty principle allows them to make -- so to speak -- a virtual detour via energetically higher-lying quantum states during their collision. Practically, this mechanism gives rise to exotic collisions, which involve three, four or more atoms at the same time."<br />The results reported in this work provide an improved understanding of interactions between microscopic particles. 
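The pendulum analogy can be sketched numerically. Assuming purely pairwise interactions, the atoms at a single lattice site carry an interaction energy E_n = (U/2)n(n−1) (with ħ = 1), so neighboring atom-number states dephase at exact multiples of U and the matter-wave field collapses and then revives perfectly at t = 2π/U. The Python sketch below uses illustrative parameters, not values from the experiment; it reproduces only this idealized pairwise case, whose exact periodicity is precisely what the measured frequencies deviated from.

```python
import cmath
import math

def matter_wave_amplitude(alpha, U, t, nmax=60):
    """|<a>(t)| for a coherent state |alpha> at one lattice site, evolving
    under the purely pairwise interaction energy E_n = (U/2) n (n-1), hbar = 1.
    Neighboring number states dephase at E_{n+1} - E_n = U*n, i.e. at exact
    multiples of U, so the field revives perfectly at t = 2*pi/U."""
    weight = math.exp(-abs(alpha) ** 2)          # Poissonian number statistics
    total = 0j
    for n in range(nmax):
        p_n = weight * abs(alpha) ** (2 * n) / math.factorial(n)
        total += p_n * alpha * cmath.exp(-1j * U * n * t)
    return abs(total)

U = 1.0          # interaction strength (illustrative; sets revival time 2*pi/U)
alpha = 2.0      # mean atom number per site = |alpha|^2 = 4 (illustrative)
for t in (0.0, math.pi / U, 2 * math.pi / U):
    print(f"t = {t:5.2f}   |<a>| = {matter_wave_amplitude(alpha, U, t):.4f}")
# full amplitude at t = 0, near-zero at the collapse, full revival at t = 2*pi/U
```

Adding genuine three- and four-body interaction terms would shift the Fock-state energies away from exact multiples of U, spoiling the perfect revival — the fingerprint the Munich/Garching team observed.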
Such an improved understanding may not only be of fundamental scientific interest but may also find direct application in the context of ultracold atoms in optical lattices. Owing to their exceptional experimental controllability, ultracold atoms in optical lattices can form a "quantum simulator" to model condensed matter systems. Such a quantum simulator is expected to help understand the physics behind superconductivity or quantum magnetism. Furthermore, as each lattice site represents a miniature laboratory for the generation of exotic quantum states, experimental set-ups using optical lattices may turn out to be among the most sensitive probes for observing atomic collisions. </strong></div><div align="left"><strong>Story Source:<br />Adapted from materials provided by </strong><a class="blue" href="http://www.uni-muenchen.de/" rel="nofollow" target="_blank"><strong>Ludwig-Maximilians-Universität München</strong></a><strong>.<br />Journal Reference:<br />Sebastian Will, Thorsten Best, Ulrich Schneider, Lucia Hackermüller, Dirk-Sören Lühmann, Immanuel Bloch. Time-resolved observation of coherent multi-body interactions in quantum phase revivals. <em>Nature</em>, 2010; 465 (7295): 197. DOI: </strong><a href="http://dx.doi.org/10.1038/nature09036" rel="nofollow" target="_blank"><strong>10.1038/nature09036</strong></a><strong> </strong></div>Fausto Intillahttps://plus.google.com/110377150394476015496noreply@blogger.com33