Sent: Monday, May 17, 2010 8:14 AM
Subject: Fw: Theory - some materials for introductory explanations

Nature 452, 705-706 (10 April 2008) | doi:10.1038/452705a; Published online 9 April 2008

Quantum physics: Observations turn up the heat

Kimberly R. Chapin & Marlan O. Scully

Abstract

The idea that observers can influence what they observe has a history that stretches back beyond quantum physics. That we can affect how a system heats up and cools down simply by probing it is a new twist.

As the great quantum physicist Werner Heisenberg — he of the uncertainty principle — made plain, in quantum mechanics, separation of the observer from the phenomenon to be observed is not possible. But in fact, the strange idea that consciousness, intelligence and the act of observation are intertwined with physical phenomena predates Heisenberg.

Kimberly R. Chapin and Marlan O. Scully are at the Institute for Quantum Studies, Texas A&M University, College Station, Texas 77843, USA.
Marlan O. Scully is also in the Applied Physics and Materials Science Group, Princeton University, Princeton, New Jersey 08544, USA.


New Scientist

Quantum wonders: Corpuscles and buckyballs

IT DOES not require any knowledge of quantum physics to recognise quantum weirdness. The oldest and grandest of the quantum mysteries relates to a question that has exercised great minds at least since the time of the ancient Greek mathematician Euclid: what is light made of?

History has flip-flopped on the issue. Isaac Newton thought light was tiny particles - "corpuscles" in the argot of the day. Not all his contemporaries were impressed, and in classic experiments in the early 1800s the polymath Thomas Young showed how a beam of light diffracted, or spread out, as it passed through two narrow slits placed close together, producing an interference pattern on a screen behind just as if it were a wave.

So which is it, particle or wave? Keen to establish its reputation for iconoclasm, quantum theory provided an answer soon after it bowled onto the scene in the early 20th century. Light is both a particle and a wave - and so, for that matter, is everything else. A single moving particle such as an electron can diffract and interfere with itself as if it were a wave, and believe it or not, an object as large as a car has a secondary wave character as it trundles along the road.

That revelation came in a barnstorming doctoral thesis submitted by the pioneering quantum physicist Louis de Broglie in 1924. He showed that by describing moving particles as waves, you could explain why they had discrete, quantised energy levels rather than the continuum predicted by classical physics.
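
De Broglie's reasoning fits in a line. A moving particle of momentum p carries a wavelength lambda = h/p; demand that a whole number of those wavelengths fit around a closed orbit of radius r, and the motion is quantised. As a sketch of the argument in modern notation:

    \lambda = \frac{h}{p}, \qquad n\lambda = 2\pi r \;\Rightarrow\; pr = n\hbar, \quad n = 1, 2, 3, \ldots

Only whole values of n are allowed, and that integer is what picks out discrete energy levels.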

De Broglie first assumed that this was just a mathematical abstraction, but wave-particle duality seems to be all too real. Young's classic wave interference experiment has been reproduced with electrons and all manner of other particles.

We haven't yet done it with a macroscopic object such as a moving car, admittedly. Its de Broglie wavelength is something like 10^-38 metres, and making it do wave-like things such as diffract would mean creating something with slits on a similar scale, a task way beyond our engineering capabilities. The experiment has been performed, though, with a buckyball - a soccer-ball-shaped lattice of 60 carbon atoms that, at about a nanometre in diameter, is large enough to be seen under a microscope (Nature, vol 401, p 680).
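
The wavelengths themselves are a one-line calculation, lambda = h/(mv). A quick back-of-the-envelope check in Python - the masses and speeds here are illustrative assumptions, not figures from the experiments:

    # de Broglie wavelength: lambda = h / (m * v)
    h = 6.626e-34  # Planck's constant, J s

    def de_broglie(mass_kg, speed_m_per_s):
        """Wavelength of a moving object, in metres."""
        return h / (mass_kg * speed_m_per_s)

    # A ~1000 kg car at ~10 m/s: around 7e-38 m, hopelessly small
    print(de_broglie(1000, 10))

    # A buckyball (60 carbon atoms, ~1.2e-24 kg) at ~200 m/s: a few picometres,
    # small, but within reach of a finely made diffraction grating
    print(de_broglie(1.2e-24, 200))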

All that leaves a fundamental question: how can stuff be waves and particles at the same time? Perhaps because it is neither, says Markus Arndt of the University of Vienna, Austria, who did the buckyball experiments in 1999. What we call an electron or a buckyball might in the end have no more reality than a click in a detector, or our brain's reconstruction of photons hitting our retina. "Wave and particle are then just constructs of our mind to facilitate everyday talking," he says.

Quantum wonders: The Hamlet effect

"A WATCHED pot never boils." Armed with common sense and classical physics, you might dispute that statement. Quantum physics would slap you down. Quantum watched pots do refuse to boil - sometimes. At other times, they boil faster. At yet other times, observation pitches them into an existential dilemma over whether to boil at all.

This madness is a logical consequence of the Schrödinger equation, the formula concocted by Austrian physicist Erwin Schrödinger in 1926 to describe how quantum objects evolve probabilistically over time.

Imagine, for example, conducting an experiment with an initially undecayed radioactive atom in a box. According to the Schrödinger equation, at any point after you start the experiment the atom exists in a mixture, or "superposition", of decayed and undecayed states.

Each state has a probability attached that is encapsulated in a mathematical description known as a wave function. Over time, as long as you don't look, the wave function evolves as the probability of the decayed state slowly increases. As soon as you do look, the atom chooses - in a manner in line with the wave function probabilities - which state it will reveal itself in, and the wave function "collapses" to a single determined state.
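
A minimal numerical sketch of that evolution, treating the atom as a two-state system whose undecayed probability drains away exponentially (the lifetime is an arbitrary assumed value, not tied to any particular atom):

    import math

    tau = 1.0  # assumed mean lifetime, arbitrary units

    # While nobody looks, the wave function's two probabilities evolve smoothly
    for t in [0.0, 0.5, 1.0, 2.0]:
        p_undecayed = math.exp(-t / tau)  # |amplitude of undecayed state|^2
        p_decayed = 1.0 - p_undecayed     # |amplitude of decayed state|^2
        print(f"t={t:.1f}  P(undecayed)={p_undecayed:.3f}  P(decayed)={p_decayed:.3f}")

A measurement at time t picks one of the two outcomes with exactly these probabilities, and the smooth evolution restarts from whichever state was found.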

This is the picture that gave birth to Schrödinger's infamous cat. Suppose the radioactive decay of an atom triggers a vial of poison gas to break, and a cat is in the box with the atom and the vial. Is the cat both dead and alive as long as we don't know whether the decay has occurred?

We don't know. All we know is that tests with larger and larger objects - including, recently, a resonating metal strip big enough to be seen under a microscope - seem to show that they really can be induced to adopt two states at once (Nature, vol 464, p 697).

The weirdest thing about all this is the implication that just looking at stuff changes how it behaves. Take the decaying atom: observing it and finding it undecayed resets the system to a definitive state, and the Schrödinger-equation evolution towards "decayed" must start again from scratch.

The corollary is that if you keep measuring often enough, the system will never be able to decay. This possibility is dubbed the quantum Zeno effect, after the Greek philosopher Zeno of Elea, who devised a famous paradox that "proved" that if you divided time up into ever smaller instants you could make change or motion impossible.
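
The arithmetic behind that corollary: for very short times the survival probability of a quantum state falls off quadratically, roughly P = 1 - (t/tau)^2, rather than linearly. Splitting a fixed observation window into N measured chunks therefore rescues the state, as this sketch shows (tau is an assumed timescale in arbitrary units):

    tau = 1.0  # assumed short-time ("Zeno") timescale
    T = 1.0    # total observation window, same units

    for n in [2, 10, 100, 1000]:
        dt = T / n                      # interval between measurements
        p_once = 1.0 - (dt / tau) ** 2  # quadratic short-time survival law
        p_total = p_once ** n           # survive all n measurements in a row
        print(f"{n:5d} measurements -> survival probability {p_total:.4f}")

The more often you look, the closer the survival probability creeps to 1: the watched pot never boils.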

And the quantum Zeno effect does happen. In 1990, researchers at the National Institute of Standards and Technology in Boulder, Colorado, showed they could hold a beryllium ion in an unstable energy configuration rather akin to balancing a pencil on its sharpened point, provided they kept re-measuring its energy (Physical Review A, vol 41, p 2295).

The converse "anti-Zeno" effect - making a quantum pot boil faster by just measuring it - also occurs. Where a quantum object has a complex arrangement of states to move into, a decay into a lower-energy state can be accelerated by measuring the system in the right way. In 2001, this too was observed in the lab (Physical Review Letters, vol 87, p 040402).

The third trick is the "quantum Hamlet effect", proposed last year by Vladan Pankovic of the University of Novi Sad, Serbia. A particularly intricate sequence of measurements, he found, can affect a system in such a way as to make the Schrödinger equation for its subsequent evolution intractable. As Pankovic puts it: to be decayed or not-decayed, "that is the analytically unsolvable question".

Quantum wonders: Something for nothing

"NOTHING will come of nothing," King Lear admonishes Cordelia in the eponymous Shakespeare play. In the quantum world, it's different: there, something comes of nothing and moves the furniture around.

Specifically, if you place two uncharged metal plates side by side in a vacuum, they will move towards each other, seemingly without reason. They won't move a lot, mind. Two plates with an area of a square metre placed one-thousandth of a millimetre apart will feel a force equivalent to the weight of just over a tenth of a gram.
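
Those figures follow from Casimir's formula for ideal plates, which gives an attractive force F = pi^2 * hbar * c * A / (240 * d^4) for plate area A and separation d. A quick check of the numbers quoted above:

    import math

    hbar = 1.055e-34  # reduced Planck constant, J s
    c = 3.0e8         # speed of light, m/s
    A = 1.0           # plate area, m^2
    d = 1.0e-6        # separation: one-thousandth of a millimetre, in metres

    F = math.pi ** 2 * hbar * c * A / (240 * d ** 4)  # Casimir force, newtons
    print(f"force = {F * 1e3:.2f} mN")                    # about 1.3 mN
    print(f"weight-equivalent = {F / 9.81 * 1e3:.2f} g")  # just over 0.1 g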

The Dutch physicist Hendrik Casimir first predicted this minuscule attraction in 1948. "The Casimir effect is a manifestation of the quantum weirdness of the microscopic world," says physicist Steve Lamoreaux of Yale University.

It has to do with the quantum quirk known as Heisenberg's uncertainty principle, which essentially says the more we know about some things in the quantum world, the less we know about others. You can't, for instance, deduce the exact position and momentum of a particle simultaneously. The more certain we are of where a particle is, the less certain we are of where it is heading.

A similar uncertainty relation exists between energy and time, with a dramatic consequence. If space were ever truly empty, it would contain exactly zero energy at a precisely defined moment in time - a simultaneous precision in energy and time that the uncertainty principle forbids.
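
In symbols, the two relations being invoked are:

    \Delta x \,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta E \,\Delta t \gtrsim \frac{\hbar}{2}

The energy-time version has a subtler formal status than the first (time enters quantum mechanics as a parameter, not a measured quantity), but for order-of-magnitude arguments like the one above it is used in exactly the same way.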

It follows that there is no such thing as a vacuum. According to quantum field theory, empty space is actually fizzing with short-lived stuff that appears, looks around a bit, decides it doesn't like it and disappears again, all in the name of preventing the universe from violating the uncertainty principle. For the most part, this stuff is pairs of virtual photons that quickly annihilate in a puff of energy. The tiny electric fields caused by these pop-up particles, and their effect on free electrons in metal plates, might explain the Casimir effect.

Or they might not. Thanks to the uncertainty principle, the electric fields associated with the atoms in the metal plates also fluctuate. These variations create tiny attractions called van der Waals forces between the atoms. "You can't ascribe the Casimir force solely either to the zero point of the vacuum or to the zero point motion of the atoms that make up the plates," says Lamoreaux. "Either view is correct and arrives at the same physical result."

Whichever picture you adopt, the Casimir effect is big enough to be a problem. In nanoscale machines, for example, it could cause components in close proximity to stick together.

The way to avoid that might be simply to reverse the effect. In 1961, Russian physicists showed theoretically that combinations of materials with differing Casimir attractions can create scenarios where the overall effect is repulsion. Evidence for this strange "quantum buoyancy" was announced in January 2009 by physicists from Harvard University who had set up gold and silica plates separated by the liquid bromobenzene (Nature, vol 457, p 170).

Quantum wonders: The Elitzur-Vaidman bomb-tester

A BOMB triggered by a single photon of light is a scary thought. If such a thing existed in the classical world, you would never even be aware of it. Any photon entering your eye to tell you about it would already have set off the bomb, blowing you to kingdom come.

With quantum physics, you stand a better chance. According to a scheme proposed by the Israeli physicists Avshalom Elitzur and Lev Vaidman in 1993, you can use quantum trickery to detect a light-triggered bomb with light - and stay safe a guaranteed 25 per cent of the time (Foundations of Physics, vol 23, p 987).

The secret is a device called an interferometer. It exploits the quantumly weird fact that, given two paths to go down, a photon will take both at once. We know this because, at the far end of the device, where the two paths cross once again, a wave-like interference pattern is produced (see "Corpuscles and buckyballs").

To visualise what is going on, think of a photon entering the interferometer and taking one path while a ghostly copy of itself goes down the other. In Elitzur and Vaidman's thought experiment, half the time there is a photon-triggered bomb blocking one path. Only the real photon can trigger the bomb, so if it is the ghostly copy that gets blocked by the bomb, there is no explosion - and nor is there an interference pattern at the other end. In other words, we have "seen" the bomb without triggering it.
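
The probability bookkeeping can be checked with a toy two-path model: a 50:50 beam splitter turns path amplitudes into superpositions, and the bomb simply removes whatever amplitude runs down its arm. A minimal sketch, with port conventions of my own choosing rather than Elitzur and Vaidman's notation:

    import numpy as np

    # 50:50 beam splitter acting on (upper, lower) path amplitudes
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)
    photon_in = np.array([1, 0])  # photon enters on the upper path

    # No bomb: two beam splitters in a row, the paths interfere
    out = BS @ (BS @ photon_in)
    print(np.abs(out) ** 2)  # [0, 1]: every photon exits the "bright" port

    # Bomb blocking the lower path
    mid = BS @ photon_in
    p_boom = abs(mid[1]) ** 2           # photon took the lower arm: 0.5
    out = BS @ np.array([mid[0], 0.0])  # lower amplitude absorbed by the bomb
    p_dark, p_bright = np.abs(out) ** 2
    print(p_boom, p_bright, p_dark)     # 0.5, 0.25, 0.25

A click in the dark port - which never fires when both paths are open - announces the bomb without touching it. That is the guaranteed-safe 25 per cent.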

Barely a year after Elitzur and Vaidman proposed their bomb-testing paradox, physicists at the University of Innsbruck, Austria, brought it to life - not by setting off bombs, but by bouncing photons off mirrors (Physical Review Letters, vol 74, p 4763).

In 2000, Shuichiro Inoue and Gunnar Björk of the Royal Institute of Technology in Stockholm, Sweden, used a similar technique to show that you could capture the image of an object without shining light on it - something that could revolutionise medical imaging. "It would be very useful for something like X-ray scanning, if there were no radiation damage to the tissue because no X-rays actually hit it," says physicist Richard Jozsa of the University of Cambridge.

Jozsa is the brains behind perhaps the most eye-rubbing of such tricks: using a quantum computer to deliver the output of a program even when you don't run the program. As the team that implemented his idea in 2005 showed, quantum physics does at least retain some semblance of classical decency: to deliver a sensible answer, the computer does need to be switched on (Nature, vol 439, p 949).

Quantum wonders: Spooky action at a distance

ERWIN SCHRÖDINGER called it the "defining trait" of quantum theory. Einstein could not bring himself to believe in it at all, thinking it proof that quantum theory was seriously buggy. It is entanglement: the idea that particles can be linked in such a way that changing the quantum state of one instantaneously affects the other, even if they are light years apart.

This "spooky action at a distance", in Einstein's words, is a serious blow to our conception of how the world works. In 1964, physicist John Bell of the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, showed just how serious. He calculated a mathematical inequality that encapsulated the maximum correlation between the states of remote particles in experiments in which three "reasonable" conditions hold: that experimenters have free will in setting things up as they want; that the particle properties being measured are real and pre-existing, not just popping up at the time of measurement; and that no influence travels faster than the speed of light, the cosmic speed limit.

As many experiments since have shown, quantum mechanics regularly violates Bell's inequality, yielding levels of correlation way above those possible if his conditions hold. That pitches us into a philosophical dilemma. Do we not have free will, meaning something, somehow predetermines what measurements we take? That is not anyone's first choice. Are the properties of quantum particles not real - implying that nothing is real at all, but exists merely as a result of our perception? That's a more popular position, but it hardly leaves us any the wiser.

Or is there really an influence that travels faster than light? Cementing the Swiss reputation for precision timing, in 2008 physicist Nicolas Gisin and his colleagues at the University of Geneva showed that, if reality and free will hold, the speed of transfer of quantum states between entangled photons held in two villages 18 kilometres apart was somewhere above 10 million times the speed of light (Nature, vol 454, p 861).
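
The logic of that bound is plain division: any hidden influence must cross the separation within the experiment's timing uncertainty, so its minimum speed is distance over time window. An illustrative reconstruction with an assumed timing figure - the published bound rests on careful assumptions about reference frames:

    c = 3.0e8        # speed of light, m/s
    distance = 18e3  # separation between the two villages, m
    dt = 6e-12       # assumed timing-alignment window, s (picosecond scale)

    v_min = distance / dt
    print(f"influence speed >= {v_min:.1e} m/s, i.e. {v_min / c:.0e} times c")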

Whatever the true answer is, it will be weird. Welcome to quantum reality.

Quantum wonders: The field that isn't there

HERE'S a nice piece of quantum nonsense. Take a doughnut-shaped magnet and wrap a metal shield round its inside edge so that no magnetic field can leak into the hole. Then fire an electron through the hole.

There is no field in the hole, so the electron will act as if there is no field, right? Wrong. The wave associated with the electron's movement suffers a jolt as if there were something there.

Werner Ehrenberg and Raymond Siday were the first to note that this behaviour lurks in the Schrödinger equation (see "The Hamlet effect"). That was in 1949, but their result went unnoticed. Ten years later Yakir Aharonov and David Bohm, working at the University of Bristol in the UK, rediscovered the effect, and for some reason their names stuck.

So what is going on? The Aharonov-Bohm effect is proof that there is more to electric and magnetic fields than is generally supposed. You can't calculate the size of the effect on a particle by considering just the properties of the electric and magnetic fields where the particle is. You also have to take into account the properties where it isn't.

Casting about for an explanation, physicists decided to take a look at a property of the magnetic field known as the vector potential. For a long time, vector potentials were considered just handy mathematical tools - a shorthand for electrical and magnetic properties that didn't have any real-world significance. As it turns out, they describe something that is very real indeed.
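
The upshot can be written compactly. An electron of charge e acquires a phase set by the line integral of the vector potential A along its route, so the two routes around the hole end up out of step by an amount fixed by the enclosed magnetic flux Phi, even though the magnetic field is zero everywhere the electron actually travels:

    \Delta\varphi = \frac{e}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell} = \frac{e\,\Phi}{\hbar}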

The Aharonov-Bohm effect showed that the vector potential makes an electromagnetic field more than the sum of its parts. Even when a field isn't there, the vector potential still exerts an influence. That influence was seen unambiguously for the first time in 1986, when Akira Tonomura and colleagues at Hitachi's laboratories in Tokyo, Japan, measured a ghostly electron jolt (Physical Review Letters, vol 56, p 792).

Although it is far from an everyday phenomenon, the Aharonov-Bohm effect might prove to have uses in the real world - in magnetic sensors, for example, or field-sensitive capacitors and data storage buffers for computers that crunch light.

Quantum wonders: Superfluids and supersolids

FORGET radioactive spider bites, exposure to gamma rays, or any other accident favoured in Marvel comics: in the real world, it's quantum theory that gives you superpowers.

Take helium, for example. At room temperature, it is normal fun: you can fill floaty balloons with it, or inhale it and talk in a squeaky voice. At temperatures below around 2 kelvin, though, it is a liquid and its atoms become ruled by their quantum properties. There, it becomes super-fun: a superfluid.

Superfluid helium climbs up walls and flows uphill in defiance of gravity. It squeezes itself through impossibly small holes. It flips the bird at friction: put superfluid helium in a bowl, set the bowl spinning, and the helium sits unmoved as the bowl revolves beneath it. Set the liquid itself moving, though, and it will continue gyrating forever.

That's fun, but not particularly useful. The opposite might be said of superconductors. These solids conduct electricity with no resistance, making them valuable for transporting electrical energy, for creating enormously powerful magnetic fields - to steer protons around CERN's Large Hadron Collider, for instance - and for levitating superfast trains.

We don't yet know how all superconductors work, but it seems the uncertainty principle plays a part (see "Something for nothing"). At very low temperatures, the momentum of individual atoms or electrons in these materials is tiny and very precisely known, so the position of each atom is highly uncertain. In fact, they begin to overlap with each other to the point where you can't describe them individually. They start acting as one superatom or superelectron that moves without friction or resistance.
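
One way to put numbers on that overlap: a particle at temperature T has a thermal de Broglie wavelength lambda = h / sqrt(2 pi m k T), and individual identities start to blur once lambda approaches the spacing between particles. A rough sketch for helium atoms, with an assumed interatomic spacing:

    import math

    h = 6.626e-34      # Planck constant, J s
    k = 1.381e-23      # Boltzmann constant, J/K
    m = 6.65e-27       # mass of a helium-4 atom, kg
    spacing = 3.6e-10  # assumed interatomic spacing in liquid helium, m

    for T in [300.0, 4.2, 2.0]:
        lam = h / math.sqrt(2 * math.pi * m * k * T)  # thermal wavelength, m
        print(f"T = {T:5.1f} K: lambda/spacing = {lam / spacing:.2f}")

Only at liquid-helium temperatures does the ratio reach 1, which is roughly where the quantum antics begin.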

All this is nothing in the weirdness stakes, however, compared with a supersolid. The only known example is solid helium cooled to within a degree of absolute zero and at around 25 times normal atmospheric pressure.

Under these conditions, the bonds between helium atoms are weak, and some break off to leave a network of "vacancies" that behave almost exactly like real atoms. Under the right conditions, these vacancies form their own fluid-like Bose-Einstein condensate. This will, under certain circumstances, pass right through the normal helium lattice - meaning the solid flows, ghost-like, through itself.

So extraordinary is this superpower that Moses Chan and Eun-Seong Kim of Pennsylvania State University in University Park checked and re-checked their data on solid helium for four years before eventually publishing in 2004 (Nature, vol 427, p 225). "I had little confidence we would see the effect," says Chan. Nevertheless, researchers have seen hints that any crystalline material might be persuaded to perform such a feat at temperatures just a fraction above absolute zero. Not even Superman can do that.

Quantum wonders: Nobody understands

It is tempting, faced with the full-frontal assault of quantum weirdness, to trot out the notorious quote from Nobel prize-winning physicist Richard Feynman: "Nobody understands quantum mechanics."

The quote does have a ring of truth to it. The explanations attempted here use the most widely accepted framework for thinking about quantum weirdness, called the Copenhagen interpretation after the city in which Niels Bohr and Werner Heisenberg thrashed out its ground rules in the early 20th century.

With its uncertainty principles and measurement paradoxes, the Copenhagen interpretation amounts to an admission that, as classical beasts, we are ill-equipped to see underlying quantum reality. Any attempt we make to engage with it reduces it to a shallow classical projection of its full quantum richness.

Lev Vaidman of Tel Aviv University, Israel, like many other physicists, touts an alternative explanation. "I don't feel that I don't understand quantum mechanics," he says. But there is a high price to be paid for that understanding - admitting the existence of parallel universes.

In this picture, wave functions do not "collapse" to classical certainty every time you measure them; reality merely splits into as many parallel worlds as there are measurement possibilities. One of these carries you and the reality you live in away with it. "If you don't admit many-worlds, there is no way to have a coherent picture," says Vaidman.

Or, in the words of Feynman again, whether it is the Copenhagen interpretation or many-worlds you accept, "the 'paradox' is only a conflict between reality and your feeling of what reality ought to be".