I'm going to try to explain this in programming terms :)
There's a longstanding interpretation that the wave-particle duality is actually caused by (e.g. electrons) being particles riding invisible waves. This interpretation (Bohmian mechanics) has been largely ignored as "valid but uninteresting."
A key reason why this isn't historically that appealing is that the state of the "invisible waves" in the neighborhood of a single particle is mathematically dependent on the instantaneous state of every other particle in the universe.
If you assume the universe is a simulation, then the programmer would be a freshman computer science student who unnecessarily made stepping forward in time quadratic!
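To make the "quadratic" complaint concrete, here is a toy sketch (my own illustration in Python, not actual Bohmian dynamics) of a timestep where every particle is coupled to every other particle, which costs O(N^2) work per step:

    import numpy as np

    def step_nonlocal(positions, dt=1e-3):
        """One timestep where each particle feels all N-1 other particles."""
        n = len(positions)
        velocity = np.zeros(n)
        for i in range(n):
            for j in range(n):              # nested loop -> O(N^2) per step
                if i != j:
                    velocity[i] += np.sin(positions[j] - positions[i])
        return positions + dt * velocity

    x = np.random.rand(100)                 # 100 particles on a line
    x = step_nonlocal(x)

A droplet-style rule that only couples nearby particles could be bucketed into spatial bins and run in roughly O(N) per step instead.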
The oil-droplet experiments are an accidental existence proof that you can get behavior similar to quantum behavior in particle+wave systems without the quadratic global update rule.
Is it enough? There's still tons of open questions, and possibly (likely) yet another dead end.
My understanding of "natural" physics is that it is a relation between particle(s) and every other particle in the universe,
and our physics calculations, trying to understand and predict those natural events, wave away the relations deemed insignificant relative to the desired probability threshold?
A contrived but applicable example: "assume you are on a frictionless plane with zero wind resistance"
Or, in one of the most amazing results of applied physics in recent news:
"Fred Jansen, Rosetta's mission manager at ESA, said officials predicted a 70 to 75 percent probability of a successful landing by Philae before the mission's launch in 2004. But that number assumed the comet was a rounded body, not the oddly-shaped world found by Rosetta."
Is it instantaneous or is it the known or observable state propagated at the speed of light?
Perhaps unrelated, but they have already shown that entangled particles propagate state at the speed of light. So we probably won't have any Ender's Game ansible communication devices any time soon.
I feel as though this hypothesis was recently disproven - a measurement confirmed that the waves and particles were really the same thing, not one thing riding on the carrier wave of another.
There are at least four mathematical representations of QM, ascribed to Schrödinger, Heisenberg, Dirac, and Bohm respectively, which are all mathematically equivalent.
So you aren't really going to disprove only the Bohm picture, because it makes the exact same claims as the more common Schrödinger picture.
Is there a good resource out there for laymen to explain "quantum stuff" in a meaningful way?
When I read articles that are using words like "spooky" and "weird" as terms of art, and using Schrödinger's cat as a way to clarify a topic, I get nothing out of it. And I'd like to be able to explain to my mom what this stuff is.
I read this book in high-school and learned about as much as I did in 4 undergrad physics courses about quantum mechanics.
There's no mathematics in this book, so it doesn't teach you how to calculate anything (for this, a physics degree helps), but the principles are presented very nicely.
I second that, and afterwards, if you are keen, it can be good to look at the Feynman Lectures Vol. III, Chapter 5, where he talks about spin. He deliberately leaves spin out of QED to keep things simple. Also, if you're curious, there are the Feynman Lectures on Gravitation, where he tries to figure out quantum gravity and fails like everyone else. One thing I like about Feynman is that his talks are based on experiments: if you fire electrons at two slits, this happens (QED); if you send them past a magnet, that happens (Ch. 5). This is in contrast to most popular science books, which go on about spooky cats, or most academic treatments, which launch into a bunch of abstract maths without looking much at what's going on physically. I find it much easier to get my head around the real-experiment approach.
† The Nature article characterises the Many-Worlds Interpretation like this:
"In the many-worlds picture, the wavefunction governs the evolution of reality so profoundly that whenever a quantum measurement is made, the Universe splits into parallel copies."
The first of the links above explains that this is a mischaracterisation.
It depends on what you mean by layman. I found the book Quantum Computing Since Democritus by Scott Aaronson to be very easy to understand, but it does require some linear algebra and complex numbers. I don't think you can get a good overview of entanglement (where the madness lies) without that basic level.
It's very hard to ELI5 quantum mechanics. The only thing we can definitively and constructively say is that we have a mathematical system that describes the universe very well at certain time/energy/distance scales. Trying to ascribe meaningfulness to this math is an ongoing struggle.
As an aside, the terms "spooky" and "weird" are used to describe phenomena that "have no classical analogue." Physicists don't agree on what Schrödinger's cat means. I find that it muddles more than it clarifies.
In the early 20th century, math ordinarily used to describe waves was applied to very small systems of particles. This was both strange and extraordinarily successful at modeling systems (if only probabilistically). Applying physical meaning to the math has proven to be more or less intractable. Because the models are so successful in their domain, most physicists tend to brush aside ascribing meaning to the math.
There are a lot of examples in the book "In Search of Schrödinger's Cat" which can be explained in layman's terms. There's the double-slit experiment, then the radioactive decay rate, then the double polarization of light. That is what I can remember right now. But I remember the book has lots of other examples as well.
One way of understanding it, which I currently favour, is that by "observing" the result you yourself also end up in a superposition of states, one for each possible "observation". And since by observing it you've "amplified" the result, there is suddenly a large difference between those states. These states are then no longer (as heavily) "coherent", and so you will only see the effects from a small region of the states. Most quantum effects will have vanished, since those are a consequence of the "coherence" between those states. So from your perspective it looks like the wave function has "collapsed".
For instance, say you're performing some slit experiment. Directly after you launch the particle there is still a very broad spectrum of possible positions/states, which are all heavily correlated, since states that differ only by the position of one particle are in some sense "close". However, if the particle hits something, then suddenly there can be a very large number of particles that have different positions depending on where the initial particle hit, so the difference between those states increases immensely, causing them to become decoherent. Hence, from the perspective of the resulting states, there is only a very small region where it could have hit; anything else becomes extremely unlikely.
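Here's a toy numerical sketch of that idea (my own illustration, with made-up environment states): once a system qubit in superposition gets correlated with more and more environment particles, the off-diagonal "coherence" term of its reduced density matrix, which is what produces interference, is suppressed by the overlap of the environment states and shrinks with every extra correlated particle:

    import numpy as np

    a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)    # system starts in a superposition
    e0 = np.array([1.0, 0.0])                # environment state tied to |0>
    e1 = np.array([0.8, 0.6])                # state tied to |1>, not orthogonal

    for k in [0, 1, 5, 20]:
        overlap = np.dot(e0, e1) ** k        # <E0|E1> for k environment qubits
        coherence = a * np.conj(b) * overlap # off-diagonal of the reduced rho
        print(f"{k:2d} environment particles: |coherence| = {abs(coherence):.6f}")

No quantum effect is ever switched off here; the interference terms just become too small to notice once enough particles are involved.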
This interpretation has some rather interesting issues when you try to interpret what consciousness is, but it's the most consistent one I've found so far.
What if we don't have a human observe? What if all data is only ever interpreted by canines and bovines (dogs and cows)? Will the wave function still collapse?
How would you isolate an experiment from the potentially-wave-function-collapsing influence of human consciousness, and yet still produce a usable contribution to science?
I'm not sure why this was downvoted - it's a perfectly valid question.
There are serious philosophical issues with the idea of an observer, conscious or otherwise, as a useful concept.
Entanglement isn't really the issue, because it doesn't solve the problem. What does it mean to be entangled with a complex state in a state of some form of consciousness?
How much consciousness is needed to make a difference? Why should adult human consciousness be the benchmark, and not semi-consciousness, or distraction, or unconsciousness, or toddler-level pre-verbal awareness, or jellyfish consciousness, or LSD-induced hallucination?
If consciousness is necessary, why doesn't reality stop working when we fall asleep? Why do plants live in a consistent physical reality even though they don't know what a Hamiltonian is?
If physics had code smells, this idea of consciousness might not pass the sniff test.
Of course it's possible it's still the right answer. But if it is, it's interesting there's no useful explanation of anything in the concept yet. Nor is there any formalism to support it. (Obviously there's a formalism for wave function collapse. But so far as I know, there's nothing in the math that says "And these variables are where the observer works his/her/its magic.")
Really it's just something that might be true, maybe, because we have no idea what's going on and it's as good a guess as any other.
There is nothing special about human brains, whether they are conscious or not. Think of "observation" in this sense as "making your brain part of the experiment". The only way for your brain to know the results of the experiment is for photons to go in different directions depending on the result of the experiment (well, there are other ways, but usually it's by seeing).
The photon is in the state it's in because of the result, which means that, after the photon enters your eye, your brain is also in the state it's in because of the result. But you could say the same of a dog brain, or a cow brain, or a PC with a webcam for that matter. "Consciousness" has nothing to do with it.
Wave function collapse is not a global thing. It can collapse for one observer but not for another, if two observers are completely isolated. So even if some other human observes it, then the wave function does not necessarily collapse for you. So whether the observer is a human or a cat or a computer or even a rock does not matter.
I agree that this is a profound and perfectly valid question.
Sir Roger Penrose has said many times that our current models of physics do not include consciousness, and that this is a big issue. He and Hameroff are working on an interesting line of research that heretically tries to do just that: model consciousness, pushing against the current status quo https://en.m.wikipedia.org/wiki/Orch-OR
Here is an interview with Hameroff talking about why consciousness is not an epiphenomenon (as the experiments with paramecia show) and is far more than computation: http://youtu.be/YpUVot-4GPM
> Why does observation cause the wave function to collapse?
According to the Many-Worlds Interpretation, it doesn't. Both the thing being observed and the observer exist in many states simultaneously (a superposition of states). Before the observation, those states were independent; the act of observation causes the observer and observed to become entangled, so each state of the observer correlates to a single state of the thing observed. From the observer's point of view, there appears to have been a collapse; but the result of the "collapse" will be different for each of the observer's states.
(Caveat: I am not a physicist, and it's possible the above is not even wrong.)
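As a minimal toy of that picture (my own sketch; the "measurement" is modeled as a CNOT gate that copies the system's basis state into an observer memory qubit):

    import numpy as np

    plus = np.array([1, 1]) / np.sqrt(2)     # system: superposition of 0 and 1
    ready = np.array([1, 0])                 # observer memory: "ready" state
    before = np.kron(plus, ready)            # independent (product) state

    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    after = cnot @ before                    # (|0, saw 0> + |1, saw 1>) / sqrt(2)

    print(after)        # amplitude only on the two correlated branches

Neither branch ever disappears; each "copy" of the observer just sees one definite outcome.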
Observer is a poor choice of words. A single atom can cause a wave function to collapse. The next interaction with that atom then propagates that collapse down the chain. So your desk chair, being a complex system, operates just fine as an observer. *
Though, this being HN: think of it like the edge of the Bitcoin blockchain, where two new blocks can be in an undecided state, but over time the longest chain wins.
* Not that this a useful model, but it's much closer than the often repeated voodoo mysticism people spout.
> Why does observation cause the wave function to collapse?
It becomes much clearer (at least to me) once "observation" is replaced with "interaction", because the former is really not possible without at least a quantum of the latter.
> Why does observation cause the wave function to collapse?
It doesn't. Classical behavior is an emergent property of any large system of mutually entangled particles. For macroscopic systems classical behavior is a damn good approximation, but it's only an approximation. See:
I think it may ultimately turn out that the best explanation for quantum mechanics is simply the set of postulates we already have, minus the Born rule. It's well known that a quantum system continues to evolve unitarily (deterministically) via the propagation operator regardless of whether any subsystems have collapsed. How can you have a discontinuous subsystem within a larger continuous system? You can't.
The explanation for the appearance of collapse lies in the phenomenon of decoherence, which basically says that subsystems tend to quickly evolve into something resembling an eigenstate. This evolution must necessarily occur on an incredibly short timescale. It might be possible to design an experiment that would test the assumption that collapse is instantaneous.
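For concreteness, the "evolves unitarily via the propagation operator" claim is just this (a toy two-level Hamiltonian of my own choosing, hbar = 1): the state at time t is exp(-iHt) applied to psi, and the same input always gives the same output, with no stochastic step anywhere:

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[0.0, 1.0],
                  [1.0, 0.0]])                   # toy Hamiltonian
    psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state

    def evolve(psi, t):
        return expm(-1j * H * t) @ psi           # unitary, fully deterministic

    print(evolve(psi0, 0.5))                     # identical result every run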
I think the best definition of "collapse" is that it is the moment in time in which a particular system can no longer be described (to good approximation) as the direct product of two subsystems (see http://en.wikipedia.org/wiki/Separable_state). The concept of a "good approximation" is of course subjective, but it can always be objectively metricized (totally made that word up) by using some kind of error term.
Epigrammatically, collapse is not so much a physical process as it is a characterization of the capability to represent a quantum state in a specific mathematical form.
(Of course, this doesn't preclude you from categorizing physical processes as "collapse events"; it just means that collapse isn't a fundamental phenomenon so much as it is an emergent one. Kind of like quasiparticles.)
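One concrete way to turn that "good approximation" idea into a number (my own sketch): reshape a pure state of two subsystems into a matrix and take its singular values (a Schmidt decomposition); the weight outside the largest singular value measures how far the state is from a product state:

    import numpy as np

    def separability_error(psi, dim_a, dim_b):
        """0 for an exact product state; larger the more entangled the state."""
        s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
        return 1.0 - s[0] ** 2 / np.sum(s ** 2)

    product = np.kron([1, 0], [1, 1]) / np.sqrt(2)   # |0> (x) |+>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>) / sqrt(2)
    print(separability_error(product, 2, 2))         # ~0.0
    print(separability_error(bell, 2, 2))            # 0.5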
So what about the randomness? I think it's better to refer to it as unpredictability. The difference is subtle but crucial. True randomness (assuming it exists) is the result of absolute indeterminism. On the other hand, if eigenstate selection is merely "unpredictable", then that implies collapse is in fact a deterministic process (specifically e^(-iHt) applied to Ψ over some time interval that we've decided to call a "measurement"); however, we're unable to extract enough information from the environment to make exact predictions because we ourselves constitute the required missing information. In other words, the information necessary for absolute predictive capability is trapped in the subsystem constituting the measuring environment, and it becomes lost when that subsystem becomes entangled with the subsystem being measured. And there's not really any way to prevent that from occurring, because entanglement must occur in order to learn anything about a system.
This even applies classically. The only difference is that classical entanglement occurs between localized physical boundaries instead of between subspace boundaries in an abstract Hilbert space.
To somewhat reify this, assume (for the sake of argument) that a classical description of physics is enough to describe a human. Then perform a large MD simulation of all the atoms inside a physics lab, including those of a physicist. The evolution of this simulated system is provably deterministic. Yet the physicist appears to have free will, and it appears like he is deciding which measurements to perform on his environment. But he's just an arbitrary collection of atoms that we've labeled "human", and he obeys the same time-transformation rules that the unlabeled atoms in the system obey. Mathematically, it's simply impossible for him to predict everything that occurs within the virtual system -- not because of indeterminism -- but because he isn't so much "choosing" what to measure as he is "appearing to choose". There's a limit to the amount of information any system can obtain about itself (well, maybe there's some fractals that are exceptions, but generally speaking, it holds true.)
That said, experiment is always the ultimate arbiter of truth, and I wonder if there might yet be some clever way to tell whether our universe is simply unpredictable instead of random, despite the possibility that both potential mechanisms might impose the same limits on predictive capability (in fact, Colbeck and Renner recently proved that QM is already maximally predictive, independent of whatever underlying mechanism governs eigenstate selection -- see http://www.nature.com/ncomms/journal/v2/n8/abs/ncomms1416.ht...)
Great analysis. Is there even a theory of what causes randomness? Where does it come from? Why should we believe it is distinct from "unpredictability"? Even in random number generators, the game is about pulling from widely unpredictable sources to generate entropy - the word "random" is maybe a misnomer.
I've never believed in anything besides unpredictability in various scopes of systems. By scopes of systems, I mean, in some contexts of analysis it makes sense to deem a system temporarily closed to analyze certain parts of it. For example, the earth is not a closed system, but for some discussions and analysis it makes sense to simply treat it like one.
I don't know why it is so hard for this description - or paradigm - to proliferate to the masses and various pop writers. Writers so often are tying human consciousness to QM experimentation as if it were something special. The fact of the matter is: in each QM experiment the only things really interacting with the experiment are the atoms of the measurement apparatuses, sensors, and whatnot. In the case of the double slit experiment, we could have them "interpreted" automatically - and say, kill a cat if an interference pattern is created and not kill it if one is not made. Making the discussion about consciousness is a distraction from the core issues.
The exploration of what the wavefunction "means" in terms of the world around us has been one of the more interesting things to watch. When I took quantum physics in school (under the generic heading "Modern Physics"), psi and quantum mechanics were treated very much as just a mathematical formalism, unlike thermodynamics, which was actually visible and "real". Between the teleportation work and recent wavefunction work, it seems like a lot more layers of the universe are being revealed.
One question I have: if a pilot wave description of quantum mechanics was in fact the "right" one, would quantum computers be impossible? It's confusing because many articles claim that the various interpretations of QM yield the same predictions, and yet the irreducible randomness of the Copenhagen interpretation seems to be a prerequisite for quantum computing.
My own layman theory on this is that time has a wave-like nature to it, not the particles. The idea is to rewrite the wave equation solving for time using the Minkowski metric (space and time are related through relativity).
The results should be the same, but it is a different interpretation.
Someone more knowledgeable about solitons can chime in on this...
A friend of mine modeled soliton interactions. When one soliton passes through another, information can be exchanged such that colliding solitons contain bits of each other when they move apart. No matter how far apart these solitons get, the soliton "children" still "chat" with their "parents". One soliton can contain multiple elements of other solitons, all of them interacting at a distance. Their behavior can mimic "spooky action at a distance". Also, solitons can have wave like behavior or particle like behavior depending on how they are observed.
I am biased as a Scott A. fan and as somebody who knows essentially nothing about QM. However I see no way to conclude that Anderson and Brady even know what they're talking about, much less that they win the debate on the linked Scott Aaronson page. I just read through it again to confirm my recollection (skipping the Motl and Sidles posts fwiw).
The main reason I felt Scott A. (and company) lost the debate is because he could not point to a single experiment that actually uses qubits to do a calculation above a very small threshold. There is much written about experiments that seem to show entanglement of multi qubit systems, but none of them actually do any calculations with the qubits.
Brady and Anderson's point is that it's not a qubit until you can calculate with it. And their theory suggests it will get exponentially harder for each qubit above 4 (I think), which throws out all of the interesting quantum algorithms.
I also felt Brady and Anderson silenced every objection. After one or more of their rebuttals, they were no longer challenged. The most recent post about Feynman is also interesting.