
I don't really like them either. They're written well enough that one can basically follow the sentences, and the tone suggests they really understand something I'm not getting. But I looked pretty deeply into these questions. I read the original papers of Einstein, Podolsky, and Rosen, of Bohr, of Born. I read much of John Bell's original work, and I'm pretty sure that there are results being assumed here (and elsewhere) that haven't really been proven, or even originally stated.

For example, it's commonly asserted that Bell's theorem rules out any deterministic quantum mechanical description of reality with local hidden variables. But this was, to my knowledge, never shown explicitly by Bell -- he simply ruled out a particular class of local hidden variable theories. There exist theories, consistent with experiment, which are local and deterministic: for what it's worth, a toy example is just a classical computer simulating everything in our universe we've yet seen.
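To make the "particular class" point concrete, here's a quick numeric check (my sketch, not anything from Bell's papers): the CHSH combination of correlations is bounded by 2 for the local hidden-variable models Bell actually considered, while the quantum singlet-state prediction E(a,b) = -cos(a-b) reaches 2√2 at the standard analyzer angles.

```python
import math

# Quantum correlation for the singlet state at analyzer angles a, b (radians).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any model in Bell's class satisfies |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2*sqrt(2) ~ 2.828, violating the local bound of 2
```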

We don't know where the Born probabilities come from, but we also don't really understand the implications of many body entanglement. We don't even have a consistent definition of entanglement for three or more particles. We don't know, we haven't been able to calculate, but I have reason to suspect, that measurement consistent with Born probabilities could be entirely explained by the deterministic axioms already part of Quantum Mechanics. We haven't really gone as far as we could go with them. And already people are treating Many-Worlds as axiomatic. I don't really buy it.



>for what it's worth, a toy example is just a classical computer simulating everything in our universe we've yet seen.

I think the state of the classical computer qualifies as a nonlocal hidden variable.

To use the language of formal logic, the classical computer is a model for quantum theory, i.e. (classical computer) |= QM. Within QM, the computer qualifies as a non-local hidden variable, even though within the classical theory the computer is embedded in, it is local.

Another toy example is Bohmian mechanics on configuration space: the theory is just a local PDE + local particle. But that's non-local in physical space.

>We don't even have a consistent definition of entanglement for three or more particles. We don't know, we haven't been able to calculate, but I have reason to suspect, that measurement consistent with Born probabilities could be entirely explained by the deterministic axioms already part of Quantum Mechanics.

Decoherence makes sense on the macroscale (1000+ particles), although it's true that 40 particles is iffy. Different classical states (i.e., experimental apparatus has light on vs light off) are separated by a distance sqrt(number of particles) in configuration space, and don't interact.
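To put a toy number on that sqrt(N) separation (my own back-of-envelope, not the parent's calculation): if each branch is a product of N single-particle Gaussians and the branches differ by half a width per coordinate, the overlap decays exponentially in N -- noticeable at 40 particles, utterly negligible at 1000+.

```python
import math

def branch_overlap(n_particles, delta, sigma=1.0):
    """Overlap of two product-Gaussian branches whose coordinates each
    differ by delta (wavepacket width sigma); the configuration-space
    separation is delta * sqrt(n_particles)."""
    per_particle = math.exp(-delta**2 / (8 * sigma**2))
    return per_particle ** n_particles

# Per-particle displacement of half a width:
print(branch_overlap(40, 0.5))    # ~0.29 -- branches still overlap noticeably
print(branch_overlap(1000, 0.5))  # ~3e-14 -- effectively non-interacting
```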

As for explaining measurement with Born probabilities, that's reasonable. My co-conspirators and I currently have a physical, macroscale model where we show this to be true (no citation yet, but I'd be happy to explain more via email). But you still need some ontology.

All deterministic QM can show is that the probabilities work out correctly; e.g., the Born probability of (measurement 1 says spin up, measurement 2 says spin down) = 0.

You still need a way to actually pick a configuration based on that probability distribution. The universe as we (you, me, even pg) know it is a point in configuration space, not a wavefunction. I'm happy with both MW and Bohm for that purpose.
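That "pick a configuration" step can be illustrated with a toy sampler (my sketch; the grid, the double-lump wavefunction, and the random draw are all just illustrative assumptions): deterministic evolution hands you |psi|^2, but an extra ingredient still has to select the one point in configuration space we actually see.

```python
import math, random

# Toy 1-D wavefunction on a grid: a superposition of two Gaussian lumps.
xs = [i * 0.1 - 10 for i in range(201)]
psi = [math.exp(-(x - 3)**2) + math.exp(-(x + 3)**2) for x in xs]

# Born weights: |psi|^2, normalized on the grid.
w = [p * p for p in psi]
total = sum(w)
probs = [v / total for v in w]

# The extra ingredient: an actual draw picking one configuration.
x_measured = random.choices(xs, weights=probs, k=1)[0]
print(x_measured)  # lands near +3 or -3, per the Born distribution
```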


I concede that by this definition of locality, a simulator doesn't quite count. But it is a physical model consistent both with our quantum experiments and with special relativity, and it's local in something. And if it's local physics in something we're talking about, why is it 3-space that we're treating as the real embedding?

> Decoherence makes sense on the macroscale (1000+ particles), although it's true that 40 particles is iffy. Different classical states (i.e., experimental apparatus has light on vs light off) are separated by a distance sqrt(number of particles) in configuration space, and don't interact.

> As for explaining measurement with Born probabilities, that's reasonable. My co-conspirators and I currently have a physical, macroscale model where we show this to be true (no citation yet, but I'd be happy to explain more via email). But you still need some ontology.

Please do. I looked for a while at decoherence and other approaches, and the mechanism behind the processes kept fading from view. It was like thermodynamics, where we can say something about the equilibrium states eventually reached, but have a hard time explaining the processes by which a system reaches one state or another, and along those processes the reasoning in other parts of physics breaks down. Like microscopic <-> macroscopic reversibility.

But the people in nonequilibrium statistical mechanics have made a lot of progress in reconciling microscopic reversibility with macroscopic apparent irreversibility. Is such a thing possible for quantum measurement, or more generally the quantum-classical transition, as well? Might there be a reversible description -- Schrodinger's all the way down, so to speak?

Finally, I'm not so sure that MW, decoherence, Bohmian mechanics, etc. are truly equivalent. In other words, I expect that one might start getting different answers.

And there's reason to believe that they're incomplete descriptions. If you take one measurement, you'll notice it takes time. And the microphysics of QM says that its time evolution should be unitary. So if we stop the clock halfway through a measurement, what do we find? Or rather, what would our laws tell us we'd find?

The more popular interpretations seem to tell me 'don't ask this question.' But it seems there's something important hidden here.


>Might there be a reversible description -- Schrodinger's all the way down, so to speak?

Basically, what I've got is a model of a particle interacting with a measurement apparatus (a BEC, to make the calculations simple). You can reduce the many-body Schrodinger equation to a mean-field model on a reduced configuration space: (particle coordinate) x (BEC coordinate). So yes, it is Schrodinger (actually Madelung) all the way down.

Measurements (of position) correspond to the particle making a splash in the BEC (1). Splashes at different locations correspond to different measured outcomes. Once the difference between splash sizes is macroscopic ((number of particles) x (splash profile) = O(1)), the measurement is complete.

By "complete", I mean that if you pick a random BEC configuration (N BEC particle locations), you can determine with statistical significance (i.e., 99.9999% confidence) where the splash is.

Before this occurs, you've just got two overlapping probability distributions in configuration space. Picking random BEC configurations won't tell you (statistically significantly) the particle location.

The process is continuous, but it doesn't look that way to us since it is also very fast, i.e. t = O(1/number of particles in observation apparatus).
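A crude Monte Carlo cartoon of that "complete" criterion (entirely my own sketch -- the uniform background, splash width, and fraction eps are invented stand-ins, not the model's actual numbers): draw N particle positions from a background plus a narrow splash, and ask how decisively a likelihood ratio separates "splash at +1" from "splash at -1". The evidence gap grows roughly linearly in N, which is the fast, continuous completion described above.

```python
import math, random

random.seed(0)

def sample_bec(n, splash, eps=0.1):
    """n particle positions: a uniform background on [-2, 2], with a
    fraction eps pulled into a narrow splash centered at `splash`."""
    pts = []
    for _ in range(n):
        if random.random() < eps:
            pts.append(random.gauss(splash, 0.1))
        else:
            pts.append(random.uniform(-2, 2))
    return pts

def log_lik(pts, splash, eps=0.1):
    """Log-likelihood of the points under 'splash at this location'."""
    total = 0.0
    for x in pts:
        background = 0.25  # uniform density on [-2, 2]
        spike = math.exp(-(x - splash) ** 2 / 0.02) / math.sqrt(0.02 * math.pi)
        total += math.log((1 - eps) * background + eps * spike)
    return total

# True splash at +1; the log-evidence gap between the two candidate
# splash sites grows with the number of BEC particles sampled.
gaps = {}
for n in (40, 1000):
    pts = sample_bec(n, +1.0)
    gaps[n] = log_lik(pts, +1.0) - log_lik(pts, -1.0)
    print(n, gaps[n])
```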


Yes, this is exactly the sort of example I was looking for! Do you have any further details? Maybe I should work it out myself, to see if our results match up.

I suspect that there are all sorts of examples like this, accessible in theory and in models of measurement in experiment, where things basically match up with Born probabilities. Though I expect that in some degenerate cases they won't -- just as, as the fluctuation theorem in nonequilibrium statistical mechanics shows us, sometimes the second law is inaccurate, because whatever we call 'entropy' decreases.


The way I understand MWI it basically is a Schroedinger-all-the-way-down approach, and measurement just becomes the act of getting your measurement apparatus (and hence your own brain and mind) entangled with the system. Decoherence, likewise, is just a matter of getting your system entangled with its environment.

One of my lecturers used to say that the main problem with MWI is that it looked like all the maths was done on the back of a napkin. I'm inclined to agree. It's very easy to explain Many Worlds with the example of a single particle and a single observer, but once you start throwing in a large number of mutually interacting particles, several observers, and take note of the fact that the observer him/herself is made up of multiple interacting particles, it suddenly becomes horribly complicated. As far as I know, nobody has ever done a many-worlds treatment beyond the simplest possible examples.


I'd agree that "classical computer simulating the universe" is one theory which is local, deterministic and consistent with experiment, but are there really any others? Others which don't fall into the "grand conspiracy where everything is orchestrated from behind the scenes" category?


> for what it's worth, a toy example is just a classical computer simulating everything in our universe we've yet seen.

I suddenly feel the need for much, much more RAM...



