I'm fully willing to believe I just don't "get it", but I took a pretty deep dive into quantum computing and the underlying mechanics and I kind of got the sense (with QC) that nobody really knows what they are talking about. I got this feeling so strongly that I stopped studying the topic altogether.
I'm probably way off base and I'm probably missing some insights that I could get by going to school or something, but that was just my experience with the subject.
"I think I can safely say that nobody understands quantum mechanics."
--Richard Feynman
You're far from alone. Quantum physics is tricky because it frequently doesn't agree with our physical intuition. Humans are used to dealing with macroscopic objects. They surround us for our whole lives. Matter behaves in surprisingly different ways at the level of single quanta. Seemingly impossible things flop out of the math and then clever experiments show that reality is consistent with the math, but we struggle to reach the point where that reality feels correct. When we try to translate the math into human language, we often wind up overloading words and concepts in a way that can be misleading or even false.
Perhaps we just haven't reached the point where things are sufficiently well explained and simplified, but it may be that quantum physics will always seem strange and counter-intuitive.
> Quantum physics is tricky because it frequently doesn't agree with our physical intuition.
Quantum physics is tricky for two separate reasons.
(i) The mathematical theory (Schrödinger equation, wave function, operators, probabilities) is solid and well-defined, but may feel unintuitive, as you say.
(ii) But quantum mechanics is also an incomplete theory. Even if you learn to be at peace with the unintuitive aspects of the mathematical theory, the measurement problem remains an unsolved problem.
"The Schrödinger equation describes quantum systems but does not describe their measurement."
"Quantum theory offers no dynamical description of the "collapse" of the wave function"
Feynman, a famous man from an older era who tried to inspire, remind, and spur people...
> macroscopic objects
It's not about scale at all though. It's just that small systems tend to be observed with this other, specific property that we associate with causing "quantum" like effects. Not only do those effects happen at mesoscopic scale but aside from gravity, quantum theory already can be and is used to describe things on large scales too. Classical computers and desks are still "quantum" systems. Recently theory and experiments have developed to connect with gravity in many ways. I'm more confused when people say something is mysterious. They're usually referring to apparent randomness but I think even that is explained already with partitions or even just wave math (complementarity).
> I’m probably way off base and I’m probably missing some insights that I could get by going to school
A school would usually teach the "shut up (about philosophy) and calculate" approach. These philosophical problems about the meaning of quantum mechanics have been with us for 100 years, and mainstream physics sees them as too hard or even intractable, and thus as a waste of time.
These debates over the interpretation of Quantum Mechanics (i.e. what ultimately happens when a “measurement” takes place) are important but don’t bear on the effectiveness of quantum computing. Regardless of your favorite interpretation (almost) everyone agrees that quantum computers should work and be able to do things classical computers cannot.
> [...] and the underlying mechanics and I kind of got the sense (with QC) that nobody really knows what they are talking about.
The math is fairly well known, and people can successfully apply it, as evidenced by, e.g., modern CPUs, GPUs, and RAM actually working, and by lots of other marvels of modern engineering that require an understanding of quantum mechanics to design and build.
However, it still doesn't really address the core question of when the collapse actually occurs. All it really seems to add is that the environment is an "observer" and that decoherence actually causes the collapse.
The interpretations of what the math says is happening are varied and sometimes contradictory.
We can predict what's going to happen extremely well, we just can't tell the story of what's happening. And there's been a century of trying to avoid the weirdness and failing. The problem might just be that our brains evolved in a world that behaves so differently that we can't understand it.
Yes but also absolutely not. The evolution of the wavefunction when nobody is looking is unitary, which among other things means it is time-reversible. That math works extremely well and predicts the correct outcome.
When we are measuring a quantum system, the probability distribution of the measurement outcome is described by the Born rule, the amplitude of the wavefunction squared, and the collapse postulate tells us that after the measurement the wave function will be in the measured state, which is a non-unitary and non-time-reversible process. That math works extremely well and predicts the correct outcome.
But - really big but - what is a measurement device but a huge quantum system, and what is a measurement but a quantum system, a measurement device, and an environment undergoing time evolution? So both descriptions should apply, unitary time evolution and wave function collapse, but that cannot be the case because they are incompatible: one is unitary, the other is not. The mathematical description is inconsistent.
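The contrast between the two kinds of evolution can be sketched with a toy qubit in NumPy (my illustration, not from the thread; the Hadamard gate stands in for any unitary):

```python
import numpy as np

# A qubit in an equal superposition of |0> and |1>.
psi = np.array([1, 1j]) / np.sqrt(2)

# Unitary evolution (here a Hadamard gate): norm-preserving and reversible.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_evolved = H @ psi
norm = np.linalg.norm(psi_evolved)        # still 1: probability is conserved
psi_back = H.conj().T @ psi_evolved       # applying U-dagger undoes U exactly

# Measurement: the Born rule gives outcome probabilities |amplitude|^2 ...
probs = np.abs(psi) ** 2                  # [0.5, 0.5]
# ... and the collapse postulate replaces psi with the measured basis state.
outcome = np.random.default_rng(0).choice(2, p=probs)
psi_collapsed = np.eye(2, dtype=complex)[outcome]
# No single unitary maps psi to psi_collapsed on every run:
# collapse, unlike the evolution above, is not time-reversible.
```

Both halves "work extremely well" in the sense above; the inconsistency is that the second step is not a special case of the first.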
I often observe that humans are wired to create causal stories, whether we intend to or not, even in circumstances where we know the story is false.
A great example involves flipping a coin. Even people who know it's basically an independent 50/50 chance every time get drawn into thinking about "hot streaks" and "overdue for the opposite."
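The coin example is easy to check numerically (a quick sketch; the streak length of 5 is arbitrary):

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Frequency of heads overall, and immediately after a run of 5 heads.
after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                if all(flips[i:i + 5])]
p_overall = sum(flips) / len(flips)
p_after = sum(after_streak) / len(after_streak)

# Both hover around 0.5: after a hot streak the coin is not "due" for tails.
print(round(p_overall, 3), round(p_after, 3))
```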
It's arguably a superpower that has given us lots of agriculture and tools and technology and culture, but like hunger and obesity we can't just turn it off when it gets maladaptive.
Humans are very good at pattern matching and explanation, and that's what's given us success, but false-positive matches are sometimes the result and need to be corrected down a bit.
Funny, I looked into quantum computing and came away knowing pretty well how to use a future quantum computer. The math is pretty straightforward and useful. Now, getting an actual quantum computer with error correction that is scalable... that is still elusive.
Nevertheless, commercial quantum computers exist and do exactly what scientists predicted they would do.
Do I get this right? Wave function collapse due to measurements is not real, the wave function evolves unitarily all the time. But as quantum states get amplified into the macroscopic world, superposition states are somehow amplified asymmetrically which makes it look like wavefunction collapse.
The wave function is still symmetric, but it takes on a bimodal distribution, with very little overlap. For any given event, it will be affected only by the half of the distribution that it's in. The other half has basically zero effect. The further time evolves, the smaller that effect becomes -- as in, the odds of an experiment demonstrating it quickly go towards 1 in 10^googol^googol.
You can round that down to exactly zero and call it "collapse". Or you can keep thinking about the entirety of the wave function, and call it a "multiverse". That rounding is technically invalid, but it simplifies the conceptualization (and the math) to a massive, massive degree without affecting the outcome in any pragmatically measurable way.
(One more caveat: "symmetry" implies we're talking about a wave function with a 50-50 superposition. That's not a requirement, but it simplifies an already complex explanation.)
But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply that there is an asymmetry, one half of the state gets imprinted, the other half neglected? This however also raises the question about the basis, what is a superposition and what is not depends on the choice of basis. Is there a special basis just as pointer states are somehow special?
Indeed, as you say, Decoherence explains why certain bases are special: when a system is in a pointer basis state, it does not continue entangling with environment (or, at least, does so minimally). When a spinning particle enters a Stern-Gerlach apparatus oriented in z-direction, spin-z is the pointer basis of the system during its time in the apparatus. A spin-up or spin-down particle does not entangle with the environment, but spin +x state would quickly entangle with environment, placing environment in a superposition and "branching" the total state vector of all the stuff in the universe.
Quantum Darwinism is just a refinement of this picture in which the "environment" interacting with the system is itself modeled as a series of fragments (i.e. all the different photons that bounce off an object). It turns out that the information about which pointer basis state the system is in (spin up or spin down) is redundantly encoded in each of these fragments. Hence, intercepting one photon that interacted with the system and reveals "spin-up" (because the particle is in the upper path) agrees with the other photons that also bounced off the object.
BUT, of course, due to linearity of unitary time evolution, there is another "branch" in which spin-down was the outcome of the measurement and everyone agrees on spin-down. This is exactly the Everett picture.
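The entangling step above can be made concrete with a two-qubit toy model (my sketch, not the thread's notation): once the system entangles with even a single "environment" qubit, tracing the environment out wipes out the off-diagonal (interference) terms of the system's density matrix.

```python
import numpy as np

# System qubit in a superposition (spin +x, say); environment qubit in |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
env0 = np.array([1.0, 0.0])
state = np.kron(plus, env0)                      # system (x) environment

# Interaction that records the system's basis state in the environment:
# a CNOT with the system as control -- a toy "measurement" coupling.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ state                         # (|00> + |11>) / sqrt(2)

def reduced_system(vec):
    """Density matrix of the system after tracing out the environment."""
    rho = np.outer(vec, vec.conj()).reshape(2, 2, 2, 2)  # [s, e, s', e']
    return np.einsum('aibi->ab', rho)            # sum over e == e'

print(reduced_system(state))      # off-diagonals 0.5: coherent superposition
print(reduced_system(entangled))  # off-diagonals 0: interference is gone
```

Locally the superposition looks "collapsed" into a classical mixture, even though the total state is still a pure superposition across both branches, which is the Everett reading described above.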
I feel that it really just gives an explanation of decoherence, but doesn't offer any testable hypothesis for Darwinian pruning and collapse to pointer states.
It doesn’t. Decoherence is the technical step in the Everett picture defining what a “classical branch” even is and explaining how the state vector branches. Every claim that “Decoherence” somehow offers a distinct interpretation to Everett is pure confusion.
The article asks the same question in the last part, wondering whether it's just randomly selected. MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment. The math never says entanglement destroys superposition beyond a certain point of complexity (many different entangled systems forming the environment).
The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.
> MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment.
Well, duh. It's not like classical objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointed enough question. Just like shorting the perfect battery (with zero internal resistance) with a perfect wire (with zero resistance): this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.
Random is a very interesting concept. In relation to nature we seem to use "random" to mean anything we can't, or currently don't know how to, model.
To call something random doesn't mean it's impossible to model; in fact all sorts of natural facts seemed random one day before being covered by a model. One very relatable example is the motion of stars in the night sky, which seemed random for ages, until the Copernican revolution.
The fact that we have access to a random() function in programming seems to trip up many people. random() is a particular model implementation of randomness, but stuff in nature isn't random().
My point is, using "just random" to do work in any scientific explanation is a crutch.
In science randomness is usually used to abstract over a large number of possible paths that result in some outcome without having to reason individually about any specific path or all such paths.
It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model; a simple example of this would be chess. It's an entirely deterministic game with perfect information that is fully understood, but nevertheless all the best chess engines model positions probabilistically and use randomness as part of their search.
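The random()-vs-deterministic point can be illustrated with a textbook linear congruential generator (sketched here with the widely used Numerical Recipes constants; the seed and sample size are arbitrary): the generator is completely deterministic, yet modelling its output as uniform randomness works fine.

```python
# A linear congruential generator: x -> (a*x + c) mod m. Completely
# deterministic, but its output is well modelled as uniform randomness --
# which is all that library random() functions are.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)     # scale into [0, 1)
    return out

values = lcg(seed=1, n=100_000)
mean = sum(values) / len(values)
print(round(mean, 2))  # near 0.5, though nothing non-deterministic happened
```

Rerunning with the same seed reproduces the sequence exactly, which is the sense in which "random" here is a modelling choice rather than a claim about the underlying process.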
There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.
The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".
And I would agree, in the latter case it is a crutch. A postulate that something gets decided by no mechanisms whatsoever (randomness obeying a distribution still leaves the unexplained "choice").
It is remarkable that people still suggest the latter, when the theory, in both formalism and experiment, doesn't require a physical choice at all (even if we experience a choice, that experience is explained without the universe making a choice).
It is not incomplete to say that something does not require explanation, nor is it saying it's "magic". It is a cost that your model might incur, that's it.
In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".
A model is incomplete if it doesn't explain something.
That doesn't make a model wrong. All models we have are partial explanations.
But that doesn't make it rational to claim that an incomplete model is complete. Or to treat unexplained specifics as inherently "just so", without cause or reason (i.e. magic), and we must just accept them as unexplainable instead of pursuing them with further inquiry.
> A model is incomplete if it doesn't explain something.
I've just explained that this is not strictly true. I don't know what else to say. Brute contingencies, by definition, do not require explanation. I then gave you a paper where scientists largely believe in brute contingencies.
I think if you want to know more you can look into this. Just look for topics about brute facts and brute contingencies.
If you want to deny that brute contingencies are possible, by all means. That's a totally valid view. Just understand that it's probably not the majority view among scientists and that you aren't necessarily "right" (just as those who hold to brute contingencies aren't necessarily "right").
Constraint without an actual constraint? I am not denying it, it denies itself. It isn’t coherent.
Things are what they are because of constraints. Which is more general, and assumes less, than appeals to causes.
Constraints need not be prior. They can be simultaneous, i.e. co-constraints. And they can be internal to the whole, i.e. all of reality can be a co-constrained structure without external constraint.
An ultimate law of conservation is a strong candidate for a self-constrained reality. All versions of forms existing that neither locally create nor destroy. And since all possible forms within that exist, no choices made universally and there are no conserving forms it excludes. A coherent, infinite, unique, zero information structure. (Uniqueness is inherent to a zero information structure. Non-uniqueness necessitates choice.)
But claiming that some things just are, with no structural necessity, is an appeal to magic. A specific, with no actual constraint matching the specificity isn’t coherent.
You don’t get something for nothing. There is no outside of reality to provide that.
Any “outside” just means that total reality was not being included in the analysis.
I am not saying we can practically figure everything out. Or that there may not be questions, that given the limited resources/laws of our universe may not be answerable from our position, even theoretically. There may be questions we can’t answer. But nothing specific “appears” with some magical independence from the rest of reality.
That is the non-tautology of "it just is".
It would also make reality as a whole irrational. Not even a structure that obeys a conservation law. Because it would have specifics that had no reason.
Look, I really don't care to explain this. You can reject brute contingencies, as I said. If you want to do so, great. Just don't pretend that it's indefensible, your view doesn't even align with the majority of scientists.
None of your conclusions actually follow from this, which you would be welcome to explore on your own. You can learn why conservation can be held as true while also allowing for brute contingencies. As the leading cosmologists I've cited do.
No, it's not magic. You just don't know what you're talking about. Only you can fix that.
Thanks for your patience. I wrote way too much, and have re-read what you wrote and the article.
> It is not incomplete to say that something does not require explanation
> Values of physical constants of nature
> the most popular choice was that the constants are considered brute facts and thus require no exotic explanation
So yes, I deny the coherence of the concept of "brute facts".
If something is determined, something determined it. Some mechanism, constraint, context, structure, ... Perhaps we don't have the right word or connotations, but something.
A specific from a choice is a specific relation. That relation exists, as exemplified by our encountering the specific. Our experience of coming across the specific is not extricable from that specific's consistent connection to the rest of reality. That consistency has some basis, or there would be inconsistency.
A "maintaining" mechanism for an arbitrary consistency doesn't work, because that just pushes the choice of the specific that is maintained into the maintainer, which makes it more than a maintainer.
I can believe in things we will never be able to explain, as a result of observability limitations imposed on us by local physics. Eternal ignorance for any reason is always a practical possibility.
I can believe in undetermined things, which appear with each possibility, where we only experience one, because in the product of possibilities each plays out separately.
That would be the closest I could come to a "brute fact". Because it is in fact completely determined. The specific was not uniquely chosen, because the specific is not unique. Information is conserved, no explanation of each specific is needed. Even though each specific will behave as unique, across each possibility respectively, because differing specifics interact with a disjoint relation. The disjoint relation is the operating condition creating a localization of choice.
People invent ways to explain away persistent ignorance instead of accepting it, like a fractal attractor, over and over: the psychological need to resolve the dissonance when encountering challenges to investigation that are potentially insurmountable. And then some "way" of sweeping away the lack of explanation gets translated into a proposed lack of reasons, and given a name and connotations. But never an explanation or reason for itself. It is always faith based. The existence or principle of brute facts must remain a meta-brute fact itself. All untestable.
Scientists can "believe" that is a valid viewpoint. But inherently they cannot ever demonstrate any evidence for it.
The same reasoning, with different connotations and contexts, is rejected over and over by scientists. Mystical or religious connotations doom those different "versions". But stated in a sciency way, the same situation becomes palatable to some or many. But it doesn't become more coherent by virtue of being the "physics" version of "explanation" by acceptance of non-explanation.
> So yes, I deny the coherence of the concept of "brute facts".
Cool, that is fine. I deny lots of things as well. It's a position you can hold.
> If something is determined, something determined it.
That's fine but you'll likely find yourself in an infinite regress. That's a cost you'll have to take on under your theory.
> People invent ways to explain away our ignorance of the reasons behind things, instead of accepting the reality of ignorance, almost like an attractor fractal pattern, over and over.
That's not what's happening here. These concepts are pretty rigorously discussed and debated, it's certainly not a "cop out" - it's a metaphysical cost to your world view that you have to justify.
> Scientists can "believe" that is a valid viewpoint. But inherently cannot ever demonstrate any evidence for it.
You've already said that you don't believe all things can be proven via evidence, so that's fine.
But it's incorrect to say that there is no evidence for the position. There are many arguments to support the view of brute facts or brute contingencies. One example is that it seems that not accepting them would lead to infinite regress, which many people have reason to reject as well. These are well-evidenced positions; that is why so many scientists believe in them.
This has nothing to do with religion or mysticism. There is nothing about this that requires "magic". Many of our most advanced cosmological models support this view. You are just not aware of this, and so it sounds like magic, but it isn't. If you think it is then I would just suggest that you learn more about it, there are many scientists and philosophers writing on the topic and I'm sure quite a few youtube videos on the topic.
Zero information constraints: specifics only as fully determined, full coverage of undetermined specifics, conservation of information. These axioms, unlike most, impose a lack of external information not just as a desirable property, but harness it as a tautological universal constraint. Unlike most axioms, which are imposed information themselves.
The "constraint" that a complete description of reality doesn't require external information, isn't a brute contingent, it is a tautology. One that can be leveraged as an axiom we get for free.
It has many forms. One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way. Transforms must be reversible. We now have the necessity for a law of conservation as a "for-free" requirement, as a result of no external information/interaction.
Local zero information constraints:
No specific exists, except those that are completely determined. Anything else would require external information. This is a law of fully determined intersection.
Anything not completely specified, must exist in all its disjoint alternatives. This is a law of fully exhausted union.
Think of the exhaustive superpositions (unions) over all possible conserving interactions (intersections) in quantum mechanics. A real "local physics" example of this principle.
Cancellation is caused by conservation. Duals that can be generated must be reducible. And it is cancellation of duals that create the non-trivial distributions that superposition and entanglement produce, out of otherwise a neutral exhaustion of possibilities. Instead of noise or uniformity, we get structure.
This all comes from "no external information or interaction".
It turns out, that tautology is far from a trivial constraint. I believe there will only be one structure that will meet that requirement. And its uniqueness will be another manifestation of no external information, no external choice. Uniqueness doesn't require choice.
In fact it is a very active constraint. Try to come up with a form in which everything is either determined, or exhaustively covered, and always locally conserved (i.e. all transforms are fully and exactly reversible). It will be a challenge! Exactly what we want. But you can fit a lot of our current physics in as consistent pieces. Like quantum mechanics. And historically, we have understood the universe better every time we have generalized or unified laws of conservation.
Superposition is simply conservation of information across disjoint conserving intersections. It doesn't collapse, because that would require external or created or "just is" information. Which besides being incoherent (in my opinion), would throw away the only "free axioms" we have as an explanation for why any structure exists at all. Conservation, closure, uniqueness.
I'm confused because you seem to be using the term tautology totally incorrectly. Your post is very confusing for this reason, because you're very clearly just appealing to a brute contingent fact, if not now multiple brute contingent facts.
edit: Okay, I think I am sort of getting what you're saying about tautologies but it's wrong. Either way, I don't think it matters much. You can just deny brute facts, I have no problem with that. I'm just saying you shouldn't assert that brute facts don't exist as if that's the standard position.
Any theory or model of reality, must take into account all of reality. It cannot depend on, or interact with, or export anything to, anything external. As that would not be a model of reality.
Yes, but nothing else that you've said follows from that. For example,
> One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way.
This is not a tautology, it is a metaphysical claim.
No, a model of reality cannot import something from anywhere else. Whatever is within reality, can only be determined by reality.
Nor can anything in reality, be exported outside of reality.
Reality is the one thing, the only thing, that cannot depend on anything undetermined or unchosen by itself.
The fact that reality must account for both itself, and any of its specifics, with no other domain to draw from, is a higher level of demand than for any other theory. That demand is a hard and unique constraint. A tautological constraint that is therefore usable as an axiom.
I mean, these are all just metaphysical claims. It also doesn't seem to address brute facts, which would be within reality, so it seems sort of pointless. It also doesn't seem to address infinite regresses.
Even if I grant your "axiom", which is just that "reality exclusively contains reality", nothing interesting follows from that for this conversation.
If there is only one such structure, if it is unique, then the question of its existence goes away. What would existence mean?
We would just know there was a unique self-consistent structure. And that anything within that structure would experience its own existence.
Existence would then mean, part of the unique self-consistent, zero-information, independent of any externality, structure. Any self-aware artifact in that structure will experience what we call as existence.
An existence that as a result of a tautological structure, not a result of any external composition.
I'm not sure that's true. Randomness has a well-defined meaning for me: unable to be computed by a finite program. The vast majority of real numbers are thus composed of truly random digits. Suppose the universe has a constant that is a real number. The overwhelmingly vast majority of real numbers are non-computable and cannot be described by a finite description of any kind. Thus, if the universe were simply sampling numbers from this real constant (or simply the answer to the math is this real constant as it is undergoing some dynamics), then the numbers would appear random because the true underlying constant is non-computable, and thus appears random.
There is no possible finite way to determine whether this is the case or not.
Highly recommend looking at Jacob Barandes’ formulation of quantum mechanics as non-Markovian stochastic processes. It was the first introduction to quantum mechanics I could actually follow.
In a few sentences: the evolution of a physical system (quantum or classical) can very successfully be modeled as a stochastic process, and ...
1. the state of the system is a real-valued "vector" (possibly a vector with continuous indices), or to put it another way, a "point" in state space.
2. system evolution is described by a real-valued "matrix" ("matrix" in quotes because it too may have continuous indices), defined by the laws of physics as they apply to the system.
3. the evolution of the system is modeled by repeatedly applying the matrix to the vector, possibly in infinitesimal steps.
The major discovery Jacob made is that, historically, folks working on stochastic processes had restricted themselves to studying "Markovian" stochastic processes, where the transition matrix has specific mathematical properties, and this fails to properly model QM.
Jacob removes the constraint that the matrix obey Markovian conditions and lands us in an area of maths that's woefully underexplored: non-Markovian stochastic processes.
The net result though: you can model quantum mechanics with simple real-valued probabilities and do away entirely with the effing complex numbers.
The whole thing is way more intuitive than the traditional complex number based approach.
Jacob also apparently formally demonstrates that his approach is equivalent to the traditional approach.
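A sketch of the objects in steps 1-3 (a generic toy chain in Python; note this one is Markovian, so it is precisely the restricted case Barandes generalizes away from, not his construction):

```python
import numpy as np

# State: a probability vector over 3 configurations (real, sums to 1).
p = np.array([1.0, 0.0, 0.0])

# Transition "matrix": T[j, i] is the probability of moving to state j
# from state i, so every column sums to 1 (column-stochastic).
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.3],
              [0.0, 0.1, 0.7]])

# Evolution: repeatedly apply the matrix to the vector.
for _ in range(50):
    p = T @ p

print(p.round(3))  # still a valid, real-valued probability distribution
```

Everything stays real and non-negative throughout; the claim summarized above is that relaxing the Markov (divisibility) property of the transition matrices is what lets such a description reproduce quantum interference.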
This was a good discussion on the topics involved as well, between Jacob Barandes and Tim Maudlin. Though I don't recommend watching it without first getting some familiarity with Barandes's ideas... while there's some explanatory dialog in the video I'm posting, mostly it's a discussion. It's nice to see the ideas (politely) challenged and answered.
I'm not sure why you're okay with matrices but not complex numbers. The complex numbers are a particular kind of matrix. Matrices and vector spaces (especially beyond the usual 3 dimensions) are even more mysterious. Complex numbers are fairly typical, and intuitive (rotations in the plane).
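The "complex numbers are a particular kind of matrix" remark can be made concrete: identify a + bi with the 2x2 real matrix [[a, -b], [b, a]], and complex multiplication becomes matrix multiplication (a standard isomorphism, sketched here):

```python
import numpy as np

def as_matrix(z):
    """Represent the complex number z = a + bi as [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, 1 - 1j

# Complex multiplication agrees with matrix multiplication ...
assert np.allclose(as_matrix(z * w), as_matrix(z) @ as_matrix(w))
# ... and i itself is just a 90-degree rotation matrix.
print(as_matrix(1j))  # [[0, -1], [1, 0]]
```

Squaring the matrix for i gives the matrix for -1, which is the "rotation" intuition in action.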
Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.
This article is making some pilot-wave-like claim on top of quantum Darwinism that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part God gets out his frustum culler.
I think the claim is this: the wave function never collapses. However, the effect of the wave function on the environment quickly converges to only one of the two states. We could not know the difference because we cannot directly observe the wave function. We can only see the result as it is magnified onto a macro scale by our observation equipment (or, lacking that, our eyes, which themselves turn a tiny microscopic phenomenon into macro signals). Once that particular outcome has been 'selected' for, the probability of the other outcome becomes vanishingly small very fast. Thus, all future outcomes are that outcome, even though the underlying reality is still that fully entangled state.
Photons (and other objects that seem to behave 'quantumly') do not seem subject to this (and thus we can use them to understand quantum behavior) because they have particular properties wherein their behavior is not affected by these macroscopic drop-offs quite as badly.
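That "vanishingly small" suppression can be sketched in a toy model (my own illustration, with made-up numbers): if each of n environment qubits imperfectly "records" the system state, ending in |e0> or |e1> with single-record overlap c = <e0|e1> < 1, the interference (off-diagonal) term of the system's reduced density matrix is suppressed by a factor of c per record:

```python
import numpy as np

# Toy decoherence model: system qubit in a|0> + b|1>, n environment
# qubits each imperfectly recording the state with overlap c < 1.
# The interference term shrinks like c**n as records accumulate.
a = b = 1 / np.sqrt(2)
c = 0.9  # assumed single-record overlap (hypothetical value)

for n in (0, 10, 50, 200):
    off_diagonal = a * b * c**n
    print(f"n={n:3d}  interference term = {off_diagonal:.2e}")
```

With these numbers the interference term falls from 0.5 at n=0 to below 1e-9 by n=200: the "other" branch is never deleted, it just becomes practically invisible very quickly.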
To me, the fact that quantum mechanics is intrinsically "random" and unknowable beforehand is what makes living bearable in this universe as a sentient being. If we, two legged viruses that we are, could reach a level of understanding that could show the universe to be fully deterministic and every future state to be knowable given that you know the current states, then this human condition would be impossible to stand. I love the fact that we just can't predict the future. It's what makes existing a good thing instead of a bad one.
#1: You do not want randomness. You may believe you do until the Titanic crashes into your front yard and your significant other vanishes into thin air. You want quite a lot of predictability, up to a degree where it might not even matter if things at the lowest level of existence are not perfectly deterministic.
#2: What's so bad about thinking about life as an exciting rollercoaster ride? The tracks are laid but the ride is still fun.
If everything is deterministic, i.e. determined, there's no free will, so you/I are just NPCs. I prefer to live in a universe where my conscious decisions matter, or at least can't be predicted beforehand.
Randomness doesn't imply free will. What if you/I are NPCs that just roll the dice before doing something? It's not you that chose the outcome, it's the dice, i.e. the laws of physics.
I don't know how free will could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. So I don't believe in it, but of course in my day to day life I act as if it exists.
Yet I don't know how qualia or subjective experiences could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. But I believe I have this subjective view of the world that doesn't seem to be explainable with a set of equations.
So it's weird. At least philosophy and science agree on that.
> How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?
The wave function is the real object. The little balls we like to imagine particles as are just perceptions of wave functions, narrowed down sharply by entanglement with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
> None of the leading interpretations of quantum theory are very convincing. They ask us to believe, for example, that the world we experience is fundamentally divided from the subatomic realm it’s built from. Or that there is a wild proliferation of parallel universes, or that a mysterious process causes quantumness to spontaneously collapse.
Actually, the "many worlds" "interpretation" simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles". (My shorthand for superpositions, i.e. disjoint interactions of particles. Think of all the alternate lines leading from and to distinguishable states, like star patterns.) Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancels leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary umbrella seems very smart and creative, when really, it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason not to take the experimentally verified field equations at a plain reading is that the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: The total field equations preserve information - that is the plain implication and guarantee for having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, introduces a ridiculous new puzzle: Where does all that pervasively intrusive relentless injection of information (that determines every single extricable particle interaction!), come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological non-answer that means "Don't Ask Questions".
The part that I have trouble wrapping my head around with the many worlds interpretation is how I as an observer end up in one of the many bifurcations. Any links you can share that will help me with understanding that are welcome!
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is say that "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
> The part that I have trouble wrapping around with many worlds interpretation is how I as an observer end up in one of the many bifurcations.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how they ended up at a planet with their color. Even if, for a particular copy, it seems like there should be an answer why they showed up on a planet of a particular specific color. The "why" is just: all paths were taken.
What you said here makes sense. Forgive me, but I have trouble even articulating what it is that I don’t understand correctly.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn’t be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I’m unable to grasp is that even though the wave function of the universe contains both branches, “I” somehow experience only one of the two branches.
The answer to that, I guess, is that since the two branches are nearly orthogonal they will merrily evolve independently of each other. But somehow “I” experience only one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
>But somehow “I” only experience only one of them.
Using the example from the other comment, "You" are the stream and not a drop of water in it.
In other words, you are not an entity with a unique identity that traverses the tree of possibilities. You are part of the tree; actually, part of a branch. The branch's existence and your existence imply each other, like your hand's existence and your existence imply each other. Your hand could not have existed without you (a similar-looking one could, but it wouldn't be yours), and you could not have existed without your hand (you could have had a different hand, but that wouldn't be "you", which also includes the hand).
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you's" would each see spin up, and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down qubit results in an up-up qubit pair in superposition with an up-down pair, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
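That bookkeeping can be written out explicitly. A minimal sketch (my own illustration), modelling "touching" as a CNOT gate acting on a pointer qubit that stands in for "us":

```python
import numpy as np

# "Touching" modelled as a CNOT: the pointer qubit ("us") copies the
# system qubit's basis state. Start: (|up> + |down>)/sqrt(2) (x) |up>.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
system = (up + down) / np.sqrt(2)   # qubit in superposition
pointer = up                        # the observer, before the touch

CNOT = np.array([[1, 0, 0, 0],      # control = first qubit,
                 [0, 1, 0, 0],      # target  = second qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

joint = CNOT @ np.kron(system, pointer)

# Result: the entangled pair (|up,up> + |down,down>)/sqrt(2) --
# an "us"-up version correlated with up, an "us"-down version with down.
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
assert np.allclose(joint, bell)

# Unitary evolution: no information created or destroyed (norm conserved).
assert np.isclose(np.linalg.norm(joint), 1.0)
```

Nothing collapses anywhere in this calculation; the superposition simply spreads to include the pointer, and the norm (the "qubit conserved" bookkeeping) stays exactly 1.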
The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
That's quite a serious issue. And arguments against that, like Self-Locating Uncertainty or Zurek's Envariance, look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about the common mechanism of any of the worlds you're in. Your world may be some kind of lottery-winning statistical freak world which happens to have very unusual properties, and generalising from them is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
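For what it's worth, the tension between branch counting and the Born rule is easy to state concretely: a biased qubit splits an observer into exactly two branches, yet the Born weights are unequal, so "one vote per branch" gives the wrong statistics. A small sketch (my own illustration of the standard argument):

```python
import numpy as np

# A biased qubit sqrt(1/3)|0> + sqrt(2/3)|1> produces exactly two
# branches, but the Born rule weights them 1/3 vs 2/3, so "count the
# branches and divide" (one vote per branch) disagrees with experiment.
amps = np.array([np.sqrt(1 / 3), np.sqrt(2 / 3)])

born = np.abs(amps) ** 2                    # Born-rule probabilities
naive = np.full(len(amps), 1 / len(amps))   # naive branch counting

print("Born:", born)     # ~[0.333, 0.667]
print("Naive:", naive)   # [0.5, 0.5]
assert not np.allclose(born, naive)
```

This is why Many Worlds needs an extra argument (self-location, envariance, decision theory, or similar) to connect amplitudes to observed frequencies.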
> The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history, too much time, or too much scale, were unsuccessful arguments against many theories we accept today. Those critiques died without any need for special arguments, because they don't have a logical basis.
Also, there is not a countable number of "worlds". That is a reflection of poor naming. There is an interleaving of all interactions; if you zoom out, a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations would produce new physics in at-scale observations of our cosmos if they did exist.
> how I as an observer end up in one of the many bifurcations..
Just a pleb here, but that does not stop me from thinking about it..
I think your consciousness is a function of the world you belong to. So asking why you are in a certain world, and not in another, is like asking why you were born to your specific parents, and not to others.
So you don't end up in some fork, by a roll of dice, you are already confined to, and defined by a single branch.
So I don't think the exact "you" exists in another branch. But another consciousness that differs from "you" by only a single random event (i.e. you and this consciousness differ only in the observation of a single random event) does exist in another branch.
And it is not like this is all orchestrated by some entity. It is just how consciousness and subjective experiences emerge in mathematical structures (+ the set of random events) that do not need rendering anywhere (Mathematical Universe Hypothesis).
Once you understand the hopeless inevitability of existence, a lot of questions like the "when", "how", or "why" of our existence disappear.
You can ask if there is any proof for this, beyond thought experiments. But I think the only thing that could come close to proving it is if we exhaustively searched for extraterrestrial consciousness and didn't find any.
"Hard problem" makes it out to be much more difficult than it actually is. To simplify things a little bit, if you combine a spatiotemporal sense (a sense of bounded being in space and time) with a general predictive ability (the ability to freely extrapolate in time and space from one's surroundings,) "consciousness" arises necessarily. It's what having such senses feels like from the inside; the first-person view. It's a matter of degree, of course.
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
It's not hard at all when you acknowledge that such senses exist in the world, and that you (like others) possess them. As an aside it tends to foster a certain tendency towards empathy.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
It's embarrassingly silly to say but I've frequently just boiled down the hard question to the question of "where is the experience of the color blue stored in the universe?" Even as a non-dualist, I still haven't found much of an answer that I like. I'm all ears if you've got a book recommendation.
The question presupposes that "the experience of the color blue" is a discrete object that needs a storage location. But that's the dualist picture in disguise. On a functionalist view, blueness isn't stored; it's what certain neural activity constitutively is when you're that system observing that blue.
As an aside, isn't it more weird that violet and purple look indistinguishable despite being physically so different? It's said that this is because our L-cones (red-sensitive) have a secondary sensitivity peak at short wavelengths. So violet light triggers S-cones + a bit of L-cone. Purple light (red + blue) also triggers S-cones + L-cones. Similar activation pattern = same quale. It's all functional/physical.
Read Tom Cuda "Against Neural Chauvinism." Also Daniel Dennett.
What is mysterious to me is why and how chemical reactions in a certain part of my brain create an experience of blue.
Yes some chemical change happened there, but so what.
These are not very unusual chemical reactions. They happen and are happening everywhere. Do all the chemical reactions going on generate an experience for some experiencer?
I think the flaw in your reasoning is the assumption that chemical reaction is causing the sensation of blue.
But imagine if the consciousness and what it senses cannot be separated. So the consciousness sensing blue and the chemical reaction happening in the brain, are just correlated. One did not cause the other.
One can ask where that correlation came from. I think such correlations are inherent in worlds where consciousness is possible.
I think everything that we observe as physical laws, causality etc, are just such correlations.
This is where these questions take me. Since the experience is the only thing I can be certain of, I'm less drawn to "everything is physical" answers and more drawn to ideas from phenomenology and Bishop George Berkeley. And since I'm not super religious, I'm not really comfortable with those "answers" either.
> On a functionalist view, blueness isn't stored; it's what certain neural activity constitutively is when you're that system observing that blue.
Why should there be anything a certain neural activity is when making an observation? This is adding something additional to functionalism. You're just sneaking the hard problem back into the picture without realizing it.
>where is the experience of the color blue stored in the universe?
It is not stored anywhere. It is part of the consciousness that experiences it. In other words, consciousness comes bundled with everything it will ever feel.
The kneejerk response would be: Are you not conscious at this present moment? If we were to modulate your spatiotemporal senses with drugs or a lobotomy, do you doubt that you would be very differently conscious, or perhaps entirely unconscious?
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
I'm not a dualist or anything. I'm in the "it's weird and I have no idea what the answer is" camp. And yes, I've read Dennett. I'm trying to understand your views. Lots of questions follow, but don't feel like I'm barraging you unnecessarily. Just trying to figure out your view with what seem to me like interesting questions that I myself can't really answer.
I'm using "consciousness", "subjective experiences", "senses" and "qualia" as synonyms here, but if you see a difference, please mention it. Obviously "consciousness" has many definitions that have nothing to do with the "hard problem of consciousness", so I'm using it in this sense here. I'll use "qualia" as it's the word that relates most to the hard problem of consciousness. You can substitute it with "sense"/"senses" if you like.
1. Do you view qualia as an emergent property? Of what exactly? What is a self-modeling system? Is a human one? Where would the boundaries be; would they even be defined? The human body or the brain only or the nervous system? Or whatever neurons activate when a certain thing happens, like seeing blue or feeling pain? What about animals - pigs, dogs, rats, snails, ants, bacteria? What about AI, current and theoretical?
2. Could there be a set of minimal self-modelling systems in some abstract space that are the boundary of what has qualia and what doesn't? Like, these 1000000 neurons arranged like that qualify, but if you take 1 out, they don't? Or is it a fuzzy boundary somehow?
3. What kind of statements could be made about the qualia of yourself and of others? Not sure what kind of answer I'm looking for, but how objective or truthful would those statements be? Maybe "qualia is nothing really, we only have the set of equations that govern physics and everything else is an abstraction"? Like an apple isn't anything really, it's just a badly defined set of atoms and energy. There is no "apple" or "chair". Or is it something else?
4. What are your views on meta-ethics and ethics in general? Should we care about it at all?
> because experience is what such a system is, from the inside.
There being an inside to self-modelling systems bound in space and time is the hard problem.
> The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
That's given from three dimensions of space. This is not the case with subjective experience. Functional and physical terms don't have an inside where experience lives. It's what makes the p-zombie argument potent.
Let's put this another way. Functional terms are abstracted from experience to model the world. See Nagel's "What Is It Like to Be a Bat?" paper on science being a view from nowhere, which is really about the fundamental objective/subjective split. Or Locke's primary and secondary qualities.
You can't get experience out of abstract terms. Experience doesn't live inside abstract concepts. We can model the world with them, but experience was left out at the start.
Would you agree that you are conscious at this point?
Would you agree that there are some set of physical laws, an initial state, and a set of random events to the universe that we inhabit?
Would you agree that if we simulate this initial state on a computer, and step through it using the set of physical laws and the random events, we will see the eventual emergence of "you", whom we know to be conscious?
So are you saying that the entity inside the simulation is a zombie who is not actually conscious?
We have a theory whose plain reading matches experiment at all scales.
Consciousness is something else. It is tempting for humans to pair mysteries up, pyramids and aliens, or whatever. But there isn't any factual basis for linking the experience of self-awareness with quantum mechanics.
Is there a factual reason we know digital minds couldn't be conscious, where quantum effects have been isolated from the operations of mental activity? That seems like a premature constraint to assume.
Yes, the MWI is falsifiable. It asserts that objective collapse does not occur, therefore any observation of objective collapse (such as predicted by GRW or Penrose-Diosi) would falsify it.
That's not true falsifiability; it's asserting a negative.
I think people resort to MWI because they think it explains everything neatly; it does not!
For example, from my perspective, it does not explain what world I end up in, and if you are saying it's random, you need to come up with a fundamental theory of randomness, unless the response is: it just exists, deal with it.
I think you're right, the many worlds interpretation makes the most sense. Unfortunately our current technology is very far from delivering any experimental confirmation or denial of any of the mainstream interpretations.
You are right, but I think there is a more positive viewpoint.
All experiments agree with the many worlds interpretation (again, better described as a quantum web interpretation), and it is the plain Occam's Razor interpretation.
No additional flourishes are needed. That is strong theoretical support. It is the default (plain reading) interpretation already.
And it is the interpretation that doesn't just conserve in one history (i.e. conservation of energy etc.), but conserves information universally.
So again, very strong specific theoretical support.
It is the conjectures about experimentally unmotivated elaborations, like "collapses", that would also break universal conservation of information, for no theoretically necessary reason, that need dramatic new evidence to prove themselves.
If I lack any optimism, it is for conjectured complications with no evidentiary support and weaker explanatory/conservation powers. In any other context, nobody would be entertaining the need for such conjectures.
The "Quantum Collapsers" are right up there with the "Flat Earthers" or the solar system "Epicycle Theorists" for not being happy with accepting a working and successful theory as is. Even though their imagined shims introduce more questions than they answer, and would dispense with its unique advantages.
What if we create a situation in a lab that can be labelled as a collapse of the wave function by interaction with a macroscopic object? Except the macroscopic object is under our control and we can reverse the collapse.
It touches you, and you are just as quantum as the bit.
So two entangled versions of you follow, one entangled with each state. (Actually, as many quantum versions of you as touched the qubit, times two.)
Which is what happens, as we know from experiment, when any one qubit interacts with another independent qubit. We get the product of entangled states, each now correlated. But the different entangled states are now in superposition with each other.
So correlation/entanglement happens and is experienced, despite no collapse of superposition. No information was destroyed or created.
Each of you thinks, wow now the qubit only has one state. But that is because there are two versions of you, correlated respectively with the two uncollapsed qubit states.
Complete conservation. That is the "experience" of collapse that needs no explanation, because it is a predicted experience not requiring an actual collapse. Just as spherical Earth models don't need a special explanation for the appearance of locally flat Earth, because spherical models predict a local flat Earth experience.
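That "predicted experience of collapse" can be made concrete: trace the observer out of the entangled pair, and the qubit's reduced state is a diagonal mixture (it looks collapsed), even though the joint state is still pure. A minimal numpy sketch (my own illustration):

```python
import numpy as np

# Joint state after the observer touches the qubit: (|00> + |11>)/sqrt(2).
# Index convention: |qubit, observer>.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())     # the joint state, still pure

# Reduced state of the qubit alone: partial trace over the observer.
rho_qubit = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

# Globally pure (Tr rho^2 = 1) ...
assert np.isclose(np.trace(rho @ rho).real, 1.0)
# ... yet locally it looks like a 50/50 classical mixture: "collapse"
# as experienced, with no actual collapse anywhere in the dynamics.
assert np.allclose(rho_qubit, np.eye(2) / 2)
```

The off-diagonal terms of the qubit's reduced density matrix vanish exactly, which is why each version of the observer sees a definite outcome while the global state conserves all the information.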
Are the Mysteries of Quantum Mechanics Beginning to Dissolve? I don’t think so.
Zurek’s Decoherence and Quantum Darwinism is thought-provoking, but it’s still speculation without broad buy-in from researchers. We might need ASI to crack these mysteries — our brains weren’t built for this kind of problem.
I think the brains of our stone age ancestors were not built for relativity either. In the end, the normal sequence of generations (having children and then dying at some point) offers "re-trainings" of the brains. So, besides waiting/hoping for artificial intelligence, we should continue to make (and train) children. Worked great so far.
What we need are tractable experiments to test these theories.
Maybe ASI can help design these. Until it can, it will just be another voice arguing for one position over another on pretty weak arguments. Right now my money would be more on human researchers finding those experiments, though even among those, few are trying.
"Thus the wave function can’t tell us what the quantum system is like before we measure it. "
Nothing is a particle; all measured things are probabilities that we make certainties when we measure them.
When you stop looking at things as things, but instead, see them as probabilities, it will all make sense. My hand and the beer bottle I pick up are both probabilities. Since the mind cannot navigate the world based on probabilities it turns them into certainties.
Physical science is the only way we can perceive quantum science. There is no "collapse" outside of our brain's perception.
Quite frankly, Quantum is probably known or solved by a nation state (probably the United States). Similar to AI, they will release it in a safe roll out (as they deem it).
Maybe, but the AI we see in the mainstream today -- generative image/video/text creations and Large Language Model chatbots -- were done via non-governmental public and private companies. And a lot of the work hitting the scene loudly and somewhat prematurely. My understanding is the amount of and type of compute needed for Quantum is pretty intense, so there'd be a huge footprint from its manufacturing to keep it hidden.
It would be interesting if most of our confusion with quantum mechanics came from treating probabilities as independent when they are actually highly correlated. I don’t really know any physics, but I’m familiar with probability and this type of problem seems to be the most common error in interpreting probabilities.
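As a toy illustration of the parent comment's point (my own example, not from the thread): perfectly correlated events have fair-looking marginals, but assuming independence gets the joint probability badly wrong.

```python
import numpy as np

# Two perfectly correlated "coin flips": each marginal looks like a fair
# coin, but treating them as independent misestimates the joint badly.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 100_000)
b = a.copy()                       # perfectly correlated with a

p_a, p_b = a.mean(), b.mean()
p_both = (a & b).mean()

print(p_a * p_b)   # ~0.25: what independence would predict
print(p_both)      # ~0.5:  what actually happens
```

This is the generic shape of the error the commenter describes: multiplying marginals is only valid when the events really are independent.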
I don't have any skin in the game, but people should be aware of Induction vs Deduction.
Induction had the earth at the center of the solar system and had the best calculations to predict where Mars was. Copernicus said the sun was at the center; the equations were simpler, but were worse at predicting the location of planets (until we figured out they moved in ellipses).
When we say "All swans are white, because I've never seen a black swan", it's probabilistically true. That is induction. If we found swans didn't have the gene to make black feathers, that would be deduction.
Deduction is probably the most true, if it is true. (But it is often 100% wrong)
Induction is always semi true.
Quantum mechanics seems to be in the stage of induction. Particles are like the earth at the center of the solar system. We need a Copernican revolution.
I'm fully willing to believe I just don’t “get it”, but I took a pretty deep dive into quantum computing and the underlying mechanics and I kind of got the sense (with QC) that nobody really knows what they are talking about. I got this feeling so strongly that I stopped studying the topic altogether.
I’m probably way off base and I’m probably missing some insights that I could get by going to school or something, but that was just my experience with the subject.
"I think I can safely say that nobody understands quantum mechanics."
--Richard Feynman
You're far from alone. Quantum physics is tricky because it frequently doesn't agree with our physical intuition. Humans are used to dealing with macroscopic objects. They surround us for our whole lives. Matter behaves in surprisingly different ways at the level of single quanta. Seemingly impossible things flop out of the math and then clever experiments show that reality is consistent with the math, but we struggle to reach the point where that reality feels correct. When we try to translate the math into human language, we often wind up overloading words and concepts in a way that can be misleading or even false.
Perhaps we just haven't reached the point where things are sufficiently well explained and simplified, but it may be that quantum physics will always seem strange and counter-intuitive.
> Quantum physics is tricky because it frequently doesn't agree with our physical intuition.
Quantum physics is tricky for two separate reasons.
(i) The mathematical theory (Schrödinger equation, wave function, operators, probabilities) is solid and well-defined, but may feel unintuitive, as you say.
(ii) But quantum mechanics is also an incomplete theory. Even if you learn to be at peace with the unintuitive aspects of the mathematical theory, the measurement problem remains an unsolved problem.
"The Schrödinger equation describes quantum systems but does not describe their measurement."
"Quantum theory offers no dynamical description of the "collapse" of the wave function"
https://en.wikipedia.org/wiki/Wave_function_collapse#The_mea...
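The tension those two quotes point at can be written out explicitly using the standard textbook statements. Between measurements the state evolves deterministically under the Schrödinger equation; measurement outcomes follow the separate Born rule and collapse postulate, which are not derived from that dynamics:

```latex
% Unitary dynamics: deterministic and reversible
i\hbar \frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle

% Measurement: probabilistic, postulated separately (Born rule + collapse)
P(a) = \lvert\langle a \mid \psi \rangle\rvert^{2},
\qquad
\lvert\psi\rangle \;\longrightarrow\; \lvert a \rangle \ \text{after outcome } a
```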
> is solid and well-defined, but may feel unintuitive
I'm thinking that the nature of intuition is about training your neurons to approximate stuff without needing to detour through conscious calculation.
And QM is in too high of a complexity class for this to be a thing.
It's not complexity but lack of training data, right?
I like that quote.
I always fell back on "Spooky action at a distance"; If Einstein found it weird, I shouldn't feel that bad if I can't quite make sense of it.
Feynman, a famous man from an older era who tried to inspire, remind, and spur people...
> macroscopic objects
It's not about scale at all though. It's just that small systems tend to be observed with this other, specific property that we associate with causing "quantum" like effects. Not only do those effects happen at mesoscopic scale but aside from gravity, quantum theory already can be and is used to describe things on large scales too. Classical computers and desks are still "quantum" systems. Recently theory and experiments have developed to connect with gravity in many ways. I'm more confused when people say something is mysterious. They're usually referring to apparent randomness but I think even that is explained already with partitions or even just wave math (complementarity).
> I’m probably way off base and I’m probably missing some insights that I could get by going to school
A school would usually teach the "shut up (about philosophy) and calculate" approach. These philosophical problems about the meaning of quantum mechanics have been with us for 100 years, and mainstream physics sees them as too hard or even intractable, and thus as a waste of time.
Hard /intractable is on an axis orthogonal to philosophical stuff like meaning.
These debates over the interpretation of Quantum Mechanics (i.e. what ultimately happens when a “measurement” takes place) are important but don’t bear on the effectiveness of quantum computing. Regardless of your favorite interpretation (almost) everyone agrees that quantum computers should work and be able to do things classical computers cannot.
Well, to be slightly more sophisticated (to paraphrase Scott Aaronson):
Either quantum computers work (at least in principle), or our understanding of the universe is way off and we are getting exciting new physics.
Credo.
... quia absurdum est?
> [...] and the underlying mechanics and I kind of got the sense (with QC) that nobody really knows what they are talking about.
The math is fairly well known, and people can successfully apply it, as evidenced by, e.g., modern CPUs and GPUs and RAM actually working, and by lots of other marvels of modern engineering that require an understanding of quantum mechanics to design and engineer.
There is a somewhat easily digestible explanation of the quantum Darwinism theory here:
https://arxiv.org/pdf/1811.09062
However, it still doesn't really address the core question of when the collapse actually occurs. All it really seems to add is that the environment is an "observer" and that decoherence actually causes the collapse.
Isn't 'collapse' just something that the Copenhagen Interpretation people made up?
It's either collapse or many worlds.
> nobody really knows what they are talking about
Could you form a specific question that you're wondering about? (Have you looked at condensed matter physics yet?)
>nobody really knows what they are talking about
The mathematics of QM works extremely well.
The interpretations of what the math says is happening are varied and sometimes contradictory.
We can predict what's going to happen extremely well, we just can't tell the story of what's happening. And there's been a century of trying to avoid the weirdness and failing. The problem might just be our brains evolved in a world that behaves so much differently that we can't understand.
> The mathematics of QM works extremely well.
Yes but also absolutely not. The evolution of the wavefunction when nobody is looking is unitary, which among other things means it is time-reversible. That math works extremely well and predicts the correct outcome.
When we are measuring a quantum system, the probability distribution of the measurement outcome is described by the Born rule, the amplitude of the wavefunction squared, and the collapse postulate tells us that after the measurement the wave function will be in the measured state, which is a non-unitary and non-time-reversible process. That math works extremely well and predicts the correct outcome.
But - really big but - what is a measurement device but a huge quantum system, what is a measurement but a quantum system and a measurement device and an environment undergoing time evolution? So both descriptions should apply, unitary time evolution and wave function collapse, but that can not be the case because they are incompatible, one is unitary, the other is not. The mathematical description is inconsistent.
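The incompatibility is easy to see in a toy calculation. Below is a minimal NumPy sketch (my own illustration, not any specific physical system) contrasting the two rules: a unitary matrix preserves the norm and is reversible, while a projection ("collapse") is neither.

```python
import numpy as np

# A qubit state: equal superposition of |0> and |1>
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Unitary evolution (here: a phase rotation) preserves the norm
# and is reversible: U_dagger @ U = identity.
theta = 0.7
U = np.array([[np.exp(-1j * theta), 0],
              [0, np.exp(1j * theta)]])
psi_evolved = U @ psi
assert np.isclose(np.linalg.norm(psi_evolved), 1.0)
assert np.allclose(U.conj().T @ U, np.eye(2))

# "Collapse" onto |0> is a projection: non-unitary, not reversible.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
collapsed = P0 @ psi
# The squared norm is the Born probability of outcome 0;
# renormalizing afterwards is an extra rule, not part of the dynamics.
print(np.linalg.norm(collapsed) ** 2)  # 0.5
```

The point of the sketch: no choice of U can turn `psi` into `collapsed`, because unitaries never shrink the norm. That's the inconsistency in miniature.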
I often observe that humans are wired to create causal stories, whether we intend to or not, even in circumstances we know are false.
A great example involves flipping a coin. Even people who know it's basically an independent 50/50 chance every time get drawn into thinking about "hot streaks" and "overdue for the opposite."
It's arguably a superpower that has given us lots of agriculture and tools and technology and culture, but like hunger and obesity we can't just turn it off when it gets maladaptive.
Philosophy calls this a "just so" explanation
Humans are very good at pattern matching and explanation and that's what's given us success, but false-positive matches sometimes are the result and need to be corrected down a bit
Funny, I looked into quantum computing and came away knowing pretty well how to use a future quantum computer. The math is pretty straightforward and useful. Now, getting an actual quantum computer with error correction that is scalable... that is still elusive.
Nevertheless, commercial quantum computers exist and do exactly what scientists predicted they would do.
Do I get this right? Wave function collapse due to measurements is not real, the wave function evolves unitarily all the time. But as quantum states get amplified into the macroscopic world, superposition states are somehow amplified asymmetrically which makes it look like wavefunction collapse.
Pretty close.
The wave function is still symmetric, but it takes on a bimodal distribution, with very little overlap. For any given event, it will be affected only by the half of the distribution that it's in. The other half has basically zero effect. The further time evolves, that effect becomes even smaller -- as in, the odds of an experiment demonstrating it quickly go towards 1 in 10^googol^googol.
You can round that down to exactly zero and call it "collapse". Or you can keep thinking about the entirety of the wave function, and call it a "multiverse". That rounding is technically invalid, but it simplifies the conceptualization (and the math) to a massive, massive degree without affecting the outcome in any pragmatically measurable way.
(One more caveat: "symmetry" implies we're talking about a wave function with a 50-50 superposition. That's not a requirement, but it simplifies an already complex explanation.)
Yes, except for the “asymmetrically” part. In other words, Many Worlds.
> But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply that there is an asymmetry, one half of the state gets imprinted, the other half neglected? This however also raises the question about the basis, what is a superposition and what is not depends on the choice of basis. Is there a special basis just as pointer states are somehow special?
There are several layers of structure here.
Indeed, as you say, Decoherence explains why certain bases are special: when a system is in a pointer basis state, it does not continue entangling with environment (or, at least, does so minimally). When a spinning particle enters a Stern-Gerlach apparatus oriented in z-direction, spin-z is the pointer basis of the system during its time in the apparatus. A spin-up or spin-down particle does not entangle with the environment, but spin +x state would quickly entangle with environment, placing environment in a superposition and "branching" the total state vector of all the stuff in the universe.
Quantum Darwinism is just a refinement of this picture in which the "environment" interacting with the system is itself modeled as a series of fragments (i.e. all the different photons that bounce off an object). It turns out that the information about which pointer basis state the system is in (spin up or spin down) is redundantly encoded in each of these fragments. Hence, intercepting one photon that interacted with the system and reveals "spin-up" (because the particle is in the upper path) agrees with the other photons that also bounced off the object.
BUT, of course, due to linearity of unitary time evolution, there is another "branch" in which spin-down was the outcome of the measurement and everyone agrees on spin-down. This is exactly the Everett picture.
Here is an earlier article which explains it better:
https://www.quantamagazine.org/quantum-darwinism-an-idea-to-...
I feel that it really just gives an explanation of decoherence, but doesn't offer any testable hypothesis for Darwinian pruning and collapse to pointer states.
How does "quantum darwinism" effectively select a state? Like, how does that work in principle: what is the selection criterion?
It doesn’t. Decoherence is the technical step in the Everett picture defining what a “classical branch” even is and explaining how the state vector branches. Every claim that “Decoherence” somehow offers a distinct interpretation to Everett is pure confusion.
The article asks the same question in the last part, wondering whether it's just randomly selected. MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment. The math never says entanglement destroys superposition beyond a certain point of complexity (many different entangled systems forming the environment).
The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.
> MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment.
Well, duh. It's not like classic objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointy enough question. Just like shorting the perfect battery (with zero internal resistance) with a perfect wire (with zero external resistance): this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.
Random is a very interesting concept. In relation to nature we seem to use "random" as anything we can't or are currently unable to model.
To call something random doesn't mean it's impossible to model; in fact, all sorts of natural facts seemed random one day before being covered by a model. One very relatable example is the motion of stars in the night sky, which seemed random for ages, until the Copernican revolution.
The fact we have access to random() function in programming seems to trip many people. random() is a particular model implementation of random, but stuff in nature isn't random().
My point is, using "just random" to do work in any scientific explanation is a crutch.
In science randomness is usually used to abstract over a large number of possible paths that result in some outcome without having to reason individually about any specific path or all such paths.
It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model; a simple example of this would be chess. It's an entirely deterministic game with perfect information that is fully understood, but nevertheless all the best chess engines model positions probabilistically and use randomness as part of their search.
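A minimal illustration of that last point (a standard toy example of my choosing, nothing specific to chess engines): estimating the fully deterministic quantity π by random sampling.

```python
import random

# Monte Carlo estimate of pi: randomness used as a modelling tool
# for a completely deterministic quantity.
random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
# Count random points in the unit square falling inside the quarter
# circle x^2 + y^2 <= 1; that fraction approaches pi/4.
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
estimate = 4 * inside / n
print(estimate)  # close to 3.1416, up to Monte Carlo error
```

Nothing here is non-deterministic in any deep sense (the generator is a pseudorandom algorithm), yet the probabilistic model is the practical way to reason about it.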
> Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model
The output of a pseudorandom generator is a good example.
There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.
I am not sure why you are being downvoted.
The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".
And I would agree, in the latter case it is a crutch. A postulate that something gets decided by no mechanisms whatsoever (randomness obeying a distribution still leaves the unexplained "choice").
It is remarkable that people still suggest the latter, when the theory, both in theory and experiment, doesn't require a physical choice at all (even if we experience a choice, that experience is explained without the universe making a choice).
It is not incomplete to say that something does not require explanation, nor is it saying it's "magic". It is a cost that your model might incur, that's it.
https://arxiv.org/abs/2503.15776
In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".
A model is incomplete if it doesn't explain something.
That doesn't make a model wrong. All models we have are partial explanations.
But that doesn't make it rational to claim that an incomplete model is complete. Or to treat unexplained specifics as inherently "just so", without cause or reason (i.e. magic), and we must just accept them as unexplainable instead of pursuing them with further inquiry.
> A model is incomplete if it doesn't explain something.
I've just explained that this is not strictly true. I don't know what else to say. Brute contingencies, by definition, do not require explanation. I then gave you a paper where scientists largely believe in brute contingencies.
I think if you want to know more you can look into this. Just look for topics about brute facts and brute contingencies.
If you want to deny that brute contingencies are possible, by all means. That's a totally valid view. Just understand that it's probably not the majority view among scientists and that you aren't necessarily "right" (just as those who hold to brute contingencies aren't necessarily "right").
Constraint without an actual constraint? I am not denying it, it denies itself. It isn’t coherent.
Things are what they are because of constraints. Which is more general, and assumes less, than appeals to causes.
Constraints need not be prior. They can be simultaneous, i.e. co-constraints. And they can be internal to the whole, i.e. all of reality can be a co-constrained structure without external constraint.
An ultimate law of conservation is a strong candidate for a self-constrained reality. All versions of forms existing that neither locally create nor destroy. And since all possible forms within that exist, no choices made universally and there are no conserving forms it excludes. A coherent, infinite, unique, zero information structure. (Uniqueness is inherent to a zero information structure. Non-uniqueness necessitates choice.)
But claiming that some things just are, with no structural necessity, is an appeal to magic. A specific, with no actual constraint matching the specificity isn’t coherent.
You don’t get something for nothing. There is no outside of reality to provide that.
Any “outside” just means that total reality was not being included in the analysis.
I am not saying we can practically figure everything out. Or that there may not be questions, that given the limited resources/laws of our universe may not be answerable from our position, even theoretically. There may be questions we can’t answer. But nothing specific “appears” with some magical independence from the rest of reality.
That is the non-tautology of "it just is".
It would also make reality as a whole irrational. Not even a structure that obeys a conservation law. Because it would have specifics that had no reason.
Look, I really don't care to explain this. You can reject brute contingencies, as I said. If you want to do so, great. Just don't pretend that it's indefensible, your view doesn't even align with the majority of scientists.
None of your conclusions actually follow from this, which you would be welcome to explore on your own. You can learn why conservation can be held as true while also allowing for brute contingencies. As the leading cosmologists I've cited do.
No, it's not magic. You just don't know what you're talking about. Only you can fix that.
Thanks for your patience. I wrote way too much. And have re-read what you wrote and the article.
> It is not incomplete to say that something does not require explanation
> Values of physical constants of nature
> the most popular choice was that the constants are considered brute facts and thus require no exotic explanation
So yes, I deny the coherence of the concept of "brute facts".
If something is determined, something determined it. Some mechanism, constraint, context, structure, ... Perhaps we don't have the right word or connotations, but something.
A specific from a choice is a specific relation. That relation exists, as exemplified by our encountering the specific. Our experience of coming across the specific is not extricable from that specific's consistent connection to the rest of reality. That consistency has some basis, or there would be inconsistency.
A "maintaining" mechanism for an arbitrary consistency doesn't work, because that just pushes the choice of the specific that is maintained into the maintainer, which makes it more than a maintainer.
I can believe in things we will never be able to explain, as a result of observability limitations imposed on us by local physics. Eternal ignorance for any reason is always a practical possibility.
I can believe in undetermined things, which appear with each possibility, where we only experience one, because in the product of possibilities each plays out separately.
That would be the closest I could come to a "brute fact". Because it is in fact completely determined. The specific was not uniquely chosen, because the specific is not unique. Information is conserved, no explanation of each specific is needed. Even though each specific will behave as unique, across each possibility respectively, because differing specifics interact with a disjoint relation. The disjoint relation is the operating condition creating a localization of choice.
People invent ways to explain away persistent ignorance, instead of accepting it, like a fractal attractor, over and over. The psychological need to resolve the dissonance, when encountering challenges to investigation that are potentially insurmountable. And then some "way" of sweeping away the lack of explanation gets translated into a proposed lack of reasons, and given a name and connotations. But never an explanation or reason for itself. It is always faith based. The existence or principle of brute facts must remain a meta-brute fact itself. All untestable.
Scientists can "believe" that is a valid viewpoint. But they inherently cannot ever demonstrate any evidence for it.
The same reasoning, with different connotations and contexts, is rejected over and over by scientists. Mystical or religious connotations doom those different "versions". But stated in a sciency way, the same situation becomes palatable to some or many. But it doesn't become more coherent by virtue of being the "physics" version of "explanation" by acceptance of non-explanation.
> So yes, I deny the coherence of the concept of "brute facts".
Cool, that is fine. I deny lots of things as well. It's a position you can hold.
> If something is determined, something determined it.
That's fine but you'll likely find yourself in an infinite regress. That's a cost you'll have to take on under your theory.
> People invent ways to explain away our ignorance of the reasons behind things, instead of accepting the reality of ignorance, almost like an attractor fractal pattern, over and over.
That's not what's happening here. These concepts are pretty rigorously discussed and debated, it's certainly not a "cop out" - it's a metaphysical cost to your world view that you have to justify.
> Scientists can "believe" that is a valid viewpoint. But inherently cannot every demonstrate any evidence for it.
You've already said that you don't believe all things can be proven via evidence, so that's fine.
But it's incorrect to say that there is no evidence for the position. There are many arguments to support the view of brute facts or brute contingencies. One example is that it seems not accepting them would lead to infinite regress, which many people have reasons to reject as well. These are well-evidenced positions; that is why so many scientists believe in them.
This has nothing to do with religion or mysticism. There is nothing about this that requires "magic". Many of our most advanced cosmological models support this view. You are just not aware of this, and so it sounds like magic, but it isn't. If you think it is then I would just suggest that you learn more about it, there are many scientists and philosophers writing on the topic and I'm sure quite a few youtube videos on the topic.
[DELETED]
Edit: Sorry didn't see you had already replied.
Zero information constraints: Specifics only as fully determined, full coverage of undetermined specifics, conservation of information. These axioms, unlike most, impose a lack of external information not just as a desirable property, but harness them as a tautological universal constraint. Unlike most axioms, which are imposed information themselves.
> Infinite regress is avoided by co-constraints, such as consistency and conservation.
You have to explain these constraints if you don't want them to be brute.
edit: > Edit: Sorry didn't see you had already replied.
It's cool. I don't understand the distinction you're trying to draw here about "zero information constraints".
edit: > but harness them as a tautological universal constraint.
This just sounds like a brute contingent fact. It's almost the definition of a brute contingency, as far as I can tell.
The "constraint" that a complete description of reality doesn't require external information, isn't a brute contingent, it is a tautology. One that can be leveraged as an axiom we get for free.
It has many forms. One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way. Transforms must be reversible. We now have the necessity for a law of conservation as a "for-free" requirement, as a result of no external information/interaction.
Local zero information constraints:
No specific exists, except those that are completely determined. Anything else would require external information. This is a law of fully determined intersection.
Anything not completely specified, must exist in all its disjoint alternatives. This is a law of fully exhausted union.
Think of the exhaustive superpositions (unions) over all possible conserving interactions (intersections) in quantum mechanics. A real "local physics" example of this principle.
Cancellation is caused by conservation. Duals that can be generated must be reducible. And it is cancellation of duals that create the non-trivial distributions that superposition and entanglement produce, out of otherwise a neutral exhaustion of possibilities. Instead of noise or uniformity, we get structure.
This all comes from "no external information or interaction".
It turns out, that tautology is far from a trivial constraint. I believe there will only be one structure that will meet that requirement. And its uniqueness will be another manifestation of no external information, no external choice. Uniqueness doesn't require choice.
In fact it is a very active constraint. Try to come up with a form in which everything is either determined, or exhaustively covered, and always locally conserved (i.e. all transforms are fully and exactly reversible). It will be a challenge! Exactly what we want. But you can fit a lot of our current physics in as consistent pieces. Like quantum mechanics. And historically, we have understood the universe better every time we have generalized or unified laws of conservation.
Superposition is simply conservation of information across disjoint conserving intersections. It doesn't collapse, because that would require external or created or "just is" information. Which besides being incoherent (in my opinion), would throw away the only "free axioms" we have as an explanation for why any structure exists at all. Conservation, closure, uniqueness.
I'm confused because you seem to be using the term tautology totally incorrectly. Your post is very confusing for this reason, because you're very clearly just appealing to a brute contingent fact, if not now multiple brute contingent facts.
edit: Okay, I think I am sort of getting what you're saying about tautologies but it's wrong. Either way, I don't think it matters much. You can just deny brute facts, I have no problem with that. I'm just saying you shouldn't assert that brute facts don't exist as if that's the standard position.
Any theory or model of reality, must take into account all of reality. It cannot depend on, or interact with, or export anything to, anything external. As that would not be a model of reality.
That is a tautology, no?
Yes, but nothing else that you've said follows from that. For example,
> One is, nothing can be created (no external source), or destroyed (no external dump), so any local structures can be transformed, but must be conserved in some way.
This is not a tautology, it is a metaphysical claim.
No, a model of reality cannot import something from anywhere else. Whatever is within reality, can only be determined by reality.
Nor can anything in reality, be exported outside of reality.
Reality is the one thing, the only thing, that cannot depend on anything undetermined or unchosen by itself.
The fact that reality must account for both itself, and any of its specifics, with no other domain to draw from, is a higher level of demand than for any other theory. That demand is a hard and unique constraint. A tautological constraint that is therefore usable as an axiom.
I mean, these are all just metaphysical claims. It also doesn't seem to address brute facts, which would be within reality, so it seems sort of pointless. It also doesn't seem to address infinite regresses.
Even if I grant your "axiom", which is just that "reality exclusively contains reality", nothing interesting follows from that for this conversation.
If there is only one such structure, if it is unique, then the question of its existence goes away. What would existence mean?
We would just know there was a unique self-consistent structure. And that anything within that structure would experience its own existence.
Existence would then mean, part of the unique self-consistent, zero-information, independent of any externality, structure. Any self-aware artifact in that structure will experience what we call as existence.
An existence that is a result of a tautological structure, not a result of any external composition.
Random events are those events whose occurrence will not prevent the consciousness that observes them from existing.
No one is deciding anything.
I'm not sure that's true. Randomness has a well-defined meaning for me: unable to be computed by a finite program. The vast majority of real numbers are thus composed of truly random digits. Suppose the universe has a constant that is a real number. The overwhelmingly vast majority of real numbers are non-computable and cannot be described by a finite description of any kind. Thus, if the universe were simply sampling numbers from this real constant (or the answer to the math is this real constant as it undergoes some dynamics), then the numbers would appear random, because the true underlying constant is non-computable.
There is no possible finite way to describe if this were the case or not.
Energy.
*opens coat* Hey kid, wanna try some superdeterminism?
The Block Universe Theory, which relativity all but demands, makes that a pretty potent drug!
But remember, don't (super)determine and drive
> which relativity all but demands
Hardly. Some philosophers say that. But I don't take much from philosophers reasoning about physics.
Superdeterminism is just “a wizard did it” in a lab coat.
Highly recommend looking at Jacob Barandes’ formulation of quantum mechanics as non-Markovian stochastic processes. It was the first introduction to quantum mechanics I could actually follow.
https://www.jacobbarandes.com/
He just discussed this on Robinson’s podcast, in conversation with Tim Maudlin.
might make sense to link to the actual material you're referring to
> Highly recommend looking at Jacob Barandes’ formulation of quantum mechanics as non-Markovian stochastic processes.
seconded
>might make sense to link to the actual material you're referring to
https://www.youtube.com/watch?v=sshJyD0aWXg
In a few sentences: the evolution of a physical system (quantum and classical) can very successfully be modeled as a stochastic process, and ...
1. the state of the system is a real-valued "vector" (could be a vector with continuous indices), or to put it another way, a "point" in state space.
2. system evolution is described by a real-valued "matrix" (matrix in quotes because it is also possibly a matrix with continuous indices), defined by the laws of physics as they apply to the system
3. evolution of the system is modeled by repeatedly applying the matrix to the system (to the vector), possibly with infinitesimal steps.
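For context, steps 1-3 in the familiar Markovian case (my own sketch of the restricted baseline, not Barandes's non-Markovian formulation itself) look like this:

```python
import numpy as np

# State: a real-valued probability vector over two configurations.
p = np.array([1.0, 0.0])

# Column-stochastic transition matrix: entries non-negative, columns
# sum to 1, so total probability is conserved at every step.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Evolution: repeatedly apply the matrix to the vector.
for _ in range(50):
    p = T @ p

assert np.isclose(p.sum(), 1.0)
print(p)  # ~ [0.667, 0.333], the stationary distribution of T
```

The Markovian restriction is visible here: the next state depends only on the current `p`, never on the history of how `p` got there.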
The major discovery Jacob made is that, historically, folks working on stochastic processes had restricted themselves to studying "Markovian" stochastic processes, where the transformation matrix has specific mathematical properties, and this fails to be able to properly model QM.
Jacob removes the constraint that the matrix should obey Markovian constraints and lands us in an area of maths that's woefully unexplored: non-Markovian stochastic processes.
The net result though: you can model quantum mechanics with simple real-valued probabilities and do away entirely with the effing complex numbers.
The whole thing is way more intuitive than the traditional complex number based approach.
Jacob also apparently formally demonstrates that his approach is equivalent to the traditional approach.
Really worth taking a read/listen at.
This was a good discussion on the topics involved as well, between Jacob Barandes & Tim Maudlin. Though I don't recommend watching this without first getting some familiarity with Barandes's ideas... while there's some explanatory dialog in this video I'm posting, mostly it's a discussion. It's nice to see the ideas (politely) challenged and answered.
https://www.youtube.com/watch?v=8xPvxAdmhKM
I'm not sure why you're okay with matrices but not with complex numbers. The complex numbers are a particular kind of matrix. Matrices and vector spaces (especially beyond the usual 3 dimensions) are even more mysterious. Complex numbers are fairly tame, and intuitive (rotations in the plane).
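The claim that complex numbers are "a particular kind of matrix" can be checked directly: identify a + bi with the real 2x2 matrix [[a, -b], [b, a]], and matrix multiplication reproduces complex multiplication exactly. A minimal check:

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Embed a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z, w = 2 + 3j, 1 - 4j

# Multiplying the matrices gives the matrix of the complex product:
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# i itself becomes a 90-degree rotation matrix, hence "rotations in the plane":
print(as_matrix(1j))  # [[0, -1], [1, 0]]
```

So anyone comfortable with real matrices already has the complex numbers; they are the sub-algebra of 2x2 real matrices of that rotation-and-scaling form.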
Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.
This article is making some pilot-wave-like claim on top of quantum Darwinism that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part God gets out his frustum culler.
I think the claim is this: the wave function never collapses. However, the effect of the wave function on the environment quickly converges to only one of the two states. We could not know the difference because we cannot directly observe the wave function. We only can see the result as it is magnified onto a macro scale by our observation equipment (or, lacking that, our eyes, which themselves turn a tiny microscopic phenomenon into macro signals). Once that particular outcome has been 'selected' for, the probability of the other outcome becomes vanishingly small very fast. Thus, all future outcomes are that outcome, even though the underlying reality is still that fully entangled state.
Photons (and other objects that seem to behave 'quantumly') do not seem subject to this (and thus we can use them to study quantum behavior) because they have particular properties that leave their behavior much less affected by these macroscopic drop-offs.
Zurek published a book about Quantum Darwinism about a year ago. It is a text book, not a popular treatment, but it is quite a good read.
https://www.cambridge.org/core/books/decoherence-and-quantum...
I've been noodling on this: https://github.com/DeepBlueDynamics/das-eimerargument
One interpretation that isn't well known is Aristotelian. Robert Koons at UT Austin has published work recently on the subject.
[0] https://robkoons.net/uploads/1/3/5/2/135276253/prime_matter_...
[1] https://robkoons.net/uploads/1/3/5/2/135276253/hyl_esc_acpq_...
[2] https://robkoons.net/uploads/1/3/5/2/135276253/koons_the_man...
[3] https://robkoons.net/uploads/1/3/5/2/135276253/ejps_quantum_...
Betteridge's law of headlines -.-
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
To me, the fact that quantum mechanics is intrinsically "random" and unknowable beforehand is what makes living bearable in this universe as a sentient being. If we, two legged viruses that we are, could reach a level of understanding that showed the universe to be fully deterministic, with every future state knowable given the current one, then this human condition would be impossible to stand. I love the fact that we just can't predict the future. It's what makes existing a good thing instead of a bad one.
#1: You do not want randomness. You may believe you do until the Titanic crashes into your front yard and your significant other vanishes into thin air. You want quite a lot of predictability, up to a degree where it might not even matter if things at the lowest level of existence are not perfectly deterministic.
#2: What's so bad about thinking about life as an exciting rollercoaster ride? The tracks are laid but the ride is still fun.
If everything is deterministic, i.e. determined, there's no free will, so you/I are just a NPC. I prefer to live in a universe where my conscious decisions matter, or at least can't be predicted beforehand.
Randomness doesn't imply free will. What if you/I are NPCs that just roll the dice before doing something? It's not you that chose the outcome, it's the dice, i.e. the laws of physics.
I don't know how free will could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. So I don't believe in it, but of course in my day to day life I act as if it exists.
Yet I don't know how qualia or subjective experiences could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. But I believe I have this subjective view of the world that doesn't seem to be explainable with a set of equations.
So it's weird. At least philosophy and science agree on that.
Compatibilism is when you have both free will and determinism.
What you seem to prefer is libertarianism.
I knew you were going to say that.
agreed! and here you go! https://thoughtforms.life/symposium-on-the-platonic-space/
Fully agree, feels good to say that it's all just kind of random.
> How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?
The quantum function is the real object. Little balls we like to imagine the particles as are just perception of quantum functions very narrowed down by entangling with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
> None of the leading interpretations of quantum theory are very convincing. They ask us to believe, for example, that the world we experience is fundamentally divided from the subatomic realm it’s built from. Or that there is a wild proliferation of parallel universes, or that a mysterious process causes quantumness to spontaneously collapse.
Actually, the "many worlds" "interpretation", simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles" (my shorthand for superpositions, i.e. disjoint interactions of particles; think of all the alternate lines leading from and to distinguishable states, like star patterns). Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancels leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary umbrella seems very smart and creative, when really, it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason not to take the experimentally verified field equations at a plain reading is that the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: The total field equations preserve information - that is the plain implication and guarantee for having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, introduces a ridiculous new puzzle: Where does all that pervasively intrusive relentless injection of information (that determines every single extricable particle interaction!), come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological euphemism for "Don't Ask Questions".
The part that I have trouble wrapping my head around with the many-worlds interpretation is how I as an observer end up in one of the many bifurcations. Any links you can share that will help me understand that are welcome!
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is say that "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
> The part that I have trouble wrapping around with many worlds interpretation is how I as an observer end up in one of the many bifurcations.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how they ended up at a planet with their color. Even if for a particular copy, it seems like there should be an answer why they showed up on a planet of a particular specific color. The "why" is just: all paths were taken.
What you said here makes sense. Forgive me, but I have trouble even articulating what it is that I don’t understand correctly.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn’t be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I’m unable to grasp is that even though the wave function of the universe contains both branches, “I” somehow experience only one of the two branches.
The answer to that, I guess, is that since the two branches are nearly orthogonal they will merrily evolve independently of each other. But somehow “I” experience only one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
>But somehow “I” only experience only one of them.
Using the example from the other comment, "You" are the stream and not a drop of water in it.
In other words, you are not an entity with a unique identity that traverses the tree of possibilities. You are part of the tree; actually, part of a branch. The branch's existence and your existence imply each other, like your hand's existence and your existence imply each other. Your hand could not have existed without you (a similar-looking one could, but it wouldn't be yours), and you could not have existed without your hand (you could have had a different hand, but then that wouldn't be "you", since "you" includes the hand).
Good questions.
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you's" would each see spin up, and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down qubit results in an up-up pair in superposition with an up-down pair, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
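The "touching a qubit" story above can be made concrete in a few lines of linear algebra. This is a toy sketch where I stand in for the measurement-like interaction with a CNOT gate (my assumption, not something from the comment); the observer qubit copies the system's basis state, producing two perfectly correlated branches with no collapse:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
system   = (up + down) / np.sqrt(2)   # qubit in an up/down superposition
observer = up                         # "us", initially up

# CNOT on basis |system, observer>: flips the observer iff the system is down.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = np.kron(system, observer)     # independent product state before contact
after = CNOT @ joint                  # interaction: the states become correlated

# Result is (|up,up> + |down,down>)/sqrt(2): two branches, each "version"
# of the observer correlated with one definite-looking outcome.
print(after)  # [0.707..., 0, 0, 0.707...]

# The evolution is unitary, so no information is created or destroyed:
assert np.isclose(np.linalg.norm(after), 1.0)
```

Neither branch ever collapses; each observer component just ends up entangled with one system component, which is the "experience of collapse" being described.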
The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
That's quite a serious issue. And arguments against that, like Self-Locating Uncertainty or Zurek's Envariance, look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about the common mechanism of any of the worlds you're in. Your world may be some kind of lottery-winning statistical freak world which happens to have very unusual properties, and generalising from them is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
There are papers that “derive” Born’s rule from the many-worlds interpretation, e.g. https://arxiv.org/abs/1405.7907
I don’t claim to understand them though. I have tried.
> The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history, too much time, or too much scale, were unsuccessful arguments against many theories we accept today. Those critiques died without any need for special arguments, because they don't have a logical basis.
Also, there are not a number of many "worlds". That is a reflection of poor naming. There is an interleaving of all interactions, so if you zoom out, a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations would produce new physics in at-scale observations of our cosmos if they did exist.
> how I as an observer end up in one of the many bifurcations..
Just a pleb here, but that does not stop me from thinking about it..
I think your consciousness is a function of the world you belong to. So asking why you are in a certain world, and not in another, is like asking why you were born to your specific parents, and not to others.
So you don't end up in some fork by a roll of dice; you are already confined to, and defined by, a single branch.
So I don't think the exact "you" exists in another branch. But another consciousness that differs from "you" by only a single random event (i.e. only in the observation of that one event) exists in another branch.
And it is not like this is all orchestrated by some entity. It is just how consciousness and subjective experience emerge in mathematical structures (plus the set of random events), which do not need rendering anywhere (Mathematical Universe Hypothesis).
Once you understand the hopeless inevitability of existence, a lot of questions like the "when", "how", or "why" of our existence disappear.
You can ask if there is any proof for this, beyond thought experiments. But I think the only thing that could come close to proving it is if we exhaustively searched for extraterrestrial consciousness and didn't find any.
Everybody is utterly confused by the hard problem of consciousness. That's just how it is.
"Hard problem" makes it out to be much more difficult than it actually is. To simplify things a little bit, if you combine a spatiotemporal sense (a sense of bounded being in space and time) with a general predictive ability (the ability to freely extrapolate in time and space from one's surroundings,) "consciousness" arises necessarily. It's what having such senses feels like from the inside; the first-person view. It's a matter of degree, of course.
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
> It's what having such senses feels like from the inside; the first-person view.
The hard problem is that there is such a feeling at all.
It's not hard at all when you acknowledge that such senses exist in the world, and that you (like others) possess them. As an aside it tends to foster a certain tendency towards empathy.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
It's embarrassingly silly to say but I've frequently just boiled down the hard question to the question of "where is the experience of the color blue stored in the universe?" Even as a non-dualist, I still haven't found much of an answer that I like. I'm all ears if you've got a book recommendation.
The question presupposes that "the experience of the color blue" is a discrete object that needs a storage location. But that's the dualist picture in disguise. On a functionalist view, blueness isn't stored; it's what certain neural activity constitutively is when you're that system observing that blue.
As an aside, isn't it more weird that violet and purple look indistinguishable despite being physically so different? It's said that this is because our L-cones (red-sensitive) have a secondary sensitivity peak at short wavelengths. So violet light triggers S-cones + a bit of L-cone. Purple light (red + blue) also triggers S-cones + L-cones. Similar activation pattern = same quale. It's all functional/physical.
Read Tom Cuda "Against Neural Chauvinism." Also Daniel Dennett.
What is mysterious to me is why and how chemical reactions in a certain part of my brain create an experience of blue.
Yes some chemical change happened there, but so what.
These are not very unusual chemical reactions. They happen and are happening everywhere. Does all the chemical reactions going on generate an experience to some experiencer?
I think the flaw in your reasoning is the assumption that chemical reaction is causing the sensation of blue.
But imagine if the consciousness and what it senses cannot be separated. So the consciousness sensing blue and the chemical reaction happening in the brain, are just correlated. One did not cause the other.
One can ask where that correlation came from. I think that such correlations are inherent in the kinds of worlds where consciousness is possible.
I think everything that we observe as physical laws, causality etc, are just such correlations.
This is where these questions take me. Since the experience is the only thing I can be certain of, I'm less drawn to "everything is physical" answers and more drawn to ideas from phenomenology and Bishop George Berkeley. And since I'm not super religious, I'm not really comfortable with those "answers" either.
> On a functionalist view, blueness isn't stored; it's what certain neural activity constitutively is when you're that system observing that blue.
Why should there be anything a certain neural activity is when making an observation? This is adding something additional to functionalism. You're just sneaking the hard problem back into the picture without realizing it.
>where is the experience of the color blue stored in the universe?
It is not stored anywhere. It is part of the consciousness that experiences it. In other words, consciousness comes bundled with everything it will ever feel.
So you say that the hard problem of consciousness is explained by the fact that we appear to be conscious?
The kneejerk response would be: Are you not conscious at this present moment? If we were to modulate your spatiotemporal senses with drugs or a lobotomy, do you doubt that you would be very differently conscious, or perhaps entirely unconscious?
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
I'm not a dualist or anything. I'm in the "it's weird and I have no idea what the answer is" camp. And yes, I've read Dennett. I'm trying to understand your views. Lots of questions follow, but don't feel like I'm barraging you unnecessarily. Just trying to figure out your view with what seem to me like interesting questions that I myself can't really answer.
I'm using "consciousness", "subjective experiences", "senses" and "qualia" as synonyms here, but if you see a difference, please mention it. Obviously "consciousness" has many definitions that have nothing to do with the "hard problem of consciousness", so I'm using it in this sense here. I'll use "qualia" as it's the word that relates most to the hard problem of consciousness. You can substitute it with "sense"/"senses" if you like.
1. Do you view qualia as an emergent property? Of what exactly? What is a self-modeling system? Is a human one? Where would the boundaries be; would they even be defined? The human body or the brain only or the nervous system? Or whatever neurons activate when a certain thing happens, like seeing blue or feeling pain? What about animals - pigs, dogs, rats, snails, ants, bacteria? What about AI, current and theoretical?
2. Could there be a set of minimal self-modelling systems in some abstract space that are the boundary of what has qualia and what doesn't? Like, these 1000000 neurons arranged like that qualify, but if you take 1 out, they don't? Or is it a fuzzy boundary somehow?
3. What kind of statements could be made about the qualia of yourself and of others? Not sure what kind of answer I'm looking for, but how objective or truthful would those statements be? Maybe "qualia is nothing really, we only have the set of equations that govern physics and everything else is an abstraction"? Like an apple isn't anything really, it's just a badly defined set of atoms and energy. There is no "apple" or "chair". Or is it something else?
4. What are your views on meta-ethics and ethics in general? Should we care about it at all?
> because experience is what such a system is, from the inside.
There being an inside to self-modelling systems bound in space and time is the hard problem.
> The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
That's given from three dimensions of space. This is not the case with subjective experience. Functional and physical terms don't have an inside where experience lives. It's what makes the p-zombie argument potent.
Let's put this another way. Functional terms are abstracted from experience to model the world. See Nagel's "What Is It Like to Be a Bat?" paper on science being a view from nowhere, which is really about the fundamental objective/subjective split. Or Locke's primary and secondary qualities.
You can't get experience out of abstract terms. Experience doesn't live inside abstract concepts. We can model the world with them, but experience was left out at the start.
>You can't get experience out of abstract terms.
Would you agree that you are conscious at this point?
Would you agree that there are some set of physical laws, an initial state, and a set of random events to the universe that we inhabit?
Would you agree that if we simulate this initial state on a computer, and step through it using the set of physical laws and the random events, we will see the eventual emergence of "you", who we know is conscious?
So are you saying that the entity inside the simulation is a zombie who is not actually conscious?
How do you know they (and others) possess them?
I'd say we are confused about both the lowest (quantum) and highest level (consciousness) phenomena of the known Universe. Quite humbling.
We have a theory whose plain reading matches experiment at all scales.
Consciousness is something else. It is tempting for humans to pair mysteries up, pyramids and aliens, or whatever. But there isn't any factual basis for linking the experience of self-awareness with quantum mechanics.
Is there a factual reason we know digital minds couldn't be conscious? Where quantum effects have been isolated from the operations of mental activity. That seems like a premature constraint to assume.
I wasn't trying to link the two. Just pointed out that there seems to be a lot of unknowns on the map.
I think the MWI is actually the just-so explanation you claim to avoid.
Is it falsifiable?
If you have a theory that seems unassailable by any logic, that's a good signal it is tautological and not very useful.
Yes, the MWI is falsifiable. It asserts that objective collapse does not occur, therefore any observation of objective collapse (such as predicted by GRW or Penrose-Diosi) would falsify it.
That's not true falsifiability; its asserting a negative.
I think people resort to MWI because they think it explains everything neatly; it does not!
For example, from my perspective, it does not explain what world I end up in, and if you are saying it's random, you need to come up with a fundamental theory of randomness, unless the response is: it just exists, deal with it.
I think you're right, the many worlds interpretation makes the most sense. Unfortunately our current technology is very far from delivering any experimental confirmation or denial of any of the mainstream interpretations.
You are right, but I think there is a more positive viewpoint.
All experiments agree with the many worlds interpretation (again, better described as a quantum web interpretation), and it is the plain Occam's Razor interpretation.
No additional flourishes are needed. That is strong theoretical support. It is the default (plain reading) interpretation already.
And it is the interpretation that doesn't just conserve in one history (i.e. conservation of energy etc.), but conserves information universally.
So again, very strong specific theoretical support.
It is the conjectures about experimentally unmotivated elaborations, like "collapses", that would also break universal conservation of information, for no theoretically necessary reason, that need dramatic new evidence to prove themselves.
If I lack any optimism, it is for conjectured complications with no evidentiary support and weaker explanatory/conservation powers. In any other context, nobody would be entertaining the need for such conjectures.
The "Quantum Collapsers" are right up their with the "Flat Earthers", or solar system "Epicycle Theorists", for not being happy with accepting a working and successful theory as is. Even though their imagined shivs introduce more questions than they answer, and would dispense with its unique advantages.
What if we create a situation in a lab that can be labelled as a collapse of the wave function by interaction with a macroscopic object. Except the macroscopic object is under our control and we can reverse the collapse.
A quantum computer is such a macroscopic state.
From the decoherence / Many-Worlds view: No collapse occurred. Only entanglement happened.
Isn't there a magical moment needed still when a single qubit "touches" the rest of the universe?
It touches you, and you are just as quantum as the bit.
So two entangled versions of you follow, one entangled with each state. (Actually, as many quantum versions of you as touched the qubit, times two.)
Which is what happens, as we know from experiment, when any one qubit interacts with another independent qubit. We get the product of entangled states, each now correlated. But the different entangled states are now in superposition with each other.
So correlation/entanglement happens and is experienced, despite no collapse of superposition. No information was destroyed or created.
Each of you thinks: wow, now the qubit only has one state. But that is because there are two versions of you, correlated respectively with the two uncollapsed qubit states.
Complete conservation. That is the "experience" of collapse that needs no explanation, because it is a predicted experience not requiring an actual collapse. Just as spherical Earth models don't need a special explanation for the appearance of locally flat Earth, because spherical models predict a local flat Earth experience.
[dead]
Are the Mysteries of Quantum Mechanics Beginning to Dissolve? I don’t think so.
Zurek’s Decoherence and Quantum Darwinism is thought-provoking, but it’s still speculation without broad buy-in from researchers. We might need ASI to crack these mysteries — our brains weren’t built for this kind of problem.
I think the brains of our stone age ancestors were not built for relativity either. In the end, the normal sequence of generations (having children and then dying at some point) offers "re-trainings" of the brain. So, besides waiting/hoping for artificial intelligence, we should continue to make (and train) children. It has worked great so far.
What we need are tractable experiments to test these theories.
Maybe ASI can help design these. Until it can, it will just be another voice arguing for one position over another with pretty weak arguments. Right now my money would be more on human researchers finding those experiments, but even among those, few are even trying.
"Thus the wave function can’t tell us what the quantum system is like before we measure it. "
Nothing is a particle; all measured things are a probability that we make a certainty when we measure them.
When you stop looking at things as things and instead see them as probabilities, it will all make sense. My hand and the beer bottle I pick up are both probabilities. Since the mind cannot navigate the world based on probabilities, it turns them into certainties.
Physical science is the only way we can perceive quantum science. There is no "collapse" outside of our brain's perception.
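The "probabilities become certainties on measurement" idea corresponds to the Born rule: each measurement of a superposition yields one definite outcome, with long-run frequencies given by the squared amplitudes. A minimal sketch with hypothetical amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example state: amplitudes for outcomes |0> and |1>
amps = np.array([np.sqrt(0.8), np.sqrt(0.2)])
probs = np.abs(amps) ** 2       # Born rule: P(outcome) = |amplitude|^2

# Each measurement returns one definite outcome ("a certainty"),
# but the frequencies over many runs follow the probabilities.
samples = rng.choice([0, 1], size=100_000, p=probs)
print(samples[:10])             # individual outcomes are definite 0s and 1s
print(samples.mean())           # fraction of 1s, close to 0.2
```

The individual results are always definite; only the statistics over repeated runs reveal the underlying probability distribution.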
Why does a probability taste so good after work on a hot day?
Quite frankly, quantum computing has probably already been solved by a nation state (probably the United States). As with AI, they will release it in a safe rollout (as they deem it).
Maybe, but the AI we see in the mainstream today -- generative image/video/text creation and Large Language Model chatbots -- was built by non-governmental public and private companies, and a lot of that work hit the scene loudly and somewhat prematurely. My understanding is that the amount and type of compute needed for quantum computing is pretty intense, so there'd be a huge footprint from its manufacturing to keep hidden.
It would be interesting if most of our confusion with quantum mechanics came from treating probabilities as independent when they are actually highly correlated. I don’t really know any physics, but I’m familiar with probability and this type of problem seems to be the most common error in interpreting probabilities.
I don't have any skin in the game, but people should be aware of Induction vs Deduction.
Induction had the earth at the center of the solar system and had the best calculations to predict where Mars was. Copernicus said the Sun was at the center; the equations were simpler, but worse at predicting the locations of planets (until we figured out they moved in ellipses).
When we say "All swans are white, because I've never seen a black swan," that is probabilistically true. That is induction. If we found that swans lack the gene to make black feathers, that would be deduction.
Deduction is probably the most true, if it is true. (But it is often 100% wrong)
Induction is always semi true.
Quantum mechanics seems to be in the stage of induction. Particles are like the earth at the center of the solar system. We need a Copernican revolution.
I wonder how this work relates to Jacob Barandes’s indivisible stochastic processes.