Right, but adaptive refinement is the bread and butter of today's physics simulations. I imagine that in order to simulate our Universe, you'd go down to the elementary-particle level only when some of the simulated folks run their accelerators, etc. But for an in-simulation billiards game you'd use plain impulse-based rigid-body dynamics.
The problem isn't particles, it's particle interactions and the orders of magnitude in time at which these events occur. There is a reason why the planck length is the shortest theoretical measurable time interval. I'm sure you are aware of the equation E=mc^2, at the planck length the value of c=1. Therefore a planck length, cubed, of the universe can be considered to be 1 bit, for the purposes of storage for a universe, or at least this one.
I'd suggest starting with the wikipedia article on the Planck length (https://en.wikipedia.org/wiki/Planck_length) to see how strongly it conflicts with what you have written about it.
Among other things catalogued there, the Planck length has dimension of [Length], rather than [Time], and per Bekenstein [1973] (doi:10.1103/PhysRevD.7.2333) "1 bit"[0] relates to the minimal increase in the area of the event horizon of a hairless, monotonically growing, stationary, spherically symmetric (or with some further assumptions, axisymmetric) black hole into which matter is being thrown. This remains contentious because black hole mass is continuous and not discrete (one can throw almost arbitrary wavelength photons in, for instance).
By contrast with what you wrote, if you look at it (e.g. via sci-hub) Bekenstein's PRD paper sure doesn't assert that a horizon area on the order of Planck area would contain "one bit", especially as he was aware of the content of the about-to-be-published Nature paper, Hawking [1974] (doi:10.1038/248030a0), which details the "explosion" of black holes with small horizon areas (and was additionally the first Hawking radiation paper).
Finally, if you don't like wikipedia and sci-hub, your favourite search engine will surely supply numerous discussions among working physicists (including peer-reviewed publications) of the Planck length and whether it is physically significant in any context other than Bekenstein's or close relatives (e.g. Loop Quantum Gravity requires that all surface areas are quantized, although not to integer multiples of the fundamental quantum, which in turn is roughly on the order of the Planck length squared).
--
[0] A modern semiclassical statement of this is: to leading order, the entropy of a black hole is proportional to its event horizon area at one nat per four Planck areas.
https://en.wikipedia.org/wiki/Nat_(unit)
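To put a number on that footnote, a back-of-the-envelope sketch in Python (the one-solar-mass example and the rounded constant values are my own choices, not anything from Bekenstein's paper):

    import math

    # Approximate SI constants (rounded values)
    c    = 2.998e8       # speed of light, m/s
    G    = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34     # reduced Planck constant, J s

    l_p2 = hbar * G / c**3               # Planck area, m^2 (~2.6e-70)

    # Schwarzschild black hole of one solar mass (illustrative choice)
    M   = 1.989e30                       # kg
    r_s = 2 * G * M / c**2               # Schwarzschild radius, m
    A   = 4 * math.pi * r_s**2           # horizon area, m^2

    S_nats = A / (4 * l_p2)              # entropy at one nat per four Planck areas
    S_bits = S_nats / math.log(2)        # the same entropy expressed in bits

    print(f"horizon area ~ {A:.3e} m^2")
    print(f"entropy      ~ {S_nats:.3e} nats (~{S_bits:.3e} bits)")

That comes out around 1e77 nats for a solar-mass hole; the "one bit per Planck area" phrasing differs from the one-nat-per-four-Planck-areas statement only by an O(1) factor.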
Let us begin with a maximally dense object, a black hole. It will be assumed that the entropy is found on the horizon and that no more than one bit per Planck area can be stored there.
The only thing I stated incorrectly was planck cubed instead of square.
The metaphorical name of the principle (’t Hooft, 1993) originates here. In many situations, the covariant entropy bound dictates that all physics in a region of space is described by data that fit on its boundary surface, at one bit per Planck area.
An important implication of the holographic principle is that all the information in a given region can be encoded on a surface B, at a density of one bit per Planck area. We can now ask ourselves if the information contained in an entire spacetime can be encoded on a certain hypersurface, which we will call a screen.
What's that you were saying? I can't hear you over the sound of how right I am.
And why is a planck area one bit?
Because at the planck length, h=G=c=k=1. Where h is the Planck constant, G is Newton's constant, c is the speed of light and k is Boltzmann's constant. Or if you like some Einstein. E=mc^2 reduces to E=m
I'm not sure what you're trying to convince anyone of, other than that you have read at least some of several documents dealing with gravitational physics. That's promising, so I'll react. I'm not a string theory fan, but don't let that discourage you from digging deeper into the field.
> "no more than one bit per Planck area can be stored there"
Sure, that's a postulate to help clarify Susskind's string-theory argument about the information content of his 2d holographic screen at infinity from the black hole, with all of the above in a 3d spacelike hypersurface at t=const. Being a postulate, the sentence fragment you quote is not proven in the paper; it's just assumed.
Changing the number of states at the horizon in that spacelike hypersurface to an arbitrary finite number does not really frustrate his gravitational argument (but see below about his matter GUT argument), while arguments about the upper and lower limit of states at the horizon are available in Bekenstein [1973] op. cit. and many subsequent papers.
It is perfectly normal for gravitational arguments to set c = G = 1, and possibly normalize some other terms to unity too; the choice of what to set to unity depends on the trade-off between ease of writing down formulae and the difficulty of checking their dimensionality. The reason Susskind uses Planck units, and, relevantly to our discussion, the Planck length, is that it's convenient in analogies as he develops his argument further down the paper using units where the string length is set to unity.
(The majority of the Susskind paper is heavy (pardon the pun relating to scaling of interaction with momentum) with non-gravitational string physics, and I have no expertise on that, but his gravitational arguments are low energy ones, and there I am comfortable).
> The only thing I stated incorrectly was planck cubed instead of square.
You wrote, "shortest theoretical time interval". The word "time" is incorrect since it is a unit of length (this matters in a Lorentzian spacetime), but changing it to "spatial" does not let you claim that the Susskind paper supports your statement at all, for the reasons above.
You can certainly make arguments about states on a stretched horizon (Susskind does in the paper you found) -- complementarity is wildly popular with string theorists. However, their entropy-as-information argument, as also repeated in the Bousso paper and the master's thesis you found, does not set a minimum length scale; rather, it fixes a limit on the number of microstates you can squash into an area before the macrostate resembles a hairless black hole. The idea is that any sparser region's macrostate grows gently, and moreover that you can, with some care, use the macrostate to describe all the internal microstates (which is essentially the core content of SUGRA theories, where the care one takes is in the choice of a conformal field theory to represent the evolution of the macrostate).
> "a planck area [is] one bit [b]ecause at the planck length, h=G=c=k=1"
No. More on that in the paragraph after the following one.
You also wrote "the equation E=mc^2, at the planck length the value of c=1", which is a bit confused. All Planck units set five constants, c = G = hbar = k_e = k_B = 1. Any quantity measured in Planck units will have c = 1 (and also Boltzmann's constant = 1 and the Coulomb constant = 1, etc.). The partial dispersion relation you provide has nothing to do with it, other than that you can solve it for an appropriate system in Planck units just as in S.I. or cgs units, and you can simplify by dropping the term normalized to unity, just as your version has already simplified from the fuller special relativistic relation (E^2 = (pc)^2 + (mc^2)^2) by normalizing momentum to zero.
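To make the length-versus-time point concrete, here's a small sketch (Python, with rounded SI constant values I supplied myself): the Planck length and the Planck time are distinct dimensionful quantities related by c, and they only both read "1" once you measure everything in Planck units.

    import math

    # Rounded SI values (my inputs, not from any of the papers discussed)
    c    = 2.998e8      # m/s
    G    = 6.674e-11    # m^3 kg^-1 s^-2
    hbar = 1.055e-34    # J s

    l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m  (a length)
    t_planck = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s  (a time)

    print(f"Planck length: {l_planck:.3e} m")
    print(f"Planck time:   {t_planck:.3e} s")
    print(f"ratio l/t:     {l_planck / t_planck:.3e} m/s  (= c)")

    # In Planck units both are 1, but they remain different kinds of quantity;
    # the conversion factor c reappears the moment you switch back to SI or cgs.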
In the stringy arguments, the reason there is supposedly four nats or one bit or some other small finite number of microphysical degrees of freedom per unit area on the horizon of a black hole is the complementarity described in another Susskind paper: https://arxiv.org/abs/hep-th/9306069 (amusingly, the Abstract misspells "coarse graining", which appears correctly on e.g. p. 3 just above Postulate 3 and several more times throughout the paper; the published PRD version has the same error in its Abstract).
I do not find the BH complementarity model especially convincing: it was contrived to save information from being lost in traditional black holes but does not appear to do the job, appearing to require the replacement of black holes with fuzzballs or some other complementarity-preserving resolution of the AMPS problem. Their cosmological model is even shakier, since it is mostly tested on a boundary at non-infinite (but large) distance in anti-de Sitter space, with some arguments about how a series of slices of AdS space can resemble a series of slices of a space much more like our expanding universe. The master's thesis you found is interesting in that it tries to tackle the physics in (among others) de Sitter space, which is a fair approximation of our universe at late epochs. YMMV.
Even though your two-link reply seems like an obvious "I don't want to talk any more", I'll leave you with three things.
Firstly, there is nothing special about natural units compared to any other system of units, except that some popular formulae take on especially simple forms in them, provided one takes care not to lose dimensionality:
You are (still) confusing two different dimensions, length and time. Again, this matters in a spacetime like ours, where the difference in dimensionality gives us a system of causality[1] and a reasonable setting in which to do time-series physics[2]. It also matters when switching from natural units to a different system of units in which c != 1, where we can no longer omit the conversion constant (or the change of sign when calculating spacetime intervals[3]).
Describing a physical system like a black hole in one system of units rather than another does not change the physics of the system, just how we describe it. It is enlightening to do so in general, because it is easy to fall into the trap of treating as physical a condition that vanishes upon switching to a different set of units. A quantity that is a ratio of two dimensionful quantities (area and information) and survives changes of units on one or both dimensions may be interesting. In the case of black hole complementarity, for which you earlier found links, it's a (near-horizon) density that (according to some string theorists) corresponds directly with an (interior) density (states / volume), taking into account the Ricci tensor's effect on interior volume but removing the physical singularity through string interactions in a "fuzzball" or something similar[4].
Secondly, your second link says of Planck time, "Presumed to be the shortest theoretically measurable time interval (but not necessarily the shortest increment of time - see quantum gravity)", which does not really support your position. Rather than reinvent the wheel for you in qualifying that, which I think would be wasteful given your previous few replies, I'll direct you to physics.se and in particular to Lubos Motl's 85-point comment at: https://physics.stackexchange.com/questions/9720/does-the-pl...
Finally, I don't know what you're trying to accomplish although it seems you're set on convincing yourself and perhaps some people who know even less physics than you do that you know what you're talking about. Hopefully it's more optimistic and instead you're trying to learn more than you already do and are just going about it inefficiently.
[2] https://arxiv.org/abs/1505.01403 (section 4) - sadly wikipedia has only scattershot coverage of it in e.g. the short and/or not very accessible pages on the ADM and BSSN formalisms, and on Canonical Quantum Gravity (which is a quantization of the Hamiltonian formulation that is "only" an effective field theory good to one loop, and provokes the question of "the problem of time"). Unfortunately most introductory material about 3+1 formalisms in general is already highly technical (e.g., textbooks aimed at graduate students).
Some combination of the holographic principle and the information overhead of classically representing a quantum state? (I don't know, I'm not a physicist.)
Surprised that the currently exponential overhead for simulating a quantum system is a distant second in commenter objections here. "Simulating the universe" in the holistic manner that I feel such a phrase entails would require at least clearing that bar (say, with a quantum computer? Or at a longer shot, showing BQP = P?), and that's before thinking about the status of quantum gravity.
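As a rough illustration of that overhead (my own toy arithmetic, not anything from the article): a dense classical simulation of n qubits stores 2^n complex amplitudes, so memory explodes long before anything universe-sized.

    # Memory to hold a dense n-qubit state vector, assuming 16 bytes
    # per complex amplitude (two 64-bit floats).
    def state_vector_bytes(n_qubits: int) -> float:
        return 16.0 * (2 ** n_qubits)

    for n in (30, 50, 100, 300):
        print(f"{n:>4} qubits -> ~{state_vector_bytes(n):.3e} bytes")

    # 30 qubits is ~17 GB; 50 qubits is tens of petabytes; 300 qubits needs
    # more amplitudes (~2e90) than there are atoms in the observable
    # universe (~1e80).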
That's not even my second objection. My second objection is the rate of entropy production of the computer that would be making that computation: the amount of mass the computer would require, and how long it could continue to make computations before its own heat death. You would also have to consider that performing the multivariate calculus of an expanding universe that is completely gravitationally bound would require the computer performing the calculations that govern the simulation to expand at a rate faster than the simulation it is computing. Eventually, whatever the rate of entropy production is in the universe of the computer simulating our Universe, it would exceed the machine's capability to exchange information from one end to the other as it starts to expand faster than whatever the equivalent of the speed of light is in the simulating universe.
There's a lot of subjectivity to the "we can simulate the universe" variety of claim stated by the article, but I'd tend to forgive considerations of the scale of the entire universe in that claim. It should be obvious that simulating the trajectory of every last particle is straight-up impossible. (Even in principle: you could probably show outright impossibility on logical grounds with a diagonal argument à la the time hierarchy theorem.)
Instead I'd take it as meaning something like, "any slice of physical phenomena there is to observe in the universe, we can simulate given reasonable resources to do so." So, put an imaginary box around some reasonably isolated plot of reality, pick your precision and your time scale, and you could replicate what happens in that space with a "reasonable" computational resource overhead. That's what elevates the quantum simulation overhead objection to number 1 in my mind.
I ascribe a high likelihood that our universe is infinite in extent, so my answer would be ∞. But for any finite volume, the Bekenstein bound will give you an upper limit.
I may be incorrect but the radius of the upper bound of the Bekenstein bound would exceed the length of the lower bound of the whole universe. I came up with my number using the observable universe as the radius and the planck length as the volume. Your number would far exceed mine, I think.
Edit: Ah I'm reading this wrong. You are making the argument that an informational density that large would cause a singularity. You are correct, probably.
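For what it's worth, here's a back-of-the-envelope comparison of the two numbers (a sketch with my own inputs: a ~4.4e26 m radius for the observable universe and rounded SI constants). The naive one-bit-per-Planck-volume count comes out vastly larger than the area-scaling holographic bound, which is the sense in which that informational density would collapse into a black hole.

    import math

    c, G, hbar = 2.998e8, 6.674e-11, 1.055e-34   # rounded SI values
    l_p = math.sqrt(hbar * G / c**3)             # Planck length, ~1.6e-35 m

    R = 4.4e26                                   # observable-universe radius, m (approx.)

    volume_count = (4 / 3) * math.pi * R**3 / l_p**3     # one "bit" per Planck volume
    area_bound   = 4 * math.pi * R**2 / (4 * l_p**2)     # one nat per four Planck areas

    print(f"per-Planck-volume count : ~1e{math.log10(volume_count):.0f}")
    print(f"holographic area bound  : ~1e{math.log10(area_bound):.0f}")

That prints roughly 1e185 versus 1e123.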
There are some assumptions we can make about an upstream computer's physics, things such as the fundamental constants. Not that the constants would be the same upstream, but that those constants need to be defined in order for a fine-tuned universe to be complex enough to have something like a computer.