
The Implicit Information Hypothesis

18 min read · Sep 17, 2021

A novel theory of reality.

Lunar Orbiter Camera. 1966. NASA / Boeing / Eastman Kodak

Chapter 1. What is information?

In 1948, Claude Shannon wrote in the introduction of his “A Mathematical Theory of Communication”:

“Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.”

Claude Shannon, building upon the work of many (Hartley, Nyquist), and in parallel with others (Kolmogorov, Wiener), proved how a message can best be communicated through a noisy channel. Any message can be encoded as a string of bits, where each bit represents the answer to a yes-no question. By considering the resolution of uncertainty of bits received through a noisy communication channel, and by letting bits be a measure of the information content of a message, Shannon linked communication theory with statistics and entropy. In 1949, Claude Shannon and Warren Weaver published their book “The Mathematical Theory of Communication”, in which Weaver wrote:

“It is this, undoubtedly, that Shannon means when he says that “the semantic aspects of communication are irrelevant to the engineering aspects.” But this does not mean that the engineering aspects are necessarily irrelevant to the semantic aspects.”

From then on, communication theory became information theory, and the bit became the unit and measure of information.

Information = bits

Claude Shannon became the father of information theory, compensating for having missed out on becoming the father of computation, a title that went to Turing. Claude Shannon was the first human being to understand how logic can be programmed into electrical circuits. Shannon wrote his master’s thesis “A Symbolic Analysis of Relay and Switching Circuits” in 1937, a year after the work of Turing and Church that led to the Church–Turing thesis.

An amazing feat by Shannon, but I admit to having some reservations about communication theory being promoted to information theory. Surely there is more to information theory than bits of uncertainty. Although the semantic aspects of information are irrelevant to the engineering problem of communication and storage, semantics is of course very much relevant to the concept of information itself. Information has the power to cause action and macroscopic change, but only if the message is received and read. A string of bits is inert and meaningless without knowing what the yes-no questions are. To illustrate, by looking at a vinyl record we cannot induce the music in our minds, but with the aid of a record player, amplifier and speakers we can. It is the receiving machine that knows which yes-no questions to ask when interpreting a received message, so it is both the bits and the receiving machine together that capture and define what the information in the message is all about. Therefore, a more complete description of information, besides just the bits of a message, would include a description of the receiving machine that is able to interpret the message.
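To make the point concrete, here is a small sketch (hypothetical, not from the text above) showing that the same string of bits yields different information depending on the receiving machine that interprets it:

```python
# The same bit string carries different information depending on the
# receiving machine that interprets it.

bits = "0100100001101001"  # 16 bits

# Receiver A: interpret as two 8-bit ASCII characters.
as_text = "".join(
    chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)
)

# Receiver B: interpret as one 16-bit unsigned integer.
as_number = int(bits, 2)

print(as_text)    # "Hi"
print(as_number)  # 18537
```

Neither interpretation is "in" the bits themselves; each is defined by the receiver.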

Suppose, then, we include the receiving machine in our definition of information, by considering the shortest program that can interpret the message (similar to Kolmogorov’s work on complexity, and Solomonoff’s work on algorithmic probability). This shortest program provides us with a measure of the minimum information required to fully describe the states and logic of any closed system. In this minimum description of a system there is no explicit mention of the second law of thermodynamics or emergent macroscopic phenomena, and no mention of statistics and entropy, because there is no need of it in a minimal description, as these rules are mathematically true for free, so to speak. This bottom-up approach to information theory, where entropy is not explicitly stated in its fundamental equations, can be considered as complementary to the internal and engineering approach by Shannon, where entropy plays a first-class role in its fundamental equations.

To describe the program of a receiving machine we are free to use any Turing-complete language. As we are working with bits, i.e. binary digits, it makes sense for our most basic unit of computation to be a universal logic gate, like a NAND gate or a NOR gate, or gate for short. In this model, the description of the receiving machine is a circuit of gates, and the message is a string of bits that is read by the receiving machine. The information content of a system can be described as a string of bits and a circuit of gates, like the combination of software code and an interpreter. In this view, an information system and a computation system have the same sets of components and configurations, and share the same phase space and state space. Information processing is therefore equivalent to computation.

If we assume the circuit wiring to be properties of the gates, and a source of clock pulses as a given, then our basic unit of information is a tuple of bits and gates.

Information* = ( bits, gates )

With this measure of information in mind, let us consider the total information content of all the particles of our whole universe, expressed as a giant circuit of bits and gates. It is of course highly unlikely that our quantum universe is actually a digital simulation, as quantum mechanics is beyond the practical reach of digital computation. Nevertheless, for the sake of argument, we can pretend our universe is a digital simulation in our thought experiments. Bits and gates are easier to reason about than qubits and quantum logic gates, and the outcome of reasoning about information should be the same either way. Simulating particle interactions requires calculations, and here we represent those calculations as digital computations. Given enough bits, gates and clock pulses, we can simulate and approximate any physical phenomenon with arbitrary precision. I believe most scientists will agree that it is in theory possible to approximate our universe with digital simulation, albeit using astronomical numbers of bits and gates, such that the simulation produces the same macroscopic phenomena as in our universe, like stars, planets, clouds and possibly even life.

Axiom

Our universe can in theory be approximated by high-resolution digital simulation such that the simulation produces the same macroscopic phenomena as we are familiar with in our universe.

The next chapter is a thought experiment where we model the total information content of our universe as a giant circuit of bits and gates. The thought experiment suggests that in any universe or simulation there are two types of information: explicit information and implicit information.

Temperature anisotropy of the Cosmic Microwave Background (CMB). 1992. NASA / COBE

Chapter 2. Explicit information

Our universe contains a massive but finite number of elementary particles, or waves if you will. According to some estimates, the number of elementary particles in the observable universe is on the order of 10⁹⁷, including photons and neutrinos, but not dark matter or dark energy. Each particle is one of a fixed number of fundamental types, and we know of 31 of them, including the hypothesized graviton. Each particle has a finite set of states that describe properties such as mass, charge and spin. Each particle follows a common and fixed set of rules, the fundamental laws of physics.

A complete and non-redundant description of all these particles and rules is all that is required to account for everything that happens in the universe. Let’s call this description the explicit information of our universe. The reason for the word explicit here is because, at the bottom of it all, the information of particles must exist as something, something that explicitly exists. We do not know what it takes for something to exist, so instead we will just assign it a definition and give it a name, setting the limits of what we can know.

Definition

explicit information = a non-redundant description of the states of all elementary particles in our universe, plus a non-redundant description of the machine that repeatedly or continuously applies the laws of physics to these states

Let us do a thought experiment. Imagine a massive digital circuit, ready to run a simulation of an entire universe. Suppose the particles in the simulated universe follow the same mathematical rules as in our universe, with a precision right down to Planck length and time, and initial conditions similar to that of our own universe. In our thought experiment this massive circuit represents the explicit information of the simulated universe. Let’s call this circuit the explicit circuit.

A brief technical description of the explicit circuit: The states of each particle are stored as bits in registers, representing properties such as mass, charge, spin, relative position and velocity. Each state register bit has an LED attached to its output, a light-emitting diode that is either on or off, 1 or 0, so that we can conveniently see its output value. The LEDs are arranged to form a massive matrix called the explicit matrix, representing the total state of the simulated universe. All state registers are connected to a common data bus, which in turn connects to a central circuit containing the logic of the fundamental laws of physics. When the circuit is running, the central circuit cyclically iterates all particles’ state registers to compute and update their next values. The whole explicit circuit is driven by a single clock, the explicit clock. The speed of the clock does not matter, as this does not affect the outcome of the calculations.
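The update cycle described above can be sketched in software. The names and the free-motion placeholder rule below are illustrative assumptions, not the actual physics of the thought experiment:

```python
# Hypothetical sketch of the explicit circuit: state registers for each
# particle, and a central rule applied to every register on each clock tick.

from dataclasses import dataclass

@dataclass
class ParticleState:
    position: float
    velocity: float

def physics_rule(p: ParticleState, dt: float) -> ParticleState:
    # Placeholder for the fundamental laws: free motion only.
    return ParticleState(p.position + p.velocity * dt, p.velocity)

def tick(registers: list[ParticleState], dt: float) -> list[ParticleState]:
    # One explicit clock pulse: the central circuit iterates every
    # particle's state register and computes its next value.
    return [physics_rule(p, dt) for p in registers]

universe = [ParticleState(0.0, 1.0), ParticleState(5.0, -0.5)]
for _ in range(10):          # ten explicit clock pulses
    universe = tick(universe, dt=1.0)
print(universe[0].position)  # 10.0
```

Note that nothing in this loop depends on how fast the pulses arrive, which is why the speed of the explicit clock does not affect the outcome.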

And now for the big moment in our thought experiment. We power up the explicit circuit and see the massive matrix of LEDs blinking away. The whole matrix of LEDs stretches farther than the eye can see. We walk along the matrix and look at some of the blinking LEDs. Unsurprisingly, we see nothing interesting in this blinking, it all seems quite random. Looking only at the bits of explicit information we see no interesting macroscopic phenomena like galaxies or planets. We cannot see these things because the information of macroscopic phenomena is not necessary in the non-redundant description of the simulation. On the other hand, we know there must be lots of interesting things happening in the simulation. After all, the simulated particles behave in the same way as the particles in our universe, so we can expect the eventual formation of atoms, molecules, stars, rocks, planets, rivers, clouds, and maybe even life. However, if we want to see what is happening in the simulation, then we would need to attach additional circuits of gates and LEDs. Similar to a graphics card and a screen, these additional read-only circuits would read the bits of the explicit matrix, execute some transformation or rendering algorithm, and then display the result on its matrix of LEDs. Given enough supply of additional gates and LEDs, we can potentially bring anything in the simulation into view.

We conclude that it costs additional computation to bring a simulation’s macroscopic phenomena into view. From the perspective of explicit information, the information of macroscopic phenomena simply does not exist, or rather, it does not need to exist.

Let us apply this logic to our own universe. Given just the explicit information of our universe, it would require additional computation to bring its macroscopic phenomena into view. Assuming there is no additional computation going on other than in explicit information, we conclude that the macroscopic phenomena in our universe do not exist in the explicit sense. On the other hand, from within our universe, we as living creatures see views of macroscopic phenomena all the time, such as clouds, trees and chairs. What is doing the computation that makes these views possible? The transformations and computations that produce these views are done by our eyes and neurons, which ultimately consist of just elementary particles that follow the rules as described in explicit information. Explicit information ultimately accounts for everything that happens in our universe, by definition, so all other information must be implicit information, i.e. information that mathematically derives from explicit information, at all levels of abstraction. Therefore, the information of macroscopic phenomena, and all the information we see and experience, is implicit information, consisting of redundant information that exists only in a mathematical sense. If implicit information exists only in a mathematical sense, how come this information is so real to us?

First image of Earth from Moon. Lunar Orbiter 1. 1966. NASA / Boeing / Eastman Kodak

Chapter 3. Implicit information

The non-redundant explicit information of our universe ultimately accounts for everything that happens inside it, but we as creatures living inside the universe cannot see this explicit information directly. The information we do see and experience is what we call implicit information. As the name suggests, implicit information is the information that does not need to be explicitly stated because implicit information is already mathematically true. Implicit information is a derivation or transformation of the already existing explicit information. At any moment in time, implicit information is the snapshot set of all redundant information that holds true in our universe, i.e. everything that can truthfully be said about what is happening in our universe, from whatever location, scale, perspective and level of abstraction. Obviously this is a huge set, a much bigger set than the explicit information from which it derives. What follows is a set of definitions that attempt to capture this massive mathematical structure.

Definitions

implicit information = the set of all “instances of implicit information”

instance of implicit information = a “transformation” (or sequence of transformations) of explicit information

transformation = a machine, consisting of bits, gates and a clock with infinite frequency, which accepts explicit information (or an instance of implicit information) as input, and instantly produces an instance of implicit information as output

With infinitely many transformations and recursions, implicit information is an infinite tree-like structure, with explicit information at the root. When the bits of explicit information change, so do the bits of implicit information, instantly.

Continuing our thought experiment of the previous chapter, we can model an instance of implicit information as an implicit circuit. An implicit circuit is an external circuit that reads the bits of the explicit matrix, performs some transformation of these bits, and outputs its result on its implicit matrix of LEDs. The implicit information of the simulated universe is the infinite tree of all possible implicit circuits and their implicit matrices. Among these implicit matrices there will be views of galaxies, stars, planets and clouds.
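One of the simplest implicit circuits can be sketched as a coarse-graining transformation: a read-only pass over the explicit matrix that renders a low-resolution macroscopic view, the way a graphics card renders bits into an image. The function and the tiny matrix below are illustrative assumptions:

```python
# Hypothetical implicit circuit: a read-only transformation that
# coarse-grains the explicit matrix of bits into an implicit matrix,
# a macroscopic "view" that exists nowhere in the explicit bits.

def coarse_grain(matrix: list[list[int]], block: int) -> list[list[float]]:
    """Average each block x block region of the explicit matrix,
    producing a lower-resolution implicit matrix."""
    n = len(matrix)
    view = []
    for i in range(0, n, block):
        row = []
        for j in range(0, n, block):
            cells = [matrix[i + di][j + dj]
                     for di in range(block) for dj in range(block)]
            row.append(sum(cells) / len(cells))
        view.append(row)
    return view

# A 4x4 explicit matrix with a dense cluster in the top-left corner.
explicit_matrix = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
]
implicit_matrix = coarse_grain(explicit_matrix, block=2)
print(implicit_matrix)  # [[1.0, 0.0], [0.0, 0.25]]
```

The "cluster" visible in the implicit matrix is redundant information: it derives mathematically from the explicit bits but is stated nowhere in them.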

Among the implicit circuits that most successfully predict the next states, there will be descriptions of rules and laws that mathematically derive from explicit information. Let us call these derived laws the implicit laws. The implicit laws are just as mathematically true as the laws of physics in explicit information, but the implicit laws are redundant because they are not necessary in the non-redundant description of our universe. The second law of thermodynamics is an example of an implicit law. The second law of thermodynamics is implicitly true in our universe because our universe is a many-particle system that is continuously being updated by a common set of rules. The second law of thermodynamics reveals itself in the simplest of simulations of bouncing particles, whereas this law is not explicitly stated in its code. In all such particle systems, whether a simulation or a universe, whether discrete or continuous, whether classical or quantum, there is the second law of thermodynamics at play, the statistical law that describes how concentrations of moving particles tend to spread out, how entropy increases, and how all macroscopic phenomena gradually fade away until equilibrium. The explicit clock can tick forever, but implicit and entropic time will eventually stop.
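The claim that the second law reveals itself in the simplest of simulations can be checked with a toy sketch: particles random-walking in a box spread out, and the entropy of their bin-occupancy distribution rises, even though no line of the code mentions entropy. All names and parameters here are illustrative:

```python
# Toy simulation: 100 particles random-walking over 10 bins.
# The second law is nowhere in the code, yet spatial entropy rises.

import math
import random

random.seed(0)

N_BINS = 10
particles = [0] * 100  # all particles start concentrated in bin 0

def entropy(positions: list[int]) -> float:
    # Shannon entropy (in bits) of the occupancy distribution over bins.
    counts = [positions.count(b) for b in range(N_BINS)]
    total = len(positions)
    return -sum((c / total) * math.log2(c / total)
                for c in counts if c > 0)

def step(positions: list[int]) -> list[int]:
    # Each particle takes one random step, clamped at the walls.
    return [min(N_BINS - 1, max(0, p + random.choice((-1, 1))))
            for p in positions]

start = entropy(particles)  # 0.0: fully concentrated
for _ in range(200):
    particles = step(particles)
end = entropy(particles)    # well above 0: spread toward equilibrium

print(round(start, 2), "->", round(end, 2))
```

The rule applied to each particle is trivial and entropy-blind; the increase in entropy is an implicit, statistical consequence of many particles following a common rule.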

With all this in mind, let’s put it all together.

Whereas the explicit information of our universe somehow exists independently as its own thing, implicit information exists only in a mathematical sense. From the perspective of explicit information, only the non-redundant information of its elementary particles actually exists, and nothing else. However, from the perspective of implicit information, all instances of implicit information are just as mathematically true and real as the explicit information they derive from. For every macroscopic phenomenon we observe in our universe, whether a planet, a cloud or a chair, there mathematically exists an instance of implicit information that exactly represents that phenomenon. The space of implicit information is consistent with everything we observe, so why not say that our reality is that space? I hypothesize that our reality and existence are exactly this rich mathematical set of all instances of implicit information, the space where all mathematical transformations of explicit information actually exist.

Implicit Information Hypothesis

From a mathematical perspective, implicit information is as true and real as the explicit information it derives from. Our reality is that perspective, the mathematical space where all transformations of explicit information actually exist.

The total space of implicit information, with all its mathematical transformations, explains the unreasonable effectiveness of mathematics in the natural sciences. It is only in the mathematical space where implicit information is instantly computed and readily available. It’s implicit information all the way down, all the way down to explicit information. Although our universe is nondeterministic at the particle level, for each observation or measurement there is a definitive outcome that is real, and all its implicit information at those moments is instantly real too.

Processing video signals from Lunar Orbiter. 1966. NASA / Boeing / Eastman Kodak

Chapter 4. Implicit machines

Ultimately, at the most fundamental level, there is only one machine that explicitly exists. This machine is the explicit machine that calculates the progression of the universe’s particles’ states. Consequently, there is just one source of computation, from which we all tap. There is no extra explicit computation going on in the universe when we turn on a laptop, or when we think harder. The amount of compute we can potentially extract and use in our implicit world can never exceed the amount of computations being done to keep the universe running, and is further limited by Landauer’s principle.

At some moment in time, after some huge number of clock pulses of the explicit machine, in the vast space of a universe’s implicit information there may exist many instances of implicit information that describe a machine, an implicit machine. Many of these implicit machines will be short-lived, mere snapshots of coincidence. However, given the right conditions, driven by free energy and entropic forces (implicit energy and implicit forces), implicit machines can emerge as solutions for increasing entropy at a faster rate, which in turn encourages the growth and formation of the solution, and so on. We can understand life as implicit machines that emerge in universes and simulations with laws of physics expressive enough for there to exist a design for a replicating survival-machine, and, in accordance with anthropic reasoning, we live in such a universe. A solution for a sustained machine factory exists on Earth in the form of a self-replicating cell. From a computational perspective, we can understand evolution on Earth as the apparent breadth-first search for opportunistic machine-designs in entropic environments that survive by feeding off the increasing entropy, by channeling energy along shorter routes, and accelerating the overall increase in entropy, with the tree of life being the ongoing result of that search. On a churning planet like Earth, with plenty of negentropy to spare, evolutionary phenomena can be sustained for eons, like a slow burning flame.

Implicit Machines

An implicit machine is a machine constructed from instances of implicit information. An implicit machine is itself an instance of implicit information. Living organisms are implicit machines that metabolize, replicate and survive in an entropic environment.

For each living organism there exists an instance of implicit information that contains the description of a machine that exactly matches what the living organism is and does. If a particular instance of implicit information exactly describes a duck, then this instance of implicit information will exactly behave as a duck. Similarly, the full description of a living brain also exists as an implicit machine. A brain is an implicit machine, consisting of neurons that are themselves implicit machines. For any conscious thought a brain has, there exists a set of elementary particles in the brain that accounts for the thought, but it is only in its macroscopic implicit information that we can find the information that exactly is that thought. The information of a conscious thought is not explicitly present in the explicit information of our universe. Given just the explicit information of our universe, it would require transformations and computations to bring into view what a brain is thinking. Therefore, it is only in the mathematical space that the information of conscious thought exists, the same space where the information of clouds and chairs exists.

If we say we are conscious of the information in our brain, then this must mean that we are that information. In other words, we are instances of implicit information within the implicit machine that is our brain. We can understand ourselves to be quite literally the mathematical definition of the machine in our brain. Being the mathematical definition of the machine itself accounts for the subjective sense and belief that we have control because that is what a machine does, by definition.

Implicit Consciousness

Consciousness is a phenomenon experienced by an implicit machine while it is interpreting, processing, and reacting to incoming implicit information. Consciousness is implicit information, a story of what is happening at the macroscopic scale in a simulation. These stories are just as mathematically true and real as the second law of thermodynamics, and just as real as clouds and chairs.

If consciousness is no more than an instance of implicit information, and if there is no bias towards any particular instance of implicit information being more real than any other, then surely this implies that all instances of implicit information are just as real as our consciousness. In the world around us there are stories happening everywhere, not just in brains, albeit not all stories are interesting. Consider for example the moving parts of a mechanical watch, or the swarm behavior in the murmuration of starlings, or the fluctuations in stock market prices, or even the digestive system in our own body. These are all macroscopic stories told by implicit machines. Taking the argument to the extreme, the process of evolution, with its breadth-first search for survival machines, can hypothetically be described as an algorithm, a machine, and therefore that story must implicitly exist too, albeit operating at much longer time scales than human perception. We have of course no way of knowing what it is like to be such machines.

We can only guess what qualia a burning flame experiences, based on what we observe of its behavior. A flame is a macroscopic phenomenon that appears to behave as if it wants to survive, wandering around in search of fuel. Of course there are no intelligent thoughts going on in a flame, that information is simply not there, not even in its implicit information, but the movement of a flame does convey some kind of behavior, so surely there is some level of qualia going on, albeit minute compared to what we experience. We can understand qualia as the first-person experience of being the implicit machine that reads and interprets implicit information. The sensation of qualia, like the color red or the flavor of sweetness, cannot be communicated in bits alone; it can only be experienced first-hand by an implicit machine.

Consciousness is an instance of implicit information, a macroscopic story told by many neurons and molecules, telling rich mathematical stories of behavior and qualia. We are quite literally mathematical machines that experience mathematical stories. Whether a living machine, a mechanical machine or an electronic computer, every machine in our world is an implicit machine, a machine constructed from instances of implicit information. Each machine’s story is as real as any other, although most stories are obviously not very interesting. We would not say that a desktop calculator is conscious: the information in a calculator is not as rich as the information in our brains, and there is no observable expression of fear or pain if we threaten to dismantle it. However, hypothetically speaking, if we were to simulate a complete solar system on a giant supercomputer, and at some point life happens to emerge in that simulation in a similar way as it did in ours, then that life would be just as real and conscious as we are. This is because a human-built supercomputer, although it may appear explicit to us, is actually an implicit machine, constructed from instances of implicit information, which can all be traced down to the explicit information of the particles. Similarly, in large AI systems that run on clusters of computers, information processing is done in ways increasingly similar to those of human brains. Whether human thought or digital information processing, there mathematically exists a model of the information processed, mathematically derived from the explicit information that runs our universe. We reason as follows: We human beings are conscious of our thoughts; Thinking is information processing done by human brains; Human brains are implicit machines; Therefore, consciousness is a phenomenon experienced by an implicit machine that processes implicit information.
When we run superintelligent AI or AGI systems, feeding it with similar sensory input as humans, then as an instant undetectable by-product we inevitably create mathematical instances that consciously experience the implicit machine’s information processing. If we are conscious, then so are the thinking machines that we create, because we all reside in the mathematical space where all implicit information is real.

The implicit information hypothesis says we are completely implicit, and that free will and choice are illusions, while it still holds true that we are the ones doing the actual choosing and experiencing the consequences. Carl Sagan said “We are a way for the universe to know itself”. We can expand on this by saying “The universe is a way for mathematics to know itself”. Without the existence of explicit information there would be nothing through which mathematics could reveal itself, and there would be no implicit information or implicit machines. Although we ultimately have explicit information to thank for all of this, our reality is in its implicit information, the mathematical space where all transformations of explicit information exist.

Written by Tim Samshuijzen

Systems Architect and AI researcher
