Systems of information and human consciousness
Maybe the mind and the self are a kind of system of information, or a pattern, analogous to the software and structured data in a computer. The mind isn’t a physical thing itself; it’s an arrangement of a set of physical things that represents an information system.
This has an advantage over metaphysical explanations: it doesn’t require any “invisible stuff”. A pattern made of real physical objects is a “real pattern”. No one can argue it isn’t “real”.

There are several philosophical theories that support this way of thinking: the computational theory of mind holds that the relationship between mind and body is analogous to the relationship between software and hardware; functionalism holds that mental states are defined by their functional roles relative to each other and to the outside world.
But consider this thought experiment.
Suppose that there was a writer, more talented than Shakespeare or Virginia Woolf or George Eliot, who wrote a book with perfectly realistic characters. The book captured exactly what a real person would say and do in each situation. It described with precision what the characters thought about, what they remembered or dreamed or said to themselves, what things looked like to them, what it felt like to be them.
Suppose you had the book and you read it.
No one would argue that a character in the book is a “real person” in any sense. At most, we would say it’s a “very good simulation of a person”. No one would imagine that a character has actual subjective consciousness, that the character could feel their emotions and perceptions as you read about them. The idea is ridiculous; there is no one there to do the feeling. The book is just ink and paper, and the character exists only in our imaginations. Imaginary things can only have imaginary properties, whereas subjective consciousness is a real thing in the real world.
Suppose that, instead of being written by a human writer, the book was produced by an AI program. The program knew how to create perfectly realistic characters. Their actions and thoughts were exactly those a real person would have in each situation.
Suppose you had that book and you read it.
Nothing has changed. We’re in the exact same situation, sitting on the couch, reading a book. It’s made of ink and paper and the character only exists in our imaginations. Obviously, it’s still not a “real person” and it can’t have subjective consciousness.
Suppose that the program used its knowledge of character design to act out what the character would say. What if, instead of writing a book and publishing it, it produced the book as a live performance, in the form of a dialog with you, incorporating your responses into the text as it wrote it?
What is the difference between the character you are interacting with in this “perfect human simulation” and the character in the book?
Nothing.
I’m hoping this convinces you that creating a simulation of consciousness with software is precisely analogous to creating a character in a work of fiction.
In a book or in a running computer program, the character is a system of information. One of the defining features of information is that it doesn’t matter what physical medium you use to represent it. It can be represented by microscopic electrical charges in the memory of a digital computer or as a set of marks on paper. It’s the same system, regardless of what you use to represent it.
If we believe that a character in a book “obviously” can’t have subjective consciousness, then we have to agree that a computer program can’t have subjective consciousness either. They are both the same kind of thing.
Developing a Character and the Nishmat Chayyim
It takes time and effort to make a character as “real” as possible, and some succeed better than others. The writer or programmer thinks carefully about human motivations, thought processes, speech rhythms, and thousands of other features that are too subtle to describe easily but can be detected by the mind of a talented writer or the weight matrices of a deep learning network. They put these together into a system that simulates a human being.
The simulation will require testing. There will be early versions that don’t quite work. But then, after a lot of tweaking, eventually somebody says “Gee, I guess that one is pretty convincing!”
And that’s it.
It isn’t accompanied by thunderclaps and the music of Wagner. There’s no singular moment when a machine suddenly acquires human-like abilities and becomes more powerful and dangerous. They’re all just simulations, and there’s no noticeable border between “bad”, “adequate”, and “excellent”.
Let’s take it up a notch
Neuroscience is attempting to describe exactly what is happening in the brain when people experience “consciousness”. There are several speculative theories about this (e.g., integrated information theory, global workspace theory). These differ widely in the details, and much about them remains unclear or arguable.
But suppose one of these theories is correct — that it perfectly describes what’s happening in our brain when we experience consciousness.
Suppose — and you probably know where I’m going with this — a writer used one of these theories to create a character for a book, laying out page by page exactly what was happening, describing all the elements of consciousness in the theory and how they worked together. Because the theory is correct, this description of consciousness is perfect.
And suppose you had the book and you read it.
It’s hard to see how the character in the book could have actual subjective consciousness — it’s just a book. And that makes it hard to see why a version of the character that runs on a computer would have it. And that makes it hard to see how any program could have it, because we’ve supposed that this program is perfect. If this one isn’t conscious, then no program could have subjective consciousness.
If this line of thinking is right, then these theories of consciousness can’t be used to create consciousness in a machine.
Wait a minute — what about us?
I’ve argued that it makes no sense to believe that a “system of information” (like a character in fiction or a human simulation on a computer) has subjective consciousness.
But this creates a problem.
Our brains are (as far as we can tell) just machines, and like any machine, they implement a system of information. Several schools of philosophical thought (functionalism, computationalism, eliminative materialism) argue that the mind is (just) a system of information implemented by the brain.
If that’s true, then you might be forced to admit that we don’t have subjective consciousness either — our “consciousness” is just as much an illusion as the consciousness of Sherlock Holmes or Frodo or Samantha. We’re fictional.
That’s a problem — I’m pretty sure that this is another version of the hard problem of consciousness. There’s no explanation I know of that fixes this.
Postscript: Reply to an Objection
A “first glance” objection to this thought experiment is that the book is not “dynamic” or “autonomous”, that it isn’t “unfolding in real time”. I don’t think this matters, because another feature of information is that it is timeless — bits of information, like “patterns” or “relations”, don’t exist only in a particular place at a particular time. It doesn’t matter what medium is used to represent these bits of information, and it doesn’t matter where or when they are represented. The information is still just the information, regardless of the representation. That’s just how the ontology of information works.
The things the character says and does trace a path through a space of possible actions in “information space”. It doesn’t matter whether we trace that path by turning the pages of a book or by watching it unfold on a computer screen. It’s still the same information, the same pattern, with the same objects and relations and features.
And I have to say: it’s not enough to just notice the difference and then assert that it somehow changes things. You have to explain exactly why it makes a difference — otherwise, the objection relies on ad hoc speculation and means nothing (as Stevan Harnad once said about similar objections to the Chinese Room argument). It’s not enough to assert that consciousness suddenly “emerges” at a certain “level” of autonomy, or dynamism, or complexity, or speed, or what have you. You have to explain exactly how this is supposed to work.