
This is an assumption that is almost always made in science fiction, and often made in non-fiction writing about AI:
- Human Essence Assumption: There is some essential aspect of a human being that gives rise to all the other important aspects of a human being.
I’ve written eight or so articles about it. To be honest, this article is mostly a way to try to keep my thinking straight. But it might also be interesting to you all and make clear what I’m actually saying.
I am interested in the straightforward way we understand and accept this assumption, before we have had a chance to examine it or question it, as we do when we are casually watching science fiction or reading the paper. This is when it is most likely to mislead us.
In The Nishmet I showed how this assumption works in a number of common science fiction tropes. Read this first; I think it is the easiest way to convince you that (1) you understand what I’m talking about and (2) you accept the assumption without question, at least when you’re watching a movie.
Six Aspects
Notice the category-neutral term “aspect” in the assumption — I am not talking about abilities, or properties, or sentience, or a subjective self, I am talking about all of these. At once. It says: “All the important aspects.” We tend to expect that they always appear together — if you have one of them, you will have all of them. The HEA asserts that all these (ontologically different) aspects have the same source, some singular essence.
The key to refuting the assumption is to consider each of these “important aspects of a human being” and show that, with a few exceptions, none is entailed by the others. If we’re talking about machines, or mollusks, or aliens, it’s possible that they have some aspects of personhood, but not all.
I think I can capture most or all of these “aspects” and potential “essences” in these six categories.
- Human behavior. In The Tell, I discussed behaviors that demonstrate our “humanity”.
- Consciousness as a thing or “self”. See Ghosts and Other Invisible Stuff.
- Subjective Consciousness. There are many ways of using the word “consciousness”, but here I am using it as in the Philosophy of Mind: what it feels like to perceive, think, feel and act from a first-person point of view. The Paradox of Mary and Mark tried to convince you that you already know what “subjective consciousness” is. Other terms related to subjective consciousness are sentience, subjective self-awareness, lived experience, values, free will, “actually” thinking, having a mind, and having “intentionality.”
- A certain kind of Information Processing System found in human brains, but that can, in theory, be found in other things, like alien bodies or computer programs like the sympathetic operating systems portrayed in the movie Her. I discuss what information processing can do in CT+ and its relationship to subjectivity and personhood in Considering Samantha.
- Human Sympathy. We feel differently about “persons” than “machines”. The Stepford Scenario tried to determine what, exactly, it is that we care about.
- Intelligent behavior. Today, in 2026, it’s common to refer to the kind of intelligent behavior we’re talking about as “artificial general intelligence”. This is a behavior or ability — it’s something the thing can do. I talked about how the field of AI research defines intelligence in The “I” in AI.
These are very different kinds of things. We have behaviors, some information, an experience, a thing, and an effect on people. (Philosophers say that each has a different “ontological status.”)
It’s interesting to notice that no fewer than three of the aspects above are sometimes called “consciousness”. This makes “consciousness” an easy word to confuse or abuse. It’s essential to understand that these different uses of the word “consciousness” refer to completely different things.
Summary
For each of these six and a half aspects, I’ve tried to explain exactly what they are, how they differ from each other, and why none of them automatically entails or requires all the others.
I’ve found that these “aspects” cluster around two main areas that matter (information processing and personhood) and one that doesn’t (metaphysical consciousness). Here, in rapid fire, are the most important points I’ve tried to prove in these articles.
Definitions
Information Processing / Intelligent behavior / Human simulation: The only scientific understanding of “intelligence” is in terms of well-defined problems with well-defined solutions (The “I” in AI). Computers can, in theory, simulate anything, but all simulations are just forms of information processing (CT+). Information has a different ontology from ordinary physical things (CT+, Considering Samantha, The Stepford Scenario).
Personhood: Human Sympathy / Subjective Consciousness / The Body: Normal human brains make a priori assumptions about personhood. These assumptions are essential to living a normal life and to how human society functions. Human sympathy is the ground truth we use to determine personhood (The Stepford Scenario). Subjective consciousness is paradoxical and may be substantially different from our assumptions about it (The Paradox of Mary and Mark).
Invisible Stuff: Metaphysical things don’t exist, except perhaps as metaphors. (Ghosts and Other Invisible Stuff)
Relationships

Connections: Any behavior, human or intelligent, can be produced by a system of information (CT+, The “I” in AI). Subjective consciousness and the body play essential roles in how we understand personhood (The Stepford Scenario).
Disconnections: Human behavior and intelligent behavior are two different things (The “I” in AI, Requiem for the Turing Test, Considering Samantha). Intelligent behavior does not require subjective consciousness (The “I” in AI). Perfect human simulation does not imply subjective consciousness (The Paradox of Mary and Mark, The Stepford Scenario, Considering Samantha). Subjective consciousness and personhood as we understand them cannot be due to a system of information (Considering Samantha, The Stepford Scenario). People are unlikely to ever accept the personhood of a human simulation, although a few people may (The Tell, Requiem for the Turing Test). Metaphysical consciousness doesn’t connect to anything, because it doesn’t exist (Ghosts and Other Invisible Stuff).
(For those of you who are counting, I skipped one relationship in the articles: human sympathy does not require intelligence, e.g. we love our children even if they are stupid. Enough said.)