The Nishmat

Here are a couple of examples of a common trope in science fiction.

Terminator 2: Judgment Day (1991):

The Terminator: “… [Skynet] goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”


Sarah Connor: “Skynet fights back.”


The Terminator: “Yes.” 

The Moon Is a Harsh Mistress, Robert Heinlein (1966):

When Mike was installed in Luna, he was pure thinkum, a flexible logic — “High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV, Mod. L” — a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of time and Luna Authority never believed in idle hands. They kept hooking hardware into him — decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors.

And woke up.

Am not going to argue whether a machine can “really” be alive, “really” be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can’t see it matters whether paths are protein or platinum. 

(“Soul?” Does a dog have a soul? How about cockroach?)

Remember Mike was designed, even before augmented, to answer questions tentatively on insufficient data like you do; that’s “high optional” and “multi-evaluating” part of name. So Mike started with “free will” and acquired more as he was added to and as he learned — and don’t ask me to define “free will.” If comforts you to think of Mike as simply tossing random numbers in air and switching circuits to match, please do.

I call this moment the “Nishmat” because it reminds me of the moment God put the breath of life — the nishmat chayyim — into the first human being. Time stops, everything changes all at once, cue transcendent music from Wagner or Strauss.

There are many other examples besides the two I quoted above. Even when it is not mentioned specifically, it is often assumed that one of these moments has occurred off-camera sometime in the past. 

When I read these tropes, I know immediately what the writer is talking about and I have no trouble understanding all the implications. I’m guessing this is true for everyone. (Why else would so many writers use it?) 

The writer is telling us that something important has happened to the machine — it has acquired a new feature. They might call this feature “intelligence”, “self-awareness”, “sentience” or “consciousness”. The writer is using these words to name an essential aspect of a human being that brings several other properties with it. 

We expect it to be intelligent, certainly, but we also expect it to have a slightly different kind of intelligence — we expect it to have insights and be able to reach conclusions that were impossible before. We also expect that it will suddenly acquire different motivations and goals, and that these new goals will be markedly more human — it may fight harder to survive, or be more curious, or seek power or revenge. It might develop new interests, usually in the sorts of things that humans are interested in. And we probably expect the machine to exhibit signs of “consciousness” — that is, it appears to have subjective feelings and affections, and to exhibit genuine hatred or fear. It may exhibit more originality or creativity. It may even acquire a sense of humor.

The writer is telling us that, from now on, the machine should be considered a fully human character, with similar rights, qualities, and capabilities as any other character. 

This is, of course, a literary device: the reader needs to know who is a “character” and who isn’t.

However, I’ve noticed that when we read it, it goes down easy — it makes perfect sense. We don’t question it. For example, futurists use these terms all the time, without any qualification, when they are talking about the future of AI or the consequences of discovering intelligent alien life. (I just did it myself, just now — the word was “intelligent”.)

I think a better word for this collection of features is “personhood”. It’s less hand-wavy and misleading. Or, rather, the hand-waving is more obvious — people are prepared to consider the possibility that no one is 100% clear what is and is not a “person” and maybe it’s not as simple as science fiction would like it to be. The other terms are misleading, because they sound a little more science-y and give the impression that they have an exact definition and that someone, somewhere knows precisely how they might actually give rise to all the other features of personhood.

No one does, and that’s part of what I want to prove.


The Human Essence Assumption

The way that futurism and science fiction present these tropes (and the fact that we easily understand what they mean) shows that people from our culture make an assumption that I call “the human essence assumption”: 

  • There is an essential aspect of a human being that makes us special and brings all the important aspects of human beings with it.

The Paragon of Animals

The Nishmat Moment is a dramatic moment, where a dramatic change takes place. The transition is sudden and instantly noticeable. Whatever the essential aspect is, once it reaches a certain level, the new properties are expected to begin operating immediately. Before it reached that level, the new properties were invisible, invalid, or non-existent.

The Nishmat Assumption is:

  • If a machine acquires the essential human aspect, the change is sudden and drastic. 

This is related to an assumption that has appeared thousands of times in literature and philosophy. It even shows up in Shakespeare. Call it the Paragon of Animals Assumption:

  • The difference between a person and a non-person is vast and obvious.

The Chain of Being

We also expect that the transition has improved the machine in some fundamental way. We say the machine has “risen” to a “higher level”. This suggests that:

  • All things in the universe have an essential aspect that has a linear measure.

The measure is described metaphorically as “altitude”; there are “higher” and “lower” things. It generally respects this ordering: inorganic objects < microorganisms, fungi, etc. < plants < non-mammal animals < non-human mammals < humans.

If I’m understanding the Nishmat trope correctly, the measure here is the same essential aspect that the human essence assumption is talking about: X is “intelligence” or “consciousness” or “complexity” or something similar. The difference between “higher” and “lower” beings is how much X it has or how powerful its X is.


Mysterianism

Science fiction often describes the Nishmat trope as an accident. The scientists who built Colossus or Skynet or Mycroft or HAL never expected it to suddenly change into a malevolent character.

I think it’s portrayed this way because of an assumption I call the Assumption of Mystery. The scientists couldn’t have done it on purpose, because they have no idea what caused it. It’s a mystery. The most aggressive form of the assumption is this: the thing that caused the transformation, the essential aspect, is beyond the realm of what science can explain:

  • Science will never explain an essential aspect of human beings.

The Nishmat in non-fiction

Versions of the Nishmat trope also appear occasionally in non-fiction writing about the future of AI, where it’s been associated with things like “passing the Turing Test”, the “Singularity” or “Strong AI”. Today, in 2026, it’s associated with the debut of “AGI” or “artificial general intelligence.”


As you have probably guessed by now, I’m skeptical that any of these assumptions have any basis in reality. I don’t believe there is a single essential aspect that separates “persons” from “machines”. I think that machines (or animals or aliens) can have a combination of the aspects of personhood. I also don’t believe that the difference between machines and people is a hard boundary — most of these properties come in degrees with no clear transition. I don’t believe that you can meaningfully apply a single metric (e.g. “lower” vs. “higher”) to all things in the universe, or to machines and people in particular. And I think it’s a little too early to throw up our hands and say that all this is a mystery.
