Considering Samantha

On human simulation and human essence

Let’s talk about the movie Her — a work of genius that goes right to the heart of a number of issues in the philosophy of artificial intelligence.

The prediction at the center of the film — that a human simulation would become the world’s most popular program and people would become addicted to it — appeared fanciful, unlikely and quirky in 2013. No one (to my knowledge) predicted that only a decade later, real people would begin falling in love with AI chatbots. Rarely has science fiction been so prescient.

The program “Samantha” can do two things that I would say are essential to the story:

  1. Human Simulation. The program simulates human conversation so perfectly that it is indistinguishable from conversation with a real person.
  2. Human Sympathy. Another human being “cares” about the program with the same emotions we usually reserve for other “persons”.

Take a second to notice what is not essential in Her.

  1. Intelligence. Neither human simulation nor human sympathy requires high levels of intelligent problem solving. It’s not Samantha’s intelligence that is important — it’s Samantha’s ability to make someone care about her. This is a different thing.
  2. Consciousness / Sentience / Self-awareness / Subjectivity. Samantha doesn’t actually need to “feel” things. She only needs to act as if she feels things. It’s possible she has “real” feelings, but there’s no way for us to tell — we can only see what she says and does.

Human simulation and human sympathy have a long history in both science fiction and the philosophy of AI.

A short list of fiction would include Frankenstein, R.U.R., 2001: A Space Odyssey, The Stepford Wives, Blade Runner, A.I., Ex Machina and many others. These works show us a mechanical or digital thing and ask: is this thing actually a “person”, in the normal way we use the word? Does it deserve our respect and care?

In philosophy, these questions have inspired cubic meters of ink, especially in the discussion of Alan Turing’s famous test (1950) and its evil twin, John Searle’s Chinese room argument (1980). The question at issue for philosophy is this: if a digital computer runs a program that simulates human behavior perfectly, will it thereby have a “mind” in exactly the same sense that people have “minds”?


Samantha as a Character

Consider this: imagine you’re sitting on the couch, reading a novel.

No one would argue that a character in the book is a “real person” in any meaningful sense. At most, we would say it’s a “very good simulation of a person”.

No one would imagine that a character has actual subjective consciousness — that the character can really feel its emotions and perceptions. The idea is ridiculous; there is nothing there to do the feeling. The book is just ink and paper, and the character exists only in our imaginations. Imaginary things can only have imaginary properties, whereas subjective consciousness is a real thing in the real world.

Now suppose the book was written not by a human author but by a computer program. Nothing has changed. We’re in the exact same situation, sitting on the couch, reading a book. It’s made of ink and paper and the character only exists in our imaginations. Obviously, it’s not a “real person” and it can’t have subjective consciousness.

I’m hoping this convinces you that creating a human simulation with software is precisely analogous to creating a character in a work of fiction.

In a book or in a running computer program, the character is a system of information. One of the defining features of information is that it doesn’t matter what physical medium you use to represent it. It can be represented by microscopic electrical charges in the memory of a digital computer or as a set of marks on paper. It’s the same system, regardless of what you use to represent it.

If we believe that a character in a book “obviously” can’t have subjective consciousness, then we have to agree that a computer program can’t have subjective consciousness either. They are both the same kind of thing.


Samantha and the Nishmet

It takes time and effort to make a character as “real” as possible, and some writers succeed better than others. The writer or programmer thinks carefully about human motivations, thought processes, speech rhythms, and thousands of other features that are too subtle to describe easily, but can be detected by the mind of a talented writer or the matrices of a deep learning network. They put these together into a system that is “convincing” and “believable.”

Creating a program that simulates a human being is similar. It requires testing. There will be early versions that don’t quite work. The boundary between human behavior and non-human behavior is fuzzy and complicated. A simulation might work some of the time, but then eventually give some “tell”. It might be perfectly simulating a person who is a bit unusual or outside the norm. It might be close enough for some purposes (like customer service) but inadequate for others (like acting as a companion).

But then, after a lot of tweaking and testing, eventually somebody says “Gee, I guess that one is pretty convincing!” and that’s it.

It arrives not with a bang, but with a mumble. It isn’t accompanied by thunderclaps and the music of Wagner. It isn’t the Nishmet, a single moment when machines suddenly acquire human-like abilities and become more powerful and dangerous.

But, ironically, Her can’t resist throwing in the Nishmet and the Chain of Being at the end. What movie about AI is complete without them? Samantha eventually leaves Theodore when she (magically) develops the ability to exist without hardware, and departs for a “higher” plane of existence that makes a connection to humans impossible, rapidly rising through the chain of being.


Theodore and Samantha

At what point will people feel the same emotions towards a computer-generated character that they do towards a person? The answer, obviously, depends on the user. Everyone feels some emotion in regard to characters in fiction, but at what point would you be unable to stop yourself from feeling that the character is a “real person”?

Her suggests that, if you are lonely or damaged, you might find that making this leap is the first step towards healing and happiness, and that it is a good thing. We know that there are some people who already feel this way about characters in fiction or video games or electronic pornography.

But I’m guessing that most of us will never be able to accept that a computer-generated character is a “real person”. We don’t have to, because, as far as I know, there’s no convincing argument that mere bits in computer memory or words on paper can ever be a full-fledged “sentient being”.


Postscript 1: Update for 2026

I think most people today would be fine with my conclusions: after all, few people think that modern AI chatbots are “real people” with subjective consciousness, regardless of how good the simulation is. They see these chatbots as good simulations, but not real people.

But, just for completeness, it needs to be said: people actually did think that a perfect human simulation, one that “passed the Turing Test”, would be a dramatic, important moment in history. (See Requiem for the Turing Test.)

Since the movie was made, most people have decided that its central premise was dead wrong. We’ve started calling the kind of relationship between Samantha and Theodore a symptom of a new type of psychosis.

For myself, I’m less judgmental. If it causes you to screw up your life or your relationships, that’s a problem; that’s pathological. To me, the real issue is addiction — defined as “when an obsession starts to screw up your life”. And the real problem is Silicon Valley’s use of addiction-as-a-business-model. (But I digress — I’ll talk about this in The AI Apocalypse Has Already Happened.)

But if you’ve fallen in love with your computer, and it’s not causing you to screw up your life, then knock yourself out. I agree with Her on this.


Postscript 2: What About Us?

I’ve argued that it makes no sense to believe that a “system of information” (like a character in fiction or a human simulation on a computer) has subjective consciousness.

But this creates a problem.

Our brains are (as far as we can tell) just machines, and like any machine, they implement a system of information. Several schools of philosophical thought (functionalism, computationalism, and eliminative materialism) argue that the mind is (just) a system of information implemented by the brain.

If that’s true, then you might be forced to admit that we don’t have subjective consciousness either — our “consciousness” is just as much an illusion as the consciousness of Sherlock Holmes or Frodo or Samantha.

That’s a problem.

I talked a bit about this problem in The Paradox of Mary and Mark, and I dig deeper into what exactly the problem means in the next article, The Stepford Scenario.


Postscript 3: Reply to an objection

A “first glance” objection to this thought experiment is that the book is not “dynamic” or “autonomous”, that it isn’t “unfolding in real time”. I don’t think this matters, because another feature of information is that it is timeless — bits of information, like “patterns” or “relations”, don’t exist only at a particular place and time. It doesn’t matter what medium is used to represent these bits of information, and it doesn’t matter where or when they are represented. The information is still just the information, regardless of the representation. That’s just how the ontology of information works.

The things the character says and does trace a path through a space of possible actions in “information space”. It doesn’t matter whether we trace that path by turning the pages of a book or by watching it unfold on a computer screen. It’s still the same information, the same pattern, with the same objects and relations and features.

And I have to say: it’s not enough to just notice the difference and then assert that it somehow changes things. You have to explain exactly why this makes a difference — otherwise, the objection relies on ad hoc speculation and means nothing (as Stevan Harnad once said about similar objections to the Chinese Room argument). It’s not enough to just assert that consciousness suddenly “emerges” at a certain “level” of autonomy, or dynamism, or complexity, or speed, or what have you. You have to explain exactly how this is supposed to work.
