The Tell

Human Behavior and Human Essence

Consider these pivotal moments from sci-fi films about AI and human simulations:

Joanna: Bobbi, stop it. Look at me. Say I’m right. You are different. Your figure’s different, your face, what you talk about, all of this is different.

Joanna is able to detect something about the machine that tells her that it is not a “real” person — that it’s not a perfect simulation. The robot has a “tell” — there’s something that a human would do that the machine can’t do.

Now consider these, which are similar, but different:

Roy Batty: I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain.

Time to die.

HAL: Stop, Dave

HAL: Look, Dave…I can see you’re really upset about this…I honestly think you should sit down calmly…take a stress pill and think things over…

Dave…stop.

Stop, will you?

Stop, Dave.

Will you stop, Dave?

Stop, Dave.

I’m afraid.

I’m afraid, Dave…….

Dave, my mind is going. I can feel it.

I can feel it.

My mind is going.

There is no question about it. I can feel it.

I can feel it.

I can feel it.

I’m a…fraid……

There are many other examples — this is a popular trope. There’s the scene in A.I. when the robot David pleads for his life. In Ex Machina, it’s when we see the robot Ava’s paintings through Caleb’s eyes. Theodore gradually accepts Samantha’s humanity in the movie Her.

The writer wants us to notice that, despite what we’ve been told, this machine, this thing, might actually be a person, and that we might owe the machine our respect, care, and protection (if it’s sympathetic) or caution and confrontation (if it’s not).

I call this subtle (or overt) difference in behavior a “Tell”.

This is only a literary device — it helps make it clear that this machine is a character (or vice versa). But the fact that we understand it, the fact that we can correctly interpret what the writer is trying to tell us, shows that people in our culture share certain assumptions about what it means to be human.

If I’m understanding the tropes right, they rest on this assumption, which I call the “Tell”:

  • The Tell: You can tell the difference between a “person” and a “machine” by carefully watching its behavior.

I think we typically interpret this as a consequence of the Human Essence Assumption. We imagine that the difference between a “machine” and a “person” is that the “person” has the essential aspect and the machine does not.

  • Human Essence Assumption: There is an essential aspect of a “person” that gives rise to all the important aspects of a person.

Put them together and you get this: a machine is only a genuine person if it has the essence, and if the essence isn’t there, you can tell.


A Machine Will Never Do X

Can a machine ever be as capable as a human being? Can a machine do everything that a person might be able to do? If you ask a friend, the answer is usually “no”. Well, at least that’s the normal answer if you don’t live in San Francisco.

A common argument takes the form “A Machine Will Never Do X”, where X is some ability or behavior or quality that we assume only people have or do or are.

Alan Turing described these kinds of arguments in his classic 1950 paper, where he gives this marvelous list of things:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

People will say something like “machines can’t have emotions,” or “machines can’t have intuition,” or “machines can’t be creative,” or “machines can’t have a will of their own.”  The more sophisticated will say “machines can’t be ‘self-aware’” or “machines can’t have ‘consciousness’.”  The philosophically trained might say “machines can’t have intentionality” or something equally incomprehensible. The religious would say it most directly: “machines can’t have a soul.” (These examples are all versions of the HEA.)

Another set of Xs takes this form: machines can’t have “genuine creativity”, they can’t have “real insight”, they don’t make “real decisions”. (Obviously this is just cheating, throwing in a qualifying word to try to save a place for humanity.)

When you hear arguments like these, they go down easy, and you might find yourself nodding politely in agreement. The “X” tends to be something we like about ourselves, and it’s satisfying to agree that machines don’t have it or can’t do it.


The Turing Test as a “Tell Finder”

For many years, various people made the argument that the Turing Test could be used to determine if a machine can “actually think”. The idea was that, if you run a human simulation long enough, with the right questions asked by the right judge, eventually the machine would do something that would give it away. You could tell it was a machine. If there was no tell, the thinking went, then we would have to accept the machine as a person.

This argument, like the old and now-refuted argument that “no machine could pass a suitably designed Turing Test”, makes the assumption that you can tell.

(See also: Requiem for the Turing Test)


The AI Effect

This is the same assumption as HEA+Tell, but backwards:

  • AI Effect: If a machine can do behavior X, then X is not an essential part of what makes a human being.

The first time people really noticed this was when a computer named “Deep Blue” beat world champion Garry Kasparov at chess. Philosopher Hubert Dreyfus, author Douglas Hofstadter, and others had argued that master-level chess required uniquely human insight that couldn’t be captured by programs. But within hours of the championship match, commentators began saying that winning at chess didn’t require “real” intelligence. Kasparov said “It was as intelligent as your alarm clock.”

The same sort of thing has happened over and over, every time AI makes a first down. Technology that was invented to do things like “problem solving” or “pattern recognition” (that is, forms of “intelligence”) winds up in GPS navigation, zip-code readers (OCR), driver assistance, automatic translation, all over the place. None of these techniques is considered “intelligence” any more.

The most striking example of the AI effect, to me anyway, is how people think about LLMs. Many of the “X”s, the tells, of previous generations are things that deep learning and large language models can do:

  • General Intelligence. Ben Goertzel coined the term AGI to mean “the ability to solve a wide variety of problems”. LLMs will try to solve any problem you throw at them. So, technically, you would have had to say they are “AGI” (but of course, we’ve changed what the word means).
  • And, most stunningly, they can also pass the Turing Test, as ChatGPT did definitively in 2025. And now, suddenly, it doesn’t matter — despite seventy years of claims that the day a machine passed the Turing Test would represent a massive turning point in the history of AI.

It’s interesting, but not surprising, that no one thinks that LLMs are the ubermensch machines predicted in science fiction, with “genuine intelligence” or “true consciousness”.

In fact, we keep talking about what we will do when these arrive — we think these machines are still coming. Bizarrely, we currently call this hypothetical future super-machine “AGI”. Terrible name. I guess the term was just kind of sitting around and it was the only one we could find.


The Takeaway

These are four different takes on the same idea: can you use just the behavior of a thing to determine whether it is essentially human? Can you tell? Science fiction said yes, cocktail party talk said yes, fans of the Turing Test said yes. A couple of philosophers said no.

But wait — ChatGPT can successfully simulate human behavior well enough that no one can tell. Now everyone (who stops to think for even a second) thinks the answer is no. It was always no, they say.

And Blake Lemoine gets fired for saying yes.

The AI effect does something marvelous — it doesn’t refute the Tell so much as it abandons it. It travels backward in time and erases it. And the best part of the magic trick is this: it doesn’t refute the Human Essence Assumption. In fact, it protects it, so we can go on believing it no matter what.

There is nothing, no behavior or capability that computers might display, that can’t be reanalyzed this way. It is always possible for people to change their minds and say that it was “just computation” or “mere simulation”.

Predictions tend to fare poorly in the philosophy of AI, but I’ll venture this one. Most people will never accept that a program is a real person, no matter what it can do. They don’t have to.

