Human sympathy and human essence
Consider this thought experiment, which is a variation on the 1975 movie “The Stepford Wives”. (This version works better for parents.)
THE STEPFORD SCENARIO
Suppose I kidnap your children and replace them with robot copies. They look and behave exactly the same and you never suspect I made the switch.
You suffer no harm, because you never find out. We could even suppose that over the next few years the robots gradually become perfect children (unlike your real children). Suppose that, similarly, I also set up the children with perfect copies of you, and they believe that they simply moved to a new country with their ever-more-perfect parents. I’ll even throw in a million dollar trust fund for the kids.
Since no one was harmed, since everyone is a little better off, it should follow that this situation is a good thing, an improvement over letting you raise your own children.
So here’s the question: have I done anything wrong? If you found out this had happened, would you be angry? What, exactly, did I do that was immoral?
If it helps, suppose I am an evil corporation or a totalitarian government.
Of course I’ve done something wrong. If this were a movie, we would expect you to fight to the death to get your children back. That’s the only way this movie can go. At least, it’s the only way that would make any sense to the audience.
This thought experiment (if it worked) should convince you that you believe this assumption:
- There is something about a person that is the unique target of empathy and love.
You can call it whatever you like: a “self” or “personhood” or a unique “consciousness”. Whatever it is, we believe it exists — or, at the very least, we understand what it is — otherwise, we couldn’t understand the thought experiment above.
In fact, anyone who doesn’t believe that other people have a unique “self” or “consciousness” has a serious psychiatric problem — they are profoundly autistic or sociopathic or have some other debilitating mental disorder. If you sincerely don’t believe in this, it will be difficult for you to live a normal life.
I would argue that we cannot imagine that this quality does not exist; the idea is built into us. If your brain is functioning properly, you will find yourself unable to believe that everyone else is a mindless robot who feels nothing and has no “self”. We are behaving in the only way that makes any sense to us.
We are behaving as if “souls” exist.
The paradox is that, although we behave as if it exists, although we assume it must exist, we have no consistent way of describing what exactly it is. The experiment explicitly eliminates several possibilities. Whatever it is, it can’t be explained by just (1) the way a person looks, (2) anything they might say or do, or (3) anything you might think you know about them. Here are some other possibilities.
Invisible Stuff
If consciousness is a metaphysical thing, then there are actually two crimes. First, I stole the thing you love, their spirit or “self”. Second, I have given you creatures that have no “consciousness” at all — creatures that appear human but, like Frankenstein’s monster or classic zombies, are soulless, with no “self”. This is horrifying, at least, even if it’s not currently illegal.
This is the way I imagine the writer would expect us to understand this story.
This way of understanding “consciousness” is called “substance dualism” by philosophers. None of them (at least, none of them alive today) think it is correct. The basic problem is this: it seems very unlikely that this invisible stuff exists. If it did, physicists and neuroscientists would have found some evidence of it by now. (See Ghosts and Other Invisible Stuff.)
But, of course, feel free to believe in a metaphysical explanation if you like. As I said above, you have to believe in something in order to live a normal life. Just be aware, for now, that science will probably never find it, so there will always be a few people who will say (after a few drinks, late at night, as a way of being smart) that it isn’t “real”.
But of course, when they wake up, they will immediately go back to assuming it is real; they won’t start running around yelling that all the other people are just robots.
Information
Another possibility is that the mind and the self are a kind of system of information or a pattern, analogous to the software and structured data in a computer. It’s not physical; it’s an arrangement of a set of physical things that represents an information system. This has an advantage over the metaphysical explanation: it doesn’t require “invisible stuff”. A pattern made of real physical objects is a “real pattern”. No one can argue it isn’t “real”.

There are several philosophical theories that support this way of thinking: the computational theory of mind holds that the relationship between mind and body is analogous to the relationship between software and hardware; functionalism holds that mental states are defined by their functional roles relative to each other and to the outside world.
Let’s talk a bit about how information works. If I write your address on two pieces of paper, it’s the same address. The pieces of paper are both physical representations of the same information. Two copies of a book contain the same book. You can only copy the representation of the information; you can’t copy the information itself. It’s the same information, no matter how many physical things are used to represent it.
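The distinction between information and its physical representations is easy to demonstrate in code. Here is a minimal sketch (in Python, chosen only for illustration): two separately constructed lists are distinct objects in memory, yet they compare equal because they carry the same information.

```python
# Two "copies of a book": the same sentence stored in two
# separately constructed list objects.
copy_1 = list("It was a dark and stormy night.")
copy_2 = list("It was a dark and stormy night.")

# Same information: the contents compare equal.
print(copy_1 == copy_2)   # True

# Different physical representations: two distinct objects in memory.
print(copy_1 is copy_2)   # False
```

Copying either list again just adds a third carrier of the same information; no operation on these objects ever copies “the information itself”, only its representations.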
In the Stepford Scenario, my robot copies contain exactly the same information as your children — they have the same pattern, they implement the same function. And we know that if I make two copies of a bit of information, they contain the same information. So it follows that, according to the computational theory of mind or functionalism, the robot copies are not just simulations of your children, they are your children.
If that’s true, then in the scenario above I have done nothing wrong — in fact, I have done nothing.
Now, maybe this is the right explanation. We haven’t disproved it. We’ve only shown that we are uncomfortable with it. No one would accept that the robot copies of their children are “just fine”. We wouldn’t say “that’s all right, you can keep my children. These are good enough for me.” This is not how we see the situation. This is not how we act or how we feel.
The thought experiment above shows that, in your day-to-day life, you do not accept either the computational theory of mind or functionalism as complete descriptions of what’s going on with other people. We believe there is more to our children than “a system of information that produces a particular behavior”, regardless of how accurate the simulation is. The “ghost inside the machine”, the child that we love, can’t be composed of just information.
Ordinary Stuff
What if the “self” we are attached to is just their bodies — every last hair on their heads, their organs, their brains, the whole machine balanced on two feet, including everything it has done and everywhere it has been? This is not as naive as it sounds. After all, this is how we talk in our ordinary, day-to-day life. We talk as if the child’s “self” and the child’s body are the same thing. No philosophy required.
Consider that, according to mechanism, your children’s bodies are just machines, their brains are just machines, and everything about them is just mechanical. It follows that your children were already robots. I just gave you new robots, indistinguishable from the old ones. The only problem: they’re the wrong robots.
In this case, we know what the crime is in the scenario: it’s theft. I stole some things from you. A particularly heinous theft, but still just a theft.
We can, of course, love a thing. We can love historical artifacts, prized possessions, particular places. We get angry if they are destroyed. We are offended if we find out they have been replaced with fakes. If this is the right way to look at it, then the fact that you care about these children is no different from the way your child might feel about a particular stuffed toy. Your anger is the same anger your child would feel if we replaced a favorite stuffy with a new one from the store.
But a replacement isn’t enough. It’s not just the thing that matters but the provenance of the thing. Where it’s been, its origin, its history, what’s happened to it. It’s also what you feel about the things that have happened to it. Your children’s bodies have a provenance and you know what it is. There are things you know about them, things in your head like: these are your children, no one else’s, you saw them born, you’ve been with them their whole life, you imagine a future for them, and so on.
What makes a thing important is something that you know about the thing, and what you know about it has to be true. To have the right provenance, it has to be the right thing, the actual thing.
In the scenario, you think the robots are your children and what’s in your head tells you to love the robots the same way you loved your children. I’ve tricked you into being wrong about where you target your love, and this might be the most horrible aspect of the scenario.
This account is compelling, but I can’t help feeling that something has been left out. I’m still not entirely comfortable thinking of another human being as just a “thing”, like any other thing. I can’t get myself to sincerely believe that we love our children the same way we love a favorite guitar.
This doesn’t prove that the “Ordinary Stuff” explanation is wrong (just as I couldn’t disprove the “Information System” explanation). In both cases, I’ve only proved that I am uncomfortable with it.
Subjectivity and Sentience
Your children can actually feel and experience what’s happening, whereas the robots’ gears and motors and circuits don’t feel a thing. This, I’m sure you’ll agree, is a difference between them that actually matters: the robots don’t have subjective consciousness.
Before we go on, remember that subjective consciousness is paradoxical and we have no idea how it actually works. All we know for sure is that there is a difference between experiencing something for yourself and just hearing about it or knowing about it.
It would probably be smarter if I just stopped here and concluded that there are two things that definitely seem to matter in the scenario: (1) the children’s bodies and their provenance, and (2) the children’s subjective consciousness and how we feel about it. What I stole, what matters, is the children’s bodies with their subjective consciousness intact.
That’s what I’ve got — both together — that’s the argument.
But I can’t help myself — I need to dig into this. It goes without saying that the next few pages are going to be rough sailing. Skipping them won’t affect the overall argument.
(1) Sentience Matters
We feel sympathy for things that can suffer and empathy for things that can feel love. We “feel for them” — our feelings mirror the feelings we think they are having. We worry that our children will get hurt. This is different from worrying that our car will break down.
This ability to “feel” is called “sentience”. We recognize sentience in other people (or animals or things), so we feel sympathy and empathy, and this motivates us to prevent their suffering, to reciprocate their kindness, and so on.
This logic is built into us — it’s part of how our brains work. Every normal brain will follow this path: similarity → sentience → sympathy → morality. Consider how we assign rights to animals based on our feelings; we care for a kitten but we step on a bug. A similar thing happens in the pro-choice / pro-life debate.
In many contexts, people trust their capacity for human sympathy to make these choices — people find it more convincing than any scientific or philosophical theory. Sympathy is, in fact, the only reliable way we have to measure “personhood”. It’s the ground truth of morality and law, the first and final thing that actually matters. But, as near as I can tell, we don’t really understand why we find it so convincing and fundamental.
(2) Sentience Doesn’t Exactly Explain the Scenario
The scenario breaks the first link: the robots are similar to our children, but in this case similarity doesn’t imply sentience. This means they get misplaced sympathy from us. Basically, I’ve caused you to love something that doesn’t deserve it.
But is this a horrific crime?
You believe the robots have feelings when they don’t. This is awkward for you, of course — you’re showering love on them, and they couldn’t care less. I’ve put you in a situation where you love something that doesn’t love you back. This isn’t usually a crime, but it’s definitely wrong. You will probably go to a lot of trouble to prevent the robots from suffering, and all of this is a waste of your time — they can’t feel a thing. Wasting your time is rude but not usually considered a crime. Your actual children, the ones I stole, are not suffering at all. As I said, they are in a better situation than they were with you. So as far as their welfare goes, you don’t have to worry about it. Definitely not a crime.
But none of this is what we’re actually talking about. The crime in the scenario is that I took something. I didn’t take their “sentience” — sentience is just an ability to have an experience. We don’t love our children’s experience, or the fact that they have some experience, or their ability to have an experience — we love the thing that has the experience.
So we’re back to the thing.
(3) Mental Experience
Another aspect of subjective consciousness is our inner life — the things that we notice happening to us when we are thinking or imagining or planning or remembering. We have a train of thought; we can picture things in our imagination; we can form words in our heads without saying them out loud, talking to ourselves in a way that only we can hear. We experience these things subjectively, first hand. We know they exist and we know what they’re like.
People say “I have a thought in my head” or they talk about the “contents of their consciousness” — as if the mind were a kind of container that holds all the things that are going on when we’re “thinking”. This is another reification — a metaphor for all the things that are happening. So here’s another “thing”-that-isn’t-actually-a-thing, which we have no way to talk about except through metaphor. Really it’s a “process”, a series of “events” (because it takes place over time) involving “thoughts”, some of which we can see or hear (that is, “experiences”).
So suppose that’s what I stole: the children’s “stream of consciousness” — the ongoing process of experience and thought that was happening in their heads.
This kind of works, but the ontology is still awkward. I don’t think of my children as a “process” of “experiences”. It still seems that you also need a self to actually experience those thoughts and images. We always come back to the thing, the “self”.
(4) System of Information (again)
What’s to stop me from giving the robots a “stream of consciousness”, just like the children’s? What if I gave them simulated thoughts, simulated inner speech, a simulated train of thought?
This would be a simulation, of course — it would just be a system of information. As I’ve argued twice before, a system of information can’t be a sentient being — the ontology is all wrong. There’s nothing about such a system that would definitely imply subjectivity is present. It’s very easy to imagine the exact same system of information with no subjectivity (as you would expect if the operation of the system were extensively described in a shelf full of books).
This kind of system is what some theories of “consciousness” describe (such as Global Workspace Theory or Integrated Information Theory).
If they’re right, if consciousness is a particular algorithm, if subjectivity as we normally understand it is an illusion or metaphor, then fine — we would have to agree that the robots have the same consciousness as the children and that the only difference is their bodies. It would follow that we owe the robots the same love and protection that we owe to our children.
But that still doesn’t feel right, at least not to me.