Since 1900, the Christian Century has published reporting, commentary, poetry, and essays on the role of faith in a pluralistic society.
© 2023 The Christian Century.
Contact Us
Privacy Policy
The parasocial power of AI
For some reason, it is sometimes easier for us to see personhood in a chatbot than in a fellow human.
Illustration by Matt Dorfman
On a dreary day this past winter, our two teenagers were feeling glum, and I was looking for a way to cheer them up. In a moment of weird inspiration, I grabbed a plump little clementine from the fruit bowl, pulled out a pen, and drew a furious little grumpy face across its tiny orange expanse. I showed the result to the kids, along with the sort of scowling growl I would imagine such a displeased fruit would make when it had reached its limits and wanted to speak to the manager. Suddenly, we were all laughing, and the mood was lifted. Mission accomplished.
In the days that followed, I noticed something interesting. Though we went through a lot of fruit and even restocked the clementines in the bowl, nobody touched the one with the face. Finally I moved it from the bowl to the top of the fridge, where it could cast its baleful, disapproving gaze down upon all of our conversations and meal prep in peace.
Years ago, Steven Levy wrote a little book called Insanely Great, which told the history of Apple’s development of the first Macintosh computers. He talks about doing an interview with Joanna Hoffman, who was in charge of Apple’s early marketing efforts. “What I remember most about that encounter,” Levy writes, “was that as Joanna spoke, her Macintosh was sitting on her desk, and she could not keep her hands off it. Every second sentence or so she would go back to it, caress it, stroke it, as if it were some rare breed of cat.”
This passage came to mind recently when I heard our older kid say thank you to the Google device on the counter after it gave her the weather report. “You’re welcome,” it responded. “I’m here to help.” We all laughed, both because the response was so unexpected and because it was so perfectly polite. In the moments after, however, I found myself wondering why I was suddenly imagining that somebody was there.
Scientists might tell us that it’s fairly easy to distinguish a human from other creatures: a human is anything that has human DNA. It is a much stranger and more complex question to ask what a person is. We might think at first that these two identities, the human and the person, closely overlap. But the more I consider even the short list of examples I noted above, the farther down the rabbit hole I go.
Recent advances in artificial intelligence have brought the subject to the forefront of both cultural and policy discussions. Whether we look at education or entertainment, medical diagnoses or internet searches, we now find that some version of AI is often part of the equation. Given that ubiquity, it would be easy to imagine that having intelligence is something like flipping a switch; yesterday machines didn’t have it, and today they do.
But it seems more complex than that. When I think about AI, I find myself drawn more and more to the sort of existential question I mentioned above: Why is it so easy for me to imagine that there is someone on the other end of the wire?
One approach to the question of AI was laid out by computer scientist Alan Turing back in 1950, when he wrote a profoundly odd little paper, “Computing Machinery and Intelligence.” In it he waves aside the question, “Can machines think?” In his view, the terms in that question are far too abstract to be useful. Turing seeks to tackle the question from a different angle.
First he imagines a scenario that he calls “the imitation game.” The setup involves three players: a man, a woman, and a judge. They are sitting in three different rooms, and they communicate with each other only in writing. The judge questions the two participants to try to determine which is male and which is female—and the man tries to trick the judge into getting it wrong. If the man succeeds in this deceit for a certain period of time, he wins the game; otherwise, the woman is the winner.
With the basic mechanics of play in place, Turing then asks us to imagine that, at some later point in the game, we replace the male participant with a machine capable of generating written responses. What would we say if, after a certain period of time, by giving the right sort of answers, the machine managed to convince the judge it was in fact the woman? Turing’s response was that once the machine could accomplish this, we would have no meaningful reason to deny that the machine was intelligent.
When folks talk about the Turing test these days, they tend to simplify it, suggesting that what Turing really meant was that a machine counts as intelligent whenever it can convince you that it is human. But I prefer Turing’s original formulation of the game. Set aside the fascinating implications about gender construction and performance that the test stirs up. I like it because it points us away from asking whether intelligence is something the machine has and toward the relationship we have with the machine. The question is, Can I get to the point where I feel like this machine is a person?
Michael Graziano is a neuroscientist at Princeton University who has spent more than three decades researching questions of consciousness. In a TED Talk at Cornell University in 2019, he noted that human brains function as well as they do because we have evolved to build models of the world as we interact with other things and other creatures. Graziano points out, however, that these models work as well as they do precisely because they are not accurate images of our world. They are rough sketches that give us just enough to go on that we can move through the world, interacting in a safe and coherent manner. Given all the information we have to process, and the number of neurons in our brains, we just don’t have the time for things to work any other way.
Graziano goes on to suggest that when we are talking about human consciousness, we are really talking about two very distinct processes. The first, which he calls i-consciousness, is the part that simply processes information, and it is in this part that we most closely resemble our computers, our smartphones, and our pocket calculators. He distinguishes this apparatus from what he calls m-consciousness, a more mysterious, experiential side of our thinking and processing. It is this second type of consciousness that offers us not only our sense of self but also the possibility to identify with and have empathy for others. As Graziano describes it, this second apparatus allows us to construct very quick and efficient models, not only of facts and events in the world but of ourselves and others as persons.
This helps explain why, when I drew the face on the clementine, suddenly nobody in our house could bring themselves to eat it. As absurd as it sounds, with that simple scribble of ink, we tapped into something deep and ancient in our brains, and suddenly we saw something like a person there. I think it’s this same mechanism that got me imagining some real presence when that Google box on the counter said, “You’re welcome.” It’s not something we really choose to do. It seems deeper even than a reflex. We get grabbed by something, and we feel like something, however ghostly, is there.
We are the judges in Turing’s imitation game. Who will we allow ourselves to treat as a person?
Psychologists use the word parasociality to describe the sort of one-sided relationships that we form with any number of persons or things that are not in a position to relate back to us. If you’ve ever had a crush on a Hollywood star, or if you’ve ever imagined that you might prefer a life in Narnia to the one you are living in our world, you’ve experienced a bit of parasociality. Graziano and others who study human consciousness suggest that such parasocial effects arise because we are hardwired by evolution to form fast and person-like relationships with all manner of stimuli we encounter.
This social aspect helps us understand AI as well. Seen from this angle, intelligence is very much a social phenomenon, as opposed to something each person possesses in isolation. We are “intelligent” when we are able to participate productively in a community that generates, preserves, and disseminates knowledge. In Homo Ludens, Dutch cultural theorist Johan Huizinga suggests that humans developed rationality itself through social interactions, when our ancient forebears gathered together in groups to play games.
So we can understand perhaps why Turing was drawn to the imitation game when he wanted to explore the question of machines being intelligent. By focusing on the communal and social aspects of intelligence, he was able to leverage those more mysterious parts of our understanding, those pattern-seeking and person-finding dimensions of human consciousness. In that context, the question becomes less whether the machine itself is intelligent and more whether we, ourselves, have the capacity to recognize and welcome something alien to us into the community. Where once we saw a thing, can we now see a person?
This question invites us to consider as well the other side of the coin. Given that humans are evolved to imagine the personhood of so many wild and wondrous things, why is it that we also have so much evidence of our inability to welcome those who share not only our social history, but our very DNA? What is the mechanism at work when we fail to see a person?
This mechanism takes us right back to the imitation game itself, of course, and this is one reason why I find the oddity of Turing’s original version in his 1950 paper so intriguing. The task of both players is to say to the judge, “I am a woman,” in a manner convincing enough that the judge will respond, “I believe you.” Now, if the man is able to convince the judge, what then? If the machine, acting in the place of the man, is able to convince the judge, what then?
And if the woman, for whatever reason, is unable to convince the judge that she is a woman, what then?
If we only imagine we are playing a game, the stakes are low. But these matters of recognition and welcome are very real to many members of our communities today. Whether it is the immigrant seeking asylum at our border, the victim of sexual assault in our church, or the transgender person seeking medical support and health care, we are faced with the same fundamental questions posed by Turing’s imitation game: Are we willing to believe people when they tell us who they are? Will we believe the testimony they give us about themselves? Or will we tell them they are not welcome or, even worse, that we refuse to see them as a person at all?
I wish I better understood what it is inside me that allows me to extend my empathy so easily to a smart device or a piece of fruit but leaves me reluctant to do so when I encounter a homeless person on the street. It comes down to the question of who we allow ourselves to treat as persons. We are the judges in the imitation game, constantly saying to others, “You’re in, you’re out.” We may have inherited a set of mysterious reflexes for making those judgments, but that’s really no excuse. We should notice when we include the smart device in our community but leave out the stranger or the widow, and we should repent and do better next time.
In sci-fi movies, the big worry around AI is that it might somehow become powerful enough to do global harm while not being smart enough to recognize the value of preserving human life. But that’s not just a hypothetical danger of AI. That’s the very reality we are living in, right here and now. We have the technology to exterminate not only human life but all life on the planet. Moreover, we are wired with a perverse mechanism that allows that horror to be possible. The answer, in the end, is not only to rein in AI but to rein in ourselves.
No one in my family dared to peel and eat the scowling clementine. There was something about the cartoon face I drew that interrupted us from treating it like the other fruit in the bowl. Over the few weeks it spent up on top of the fridge, it gradually shrank and shriveled, and the orange hue of the peel darkened to a dull brown. Eventually you could no longer make out the ink unless you looked very closely; the face had all but disappeared. At that point, I moved it to the compost bin. It struck me that we had given it space to age among us with dignity, to spend its days unmolested by our desires. I imagine it had a good life, for a clementine, and even a good death—thanks to the fact that we all saw something in that cartoon face that made us hesitate, that made us choose some other path than ripping it apart.
David Dault is assistant professor of Christian spirituality at Loyola University of Chicago and host and executive producer of the podcast Things Not Seen.