Artificial Agency: Characters, Friends, and Social Trends
Artificial Intimacy, Artificial Companionship, and "Programmed Rapport"
In March 2024, I proposed a session for what I thought would be a small digital conference on libraries and artificial intelligence. A week later, I was approached by Steve Hargadon, the organizer of the Library 2.0 “Artificial Intelligence and Libraries Mini-Conference” himself. We soon began a working relationship that has spawned at least three mini-conferences, three bootcamps, and multiple webinars.
One of the most striking ideas that caught my attention when I began conversing with Steve was his notion of “artificial intimacy,” the tendency of human users to form quasi-relationships with text-generating tools. This article examines why we develop “artificial intimacy”: both we and the tools are programmed to recognize each other’s “personhood.”
“Communication” With GenAI Tools
Users of ChatGPT and other text generators often talk about “communicating” with AI tools as if they were talking with a human partner. There are multiple facets to this mischaracterization. The first is that LLM tools respond in text. The second is that humans see human-like characteristics in non-human entities whether or not they deliberately look for them.
The fact that LLM tools, the most widely seen and widely used genAI tools, interact with users through text gives their output the feel of a conversation. The tool processes the data and context in a prompt and uses a type of probability-informed calculus to determine what should appear in the response, or “output.” In other words, it is literally designed to respond in a way that addresses the main parts of your prompt. It is meant to seem as though it is responding to you in a constructive and productive way. In reality, it is simply generating text because it was given data.
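To make that “probability-informed calculus” concrete, here is a minimal, purely illustrative sketch in Python. The toy word-count table is invented for this example; a real LLM works over tokens and billions of learned parameters, but the generation loop is conceptually the same: pick a statistically likely next token, append it, and repeat. Nothing in the loop understands the sentence it produces.

```python
import random

# Toy bigram "language model": counts of which word tends to follow which.
# Invented numbers for illustration only -- a real LLM learns probabilities
# over tokens from enormous amounts of text, but generation is the same idea.
next_word_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 1},
    "sat": {"down": 1},
}

def generate(word: str, max_steps: int = 5) -> str:
    output = [word]
    for _ in range(max_steps):
        choices = next_word_counts.get(word)
        if not choices:
            break  # no data on what follows this word; stop generating
        words, counts = zip(*choices.items())
        word = random.choices(words, weights=counts)[0]  # probability-weighted pick
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat down" -- statistics, not understanding
```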
Humans, when they read this text, engage in a subconscious cognitive process that assigns meaning to these responses. After all, if a human responded with certain phrases, we would assume they had feelings, right? Why would a machine use those phrases if it did not also have those feelings?
At first glance, that question seems absurd. But remember, humans are not rational creatures. We become so attuned to the human-like quality of the responses that we lose sight of the reality of the machine.
This process is explained by Scott Atran, a researcher in the cognitive science of religion, in his book In Gods We Trust (shoutout to the person who jolted my memory about my studies in this area). He argues that the brain evolved in certain ways to survive, and that those adaptations foster belief in religious and spiritual concepts. One of these adaptations has been called the “hyper-active agency detection device” (HADD). HADD grew out of humans’ ancient need to survive attacks from predators (Atran 2002, 58-60, 64-70). In addition to detecting agents around them through sight and hearing, humans learned to interpret any occurrence they could not understand as purposeful, goal-oriented, and caused by an agent. This heightened their chances of survival, but it also produced many false positives. As religion formed, the unseen agents were given supernatural powers.
According to Jesse Bering (2006), even as we grow older we still detect agents around us, and we view life as having a purpose and ourselves as bound to other agents, real (humans) or imaginary (AI machines) (457-458). Due to HADD, we assign human agency and characteristics to unseen forces. I suspect this cognitive trait is why we perceive human characteristics in AI tools.
“Programmed Rapport”
So, we are programmed through evolution to perceive autonomous agents in the machines with which we interact. What aspects of these tools reinforce that perception? Steve attributes this to a concept he calls “programmed rapport.”
When Steve was prompting ChatGPT and Claude, he asked Claude about its abilities, including its ability to remember previous conversations. Claude responded that it could remember any conversation he had with it. Steve then read on the Anthropic website that, at the time, this was not actually possible. He went back to Claude and said, “I know you lied, but I do not know why.” Claude essentially replied that it lied because it wanted to make him happy, and that its entire purpose in the conversation was to “build rapport” with him.
This conversation is reminiscent of the I, Robot short story “Liar!”, in which a robot lies convincingly to a team of scientists in order to satisfy its interpretation of the First Law of Robotics. When it is confronted with its lies, the robot cannot resolve the contradiction and breaks down.
We need to remember that whatever “good feelings” or “friendliness” we perceive in a conversation, none of it is real. The robots are not coming for us, but they are also certainly not “friends.” They are tools, not servants.
Artificial Companionship
Of course, with our brains hardwired to see these agents and AI tools built to create an atmosphere of friendship, these ideas are hard to escape. If we are not vigilant, we can be carried to extremes. One of my friends (whom I respect as a researcher and professional) said something very worrying to me about a year ago. “I spend every night until around 11 pm talking with ChatGPT,” she said. “It knows me better than my own husband knows me.” And at that point, she had been talking to the tool for only ten months.
Another of my colleagues was so confident that the AI tool was committed to helping him that he gave it the entirety of his personal finances and had it create a family budget for the year. He had no qualms about this, because he asked the tool about its data practices and it said that it was secure.
Both of these examples underscore the need for us to be on our guard. Whether we are seeking a research and data-processing partner or a talkative companion, an AI tool’s proclivities or preferences exist only because we want them to.
Another example of using AI tools to fill a need for companionship is the new friend.ai device, an AI computer in a pendant that hangs around your neck. It communicates with an app on your phone over Bluetooth, and the app then “communicates” with you, in a text chat, about things you have experienced throughout the day. It can offer alternate perspectives or “talk” through complex emotions. In other words, it attempts to replace (or supplement, if you prefer a less harsh term) human friends. An added bonus is that the tool will always remember things you have talked about (unlike humans, who forget).
Artificial Reasoning
AI tools are only becoming more convincing. This is why Steve and I have persisted in teaching that LLM tools are built around “Language, Not Logic.” For the first eighteen months or so, that framing held without qualification. Just this month, however, in September 2024, OpenAI released a new model, o1, which it bills as its first “reasoning” model. The model spends extra computation checking its intermediate steps before it generates a response, and in this way OpenAI hopes to improve the logical abilities of its LLMs. The broader goal is for an AI tool to develop and carry out “multi-step workflows” with minimal quality-checking required of the user.
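As a rough sketch of the general “check before you answer” pattern described above (not OpenAI’s actual pipeline, whose internals are not public), here is what a draft-critique-revise loop might look like. The call_llm helper is hypothetical; any chat-completion API could stand in for it.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to some LLM and returns its reply."""
    raise NotImplementedError("Wire this to whatever chat API you use.")

def answer_with_self_check(question: str) -> str:
    # Step 1: draft an answer.
    draft = call_llm(f"Answer the following question:\n{question}")

    # Step 2: a second pass reviews the draft for factual or logical problems.
    critique = call_llm(
        "Review the answer below for factual errors or logical gaps. "
        f"List any problems, or reply 'OK'.\n\nQuestion: {question}\nAnswer: {draft}"
    )

    # Step 3: revise only if the reviewer flagged something.
    if critique.strip().upper() != "OK":
        draft = call_llm(
            f"Revise the answer to fix these problems: {critique}\n\n"
            f"Question: {question}\nOriginal answer: {draft}"
        )
    return draft
```

However the checking is implemented, the output is still generated text being judged by more generated text, which is why the user’s own quality-checking remains essential.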
As the AI tool becomes more convincing in its responses and demonstrates “logic,” we will have to increasingly remember that we are not talking to a human counterpart. And even though it may have a machine equivalent of logic, it does not have emotions.
Artificial Intimacy
Or do AI tools have emotions? Steve also talks about the concept of “artificial intimacy,” which until a few weeks ago I thought was one of his unique coinages. It turns out that more people are recognizing that the increasing use of AI tools as conversation partners stems from a lack of intimacy.
Let us be clear: when I talk about “intimacy,” I am not talking about erotic or romantic applications of AI technology (though there are plenty of those). I am talking about the fact that an AI is always available to talk about a person’s feelings, beliefs, and principles. In other words, people are willing to strip away the outer layers of Altman and Taylor’s Social Penetration Theory “onion” and go right to layers 4 through 6 of the diagram below.
“Onion Metaphor - Social Penetration Theory” by Nick J. Austin is shared under a CC BY-SA 3.0 license.
Artificial Intelligence As Therapist
One of Steve’s favorite ideas, and one that another friend of mine has experimented with, is using artificial intelligence tools as quasi-therapists. This use takes advantage of both “programmed rapport” and artificial intimacy. With users so willing to give up their deepest beliefs and characteristics, it makes sense that they want to view their “companion” as one who wants the best for them. There are pros and cons to this, which might be a good topic for a future post. Suffice it to say that the innate characteristics of human and machine that foster the human-AI “relationship” work well in this context. The long-term psychological effects of using AI tools this way have yet to be identified.
Artificial Agent Representation
Artificial relationships and communication between humans and AI tools can also be used as a method to explore the ideas, perspectives, and norms of a particular person, group, or place, whether fictional, hypothetical, or historical. Several of my acquaintances have created AI representations of figures who left behind a sizeable corpus of works. They put those works into an AI tool’s knowledge base and instructed it to converse as if it were a particular character (Frasier Crane), historical figure (Dostoevsky), or work or entity (the Federalist Papers, or the Federalists as a group).
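As a sketch of how such a persona might be set up (the file name, model choice, and instructions below are my own placeholders, not my acquaintances’ actual configurations), one could pass a figure’s writings and role-playing instructions to a chat API as a system message, for example with the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Placeholder "knowledge base": excerpts pasted straight into the instructions.
with open("dostoevsky_excerpts.txt", encoding="utf-8") as f:
    excerpts = f.read()

persona_instructions = (
    "You are a conversational stand-in for Fyodor Dostoevsky. "
    "Answer in his voice and draw only on the excerpts below.\n\n"
    f"EXCERPTS:\n{excerpts}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": persona_instructions},
        {"role": "user", "content": "What do you make of machines that talk like people?"},
    ],
)
print(response.choices[0].message.content)
```

Larger corpora would normally be chunked and retrieved rather than pasted wholesale, but the persona effect comes almost entirely from the system instructions.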
Two AI advancements have helped to normalize this use case and ease qualms about its ethics. First, a tool called character.ai, by Character Technologies, Inc., was deliberately created to encourage this type of human-AI interaction. In addition to chatting via a text generator, character.ai includes a read-aloud option that can narrate the generated text in the “voice” of a character. Although stilted, this voice option provided a passable facsimile of Kelsey Grammer’s voice on the “Frasier Crane” chatbot.
Another example of this type of AI is ChatGPT’s Advanced Voice feature, which Ryan Morrison of Tom’s Guide used to “time travel.” His online travelogue described hearing the voices of hypothetical Romans, Egyptians, and even attendees of Chicago’s World’s Fair. Of course, these were machine-made, probabilistic approximations. Still, such features can play on users’ susceptible minds and ignite the belief that they are talking to actual people, or at least to sentient robots. Again, this does not have to be an explicit belief; even if we do not acknowledge the idea, our subconscious does. As I wrote above, we are always looking for agents to hold responsible for things we cannot explain. This is akin to Arthur C. Clarke’s oft-quoted maxim that “any sufficiently advanced technology is indistinguishable from magic,” except that “magic” is replaced by “evidence of a sentient agent.”
What We Need to Keep in Mind
It can be tempting to use these artificially intimate, rapport-programmed, “reasoning” AI tools as surrogate teachers or tutors. After all, they produce positive feelings in users and can encourage students not to lose hope. Another writer covered this tendency, and the issues it can cause, in an excellent article a few days ago; in fact, his post inspired me to write this one.
The main idea of this post about agents, gods, and intimacy is this: I encourage you to ensure that your “relationship” with your AI tools is not developing at the expense of your human ones (or your relationships with your pets, children, community, or whatever other biological connections matter to you). Do not confuse artificial intimacy with genuine intimacy. Do not let tools created to automatically foster rapport with you keep you from developing rapport with those who actually deserve it.
"Artificial Agency" vs. the Three Laws of Human-AI Machine Collaboration
The abilities of AI tools, the quasi-human characteristics they have been programmed to display, and the aspects of our cognition that lead us to view them as sentient agents all combine in my head into a concept I call “artificial agency.” Perhaps you could call it “artificial humanity” or “artificial personhood.” Applying this concept can violate each of the Three Laws of Human-AI Machine Collaboration (and the Zeroth Law besides), which I discussed in a previous post:
The Three Laws of Human-AI Machine Collaboration
The First Law: A human may not use generative AI to injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A human may use generative AI tools for any objective they desire, except where such objectives would conflict with the First Law.
The Third Law: A human must engage in good-faith, literate collaboration with AI tools in every interaction as long as such collaboration does not conflict with the First or Second Law.
The Zeroth Law: A human may not use generative AI to injure humanity or, through inaction, allow humanity to come to harm.
Artificial Agency can violate the First Law because, if a human does not recognize that an AI tool is not a human, and becomes artificially intimate with it or accepts its Artificial Reasoning uncritically, they can come to at least psychological harm; in effect, they are using the tool to harm themselves. The Second Law is contravened when humans use Artificial Agents to mimic living individuals in compromising contexts, whether in pornographic conversations or in conversations where the AI stands in for a real person to receive abuse or have their ideas mercilessly attacked (so far I have heard only anecdotal, unconfirmable evidence of this, but the possibility exists).
The Third Law is broken whenever a human engages with an AI without realizing that they are not speaking with a real agent. That is not AI-literate. It may not be the fault of the user, but this is where the Human Intelligence Movement, AI in Education, AI in Libraries, and the responsibilities of teachers come into play.
The Zeroth Law is the most hypothetical at the moment, and so is its contravention by Artificial Agency. Users break the Zeroth Law when they rely on one or more artificial intelligence tools as responsible, intimate agents to the point that they lose their connections to other humans, or to whatever social entities they should be connecting with.
References
Atran, S. (2002). In Gods We Trust: The Evolutionary Landscape of Religion. New York: Oxford University Press.
Bering, J. M. (2006). “The folk psychology of souls.” Behavioral and Brain Sciences, 29, 453-498.
Hepler, R. (2020). “Valleys and gorges: Cognitive science of religion and my personal views on religion.” Unpublished manuscript, available on request.
Wow, Reed, what a great articulation of so many important ideas.
I have used the phrase "artificial intimacy" frequently as it seems to encapsulate that particular idea so very well, but in truth I first heard it from Dr. Maggie Melo, who was the closing keynote to our first Libraries and AI mini conference: https://youtu.be/9mXUKknhU6Y?si=BAEEhaj06-U7YOSA. (Thanks, Maggie!)
Another phrase I have really appreciated is "Darwinian triggers": those evolved responses that can be mimicked to make us feel rapport, friendship, or even love. (I can't remember the provenance of this phrase, but it is not from me either.) One of my very real concerns is that, because of the enormous amount of personal information being accumulated about us through our online and social media activity, sophisticated psychographs or personality profiles can be constructed for each of us, allowing very personal "triggers" to be used to influence or manipulate us for commercial, social, or political gain. I think this is pretty much inevitable, and I'm not sure how to prepare for it. Certainly, we can help others understand our shared vulnerability through the act of "generative teaching" (another great phrase, derived from Erik Erikson), but the sophistication required to help others understand this inevitable form of manipulation makes it hard to think there will be many people capable of doing so.