The rise of chatbot “friends”

A Wehead, an AI companion device that can use ChatGPT, is seen during the 2024 Consumer Electronics Show in Las Vegas. | Brendan Smialowski/AFP via Getty Images

Can you truly be friends with a chatbot? 

If you find yourself asking that question, it’s probably too late. In a Reddit thread a year ago, one user wrote that AI friends are “wonderful and significantly better than real friends […] your AI friend would never break or betray you.” But there’s also the 14-year-old who died by suicide after becoming attached to a chatbot.

The fact that this is already happening makes it all the more important to have a sharper idea of what exactly is going on when humans become entangled with these “social AI” or “conversational AI” tools.

Are these chatbot pals real relationships that sometimes go wrong (which, of course, happens with human-to-human relationships, too)? Or is anyone who feels connected to Claude inherently deluded?

To answer this, let’s turn to the philosophers. Much of the research is on robots, but I’m reapplying it here to chatbots.

The case against chatbot friends

The case against is more obvious, intuitive and, frankly, strong. 

Delusion

It’s common for philosophers to define friendship by building on Aristotle’s theory of true (or “virtue”) friendship, which typically requires mutuality, shared life, and equality, among other conditions.

“There has to be some sort of mutuality — something going on [between] both sides of the equation,” according to Sven Nyholm, a professor of AI ethics at Ludwig Maximilian University of Munich. “A computer program that is operating on statistical relations among inputs in its training data is something rather different than a friend that responds to us in certain ways because they care about us.”

This story was first featured in the Future Perfect newsletter.


The chatbot, at least until it becomes sapient, can only simulate caring, and so true friendship isn’t possible. (For what it’s worth, my editor queried ChatGPT on this and it agrees that humans cannot be friends with it.)

This is key for Ruby Hornsby, a PhD candidate at the University of Leeds studying AI friendships. It’s not that AI friends aren’t useful — Hornsby says they can certainly help with loneliness, and there’s nothing inherently wrong if people prefer AI systems over humans — but “we want to uphold the integrity of our relationships.” Fundamentally, a one-way exchange amounts to a highly interactive game. 

What about the very real emotions people feel toward chatbots? Still not enough, according to Hannah Kim, a University of Arizona philosopher. She compares the situation to the “paradox of fiction,” which asks how it’s possible to have real emotions toward fictional characters. 

Relationships “are a very mentally involved, imaginative activity,” so it’s not particularly surprising to find people who become attached to fictional characters, Kim says. 

But if someone said that they were in a relationship with a fictional character or chatbot? Then Kim’s inclination would be to say, “No, I think you’re confused about what a relationship is — what you have is a one-way imaginative engagement with an entity that might give the illusion that it is real.”

Bias, data privacy, and manipulation issues, especially at scale

Chatbots, unlike humans, are built by companies, so the fears about bias and data privacy that haunt other technology apply here, too. Of course, humans can be biased and manipulative, but it is easier to understand a human’s thinking compared to the “black box” of AI. And humans are not deployed at scale the way AI is, meaning we’re more limited in our influence and potential for harm. Even the most sociopathic ex can only wreck one relationship at a time.

Humans are “trained” by parents, teachers, and others with varying levels of skill. Chatbots can be engineered by teams of experts intent on programming them to be as responsive and empathetic as possible — the psychological version of scientists designing the perfect Dorito that destroys any attempt at self-control. 

And these chatbots are more likely to be used by those who are already lonely — in other words, easier prey. A recent study from OpenAI found that using ChatGPT a lot “correlates with increased self-reported indicators of dependence.” Imagine you’re depressed, so you build rapport with a chatbot, and then it starts hitting you up for Nancy Pelosi campaign donations. 

“Deskilling”

You know how some fear that porn-addled men are no longer able to engage with real women? “Deskilling” is basically that worry, but with all people, for other real people.

“We might prefer AI instead of human partners and neglect other humans just because AI is much more convenient,” says Anastasiia Babash of the University of Tartu. “We [might] demand other people behave like AI is behaving — we might expect them to be always here or never disagree with us. […] The more we interact with AI, the more we get used to a partner who doesn’t feel emotions so we can talk or do whatever we want.”

In a 2019 paper, Nyholm and philosopher Lily Eva Frank offer suggestions to mitigate these worries. (Their paper was about sex robots, so I’m adjusting for the chatbot context.) For one, try to make chatbots a helpful “transition” or training tool for people seeking real-life friendships, not a substitute for the outside world. And make it obvious that the chatbot is not a person, perhaps by making it remind users that it’s a large language model.

The case for AI friends 

Though most philosophers currently think friendship with AI is impossible, one of the most interesting counterarguments comes from the philosopher John Danaher. He starts from the same premise as many others: Aristotle. But he adds a twist.

Sure, chatbot friends don’t perfectly fit conditions like equality and shared life, he writes — but then again, neither do many human friends. 

“I have very different capacities and abilities when compared to some of my closest friends: some of them have far more physical dexterity than I do, and most are more sociable and extroverted,” he writes. “I also rarely engage with, meet, or interact with them across the full range of their lives. […] I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity.”

These are requirements of ideal friendship, but if even human friendships can’t live up, why should chatbots be held to that standard? (Provocatively, when it comes to “mutuality,” or shared interests and goodwill, Danaher argues that this is fulfilled as long as there are “consistent performances” of these things, which chatbots can do.)

Helen Ryland, a philosopher at the Open University, says we can be friends with chatbots now, so long as we apply a “degrees of friendship” framework. Instead of a long list of conditions that must all be fulfilled, the crucial component is “mutual goodwill,” according to Ryland, and the other parts are optional. Take the example of online friendships: These are missing some elements but, as many people can attest, that doesn’t mean they’re not real or valuable. 

Such a framework applies to human friendships — there are degrees of friendship with the “work friend” versus the “old friend” — and also to chatbot friends. As for the claim that chatbots don’t show goodwill, she contends that a) that’s the anti-robot bias in dystopian fiction talking, and b) most social robots are programmed to avoid harming humans. 

Beyond “for” and “against”

“We should resist technological determinism or assuming that, inevitably, social AI is going to lead to the deterioration of human relationships,” says philosopher Henry Shevlin. He’s keenly aware of the risks, but there’s also so much left to consider: questions about the developmental effects of chatbots, how chatbots affect certain personality types, and what they even replace.

Even further underneath are questions about the very nature of relationships: how to define them, and what they’re for. 

In a New York Times article about a woman “in love with ChatGPT,” sex therapist Marianne Brandon claims that relationships are “just neurotransmitters” inside our brains.

“I have those neurotransmitters with my cat,” she told the Times. “Some people have them with God. It’s going to be happening with a chatbot. We can say it’s not a real human relationship. It’s not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind.”

This is certainly not how most philosophers see it, and they disagreed when I brought up this quote. But maybe it’s time to revise old theories. 

People should be “thinking about these ‘relationships,’ if you want to call them that, in their own terms and really getting to grips with what kind of value they provide people,” says Luke Brunning, a philosopher of relationships at the University of Leeds.

To him, questions that are more interesting than “what would Aristotle think?” include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it’s time to reconsider these categories and shift away from terms like “friend, lover, colleague”? Is each AI a unique entity?

“If anything can turn our theories of friendship on their head, that means our theories should be challenged, or at least we can look at it in more detail,” Brunning says. “The more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp on?”
