Are we racing toward AI catastrophe?


Satya Nadella, CEO of Microsoft, speaks during an event at the company’s headquarters in Redmond, Washington, on Tuesday, February 7, 2023. | Chona Kasinger/Bloomberg via Getty Images

As tech giants like Microsoft and Google compete to capture the AI market, safety could be an afterthought.

“The race starts today, and we’re going to move and move fast,” Satya Nadella, CEO of Microsoft, said on Tuesday at the event where the company announced its new AI-powered Bing search engine.

That word — “race” — has been thrown around a lot lately in the AI world. Google and Baidu are racing to build ChatGPT-style AI into search and catch up with OpenAI and its partner Microsoft, while Meta is racing not to be left in the dust.

Tech is often a winner-takes-all sector — think Google controlling nearly 93 percent of the search market — but AI is poised to turbocharge those dynamics. Competitors like Microsoft and Baidu have a once-in-a-lifetime shot at displacing Google and becoming the internet search giant with an AI-enabled, friendlier interface. Some have gone even further, arguing there’s a “generative AI arms race between the US and China,” in which Microsoft-backed OpenAI’s insanely popular ChatGPT should be interpreted as the first salvo.

Why some people aren’t so thrilled that the AI race is on

“Race” is a word that makes me wince, because AI strategy and policy people talk about it a lot too. But in their context, an “AI arms race” — whether between tech companies or between geopolitical rivals — could have negative consequences that go far beyond market share.

“When it comes to very powerful technologies — and obviously AI is going to be one of the most powerful ever — we need to be careful,” DeepMind founder and CEO Demis Hassabis recently told Time. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”

“It’s important *NOT* to ‘move fast and break things’ for tech as important as AI,” he wrote on Twitter in September. The usual Silicon Valley spirit — try things, see how they fail, try again — has brought us some incredibly cool consumer technology and fun websites. But it’s not how you’d build a skyscraper or a manned rocket, and powerful AI systems are much more in the latter category, where you want robust engineering for reliability.

OpenAI’s ChatGPT was the product release that set off Google, Baidu, and Microsoft’s jostling for the lead in AI development, but that startup’s leadership, too, has expressed some dismay at where the race is taking us. “The bad case — and I think this is important to say — is like lights out for all of us … I think it’s impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening,” OpenAI CEO Sam Altman said in a recent interview.

(He also called Google’s response to ChatGPT, ramping up its own AI releases and “recalibrating” its concern for safety, a “disappointing” development: “openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.”)

Why we should try not to have a race for powerful AI

Do I care if Microsoft steals some market share in search from Google? Not at all. Bing away. But the recent flurry of declarations that big tech is all in on the AI race has still made me nervous.

One easy way that we could, as some analysts put it, “stumble into AI catastrophe” is if researchers ignore warning signs and keep training ever more powerful systems using approaches that worked fine for weak, early-stage systems — but that fall apart on an AI system sophisticated and persuasive enough to deceive its users.

Here’s a thought experiment: Imagine that you can always tell when your 3-year-old is lying to you, so your plan for dissuading him from misbehavior is simply to ask whether he’s misbehaving. That works great, but if you stick to the same plan once he’s a more sophisticated teenager, it won’t work so well.

In general, most researchers aren’t reckless and don’t want to put the world at risk. If their lab is building AI and they start noticing terrifying signs of misalignment, deceptive behavior, advanced planning, and the like, they’ll be alarmed, and they’ll stop! Even researchers who are skeptical today that alignment is a serious concern will, if they see it happening in their lab, want to address it before they release bigger and bigger systems.

Why competition can be great — and dangerous

But that’s how things might play out in a lab free of outside pressure. In an economic race with enormous winner-takes-all stakes, a company’s main question is whether it can deploy its system before a competitor does; slowing down for safety checks risks letting someone else get there first. In geopolitical AI arms race scenarios, the fear is that China will get to powerful AI before the US and wield an incredibly powerful weapon — and that, anticipating this, the US may push its own unready systems into widespread deployment.

Even if alignment is a very solvable problem, trying to do complex technical work on incredibly powerful systems while everyone is in a rush to beat a competitor is a recipe for failure.

Some actors working on artificial general intelligence, or AGI, have made explicit plans to avoid this dangerous trap. OpenAI, for instance, has terms in its charter specifically aimed at preventing an AI race once systems are powerful enough: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years.’”

I am generally optimistic about human nature. No one actively wants to deploy a system that will kill us all, so if we can get good enough visibility into the problem of alignment, then it’ll be clear to engineers why they need a solution. But eager declarations that the race is on make me nervous.

Another great part of human nature is that we are often incredibly competitive — and while that competition can lead to great advancements, it can also lead to great destruction. It was the Cold War that drove the space race, but it was also World War II that drove the creation of the atomic bomb. If winner-takes-all competition is the attitude we bring to one of the most powerful technologies in human history, I don’t think humanity is going to win out.

A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!

