Are we racing toward AI catastrophe?


Satya Nadella, CEO of Microsoft, speaks during an event at the company’s headquarters in Redmond, Washington, on Tuesday, February 7, 2023. | Chona Kasinger/Bloomberg via Getty Images

As tech giants like Microsoft and Google compete to capture the AI market, safety could be an afterthought.

“The race starts today, and we’re going to move and move fast,” Satya Nadella, CEO of Microsoft, said at the company’s event on Tuesday unveiling its new AI-powered Bing search engine.

That word — “race” — has been thrown around a lot lately in the AI world. Google and Baidu are racing to build ChatGPT-style AI into search and compete with Microsoft-backed OpenAI, while Meta is racing not to be left in the dust.

Tech is often a winner-takes-all sector — think Google controlling nearly 93 percent of the search market — but AI is poised to turbocharge those dynamics. Competitors like Microsoft and Baidu have a once-in-a-lifetime shot at displacing Google and becoming the internet search giant with an AI-enabled, friendlier interface. Some have gone even further, arguing there’s a “generative AI arms race between the US and China,” in which Microsoft-backed OpenAI’s wildly popular ChatGPT should be interpreted as the first salvo.

Why some people aren’t so thrilled that the AI race is on

“Race” is a word that makes me wince, because AI strategy and policy people talk about it a lot too. In their context, an “AI arms race” — whether between tech companies or between geopolitical rivals — could have negative consequences that go far beyond market share.

“When it comes to very powerful technologies — and obviously AI is going to be one of the most powerful ever — we need to be careful,” DeepMind cofounder and CEO Demis Hassabis recently told Time. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”

“It’s important *NOT* to ‘move fast and break things’ for tech as important as AI,” he wrote on Twitter in September. The usual Silicon Valley spirit — try things, see how they fail, try again — has brought us some incredibly cool consumer technology and fun websites. But it’s not how you’d build a skyscraper or a manned rocket, and powerful AI systems are much more in the latter category, where you want robust engineering for reliability.

OpenAI’s ChatGPT was the product release that set off Google, Baidu, and Microsoft’s jostling for the lead in AI development, but that startup’s leadership, too, has expressed some dismay at where it’s taking us. “The bad case — and I think this is important to say — is like lights out for all of us … I think it’s impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening,” OpenAI’s CEO Sam Altman said in a recent interview.

(He also called Google’s response to ChatGPT — ramping up its own AI releases and “recalibrating” its appetite for risk — a “disappointing” development, tweeting: “openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.”)

Why we should try not to have a race for powerful AI

Do I care if Microsoft steals some market share in search from Google? Not at all. Bing away. But the recent flurry of declarations that big tech is all in on the AI race has still made me nervous.

One easy way that we could, as some analysts put it, “stumble into AI catastrophe” is if researchers developing powerful systems ignore warning signs and carelessly train more and more powerful systems using the approaches that worked fine for weak, early-stage systems — but which fall apart on an AI system that’s more sophisticated, persuasive, and potentially able to deceive its users.

Here’s a thought experiment: Imagine that you can always tell if your 3-year-old is lying to you, so your plan to dissuade him from misbehavior is just to ask if he’s misbehaving. That works great on a toddler, but if you stick to the same plan with your more sophisticated teenager, it won’t work so well.

In general, most researchers aren’t reckless and don’t want to risk the world. If their lab is building AI and they start noticing terrifying signs of misalignment, deceptive behavior, advanced planning, etc., they’ll be alarmed, and they’ll stop! Even researchers who are skeptical today that alignment is a serious concern will, if they see it in their lab, want to address it before they put bigger and bigger systems out.

Why competition can be great — and dangerous

But that’s what might happen in a lab. In an economic race with enormous winner-takes-all stakes, a company is primarily thinking about whether to deploy their system before a competitor. Slowing down for safety checks risks that someone else will get there first. In geopolitical AI arms race scenarios, the fear is that China will get to AI before the US and have an incredibly powerful weapon — and that, in anticipation of that, the US may push its own unready systems into widespread deployment.

Even if alignment is a very solvable problem, trying to do complex technical work on incredibly powerful systems while everyone is in a rush to beat a competitor is a recipe for failure.

Some actors working on artificial general intelligence, or AGI, have planned ahead to avoid this dangerous trap. OpenAI, for instance, has terms in its charter specifically aimed at preventing an AI race once systems are powerful enough: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years.’”

I am generally optimistic about human nature. No one actively wants to deploy a system that will kill us all, so if we can get good enough visibility into the problem of alignment, then it’ll be clear to engineers why they need a solution. But eager declarations that the race is on make me nervous.

Another defining part of human nature is that we are often incredibly competitive — and while that competition can lead to great advancements, it can also lead to great destruction. The Cold War drove the space race, but World War II drove the creation of the atomic bomb. If winner-takes-all competition is the attitude we bring to one of the most powerful technologies in human history, I don’t think humanity is going to win out.

A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!
