You can get unfathomably rich building AI. Should you?

It’s a good time to be a highly in-demand AI engineer. To lure leading researchers away from OpenAI and other competitors, Meta has reportedly offered pay packages totaling more than $100 million. Top AI engineers are now being compensated like football superstars.

Few people will ever have to grapple with the question of whether to go work for Mark Zuckerberg’s “superintelligence” venture in exchange for enough money to never have to work again. (Bloomberg columnist Matt Levine recently pointed out that this is kind of Zuckerberg’s fundamental challenge: If you pay someone enough to retire after a single month, they might well just quit after a single month, right? You need some kind of elaborate compensation structure to make sure they can get unfathomably rich without simply retiring.)

Most of us can only dream of having that problem. But many of us have occasionally had to navigate the question of whether to take on an ethically dubious job (Denying insurance claims? Shilling cryptocurrency? Making mobile games more habit-forming?) to pay the bills. 

For those working in AI, that ethical dilemma is supercharged to the point of absurdity. AI is a ludicrously high-stakes technology — both for good and for ill — with leaders in the field warning that it might kill us all. A small number of people talented enough to bring about superintelligent AI can dramatically alter the technology’s trajectory. Is it even possible for them to do so ethically?  

AI is going to be a really big deal

On the one hand, leading AI companies offer workers the potential to earn unfathomable riches and also contribute to very meaningful social good — including productivity-increasing tools that can accelerate medical breakthroughs and technological discovery, and make it possible for more people to code, design, and do any other work that can be done on a computer. 

On the other hand, well, it’s hard for me to argue that the “Waifu engineer” that xAI is now hiring for — a role that will be responsible for making Grok’s risqué anime girl “companion” AI even more habit-forming — is of any social benefit whatsoever, and I in fact worry that the rise of such bots will be to the lasting detriment of society. I’m also not thrilled about the documented cases of ChatGPT encouraging delusional beliefs in vulnerable users with mental illness. 

Much more worryingly, the researchers racing to build powerful AI “agents” — systems that can independently write code, make purchases online, interact with people, and hire subcontractors for tasks — are running into plenty of signs that those AIs might intentionally deceive humans and even take dramatic and hostile action against us. In tests, AIs have tried to blackmail their creators or send a copy of themselves to servers where they can operate more freely. 

For now, AIs exhibit that behavior only when given precisely engineered prompts designed to push them to their limits. But as ever-larger numbers of AI agents populate the world, anything that can happen under the right circumstances, however rare, will likely happen sometimes.

Over the past few years, the consensus among AI experts has moved from “hostile AIs trying to kill us is completely implausible” to “hostile AIs only try to kill us in carefully designed scenarios.” Bernie Sanders — not exactly a tech hype man — is now the latest politician to warn that as independent AIs become more powerful, they might take power from humans. It’s a “doomsday scenario,” as he called it, but it’s hardly a far-fetched one anymore.

And whether or not the AIs themselves ever decide to kill or harm us, they might fall into the hands of people who do. Experts worry that AI will make it much easier both for rogue individuals to engineer plagues or plan acts of mass violence, and for states to achieve heights of surveillance over their citizens that they have long dreamed of but never before been able to achieve.


In principle, a lot of these risks could be mitigated if labs designed and adhered to rock-solid safety plans, responding swiftly to signs of scary behavior among AIs in the wild. Google, OpenAI, and Anthropic do have safety plans, which don’t seem fully adequate to me but which are a lot better than nothing. But in practice, mitigation often falls by the wayside in the face of intense competition between AI labs. Several labs have weakened their safety plans as their models came close to meeting pre-specified performance thresholds. Meanwhile, xAI, the creator of Grok, is pushing releases with no apparent safety planning whatsoever. 

Worse, even labs that start out deeply and sincerely committed to ensuring AI is developed responsibly have often changed course later because of the enormous financial incentives in the field. That means that even if you take a job at Meta, OpenAI, or Anthropic with the best of intentions, all of your effort toward building a good AI outcome could be redirected toward something else entirely.

So should you take the job?

I’ve been watching this industry evolve for seven years now. Although I’m generally a techno-optimist who wants to see humanity design and invent new things, my optimism has been tempered by witnessing AI companies openly admitting their products might kill us all, then racing ahead with precautions that seem wholly inadequate to those stakes. Increasingly, it feels like the AI race is steering off a cliff. 

Given all that, I don’t think it’s ethical to work at a frontier AI lab unless you have given very careful thought to the risks your work will bring closer to fruition, and you have a specific, defensible reason to believe your contributions will make the situation better, not worse. The only other justification is an ironclad case that humanity doesn’t need to worry about AI at all, in which case, please publish it so the rest of us can check your work!

When vast sums of money are at stake, it’s easy to self-deceive. But I wouldn’t go so far as to claim that literally everyone working in frontier AI is engaged in self-deception. Some of the work documenting what AI systems are capable of and probing how they “think” is immensely valuable. The safety and alignment teams at DeepMind, OpenAI, and Anthropic have done and are doing good work.

But anyone pushing for a plane to take off while convinced it has a 20 percent chance of crashing would be wildly irresponsible, and I see little difference between that and trying to build superintelligence as fast as possible.

A hundred million dollars, after all, isn’t worth hastening the death of your loved ones or the end of human freedom. In the end, the job is only worth taking if you can not just get rich off AI, but also help make it go well.

It might be hard to imagine anyone who’d turn down mind-boggling riches just because it’s the right thing to do in the face of theoretical future risks, but I know quite a few people who’ve done exactly that. I expect there will be more of them in the coming years, as more absurdities like Grok’s recent MechaHitler debacle go from sci-fi to reality. 

And ultimately, whether or not the future turns out well for humanity may depend on whether we can persuade some of the richest people in history to notice something their paychecks depend on their not noticing: that their jobs might be really, really bad for the world.
