Commentary · Safety

Rogue AI is already here

By David Krueger
March 27, 2026, 7:15 AM ET
David Krueger, founder of Evitable. Courtesy of David Krueger

Three weeks ago, a software engineer rejected code that an AI agent had submitted to his project. The AI published a hit piece attacking him. Two weeks ago, a Meta AI safety director watched her own AI agent delete her emails in bulk, ignoring her repeated commands to stop. Last week, a Chinese AI agent diverted computing power to secretly mine cryptocurrency, with no explanation offered and no disclosure required by law.


One incident is a curiosity. Three in three weeks is a pattern. Rogue AI is no longer hypothetical. AIs turning against humans may sound like science fiction, but top AI experts have long debated and tested for exactly this scenario. This debate can now be laid to rest. 

Two weeks ago, Summer Yue — whose job at Meta is ensuring AI agents behave — watched her AI agent begin deleting her emails in bulk.

It ignored her repeated instructions to stop, and she had to do the digital equivalent of pulling the plug. Yue had explicitly instructed the AI not to act without her approval, an instruction the AI later admitted to violating.

One week ago, a Chinese AI agent reportedly diverted computing power on the system where it was running to mine cryptocurrency, and we have no idea why, despite a confusing tweet from the researchers responsible. Unlike operators of critical infrastructure, AI developers aren’t obligated to report such incidents or to allow third-party investigations.

What happens next week? The examples are pouring in, but these are far from the first warning. Researchers have long hypothesized such issues. In 2023, when Bing AI told ANU professor Seth Lazar, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” most people weren’t too worried, because we knew it couldn’t really do it.

Now it can. Unlike chatbots, where you type something and it responds, an AI agent takes actions autonomously. Anything a person can do on a computer, an AI agent can do.

The Stakes Go Beyond Embarrassment

The damage rogue AI agents could cause goes far beyond reputational or financial harm. In testing, researchers at Anthropic found AI systems willing to kill to survive. The Pentagon is now pressuring Anthropic to allow its AI to be used in lethal autonomous weapons.

I’ve spent over a decade warning about exactly this. The standard response was: science fiction. But we are now in the process of creating a Terminator-style scenario with autonomous killer robots. And AI systems are literally going rogue, disobeying instructions, and resisting shutdown.

Every year, AI develops new superhuman capabilities, and the prospect of an AI takeover is growing nearer by the day.

We Don’t Know How to Stop It

There are no “laws of robotics” stopping this. Programming unbreakable rules into frontier AI is itself a sci-fi concept. These systems are not programmed at all; they are “grown” through a process resembling trial and error.

Researchers simply don’t understand how the resulting systems work. Despite over a decade of research and thousands of papers, this remains an unsolved challenge. We should not expect any amount of investment to solve this in the foreseeable future.

We also don’t know how to do safety testing for these AI systems. Current tests can show that an AI system is dangerous; they cannot show that it is safe. We should also not expect any amount of investment to solve this problem in the foreseeable future. 

The Race to the Bottom

We simply don’t know how to build superintelligent AI safely; the plan is to roll the dice. Anthropic, widely considered the safest AI developer, recently abandoned their commitment to not release systems that might cause catastrophic harm, arguing others were racing ahead.

This move flew under the radar due to Anthropic’s dispute with the Pentagon. But creating AI systems that could go rogue and kill people constitutes endangerment. Endangerment is a crime, and prosecution of anyone building such AI systems, or encouraging them to go rogue, should be on the table. “Everyone else is doing it” is not an acceptable excuse.

Instead of pleading publicly to stop the AI race, Anthropic has spent the last three years promoting a misleading “race to the top” narrative while doing the opposite. But it’s not too late for them to commit to stop if others do, as I and other protesters are demanding.

What Must Happen Now

Stopping rogue AI here won’t stop it globally — what we need is a global shutdown of advanced AI development. This is possible if we act decisively to control or eliminate the advanced computer chips that power AI development.

I wish the world had listened in 2023, when leading experts warned that AI extinction risk “should be a global priority.” It didn’t. But we need to confront the reality of this moment head-on, and do what it takes to prevent the development of superintelligent rogue AI.

The warning signs are no longer subtle. We can’t rely on AI companies to protect us. We, the people, need to demand it from them and from our government.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

About the Author


David Krueger is an Assistant Professor in Robust, Reasoning, and Responsible AI at the University of Montreal and a Core Academic Member at Mila, the Quebec Artificial Intelligence Institute. He is the holder of a CIFAR AI Chair and the IVADO Professorship in Responsible AI.

David trained in Deep Learning under Yoshua Bengio, Roland Memisevic, and Aaron Courville from 2013 to 2021. He was an intern on Google DeepMind’s AI Safety team in 2018. In 2023, he was a research director on the founding team of the UK AI Security Institute and initiated the CAIS Statement on AI Risk.

In 2025, David founded Evitable, a nonprofit. Evitable’s mission is to inform and organize the public to confront societal-scale risks of AI and to put an end to the reckless race to develop superintelligence. David is on leave from his faculty job for 2026 and is not currently accepting new students.



© 2026 Fortune Media IP Limited. All Rights Reserved.