
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’

By Beatrice Nolan, Tech Reporter
April 4, 2025, 12:07 PM ET
Google DeepMind CEO Demis Hassabis. Researchers at the AI lab have just put out a paper saying that human-like "artificial general intelligence" could arrive by 2030 and pose an existential risk to humanity. Photo: Stefan Wermuth—Bloomberg via Getty Images
  • DeepMind’s latest 145-page safety paper warns AGI could arrive by 2030 and cause “severe harm.” However, some experts say the concept of AGI is still too vague and the timeline too uncertain to be properly evaluated.

Google DeepMind says in a new research paper that human-level AI could plausibly arrive by 2030 and “permanently destroy humanity.”


In a discussion of the spectrum of risks posed by Artificial General Intelligence, or AGI, the paper states, “existential risks … that permanently destroy humanity are clear examples of severe harm. In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm. Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm.”

The statements are contained in a 145-page paper outlining Google DeepMind’s approach to AI safety as it attempts to build advanced systems that may one day surpass human intelligence.

The paper’s co-authors, who include DeepMind cofounder Shane Legg, did not specify how AGI might result in human extinction. Most of the paper focuses on the steps Google DeepMind thinks it and other AI labs should take to reduce the threat that AGI results in what the researchers called “severe harm.”

Legg has said for more than a decade that his “median forecast” for AGI’s arrival is 2028. Last month, Legg’s cofounder, DeepMind CEO Demis Hassabis, told NBC News that he thought AGI would likely arrive in the next “five to 10 years,” putting 2030 at the earlier end of that range.

The paper separates the risks of advanced AI into four major categories: misuse, in which people intentionally use AI for harm; misalignment, in which systems develop unintended harmful behavior; mistakes, unexpected failures due to design or training flaws; and structural risks, which arise from conflicting incentives among multiple parties, whether groups of people, such as countries or companies, or multiple AI systems.

The researchers also outline DeepMind’s risk mitigation strategy, which is focused on misuse prevention and emphasizes the importance of identifying dangerous capabilities early.

DeepMind also throws some subtle jabs at the AGI safety approaches of fellow AI labs Anthropic and OpenAI. It critiques Anthropic for placing comparatively limited focus on rigorous training, oversight, and security protocols, and accuses OpenAI of being overly focused on alignment research.

The paper has failed to win over some AI safety experts.

Anthony Aguirre, cofounder and executive director of the AI-safety-focused Future of Life Institute, told Fortune that while the DeepMind team was making an “admirable effort to address the risks of AGI, much more is needed.”

“Superhuman artificial intelligence threatens social and political upheaval unmatched in human history,” he said. “As the authors themselves indicate, AGI could arrive soon—indeed at almost any time—and could rapidly self-improve and vastly surpass human capability. Such systems are inherently unpredictable, and we are far closer to building them than to understanding how to control them, if it is even possible.” 

There are also questions about the timeline, plausibility, and definition of AGI itself. The concept is not clearly defined, Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch. She said AGI was still too loosely defined to be “rigorously evaluated scientifically.” 

A Google spokesperson told Fortune: “In the context of our position paper, the approach and mitigations outlined can apply to more than one definition of AGI.”

“While a strict definition is not central to our argument, our risk assessment prioritizes foreseeable long-term capabilities and aims to provide solutions that can help the AI safety community build this technology responsibly,” they said. “Additionally, collaboration and dialogue on AGI is key to ensure it benefits as many people as possible. For us, this means not only baking safety and security into everything we do, but also gaining international consensus on how best to manage and deploy AI, and generally approaching AGI with the seriousness and humility such a powerful technology deserves.”

While researchers at top AI labs have predicted AGI could arrive in the next five years, many other computer and cognitive scientists remain skeptical that the milestone is even achievable with current methods.

For instance, Gary Marcus, an emeritus professor of cognitive science at New York University who has emerged as a leading skeptic of today’s approaches to AI, has written that AI based on large language models is incapable of matching human-level intelligence across all domains, particularly when it comes to abilities such as learning from relatively few examples and common-sense reasoning.

Uncertain timelines

In the paper, the Google researchers also say that they “are highly uncertain about the timelines until powerful AI systems are developed,” but that “crucially, we find it plausible that they will be developed by 2030.”

Google’s statement that AGI is “plausible” by 2030 implies a slightly longer timeline than those offered by figures at other leading AI labs. Anthropic CEO Dario Amodei, while saying he increasingly finds the term AGI problematic, has said publicly that he expects AI that surpasses human capabilities “in almost everything” to arrive in the next “two to three years.”

OpenAI CEO Sam Altman has been more circumspect, but has written that “systems that start to point to AGI are coming into view” and that OpenAI is now “confident we know how to build AGI as we have traditionally understood it.”

Meanwhile, former OpenAI policy researcher Daniel Kokotajlo—who resigned from OpenAI last year, claiming the company was being “reckless” in its pursuit of AGI—and Scott Alexander, whose Astral Codex Ten blog is widely read among AI safety researchers, published a scenario online this week that foresees AI surpassing human intelligence in 2027. Their scenario has received widespread attention on social media and among people working on AI.

With reporting assistance from Jeremy Kahn.

About the Author

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08
