
OpenAI is a drama company. Will that hurt its IPO chances? And Anthropic tries to get ahead of the cyber risks its own models are accelerating

By Jeremy Kahn, Editor, AI
April 7, 2026, 2:15 PM ET
OpenAI CEO Sam Altman. The company has had a drama-filled week. Will it hurt its IPO prospects? Anna Moneymaker—Getty Images

Hello and welcome to Eye on AI. In this edition…lots and lots of OpenAI news…Anthropic secures more compute from Google as its current capacity is strained…Google DeepMind releases its latest open weight Gemma model…Anthropic says AI has emotions (sort of)…and Google DeepMind shows AlphaEvolve can help solve real world enterprise problems.

OpenAI dominated the news over the past few days. In fact, so much has happened related to the company that it’s hard to know where to start. It’s also hard to discern which OpenAI development will prove, with the benefit of hindsight, to be the most significant. I’ll cover the OpenAI news in a sec.

But first, I want to highlight three pieces of news from Anthropic because I think, in the long run, they might matter more than any of the OpenAI stuff.

Anthropic today unveiled what it is calling Project Glasswing, a coalition of major technology companies and cybersecurity players dedicated to securing the world’s most critical software before AI-enabled hackers wreak havoc around the globe. Coalition partners have been given access to a special cybersecurity-focused preview version of Anthropic’s yet-to-be-released Mythos model, in the hope that Mythos can discover zero-day exploits and other vulnerabilities so they can be patched before production versions of Mythos and similarly cyber-capable models from OpenAI and Google debut. My colleague Beatrice Nolan, who broke the news of Mythos’ existence a few weeks ago, has the story on Project Glasswing here.

Project Glasswing is further evidence of the growing concern among AI labs, cybersecurity companies, and government officials that we are entering an era of unprecedented and potentially catastrophic cybersecurity threats, driven by the rapidly improving coding capabilities of recent AI models. The New York Times has more on that evolving risk in this story here.

Anthropic also announced that it will no longer allow people to use their monthly Claude subscriptions to power third-party agentic harnesses, such as the virally popular OpenClaw and its progeny. To use Claude to power these tools, people will now need to access Anthropic’s API and pay per-token usage fees, rather than relying on all-you-can-consume monthly subscriptions. Anthropic has made clear in recent weeks that it does not have the computing capacity to handle the skyrocketing adoption it has experienced, especially with agentic tools like OpenClaw. (Anthropic has also imposed strict usage caps during peak hours that have annoyed many users.) In part to address this compute crunch, Anthropic announced an expanded partnership with Google and Broadcom to access data centers running Google’s TPU chips coming online by 2027. (More on that below.) In the meantime, Anthropic’s decision may have a big impact on how AI agents get used, perhaps slowing adoption, or perhaps driving many more people to use open-source models as the brains behind these agents.
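The economics behind that shift are easy to sketch. The back-of-the-envelope comparison below uses entirely hypothetical prices and token volumes (not Anthropic’s actual rates) to show why always-looping agentic workloads are ruinous on a flat plan but sustainable under metered billing:

```python
# Back-of-the-envelope comparison of a flat monthly subscription vs.
# per-token API pricing for agentic workloads. Every number here is a
# hypothetical assumption, not Anthropic's actual pricing.

def monthly_api_cost(input_tokens_m, output_tokens_m,
                     price_in_per_m=3.00, price_out_per_m=15.00):
    """Dollar cost for a month, given token volumes in millions."""
    return input_tokens_m * price_in_per_m + output_tokens_m * price_out_per_m

SUBSCRIPTION = 200.00  # assumed flat monthly plan price

# Agentic harnesses loop continuously, so token volumes are enormous.
heavy = monthly_api_cost(input_tokens_m=150, output_tokens_m=20)
print(f"heavy agentic user: ${heavy:,.2f}/mo vs. ${SUBSCRIPTION:,.2f} flat")  # $750.00 vs. $200.00

# An ordinary chat user consumes a tiny fraction of that.
light = monthly_api_cost(input_tokens_m=2, output_tokens_m=0.5)
print(f"light chat user: ${light:,.2f}/mo vs. ${SUBSCRIPTION:,.2f} flat")  # $13.50 vs. $200.00
```

Under these assumed figures, the flat plan loses money on every heavy agentic user, which is the economic logic behind pushing such workloads onto per-token API billing.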

Anthropic also said it has achieved an annual revenue “run rate” of $30 billion. The figure implies a 58% revenue surge in March alone. The number is also higher than the $25 billion annual run rate OpenAI reported in February. (Anthropic and OpenAI don’t use the same method to calculate their run rates, however, so it is a bit of an apples-to-oranges comparison.) But it clearly shows that Anthropic is on a tear, and that matters, especially in light of the other news coming out of OpenAI.

OK, so without further ado, the OpenAI stuff:

OpenAI likes ‘constructive’ media coverage, so it’s buying some

The OpenAI development that probably matters least, but which nonetheless had everyone in the media talking, is OpenAI’s decision to buy the year-old vodcaster TBPN (Technology Business Programming Network) for an amount that sources told the Financial Times was in “the low hundreds of millions.” OpenAI, in announcing the deal, said that it’s “become clear the standard communications playbook just doesn’t apply to us,” and that the company needed “to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”

The word “constructive” here is doing a lot of work. While OpenAI insisted that TBPN would retain its editorial independence, many are skeptical, noting, among other things, that the video broadcast operation will report to Chris Lehane, the bare-knuckled political operator who serves as OpenAI’s policy communications chief. This seems like just the latest, and perhaps most extreme, case of a tech company trying to control the narrative by “going direct”: using social media and in-house content to reach audiences and bypass traditional journalistic outlets, which are often more critical and tend to ask the kinds of questions that executives don’t want to answer.

Altman’s honesty questioned

If it weren’t already clear why OpenAI wants to own the messenger and dislikes traditional journalism, the New Yorker underscored the rationale by publishing a lengthy profile of OpenAI CEO Sam Altman that was the result of a year and a half of investigative reporting by Ronan Farrow and Andrew Marantz. The piece was headlined “Sam Altman may control our future—can he be trusted?” Reading it, it is hard to come away with any answer other than: no.

While there are a few new tidbits in the story—the reporters, for instance, obtained hundreds of pages of notes that Dario Amodei, now the Anthropic CEO, made on his interactions with Altman during the time Amodei was a top OpenAI researcher—many of the facts have already been reported elsewhere. Nonetheless, there is impact in seeing them all assembled in one place. The overriding impression of Altman that emerges from Farrow and Marantz’s story is of a borderline sociopath: an executive with no compunction about lying to get ahead. The piece raises questions about how sincere Altman is in his commitment to anything other than his own pursuit of power. In particular, it asks whether Altman actually cares about AI safety or whether his rhetoric on the subject is simply a convenient pose, used first to win early funding for OpenAI from Elon Musk, and later to attract and retain talented AI researchers and keep regulators at bay.

Potential IPO investors don’t generally love companies run by pathological liars. Nor do they like companies where the top executive ranks are constantly being reshuffled. Yet OpenAI last week announced another executive shakeup. It said Fidji Simo, who holds the title “CEO of AGI Deployment” and is in charge of all the company’s commercial products and operations, will take several weeks of medical leave to deal with a chronic health condition. In her absence, Greg Brockman, who had been largely focused on the company’s AI infrastructure buildout, will be put in charge of product.

But then OpenAI also announced a more permanent management shuffle. The company said that Brad Lightcap, its long-serving chief operating officer, is moving to a new role coordinating “special projects,” including a joint venture with private equity firms that will look to use AI to push efficiencies into older, non-tech companies. Denise Dresser, the former Slack CEO recently hired by OpenAI to serve as chief revenue officer, is taking on most of Lightcap’s previous duties, with oversight of the other business and operations units being split between Jason Kwon, OpenAI’s chief strategy officer, and CFO Sarah Friar.

Reported divisions over spending and IPO plans

Meanwhile, a story surfaced suggesting that Friar may not be secure in her role either. The Information reported that Friar has privately disagreed with Altman’s timeline for an IPO and voiced concerns about the company’s $600 billion in spending commitments over the next five years. Citing a person who had spoken to Friar about her views, the publication said Friar is unsure whether that huge amount of spending is necessary or whether OpenAI will be able to grow revenue fast enough to support it.

The publication said that Friar had voiced these concerns prior to OpenAI’s $122 billion fundraise—which was announced last week and valued OpenAI at $852 billion post-money. It said it was unable to determine whether her position had changed in light of that new money. But it cited another unnamed source as saying Friar had been left out of a meeting with an OpenAI investor in which major AI infrastructure spending plans were discussed. OpenAI gave the publication a statement saying Friar and Altman “are fully aligned that durable access to compute is at the core of OpenAI’s strategy and a key differentiator as we scale.”

Looking at all the developments together, one could be forgiven for wondering if the wheels are in danger of coming off the world’s best-known AI company. At the very least, there are serious questions looming over OpenAI’s ability to go for an IPO this year. And, in the absence of an IPO, it’s unclear how much longer the company can continue to tap the private market. If OpenAI implodes, or even if it merely has a down round, that could threaten the entire AI ecosystem. Of course, other key players in that ecosystem, such as Nvidia, know this too. That’s why they are likely to continue trying to prop OpenAI up.

In the midst of all of this, OpenAI published a white paper calling for a sweeping new industrial strategy for the U.S. in the age of artificial superintelligence, which it says is now looming into view. (You can read more on that from my colleague Sharon Goldman here.) Many perceived the document as, at least in part, an attempt by OpenAI to get ahead of a looming anti-AI industry backlash that is mounting across the country and is gaining bipartisan support. We’ll cover that in the news section below.

With that, here’s more AI news.

FORTUNE ON AI

Mercor, a $10 billion AI startup that works with companies including OpenAI and Anthropic, confirms major data breach—by Beatrice Nolan

Delta’s CEO says AI’s biggest opportunity in aviation isn’t inside the plane—it’s air traffic control—by Marco Quiroz-Gutierrez

Anthropic’s research shows that AI can already do a huge portion of many jobs; its top economist talks about how that could shape the future of work—by Matthew Heimer and Nicolas Rapp

JPMorgan CEO Jamie Dimon predicts AI will cut the workweek down to 3.5 days—and tells Gen Z developing EQ is more important than ever—by Emma Burleigh

AI IN THE NEWS

Anthropic expands partnership with Google, Broadcom for data center capacity. The AI company will gain access to about 3.5 gigawatts of computing capacity starting in 2027 as part of the deal, which is contingent on Anthropic meeting certain commercial milestones. The partnership will also see Broadcom supplying custom AI chips, known as TPUs, and infrastructure to Google through 2031. Read more from the Wall Street Journal here.

Google adds mental health safeguards to Gemini. The company has put in place systems to screen users’ interactions with Gemini for signs of mental health crises, which will result in the chatbot referring the users to crisis hotlines. The company said it would donate $30 million to support these crisis intervention services globally. The company has also added additional safeguards designed to discourage self-harm and said that it was training Gemini to avoid reinforcing users’ false beliefs. You can read more from Bloomberg here.

Google releases Gemma 4 open weight model. Google has released the latest generation of its open weight Gemma AI models, Gemma 4. The models were released under an Apache 2.0 license, aiming to attract enterprise users by giving them greater flexibility over how they can use the models and more control over data, according to a story in tech publication The Register. Developed by Google DeepMind, the four new versions of the Gemma 4 models emphasize coding, agentic AI, and improved reasoning, while supporting multimodal inputs and running across devices from smartphones to data centers. The launch comes as competition intensifies from Chinese open-weight models and reflects Google’s push to offer a credible, enterprise-friendly alternative to systems from OpenAI and Anthropic.

Microsoft launches ‘mid-class’ AI models amid AI chief’s complaints about lack of compute. Microsoft launched a trio of new midsized AI models that it claimed were state-of-the-art at speech transcription, voice generation, and image generation. But AI chief Mustafa Suleyman told the Financial Times the company still lacks the computing power to build frontier-scale systems. Microsoft is focusing on “mid-class” models for now, balancing cost and performance, while investing heavily in infrastructure and talent to catch up with leaders like Google and Anthropic, Suleyman told the newspaper.

Meta plans to open source its next AI model. There had reportedly been debate within Meta about whether to release its next generation of AI models—the first developed under its new Superintelligence Labs, headed by former Scale AI CEO Alexandr Wang—as open weight models, as Meta has done with its past AI models, or to make them available only through a paid API or subscription. Now Axios reports, citing unnamed sources, that the debate has been resolved in favor of open weights. The stakes for this release are high: it is the first new model since Meta spent billions of dollars hiring Wang and new AI talent to work under him, and the company’s last model, Llama 4, was widely viewed as a dud that badly lagged competing models from the likes of OpenAI, Anthropic, and Google DeepMind.

EYE ON AI RESEARCH

AI has emotions? Sort of, new research from Anthropic suggests. The AI lab says that it has discovered that the artificial neural networks that power its Claude AI models contain internal representations of “emotion concepts” (such as happiness or fear) that functionally influence how the model behaves. These are not real feelings, Anthropic’s researchers emphasized, but patterns in the model’s neural activations that guide its responses, shaping decisions, preferences, and outputs in ways loosely analogous to human emotions. For example, when the model is choosing between tasks, it tends to prefer options associated with “positive” emotional representations, showing these patterns play a causal role in behavior. The findings suggest that understanding and potentially steering these internal emotion-like states could be important for improving how AI models perform. The research also has safety implications, since the model’s internal emotional representations may determine the extent to which it follows users’ intentions. 
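For readers curious what an internal representation that “functionally influences” behavior looks like in practice, the toy sketch below illustrates the general interpretability technique this kind of work builds on: derive a concept direction in activation space from contrasting examples, then check that pushing activations along that direction causally shifts a measurable score. It is purely illustrative, uses synthetic data, and is not Anthropic’s actual method:

```python
import numpy as np

# Toy sketch: find a direction in a model's activation space associated
# with a concept, then test whether steering activations along that
# direction shifts behavior. Synthetic data; illustrative only.

rng = np.random.default_rng(0)
d = 16  # toy activation dimensionality

# Pretend activations recorded on "positive-emotion" vs. "negative" prompts.
concept = rng.normal(size=d)
pos_acts = rng.normal(size=(100, d)) + 0.8 * concept
neg_acts = rng.normal(size=(100, d)) - 0.8 * concept

# The "emotion direction": difference of class means, normalized.
direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def score(acts):
    """Projection of an activation vector onto the concept direction."""
    return float(acts @ direction)

# Steering: adding the direction to an activation raises its concept score,
# demonstrating (in this toy setting) a causal handle on the representation.
x = rng.normal(size=d)
steered = x + 2.0 * direction
print(score(x), score(steered))  # the steered score is strictly higher
```

The real research works with far higher-dimensional activations and behavioral probes rather than a scalar projection, but the core move, locating a concept direction and testing its causal effect, is the same.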

AI CALENDAR

April 6-9: HumanX 2026, San Francisco. 

June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.

June 17-20: VivaTech, Paris.

July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.

July 7-10: AI for Good Summit, Geneva, Switzerland.

BRAIN FOOD

Google hails success of its AlphaEvolve system in a real-world enterprise use case. Last year, Google DeepMind debuted AlphaEvolve, an agentic coding assistant that employs several of Google’s Gemini models to first program an algorithm for a task and then iteratively optimize it through a series of small, controlled experiments. At the time, Google had used the system to solve math problems and to optimize how it used computing resources. Now the company has announced the results of a real-world external use case.

France-based global logistics firm FM Logistic used AlphaEvolve to optimize how workers moved about one of its massive warehouses to pick and pack items. Rather than relying on fixed rules, the system iteratively rewrote and tested new routing algorithms against real operational data, trying to minimize overall travel distance while respecting constraints like forklift capacity and order priorities.

The resulting algorithm introduced several key innovations, including starting routes from dense clusters of items and flexibly abandoning inefficient routes to improve overall system performance. Overall, the changes delivered a 10.4% boost in routing efficiency and cut more than 15,000 kilometers of annual travel, enabling faster fulfillment and greater capacity without additional staff or equipment, Google wrote. This is an example of why AI coding agents are so potentially powerful, even in areas outside of software development. 
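The loop at the heart of this approach (propose a variation, score it against real data, keep it only if it improves the objective) can be sketched in miniature. The toy below hill-climbs a single warehouse picking route with 2-opt moves on synthetic coordinates; it is purely illustrative, since AlphaEvolve’s actual leap is using LLMs to evolve the code of the routing heuristic itself, not a single route:

```python
import random

# Toy sketch of the propose-score-keep loop that systems like AlphaEvolve
# automate. Here: minimize the travel distance of one picking route using
# random 2-opt moves (reverse a segment, keep the change if it's shorter).
# Coordinates are synthetic; this is not AlphaEvolve's actual algorithm.

random.seed(42)
items = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]

def route_length(order):
    """Total travel distance visiting item locations in the given order."""
    dist = 0.0
    for a, b in zip(order, order[1:]):
        (x1, y1), (x2, y2) = items[a], items[b]
        dist += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return dist

def improve(order, iters=5000):
    """Hill climbing: try a 2-opt segment reversal, keep it if shorter."""
    best, best_len = list(order), route_length(order)
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(best)), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_len = route_length(cand)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best, best_len

naive = list(range(len(items)))
optimized, opt_len = improve(naive)
print(f"naive: {route_length(naive):.0f}, optimized: {opt_len:.0f}")
```

Where this toy mutates a route, AlphaEvolve mutates the program that generates routes, scoring each rewrite against operational data the same way, which is why its improvements (like starting from dense item clusters) generalize across the whole warehouse.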

In 2001, Fortune first convened “The Smartest People We Know,” bringing together CEOs and founders, builders and investors, thinkers and doers. Since then, Fortune Brainstorm Tech has been the place where bold ideas collide. From June 8–10, we will return to Aspen—where it all began—to mark 25 years of Brainstorm. Register now.
About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
