OpenAI says the world needs to rethink everything from the tax system to the length of the workday in order to prepare for the wrenching changes of superintelligence: the point at which AI systems can outperform the smartest humans.
On Monday, in a 13-page paper titled “Industrial Policy for the Intelligence Age,” OpenAI said it wanted to “kick-start” the conversation with a “slate of people-first policy ideas.” How much faith to put in OpenAI’s words and motives, however, is a key question for many of the paper’s readers. The paper was released the same day that The New Yorker published the results of a year-and-a-half-long investigation into OpenAI that raised questions about CEO Sam Altman’s trustworthiness on various issues, including AI safety.
Written by the OpenAI global affairs team, the paper outlines many of the expected economic impacts of superintelligence and floats various approaches for addressing them. “We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process,” said the introductory blog post.
The self-described “slate of ideas” in the document, spanning everything from public wealth funds to shorter workweeks, may not do much to reassure a public increasingly nervous about and disenchanted with the pace and consequences of AI-driven change. And OpenAI, of course, is one of the least neutral parties in this ongoing discussion, which is the core tension of the document, said Lucia Velasco, a senior economist and AI policy leader at the D.C.-based Inter-American Development Bank and former head of AI policy at the United Nations Office for Digital and Emerging Technologies.
“OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define,” she said, adding that this wasn’t a reason to dismiss the document, but “it is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.”
Still, she emphasized that OpenAI is correct in saying that governments are behind in advancing policy solutions. “Most are still treating AI as a technology problem when it’s actually a structural economic shift that needs proper industrial policy,” she said. “That’s a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it’s a starting point.”
Soribel Feliz, an independent AI policy advisor who previously served as a senior AI and tech policy advisor for the U.S. Senate, agreed that OpenAI deserves credit for “putting this on paper.” The acknowledgment that both U.S. institutions and safety nets are falling behind AI development and deployment is correct, she said, “and the conversation needs to happen at this level at this moment.”
However, she emphasized that most of what is being proposed is not new: “Some of these pillars—‘share prosperity broadly, mitigate risks, democratize access’—have been the framework for every major AI governance conversation since ChatGPT came out in November 2022.
“I worked in the U.S. Senate in 2023–24, and we had nine AI policy fora sessions where all of this was said. I have it in my handwritten notes! All of this was already said, all of it,” she wrote to Fortune in a direct message. “The language around public-private partnerships, AI literacy, and worker voice reads like it came out of a Unesco or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them.”
Clearly, the paper’s target audience is not OpenAI’s hundreds of millions of weekly ChatGPT users. Instead, it is the Beltway policymakers who have been pushing for AI regulation in various forms (or kicking the can down the road) ever since ChatGPT was released in November 2022. In that sense, some said it represents an improvement over earlier efforts.
“I found this document to genuinely be a real improvement from previous documents that were even more floaty and high-level,” said Nathan Calvin, vice president of state affairs and general counsel of Encode AI. “I think some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas.”
But he also pointed to lobbying efforts led by OpenAI executives through the Leading the Future PAC, which lobbies for AI-industry-friendly policies. Global affairs head Chris Lehane is considered a force behind these efforts, while OpenAI President Greg Brockman has been its biggest donor.
“I hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing,” said Calvin, pointing specifically to Leading the Future’s lobbying against New York congressional candidate Alex Bores, author and primary sponsor of the RAISE Act, the New York AI safety and transparency law recently signed by Gov. Kathy Hochul.
Calvin has also accused OpenAI of using intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He alleged as well that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which the company implied was secretly funded by Musk.
Still, while OpenAI CEO Sam Altman compared Monday’s slate of policy ideas to the New Deal in an interview with Axios, some say it reads less like FDR-era legislation and more like a Silicon Valley thought experiment that won’t magically turn into action.
For example, Anton Leicht, a visiting scholar with the Carnegie Endowment’s technology and international affairs team, wrote on X that in reality, the ideas are fundamental societal changes and heavy political lifts. “They’re not just going to emerge as an organic alternative,” he wrote. “On that read, this is comms work to provide cover for regulatory nihilism.”
A better version of this, he said, would be to redirect the AI industry’s political funding and lobbying skills to make progress on this kind of policy agenda. However, he said that the “vague nature and timing” of the document “doesn’t make me too optimistic.”