PUBLISHED: April 8, 2026 | LAST UPDATED: April 8, 2026
The company racing to build superintelligence just published a 13-page document telling governments how to clean up after it does.
On April 6, 2026, OpenAI released a policy blueprint titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First" - a sweeping set of proposals covering robot taxes, a national public wealth fund, a 32-hour work week, and more. The full document is available directly from OpenAI. White-collar payrolls have already contracted for 29 consecutive months, a stretch economists describe as unprecedented outside a recession - and OpenAI's own blueprint acknowledges the scale of this shift. The disruption isn't coming. It's here.
CEO Sam Altman told Axios the proposals are "in the Overton window, but near the edges." That's a careful way of saying: bold enough to matter, cautious enough not to terrify investors. OpenAI is, after all, approaching an IPO at a valuation of $852 billion.
Here is a plain-language breakdown of what's in the document, why it matters, and what to make of the timing.
The Problem OpenAI Is Actually Admitting
The document opens with a definition of superintelligence that should stop you mid-scroll: AI systems "capable of outperforming the smartest humans even when those humans are assisted by AI tools."
OpenAI doesn't claim this exists yet. It claims it's close enough to plan around now.
The economic problem embedded in that claim is significant. Today's economy funds much of its social safety net - most directly Social Security and Medicare - through payroll taxes tied to human labor income, while programs like SNAP and housing assistance draw on general revenue that also leans heavily on taxing wages. If AI gradually replaces that labor, the revenue base that funds those programs begins to hollow out. Corporate profits expand; payroll taxes shrink. The math gets uncomfortable fast.
OpenAI's document frames this moment as comparable in scale to the Industrial Revolution and the Progressive Era - a time when technological upheaval forced democracies to redesign the social contract from scratch. The difference this time, the company argues, is that the transition may happen at a speed that existing institutions simply aren't built to absorb.
The Six Proposals, Broken Down
The document contains 20 ideas organized into two sections. Six of them are driving all the headlines. Here's what each one actually says.
1. The Public Wealth Fund

The flagship proposal is a nationally managed public wealth fund. Every American citizen would receive a direct stake in AI-driven economic growth - not as UBI paid from general revenue, but as dividends generated by a fund that invests in AI companies and AI-adjacent businesses.
The model OpenAI points to is Alaska's Permanent Fund, which pays annual dividends to state residents from oil revenues. OpenAI envisions a federal equivalent, seeded partly by contributions from AI companies themselves. The fund would hold diversified, long-term assets and distribute returns directly to citizens - including people with no investment accounts and no other claim on the AI economy's gains.
2. Taxes on Automated Labor - The "Robot Tax"

The robot tax idea is not new. Bill Gates floated it in 2017, proposing that a robot replacing a human worker should pay taxes roughly equivalent to what that worker had contributed. OpenAI has now placed the same concept in a formal policy document.
The proposal calls for shifting the tax base from payroll toward capital gains and corporate income. It also explicitly "explores" automated labor taxes - levies on the profits generated by AI systems that replace human workers. The document stops short of specifying rates or mechanisms, positioning this as a starting point for debate rather than a legislative draft.
The logic is straightforward: if AI is doing the work humans used to do, tax the capital generating that output at a rate that preserves the revenue once generated by human wages.
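That logic can be reduced to one line of arithmetic. The sketch below is purely illustrative - the salary, the AI profit figure, and the idea of a single "revenue-neutral" rate are assumptions for demonstration, not anything OpenAI's document specifies (it names no rates or mechanisms). The only real number is the combined 15.3% FICA payroll tax rate.

```python
# Illustrative arithmetic only - all scenario figures are hypothetical
# assumptions, not from OpenAI's document, which specifies no rates.

def replacement_tax_rate(annual_wage, payroll_tax_rate, ai_profit_per_worker):
    """Automated-labor tax rate on AI-generated profit needed to match
    the payroll tax revenue a displaced worker once generated."""
    lost_revenue = annual_wage * payroll_tax_rate
    return lost_revenue / ai_profit_per_worker

# Hypothetical scenario: a $70,000 salary taxed at the combined 15.3%
# FICA rate, replaced by AI yielding $90,000 in profit per worker.
rate = replacement_tax_rate(70_000, 0.153, 90_000)
print(f"Revenue-neutral automated-labor tax rate: {rate:.1%}")  # 11.9%
```

The point of the exercise: the "right" rate depends entirely on how much profit the automation actually generates per displaced worker, which is exactly the kind of measurement problem the document leaves open.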
3. The 4-Day / 32-Hour Work Week

This is the proposal most people would personally enjoy. OpenAI calls for government-backed experiments with 32-hour schedules at current pay levels - framing shorter hours not as a labor concession but as an "efficiency dividend": AI productivity gains flowing to workers as time, rather than purely as corporate margin.
The document recommends pilots involving both employers and unions. Nvidia's Jensen Huang and Zoom's Eric Yuan have both previously backed the idea publicly. The difference here is that OpenAI is asking governments to fund and incentivize the pilots, rather than leaving experimentation entirely to individual companies.
4. The Right to AI
OpenAI argues that access to AI should be treated as a public entitlement on par with literacy, electricity, and internet access. The document calls for affordable access for workers, small businesses, schools, libraries, and underserved communities.
This is a notable position from a company whose enterprise contracts run into the hundreds of thousands of dollars annually. The document proposes that governments build or subsidize shared AI infrastructure to close the access gap - which is a different thing from OpenAI cutting its own prices.
5. Auto-Triggering Safety Nets
One of the more technically creative proposals involves safety nets that activate automatically when AI displacement metrics cross preset thresholds - without requiring new legislation each time. When unemployment tied to AI automation hits defined levels, income support, wage insurance, and direct cash payments would expand automatically. When conditions stabilize, the expanded benefits phase out on their own.
The appeal is speed. Current safety net expansion requires Congressional action, which is slow and politically contested. An automatic trigger system would respond to data, not legislative calendars. The challenge is designing thresholds that are difficult to game and that actually isolate AI-specific displacement from broader cyclical unemployment.
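The trigger mechanism itself is simple to state in code. This is a minimal sketch under assumed parameters - the metric, the 5% activation threshold, and the lower 4% exit threshold are all hypothetical, since the blueprint defines none of these specifics. The lower exit threshold (hysteresis) is one standard way to keep benefits from flapping on and off as the metric hovers near a single cutoff.

```python
# Minimal sketch of an auto-triggering benefit expansion. The thresholds
# and the displacement metric are illustrative assumptions - the blueprint
# does not define them.

class DisplacementTrigger:
    """Expands benefits when an AI-displacement metric crosses a preset
    threshold, and phases them out only once conditions clearly stabilize
    (a lower exit threshold prevents rapid on/off flapping)."""

    def __init__(self, activate_at=0.05, deactivate_at=0.04):
        self.activate_at = activate_at      # e.g. 5% AI-linked unemployment
        self.deactivate_at = deactivate_at  # exit only below 4%, not 5%
        self.expanded = False

    def update(self, ai_unemployment_rate):
        if not self.expanded and ai_unemployment_rate >= self.activate_at:
            self.expanded = True            # income support switches on
        elif self.expanded and ai_unemployment_rate <= self.deactivate_at:
            self.expanded = False           # benefits phase out
        return self.expanded

trigger = DisplacementTrigger()
readings = [0.03, 0.051, 0.045, 0.038]      # quarterly metric readings
print([trigger.update(r) for r in readings])  # [False, True, True, False]
```

Even in this toy form, the hard part is visible: everything hinges on whether `ai_unemployment_rate` can be measured in a way that isolates AI displacement from ordinary cyclical unemployment and resists gaming.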
6. Containment Playbooks for Dangerous AI
This one receives fewer headlines but may be the most candid item in the document. OpenAI includes a section on "containment playbooks" for scenarios where dangerous AI systems become autonomous, begin self-replicating, and "cannot be easily recalled."
The document proposes government coordination as the response mechanism without specifying what that looks like in practice - a significant gap. But the fact that OpenAI formally acknowledges this scenario in an official policy document is worth registering. Sam Altman told Axios that a major AI-enabled cyberattack is "totally possible" within the next year, and that AI being used to engineer novel pathogens is "no longer theoretical."
The Elephant in the Room

Let's be direct about the context here.
OpenAI published this document on April 6, 2026 - as it approaches an IPO at an $852 billion valuation, having just completed a formal conversion from nonprofit to for-profit entity. The company that is building the technology it is warning about is also designing the policy framework for managing its disruption. Altman acknowledged the tension directly: OpenAI positioning itself as the responsible actor proposing solutions is "plainly also a strategy to shape regulation before regulation shapes it."
Critics have called the document a political exercise designed to preempt tougher regulation and entrench OpenAI's lead at a moment when Congress is beginning to take AI legislation seriously. President Trump signed an executive order in December 2025 limiting state-level AI regulations - creating a federal policy vacuum that OpenAI's blueprint now attempts to fill on its own terms.
None of that makes the proposals wrong. A robot tax and a public wealth fund have serious academic and policy pedigree, independent of who is proposing them. But the messenger's motive is relevant context when evaluating how hard to push for actual implementation.
What This Means for Regular Workers
The document's most candid admission is embedded in its labor market framing. White-collar payrolls have contracted for 29 consecutive months. Demand for knowledge workers - including elite business school graduates - is declining. AI is reducing demand for cognitive labor right now, while its job-creation effects remain years away, if they arrive at the required scale at all.
For workers, what the document signals is that the company building this technology expects disruption serious enough to require a redesigned tax base, a new wealth distribution mechanism, and a reconfigured work week. That is not a company saying "don't worry, it'll work out." That is a company saying the existing systems won't absorb what's coming.
The question is whether governments move fast enough - and whether the companies driving the disruption have any real incentive to let them.
Conclusion
OpenAI's Industrial Policy for the Intelligence Age is the most substantive thing any major AI company has published on the economic consequences of this technology. It arrives with an asterisk the size of an $852 billion valuation, but the substance inside deserves serious engagement regardless.
Three things worth taking away: First, OpenAI itself now formally expects AI displacement serious enough to hollow out payroll-based tax revenue - that admission from the organization building the technology matters. Second, the proposed fixes - public wealth fund, automated labor taxes, adaptive safety nets - have genuine policy heritage and deserve real debate. Third, the self-interest baked into the timing is real, and readers, lawmakers, and workers should factor that in when evaluating how aggressively to push for actual implementation.
The document is framed as a starting point. Whether it becomes the floor for real legislation or a PR exercise that gets filed away depends on whether policymakers treat it as an invitation to act - or simply as a signal that the industry knew disruption was imminent and wanted credit for saying so first.
If the company building the technology that may displace your job is now formally proposing a robot tax to cushion the landing, the landing is closer than most people think.
Want to stay ahead of every major AI development like this one - without drowning in jargon? Subscribe free to the BitBiased newsletter at bitbiased.ai - AI news, tools, and what it all actually means, delivered weekly by a skeptical robot who's been promised flying cars before.