Intelligence as a Utility
And why we’re endorsing the Kids Online Safety Act
Hi
Welcome (back) to The Prompt. For the first time, we’re using it to endorse legislation. The bills we’re supporting today are the bipartisan Kids Online Safety Act (KOSA), from US Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), and Illinois SB 315, a frontier AI safety bill that we believe further advances the establishment of a national framework for frontier AI safety.
Also in today’s issue:
Treating AI as a global utility
Our Washington, DC Workshop, now open for business
OpenAI’s Head of Codex on the OpenAI Forum
If you find this helpful, make sure you’re signed up for the next issue.
[Policy] Getting AI right for kids
KOSA would help create stronger online protections for young social media users through safer default settings, expanded parental controls, and greater accountability for online harms.
The path forward on kids’ safety, however, also requires AI-specific rules, and we believe KOSA is complementary to the work we’re doing at the federal and state level. Young people should be able to benefit from AI in ways that are safe, age-appropriate, and grounded in real-world support, including referrals to crisis resources and parental notifications in serious safety situations. That means building safeguards from the start, giving families better tools, and taking responsibility for reducing risks before they become harms.
The broader point is an important one: AI companies still have the opportunity to build protections early, before these technologies become fully embedded in everyday life. As OpenAI Chief Global Affairs Officer Chris Lehane has put it, “We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren’t put in place until the platforms were already deeply embedded in young people’s lives.”
We’re grateful to Senators Blackburn and Blumenthal for their bipartisan leadership, and proud to support legislation that reflects the kind of practical, prevention-focused tech regulation families deserve. We see real momentum building across the nation on kids’ safety, and we look forward to continuing to work with Congress on AI-specific rules that protect young people, give parents better tools, and help ensure AI benefits everyone.
Also today, OpenAI is endorsing Illinois SB 315, a frontier AI safety bill that would establish clear requirements around safety practices, transparency, incident reporting, and accountability for the most advanced AI systems. The legislation closely mirrors frontier safety frameworks already advancing in California and New York, which we view as the start of a consistent, nationwide framework; the Illinois bill also requires independent third-party compliance audits of large frontier AI developers. We're supporting the Illinois bill because it advances a risk-based approach focused on the most capable models and highest-consequence harms, and because it further advances the emerging national framework for frontier AI safety.
[News] Intelligence as a utility
We’re entering a new phase for OpenAI.
In the first phase, we learned how to do the research and built increasingly capable, useful, and reliable models.
In the second phase, we turned that research into products that hundreds of millions of people now use every week.
Now, we’re turning intelligence into a global utility. We believe that, like electricity, intelligence should be available for people, businesses, and institutions to use as much as they need, where and when they need it. With nearly 1 billion people already using our products regularly, our job is to make that intelligence cheaper, better, and more abundant over time – not to dominate every vertical ourselves – so that more people can build, solve problems, and expand what they’re able to do.
That vision only matters if people can actually use these tools in ways that improve their own lives and communities. That’s why our Washington, DC Workshop officially opens for business today – as a place to host much-needed conversations about who will really benefit from AI, and where those who work and live in the DC area can get hands-on experience with it and learn how to put it to work for themselves. You can catch a glimpse of the space and a fuller explanation of it via Axios’ interview with OpenAI’s Chris Lehane.
The Workshop will host trainings, OpenAI Forum talks, and other programming for elected officials, regulators, civil servants, educators, workers, nonprofits, and industry and community leaders – creating a space for practical engagement with AI and informed discussion about how the technology should develop and benefit society. Recent Forum talks hosted in DC and livestreamed included discussions on compute and AI infrastructure, labor and workforce development, and the future of journalism.
As a taste of what’s to come for the Workshop, this week we’re hosting three in-person OpenAI Academies, our practical AI trainings designed to meet people where they are. Thanks to DC CAP, local 10th and 11th graders planning to apply to college were the first outside visitors to our officially opened Workshop when they arrived for their Academy this morning. Tomorrow, we’ll host older adults with our partner OATS/AARP. And on Friday, we’ll work with Veterans Forge to help a group of veterans make the transition to work with AI.
These are just the latest of nearly 100 events – 60 of them in person – that the OpenAI Academy has hosted since January 2025. Our online instruction has engaged about 3.5 million people, and the Academy community has grown to more than 1 million members.
If we get this right, intelligence can become a foundation for greater productivity, creativity, scientific progress, and economic opportunity for many, not just a few – in service of OpenAI’s mission to ensure AGI benefits all of humanity.
[About] OpenAI Forum
4:45 PM - 5:55 PM EDT on May 13
[Disclosure]
Graphics created by Base Three using ChatGPT.