Hi
Welcome (back) to The Prompt. AI isn’t well understood, but we learn a lot in our work that can help. In this newsletter, we share some of those learnings with you. If you find them helpful, make sure you’re signed up for the next issue. You have choices here – to receive our weekly US newsletter, our monthly EU newsletter, or if you really want to load up, then both.
[Insight] The glint of global AI rails
You don’t have to squint hard to see the emerging glint of global rails on which to build AI that is both responsible and democratic—i.e., led by the US and advanced by allied, democratically-governed countries in ways that benefit everyone, not just a few. These rails have the potential to form a global network for AI standards that will spur adoption and yet more innovation, driving huge leaps in productivity and prosperity.
In the US, this starts with the Trump Administration’s AI Action Plan, on tap to be introduced next week. OpenAI and other leading AI labs support this work, which is expected to streamline data‑center and grid‑connection approvals and make it easier to build AI infrastructure. The Administration is also creating a new Center for AI Standards and Innovation to help ensure the US wins the AI competition.
In the EU, the new Code of Practice (CoP) turns the high-level principles of the EU AI Act into practical standards for AI, echoing many of the safety approaches OpenAI and other leading AI companies have developed and led. Last week, we announced our intention to sign the CoP because its safety practices build on best-in-class approaches already captured in our Preparedness Framework and in what we share with the US government.
In both the EU and the UK, regulators are requiring foreign AI providers to meet the same high safety standards as domestic providers. That’s a big deal—companies like China’s Baidu or Tencent will have to abide by democratic rules in order to operate there.
In Japan, the government has championed the “Hiroshima Process,” outlined at a G7 summit two years ago, which helped democratic governments understand how to pursue AI safety and AI opportunity together. In May, Japan adopted its first AI law, meant to balance these two objectives and help Japan become the most favorable country in the world for AI development and deployment.
South Korea has also passed AI legislation and is developing enforcement rules that aim to support AI development and uptake, build public trust, and enable responsible use cases.
Lastly, we’re looking forward to more AI leadership from India, which hosts the next global AI impact summit in February.
These moves, collectively, show what forward-thinking governments and responsible AI companies can accomplish together. Democracies have done this before—building the internet, laying high-speed rail, expanding nuclear energy. Choices made now will shape AI’s role in our economy and society, and determine whether AI gets into the hands of the most people possible, to everyone’s benefit, or whether its benefits are concentrated in the hands of a few.
[Data] How DC is using AI
Most tech-informed DC dwellers use AI tools in their daily life and/or work, often multiple times per day, according to a recent Penta survey. But how? Penta used ChatGPT to categorize responses from an open-ended question—one of our favorite simple hacks—to learn more.
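For readers who want to try this hack themselves, here’s a minimal sketch in Python. The category names and prompt wording are our own illustration, not Penta’s; the labeling step itself would be done by pasting the prompt into ChatGPT (or sending it through the API), then tallying the labels that come back.

```python
# A minimal sketch of the "categorize open-ended answers" hack.
# Categories and prompt wording are illustrative assumptions, not
# Penta's actual rubric; the classification itself happens in ChatGPT.
from collections import Counter

CATEGORIES = [
    "Writing & drafting",
    "Research & summarization",
    "Data analysis",
    "Scheduling & admin",
    "Other",
]

def build_prompt(answer: str) -> str:
    """Build a one-answer classification prompt to hand to ChatGPT."""
    options = "; ".join(CATEGORIES)
    return (
        "Assign this survey answer to exactly one of these categories: "
        f"{options}. Reply with the category name only.\n\n"
        f"Answer: {answer}"
    )

def tally(labels: list[str]) -> Counter:
    """Count how often each category was assigned across all answers."""
    return Counter(labels)
```

For example, `build_prompt("I use it to summarize committee hearings")` produces a prompt you can paste into ChatGPT; collect the one-line replies and pass them to `tally` to turn a pile of free text into a simple frequency table.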
[AI Economics] Copping to AI use
But how many of you are actually telling your co-workers and bosses that you use AI? Several new studies shed light on how often workers hide their use of AI and why.
Ivanti’s 2025 Technology at Work report surveyed over 3,000 workers and found that roughly one-third of them didn’t share with colleagues that they use certain AI tools. Another study from KPMG found that over 40% of surveyed workers regularly hide their use of AI. The reasons for this obfuscation vary: some workers fear being assigned more tasks, having co-workers question their abilities, or even having their jobs eliminated.
There are good reasons for these fears. A recent paper from Duke’s Fuqua School of Business found that AI use carries a social penalty: observers rate users more negatively on “competence and motivation,” and the penalty extends to job applicants who disclose AI use.
Going forward, however, this trend could also reverse. A number of firms, including Shopify and Duolingo, have said they expect their employees to use AI in their work, and that AI use will even be factored into performance reviews. This could lead more employees to overstate their use of AI for fear of being punished for not using new tools.
All of this can create a muddle for researchers studying the impact of AI on workers’ productivity – a dynamic we’re keeping tabs on – and for managers and companies making procurement decisions and determining how AI use is actually affecting their firms. The most helpful thing employers can do is set clear expectations about when AI should be used, which tools, and how, while also leaving room to experiment. – Rachel Brown, Economic Research
[Event] We’re jamming with nonprofits
Today, in 10 locations across the US, from Bentonville, Arkansas, to Columbus, Ohio, and from Salinas and Los Angeles, California, to New York City, OpenAI is bringing together more than 1,000 nonprofit leaders to explore how to use ChatGPT to meet their needs – from streamlining case management and improving community outreach to enhancing service delivery.
This Nonprofit Jam is modeled on our 1,000 Scientist Jam Session from earlier this year, when nearly 1,500 scientists (that’s not a typo) representing nine different US national labs from Livermore to Los Alamos gathered to explore the use of AI to speed up scientific discovery.
About 150 OpenAI employee volunteers are staffing these 10 sites today, and sponsors include the Walton Family Foundation, Emerson Collective, and dozens of local organizations. We’re excited to see what problems these nonprofit leaders on the frontlines in their communities figure out how to solve with our tools in their hands.
[About] OpenAI Forum
Explore past and upcoming programming by and for our community of more than 30,000 AI experts and enthusiasts from across tech, science, medicine, education and government, among other fields.
8:00 PM – 9:50 AM ET on Jul 25
[Disclosure]
Graphics created by Base Three using ChatGPT.