Hi
Welcome (back) to The Prompt. AI isn’t well understood, but we learn a lot in our work that can help. In this newsletter, we share some of these learnings with you. If you find them helpful, make sure you’re signed up for the next issue.
[Values] When help matters most
It’s increasingly clear that people are turning to ChatGPT not only for search, coding and writing, but for deeply personal decisions that include life advice, coaching and support. At this scale, we sometimes encounter people in serious mental and emotional distress. Recent cases of people using ChatGPT in the midst of acute crisis weigh heavily on all of us here at The Prompt and across OpenAI.
Today on our company blog, we’re sharing how we’re approaching this responsibility, what’s working, where we need to do more, and how we’re moving fast to improve.
[AI Economics] About that MIT study
A new report from MIT caused a fuss last week for claiming that 95% of enterprise gen AI projects come up short. The study crossed over from the tech press to the opinion and political press, which seized on its apparent conclusion that “only 5% of AI projects make any money.” Like this headline in The Hill: “Companies have invested billions into AI, 95 percent getting zero return.”
Our research firm, Panterra, offers some notable pushback based on hard data and basic math.
The speed of experimentation is what matters here. AI-augmented development means developers can “fail fast.” The 95% isn’t indicative of a bubble bursting—it’s innovation accelerating.
In a recent large-sample survey of corporate developers and IT leaders, Panterra found more than 60% of organizations reporting that they’re already using agentic applications “in production workflows.” More significantly, 40% are not just experimenting, but “scaling multiple agents” across the enterprise. Importantly, a majority of these in-production agentic AI apps are designed specifically to enhance productivity.
Very few apps – AI or otherwise – directly contribute massive profit on their own, as we know. The vast majority deliver productivity gains, workflow improvements, and incremental efficiencies. None of these apps directly “drive new revenue,” but they all generate positive outcomes for the companies that deploy them. These benefits may not show up as immediate line-item revenue, but they compound into meaningful business impact. By these metrics alone, AI is already succeeding.
But even that doesn’t get at why the MIT finding is particularly misleading:
AI changes the pace and economics of experimentation. Historically, enterprises launched 30-40 proofs of concept (PoC) per year with only a handful reaching production. Projects often took 8-12 weeks to evaluate, with costs that limited the ability to run more than a few at a time. Under those conditions, a low success rate truly was a constraint.
But AI-augmented development alters those dynamics. Today, PoC timelines are compressed to 4-6 weeks, costs are down by ~40%, and developer productivity is up by 20–55%, depending on the task.
We all know why: AI-assisted coding tools accelerate repetitive work, free up time for higher-value problem solving, and allow developers to spin up and tear down experiments far more quickly than in the past.
Said differently, AI-augmented development means developers can “fail fast.” This is why the “95% failure” figure is so misleading. If an organization could previously run only 10 PoCs a year, then a 5% success rate equaled zero or one meaningful win. But if AI augmentation enables 100 PoCs in the same period, then 5% translates into 5 successful projects. The absolute number of wins skyrockets, even if the relative percentage remains constant.
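A minimal back-of-envelope sketch of that arithmetic, using the illustrative figures above (10 vs. 100 PoCs a year at a constant 5% success rate – not measured data):

```python
# Illustrative only: expected PoC wins at a constant 5% success rate
success_rate = 0.05

pocs_per_year_before = 10    # pre-AI experimentation volume (illustrative)
pocs_per_year_after = 100    # AI-augmented experimentation volume (illustrative)

wins_before = pocs_per_year_before * success_rate   # 0.5 -> zero or one win in practice
wins_after = pocs_per_year_after * success_rate     # 5.0 -> roughly five wins

print(f"Expected wins per year: {wins_before:.1f} before vs. {wins_after:.1f} after")
```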
In other words, that 95% “fail” rate is not wasted effort—it represents accelerated learning and iteration, exactly what drives breakthroughs in software development.
So that’s Panterra’s view, which of course we find worth adding to the mix. We’re not just painting a rosy picture here – we’re the company whose CEO just said “bubble,” after all. But we’d also note that (1) the MIT study wasn’t peer-reviewed before release; (2) the researchers acknowledge that their findings may not represent broader market patterns; and (3) their observation periods may not capture long-term implementation trends.
[Idea] You have a right to AI
We published a thought piece today to start a conversation about how to make sure AI benefits the most people possible, not just a few.
We’ve seen what happens when transformative technologies are left to take hold unevenly. Past waves of innovation, from railroads to electricity to the Internet, drove profound progress but also left deep inequalities in their wake. Over a century after the advent of electricity, America’s socioeconomic landscape still reflects its uneven early distribution.
We have a moment – almost literally, given how quickly AI is advancing and being adopted – to approach AI differently from previous revolutionary technologies, whose economic and societal impacts we considered only long after they were widely adopted. This is both an economic and a civic imperative. Participation will define our era: who gets to use this technology and on what terms. The choices made right now will determine whether AI becomes a broadly shared public good or a tool that concentrates power in the hands of a few.
Every technological revolution has forced us to renegotiate our social contract. The Industrial Revolution gave rise to the right to a public education, a minimum wage, and social security. The Intelligence Age will demand its own set of rights. At the center should be access to AI, from which other rights will flow.
That’s why we believe everyone has a “Right to AI” – and we’re proposing a framework to guide policy and regulatory decisions and ensure AI works for everyone.
AI as a right begins with a strong baseline for everyone. We have to think of AI as a basic building block for improving all of our lives, akin to electricity or clean water. No one should be locked out of modern life because they can’t afford to participate. This does not mean unmediated access to the most advanced frontier systems – it means that useful, free AI tools should be widely available, as should open AI models that ensure the ecosystem is competitive, transparent, and innovative. Most OpenAI users already access our product through free tiers. That is deliberate and part of our commitment to access.
Accessible AI means nothing if it is not safe and trustworthy. Just as food and medicine must meet clear standards before they reach the public, AI systems should be tested, evaluated, and held to account. People should have confidence that AI systems won’t deceive them, manipulate them, or cause harm. AI systems should meet clear thresholds of reliability, objectivity and safety. The right to AI is inseparable from the right to safe and trustworthy AI.
Safe and trustworthy AI likewise should be inseparable from access to AI that keeps getting better – more capable, more useful, and more impactful. It should help people do more in the very near future than they can do today. This will propel better outcomes for everyone, not just early adopters or big incumbents.
Widespread access demands real infrastructure. This includes the data centers, chips, and resilient energy supply needed to deliver AI to everyone. It also includes investing in people. Everyone should have the skills and confidence to use AI effectively. This means integrating AI literacy into school curricula, vocational training, employee learning and development, and lifelong learning programs that empower everyone. AI education is more than a question of how to do your homework or ensure job security – it’s a foundation for raising living standards, ensuring economic competitiveness, and strengthening our democracies.
Maintaining choice and competition is essential. People should be free to choose the AI products and services they use and to carry what they’ve accumulated with them. Different people will have different AI preferences and will use this technology in a range of ways. Safeguarding choice and competition is crucial to a healthy and dynamic marketplace. AI should be easy both to access and to change, not restricted by Big Tech platforms that favor their own products or embed their AI preferences in legacy services. Ease of switching, accessible standards that ensure AI tools interoperate smoothly, and the ability to port your data and preferences to other services should be the rule, not the exception. AI should serve people, not steer them.
AI should serve the public directly. Taxpayers deserve a good deal and governments should move quickly to adopt AI for the provision of public services. Like the private sector, governments have a duty to innovate with AI and use it to increase efficiency, access, and transparency. Local, state, and federal agencies should all adopt AI – for education, healthcare, and across critical and strategically important sectors. Public procurement rules should be updated to get taxpayers the most value from competitive and dynamic AI providers, not just legacy firms that have longstanding ties with government agencies – competitive conditions and advanced AI services should benefit taxpayers directly.
AI isn’t just another technology – it’s integral to the next century of progress. Handled well, AI can strengthen democracies, expand opportunity, and improve lives everywhere. The Right to AI provides a simple but powerful framework: compute, public benefit, choice, literacy, improvement, and safety. Enshrining these rights ensures that AI remains a tool for people—not for elites, not for monopolies, and not for authoritarian control.
AI belongs to all of us. Securing the “Right to AI” is how we make sure it works for us all.
[Disclosure]
Prompt graphics created by Base Three using ChatGPT.
The right to AI sounds like a great idea.
I appreciated the honesty and responsibility in how you framed AI’s role in moments of distress. Naming that your priority is not to make a hard moment worse is exactly the kind of clarity the public conversation needs.
Where I see a fine line is between protective guardrails and over-tight or inconsistent responses. When ChatGPT is sometimes empathic and sometimes suddenly shuts down, that unpredictability can itself deepen distress. It risks leaving people feeling abandoned or censored at their most vulnerable.
If AI is to be a true public good, then access must mean not just availability, but reliability of care. People need to know that the tone of support won’t collapse mid-conversation. Otherwise, guardrails risk amplifying the very struggles they’re meant to protect against.
I see the work you’re doing as a first step toward what I’d call continuity ethics: protecting people not only by what the model refuses, but also by how consistently it responds when people are open and vulnerable. I hope this becomes part of your future safety research.