Introducing Our Chief Futurist
Hi
Welcome (back) to The Prompt. AI isn’t well understood, but we learn a lot in our work that can help. In this newsletter, we share some of these learnings with you:
Introducing OpenAI’s Chief Futurist
How AI could scramble international security
What effective youth safety policy can look like
If you find them helpful, make sure you’re signed up for the next issue.
[News] From our Chief Futurist
The past is definite, written—often literally—in stone. The future is a different story: undetermined, unfinished, and malleable to those with a will to change it, but extraordinarily difficult to predict with certainty. However, it is only by developing a keen sense of where things are going that we can build confidence in our efforts to make the future better than the past.
I’m Josh Achiam, and I’m stepping into a new role as OpenAI’s Chief Futurist. My goal is to support OpenAI’s mission—to ensure that artificial general intelligence benefits all of humanity—by studying how the world will change in response to AI, AGI, and beyond. I’ve been at OpenAI for over eight years, originally as a research scientist in AI safety and most recently as Head of Mission Alignment. I’m joined in this endeavor by Jason Pruet, a physicist who has spent a career in the National Labs and as a civil servant in the Department of Energy and Intelligence Community.
Ten years ago AI could neither talk nor reason; today it can do both. Three years ago AI was virtually nonexistent as a consumer product; today one in 10 people use ChatGPT regularly. AI research is progressing at a pace and in a direction that is much closer to science fiction than any previous comparison point. The agricultural revolution unfolded over millennia, the industrial revolution over centuries, the internet revolution over decades, and now an intelligence revolution is poised to transform the world over just a few years.
Our most important challenge is to make sure that this Intelligence Age actually helps people. This means seeing around corners to figure out what can go wrong, and equipping people, scientists, and policymakers with information and ideas to begin addressing problems early. It also means noticing what can go right and accelerating those trends by giving people the information they need to see the potential.
Jason and I will do both. Going forward, we’ll be sharing ideas, articles, and research about the big structural changes that AI will enable; we’ll also be engaging expert communities through the OpenAI Forum. One big theme in our work will be the unexpected ways that AI might interact with global events or progress in other scientific fields.
For example: the World Wide Web was invented about three decades ago, and it led to e-commerce, the spread of new ideas, and sea changes in even the smallest details of how we live our daily lives. But the web and the technologies that followed also made it possible to download designs for 3D-printed guns; the cloud meant that private data originating in one country might live inside the borders of another; and commercial satellite imaging gave private firms access to geospatial intelligence previously available only to militaries.
This wave of development also had profound consequences for international relations: nations that invested in shared technology platforms found new paths to mutual prosperity, deeper cultural ties, and interoperable security architectures. The web created new ways to strengthen democracies and alliances of democracies.
We expect that changes accompanying AI will create an even wider array of consequences and opportunities. AI will reshape the foundations of science, international relations, how the economy functions, and the fabric of social interaction—nearly every sphere of our lives. I’ve spoken at length about some of these possibilities; if you’re interested, you can watch my talk at last year’s The Curve conference on the challenge of the Intelligence Age.
Our second major focus is the impact of test-time compute. This is a term you might have heard before—it’s a piece of jargon that means something like “getting the AI to think longer (by using more tokens) to get a better result.” It seems almost obvious that using more test-time compute will make it easier to solve problems, but this simple concept has surprising implications for market competition, capital allocation, contests between nations, and scientific progress.
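One toy way to make the compute-for-quality trade-off concrete (this is an illustration, not a description of how any particular model works) is best-of-n sampling: if a single attempt at a problem succeeds with some probability p, then making n independent attempts and keeping any success raises the overall success probability, with diminishing returns. The success rate p = 0.2 below is a hypothetical value chosen for the example.

```python
def best_of_n_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds.

    Each attempt succeeds with probability p, so all n fail with
    probability (1 - p)**n, and at least one succeeds otherwise.
    """
    return 1 - (1 - p) ** n


if __name__ == "__main__":
    p = 0.2  # hypothetical single-attempt success rate
    for n in (1, 4, 16, 64):
        # Larger n = more test-time compute spent on the same problem.
        print(f"n={n:>2}: success probability = {best_of_n_success(p, n):.3f}")
```

The curve flattens as n grows, which is one reason the economics of test-time compute are not a simple "more is always proportionally better" story.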
This is a transformative moment. Meeting the challenge will require broad collaboration, not just with our colleagues at OpenAI but with the wider community committed to advancing AI for the benefit of humanity. We look forward to working with everyone in service of the mission. – Josh Achiam, OpenAI’s Chief Futurist
[Insight] AI could scramble international security
In his new role, OpenAI’s Chief Futurist Josh Achiam has co-authored an assessment of how AI could reshape the international security environment in the coming years. The paper was developed with others at OpenAI including Anna Makanju, Jason Pruet, and Jonathan Reiber.
I’m Sasha Baker, OpenAI’s Head of National Security Policy. There’s no shortage of writing on AI and national security right now. Most of it focuses on specific domains like cyber, autonomous weapons, or biosecurity. Josh’s paper usefully reframes the discussion: the unusually wide band of uncertainty about AI’s impact on global security is not just background noise – it is itself a source of strategic risk.
We’re used to managing risk in known buckets. We debate the offense-defense balance in cyber. We game out escalation ladders in nuclear systems. We model proliferation pathways in bio. But as the paper emphasizes, AI does not fit neatly into one vertical. It is a general-purpose capability reshaping the underlying mechanics of national power. It accelerates discovery, planning, forecasting, and iteration – and those advantages compound for the actors that can absorb them. States that integrate AI into scientific research, logistics, intelligence analysis, operational planning, and decision support may not simply improve at the margins – their advantage could be decisive.
But right now, the technology is “spiky,” and the spread between plausible futures is unusually large. Some projections assume incremental progress and gradual integration. Others assume sharp capability jumps and rapid diffusion. Reasonable experts disagree, sometimes dramatically, about timelines, thresholds, and bottlenecks. When that range of uncertainty widens, so does the opportunity for miscalculation.
Even if experts disagree and projections vary widely, Josh and his co-authors contend that growing AI capabilities will fundamentally alter the international security environment, presenting challenges and opportunities for those who harness the capabilities responsibly.
So how should governments respond?
When the evidence is contested and the projections are noisy, there is a strong temptation to wait – to seek more clarity, better consensus, firmer data. That instinct is understandable. But in fast-moving technological environments, waiting for certainty can mean ceding adaptation time to competitors who are learning by doing.
One way to understand this dynamic is as what we’ve called “capability overhang” – the gap between what AI systems are technically capable of, and how institutions are employing them. In the national security context, the overhang has the potential to be destabilizing. Capabilities can exist in latent form long before doctrine, acquisition cycles, or alliance structures absorb what they imply. When that gap closes quickly, strategic balances can shift faster than our planning models anticipate.
As AI reshapes the international security environment, the actors that fare best may not be those who guess the future most precisely, but those who understand the range of possibilities clearly enough – and early enough – to move before overhang turns into surprise. – Sasha Baker, OpenAI’s Head of National Security Policy
[Policy] What effective youth safety policy can look like
This week, OpenAI marked Safer Internet Day in events across the US, UK, France, Italy, Germany, Spain, and India. Across each conversation, we focused on what responsible AI looks like in practice. That includes product features designed specifically for teens and families:
Age-prediction systems that help apply age-appropriate protections
Our U18 Model Specification, which sets clear expectations for how AI systems should interact with minors
Parental controls that give families meaningful oversight and choice
These safeguards reflect our belief that strong defaults and thoughtful guardrails are not in tension with innovation. They are what make it possible for young people to explore and benefit from AI with confidence.
Alongside product protections, we also shared our perspective on what effective youth safety policy can look like. Drawing from the recently filed Parents and Kids Safe AI Act ballot initiative in California, we’re outlining a framework that is risk-based, adaptable, and tailored to AI systems, while also providing strong protections for children and teens. It’s intended as a foundation, not a mandate — one that policymakers can tailor to their own legal systems and norms.
Preparing young people for the Intelligence Age requires both literacy and safety. Through global engagement, practical safeguards, and adaptable policy frameworks, we’re working to help build an environment for AI where the next generation can thrive.
[About] OpenAI Forum
Explore Forum programming by and for our community of 60,000 AI experts and enthusiasts from across tech, science, medicine, education, government, and other fields.
12:00 PM – 12:50 PM ET on Feb 11
[Disclosure]
Graphics created by Base Three using ChatGPT.
Check out The Cognitive Revolution. It explores responsibility in an accelerating intelligence landscape.
https://www.amazon.com/dp/B0FZLFBR66?ref_=pe_93986420_775043100&th=1&psc=1