Hi
Welcome (back) to The Prompt. AI isn’t well understood, but we learn a lot in our work that can help. In this newsletter, we share some of those learnings with you. If you find them helpful, make sure you’re signed up for the next issue.
[AI Economics] AI and the labor market
As Chief Economist at OpenAI, one of my most important tasks is to understand the impacts of AI on the job market. No doubt, AI will transform our jobs and how our organizations function, but the story of how it will play out is complex. Here are a few observations from our work:
📉 Current metrics aren’t telling us enough: We need to build better and smarter metrics to understand AI. GDP, unemployment, or business revenues don’t yet show a systemic shift due to AI. Instead, we’re developing new indicators—like salary premia for human vs. AI outputs, citations for AI-authored research, and AI-related firm revenue—that may better capture the contours of this transformation.
📈 Impacts of AI will be uneven across industries: Historically, technologies like electricity and the integrated circuit took decades to transform productivity. Although AI has been adopted much more quickly than previous technologies, our best read is that AI diffusion will still follow a somewhat uneven path—shaped by infrastructure, workforce readiness, and sector. For instance, finance and law are highly regulated and slower to adopt some AI uses due to compliance concerns. Meanwhile, jobs that are more digitized and routine are seeing faster AI exposure.
🔍 Job loss headlines are not the full story: While some firms are cutting jobs and attributing it to AI, those stories miss the baseline churn: around 4 million people leave or lose their jobs every month in the US—and 4 million are hired. AI-related layoffs are still a small fraction of total labor flows (see the back-of-the-envelope sketch after this list). It will take more time and more granular data to assess how much of today's job changes AI is actually driving.
🔧 Jobs will be created, lost, and changed: As with past technological revolutions, we will see new jobs emerge and others go away or shift dramatically. Already, we’re seeing large firms retrain thousands of workers for new roles enhanced by AI—not eliminated by it. New firms will also be created to do work we couldn’t have imagined prior to AI, making new hires along the way.
💡 Our strategy: Based on these principles, our team is taking a systematic approach: tracking economic indicators, building new metrics, convening AI and economic policymakers for tabletop exercises around the world, and launching research projects to measure the impact of AI on the economy. This is how we’re developing insights, analyses, and recommendations for our organization, policymakers and stakeholders. – OpenAI Chief Economist Ronnie Chatterji
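To put the churn point in perspective, here’s a quick back-of-the-envelope sketch in Python. The ~4 million monthly separations and hires come from the item above; the AI-attributed layoff count is a purely hypothetical placeholder, not an estimate.

```python
# Back-of-the-envelope: AI-attributed layoffs vs. normal US labor-market churn.
# The 4M figures are from the item above; the AI layoff count is a
# HYPOTHETICAL placeholder for illustration, not a real estimate.

monthly_separations = 4_000_000  # ~4M people leave or lose jobs each month
monthly_hires = 4_000_000        # ~4M people are hired each month

ai_attributed_layoffs = 40_000   # hypothetical placeholder

share = ai_attributed_layoffs / monthly_separations
print(f"AI-attributed layoffs as a share of monthly separations: {share:.1%}")
# Even a placeholder in the tens of thousands is ~1% of total labor flows.
```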
[Data] Familiarity → favorability
We added the following question to our partner TrueDot.ai’s latest TrendLine omnibus survey (N=1,000 US adults, MOE +/-3.1%):
Which statement about artificial intelligence (AI) is closest to your views?
📈 I’m optimistic about AI, because while there are drawbacks, AI offers promising ways to increase productivity, speed up scientific breakthroughs, and create new knowledge and opportunities
📉 I’m pessimistic about AI, because while there are benefits, AI is a threat to job creation, the environment, and my current way of life
The response: 48% optimistic, 52% pessimistic – not surprising. Also not surprising: the more familiar people are with AI, the more optimistic they tend to be about it. Among those who say they “know a lot” about AI, 65% are optimistic and 35% are pessimistic; among those who say they know “not much at all,” 69% are pessimistic and 31% are optimistic. Between knowing a lot and not knowing much, views are more mixed.
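For anyone who wants to sanity-check the topline, here’s a minimal Python sketch: the standard simple-random-sample formula (95% confidence, with the conservative p = 0.5 assumption) reproduces the survey’s stated margin of error, and the crosstab holds the splits reported above.

```python
import math

# Margin of error for a simple random sample, 95% confidence,
# using the conservative p = 0.5 assumption.
n = 1000
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"MOE: +/-{moe:.1%}")  # +/-3.1%, matching the survey's stated figure

# Reported (optimistic, pessimistic) splits by self-rated AI familiarity.
splits = {
    "overall":         (0.48, 0.52),
    "know a lot":      (0.65, 0.35),
    "not much at all": (0.31, 0.69),
}
for group, (opt, pess) in splits.items():
    print(f"{group:>16}: {opt:.0%} optimistic / {pess:.0%} pessimistic")
```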
We figured this must be a pretty typical journey for people and emerging tech, and according to ChatGPT, it is:
ChatGPT prompt: We have new survey findings that the more people know about AI, the more optimistic they are – is that typical for a new technology? Un/familiarity creates un/favorability?
ChatGPT: Yes, it is fairly typical for greater familiarity with a new technology—especially one as complex and potentially disruptive as AI—to correlate with greater optimism. This dynamic reflects a well-documented pattern in public opinion and risk perception research known as the "familiarity-optimism" effect, and it operates across several domains.
ChatGPT goes on to say (paraphrasing here) that there are historical parallels – think the internet, smartphones/social media, biotech/mRNA vaccines. But there’s also important nuance: the curve doesn’t rise forever. Here’s what typically happens:
Familiarity → Favorability (early stage)
Overexposure → Nuanced views (mid stage)
Widespread adoption → Polarization or backlash (late stage, especially when harms become visible)
Unlike with previous technologies, however, AI is advancing and being adopted so quickly (see Mary Meeker/BOND’s latest trends deck) that you could argue these three phases of perception are stacking on top of one another. That’s something we’re digging into here at The Prompt.
[Policy] Chinese AI-generated video: a viewer guide
Got the sense that AI videos are popping up more and more? While American AI labs, including OpenAI, are developing new video capabilities, videos generated by PRC-backed Chinese labs are already flooding websites and social media. What we’re watching for isn’t in the video content – it’s what’s behind it.
This content is highly realistic and high-resolution, largely because PRC labs have unlimited access to training data – including social media videos on TikTok and other applications, TV shows, movies, livestreams, and even footage from surveillance cameras. These varied video inputs can be used to train AI video models in China, while US developers face greater constraints on video access, including proprietary film and TV libraries and social media sites that block independent labs from using their content for training. As an example, Meta uses Instagram to train its video models, but doesn’t make this content available to independent developers.
The sheer quantity of training data is a key factor underpinning AI models’ video capabilities – scaling laws tell us that more data yields predictably better performance. And it’s an advantage that Chinese AI labs are using to outcompete US rivals. Policymakers should consider ways to ensure more video training data is available to AI labs.
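For readers unfamiliar with the reference: scaling laws are the empirical finding that model loss falls roughly as a power law in training-data size, other factors held equal. The sketch below shows that shape using the data term of a Chinchilla-style loss; every constant is a made-up placeholder, not a fitted value from any real model.

```python
# Toy illustration of data scaling: loss falls as a power law in dataset
# size D, i.e. L(D) = E + B / D**beta (the data term of a Chinchilla-style
# loss, with model size held fixed). All constants are hypothetical.

E = 1.7      # irreducible loss (placeholder)
B = 400.0    # data-term coefficient (placeholder)
beta = 0.28  # data scaling exponent (placeholder)

def loss(tokens: float) -> float:
    """Predicted training loss for a model trained on `tokens` tokens."""
    return E + B / tokens**beta

for d in (1e9, 1e10, 1e11, 1e12):
    print(f"D = {d:.0e} tokens -> predicted loss {loss(d):.3f}")
# Each 10x increase in data buys a predictable drop in loss -- the mechanism
# behind the training-data advantage described above.
```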
There’s more at stake than just high-quality AI-generated videos. Most AI researchers see AI video capabilities as the key to advancing real-world AI applications and robotics – helping AI navigate the real world through cameras that interpret their surroundings the same way AI models might “understand” the content of a short video or TV show. This is a gateway to greater innovations and novel applications where American AI leadership will be critical – Adam Cohen, Head of Economic Policy
[News of day] AI+ Expo
This week, the Special Competitive Studies Project (SCSP) is hosting its annual AI+ Expo, drawing leading industry voices and government officials in DC. Like SCSP, OpenAI believes there’s an urgent need to bolster America’s long-term competitiveness as geopolitical competition intensifies and AI advances at unprecedented speed.
Today, our Head of National Security Policy Sasha Baker will be speaking alongside her counterparts at other frontier labs about delivering AI for national security missions. If you’ll be at the Expo, be sure to find her and say hello! Here’s what she’s looking for at the event:
Seeing AI in action: The conflict in Ukraine and its accelerating use of drone systems have challenged assumptions about how quickly militaries can adopt new technology into their operations. SCSP’s Drone Arena will showcase how machine learning, computer vision, and autonomy are converging with hardware to redefine intelligence and reconnaissance, search and rescue, and logistics missions and amplify people’s capabilities on the battlefield.
Gauging government adoption of AI: I’m keen to see how federal agencies and defense organizations are integrating AI into their workflows, including through procurement demos in the Government Pavilion, case studies on AI-driven intelligence analysis, and sessions on navigating acquisition and compliance. Understanding how policymakers and program managers are accelerating AI deployment is critical to ensuring labs like ours can meet national security needs at scale.
Igniting creative solutions: OpenAI is also proud to sponsor the Expo Hackathon, which will challenge teams to leverage our API in addressing real-world security problems. We’ll award $25,000 in API credits to the team that most powerfully advances a solution to a challenging global security problem, and I’m eager to see creative use of our models in areas like crisis mapping, pandemic early warning, secure supply-chain monitoring, and other use cases!
[About] OpenAI Forum
Explore past and upcoming programming by and for our community of more than 30,000 AI experts and enthusiasts from across tech, science, medicine, education and government, among other fields.
8:00 PM – 9:30 PM EDT on June 4.
[About] OpenAI Academy
The Academy is OpenAI’s free AI literacy program, offering online and in-person trainings for everyone from beginners to experts.
OpenAI has called for a nationwide AI education strategy – rooted in local communities in partnership with American companies – to help our current workforce and students become AI-ready, bolster the economy, and secure America’s continued leadership on innovation.
8:00 PM – 9:00 PM EDT on June 4.
[Disclosure]
Graphics created by Base Three using ChatGPT.