
Tech Bytes

October 10, 2025

EXCLUSIVE INTERVIEW

Sam Altman on OpenAI DevDay: The Infrastructure Buildout and Path to AGI

OpenAI CEO discusses compute constraints, GPT-5 Pro capabilities, $6 billion AMD partnership, and the company's mission to build AGI that benefits humanity—all while serving 800 million weekly ChatGPT users

Dillip Chowdary

AI Industry Analyst

Key Takeaways from Sam Altman's DevDay Interview

🚀 Major Announcements

  • GPT-5 Pro: New flagship model for finance, legal, and healthcare
  • Sora 2: Next-gen video AI with synchronized sound
  • ChatGPT Apps: SDK platform reaching 800M weekly users
  • AgentKit: Complete platform for building AI agents

⚙️ Infrastructure Reality

  • Compute bottleneck: Severe GPU constraints limiting growth
  • $6B AMD deal: 6 gigawatts of Instinct GPUs
  • Altman's focus: Spending most time on infrastructure
  • Demand exceeds supply: "Not enough" compute available

🎯 AGI Mission

  • Goal: Build capable AI/AGI/superintelligence
  • Deployment: Benefits people for "all sorts of things"
  • Requirements: Massive infrastructure + product + research
  • Timeline: Continued aggressive scaling

📊 Platform Scale

  • 800M weekly users: ChatGPT (doubled since Feb 2025)
  • 4M developers: Building with OpenAI APIs
  • 6B tokens/min: Processed through the API
  • Launch partners: Coursera, Canva, Zillow, Figma, Spotify

On October 6, 2025, Sam Altman, CEO of OpenAI, took the stage at DevDay 2025 in San Francisco to unveil a wave of new products and partnerships—but the real story emerged in his candid interview afterward. While the company announced impressive milestones like GPT-5 Pro, Sora 2 video generation, and reaching 800 million weekly ChatGPT users, Altman's most revealing insights came when discussing the brutal reality of building AI at this scale: compute is the bottleneck, and infrastructure is where he's spending most of his time.

In an exclusive interview with Ben Thompson of Stratechery, Altman opened up about OpenAI's infrastructure challenges, the company's vision for artificial general intelligence (AGI), and why massive hardware deals like the $6 billion AMD partnership are just the beginning of what's needed to serve the exploding demand for AI services.

Sam Altman at DevDay 2025 announcing new AI capabilities while acknowledging infrastructure challenges

💬 Sam Altman's Key Insights: In His Own Words

Throughout the interview and DevDay presentations, Sam Altman offered rare transparency about OpenAI's challenges and vision. Here are his most significant statements:

"

"The degree to which we are all constrained by compute... Everyone is just so constrained on being able to offer the services at the scale required to get the revenue that at this point, we're quite confident we can push it pretty far."

— Sam Altman, OpenAI CEO

Compute Reality

📊 What This Means:

Despite massive investments in GPUs from NVIDIA, AMD, and others, OpenAI—and the entire AI industry—faces a fundamental constraint: there simply aren't enough chips to meet demand. This bottleneck limits how many users can access services, how fast models can be trained, and ultimately, how quickly the AI revolution can scale. Altman's acknowledgment that they "can push it pretty far" with revenue suggests OpenAI is willing to spend billions more on compute infrastructure.

"

"There's so much more demand... Even with massive new hardware partnerships with AMD and others, we'll be saying the same thing again."

— Sam Altman on future compute needs

Endless Growth

⚡ What This Means:

This statement reveals the exponential nature of AI demand. Even after announcing a $6 billion deal with AMD for 6 gigawatts of GPU capacity—one of the largest AI infrastructure deals in history—Altman predicts they'll still be compute-constrained. The implication: as models get better and more people adopt AI tools, demand grows faster than supply can scale, creating a perpetual arms race for computing power.

"

"There just simply is not enough, and I think that our ability to build will still fail in comparison to the greater demand."

— Greg Brockman, OpenAI President

Bottleneck Reality

🔧 What This Means:

OpenAI President Greg Brockman's blunt assessment reinforces the severity of the problem. Even with OpenAI's $100+ billion valuation and ability to outspend competitors, they acknowledge that demand will outpace their buildout capacity. This creates a competitive moat for companies that secure early access to compute but also means users will continue experiencing usage caps, throttling, and delayed access to new features.

"

"Yeah, we're trying to build very capable AI, AGI, superintelligence, whatever it's called these days, and then be able to deploy it in a way that really benefits people and they can use it for all sorts of things and that requires quite a bit on the infrastructure side, also on the product side, obviously on the research side."

— Sam Altman on OpenAI's mission

AGI Goal

🎯 What This Means:

Altman's articulation of OpenAI's mission is notably casual—"whatever it's called these days"—but the substance is clear: the company is pursuing artificial general intelligence, AI that can perform any intellectual task a human can. Importantly, Altman frames this as a three-pronged challenge: research (building smarter models), infrastructure (enough compute to train and serve them), and product (making them useful). All three must succeed for AGI to benefit humanity, and infrastructure is currently the limiting factor.

"

"Infrastructure is where I'm spending most of my time now. It's brutally difficult to have enough infrastructure in place to serve the demand we're seeing."

— Sam Altman on his daily focus

CEO Priority

👔 What This Means:

For the CEO of the world's leading AI company to spend most of his time on infrastructure—not research, not product strategy, not fundraising—reveals the existential importance of compute. Altman is negotiating multi-billion dollar GPU deals, coordinating data center buildouts, and managing relationships with chip manufacturers because without compute, nothing else matters. This is unprecedented for a tech CEO and signals how different the AI era is from previous computing waves.

OpenAI DevDay 2025 major announcements: GPT-5 Pro, Sora 2, ChatGPT Apps, and AgentKit
DevDay 2025 unveiled game-changing AI capabilities across models, video, and developer tools

🚀 What OpenAI Announced at DevDay 2025

While Altman's candid discussion of infrastructure challenges dominated the post-event conversation, the actual announcements at DevDay 2025 were significant product launches that expand OpenAI's capabilities across multiple domains:

🤖

GPT-5 Pro: The New Flagship Model

OpenAI's latest language model is specifically designed for high-stakes industries requiring exceptional accuracy and depth of reasoning:

💼 Finance

Complex financial modeling, risk assessment, regulatory compliance analysis

⚖️ Legal

Contract analysis, case law research, legal document generation

🏥 Healthcare

Medical research assistance, diagnosis support, treatment recommendations
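From a developer's perspective, calling GPT-5 Pro would presumably look like any other OpenAI model request. Here is a minimal sketch using the OpenAI Python SDK; the `gpt-5-pro` identifier and the chat-style interface are assumptions, so check OpenAI's documentation for the actual model name and endpoint:

```python
# Minimal sketch: calling GPT-5 Pro through the OpenAI Python SDK.
# Assumption: the model is exposed under the identifier "gpt-5-pro";
# consult OpenAI's model list for the real name and interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5-pro",  # assumed identifier
    messages=[
        {"role": "system", "content": "You are a careful financial analyst."},
        {"role": "user", "content": "List the key risk factors in this 10-K excerpt: ..."},
    ],
)
print(response.choices[0].message.content)
```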

🎬

Sora 2: Next-Generation Video AI

Building on the original Sora model, Sora 2 introduces groundbreaking capabilities for AI-generated video:

  • 🎥 More Realistic Scenes: Improved physics and object permanence
  • 🔊 Synchronized Sound: Audio generation matching visual content
  • 🎨 Greater Creative Control: Fine-tuned direction and editing capabilities
  • 🔌 Available in API: Developers can build video applications

📱

ChatGPT Apps & Apps SDK

OpenAI is transforming ChatGPT into an app platform, allowing developers to build interactive, adaptive, and personalized applications inside ChatGPT:

🌟 Launch Partners:

📚 Coursera · 🎨 Canva · 🏠 Zillow · ✏️ Figma · 🎵 Spotify

Built on MCP (Model Context Protocol): The Apps SDK builds on this open standard, making it possible for any developer to reach 800 million weekly ChatGPT users with interactive applications.
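The Apps SDK itself wasn't detailed in the interview, but because it builds on MCP, the general shape of an app server is public. Below is a minimal sketch using the open-source MCP Python SDK; the `home-listings` app and `search_listings` tool are illustrative assumptions, not the actual Apps SDK surface:

```python
# Minimal sketch of an MCP server, the open standard the Apps SDK builds on.
# Assumption: uses the reference MCP Python SDK (`pip install mcp`); the
# hypothetical "home-listings" app and tool below are demo stand-ins.
from mcp.server.fastmcp import FastMCP

server = FastMCP("home-listings")  # hypothetical example app

@server.tool()
def search_listings(city: str, max_price: int) -> str:
    """Return home listings for a city under a price cap (stub data)."""
    return f"3 listings found in {city} under ${max_price:,} (demo data)"

if __name__ == "__main__":
    server.run()  # serves the tool over MCP so a host such as ChatGPT can call it
```

The ChatGPT-specific Apps SDK layers interactive UI on top of this protocol; the sketch only shows the MCP plumbing any host could call.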

🤵

AgentKit: Platform for AI Agents

A complete platform for building, deploying, and optimizing AI agents—autonomous systems that can perform complex multi-step tasks:

🎨 Agent Builder

Visual drag-and-drop workflow designer for creating agents without code

💬 ChatKit

Embeddable chat interfaces for adding AI conversations to any application

📊 Enhanced Evaluations

Testing and optimization tools for measuring agent performance
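AgentKit's own API wasn't shown in the interview, but the pattern it packages is well established: the model requests a tool call, the runtime executes it, and the result is fed back for a final answer. A minimal sketch of that loop using the OpenAI Python SDK's standard tool-calling interface follows; the weather tool is a hypothetical stand-in and `gpt-4o` is only a placeholder model:

```python
# Minimal sketch of the loop that agent platforms like AgentKit automate:
# the model requests a tool call, we run it, and feed the result back.
# get_weather is a hypothetical demo tool, not part of AgentKit itself.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Austin?"}]
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

call = first.choices[0].message.tool_calls[0]  # assume the model chose the tool
args = json.loads(call.function.arguments)
result = f"72F and sunny in {args['city']}"    # stubbed tool execution

messages.append(first.choices[0].message)      # the assistant's tool request
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```

Per the descriptions above, Agent Builder and ChatKit wrap this loop in visual tooling and embeddable UI so most developers never write it by hand.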

⚡ Additional Model Launches:

🎙️

gpt-realtime mini

Smaller, cheaper voice model for low-latency streaming audio and speech interactions

🖼️

Image Generation Mini

80% less expensive than the large model while maintaining quality for most use cases

💻

Codex Full Launch

AI coding agent graduates from preview to full product, powered by a specialized GPT-5 variant

📊 OpenAI's Explosive Growth: By the Numbers

800M
Weekly Active Users

Doubled from 400M in February 2025

4M
Developers Building

Using OpenAI APIs and tools

6B
Tokens per Minute

Processed through the API

These numbers illustrate why compute is such a critical constraint for OpenAI. Serving 800 million weekly users, double the count from just eight months earlier, requires massive infrastructure. Processing 6 billion tokens per minute through the API means running large GPU fleets around the clock just to keep services online, and every new user and every new feature compounds those requirements.
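A quick back-of-envelope conversion shows what that throughput implies; the per-GPU serving rate below is purely an assumed figure for illustration:

```python
# Back-of-envelope check on the API throughput quoted above.
# The per-GPU serving rate is an assumed figure for illustration only.
tokens_per_minute = 6_000_000_000
tokens_per_second = tokens_per_minute / 60   # 100 million tokens/s
assumed_tokens_per_gpu_s = 2_000             # assumed per-GPU serving rate

gpus_busy = tokens_per_second / assumed_tokens_per_gpu_s
print(f"{tokens_per_second:,.0f} tokens/s -> ~{gpus_busy:,.0f} GPUs saturated 24/7")
```

Even under this assumed rate, the quoted API traffic alone keeps tens of thousands of GPUs saturated, before counting ChatGPT itself or any model training.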

OpenAI's $6B AMD deal represents one of the largest AI infrastructure investments in history

🔌 The $6 Billion AMD Partnership: What It Means

Just before DevDay 2025, OpenAI announced a landmark partnership with AMD to deploy 6 gigawatts of AMD Instinct GPUs over several years—one of the largest AI infrastructure commitments ever made. This deal is part of OpenAI's broader strategy to diversify beyond NVIDIA and secure enough compute to meet explosive demand growth.

💡 Understanding the AMD Deal:

⚡ 6 Gigawatts of Compute Power

To put this in perspective, 6 gigawatts is comparable to the peak electricity demand of a major city. In AI terms, it can power hundreds of thousands to millions of high-performance GPUs capable of training and serving the next generation of models.

  • Equivalent to powering roughly 5 million average homes
  • Requires specialized data centers with advanced cooling
  • Represents billions in infrastructure investment beyond the chip cost (a rough sanity check follows below)
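Here is that sanity check on the 6 GW figure; both divisors are assumptions stated in the code:

```python
# Rough sanity check on the 6 GW figure. Both divisors are assumptions:
# ~1.25 kW average draw per home (consistent with the 10 MW ~ 8,000 homes
# comparison later in this article) and ~1 kW per GPU with cooling overhead.
capacity_watts = 6e9            # 6 gigawatts
avg_home_watts = 1_250          # assumed average household draw
all_in_gpu_watts = 1_000        # assumed per-GPU draw incl. facility overhead

print(f"Homes powered:  ~{capacity_watts / avg_home_watts:,.0f}")    # ~4,800,000
print(f"GPUs supported: ~{capacity_watts / all_in_gpu_watts:,.0f}")  # ~6,000,000
```

Under these assumptions the capacity lands in the millions of homes and GPUs, which is why the deal is measured in gigawatts rather than chip counts.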

🏗️ Why Diversify Beyond NVIDIA?

While NVIDIA dominates the AI chip market with ~90% share, OpenAI's AMD partnership signals a strategic shift:

  • Supply chain risk mitigation: Reduces dependence on a single vendor
  • Cost optimization: AMD's MI300X chips offer competitive pricing
  • Negotiating leverage: Multi-vendor strategy improves terms with all suppliers
  • Innovation hedging: Different architectures may excel at different workloads

📅 Multi-Year Commitment

The partnership spans several years, with phased deployment as AMD ramps production and OpenAI builds out data center capacity. This long-term commitment gives both companies predictability: OpenAI secures future supply, while AMD gains a massive anchor customer for its AI ambitions.

🚨 Still Not Enough

Despite this massive investment, Altman's comments make clear that even $6 billion in GPUs won't solve the compute bottleneck. As demand continues doubling every few months and models require more parameters, OpenAI will need even larger deals with NVIDIA, AMD, and potentially new chip companies to keep pace.

🏗️ The Infrastructure Challenge: Why It's "Brutally Difficult"

Sam Altman's characterization of infrastructure as "brutally difficult" isn't hyperbole. Building the compute infrastructure to support OpenAI's ambitions involves coordinating across multiple complex systems, each with its own constraints:

🔴 Challenge 1: Chip Supply Constraints

The Problem: NVIDIA, AMD, and other chip manufacturers can't produce GPUs fast enough to meet AI demand. Lead times can exceed 12-18 months, and allocation is limited even for customers willing to pay premium prices.

OpenAI's Approach: Securing multi-year commitments with multiple vendors, leveraging Microsoft's Azure partnership, and exploring custom chip development (analogous to Google's in-house TPUs) for future generations.

🟠 Challenge 2: Power and Cooling

The Problem: Modern AI data centers require massive amounts of electricity and advanced cooling systems. A single large GPU cluster can consume 10+ megawatts, equivalent to 8,000 homes.

OpenAI's Approach: Partnering with cloud providers (Microsoft, Oracle) who have existing power infrastructure, and potentially co-locating near power plants or renewable energy sources for future expansion.

🟡 Challenge 3: Data Center Construction

The Problem: Building AI-optimized data centers takes 18-36 months and requires specialized design for high-density GPU racks, networking, and cooling. Real estate in suitable locations is limited.

OpenAI's Approach: Leveraging Microsoft's existing Azure infrastructure while planning future dedicated facilities, potentially working with specialized data center providers like CoreWeave and Lambda Labs.

🟢 Challenge 4: Networking and Interconnect

The Problem: Large language models require thousands of GPUs to communicate with ultra-low latency. Networking infrastructure (InfiniBand, RoCE) is as critical as the chips themselves and can be just as difficult to procure.

OpenAI's Approach: Working closely with NVIDIA (networking via Mellanox acquisition), designing custom cluster topologies, and optimizing model architectures to reduce communication overhead.
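To see why interconnect can dominate, consider a naive data-parallel gradient sync; every number below is an illustrative assumption rather than anything OpenAI has disclosed:

```python
# Why the network matters as much as the chips: time for one naive
# data-parallel gradient sync. Every figure is an illustrative assumption.
model_params = 1e12             # assumed 1-trillion-parameter model
bytes_per_param = 2             # fp16 gradients -> ~2 TB of gradient data
gpus = 10_000
link_bytes_per_s = 50e9         # assumed ~400 Gb/s effective per GPU link

# Ring all-reduce pushes ~2*(N-1)/N of the full gradient through each link.
per_link_bytes = 2 * (gpus - 1) / gpus * model_params * bytes_per_param
print(f"~{per_link_bytes / link_bytes_per_s:.0f} s per naive sync")  # ~80 s
# Far too slow per training step, which is why custom topologies, model
# sharding, and overlapping communication with compute are essential.
```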

🔵 Challenge 5: Operational Complexity

The Problem: Running AI infrastructure at OpenAI's scale requires managing complex orchestration, fault tolerance, model deployment pipelines, and real-time resource allocation across distributed systems.

OpenAI's Approach: Building proprietary infrastructure management tools, hiring top-tier systems engineers, and developing automated failover and scaling systems to maximize uptime and efficiency.

⚠️ The Compounding Effect

Each of these challenges compounds the others. You can't just buy more chips—you need power, cooling, data centers, networking, and operational expertise, all of which have independent supply constraints and lead times. This is why Altman spends most of his time on infrastructure: it's the long pole in OpenAI's mission to build AGI.

🎯 The Path to AGI: OpenAI's North Star

Despite all the focus on infrastructure and compute constraints, Altman never loses sight of OpenAI's ultimate goal: artificial general intelligence (AGI)—AI systems that can match or exceed human intelligence across virtually all cognitive tasks.

🧠 What Is AGI?

AGI (Artificial General Intelligence) is easiest to define by contrast with today's narrow systems:

✅ Current AI Can Do:

  • Specific tasks (write code, generate images)
  • Narrow domain expertise
  • Pattern matching within training data
  • Respond to user prompts

🚀 AGI Would Do:

  • Any intellectual task a human can
  • Learn new skills independently
  • Reason across multiple domains
  • Set goals and plan long-term

🛤️ OpenAI's Three-Pillar Strategy for AGI:

🔬

1. Research: Smarter Models

Continuing to scale model size, improve training techniques, and develop new architectures. GPT-5 Pro represents the latest step, but OpenAI is already working on successors with even more parameters and capabilities.

🏗️

2. Infrastructure: Enough Compute

Building or securing access to unprecedented amounts of computing power to train trillion-parameter models and serve billions of users. This is where Altman spends most of his time today.

📱

3. Product: Useful Deployment

Creating intuitive interfaces and applications that allow people to actually benefit from AGI capabilities. ChatGPT, ChatGPT Apps, and AgentKit are all steps toward making AGI accessible and useful.

⏰ Timeline Expectations

While Altman didn't provide specific AGI timelines in this interview, OpenAI's actions suggest they believe meaningful AGI could arrive within the next 5-10 years, assuming infrastructure scaling continues. The company's massive investments in compute and the urgency in Altman's tone indicate they see this as an achievable near-term goal, not distant science fiction.

However, "AGI" means different things to different people, and OpenAI may be building systems that exhibit general intelligence in some domains while still having limitations in others—a gradual transition rather than a single "AGI moment."

🎯 Bottom Line: What This All Means

  • 1️⃣ Compute is the New Oil: In the AI era, access to GPUs and data center capacity determines winners and losers. OpenAI's willingness to spend billions on infrastructure shows how critical this resource has become.
  • 2️⃣ Demand Exceeds Supply: Even with $6B AMD deals and Microsoft partnerships, OpenAI can't meet current demand—and it's growing exponentially. Expect continued usage caps and premium pricing.
  • 3️⃣ AGI Remains the Goal: Despite infrastructure challenges, OpenAI hasn't lost sight of building general intelligence. Every product launch and partnership serves this ultimate mission.
  • 4️⃣ CEO as Infrastructure Chief: Altman spending most of his time on compute deals reflects how different the AI wave is from previous tech eras—infrastructure is existential, not operational.
  • 5️⃣ Platform Strategy in Full Swing: ChatGPT Apps, AgentKit, and new models show OpenAI building an ecosystem where developers and partners create value on top of their foundation, ensuring long-term dominance.

About Dillip Chowdary

AI industry analyst and technology strategist focusing on artificial intelligence infrastructure, large language models, and the business of AI. Tracks the intersection of AI research, compute economics, and enterprise adoption.
