The announcement that shook CES
CES has seen plenty of dramatic moments over the years. But January 2025 delivered something genuinely unexpected. AMD, the company most people associate with gaming graphics cards and affordable processors, stepped into the spotlight with a message that couldn't be ignored: We're coming for Nvidia's AI throne.
The setting was perfect. Las Vegas, thousands of tech journalists and industry executives packed into an auditorium, and Lisa Su — one of the most respected CEOs in technology — commanding the stage. But this wasn't just another product announcement. This was a declaration of intent.
Lisa Su's Opening Statement
"The demand for AI compute is growing faster than anyone predicted. But here's the problem: there's essentially one company supplying most of that compute. That's not healthy for innovation. That's not healthy for our customers. And frankly, that's not healthy for the future of AI."
She didn't name Nvidia directly. She didn't have to. Everyone in the room knew exactly what she meant. And when she started unveiling AMD's response — the MI455, the MI440X, and a new generation of Ryzen AI processors — the implications became crystal clear.
But the real shock came midway through the presentation. Lisa Su introduced a guest: an executive from OpenAI, the company behind ChatGPT and one of the largest consumers of AI computing power on the planet. The message? Even the companies that helped build Nvidia's AI empire are looking for alternatives.
According to Reuters reporting from the event, AMD's push to compete directly with Nvidia has been years in the making. But CES 2025 was the moment they went public with their ambitions — and brought receipts in the form of high-profile customer partnerships.
MI455 and MI440X: AMD's data center weapons
Let's get into the hardware that's generating all this excitement. The AMD Instinct MI455 and MI440X are the company's answer to Nvidia's H100 and the upcoming B100/B200 chips built on the Blackwell architecture.
AMD Instinct MI455: The Flagship
The MI455 represents everything AMD has learned about building AI accelerators over the past five years. Here's what makes it special:
- CDNA 4 Architecture: A completely redesigned compute architecture optimized specifically for AI training and inference workloads.
- 256GB HBM3e Memory: Massive memory capacity that allows training of larger models without the memory constraints that plague smaller accelerators.
- 6 TB/s Memory Bandwidth: Enough bandwidth to keep the processors fed with data, reducing the idle time that wastes energy and money.
- Enhanced Infinity Fabric: AMD's chip-to-chip interconnect technology, enabling efficient multi-GPU configurations for distributed training.
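That interconnect matters because frontier models are trained across many GPUs at once. As a rough illustration (not AMD-specific code, and the tiny model is a placeholder), here's the shape of a PyTorch DistributedDataParallel job of the kind these fabrics accelerate:

```python
# Minimal PyTorch DistributedDataParallel sketch (illustrative only).
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    # On ROCm builds of PyTorch, the "cuda" device type maps to AMD GPUs
    # and the "nccl" backend resolves to RCCL, AMD's collective library.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # placeholder training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same script runs unchanged on Nvidia or AMD hardware; the interconnect determines how fast those gradient all-reduces complete.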
AMD Instinct MI440X: The Workhorse
While the MI455 grabs headlines, the MI440X might be the more important product for many organizations. It's designed as a more accessible option that still delivers serious AI performance:
- Lower Power Consumption: Easier to deploy in existing data centers without major infrastructure upgrades.
- 192GB HBM3 Memory: Still substantial memory, suitable for most production AI workloads.
- Better Price-to-Performance: AMD is positioning this as the "smart choice" for organizations that don't need bleeding-edge performance but want competitive AI capabilities.
- Inference Optimization: Particular attention paid to inference workloads, where many organizations actually spend most of their compute budget.
The ROCm Software Story
Hardware is only half the equation. AMD's ROCm (Radeon Open Compute) platform has historically been the Achilles' heel of their AI strategy. CUDA — Nvidia's proprietary software platform — has such deep integration with AI frameworks that switching to AMD often meant significant engineering work.
At CES, AMD addressed this directly. They announced:
- ROCm 7.0 with dramatically improved PyTorch and TensorFlow support
- Partnerships with Hugging Face for optimized model deployment
- Native support for popular training frameworks like DeepSpeed and Megatron-LM
- A dedicated software engineering team working with major AI labs on compatibility
The message was clear: AMD understands that great hardware means nothing if the software experience is frustrating. And they're investing heavily to close that gap.
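If you want to see how far that gap has closed, a quick sanity check is straightforward, because ROCm builds of PyTorch deliberately reuse the torch.cuda namespace. A minimal sketch, assuming a ROCm build of PyTorch is installed:

```python
# Sanity-checking a ROCm build of PyTorch (illustrative sketch).
import torch

# ROCm builds of PyTorch report a HIP version; CUDA builds report None here.
print("HIP version:", torch.version.hip)

# ROCm intentionally reuses the torch.cuda namespace, so existing
# device-agnostic code usually runs unchanged on AMD GPUs.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Device:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")

x = torch.randn(4096, 4096, device=device)
y = x @ x  # matmul dispatched to rocBLAS on ROCm, cuBLAS on CUDA
print(y.shape)
```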
Ryzen AI: Bringing intelligence to your laptop
While data center chips dominate headlines, AMD's Ryzen AI announcement might have more direct impact on everyday users. The concept is simple but powerful: run AI models locally on your laptop or PC, without sending data to the cloud.
What is On-Device AI?
Right now, when you use ChatGPT or similar AI tools, your requests travel to massive data centers where they're processed on expensive hardware. The responses then travel back to you. This approach has limitations:
- Privacy concerns — your data leaves your device
- Latency — network round trips add delay
- Cost — cloud computing isn't free
- Connectivity dependence — no internet, no AI
On-device AI solves these problems by running AI models directly on your computer's processor. Your data never leaves your device, responses are nearly instant, and you can work offline.
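To make that concrete, here's a minimal local-inference sketch using the Hugging Face transformers library. The model name is just an example of a small open model; once the weights are cached locally, no network call is made at inference time:

```python
# Running a small language model fully on-device (illustrative sketch).
# Assumes: pip install transformers torch
from transformers import pipeline

# Example small open model; swap in any locally cached checkpoint.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# The prompt never leaves the machine: after the first download, the
# model weights live on disk and inference runs entirely locally.
result = generator(
    "Summarize why on-device AI improves privacy:",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```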
The New Ryzen AI Processors
AMD's new Ryzen AI lineup, built on the Zen 5 architecture, includes dedicated Neural Processing Units (NPUs) alongside traditional CPU and GPU cores. Key features include:
- 50 TOPS (Trillion Operations Per Second): Enough processing power to run sophisticated language models and image generation locally.
- Dedicated NPU: A specialized processor that handles AI workloads more efficiently than general-purpose CPU or GPU cores.
- Improved Power Efficiency: AI processing that doesn't drain your battery in minutes.
- Windows Copilot+ Support: Native integration with Microsoft's AI features coming to Windows.
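In practice, tapping the NPU on today's Ryzen AI machines usually goes through ONNX Runtime's Vitis AI execution provider rather than direct NPU programming. A hedged sketch, assuming AMD's Ryzen AI software stack (which ships an ONNX Runtime build exposing that provider) is installed, and with "model.onnx" as a placeholder for a quantized model you've exported:

```python
# Offloading an ONNX model to the Ryzen AI NPU (illustrative sketch).
import onnxruntime as ort
import numpy as np

print("Available providers:", ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider",  # NPU path on Ryzen AI
               "CPUExecutionProvider"],     # fallback if NPU is unavailable
)

# Placeholder input; the shape and name depend on your exported model.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```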
Real-World Applications
So what can you actually do with on-device AI? More than you might think:
- Real-time translation: Translate spoken language as you hear it, without internet connection.
- Enhanced video calls: AI-powered background blur, noise cancellation, and automatic framing that runs locally.
- Content creation: Generate images, edit photos, and even run small language models for writing assistance.
- Privacy-sensitive applications: Analyze documents or images without uploading them to external servers.
- Gaming: AI-powered upscaling and frame generation that rivals Nvidia's DLSS technology.
Microsoft, Adobe, and numerous other software companies are building applications specifically designed to leverage these on-device AI capabilities.
Why was OpenAI on stage with AMD?
This is the question everyone has been asking since CES. Why would OpenAI — a company that has reportedly spent billions on Nvidia hardware — publicly align themselves with AMD?
The Supply Crunch Reality
Let's start with the obvious: there aren't enough AI chips. OpenAI, Google, Meta, Microsoft, and dozens of other companies are all competing for a limited supply of Nvidia's most advanced processors. Lead times can stretch to 18 months or more. Prices have skyrocketed.
For organizations training frontier AI models, this isn't just inconvenient — it's existential. Your competitors might get their chips before you do. A second supplier isn't just nice to have; it's essential for business continuity.
The Pricing Pressure
Nvidia's near-monopoly has allowed them to command premium prices. The H100, for example, can cost $30,000-$40,000 per unit. When you need thousands of them, the numbers become staggering.
AMD is positioning itself as the value alternative. Not cheap — these are still expensive chips — but offering better price-to-performance ratios. For organizations spending hundreds of millions on compute, even a 20% savings is significant.
Strategic Independence
There's also a strategic dimension. Being dependent on a single supplier puts you at that supplier's mercy. Pricing, allocation, roadmap decisions — you have limited negotiating power. Having a viable alternative changes the dynamic entirely.
What OpenAI Said on Stage
"We're excited about what AMD is building. The AI industry needs more options. We need more competition. And we need partners who are investing heavily in the hardware and software that will power the next generation of AI systems."
This isn't OpenAI abandoning Nvidia. They'll continue using Nvidia hardware extensively. But it's a clear signal that the relationship isn't exclusive — and that AMD's offerings have reached a level where serious AI organizations are willing to invest engineering resources to support them.
AMD vs. Nvidia: The real comparison
Let's cut through the marketing and look at how AMD's new offerings actually compare to Nvidia's current and upcoming chips.
Hardware Specifications
| Specification | AMD MI455 | Nvidia H100 | Nvidia B200 |
|---|---|---|---|
| Memory | 256GB HBM3e | 80GB HBM3 | 192GB HBM3e |
| Memory Bandwidth | 6 TB/s | 3.35 TB/s | 8 TB/s |
| FP16 Performance | ~3.5 PetaFLOPS* | 1.98 PetaFLOPS | ~4.5 PetaFLOPS* |
| Power (TDP) | ~700W | 700W | ~1000W |
| Manufacturing | TSMC 4nm | TSMC 4N | TSMC 4NP |
*Estimated; final specifications may vary. AMD figures are based on the CES presentation. Nvidia FP16 figures reflect tensor-core throughput with sparsity enabled.
Software Ecosystem
This is where Nvidia still has a significant advantage. CUDA has been the dominant AI software platform for over a decade. Most AI researchers learned on CUDA. Most frameworks are optimized for CUDA first, everything else second.
AMD's ROCm is improving rapidly, but gaps remain:
- Framework support: PyTorch and TensorFlow work well, but some specialized libraries still require workarounds.
- Community resources: Fewer tutorials, Stack Overflow answers, and community projects compared to CUDA.
- Optimization tools: Nvidia's profiling and debugging tools are more mature.
However, AMD is closing the gap. Their partnership with major AI labs (like OpenAI) means the most important workloads will be well-supported. And for many production use cases, ROCm is already "good enough."
The Total Cost Picture
Raw chip specs only tell part of the story. Total cost of ownership includes:
- Chip purchase price
- Data center infrastructure (power, cooling)
- Engineering time for optimization
- Ongoing software support and updates
AMD is positioning themselves to win on the first two factors while minimizing the pain of the latter two. Whether they succeed will determine how much market share they can capture.
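A back-of-envelope model shows how these factors interact. Every number below is an illustrative assumption, not a quoted price or a measured power figure:

```python
# Back-of-envelope TCO comparison (all inputs are illustrative
# assumptions, not quoted prices or measured power draw).
def cluster_tco(chip_price, num_chips, tdp_watts, years=3,
                power_cost_per_kwh=0.10, cooling_overhead=0.4,
                engineering_cost=0.0):
    hardware = chip_price * num_chips
    hours = years * 365 * 24
    # Cooling overhead approximates extra data-center energy per chip-watt.
    energy_kwh = num_chips * tdp_watts / 1000 * hours * (1 + cooling_overhead)
    return hardware + energy_kwh * power_cost_per_kwh + engineering_cost

# Hypothetical 1,000-chip clusters; prices and porting cost are guesses.
incumbent = cluster_tco(chip_price=35_000, num_chips=1000, tdp_watts=700)
challenger = cluster_tco(chip_price=28_000, num_chips=1000, tdp_watts=700,
                         engineering_cost=2_000_000)  # one-off porting work

print(f"Incumbent:  ${incumbent:,.0f}")
print(f"Challenger: ${challenger:,.0f}")
print(f"Savings:    ${incumbent - challenger:,.0f}")
```

Under these made-up inputs, a lower chip price outweighs a multi-million-dollar porting budget at cluster scale, which is exactly the calculus AMD is betting on.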
How to prepare for AMD AI hardware
Whether you're a developer, a data center operator, or a technology decision-maker, here's how to position yourself for the shifting AI hardware landscape.
For AI Developers
- Start testing with ROCm: Even if you're not ready to switch, begin experimenting with AMD's ROCm platform. Understanding its quirks now will pay dividends later.
- Abstract your hardware dependencies: Use framework-level APIs (PyTorch, JAX) rather than CUDA-specific code where possible. This makes switching hardware easier (see the device-selection sketch after this list).
- Monitor framework updates: Watch for ROCm-specific optimizations in your preferred frameworks. Performance is improving rapidly.
- Join the community: AMD's developer forums and Discord channels are active and can help troubleshoot issues.
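Here's the device-selection sketch mentioned above: one small pattern that keeps training code vendor-agnostic, relying on the fact that ROCm builds of PyTorch reuse the "cuda" device type:

```python
# One pattern for keeping model code hardware-agnostic (sketch).
import torch

def pick_device() -> torch.device:
    # ROCm PyTorch reuses the "cuda" namespace, so this single check
    # covers both Nvidia and AMD GPUs without vendor-specific branches.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon, for local dev
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
).to(device)
x = torch.randn(8, 128, device=device)
print(model(x).shape, "on", device)
```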
For Enterprise Decision-Makers
- Evaluate your workloads: Not all AI workloads are equal. Some will run well on AMD hardware today; others may require more optimization work.
- Start small: Consider piloting AMD hardware for specific projects before committing to large deployments.
- Negotiate with both vendors: Having AMD as a viable alternative gives you leverage in Nvidia negotiations too.
- Plan for hybrid environments: The future likely includes both AMD and Nvidia hardware, optimized for different workloads.
For Individual Users
If you're in the market for a new laptop or PC:
- Look for systems with Ryzen AI processors if you plan to use AI features regularly.
- Verify software support for your specific use cases — not all applications support on-device AI yet.
- Consider future-proofing: AI capabilities in consumer devices will only become more important over time.
Common misconceptions about AMD AI chips
There's a lot of noise around AMD's AI hardware. Let's separate fact from fiction.
Misconception 1: "AMD is years behind Nvidia"
This was true a few years ago. It's not true anymore. The MI455 is competitive with Nvidia's current-generation hardware, and AMD's roadmap shows continued aggressive development. The gap has narrowed significantly.
Misconception 2: "ROCm doesn't work"
ROCm had a rough start, but it's now a functional platform that major AI labs are using in production. Are there still rough edges? Yes. Is it unusable? Absolutely not. Most mainstream AI workloads run well on ROCm today.
Misconception 3: "Only gaming companies should use AMD"
AMD's enterprise and data center business is growing rapidly. Major cloud providers like Microsoft Azure and Google Cloud offer AMD-based instances. This isn't just for gamers anymore.
Misconception 4: "The OpenAI partnership is just marketing"
Partnerships announced at CES can be empty marketing. But OpenAI putting an executive on stage — at a competitor's event, with Nvidia watching — is a significant statement. They're committing real engineering resources to AMD support.
Misconception 5: "Switching from Nvidia is too hard"
Switching has costs, certainly. But they're not insurmountable. For many workloads, the transition is relatively straightforward. And the potential savings — both in hardware costs and supply chain diversification — can justify the investment.
Misconception 6: "On-device AI is just a gimmick"
The NPUs in Ryzen AI processors might seem like marketing fluff, but they enable real capabilities that cloud-based AI cannot match: privacy, latency, and offline operation. As software catches up to hardware, these will become increasingly important.
What's next for AMD and AI computing
CES 2025 wasn't the end of AMD's AI story — it was just the beginning. Here's what to watch for in the coming years.
AMD's Announced Roadmap
Lisa Su outlined AMD's plans through 2027:
- 2025 (Late): MI455 and MI440X general availability to enterprise customers.
- 2026: Next-generation CDNA architecture (reportedly codenamed "CDNA 5") with significant performance improvements.
- 2026-2027: Expanded Ryzen AI lineup with more powerful NPUs capable of running larger on-device models.
- Ongoing: Continued ROCm improvements and expanded framework support.
Industry Trends to Watch
AMD's announcements fit into broader industry patterns:
- Supply diversification: Major AI companies are actively seeking alternatives to Nvidia, including AMD, Intel, and custom chips from Google and Amazon.
- On-device AI growth: The shift toward local AI processing will accelerate as models become more efficient and hardware more capable.
- Open ecosystems: There's growing pressure for AI software stacks that work across multiple hardware platforms, reducing vendor lock-in.
- Power efficiency focus: As AI workloads grow, power consumption becomes increasingly critical — a potential advantage for AMD's approach.
The Competitive Response
Nvidia won't sit idle. Expect accelerated product launches, aggressive pricing for strategic accounts, and continued investment in CUDA's ecosystem advantages. Competition benefits everyone — expect innovation from both companies to accelerate.
What This Means for You
Whether you're running AI workloads in a data center or using AI features on your laptop, the increased competition between AMD and Nvidia means:
- Better prices as companies compete for market share
- Faster innovation as each company tries to outdo the other
- More choice in hardware and software platforms
- Improved software support as frameworks optimize for multiple platforms
Data visualizations
To better understand AMD's position in the AI accelerator market and the projected growth of on-device AI, here are two data visualizations based on industry analysis and company announcements.
Chart 1: Projected market share changes in the data center AI accelerator market. While Nvidia maintains dominance, AMD and other competitors are expected to capture increasing share as alternatives mature. Data based on industry analyst reports and company guidance.
Chart 2: Neural Processing Unit (NPU) performance in consumer laptops has grown dramatically, enabling increasingly sophisticated on-device AI capabilities. AMD's new Ryzen AI processors represent a significant leap in local AI processing power.
Conclusion: A new era of AI hardware competition
AMD's CES 2025 announcements mark a genuine inflection point in the AI hardware landscape. For the first time in years, there's a credible alternative to Nvidia's dominance — one backed by serious hardware, improving software, and partnerships with industry leaders like OpenAI.
Does this mean Nvidia is in trouble? Not immediately. They still have the superior software ecosystem, the deepest customer relationships, and a track record of execution that's hard to match. But they're no longer running unopposed.
For the AI industry as a whole, this competition is unambiguously good news. More suppliers mean better prices, improved innovation, and reduced supply chain risk. The companies building AI applications — and ultimately the people using them — will benefit.
The MI455 and MI440X represent AMD's most serious challenge to Nvidia's data center dominance. The Ryzen AI processors bring sophisticated AI capabilities to everyday laptops and PCs. And the OpenAI partnership signals that even the most committed Nvidia customers are ready to diversify.
CES 2025 wasn't just a product announcement. It was AMD declaring that the AI hardware race is now officially a two-horse contest. The next few years will determine whether they can convert that ambition into lasting market share.
One thing is certain: the AI hardware landscape just got a lot more interesting. And that's good for everyone.
Further Reading & Authoritative Sources
FAQs: AMD AI Chips and CES 2025 Announcements
When will the MI455 and MI440X be available?
AMD indicated that the MI455 and MI440X will begin shipping to enterprise customers in late 2025. Major cloud providers are expected to offer instances powered by these chips shortly after. Consumer availability through system integrators will follow, likely in early 2026.
Is ROCm ready for serious AI work?
ROCm has matured significantly and is now used in production by several major organizations. PyTorch and TensorFlow support is solid, and the partnership with OpenAI suggests that cutting-edge AI workloads will be well-supported. However, some specialized libraries may still require additional optimization work compared to CUDA equivalents.
How do AMD's NPUs compare to Intel's?
Both AMD and Intel are adding dedicated NPUs to their consumer processors. AMD's latest Ryzen AI chips offer approximately 50 TOPS of NPU performance, competitive with Intel's Lunar Lake offerings. The real differentiation often comes down to software support and specific application optimization rather than raw NPU performance.
Is OpenAI switching from Nvidia to AMD?
OpenAI's appearance at AMD's CES presentation indicates a serious commitment to evaluating and potentially deploying AMD hardware. However, Nvidia will likely remain their primary hardware supplier for the foreseeable future. The partnership is about diversification and ensuring competition in the market, not about completely replacing Nvidia.
Can you run ChatGPT-class models locally on a Ryzen AI laptop?
You can run smaller language models locally on Ryzen AI hardware, but not the full-scale GPT-4 or similar frontier models — those require data center-class hardware. However, quantized versions of open-source models like Llama and Mistral can run locally, offering ChatGPT-like experiences for many use cases without cloud connectivity.
How much will the MI455 cost?
AMD hasn't announced official pricing, but industry analysts expect the MI455 to be priced competitively with Nvidia's H100, likely in the $20,000-$35,000 range per unit. AMD's strategy appears focused on offering better value rather than dramatically lower prices, emphasizing performance-per-dollar advantages.
Should you buy Nvidia now or wait for AMD?
This depends on your timeline and risk tolerance. If you need AI compute capacity now, Nvidia's current offerings are excellent and well-supported. If you can wait until late 2025 or beyond, AMD's new chips may offer competitive performance at better prices. Many organizations are planning to use both, allocating different workloads to whichever platform offers the best value.
Which laptops will get Ryzen AI processors?
AMD announced partnerships with major laptop manufacturers including Lenovo, HP, ASUS, and Acer for Ryzen AI-powered devices. Expect new models throughout 2025, particularly in the premium ultrabook and creator laptop segments. Microsoft's Copilot+ certification will also help identify laptops with strong on-device AI capabilities.