CES 2026: Nvidia Rubin, AMD Helios and Intel 18A — AI hardware enters a new era
If you’ve been wondering what the “theme of the year” at CES 2026 is, the answer is pretty clear: AI is entering its infrastructure phase. There’s less talk about one magical chatbot, and more about who can deliver faster training, cheaper inference, and AI in every device.
In other words: 2026 looks like the year when AI moves from hype into data centers, factories, and laptops.

Nvidia: the Rubin platform, 10× token efficiency, and a serious autonomy narrative
At CES, Nvidia highlighted the Vera Rubin platform as the next major step after the Blackwell generation. The pitch is straightforward: more raw AI performance, and, more importantly, better efficiency in generating "tokens" (the basic unit of work for LLM systems).
In practice, Nvidia is pushing the message that the next generation has to be faster and cheaper at inference time, when AI is serving users through chatbots, agents, search, and tools, not only while models are being trained.
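To see why "token efficiency" translates directly into money, here is a back-of-the-envelope sketch. All of the numbers (throughput, power draw, electricity price) are made-up illustrative assumptions, not vendor figures; the point is only the shape of the calculation.

```python
# Back-of-the-envelope inference cost model. Every number below is an
# illustrative assumption for the sake of the arithmetic, not a real
# spec for any Nvidia, AMD, or Intel product.

def cost_per_million_tokens(tokens_per_second: float,
                            power_watts: float,
                            price_per_kwh: float = 0.10) -> float:
    """Electricity cost (USD) to generate one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kilowatt_hours = power_watts * seconds / 3_600_000  # W*s -> kWh
    return kilowatt_hours * price_per_kwh

# Hypothetical baseline accelerator vs. one that serves 10x more
# tokens per second at the same power draw.
baseline = cost_per_million_tokens(tokens_per_second=1_000, power_watts=1_000)
improved = cost_per_million_tokens(tokens_per_second=10_000, power_watts=1_000)
print(f"baseline: ${baseline:.4f} / M tokens, improved: ${improved:.4f} / M tokens")
```

With these toy numbers, a 10x jump in tokens per second at constant power cuts the energy cost per token by exactly 10x, which is why "tokens per watt" is becoming the headline metric for serving hardware rather than peak FLOPS.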
At the same time, Nvidia again emphasized its autonomy track (Alpamayo), positioning itself as the "brain" for Level 4 driving in defined conditions, which puts it in direct competition in the space usually associated with Tesla and Waymo.
AMD: Helios as a rack-scale product and the MI400 family for enterprise AI
AMD's CES message was firmly data-center-focused: the "yotta-scale" era (an explosion in compute demand) calls for platforms that aren't just a single GPU, but a full rack engineered as one system.
The key point is that AMD wants to offer a complete path: from MI400-family accelerators (including enterprise variants for on-prem environments) to larger rack-scale solutions like the “Helios” concept — with next-gen roadmap hints (MI500) in the background.
For businesses, this is interesting because “AI in the enterprise” increasingly means: we won’t put everything in the cloud — we want some inference and fine-tuning locally, but without redesigning the entire infrastructure.
Intel: Core Ultra Series 3 on 18A — the “AI PC” wave goes mainstream
Intel used CES to push the story of Core Ultra Series 3 (Panther Lake) built on Intel 18A, with the ambition to make the “AI PC” more than a premium niche.
For everyday users, the takeaway is simple: AI features (local processing, smarter app tools, on-device assistants) increasingly run on the device itself, with a better balance between performance and power.
What they all share (and why it matters)
Three different approaches, one shared core message:
- AI is no longer just a model — it’s infrastructure.
- The focus is shifting to cost, efficiency, and real-world delivery (tokens, watts, bandwidth, latency).
- 2026 is shaping up as the year of “AI everywhere”: from data centers to laptops — and soon to wearables.
Conclusion
CES 2026 sends a very direct signal: the next phase of the AI race won’t be won only by a “smarter model,” but by whoever can build a cheaper, more accessible system — from rack-scale machines to AI-capable PCs.
If this trend continues, over the next 6–12 months we’ll see more “AI features you actually use,” and fewer demos that only sound impressive on slides.