By Harshit, Washington, D.C., October 27, 2025 – 8:30 AM EDT
In a bold move set to reshape the landscape of artificial intelligence hardware, Qualcomm announced Monday its upcoming AI accelerator chips, the AI200 and AI250, marking its entry into the high-stakes data center market long dominated by Nvidia and, to a lesser extent, AMD.
The company’s stock surged 15% following the announcement, signaling strong investor confidence in Qualcomm’s long-term ambitions to compete in the rapidly growing AI semiconductor space.
A Strategic Leap Into AI Data Centers
For years, Qualcomm has been synonymous with mobile and wireless chip innovation, powering smartphones across the globe. However, with the AI boom fueling unprecedented demand for data center infrastructure, the company is now turning its focus toward large-scale AI computing.
The AI200, expected to hit the market in 2026, and the AI250, planned for 2027, are designed to handle AI inference workloads — the process of running and deploying trained AI models. These chips will be available in liquid-cooled, full-rack systems capable of housing as many as 72 chips acting as a single computing unit.
This configuration mirrors the approach taken by Nvidia and AMD, whose GPUs (graphics processing units) have been the backbone of modern AI model training and inference tasks. Qualcomm’s innovation, however, stems from its Hexagon Neural Processing Units (NPUs) — the same AI cores that power its Snapdragon smartphone chips, now scaled up for enterprise-level performance.
“We’re Ready for the Data Center Game”
Durga Malladi, Qualcomm’s general manager for data center and edge, told reporters, “We first wanted to prove ourselves in other domains, and once we built our strength there, it was easy for us to move up a notch into the data center level.”
This shift is no small feat. Data centers are expected to attract nearly $6.7 trillion in capital expenditure through 2030, according to a McKinsey estimate, making AI infrastructure one of the most lucrative and competitive sectors in the tech industry.
By emphasizing energy efficiency and lower total cost of ownership, Qualcomm aims to differentiate itself from Nvidia’s power-hungry GPUs. Its systems reportedly consume around 160 kilowatts per rack, comparable to some of Nvidia’s high-end GPU setups but at a lower operational cost.
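A quick back-of-envelope calculation shows the stakes. The electricity rate below is an illustrative assumption, not a figure from Qualcomm:

```python
# Back-of-envelope: the annual electricity bill for one 160 kW rack.
# The $0.08/kWh industrial rate is an illustrative assumption, not a
# figure from Qualcomm.
rack_power_kw = 160                        # reported rack power draw
hours_per_year = 24 * 365                  # 8,760 hours
price_per_kwh = 0.08                       # assumed industrial rate, $/kWh

annual_kwh = rack_power_kw * hours_per_year     # 1,401,600 kWh
annual_cost = annual_kwh * price_per_kwh        # about $112,000
print(f"{annual_kwh:,.0f} kWh/year, roughly ${annual_cost:,.0f}/year")
```

At the scale of hundreds of racks, even small gains in performance per watt compound into millions of dollars a year, which is the heart of the total-cost-of-ownership pitch.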
Nvidia’s Grip and the Race for Alternatives
Currently, Nvidia controls over 90% of the AI accelerator market, thanks largely to the explosive popularity of its H100 GPUs, used to train large language models like OpenAI’s GPT series. Nvidia’s market capitalization has skyrocketed to over $4.5 trillion, driven by insatiable demand for AI computing power.
But the competition is heating up. OpenAI, in search of supply stability and cost efficiency, recently announced plans to buy chips from AMD in a deal that could also give it an equity stake in the chipmaker. Meanwhile, Google, Amazon, and Microsoft continue to develop their own AI accelerators for in-house cloud services, attempting to reduce dependence on Nvidia’s hardware ecosystem.
Targeting Inference, Not Training
Unlike Nvidia, which has dominated AI training — the resource-intensive process of teaching neural networks using vast datasets — Qualcomm is focusing on inference, where pre-trained models are deployed to perform real-time tasks like image recognition, text generation, or predictive analytics.
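To make the distinction concrete, the sketch below serves a small pre-trained model using the open-source Hugging Face transformers library. It illustrates the general shape of an inference workload, not Qualcomm’s own software stack, which the company has not detailed:

```python
# A minimal inference workload: a pre-trained model answering requests.
# This uses the open-source Hugging Face "transformers" library purely
# for illustration; it is not Qualcomm's software stack.
from transformers import pipeline

# Load a small, publicly available pre-trained model. No training
# happens here: the weights are fixed and only forward passes run.
generator = pipeline("text-generation", model="gpt2")

# Inference: the deployed model produces output for a live request.
result = generator("AI data centers are", max_new_tokens=20)
print(result[0]["generated_text"])
```

Training that same model, by contrast, would mean iterating over a large dataset and updating its weights, a far more compute-hungry job and the one Qualcomm is deliberately leaving to others.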
This decision aligns with Qualcomm’s strength in power efficiency and scalability, crucial factors for companies running continuous AI workloads in massive server farms.
Malladi explained that Qualcomm will offer its AI chips and CPUs both as part of rack-scale systems and as modular components, catering to hyperscalers who prefer to design their own configurations. Interestingly, he hinted that even Nvidia or AMD could become customers for Qualcomm’s CPU and data center components.
Global Expansion and Partnerships
Qualcomm also announced a partnership with Saudi Arabia’s Humain, committing to provide AI inference systems across regional data centers. The deal could scale to support up to 200 megawatts of power capacity, signaling Qualcomm’s growing international footprint in the AI infrastructure race.
While the company declined to disclose pricing details or exact chip specifications, it highlighted several performance advantages, including support for 768 gigabytes of memory per card, a capacity that surpasses Nvidia’s and AMD’s current offerings.
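A rough sizing exercise shows why per-card memory matters for inference. The model size and numeric precision below are illustrative assumptions, not Qualcomm benchmarks:

```python
# Rough sizing: memory needed just to hold a large model's weights.
# The 70B-parameter size and FP16 precision are illustrative
# assumptions, not Qualcomm benchmarks.
params = 70e9                  # a 70-billion-parameter model
bytes_per_param = 2            # FP16/BF16: 2 bytes per parameter

weights_gb = params * bytes_per_param / 1e9    # 140 GB of weights
print(f"~{weights_gb:.0f} GB of weights")

# A 768 GB card leaves ample headroom for inference overhead such as
# the KV cache, so a model this size need not be split across chips.
```

Keeping an entire model on one card avoids sharding its weights across accelerators, which simplifies serving and trims inter-chip communication, exactly the kind of efficiency story Qualcomm is telling.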
A New Era of AI Competition
As AI reshapes global industries, the arrival of new competitors like Qualcomm could bring much-needed balance to a market in which Nvidia holds a commanding lead. If the company delivers on its efficiency and cost promises, it could carve out a significant share of the AI inference segment, and perhaps even challenge Nvidia’s broader dominance in the years ahead.
At a time when AI hardware demand is outpacing supply, Qualcomm’s entry signals not just a business opportunity, but a shift in the global AI ecosystem toward greater diversity, competition, and innovation.