Qualcomm is expanding beyond its mobile roots with a new line of data center hardware designed for AI inference. The San Diego-based company introduced its AI200 and AI250 accelerator chips on Monday, positioning itself to compete with Nvidia and AMD in the AI hardware market. According to Qualcomm, the new chips will deliver enhanced memory capacity, rack-scale configurations, and compatibility with major AI frameworks.
Qualcomm says its AI200 accelerator supports 768 GB of LPDDR memory per card and is optimized for AI inference across large language models, multimodal systems, and related workloads. The AI200 is offered both as an accelerator card and as a full rack-level system for running generative AI applications. The AI250 features a near-memory computing design that reduces data movement and power draw while supporting disaggregated AI inference, the company said. Both rack systems use direct liquid cooling and have a rated power draw of 160 kW. They also support PCIe and Ethernet connectivity and include confidential computing features to secure sensitive workloads.
The announcement marks a significant push by Qualcomm into the data center accelerator market, which is currently dominated by Nvidia’s GPUs and, more recently, AMD’s MI300 series. Qualcomm’s core business has historically centered on wireless and mobile processors, including its Snapdragon systems-on-chip for smartphones. But as the company faces slowing handset demand, it has pushed into other sectors such as automotive, personal computing, and now large-scale AI infrastructure. So far, its data center strategy appears focused on energy efficiency rather than raw throughput alone.
This could be a smart strategy for Qualcomm, as inference workloads are reshaping data center design around power and memory constraints. With large models demanding greater throughput and efficiency, vendors are increasingly developing inference-optimized systems for production-scale AI. Qualcomm’s near-memory and liquid-cooled designs address those same pressures on performance and energy use.
In May, Qualcomm announced a partnership with state-backed Saudi AI firm Humain to supply AI chips for its data centers in the region. Humain has committed to deploying up to 200 megawatts of Qualcomm AI systems starting in 2026, on top of the 18,000 Blackwell GPUs Nvidia has promised the firm and a separate $10 billion collaboration with AMD. The agreement could give Qualcomm a foothold in a region investing heavily in sovereign AI infrastructure, along with a large-scale deployment to demonstrate its new hardware in action.
The AI200 and AI250 will roll out on a staggered timeline: the AI200 is expected to become commercially available in 2026, with the AI250 following in 2027. Qualcomm has not disclosed detailed performance specs such as throughput or process node, though it says the chips will support standard AI frameworks and tools. The fabless chip designer also has not said which foundry (or foundries) will produce the chips, leaving questions about manufacturing scale and lead times.
Qualcomm’s recent acquisitions also point to its growing investment in AI and data center technology. This year, the company bought Movian AI, a generative AI unit of Vietnam’s VinAI; Autotalks, an Israeli automotive communications chipmaker; Alphawave IP Group, a British semiconductor company focused on data center connectivity; and Arduino, an Italian maker of open-source microcontroller platforms. While Qualcomm has not linked these acquisitions directly to the AI200 or AI250, they suggest the company is positioning itself for a larger role in AI and data center infrastructure.
Qualcomm’s announcement comes amid intense competition to power AI workloads in the data center and HPC sectors. While Nvidia holds the lion’s share of the market (with some estimates as high as 94%), established players such as AMD and Intel, along with startups like NextSilicon, are stepping up to provide alternatives. Qualcomm now joins this crowded field of vendors aiming to improve energy efficiency as AI workloads expand.
Qualcomm’s ability to scale its low-power expertise into the data center will determine whether its entry into AI infrastructure is a niche experiment or a lasting presence in the market. Investors seem confident: shares of the company rose more than 11% on Monday after climbing as much as 22% earlier in the day. Qualcomm has the technical pedigree and capital to compete, but execution will depend on how well its hardware performs against rival AI platforms and how quickly it can scale. The next few years will test whether the firm’s efficiency-driven approach can compete in an industry controlled by established chipmakers.


