Tuesday, at its GTC event in Washington, D.C., Nvidia announced it is working with the U.S. Department of Energy’s national labs to build seven new supercomputers. Five of these systems will be hosted by Argonne National Laboratory, and two will be hosted by Los Alamos National Laboratory.
In his GTC keynote, Nvidia CEO Jensen Huang said, “Computing is the fundamental instrument of science, and we are going through several platform shifts,” adding that “every future supercomputer will be a GPU-based supercomputer.”
The largest of the new systems will be Argonne’s Solstice supercomputer. Solstice will incorporate 100,000 Nvidia Blackwell GPUs, or more than twice the number of accelerators in current Top500 leader El Capitan, making the new system one of the largest GPU-based supercomputers ever built for scientific research. Solstice will be built using the DOE’s new public-private partnership model, Nvidia said. This model is designed to accelerate deployment of large-scale AI systems by aligning DOE research goals with private sector capabilities, bringing in industry co-investment and supporting collaborative research projects on the system.
A smaller system called Equinox will include 10,000 Nvidia Blackwell GPUs and is expected to arrive in 2026. Both Solstice and Equinox will be housed at Argonne, interconnected by Nvidia networking, and will deliver a combined 2,200 exaflops of AI performance, according to Nvidia.
As part of the collaboration, Oracle will serve as a key industry partner on the Argonne systems. The company will also provide the Department of Energy with immediate access to AI computing resources based on Nvidia’s Hopper and Blackwell architectures. These resources will be available to scientists at Argonne and other research institutions nationwide, supporting work in AI for science and energy applications.
The Solstice and Equinox supercomputers will be used to train and deploy autonomous AI research agents designed to accelerate scientific discovery across DOE programs. Argonne director Paul Kearns said the systems will support a wide range of AI-driven scientific workflows and connect to DOE experimental facilities such as the Advanced Photon Source, enabling researchers to apply AI methods directly to experimental data and address complex national research challenges.
Three additional Nvidia-based systems at Argonne were also announced: Tara, Minerva, and Janus. Built with support from Nvidia, Hewlett Packard Enterprise, and World Wide Technology, these systems will be tailored to accelerate AI inference and workforce development, Argonne said. Tara will be an AI inference system designed as an integrated AI-HPC environment, meant to convert exascale computation and AI advances into scientific breakthroughs. Minerva, developed with Nvidia and WWT, will also be designed for scientific inference workloads. Janus, developed with HPE and Nvidia, will serve as a workforce training and research system to help DOE scientists and students gain practical experience with large-scale AI and HPC workloads.
Rick Stevens, Argonne’s associate laboratory director for Computing, Environment and Life Sciences, said that modern science now depends not only on powerful computers but also on advanced AI capabilities: “Inference allows us to streamline how we test hypotheses, design experiments, and gain insights from large, complex datasets,” he said.
Collectively, the five new AI systems at Argonne are expected to shorten the path from concept to discovery by giving researchers faster access to HPC and AI tools, Argonne said, noting that the effort combines DOE’s scientific expertise with advanced technologies from industry partners to expand the capabilities of the national research infrastructure.
The new HPC resources align with the Trillion Parameter Consortium initiatives outlined at TPC25, according to Charles Catlett, the TPC’s executive director.
“One of the new TPC collaborative initiatives that has energized the community is to collectively design and build an Open Frontier Model, harnessing insights and expertise from leaders in the U.S., Japan, and Europe,” said Catlett, who is also a senior computer scientist at Argonne. “This partnership will be instrumental in enabling the kind of scale we need.”
While Argonne’s new systems will focus on expanding open science and AI-driven discovery, Los Alamos National Laboratory will deploy two new systems built by HPE in collaboration with Nvidia. The Mission and Vision supercomputers will extend the lab’s leadership in modeling and simulation, incorporating AI capabilities for scientific and national security research.
Mission will be the fifth Advanced Technology System (ATS) within the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing program. When it becomes operational in 2027, Mission will run exclusively in the classified computing environment, supporting the modeling and simulation work that provides the foundation for U.S. nuclear security, LANL said in a release.
The system will replace the current Crossroads supercomputer and introduce a new level of concurrency, allowing multiple large-scale simulations to run simultaneously. Los Alamos described Mission as the first NNSA system designed for the post-exascale era, combining traditional high-fidelity simulations with AI-assisted methods to improve accuracy and speed.
“Mission and Vision represent a significant investment in our national security science and basic science capabilities,” said Los Alamos Director Thom Mason. “These systems are purpose-built for supercomputing in the AI era.”
The companion system, Vision, will serve the unclassified side of the lab and is designed for open scientific research. It will build on the success of Venado, the HPE-Nvidia system installed at Los Alamos in 2024, and will use the same architectural foundation as Mission. Vision will support projects in materials and nuclear science, energy modeling, and biomedical research, while also advancing AI development for scientific applications.
Both Mission and Vision will be based on the new HPE Cray Supercomputing GX5000 platform, featuring Nvidia’s Vera Rubin architecture. The design integrates Nvidia’s Vera CPUs and Rubin GPUs, interconnected with the company’s Quantum-X800 InfiniBand network and cooled by a direct liquid system. Los Alamos worked with HPE and Nvidia through a co-design process that began with the Venado project. The approach aligns system architecture with the scientific workloads it will serve, drawing on expertise across hardware design, software optimization, and domain science.
“Mission and Vision deepen our longstanding partnership and joint innovation with Los Alamos,” said Trish Damkroger, SVP and GM of HPE’s HPC & AI Infrastructure Solutions group. “We are proud to expand our partnership and deliver some of the first systems featuring the new HPE Cray GX supercomputing architecture that will support scientific discovery and help solve important challenges.”
Together, Mission and Vision are a step forward in Los Alamos’ strategy to integrate AI into large-scale simulation workflows. “Los Alamos has long advanced the frontier of scientific discovery,” said Ian Buck, Nvidia’s VP of hyperscale and HPC, adding that the new systems will combine accelerated computing and AI to push the boundaries of simulation and generative intelligence for national research needs.
With seven new systems announced across two national labs, this week marks one of the most significant expansions of DOE computing in years. The new Argonne and Los Alamos systems also follow this week’s announcement of Discovery and Lux at Oak Ridge National Laboratory, demonstrating how AI-driven architectures are reshaping DOE computing across both open science and national security domains. Though more detailed performance and configuration information on these new systems has not yet been released, AIwire will be watching closely as more technical specifications emerge in the months ahead.
This article first appeared on HPCwire.