Modernizing US Government Systems by Scaling AI and HPC

To maintain our nation’s competitive edge and keep pace with the innovation coming out of Silicon Valley, the U.S. government faces an urgent need to modernize legacy infrastructure to meet the challenges of a data-driven, virtual future. Artificial intelligence (AI) and high-performance computing (HPC) are essential tools for national security and for delivering efficient public services. To realize their full potential, government agencies must address barriers in scalability, efficiency, and procurement strategy.
Automation and Simulation with AI and HPC
AI is transforming how agencies operate, enabling them to automate slow, manual tasks such as passport and visa processing, financial fraud detection, and citizen service triage, and it is proving valuable across a rapidly growing range of applications. Similarly, HPC allows costly, complex, and often risky real-world scenarios, such as climate modeling, energy production, and national defense, to be simulated entirely within the digital domain. Digital simulation is orders of magnitude faster and cheaper than building and testing physical systems, and it allows a vastly broader set of scenarios, operating conditions, and design parameters to be explored than could ever be achieved manually. These technologies not only enhance national capabilities but also deliver measurable savings and better outcomes for the American taxpayer.
High-Performance Networking for AI and HPC at Scale
For these computing systems to scale effectively, they require a network that delivers high bandwidth and low latency and that can grow to tens of thousands of compute nodes without dropping data or becoming congested. A high-performance network purpose-built for AI and HPC systems is therefore critical.
Built on decades of innovation, Cornelis Networks has been a trusted technology partner to the Department of Energy and Department of Defense since our company’s founding. Deployments of our Omni-Path Architecture in the U.S. government’s advanced computing centers have proven it to be the lowest-latency, best-performing network at scale for critical applications in defense and energy. Our CN5000 network solution represents a leap forward in scalable, open, high-performance networking:
Up to 2× higher message rates and 35% lower latency [1] than current 400 Gbps InfiniBand NDR, leading to up to 45% higher throughput [2] on HPC workloads (see the measurement sketch after this list)
6× faster AI communication collectives versus RoCE-based networks, speeding up model training [3]
Truly lossless with sub-microsecond latency
Scalability to more than 500,000 nodes, with seamless integration across CPUs, GPUs, operating systems, and storage systems
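For context on how figures like these are produced: point-to-point latency and message rate are typically measured with MPI microbenchmarks such as the Intel MPI Benchmarks cited in the configuration details below. The following is a minimal, illustrative ping-pong sketch in C; it is not the benchmark code behind the published numbers, and the message size and iteration counts are assumptions.

/*
 * Minimal MPI ping-pong sketch (illustrative only): rank 0 and rank 1
 * bounce a small message back and forth; half the round-trip time
 * approximates one-way latency, and its inverse bounds the single-pair
 * message rate. Message size and iteration counts are assumptions, not
 * the Intel MPI Benchmarks settings cited in the configuration details.
 */
#include <mpi.h>
#include <stdio.h>

#define MSG_BYTES 8
#define WARMUP    1000
#define ITERS     10000

static void round_trip(char *buf, int rank, int peer)
{
    if (rank == 0) {
        MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
    }
}

int main(int argc, char **argv)
{
    char buf[MSG_BYTES] = {0};
    int rank, nranks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    if (nranks < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with 2 ranks, one per node.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank < 2) {
        int peer = 1 - rank;

        /* Warm up so connection setup is excluded from the timing. */
        for (int i = 0; i < WARMUP; i++)
            round_trip(buf, rank, peer);

        /* Timed ping-pong loop. */
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++)
            round_trip(buf, rank, peer);
        double elapsed = MPI_Wtime() - t0;

        if (rank == 0)
            printf("one-way latency: %.2f us, single-pair rate: %.0f msg/s\n",
                   elapsed / (2.0 * ITERS) * 1e6, (2.0 * ITERS) / elapsed);
    }

    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper compiler and launched with two ranks on separate nodes, half of the measured round-trip time approximates one-way latency; production benchmarks sweep message sizes and run many concurrent pairs to measure aggregate message rate.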
Multi-Vendor, Open Ecosystems to Maintain Leadership
To foster sustainable innovation and continued technological leadership, U.S. government agencies should actively support open standards and multi-vendor ecosystems that encourage competition based on performance, price, and scalability.
Open, competitive bidding in HPC and AI infrastructure RFPs will:
Ensure the best-performing solutions are deployed
Improve efficiency, uptime, and innovation through diversity of supply
Support an ecosystem of vendors who drive cost and performance competition
Lower total cost of ownership (TCO) across deployments
Empower underrepresented and emerging vendors to compete
Accelerate adoption of cutting-edge U.S.-built technologies
These procurement strategies align with national objectives to enhance U.S. competitiveness in energy, defense, and intelligence while strengthening domestic R&D and job creation. Cornelis Networks offers multi-vendor network solutions that scale from small to massive installations and are compatible with all major CPUs, GPUs, accelerators, server vendors, and software frameworks, giving customers vendor diversity, improved cost competition, and technological agility.
From Concept to Deployment: CN5000 Omni-Path is Here Now
Large-scale CN5000 installations are underway today in government labs. Our end-to-end high-performance networking portfolio includes SuperNICs, 48-port switches, 576-port Director-Class switches, and network management software that makes deployment quick and easy. We are also helping shape future Ultra Ethernet standards, ensuring interoperability and a smooth path forward for agencies transitioning from proprietary platforms. Cornelis partners with system integrators and all major OEMs and invites federal agencies to include us in RFQs to benchmark performance and price against competitors.
The Bottom Line is Scalable, Sustainable Innovation in the U.S.A.
Scaling AI and HPC is not just a technical challenge—it’s a policy and procurement challenge. The U.S. government has an opportunity to lead by example: adopting high-performance, efficient, scalable, and interoperable networking infrastructure that supports its mission-critical workloads today while empowering innovation for tomorrow.
By embracing open competition and investing in domestic technologies, the federal government can build systems that are faster and more cost-effective while ensuring the U.S. remains at the forefront of AI and HPC.
If you’d like to learn more, contact us at sales@cornelisnetworks.com
Performance Configuration Details
1. 2× greater message rate and 35% lower latency: Intel MPI 2021.15, Intel(R) MPI Benchmarks 2021.9. Tests performed on 2-socket AMD EPYC 9334 32-Core Processor. Turbo enabled with acpi-cpufreq driver. Rocky Linux 9.5 (Blue Onyx), 5.14.0-503.14.1.el9_5.x86_64 kernel. 24x16GB, 384 GB total, memory speed 4800 MT/s. Cornelis OPX 12.0.0.0.17. NVIDIA NDR InfiniBand: Mellanox Technologies MT2910 Family [ConnectX-7], MQM9700-NS2F Quantum 2 switch, 2M passive copper cables, UCX as packaged in hpcx-v2.23.
2. Up to 45% faster throughput on HPC workloads (Ansys Fluent): Tests performed on 2-socket AMD EPYC 9755 engineering sample. Turbo enabled with acpi-cpufreq driver. Rocky Linux 9.5 (Blue Onyx), 5.14.0-503.14.1.el9_5.x86_64 kernel. 24x32GB, 768 GB total, memory speed 5600 MT/s. Cornelis OPX 12.0.0.0.17, Open MPI 5.0.7. NVIDIA NDR InfiniBand: Mellanox Technologies MT2910 Family [ConnectX-7], MQM9700-NS2F Quantum 2 switch, 2M passive copper cables, UCX and Open MPI 4.1.7rc1 as packaged in hpcx-v2.23.
3. 6× faster AI communication collectives versus RoCE: Cornelis simulations of all-to-all traffic on the industry-standard simulation framework (SST), modeling CN5000 with credit-based flow control and standard 400G Ethernet with priority flow control.
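For context, the all-to-all traffic pattern referenced in note 3 corresponds to collectives such as MPI_Alltoall, in which every rank sends a distinct block of data to every other rank, stressing fabric bisection bandwidth and congestion control. The sketch below is a minimal illustration of that pattern, not the simulated configuration; the per-peer block size is an assumption.

/*
 * Minimal MPI_Alltoall sketch illustrating the all-to-all traffic
 * pattern referenced in note 3: every rank sends a distinct block
 * to every other rank. Block size is an assumption chosen for
 * illustration, not the simulated configuration.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int block_ints = 1024;   /* assumed per-peer block size */
    int rank, nranks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    int *sendbuf = malloc((size_t)nranks * block_ints * sizeof *sendbuf);
    int *recvbuf = malloc((size_t)nranks * block_ints * sizeof *recvbuf);
    for (int i = 0; i < nranks * block_ints; i++)
        sendbuf[i] = rank;          /* tag each block with the sender */

    double t0 = MPI_Wtime();
    /* Every rank exchanges one block with every other rank. */
    MPI_Alltoall(sendbuf, block_ints, MPI_INT,
                 recvbuf, block_ints, MPI_INT, MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("all-to-all across %d ranks took %.3f ms\n",
               nranks, elapsed * 1e3);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

At scale, the completion time of this exchange is dominated by how well the fabric handles many simultaneous flows, which is what the flow-control comparison in note 3 models.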