New product releases deliver up to 27x faster KV cache loading with GPUDirect Storage for Objects and NVIDIA Dynamo integration—enabling secure, software-defined AI factory operations without all-flash dependence
DDN, the global leader in AI and data intelligence solutions, today announced major new releases across its AI data platform. As AI moves from experimentation into production, data infrastructure has become a direct determinant of revenue and efficiency: token generation, inference serving, and model training deliver strong return on investment only when GPUs are continuously and intelligently fed with data. DDN delivers exactly that: inference acceleration that drives down cost per token, multi-tenant storage operations built for the demands of production AI factories, and DPU-accelerated data services running directly on NVIDIA BlueField-4 hardware.
"AI infrastructure is now a revenue system—every token generated is a return on invested capital. DDN ensures that return is never lost to a data bottleneck." — Sven Oehme, CTO, DDN
DDN Infinia accelerates inference performance and improves AI factory economics by eliminating data bottlenecks at scale:
- Up to 27x faster KV cache loading with a distributed acceleration fabric and deep NVIDIA Dynamo integration
- Sub-millisecond latency that eliminates I/O stalls and maximizes GPU utilization at production scale
- Double-digit reductions in cost per token, materially improving inference economics
- Removes KV cache memory capacity as the critical path for large context windows and agentic AI workloads
- Aligned with NVIDIA Vera Rubin architecture goals, enabling up to 10x lower inference cost per token
- Available on Oracle Cloud Infrastructure via Oracle Cloud Marketplace, supporting production-ready deployment in minutes
“AI factories deliver maximum value when accelerated computing and accelerated data move as one system. DDN’s deep integration with NVIDIA Dynamo and NVIDIA BlueField-4 is providing the performance and security needed to eliminate data bottlenecks and enable multi-tenant AI infrastructure for lower-cost, at-scale inference.” — Jason Hardy, Vice President, Storage Technologies, NVIDIA
DDN EXAScaler transforms shared GPU infrastructure into a secure, API-driven, revenue-ready AI factory platform:
- Comprehensive multi-tenancy architecture purpose-built for production AI factories
- Per-tenant KMIP encryption, quota enforcement, and API-driven VLAN lifecycle management for secure isolation at scale
- Self-service provisioning and full API control, eliminating manual intervention and operational friction
- Instant tenant onboarding, modification, and retirement via API call, removing specialist effort and planned downtime
- Runs on customer-selected standard servers, decoupling performance from all-flash hardware dependency
- Up to 10x more training throughput on infrastructure customers already own and control
DDN's software-defined AI data services are now aligned with the NVIDIA STX reference architecture, and run directly on NVIDIA BlueField-4 data processing units (DPUs). By offloading storage processing and data movement from CPUs and GPUs, DDN delivers direct GPU-to-data paths that reduce latency, lower power consumption, and increase effective GPU utilization across both training and inference workloads—making the most of every watt and every cycle in the data center.
All three solutions will be generally available by summer. Full platform details and technical briefings are available at NVIDIA GTC 2026. To learn more, schedule a private demo, book a meeting, or visit DDN at booth #1621.
About DDN
DDN is the world’s leading AI and data intelligence company, powering the world’s most demanding AI workloads by keeping GPUs fed, efficient, and productive—at massive scale—so organizations can train, checkpoint, and infer faster with less footprint and power while achieving tremendous ROI from their AI investments. From hyperscalers and next-gen cloud builders to enterprises, governments, and research institutions, DDN delivers proven data intelligence at exabyte scale across hundreds of thousands of GPUs—so customers can deploy AI with confidence, accelerate time-to-value, and realize outsized returns. Discover more at ddn.com.
Follow DDN: LinkedIn, X, and YouTube
View source version on businesswire.com: https://www.businesswire.com/news/home/20260316815534/en/
Contacts
DDN Media Contact:
Amanda Lee, VP, Marketing—Analyst & Public Relations
amlee@ddn.com
