DGX Station Now Available via OEM

Nvidia DGX Station launches via OEM: the dream AI workstation is now accessible for professionals.
30-Second TL;DR
What Changed
Available via OEM distributors on Nvidia Marketplace
Why It Matters
Enables enterprises and prosumers to acquire powerful Nvidia AI workstations without direct-purchase limits. Boosts local AI training capabilities for teams needing DGX-level compute.
What To Do Next
Check Nvidia Marketplace for DGX Station availability and review specs for your AI workload fit.
Deep Insight
Web-grounded analysis with 8 cited sources.
Enhanced Key Takeaways
- DGX Station is powered by the GB300 Grace Blackwell Ultra Superchip, integrating a 72-core Grace Neoverse V2 CPU and a Blackwell Ultra GPU with 279GB HBM3e GPU memory and 496GB LPDDR5X CPU memory, for 784GB of total coherent memory.[1][6]
- It delivers up to 20 petaFLOPS of AI performance and supports local execution of AI models of up to 1 trillion parameters using FP4 precision.[2][7]
- Features include the NVLink-C2C interconnect at 900 GB/s, a ConnectX-8 SuperNIC for 800Gb/s networking, and Multi-Instance GPU (MIG) support for up to seven isolated instances.[1][3][6]
Competitor Analysis
| Feature | NVIDIA DGX Station | NVIDIA DGX Spark |
|---|---|---|
| Superchip | GB300 Grace Blackwell Ultra (72-core Neoverse V2 CPU + Blackwell Ultra GPU) | GB10 Grace Blackwell (20-core ARM CPU + Blackwell GPU) |
| Coherent Memory | 784GB (279GB HBM3e + 496GB LPDDR5X) | 128GB LPDDR5X |
| AI Performance | Up to 20 petaFLOPS | Up to 1 petaFLOP (sparse FP4) |
| Model Capacity | Up to 1T parameters | Up to 70B parameters (fine-tuning) |
| Networking | ConnectX-8 SuperNIC (800Gb/s) | ConnectX-7 (10GbE, up to 200Gbps) |
| Form Factor | Desktop workstation | NUC-sized (portable, 170W) |
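The model-capacity rows above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a weights-only footprint at FP4 (4 bits, i.e. 0.5 bytes per parameter) and ignoring activations, KV cache, and runtime overhead:

```python
def weight_footprint_gb(params: float, bits_per_param: float) -> float:
    """Approximate weights-only memory footprint in GB."""
    return params * bits_per_param / 8 / 1e9

# DGX Station claim: a 1-trillion-parameter model at FP4
station = weight_footprint_gb(1e12, 4)   # 500.0 GB, under the 784GB coherent memory
# DGX Spark claim: a 70-billion-parameter model at FP4
spark = weight_footprint_gb(70e9, 4)     # 35.0 GB, under the 128GB LPDDR5X
```

Both claims are plausible on weights alone; in practice fine-tuning (the Spark's stated use case) needs substantial extra memory for optimizer state and activations, which is why its ceiling is quoted far below what raw FP4 weight math would suggest.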
Technical Deep Dive
- Superchip: GB300 Grace Blackwell Ultra with one 72-core Grace Neoverse V2 CPU and one NVIDIA Blackwell Ultra GPU.
- Memory: 279GB HBM3e GPU memory at 8 TB/s bandwidth; 496GB LPDDR5X CPU memory at 396 GB/s; 784GB total unified coherent memory.
- Interconnect: NVLink-C2C at 900 GB/s for CPU-GPU communication.
- Networking: integrated NVIDIA ConnectX-8 SuperNIC supporting up to 800Gb/s.
- Software: runs NVIDIA DGX OS; supports NVIDIA AI Enterprise, the CUDA-X AI platform, NIM microservices, and MIG with up to seven isolated instances.
- Additional: fifth-generation Tensor Cores and FP4 precision support for trillion-parameter models.
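The bandwidth figures above also bound local inference speed: single-stream token generation is typically memory-bandwidth-bound, since each generated token must stream the model weights once. A rough ceiling, under the illustrative assumption of a hypothetical 120B-parameter FP4 model resident entirely in the 8 TB/s HBM3e (ignoring KV-cache traffic and any spill to the slower LPDDR5X):

```python
HBM3E_BW_GBPS = 8_000.0  # 8 TB/s GPU memory bandwidth, from the specs above

def decode_ceiling_tps(params: float, bits_per_param: float, bw_gbps: float) -> float:
    """Upper bound on tokens/s when every token streams all weights once."""
    weight_gb = params * bits_per_param / 8 / 1e9
    return bw_gbps / weight_gb

# Hypothetical 120B-parameter model quantized to FP4 (60GB of weights):
ceiling = decode_ceiling_tps(120e9, 4, HBM3E_BW_GBPS)  # ~133 tokens/s upper bound
```

Real throughput falls below this ceiling, and models larger than 279GB must spill into the 396 GB/s CPU memory over NVLink-C2C, lowering the bound further.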
Sources (8)
Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.
- marketplace.uvation.com – NVIDIA DGX Station AI Workstation
- twowintech.com – NVIDIA DGX Spark vs NVIDIA DGX Station: A Comprehensive Comparison
- constellationr.com – Nvidia Launches DGX Spark, DGX Station Personal AI Supercomputers
- NVIDIA – DGX Station Datasheet
- theregister.com – Nvidia DGX Spark Speed
- NVIDIA – DGX Station
- blogs.nvidia.com – DGX Spark and Station Open-Source Frontier Models
- NVIDIA – DGX Spark
AI-curated news aggregator. All content rights belong to original publishers.
Original source: Reddit r/LocalLLaMA