⚛️ 量子位 · collected 70m ago
NVIDIA Delivers First DGX GB300 Personally

💡 First DGX GB300 delivered: NVIDIA's Blackwell Ultra workstation ships to its first customer
⚡ 30-Second TL;DR
What Changed
NVIDIA shipped its first DGX GB300 unit
Why It Matters
Signals start of DGX GB300 shipments, enabling early adopters to deploy cutting-edge AI training hardware ahead of broader availability.
What To Do Next
Inquire with NVIDIA sales about DGX GB300 preorder timelines for your cluster.
Who should care: Enterprise & Security Teams
🧠 Deep Insight
Web-grounded analysis with 5 cited sources.
🔑 Enhanced Key Takeaways
- DGX GB300 is a desktop AI supercomputer powered by the NVIDIA Grace Blackwell Ultra Superchip, integrating a Grace CPU with 72 Arm Neoverse V2 cores and a Blackwell Ultra GPU.[1][4]
- The system provides 748GB of unified coherent memory (252GB HBM3e GPU + 496GB LPDDR5X CPU) and up to 20 petaFLOPS of AI performance in a workstation form factor.[3][4][5]
- The NVIDIA DGX Station with GB300 was unveiled at GTC 2025 and is now available to order from partners including Asus, Dell, Gigabyte, MSI, Supermicro, and HP, with shipping starting soon.[3]
🛠️ Technical Deep Dive
- Powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip: 1x Grace CPU (72 Arm Neoverse V2 cores, 496GB LPDDR5X at up to 396 GB/s bandwidth) and 1x Blackwell Ultra GPU (252GB HBM3e at 8 TB/s).[1][3][5]
- Performance: Up to 20 petaFLOPS of AI compute; supports FP4/FP8 Tensor Cores; NVLink-C2C interconnect for high-bandwidth CPU-GPU communication.[1][4]
- Networking: Integrated ConnectX-8 SuperNIC (up to 800 Gb/s via 2x QSFP112 ports); 3x PCIe Gen5 x16 slots for additional GPUs such as the RTX Pro series.[3]
- Features: NVIDIA DGX OS, AI Enterprise software, Multi-Instance GPU (MIG) with up to 7 isolated instances, 1600W power draw, and an advanced liquid-cooling option in rack-scale variants.[1][3][4]
- Scalability: Supports linking two DGX Stations for expanded model capacity; suited for local AI training, inference, Physical AI, and multi-user workloads.[4]
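To put the quoted specs in perspective, here is a hedged back-of-envelope sketch (not an official NVIDIA figure) of how the 8 TB/s HBM3e bandwidth bounds LLM decode throughput, assuming a bandwidth-bound decode where each generated token streams the full weight set once; the 200B-parameter model size is an illustrative assumption, not from the sources.

```python
# Back-of-envelope sketch: memory-bandwidth-bound decode throughput
# on the quoted DGX Station GB300 specs (8 TB/s HBM3e, FP4 weights).
# The model size below is a hypothetical example, not a cited figure.

def decode_tokens_per_sec(params_b: float, bits_per_weight: int, hbm_tb_s: float) -> float:
    """Assume each generated token streams all weights once (bandwidth-bound)."""
    bytes_per_token = params_b * 1e9 * bits_per_weight / 8
    return hbm_tb_s * 1e12 / bytes_per_token

# A hypothetical 200B-parameter model in FP4 (4 bits/weight) against 8 TB/s:
print(round(decode_tokens_per_sec(200, 4, 8.0)))  # -> 80
```

Real throughput depends on KV-cache traffic, batching, and kernel efficiency, so this is an upper bound on single-stream decode, not a benchmark.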
🔮 Future Implications
AI analysis grounded in cited sources
DGX GB300 enables desktop-scale training of trillion-parameter AI models
Its 748GB coherent memory and 20 petaFLOPS compute allow ingestion of massive datasets directly into memory, accelerating local development of large LLMs and Physical AI agents without data center dependency.[4]
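The memory claim above can be sanity-checked with simple arithmetic. A minimal sketch, assuming weights-only inference at FP4 precision (4 bits per parameter); training would additionally need gradients and optimizer state, which this does not model:

```python
# Hedged sanity check: do the weights of a trillion-parameter model
# (FP4, weights only) fit in the quoted 748GB of coherent memory?

def weight_footprint_gb(params: float, bits_per_weight: int) -> float:
    """Weight storage in decimal gigabytes."""
    return params * bits_per_weight / 8 / 1e9

one_t_fp4 = weight_footprint_gb(1e12, 4)
print(one_t_fp4, one_t_fp4 < 748)  # -> 500.0 True
```

So a 1T-parameter model's FP4 weights occupy roughly 500GB, leaving headroom for activations and KV cache within the 748GB pool.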
GB300 platforms boost AI factory throughput by 50x over Hopper generation
Rack-scale GB300 NVL72 delivers 10x user responsiveness and 5x throughput per megawatt via larger HBM3e memory and FP4 Tensor Cores, scaling desktop benefits to enterprise AI infrastructure.[2]
⏳ Timeline
2025-03
NVIDIA unveils DGX Station with GB300 Grace Blackwell Superchip at GTC 2025
2026-03
NVIDIA launches DGX GB300 for order; shipping begins via OEM partners
2026-03
Jensen Huang personally delivers first DGX GB300 to Kapasi
📎 Sources (5)
Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.
1. marketplace.uvation.com — NVIDIA DGX Station AI workstation
2. NVIDIA — GB300 NVL72
3. Tom's Hardware — Nvidia launches DGX Station with its bleeding-edge GB300 Grace Blackwell Superchip, now available to order and shipping in the coming months
4. NVIDIA — DGX Station
5. guru3d.com — NVIDIA DGX Station GB300 Superchip specifications and 748GB unified memory
AI-curated news aggregator. All content rights belong to original publishers.
Original source: 量子位 ↗