
NVIDIA Tests M10 for AI Racks


💡NVIDIA's M10 supply shift opens China PCB sourcing for AI infra builders

⚡ 30-Second TL;DR

What Changed

Per supply-chain analyst Ming-Chi Kuo (Guo Mingxi), NVIDIA is testing M10 CCL from Shulun Shares.

Why It Matters

Diversifying suppliers reduces NVIDIA's dependence on Taiwan and strengthens China's role in the AI-infrastructure supply chain. The shift to second-generation low-Dk (Low Dk-2) material could also lower costs, aiding scalable AI racks.

What To Do Next

Track Shulun Shares' Q1 2026 M10 samples for AI rack PCB prototyping opportunities.

Who should care: Enterprise & Security Teams

🧠 Deep Insight

Web-grounded analysis with 8 cited sources.

🔑 Enhanced Key Takeaways

  • M10 CCL represents an evolution from M9 materials, incorporating advanced quartz fiber fabric for lower dielectric constant and thermal expansion to support 224 Gb/s signaling in Rubin CPX and LPX midplanes[5][6].
  • Shulun Shares' 52-layer M9 Q-glass PCBs, already in use for LPX inference cabinets, integrate liquid-cooling cold plates and RealScale connectors for 256 LPUs per rack[5].
  • NVIDIA's LPX racks are positioned as inference complements to high-end Rubin GPUs, targeting long-context reasoning and real-time video/speech generation workloads[5].

🛠️ Technical Deep Dive

  • M9/M10 CCL uses quartz fiber fabric for reduced dielectric constant (Dk) and coefficient of thermal expansion (CTE), enabling 20–36+ layer stack-ups with heavy copper routing for SI/PI at 224 Gb/s PAM4 signaling[5][6].
  • 52-layer PCBs in LPX support dense LPU integration (256 per rack), power delivery, high-speed signal routing, blind-mate interconnects, and mounting for liquid-cooling cold plates[5].
  • Material upgrades from M8 to M9 address insertion loss, eye-diagram integrity, and thermal management in NVLink 6 racks achieving 260 TB/s aggregate bandwidth[6].
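The Dk figures above translate directly into timing budgets: trace propagation delay scales with the square root of the effective dielectric constant, and at 224 Gb/s PAM4 the unit interval is only a few picoseconds. A minimal sketch of that relationship (the Dk values are illustrative assumptions, not vendor specifications):

```python
import math

C_MM_PER_PS = 0.299792458  # speed of light in mm per picosecond

def prop_delay_ps_per_mm(dk_eff: float) -> float:
    """One-way stripline propagation delay per mm, from effective Dk."""
    return math.sqrt(dk_eff) / C_MM_PER_PS

# Illustrative Dk values (assumed for comparison, not datasheet numbers):
for name, dk in [("conventional glass", 4.0), ("low-Dk quartz-style", 3.0)]:
    print(f"{name}: {prop_delay_ps_per_mm(dk):.2f} ps/mm")

# 224 Gb/s PAM4 carries 2 bits per symbol => 112 GBd symbol rate:
ui_ps = 1e12 / 112e9  # unit interval in picoseconds (~8.93 ps)
print(f"PAM4 unit interval at 224 Gb/s: {ui_ps:.2f} ps")
```

Lower Dk speeds propagation somewhat, but the larger SI benefit at these rates comes from the reduced dielectric loss that typically accompanies it; the sketch covers only the delay side.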

🔮 Future Implications

AI analysis grounded in cited sources

  • NVIDIA LPX racks will achieve a 4x LPU density increase to 256 units by GTC 2026: enhanced LPX designs with 52-layer M9/M10 PCBs enable denser integration for inference workloads complementing Rubin GPUs[5].
  • Supply-chain diversification to three CCL vendors reduces dependency risk: adding Shulun and others alongside Taiguang mitigates single-supplier vulnerabilities for M10 mass production in 2027[article].
  • M10 materials will be critical for 1.6 Tb/s transmission in next-gen racks: quartz-based M10 CCL supports the higher layer counts and signal integrity needed to scale beyond Rubin-era bandwidths[6].
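The cited bandwidth figures can be sanity-checked against each other: a rough back-of-envelope sketch of how many 224 Gb/s lanes a 260 TB/s aggregate implies (this treats TB/s as bytes per second and ignores duplex-counting and encoding-overhead conventions, which vary by vendor):

```python
AGG_TBPS = 260    # NVLink 6 aggregate bandwidth, terabytes per second[6]
LANE_GBPS = 224   # per-lane PAM4 signaling rate, gigabits per second

agg_bits_per_s = AGG_TBPS * 1e12 * 8          # convert bytes/s to bits/s
lanes = agg_bits_per_s / (LANE_GBPS * 1e9)    # equivalent lane count
print(f"~{lanes:,.0f} lanes of {LANE_GBPS} Gb/s each")  # ~9,286 lanes
```

Lane counts in the thousands per rack are what drive the 20-52-layer stack-ups and heavy-copper routing described above.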

Timeline

2025-12
NVIDIA unveils Vera Rubin platform and rack-scale AI systems at CES 2026 preview[7][8]
2026-01
CES 2026: NVIDIA shifts roadmap to rack-scale systems with Rubin GPUs and LPX inference racks[3][7][8]
2026-03
GTC 2026 outlook details enhanced LPX with 256 LPUs and M9 52-layer PCBs[5]

AI-curated news aggregator. All content rights belong to original publishers.
Original source: IT之家