🇬🇧 The Register - AI/ML • collected in 32m
Tenstorrent Launches Galaxy Blackhole AI Servers

💡 RISC-V AI servers pack 32 accelerators in a $110K 6U chassis: an Nvidia alternative?
⚡ 30-Second TL;DR
What Changed
General availability of Galaxy Blackhole AI servers announced
Why It Matters
Offers a high-density AI compute alternative built on the open RISC-V architecture, potentially lowering costs and vendor lock-in for AI compute clusters.
What To Do Next
Contact Tenstorrent sales for Galaxy Blackhole pricing and demo.
Who should care: Enterprise & Security Teams
🧠 Deep Insight
AI-generated analysis for this event.
🔑 Enhanced Key Takeaways
- The Galaxy system uses Tenstorrent's proprietary 'Blackhole' chip, which integrates RISC-V CPU cores directly onto the AI accelerator die, enabling a unified memory architecture that reduces data-movement overhead.
- The platform is optimized for large-scale inference and fine-tuning of LLMs, using Tenstorrent's 'TT-Mesh' interconnect to scale across multiple chassis without significant latency penalties.
- Tenstorrent positions Galaxy as a cost-effective alternative to proprietary GPU-based clusters by focusing on high-density, power-efficient RISC-V compute rather than raw FP64 performance.
📊 Competitor Analysis
| Feature | Tenstorrent Galaxy | NVIDIA HGX H200 | AMD Instinct MI300X |
|---|---|---|---|
| Architecture | RISC-V + AI Accelerator | Hopper GPU | CDNA 3 GPU |
| Interconnect | TT-Mesh | NVLink | Infinity Fabric |
| Target Market | Inference/Fine-tuning | Training/Inference | Training/Inference |
| Price Point | ~$110K (32 chips) | Significantly Higher | High (System dependent) |
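The table's price figure implies a rough per-accelerator cost, which is worth a quick sanity check against GPU list prices (the figures below are only the ones reported in the article):

```python
# Per-accelerator cost implied by the reported list price:
# ~$110K for a 6U chassis holding 32 Blackhole chips.
chassis_price_usd = 110_000
accelerators_per_chassis = 32

per_accelerator_usd = chassis_price_usd / accelerators_per_chassis
print(f"~${per_accelerator_usd:,.0f} per accelerator")  # ~$3,438
```

At roughly $3.4K per chip at the system level, the comparison the table draws against "significantly higher" GPU-based systems is about price density, not per-device peak performance.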
🛠️ Technical Deep Dive
- Blackhole SoC: a heterogeneous design combining Tenstorrent's proprietary Tensix cores for matrix math with high-performance RISC-V cores for general-purpose compute.
- Memory architecture: high-bandwidth memory (HBM3e) integrated on-package to maximize throughput for memory-bound AI workloads.
- Scalability: the 6U chassis uses a custom backplane for low-latency communication among the 32 Blackhole accelerators, with multi-node scaling over standard Ethernet or proprietary high-speed links.
- Software stack: supported by Tenstorrent's 'Buda' and 'Metal' stacks, which provide compilers and runtime environments for PyTorch and ONNX models.
🔮 Future Implications
AI analysis grounded in cited sources.
- Tenstorrent could gain meaningful share in the inference-as-a-service market: the lower price point and RISC-V flexibility would let cloud providers offer more competitive pricing for inference workloads than NVIDIA-based instances.
- The Galaxy platform may accelerate a shift toward heterogeneous AI compute architectures: by proving RISC-V-based AI accelerators viable at scale, Tenstorrent validates the industry trend away from monolithic GPU reliance.
⏳ Timeline
2016-05
Tenstorrent founded by Ljubisa Bajic to develop AI hardware.
2021-01
Jim Keller joins Tenstorrent as CTO, accelerating RISC-V integration.
2023-08
Tenstorrent raises $100M in strategic funding to accelerate product roadmap.
2024-05
Tenstorrent announces the Blackhole AI processor architecture.
2026-04
General availability of Galaxy Blackhole AI servers.
AI-curated news aggregator. All content rights belong to original publishers.
Original source: The Register - AI/ML →

