🔥 36氪
SK Hynix & SanDisk Launch HBF AI Memory Standard
💡 New HBF memory standard for AI inference; key for efficient scaling
⚡ 30-Second TL;DR
What Changed
HBF standardization alliance launched
Why It Matters
Advances high-bandwidth memory standards critical for scaling AI inference, benefiting data center efficiency.
What To Do Next
Review HBF specs on SK Hynix site for AI hardware integration planning.
Who should care: Developers & AI Engineers
🧠 Deep Insight
Web-grounded analysis with 9 cited sources.
🔑 Enhanced Key Takeaways
- HBF is positioned as a new memory tier between HBM and SSD, offering HBM-comparable bandwidth with 8-16x greater capacity at similar cost for AI inference.[1][2][4]
- SanDisk's HBF leverages its advanced BiCS NAND technology and proprietary CBA wafer bonding; the first-generation prototype features 16-layer chips with up to 512GB per stack.[2][4][5]
- The collaboration includes a dedicated workstream under the Open Compute Project (OCP) and follows an MOU signed ahead of the kick-off event.[1]
- SanDisk formed a Technical Advisory Board of industry experts, with input from AI players and figures such as Raja Koduri, and won 'Best of Show, Most Innovative Technology' at FMS 2025.[2][3][4]
🛠️ Technical Deep Dive
- HBF uses SanDisk's BiCS 3D NAND technology with proprietary CBA (CMOS directly Bonded to Array) wafer bonding to integrate NAND flash into HBM-like packages.[2][4]
- First-generation HBF features 16-layer memory stacks, enabling up to 512GB per stack while matching HBM bandwidth.[5]
- The architecture divides NAND memory arrays into smaller sections that operate in parallel and are stacked vertically, yielding 8-16x HBM's capacity at comparable cost and power efficiency.[5]
- SK hynix's related, recently trademarked 'LPW NAND' increases the number of I/O channels while reducing per-channel speed, lowering power consumption for AI inference.[5]
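The capacity claims above can be sanity-checked with simple arithmetic. The sketch below uses only figures cited in this digest (512GB per 16-layer HBF stack; 8-16x HBM capacity at comparable cost); the implied HBM per-stack range is derived from those ratios, not taken from the sources.

```python
# Back-of-the-envelope check of the HBF vs. HBM capacity claims
# cited in this digest. Assumption: "8-16x capacity" is measured
# per stack, so the implied HBM stack size follows by division.

hbf_stack_gb = 512                  # first-gen HBF, 16-layer stack [5]
ratio_low, ratio_high = 8, 16       # HBF capacity vs. HBM [1][2][4]

# Implied HBM per-stack capacity under the cited 8-16x claim
implied_hbm_low = hbf_stack_gb / ratio_high    # 32 GB
implied_hbm_high = hbf_stack_gb / ratio_low    # 64 GB

print(f"HBF stack: {hbf_stack_gb} GB")
print(f"Implied HBM stack: {implied_hbm_low:.0f}-{implied_hbm_high:.0f} GB")

# At "comparable cost" per stack, cost per GB falls by the same ratio
for r in (ratio_low, ratio_high):
    print(f"At {r}x capacity, cost/GB is ~{1 / r:.1%} of HBM's")
```

The implied 32-64GB HBM stack range is consistent with current-generation HBM products, which lends some plausibility to the cited 8-16x figure.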
🔮 Future Implications
AI analysis grounded in cited sources.
SanDisk targets HBF samples in 2H 2026
HBF standardization under OCP to drive ecosystem adoption
Joint workstream launched under Open Compute Project will define specs and foster industry-wide compatibility for AI infrastructure.[1]
HBF reduces AI data center TCO via capacity and efficiency
Positioned between HBM and SSD, HBF bridges performance-capacity gaps to lower total cost of ownership for inference workloads.[6]
⏳ Timeline
2025-07
SanDisk forms Technical Advisory Board for HBF development.
2025-08
SanDisk unveils HBF prototype at FMS 2025, wins Most Innovative Technology award, signs MOU with SK hynix.
2025-08
SK hynix files 'LPW NAND' trademark related to high-performance inference memory.
2026-02
SK hynix and SanDisk host HBF Spec Standardization Consortium kick-off at SanDisk Milpitas HQ.
📎 Sources (9)
Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.
- barchart.com — SK Hynix and SanDisk Begin Global Standardization of Next-Generation Memory HBF
- nasdaq.com — SanDisk to Collaborate with SK Hynix to Drive Standardization of High Bandwidth Flash Memory
- sandisk.com — SanDisk to Collaborate with SK Hynix to Drive Standardization of High Bandwidth Flash Memory Technology (2025-08-06)
- Tom's Hardware — SanDisk and SK Hynix Join Forces to Standardize High Bandwidth Flash Memory, a NAND-Based Alternative to HBM for AI GPUs; Move Could Enable 8-16x Higher Capacity Compared to DRAM
- trendforce.com — Memory Giants SanDisk and SK Hynix Unite for HBF Standard, with Samples Expected in 2H26
- koreatimes.co.kr — SK Hynix, SanDisk Kick Off Standardization of HBF
- storagenewsletter.com — FMS 2025: SanDisk to Collaborate with SK Hynix
- koreabizwire.com — 344961
- en.gamegpu.com — SanDisk and SK Hynix Create a New High Bandwidth Flash Standard, a Hybrid of NAND and HBM (translated from Russian)
AI-curated news aggregator. All content rights belong to original publishers.
Original source: 36氪