
SK Hynix & SanDisk Launch HBF AI Memory Standard


💡New HBF memory standard for AI inference; key for efficient scaling

⚡ 30-Second TL;DR

What Changed

HBF standardization alliance launched

Why It Matters

Advances high-bandwidth memory standards critical for scaling AI inference, benefiting data center efficiency.

What To Do Next

Review HBF specs on SK Hynix site for AI hardware integration planning.

Who should care: Developers & AI Engineers

🧠 Deep Insight

Web-grounded analysis with 9 cited sources.

🔑 Enhanced Key Takeaways

  • HBF is positioned as a new memory layer between HBM and SSD, offering HBM-comparable bandwidth with 8-16x greater capacity at similar cost for AI inference.[1][2][4]
  • SanDisk's HBF leverages advanced BiCS NAND technology and proprietary CBA wafer bonding, with a first-generation prototype featuring 16-layer chips up to 512GB per stack.[2][4][5]
  • The collaboration includes a dedicated workstream under the Open Compute Project (OCP) and follows an MOU signed prior to the kick-off event.[1]
  • SanDisk formed a Technical Advisory Board with industry experts, including input from AI players and figures like Raja Koduri, and won 'Best of Show, Most Innovative Technology' at FMS 2025.[2][3][4]

🛠️ Technical Deep Dive

  • HBF uses SanDisk’s BiCS 3D-NAND technology with proprietary CBA (CMOS directly Bonded to Array) wafer bonding to integrate NAND flash into HBM-like packages.[2][4]
  • First-generation HBF features 16-layer memory chips, enabling up to 512GB capacity per stack while matching HBM bandwidth.[5]
  • Architecture breaks NAND memory arrays into smaller sections operating simultaneously, stacked vertically for higher capacity (8-16x HBM) at comparable cost and power efficiency.[5]
  • SK hynix's related 'LPW NAND' trademark points to a design that increases the number of I/O channels while reducing per-channel speed, lowering power consumption for AI inference.[5]
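To make the capacity figures above concrete, here is a hedged back-of-envelope sketch of on-package memory for a hypothetical accelerator that pairs HBM with HBF. The 512 GB-per-stack figure comes from the article; the stack counts and the HBM stack capacity are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope sketch: total on-package memory when HBF supplements HBM.
# HBF_GB_PER_STACK is the cited first-gen figure (16-layer chips, up to 512 GB
# per stack); HBM_GB_PER_STACK and the stack counts are illustrative guesses.

HBF_GB_PER_STACK = 512   # first-generation HBF, per the article
HBM_GB_PER_STACK = 36    # assumed HBM3E-class stack capacity (hypothetical)

def package_memory_gb(hbm_stacks: int, hbf_stacks: int) -> dict:
    """Estimate total on-package memory for a hypothetical accelerator."""
    hbm = hbm_stacks * HBM_GB_PER_STACK
    hbf = hbf_stacks * HBF_GB_PER_STACK
    return {"hbm_gb": hbm, "hbf_gb": hbf, "total_gb": hbm + hbf}

mem = package_memory_gb(hbm_stacks=8, hbf_stacks=8)
print(mem)  # {'hbm_gb': 288, 'hbf_gb': 4096, 'total_gb': 4384}
```

Even under these rough assumptions, a handful of HBF stacks would push per-package capacity into the terabyte range, which is the scaling argument the article makes for inference workloads.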

🔮 Future Implications
AI analysis grounded in cited sources

SanDisk targets HBF samples in 2H 2026
Press releases confirm first HBF memory samples planned for second half of 2026, with AI-inference devices in early 2027.[2][3][7]
HBF standardization under OCP to drive ecosystem adoption
Joint workstream launched under Open Compute Project will define specs and foster industry-wide compatibility for AI infrastructure.[1]
HBF reduces AI data center TCO via capacity and efficiency
Positioned between HBM and SSD, HBF bridges performance-capacity gaps to lower total cost of ownership for inference workloads.[6]
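The TCO claim above can be restated as simple arithmetic: if HBF delivers 8-16x HBM's capacity at comparable per-stack cost, capacity per unit cost improves by roughly the same factor. The sketch below uses normalized placeholder prices (not real quotes) and only the capacity multiple cited in the article.

```python
# Hedged sketch of the TCO argument: capacity-per-cost gain of HBF over HBM,
# assuming "similar cost" per stack as the article states. Costs are
# normalized placeholders, not actual component prices.

HBM_COST_PER_STACK = 1.0     # normalized unit cost (placeholder)
HBF_COST_PER_STACK = 1.0     # "similar cost" per the article (placeholder)
CAPACITY_MULTIPLE = (8, 16)  # HBF capacity relative to HBM (cited range)

def capacity_per_cost_gain(multiple: float) -> float:
    """Relative GB-per-unit-cost of HBF vs HBM under the stated assumptions."""
    hbm_value = 1.0 / HBM_COST_PER_STACK
    hbf_value = multiple / HBF_COST_PER_STACK
    return hbf_value / hbm_value

low, high = (capacity_per_cost_gain(m) for m in CAPACITY_MULTIPLE)
print(f"Capacity-per-cost gain: {low:.0f}x to {high:.0f}x")  # 8x to 16x
```

The real-world gain would shift with actual pricing and power figures, which the cited sources do not yet provide.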

Timeline

2025-07
SanDisk forms Technical Advisory Board for HBF development.
2025-08
SanDisk unveils HBF prototype at FMS 2025, wins Most Innovative Technology award, signs MOU with SK hynix.
2025-08
SK hynix files 'LPW NAND' trademark related to high-performance inference memory.
2026-02
SK hynix and SanDisk host HBF Spec Standardization Consortium kick-off at SanDisk Milpitas HQ.


AI-curated news aggregator. All content rights belong to original publishers.
Original source: 36氪