๐ผVentureBeatโขFreshcollected in 26m
Poolside Launches Free Open Laguna XS.2

๐กFree US open 33B MoE crushes rivals for local GPU coding agents (Hugging Face now)
โก 30-Second TL;DR
What Changed
Laguna XS.2: 33B MoE (3B active), Apache 2.0, runs on desktop/laptop GPUs offline.
Why It Matters
Enables private, efficient local AI coding agents, challenging costly proprietary models and Chinese open alternatives. Boosts US open-source AI innovation for developers and enterprises.
What To Do Next
Download Laguna XS.2 from Hugging Face and test local agentic coding on your GPU.
Who should care:Developers & AI Engineers
๐ง Deep Insight
AI-generated analysis for this event.
๐ Enhanced Key Takeaways
- โขPoolside's training methodology utilized a proprietary 'Code-First' curriculum that emphasizes long-context reasoning over standard instruction tuning, specifically targeting complex repository-level refactoring tasks.
- โขThe 'pool' harness introduces a novel 'speculative execution' layer that allows the model to simulate code changes in a sandboxed environment before committing them to the user's local workspace.
- โขThe Laguna M.1 model utilizes a specialized 'sparse-attention' mechanism designed to reduce KV-cache memory overhead by 40% compared to standard MoE architectures, enabling larger context windows on enterprise hardware.
๐ Competitor Analysisโธ Show
| Feature | Laguna XS.2 | DeepSeek-V3 | Qwen2.5-Coder-32B |
|---|---|---|---|
| Architecture | 33B MoE (3B active) | 671B MoE (37B active) | Dense 32B |
| License | Apache 2.0 | MIT | Apache 2.0 |
| Primary Use | Local Agentic Coding | General Purpose/Coding | General Purpose/Coding |
| Hardware Req | Consumer GPU (12GB+ VRAM) | Enterprise Cluster | High-end Consumer GPU |
๐ ๏ธ Technical Deep Dive
- โขArchitecture: Mixture-of-Experts (MoE) with top-2 expert routing per token.
- โขTraining Data: Proprietary dataset consisting of 15 trillion tokens of high-quality, curated code repositories and technical documentation.
- โขContext Window: Native 128k token support for both XS.2 and M.1 models.
- โขQuantization: XS.2 is natively compatible with GGUF and EXL2 formats for 4-bit and 8-bit inference on consumer hardware.
- โขShimmer IDE: Built on a WASM-based runtime that allows the model to execute Python and JavaScript snippets directly within the browser environment.
๐ฎ Future ImplicationsAI analysis grounded in cited sources
Poolside will transition to a 'freemium' model for the 'pool' agent harness by Q4 2026.
The current free availability of the M.1 API is explicitly labeled as temporary, suggesting a shift toward monetizing the agentic orchestration layer.
The release of Laguna XS.2 will trigger a shift in local IDE development toward agent-first architectures.
By providing a high-performance, Apache 2.0 model, Poolside lowers the barrier for third-party developers to integrate autonomous coding capabilities into lightweight local tools.
โณ Timeline
2024-06
Poolside emerges from stealth with $126M in seed funding led by Felicis.
2025-02
Poolside announces the development of their proprietary 'Code-First' foundation models.
2026-04
Official release of Laguna XS.2 and Laguna M.1 models.
๐ฐ
Weekly AI Recap
Read this week's curated digest of top AI events โ
๐Related Updates
AI-curated news aggregator. All content rights belong to original publishers.
Original source: VentureBeat โ

