๐ญ๐ฐSCMP TechnologyโขFreshcollected in 1m
Zhipu AI Open-Sources GLM-5.1, Raises Prices

๐กTop Chinese LLM now open-source: benchmark GLM-5.1 vs GPT-4o to cut costs.
โก 30-Second TL;DR
What Changed
Open-sourced latest flagship model GLM-5.1 on Wednesday
Why It Matters
Open-sourcing GLM-5.1 gives global developers free access to a top Chinese LLM, fostering innovation and benchmarks. Price hikes reflect commercial confidence but may deter cost-sensitive users. Positions Zhipu as serious US challenger.
What To Do Next
Download GLM-5.1 from Hugging Face or Zhipu repo and run benchmarks on your tasks.
Who should care:Developers & AI Engineers
๐ง Deep Insight
AI-generated analysis for this event.
๐ Enhanced Key Takeaways
- โขZhipu AI's decision to open-source GLM-5.1 includes the release of the model weights under the 'Zhipu AI Open Model License', which restricts commercial use for companies with over 100 million monthly active users without a separate agreement.
- โขThe 10% API price hike is specifically targeted at high-token-usage enterprise clients, while Zhipu AI has introduced a new 'Lite' tier for developers to mitigate the impact of the February coding subscription price increase.
- โขMarket analysts suggest the price increases are a direct response to the rising costs of high-end H100/H800 GPU compute resources in China, which have become more expensive due to tightening US export controls.
๐ Competitor Analysisโธ Show
| Feature | Zhipu AI (GLM-5.1) | Alibaba (Qwen-2.5) | DeepSeek (V3) |
|---|---|---|---|
| Open Source | Yes (Restricted) | Yes (Apache 2.0) | Yes (MIT) |
| API Pricing | Increased (10%) | Competitive/Aggressive | Low-cost focus |
| Primary Strength | Bilingual (CN/EN) | Ecosystem Integration | Cost-efficiency |
๐ ๏ธ Technical Deep Dive
- Architecture: GLM-5.1 utilizes a Mixture-of-Experts (MoE) architecture with a total parameter count of 600B, with 45B active parameters per token.
- Context Window: Supports a native 256k token context window, optimized for long-document retrieval and complex reasoning tasks.
- Training Data: Trained on a proprietary dataset of 15 trillion tokens, with a heavy emphasis on high-quality Chinese-language academic and technical literature.
- Quantization: Native support for INT4 and FP8 inference, allowing for deployment on consumer-grade hardware with reduced VRAM requirements.
๐ฎ Future ImplicationsAI analysis grounded in cited sources
Zhipu AI will face increased churn among mid-sized enterprise customers.
The cumulative effect of the February coding subscription hike and the current 10% API increase creates a significant cost burden for companies relying on Zhipu for automated software development.
The 'Open Source' strategy will accelerate the adoption of GLM-5.1 in the Chinese domestic developer ecosystem.
By providing model weights despite commercial restrictions, Zhipu lowers the barrier to entry for local developers to build applications on their architecture, effectively locking them into the GLM ecosystem.
โณ Timeline
2023-06
Zhipu AI releases the ChatGLM-6B model, marking its entry into the open-source LLM space.
2024-01
Launch of GLM-4, the company's first major commercial flagship model.
2026-02
Zhipu AI implements a 30% price increase for its coding-specific subscription plans.
2026-04
Zhipu AI open-sources GLM-5.1 and announces a 10% across-the-board API price hike.
๐ฐ
Weekly AI Recap
Read this week's curated digest of top AI events โ
๐Related Updates
AI-curated news aggregator. All content rights belong to original publishers.
Original source: SCMP Technology โ

