Chinese Models Top Global AI Token Usage

Chinese open-source LLMs surpass US models in global usage; benchmark them for your next project.
30-Second TL;DR
What Changed
MiniMax M2.5 tops OpenRouter token usage ranking
Why It Matters
Highlights the growing competitiveness of Chinese AI firms in the open-source space, which offer cost-effective, high-performance alternatives to US models. AI practitioners gain more options for scalable deployments via platforms like OpenRouter.
What To Do Next
Deploy MiniMax M2.5 via the OpenRouter API to test its top-ranked token efficiency.
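As a starting point, here is a minimal sketch of calling MiniMax M2.5 through OpenRouter's OpenAI-compatible chat completions endpoint, using only the Python standard library. The model slug `minimax/minimax-m2.5` and the `OPENROUTER_API_KEY` environment variable are assumptions; check openrouter.ai/models for the exact identifier before deploying.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(prompt: str, model: str = "minimax/minimax-m2.5") -> dict:
    # NOTE: the model slug is an assumption; confirm the real ID on openrouter.ai/models.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    # Sends one chat completion request; expects OPENROUTER_API_KEY in the environment.
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the same payload shape works with the official `openai` SDK by pointing its `base_url` at OpenRouter.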
Deep Insight
Web-grounded analysis with 9 cited sources.
Enhanced Key Takeaways
- Chinese models captured 61% of total token volume on OpenRouter, with MiniMax M2.5 at 2.45 trillion tokens, Kimi K2.5 at 1.21 trillion, and Zhipu's GLM-5 at 780 billion.[2][3]
- MiniMax M2.5 launched on February 13, 2026, as the world's first production-grade model natively designed for agent scenarios, achieving 3.07 trillion tokens in its first seven days and a 197% week-over-week surge.[2]
- Moonshot AI's Kimi K2.5 generated revenue in under 20 days post-launch exceeding its entire 2025 total, with overseas revenue surpassing domestic for the first time, fueled by global paid subscribers and API usage.[2]
- Pricing advantage is key: MiniMax M2.5 and GLM-5 at $0.30 per million input tokens vs. Claude Opus 4.6 at $5.00, making them 16.7 times cheaper.[2]
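The 16.7x figure follows directly from the cited input prices; a quick sanity check in Python (the 50K-token request size below is an illustrative assumption, not from the article):

```python
# Input-token pricing cited above (USD per 1M tokens).
MINIMAX_M25 = 0.30
CLAUDE_OPUS_46 = 5.00

# Price ratio between the two models.
ratio = CLAUDE_OPUS_46 / MINIMAX_M25  # ~16.7x

def input_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of the input side of one request."""
    return tokens / 1_000_000 * price_per_million

# Illustrative: a 50K-token agent context per request.
cheap = input_cost(50_000, MINIMAX_M25)       # $0.015 per request
pricey = input_cost(50_000, CLAUDE_OPUS_46)   # $0.25 per request
```

At agent-scale workloads (millions of requests), that per-request gap compounds into the cost advantage the article describes.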
Competitor Analysis
| Model | Parameters | Context Window | Key Benchmarks | Pricing (Input/Output per 1M tokens) |
|---|---|---|---|---|
| MiniMax M2.5 | 230B | 205K | SWE-bench Verified 80.2, HumanEval 89.6 | $0.30 / ? |
| Kimi K2.5 | 1T total (32B active MoE) | 262K | HumanEval 99.0, MMLU 92.0, MATH-500 98.0 | ? |
| GLM-5 | ? | ? | Top coding usage | $0.30 / ? |
| Claude Opus 4.6 | ? | ? | Lower token usage | $5.00 / ? |
Technical Deep Dive
- MiniMax M2.5: 230B parameters, 205K context window; excels in real-world software engineering with SWE-bench Verified 80.2 (highest), Multi-SWE-Bench 51.3, BrowseComp 76.3; trained for agent scenarios, office tools (Word, Excel, PowerPoint), context switching, and token-efficient planning.[1][4]
- Kimi K2.5: 1T total parameters (32B active per token, MoE architecture), 262K context window; native multimodal with visual coding and self-directed agent swarm; standout benchmarks include HumanEval 99.0, MMLU 92.0, MMLU-Pro 87.1, LiveCodeBench 85.0, AIME 2025 96.1, GPQA Diamond 87.6, MATH-500 98.0, Chatbot Arena 1447, IFEval 94.0; continued pretraining on 15T mixed visual/text tokens.[1][4]
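The Kimi K2.5 sparsity claim above can be made concrete: with 32B of 1T parameters active per token in a mixture-of-experts model, only about 3.2% of the weights participate in each forward pass. This is a rough illustration; real MoE serving costs also depend on routing overhead and total-weight memory.

```python
def active_fraction(active_params: float, total_params: float) -> float:
    """Share of parameters engaged per token in a mixture-of-experts model."""
    return active_params / total_params

# Kimi K2.5 figures cited above: 1T total parameters, 32B active per token.
frac = active_fraction(32e9, 1e12)  # 0.032 -> ~3.2% of weights per forward pass
```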
Future Implications
AI analysis grounded in cited sources.
Sources (9)
Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.
- vertu.com: Open Source LLM Leaderboard 2026 Rankings Benchmarks the Best Models Right Now
- thechinaacademy.org: Kimi Moonshot AI Becomes China's Fastest Decacorn as 20-Day Revenue Surpasses Entire 2025 Total (China AI Daily, February 24, 2026)
- dataconomy.com: Chinese AI Models Hit 61% Market Share on OpenRouter
- openrouter.ai: Programming
- openrouter.ai: Kimi K2
- teamday.ai: Top AI Models OpenRouter 2026
- openrouter.ai: MiniMax M2
- llm-stats.com
- onyx.app: Open LLM Leaderboard
AI-curated news aggregator. All content rights belong to original publishers.
Original source: SCMP Technology