
Qwen 3.5 Tops Hugging Face Leaderboard


💡 Qwen 3.5 is the #1 open model, and Chinese models hold 8 of the top 10 spots. Stronger than Llama 3.1? Test it!

⚡ 30-Second TL;DR

What Changed

Qwen 3.5 ranks #1 on Hugging Face open leaderboard

Why It Matters

Solidifies China's dominance in open-weight LLMs, pressuring Western labs to accelerate releases.

What To Do Next

Download Qwen-3.5 from Hugging Face and run it on your MMLU or coding benchmarks.

Who should care: Developers & AI Engineers

🧠 Deep Insight

Web-grounded analysis with 6 cited sources.

🔑 Enhanced Key Takeaways

  • Qwen3-Next-80B ranks third on the F5 CASI security leaderboard with a score of 81.10, behind Claude Sonnet 4 and GPT-5[1].
  • Qwen3-30B-A3B appears on the F5 CASI leaderboard with a score of 58.33 in a lower tier, indicating varied performance across model sizes[1].
  • Qwen 3.5 is ranked in the A-tier on the Onyx AI Open Source LLM Leaderboard 2026, alongside MiMo-V2-Flash[6].
  • Qwen3-235B-A22B received a 0725 update but scored only 50.97 on the updated CASI benchmark, indicating it remains vulnerable to attack agents[1].

🔮 Future Implications

AI analysis grounded in cited sources.

  • Chinese models will capture 9 of the top 10 spots on the Hugging Face open leaderboard by mid-2026. Current dominance of 8 of 10 positions, combined with rapid iterations such as the Qwen3-235B-A22B update, signals aggressive advancement.
  • MoE efficiency in Qwen 3.5 will pressure closed models to release open variants. Activating only 170B of 397B parameters achieves top performance, challenging high-cost proprietary models on leaderboards like CASI.
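The efficiency claim above is simple arithmetic: a sparse MoE model only runs a fraction of its weights per token. A minimal sketch, using the 170B/397B figures quoted above (taken from this digest, not independently verified against an official model card):

```python
def moe_active_fraction(active_params_b: float, total_params_b: float) -> float:
    """Fraction of a Mixture-of-Experts model's parameters active per token."""
    return active_params_b / total_params_b

# Figures quoted in the text above for Qwen 3.5 (unverified assumption).
ACTIVE_B, TOTAL_B = 170, 397

fraction = moe_active_fraction(ACTIVE_B, TOTAL_B)
print(f"Active per token: {fraction:.1%} of total parameters")  # ~42.8%
```

Roughly 43% of the weights do the work of the full model at inference time, which is the cost advantage the prediction above rests on.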
📰 Weekly AI Recap

Read this week's curated digest of top AI events →

👉 Related Updates

AI-curated news aggregator. All content rights belong to original publishers.
Original source: 量子位