Since February 13, DeepSeek has been testing a new long-context model supporting 1 million tokens in its web and app versions. The move has fueled industry speculation about a major Lunar New Year release that could replicate the breakout success of last year's model.
Key Points
1. Testing of the 1M-token context model started on February 13
2. Available in DeepSeek's web and app versions
3. Industry expects a Lunar New Year release
4. Aims to replicate the prior year's success
Impact Analysis
This pushes the boundaries of open-source LLMs in long-context processing, enabling more capable retrieval-augmented generation (RAG) and agentic applications. DeepSeek could challenge proprietary leaders such as Gemini 1.5, intensifying competition.
Technical Details
The model supports a context length of 1 million tokens for extended inputs and is currently in a testing phase accessible through the web and app interfaces.
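The long-context model is so far only available in DeepSeek's web and app versions, but DeepSeek's existing API is OpenAI-compatible, so developers could in principle feed very long documents to such a model once it is exposed there. The sketch below is a hypothetical illustration under that assumption; the model name "deepseek-long" and the input file are placeholders, not published identifiers.

```python
from openai import OpenAI

# Hypothetical sketch: the 1M-token model is currently in web/app testing only.
# "deepseek-long" is an assumed model name, not a published API identifier.
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

# Load a long source document (e.g. hundreds of thousands of tokens of text)
with open("annual_report.txt", "r", encoding="utf-8") as f:
    long_document = f.read()

# Pass the entire document in a single request instead of chunking it,
# relying on the assumed 1M-token context window.
response = client.chat.completions.create(
    model="deepseek-long",  # hypothetical long-context model name
    messages=[
        {"role": "system", "content": "You answer questions about the provided document."},
        {"role": "user", "content": long_document + "\n\nSummarize the key findings."},
    ],
)
print(response.choices[0].message.content)
```

The appeal of a 1M-token window is exactly this pattern: whole reports, codebases, or transcripts can be placed directly in the prompt, reducing the need for the chunking and retrieval pipelines that shorter context windows require.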
