DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
Related Stories
- DeepSeek V4 Pro has 1.6T total parameters, making it DeepSeek's largest model by parameter count, and V4 Flash has 284B parameters; both models have a 1M-token context window
- DeepSeek V4 Pro costs $1.74/1M input tokens and $3.48/1M output tokens, while V4 Flash costs $0.14/1M input tokens and $0.28/1M output tokens; both models are the cheapest in their class
- Intel reports Q1 revenue up 7% YoY to $13.58B, vs. $12.42B est., and forecasts Q2 revenue and adjusted EPS above estimates; INTC jumps 15%+ after hours
- GPT-5.5 is priced at $5/1M input tokens and $30/1M output tokens, double GPT-5.4's pricing; GPT-5.5 Pro costs $30/1M input tokens and $180/1M output tokens
- OpenAI says "GPT-5.5 matches GPT-5.4 per-token latency in real-world serving, while performing at a much higher level of intelligence"
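The per-token prices quoted above can be turned into a per-request cost with simple arithmetic. Below is a minimal sketch that compares the listed models at the quoted rates; the `request_cost` helper and the example request sizes are illustrative, not from any vendor SDK.

```python
# Quoted per-1M-token prices (USD) from the stories above: (input, output).
PRICES = {
    "DeepSeek V4 Pro":   (1.74, 3.48),
    "DeepSeek V4 Flash": (0.14, 0.28),
    "GPT-5.5":           (5.00, 30.00),
    "GPT-5.5 Pro":       (30.00, 180.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the quoted per-1M-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a full 1M-token input with a 4K-token reply (sizes illustrative).
for model in PRICES:
    print(f"{model}: ${request_cost(model, 1_000_000, 4_000):.4f}")
```

At these rates, a maxed-out 1M-token input alone costs $1.74 on V4 Pro versus $5.00 on GPT-5.5, which is the gap the pricing bullets highlight.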