DeepSeek AI · Open-weight · 66K context · 671B params · DeepSeek License v1.0
DeepSeek V3 (deepseek/deepseek-v3)
DeepSeek V3 is a 671B-parameter MoE language model with strong reasoning and coding performance at roughly 1/10 the price of frontier US models. Weights are released under DeepSeek License v1.0, and an OpenAI-compatible official API is available globally.
Cheapest blended: $0.48 / 1M tokens on DeepSeek · 2 providers listed
Pricing across providers
| Provider | Input /1M | Output /1M | Blended /1M | Latency p50 | Format | Freshness |
|---|---|---|---|---|---|---|
| DeepSeek deepseek-chat | $0.27 | $1.10 | $0.48 | 480ms | OpenAI-compatible | Verified 3d ago |
| Together.ai deepseek-ai/DeepSeek-V3 | $1.25 | $1.25 | $1.25 | 210ms | OpenAI-compatible | Verified 3d ago |
Affiliate disclosure: We may earn a commission from qualified signups. Pricing independence is enforced at the data layer — see our Editorial Independence Policy.
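The blended figure can be reproduced from the per-token prices, assuming the common 3:1 input-to-output token ratio (the ratio is an assumption; the page does not state which weighting it uses):

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Blend input/output prices at `ratio` input tokens per output token."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# DeepSeek row: (3 * 0.27 + 1.10) / 4
print(round(blended_price(0.27, 1.10), 2))  # → 0.48
# Together row: input and output are equal, so the blend is flat
print(round(blended_price(1.25, 1.25), 2))  # → 1.25
```

Both results match the "Blended /1M" column above, which suggests a 3:1 weighting is what the table uses.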
Works with
Point any of these clients at a provider's base URL — they all speak at least one of this model's endpoint protocols (OpenAI-compatible).
Capabilities
- reasoning
- coding
- tool_use
- long_context
- multilingual
Languages: en, zh, ja, ko, es, fr, de, ru
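Since tool_use is listed, most OpenAI-compatible providers accept the standard `tools` parameter on chat completions. A minimal sketch of a tool definition in that format (the function name and schema are illustrative, not from this page):

```python
import json

# Illustrative tool definition in the standard OpenAI-compatible format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function name
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The dict serializes cleanly, so it can be passed as
# client.chat.completions.create(model=..., messages=..., tools=[weather_tool]).
print(json.dumps(weather_tool, indent=2))
```

Whether the hosted model actually emits tool calls reliably varies by provider; test against your own schemas.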
Benchmarks
Code samples
Example using DeepSeek — the cheapest provider for this model as of last verification. Swap base_url and model to use a different provider from the matrix above.
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.deepseek.com/v1",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
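Switching providers changes only `base_url` and the `model` string. A small lookup keyed on the matrix above can keep that in one place (Together's base URL is an assumption here — confirm it in your provider's dashboard):

```python
# Endpoints and model names per provider. The Together base URL is an
# assumption -- verify it against your provider's documentation.
PROVIDERS = {
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
    "together": {"base_url": "https://api.together.xyz/v1", "model": "deepseek-ai/DeepSeek-V3"},
}

def client_config(provider: str) -> dict:
    """Return the base_url/model pair for `provider` from the table above."""
    return PROVIDERS[provider]

# Usage: cfg = client_config("together"), then
# OpenAI(api_key=..., base_url=cfg["base_url"]) and model=cfg["model"].
```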
Technical specs
- Context
- 66K
- Max output
- 8K
- Parameters
- 671B
- Release
- 2024-12-26
- Training cutoff
- 2024-07-01
- License
- DeepSeek License v1.0
Similar models
Compare with
- DeepSeek V3 vs DeepSeek Chat V3.1: Compare side-by-side →
- DeepSeek V3 vs DeepSeek R1: Compare side-by-side →
- DeepSeek V3 vs Hunyuan Large: Comparison planned, not yet published
Frequently asked
How much does DeepSeek V3 cost?
The cheapest public provider is $0.48 per 1M blended tokens on DeepSeek. Two providers are listed above with per-input and per-output pricing.
How do I access DeepSeek V3 from outside China?
All providers listed above support global access. The official API (e.g. api.deepseek.com, dashscope-intl.aliyuncs.com) accepts international credit cards and does not require a Chinese mobile number. For privacy-sensitive workloads, third-party aggregators like Together.ai host the model on US/EU infrastructure.
Is DeepSeek V3 open-source? Can I fine-tune it?
Yes. DeepSeek V3 is open-weight under DeepSeek License v1.0. Weights are available on Hugging Face for local inference, fine-tuning, and commercial use (see the license for specific terms).
Is DeepSeek V3 OpenAI-compatible?
Most listed providers expose an OpenAI-compatible API, so you can point an existing openai SDK client at the provider's base_url and use the provider's model name. See the code samples above for a copy-pasteable example.
What's the maximum context window for DeepSeek V3?
The model supports up to 65,536 tokens of context (input + output). Some hosted versions may impose a smaller limit — check the "Context" column in the pricing matrix for each provider.
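To stay inside that window, a rough budget check before sending helps. This sketch assumes roughly 4 characters per token (a crude heuristic for English text, not the model's real tokenizer) and reads "8K max output" as 8,192 tokens:

```python
CONTEXT_LIMIT = 65_536   # input + output tokens, per the specs above
MAX_OUTPUT = 8_192       # assuming "8K" means 8,192 tokens

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def max_completion_tokens(prompt: str) -> int:
    """Largest max_tokens value that keeps the request inside the window."""
    return min(MAX_OUTPUT, CONTEXT_LIMIT - estimate_tokens(prompt))

print(max_completion_tokens("Hello!"))  # → 8192 for a short prompt
```

For anything precision-sensitive, use the provider's usage fields from the API response rather than a character heuristic.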