China's DeepSeek releases V3.2, a 671-billion-parameter model that matches GPT-5 performance on reasoning benchmarks while costing roughly 10x less to use. The entire model is MIT licensed. Here's what it means for the AI industry.
On December 1, 2025, China's DeepSeek unveiled two new versions of its experimental AI model: DeepSeek-V3.2 and DeepSeek-V3.2-Speciale. According to Bloomberg, these models match the performance of OpenAI's flagship GPT-5 across multiple reasoning benchmarks.
DeepSeek V3.2's performance on mathematical reasoning benchmarks is remarkable, matching or exceeding that of frontier models from OpenAI and Google:
V3.2-Speciale Achievement: The advanced reasoning variant attains gold-medal-level results at the IMO (International Mathematical Olympiad), the CMO (Chinese Mathematical Olympiad), the ICPC World Finals, and IOI 2025 (International Olympiad in Informatics).
Perhaps more impressive than the benchmarks is DeepSeek's cost efficiency. According to Introl's analysis, DeepSeek offers frontier-level AI at a fraction of the cost of competing flagship APIs.
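To make "a fraction of the cost" concrete, here is a back-of-the-envelope comparison. Every price below is an assumed placeholder, not a quoted rate; substitute the figures from each provider's current pricing page:

```python
# Assumed per-million-token prices (placeholders, not quoted rates).
DEEPSEEK_IN, DEEPSEEK_OUT = 0.28, 0.42   # hypothetical $/M tokens
FRONTIER_IN, FRONTIER_OUT = 2.00, 5.00   # hypothetical $/M tokens

def monthly_cost(in_price, out_price, m_in=30.0, m_out=30.0):
    """Spend for m_in/m_out million input/output tokens per month."""
    return m_in * in_price + m_out * out_price

deepseek = monthly_cost(DEEPSEEK_IN, DEEPSEEK_OUT)
frontier = monthly_cost(FRONTIER_IN, FRONTIER_OUT)
print(f"DeepSeek: ${deepseek:.2f}/mo vs frontier: ${frontier:.2f}/mo "
      f"({frontier / deepseek:.0f}x difference)")
```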
DeepSeek V3.2 builds on the V3 architecture's efficiency techniques: a Mixture-of-Experts design that activates only about 37B of the 671B total parameters per token, Multi-head Latent Attention (MLA) to compress the KV cache, FP8 mixed-precision training, and multi-token prediction; V3.2 additionally introduces a sparse attention mechanism that reduces long-context compute. A toy sketch of the Mixture-of-Experts routing idea follows.
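In the sketch below, the expert count, top-k, and dimensions are toy values chosen for readability, not DeepSeek's actual configuration:

```python
import numpy as np

# Toy MoE configuration; illustrative only (DeepSeek-V3 routes each
# token to a handful of experts out of hundreds).
NUM_EXPERTS = 8
TOP_K = 2
D_MODEL = 16

rng = np.random.default_rng(0)
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route a token to its top-k experts and mix their outputs."""
    logits = x @ router_w               # router score for each expert
    topk = np.argsort(logits)[-TOP_K:]  # pick the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()            # softmax over the selected experts
    # Only TOP_K expert networks execute; the rest stay idle, which is
    # why active parameters per token are a small fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, topk))

token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (16,)
```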
Trying the API: DeepSeek's endpoint is OpenAI-compatible, so the official Python client works after swapping in a different base URL:

```python
import openai

# DeepSeek's API is OpenAI-compatible: point the standard client
# at DeepSeek's base URL and use your DeepSeek key.
client = openai.OpenAI(
    api_key="your-deepseek-api-key",
    base_url="https://api.deepseek.com/v1",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # or "deepseek-reasoner" for step-by-step reasoning
    messages=[
        {"role": "user", "content": "Solve this math problem..."},
    ],
)

print(response.choices[0].message.content)
```
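A note on model choice: with deepseek-reasoner, DeepSeek's API docs describe the model's chain of thought being returned in a separate reasoning_content field on the message, with the final answer in content as usual.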
Self-Hosting: The full 671B-parameter model is available on GitHub under the MIT license. You can run it locally, but "sufficient hardware" is a high bar: weights for a model this size occupy hundreds of gigabytes even at 8-bit precision, so plan on a multi-GPU server.
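If you have that hardware, a minimal serving sketch using vLLM's offline Python API is below. The Hugging Face model ID and parallelism settings are assumptions; adjust them to the actual published release and your GPU topology:

```python
# Minimal self-hosting sketch with vLLM. The repo ID below is an
# assumption; check DeepSeek's release notes for the published name.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3.2",  # assumed model ID
    tensor_parallel_size=8,             # shard the weights across 8 GPUs
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Solve this math problem..."], params)
print(outputs[0].outputs[0].text)
```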