API Documentation
Our API is fully compatible with the OpenAI API spec: existing code that uses the OpenAI API can switch by changing only the base URL and API key.
Authentication
All API requests must include your API key in the Authorization header:
Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxxxx
Get your API key from the API Keys dashboard. Keep it secret — never expose it in client-side code.
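One common way to follow this advice is to load the key from an environment variable rather than hard-coding it in source. A minimal Python sketch (the environment variable name and helper are illustrative, not part of the API):

```python
import os

def auth_headers(api_key: str) -> dict:
    """Build the Authorization header this API expects."""
    return {"Authorization": f"Bearer {api_key}"}

# Load the key from the environment so it never lands in version
# control or client-side bundles. MYAPI_KEY is an example name.
API_KEY = os.environ.get("MYAPI_KEY", "")
headers = auth_headers(API_KEY)
```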
Base URL
https://api.yourdomain.com/v1
Chat Completions
Generate text responses from AI models. Compatible with the OpenAI Chat Completions format.
POST
/v1/chat/completions
Request body
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello! What can you do?"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "stream": false
}
Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1740000000,
  "model": "deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I can help you with a wide range of tasks..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 142,
    "total_tokens": 167
  }
}
Available text models
deepseek-chat: Fast general-purpose chat
deepseek-r1: Advanced reasoning with Chain-of-Thought
deepseek-coder: Code generation and completion
Image Generation
Generate images using JiMeng via a chat completions-style request. Pass the model name and a prompt.
POST
/v1/chat/completions
{
  "model": "jimeng-image",
  "messages": [
    {
      "role": "user",
      "content": "A futuristic city at sunset, cinematic lighting"
    }
  ]
}
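Putting this request together in Python might look like the sketch below (it uses the requests library; the endpoint, model name, and URL-in-content behavior come from this page, while the helper names are illustrative):

```python
import requests

API_KEY = "sk-your-api-key"  # replace with your key
BASE_URL = "https://api.yourdomain.com/v1"

def extract_image_url(resp: dict) -> str:
    # Per this page, the generated image URL is returned in the
    # assistant message's content field.
    return resp["choices"][0]["message"]["content"]

def generate_image(prompt: str) -> str:
    r = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "jimeng-image",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    r.raise_for_status()
    return extract_image_url(r.json())
```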
// Response includes image URL in content field
{
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "https://cdn.example.com/gen/image-xyz.png"
    }
  }]
}
Python
Use the requests library or the OpenAI Python SDK with a custom base URL.
import requests
API_KEY = "sk-your-api-key"
BASE_URL = "https://api.yourdomain.com/v1"
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek-chat",
        "messages": [
            {"role": "user", "content": "Explain quantum computing simply"}
        ],
    },
)
data = response.json()
print(data["choices"][0]["message"]["content"])
Or use the OpenAI SDK:
from openai import OpenAI
client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.yourdomain.com/v1",
)
completion = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Solve: 2x + 5 = 13"}],
)
print(completion.choices[0].message.content)
JavaScript
const API_KEY = "sk-your-api-key";
const BASE_URL = "https://api.yourdomain.com/v1";
const response = await fetch(`${BASE_URL}/chat/completions`, {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "deepseek-chat",
    messages: [
      { role: "user", content: "What is the capital of France?" }
    ],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
cURL
curl https://api.yourdomain.com/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Error Codes
| HTTP Status | Error Code | Description |
|---|---|---|
| 401 | unauthorized | Missing or invalid API key |
| 402 | insufficient_credits | Account has no credits remaining |
| 429 | rate_limit_exceeded | Too many requests — see rate limits below |
| 503 | provider_unavailable | Upstream AI provider is temporarily down |
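The table above maps directly to a small lookup you can use when surfacing failures to callers. A minimal sketch that relies only on the HTTP status code (this page does not specify the error response body, so the sketch avoids depending on it):

```python
# Documented error codes from the table above, keyed by HTTP status.
ERROR_CODES = {
    401: "unauthorized",
    402: "insufficient_credits",
    429: "rate_limit_exceeded",
    503: "provider_unavailable",
}

def describe_error(status: int) -> str:
    """Return the documented error code for a status, or a fallback."""
    return ERROR_CODES.get(status, "unknown_error")
```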
Rate Limits
| Limit | Value |
|---|---|
| Requests per minute (per API key) | 60 RPM |
| Concurrent requests | 10 |
When rate limited, you'll receive a 429 response. The Retry-After header indicates how many seconds to wait.
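A minimal retry sketch that honors Retry-After on 429 responses (requests-based; the helper names and defaults are illustrative, and production code should also cap total wait time):

```python
import time

import requests

def retry_delay(headers: dict, default: int = 1) -> int:
    """Seconds to wait before retrying, taken from the Retry-After header."""
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

def post_with_retry(url: str, headers: dict, payload: dict,
                    max_retries: int = 3):
    """POST, retrying on 429 and sleeping for the server-suggested delay."""
    resp = requests.post(url, headers=headers, json=payload, timeout=60)
    for _ in range(max_retries):
        if resp.status_code != 429:
            break
        time.sleep(retry_delay(resp.headers))
        resp = requests.post(url, headers=headers, json=payload, timeout=60)
    return resp
```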