
Chat Completions

The chat completions endpoint is OpenAI-compatible. If you've used the OpenAI API, you already know how to use ModelMax.

Basic request

cURL:

curl -X POST https://api.modelmax.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MODELMAX_API_KEY" \
  -d '{
    "model": "deepseek-v3.2",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

Python:

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.modelmax.io/v1",
)

response = client.chat.completions.create(
    model="deepseek-v3.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)

print(response.choices[0].message.content)

JavaScript:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-api-key",
  baseURL: "https://api.modelmax.io/v1",
});

const response = await client.chat.completions.create({
  model: "deepseek-v3.2",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
});

console.log(response.choices[0].message.content);
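
A successful response follows the standard OpenAI chat completion shape, with the generated text at choices[0].message.content. An illustrative response body (field values here are representative, not exact):

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "deepseek-v3.2",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "The capital of France is Paris." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 24, "completion_tokens": 8, "total_tokens": 32 }
}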

Streaming

Set stream: true to receive tokens incrementally via Server-Sent Events. This is useful for chat UIs where you want to display text as it's generated.

cURL:

curl -X POST https://api.modelmax.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MODELMAX_API_KEY" \
  -d '{
    "model": "deepseek-v3.2",
    "messages": [
      { "role": "user", "content": "Tell me a short story." }
    ],
    "stream": true
  }'

Python:

stream = client.chat.completions.create(
    model="deepseek-v3.2",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
print()

JavaScript:

const stream = await client.chat.completions.create({
  model: "deepseek-v3.2",
  messages: [{ role: "user", content: "Tell me a short story." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
console.log();

Each line of the SSE response is a data: field carrying a JSON chunk; the stream ends with a data: [DONE] sentinel:

data: {"id":"...","choices":[{"delta":{"content":"Once"},"index":0}]}
data: {"id":"...","choices":[{"delta":{"content":" upon"},"index":0}]}
...
data: [DONE]
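
If you're not using an SDK, you can consume the stream directly. Below is a minimal Python sketch using the requests library; the endpoint and payload come from the examples above, while the parsing is generic SSE handling:

import json
import os

import requests

resp = requests.post(
    "https://api.modelmax.io/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MODELMAX_API_KEY']}"},
    json={
        "model": "deepseek-v3.2",
        "messages": [{"role": "user", "content": "Tell me a short story."}],
        "stream": True,
    },
    stream=True,
)

for line in resp.iter_lines():
    if not line:
        continue  # skip blank lines between SSE events
    text = line.decode("utf-8")
    if not text.startswith("data: "):
        continue
    payload = text[len("data: "):]
    if payload == "[DONE]":
        break  # end-of-stream sentinel
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"].get("content")
    if delta:
        print(delta, end="", flush=True)
print()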

Multi-turn conversations

Include previous messages to maintain context across turns:

response = client.chat.completions.create(
    model="gemini-3-flash-preview",
    messages=[
        {"role": "system", "content": "You are a math tutor."},
        {"role": "user", "content": "What is 2+2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
        {"role": "user", "content": "And if you multiply that by 3?"},
    ],
)
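
The API is stateless, so your application keeps the history: append each assistant reply to the message list before sending the next user turn. A minimal loop sketch (the turns are illustrative):

messages = [{"role": "system", "content": "You are a math tutor."}]

for user_input in ["What is 2+2?", "And if you multiply that by 3?"]:
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Store the assistant's answer so the next turn has full context.
    messages.append({"role": "assistant", "content": reply})
    print(reply)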

Parameters

Parameter    Type            Default        Description
model        string                         Required. Model ID (e.g. deepseek-v3.2)
messages     array                          Required. Conversation messages
stream       boolean         false          Enable SSE streaming
temperature  number          model default  Sampling temperature (0–2)
top_p        number          model default  Nucleus sampling threshold
max_tokens   integer         model default  Maximum tokens to generate
stop         string | array  null           Stop sequences
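
For example, a request that tunes sampling and caps output length (the values here are illustrative):

response = client.chat.completions.create(
    model="deepseek-v3.2",
    messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    temperature=0.7,   # higher values produce more varied output (range 0–2)
    top_p=0.9,         # sample from the top 90% of the probability mass
    max_tokens=100,    # hard cap on generated tokens
    stop=["\n"],       # stop at the first newline
)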

Switching models

Change the model parameter to switch between providers. The API format stays the same:

# AWS Bedrock model
client.chat.completions.create(model="deepseek-v3.2", messages=[...])

# Google Gemini model
client.chat.completions.create(model="gemini-3-flash-preview", messages=[...])

# Same API, different providers — no code changes needed.

See Supported models for the full list.