Error Handling

ModelMax returns standard HTTP status codes with JSON error bodies. This guide covers the error scenarios you are most likely to encounter and how to handle each of them.

Error response format

All errors follow this structure:

{
  "error": {
    "message": "human-readable error description",
    "type": "error_type"
  }
}
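Because every error body shares this shape, a small helper can pull out the fields a handler needs. A minimal sketch; the function name is ours, not part of any SDK:

```python
import json

def parse_error(body: str) -> tuple[str, str]:
    """Extract the machine-readable type and human-readable message
    from a ModelMax error body of the shape shown above."""
    err = json.loads(body)["error"]
    return err["type"], err["message"]

body = '{"error": {"message": "insufficient balance", "type": "insufficient_balance"}}'
print(parse_error(body))  # ('insufficient_balance', 'insufficient balance')
```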

Status codes

Client errors

Status | Type                 | Meaning                                                   | Action
400    | bad_request          | Invalid request body, missing fields, or unsupported model | Fix the request
401    | unauthorized         | Missing or invalid API key                                | Check your API key
402    | insufficient_balance | Account balance is zero or negative                       | Top up your account

Server errors

Status | Type           | Meaning                            | Action
500    | internal_error | Unexpected server error            | Retry after a moment
502    | bad_gateway    | Upstream provider returned an error | Retry or try a different model

Handling common errors

Insufficient balance (402)

This is returned before the request reaches the provider. No charges are incurred.

{
  "error": {
    "message": "insufficient balance",
    "type": "insufficient_balance"
  }
}

Python:

from openai import OpenAI, APIStatusError

client = OpenAI(api_key="your-key", base_url="https://api.modelmax.io/v1")

try:
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[{"role": "user", "content": "Hello"}],
    )
except APIStatusError as e:
    if e.status_code == 402:
        print("Insufficient balance — please top up your account.")
    elif e.status_code == 401:
        print("Invalid API key — check your credentials.")
    else:
        print(f"API error {e.status_code}: {e.message}")

JavaScript:

import OpenAI from "openai";

const client = new OpenAI({ apiKey: "your-key", baseURL: "https://api.modelmax.io/v1" });

try {
  const response = await client.chat.completions.create({
    model: "gemini-3-flash-preview",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    if (error.status === 402) {
      console.error("Insufficient balance — please top up.");
    } else if (error.status === 401) {
      console.error("Invalid API key.");
    } else {
      console.error(`API error ${error.status}: ${error.message}`);
    }
  }
}

Invalid model (400)

Requesting a model id that ModelMax does not route returns a 400, with the offending id in the message:

{
  "error": {
    "message": "model not found: gpt-4o",
    "type": "bad_request"
  }
}
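You can avoid this round trip by validating the model id client-side against the set of models your account can use (fetched, for example, from a models-listing endpoint if ModelMax exposes one). The helper below is our own sketch, not part of the API:

```python
def check_model(requested: str, available: set[str]) -> None:
    """Raise locally, before any request is sent, if the model id
    isn't one the gateway routes. `available` would come from your
    own list of supported models."""
    if requested not in available:
        raise ValueError(f"model not found: {requested}")

check_model("gemini-3-flash-preview", {"gemini-3-flash-preview"})  # passes silently
```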

Upstream error (502)

The upstream provider (Bedrock, Gemini) returned an error. This can happen due to provider rate limits, temporary outages, or content policy violations.

If you get repeated 502 errors with one model, try a different model from another provider. For example, switch from a Bedrock model to a Gemini model.
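That switch can be automated. The sketch below (our own helper, not part of any SDK) takes the actual API call as a function, so the same logic plugs into the Python client shown earlier; in real code the except clause would catch openai.APIStatusError with status_code == 502 rather than the stand-in exception used here:

```python
class UpstreamError(Exception):
    """Stand-in for a 502 (in real code: openai.APIStatusError with status_code == 502)."""

def complete_with_fallback(call, models):
    """Try each model in order, moving on only when the upstream provider fails."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except UpstreamError as exc:
            last_error = exc  # provider-side failure; fall through to the next model
    raise last_error

# Demo with a fake call: the first (Bedrock-style) model fails upstream.
def fake_call(model):
    if model == "bedrock-model":
        raise UpstreamError("bad_gateway")
    return f"ok from {model}"

print(complete_with_fallback(fake_call, ["bedrock-model", "gemini-3-flash-preview"]))
# ok from gemini-3-flash-preview
```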

Video generation errors

Video tasks can fail asynchronously. Check the status and error fields:

{
  "request_id": "...",
  "status": "FAILED",
  "error": "content policy violation"
}
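Since failures arrive asynchronously, client code should poll the task until it settles and surface the error field when status is FAILED. A sketch with the status fetch injected so it works with whatever HTTP client you use; the terminal value "SUCCEEDED" is our assumption, so check the video API reference for the exact set of states:

```python
import time

def wait_for_video(fetch_status, request_id, poll_interval=5.0, timeout=600.0):
    """Poll a video task until it finishes or fails.

    fetch_status(request_id) must return the task JSON as a dict
    (the status/error fields shown above).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status(request_id)
        if task.get("status") == "FAILED":
            raise RuntimeError(f"video task failed: {task.get('error')}")
        if task.get("status") == "SUCCEEDED":  # assumed terminal value
            return task
        time.sleep(poll_interval)
    raise TimeoutError(f"video task {request_id} did not finish within {timeout}s")
```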

Retry strategy

For transient errors (500, 502), a simple exponential backoff works well:

import time
from openai import OpenAI, APIStatusError

client = OpenAI(api_key="your-key", base_url="https://api.modelmax.io/v1")

def chat_with_retry(messages, model="gemini-3-flash-preview", max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except APIStatusError as e:
            if e.status_code in (500, 502) and attempt < max_retries - 1:
                wait = 2 ** attempt
                print(f"Retrying in {wait}s...")
                time.sleep(wait)
            else:
                raise