Endpoint

All chat completion requests go to:
POST https://api.example.com/v1/chat/completions

Request format

{
  "model": "gpt-4o@@openai",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}

Required fields

Field        Description
model        The model and provider in model@@provider format
messages     Array of message objects with role and content

Optional fields

Field        Description
temperature  Controls randomness (0 to 2, default varies by provider)
max_tokens   Maximum tokens in the response
stream       Set to true for streaming responses
user         A unique identifier for the end user
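For example, a request payload that exercises these optional fields might look like this (a sketch in Python; the values are illustrative):

payload = {
    "model": "gpt-4o@@openai",
    "messages": [
        {"role": "user", "content": "Summarize SSE in one sentence."}
    ],
    "temperature": 0.2,   # lower values make output more deterministic
    "max_tokens": 256,    # cap on response length
    "stream": False,      # set to True for SSE streaming
    "user": "user-1234"   # stable end-user identifier (illustrative)
}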

Message format

Each message in the messages array is an object with a role and a content field:
{
  "role": "user",
  "content": "Hello!"
}

Roles

Role       Description
system     Sets the behavior of the assistant
user       Messages from the user
assistant  Previous responses from the AI
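A multi-turn conversation combines these roles in order, with prior assistant replies included so the model has context. For example (illustrative content, shown as a Python list):

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And what is its population?"}
]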

Full example

curl https://api.example.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MODA_API_KEY" \
  -H "X-Provider-Key: YOUR_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o@@openai",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is 2 + 2?"}
    ],
    "temperature": 0.5,
    "max_tokens": 100
  }'
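The same request in Python, using the requests library (a minimal sketch; replace the placeholder keys with your own):

import requests

response = requests.post(
    "https://api.example.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_MODA_API_KEY",  # gateway key (placeholder)
        "X-Provider-Key": "YOUR_OPENAI_KEY"           # provider key (placeholder)
    },
    json={  # requests sets Content-Type: application/json automatically
        "model": "gpt-4o@@openai",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is 2 + 2?"}
        ],
        "temperature": 0.5,
        "max_tokens": 100
    },
    timeout=30
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])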

Response format

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "2 + 2 equals 4."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 8,
    "total_tokens": 28
  }
}
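Continuing the Python sketch above, the reply text and token usage can be read from the parsed body like this:

data = response.json()
reply = data["choices"][0]["message"]["content"]  # "2 + 2 equals 4."
finish = data["choices"][0]["finish_reason"]      # "stop" when the model finished normally
total = data["usage"]["total_tokens"]             # prompt_tokens + completion_tokens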

Streaming

For streaming responses, set stream: true:
{
  "model": "gpt-4o@@openai",
  "messages": [...],
  "stream": true
}
The response is delivered as server-sent events (SSE), with content arriving incrementally.
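A sketch of consuming the stream in Python, assuming OpenAI-style SSE chunks (data: lines carrying JSON deltas, terminated by a data: [DONE] sentinel; the exact chunk schema may vary by provider):

import json
import requests

with requests.post(
    "https://api.example.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_MODA_API_KEY",  # placeholder
        "X-Provider-Key": "YOUR_OPENAI_KEY"           # placeholder
    },
    json={
        "model": "gpt-4o@@openai",
        "messages": [{"role": "user", "content": "Count to five."}],
        "stream": True
    },
    stream=True,  # tell requests not to buffer the whole body
    timeout=60
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue  # skip blank separators and non-data lines
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break  # assumed end-of-stream sentinel (OpenAI convention)
        delta = json.loads(chunk)["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)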