Endpoint
All chat completion requests go to:

Request format
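The exact endpoint URL is not shown above. As a minimal sketch, assuming an OpenAI-compatible layout (the base URL and `/v1/chat/completions` path here are placeholders, not the documented endpoint):

```python
import json
import urllib.request

# Assumed base URL and path -- substitute the service's actual endpoint.
URL = "https://api.example.com/v1/chat/completions"

def chat_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the chat completions endpoint."""
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```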
Required fields
| Field | Description |
|---|---|
| `model` | The model and provider in `model@@provider` format |
| `messages` | Array of message objects with `role` and `content` |
Optional fields
| Field | Description |
|---|---|
| `temperature` | Controls randomness (0 to 2; default varies by provider) |
| `max_tokens` | Maximum number of tokens in the response |
| `stream` | Set to `true` for streaming responses |
| `user` | A unique identifier for the end user |
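Putting the required and optional fields together, a request body looks like this (the model and provider names are placeholders):

```python
# Request body with the required fields plus the common optional ones.
payload = {
    "model": "example-model@@example-provider",  # required: model@@provider
    "messages": [                                # required: role + content objects
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,    # optional: 0 to 2, default varies by provider
    "max_tokens": 256,     # optional: cap on response length
    "stream": False,       # optional: True for streaming responses
    "user": "user-1234",   # optional: end-user identifier
}
```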
Message format
Each message in the `messages` array has two fields: `role` and `content`.
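A single message object, sketched in Python:

```python
# One entry in the messages array.
message = {
    "role": "user",               # one of: system, user, assistant
    "content": "What is an API?"  # the message text
}
```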
Roles
| Role | Description |
|---|---|
| `system` | Sets the behavior of the assistant |
| `user` | Messages from the user |
| `assistant` | Previous responses from the AI |
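The three roles combine into a conversation history, for example:

```python
# A multi-turn conversation: system instructions first, then alternating
# user messages and prior assistant replies (contents are illustrative).
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What's 2 + 2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "content": "And doubled?"},
]
```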
Full example
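A complete request, sketched with the standard library. The endpoint URL follows an assumed OpenAI-compatible layout, and the model, provider, and API key are placeholders; the send itself is left commented out:

```python
import json
import urllib.request

# Placeholder model/provider; substitute real values.
payload = {
    "model": "example-model@@example-provider",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize HTTP in one sentence."},
    ],
    "temperature": 0.7,
    "max_tokens": 200,
}

# Assumed endpoint path -- replace with the service's actual URL.
req = urllib.request.Request(
    "https://api.example.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)

# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```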
Response format
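As a sketch, assuming an OpenAI-compatible response shape (the field names and values below are illustrative, not taken from this API's documentation), the reply text lives at `choices[0].message.content`:

```python
# Illustrative response body (OpenAI-compatible assumption).
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "example-model@@example-provider",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16},
}

# Extract the assistant's reply.
reply = response["choices"][0]["message"]["content"]
```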
Streaming
For streaming responses, set `stream: true`:
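Streaming APIs of this kind typically deliver server-sent events: each chunk arrives on a `data:` line, with a `[DONE]` sentinel at the end (the delta shape below is an OpenAI-compatible assumption). A minimal parser sketch, run here on simulated chunks:

```python
import json

def parse_sse_line(line: str):
    """Extract the JSON payload from one SSE 'data:' line, or return None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":  # sentinel marking end of stream
        return None
    return json.loads(data)

# Simulated stream chunks (delta field shape is an assumption).
chunks = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]

# Accumulate the streamed text fragments.
text = ""
for line in chunks:
    event = parse_sse_line(line)
    if event:
        text += event["choices"][0]["delta"].get("content", "")
# text == "Hello"
```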