Overview
If you cannot use OpenLLMetry, you can send data directly to the Moda ingest API. This is useful for:
- Custom integrations
- Languages without OpenLLMetry support
- Fine-grained control over what data is sent
Endpoint
POST https://moda-ingest.modas.workers.dev/v1/ingest
Authentication
Include your Moda API key in the Authorization header:
-H "Authorization: Bearer YOUR_MODA_API_KEY"
Request body
Send an array of events in the request body:
{
  "events": [
    {
      "conversation_id": "conv-123",
      "role": "user",
      "message": "What is the capital of France?",
      "timestamp": "2024-01-15T10:30:00Z"
    },
    {
      "conversation_id": "conv-123",
      "role": "assistant",
      "message": "The capital of France is Paris.",
      "timestamp": "2024-01-15T10:30:01Z",
      "input_tokens": 12,
      "output_tokens": 8,
      "model": "gpt-4o",
      "provider": "openai"
    }
  ]
}
Event fields
Required fields
| Field | Type | Description |
|---|---|---|
| conversation_id | string | Unique ID for the conversation |
| role | string | One of: user, assistant, system |
| message | string | The message content |
Optional fields
| Field | Type | Description |
|---|---|---|
| timestamp | string | ISO 8601 timestamp (defaults to now) |
| trace_id | string | For linking related events (defaults to conversation_id) |
| user_id | string | Identifier for the end user |
| input_tokens | number | Number of input/prompt tokens used |
| output_tokens | number | Number of output/completion tokens used |
| reasoning_tokens | number | Tokens used for extended thinking (Claude models) |
| model | string | Model name (e.g., gpt-4o, claude-3-opus) |
| provider | string | Provider name (e.g., openai, anthropic) |
| content_blocks | array | Structured content blocks (see below) |
Content blocks
For conversations with tool use, extended thinking, or images, include structured content blocks:
| Block Type | Fields | Description |
|---|---|---|
| text | text | Plain text content |
| thinking | text | Model reasoning (extended thinking) |
| tool_use | tool_name, tool_use_id, input | Tool/function call |
| tool_result | tool_use_id, content, is_error | Tool response |
| image | source | Image (base64 or URL) |
Example with tool use:
{
  "events": [{
    "conversation_id": "conv-123",
    "role": "assistant",
    "message": "Let me search for that.",
    "content_blocks": [
      {"type": "text", "text": "Let me search for that."},
      {"type": "tool_use", "tool_name": "web_search", "tool_use_id": "toolu_abc", "input": {"query": "latest news"}}
    ]
  }]
}
Example with extended thinking:
{
  "events": [{
    "conversation_id": "conv-123",
    "role": "assistant",
    "message": "The answer is 42.",
    "reasoning_tokens": 150,
    "output_tokens": 10,
    "content_blocks": [
      {"type": "thinking", "text": "Let me think through this step by step..."},
      {"type": "text", "text": "The answer is 42."}
    ]
  }]
}
Example
curl https://moda-ingest.modas.workers.dev/v1/ingest \
-H "Authorization: Bearer YOUR_MODA_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"events": [
{
"conversation_id": "conv-abc123",
"role": "user",
"message": "Hello, how are you?"
},
{
"conversation_id": "conv-abc123",
"role": "assistant",
"message": "I am doing well, thank you for asking!",
"input_tokens": 8,
"output_tokens": 12,
"model": "gpt-4o",
"provider": "openai"
}
]
}'
Response
Success response
{
  "success": true,
  "count": 2,
  "requestId": "550e8400-e29b-41d4-a716-446655440000"
}
Error response
{
  "success": false,
  "count": 0,
  "message": "Invalid or missing API key",
  "requestId": "550e8400-e29b-41d4-a716-446655440000",
  "retryable": false
}
| Field | Type | Description |
|---|---|---|
| success | boolean | Whether the request succeeded |
| count | number | Number of events processed |
| requestId | string | Unique request ID for debugging |
| message | string | Error message (on failure) |
| retryable | boolean | Whether the error is temporary and should be retried |
Batch limits
| Limit | Value |
|---|---|
| Max events per request | 1,000 |
| Max message size | 100 KB |
| Max request size | 5 MB |
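To stay under these limits, split large event lists into chunks before sending. Here is a minimal sketch in Python; the send_events callable is a placeholder for whatever function posts one batch to /v1/ingest (for example, the retry helper shown under Error handling below), not part of the API itself:

MAX_EVENTS_PER_REQUEST = 1000  # from the batch limits table above

def send_in_batches(events, send_events):
    # Slice the full event list into chunks of at most 1,000 events
    # and send each chunk as its own request to /v1/ingest.
    for start in range(0, len(events), MAX_EVENTS_PER_REQUEST):
        send_events(events[start:start + MAX_EVENTS_PER_REQUEST])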
Error handling
| Status | Meaning | Retryable |
|---|---|---|
| 200 | Success | - |
| 400 | Invalid request format | No |
| 401 | Invalid or missing API key | No |
| 413 | Request too large | No |
| 503 | Service temporarily unavailable | Yes |
For 503 errors, use exponential backoff when retrying. Start with 1 second and double each retry, up to a maximum of 30 seconds.
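A minimal retry sketch in Python using the requests library (the function name and error handling are illustrative, not part of the API); it retries only when the response marks the error as retryable and backs off from 1 second up to 30 seconds:

import time
import requests

INGEST_URL = "https://moda-ingest.modas.workers.dev/v1/ingest"

def ingest_with_retry(api_key, events, max_wait=30):
    # Start at 1 second, double after each retryable failure, cap at max_wait.
    wait = 1
    while True:
        response = requests.post(
            INGEST_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"events": events},
            timeout=30,
        )
        if response.status_code == 200:
            return response.json()
        # Error responses follow the format shown above, including "retryable".
        body = response.json()
        if not body.get("retryable"):
            raise RuntimeError(
                f"Ingest failed ({response.status_code}): {body.get('message')} "
                f"[requestId={body.get('requestId')}]"
            )
        time.sleep(wait)
        wait = min(wait * 2, max_wait)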
OTLP endpoint
For OpenTelemetry-native integrations, you can also use the OTLP/HTTP endpoint:
POST https://moda-ingest.modas.workers.dev/v1/traces
This accepts standard OTLP JSON trace data. See the OpenLLMetry integration for details on the expected format.