Overview

If you cannot use OpenLLMetry, you can send data directly to the Moda ingest API. This is useful for:
  • Custom integrations
  • Languages without OpenLLMetry support
  • Fine-grained control over what data is sent

Endpoint

POST https://moda-ingest.modas.workers.dev/v1/ingest

Authentication

Include your Moda API key in the Authorization header:
-H "Authorization: Bearer YOUR_MODA_API_KEY"

Request format

Send an array of events in the request body:
{
  "events": [
    {
      "conversation_id": "conv-123",
      "role": "user",
      "message": "What is the capital of France?",
      "timestamp": "2024-01-15T10:30:00Z"
    },
    {
      "conversation_id": "conv-123",
      "role": "assistant",
      "message": "The capital of France is Paris.",
      "timestamp": "2024-01-15T10:30:01Z",
      "input_tokens": 12,
      "output_tokens": 8,
      "model": "gpt-4o",
      "provider": "openai"
    }
  ]
}

Event fields

Required fields

| Field | Type | Description |
| --- | --- | --- |
| conversation_id | string | Unique ID for the conversation |
| role | string | One of: user, assistant, system |
| message | string | The message content |

Optional fields

| Field | Type | Description |
| --- | --- | --- |
| timestamp | string | ISO 8601 timestamp (defaults to now) |
| trace_id | string | For linking related events (defaults to conversation_id) |
| user_id | string | Identifier for the end user |
| input_tokens | number | Number of input/prompt tokens used |
| output_tokens | number | Number of output/completion tokens used |
| reasoning_tokens | number | Tokens used for extended thinking (Claude models) |
| model | string | Model name (e.g., gpt-4o, claude-3-opus) |
| provider | string | Provider name (e.g., openai, anthropic) |
| content_blocks | array | Structured content blocks (see below) |

Content blocks

For conversations with tool use, extended thinking, or images, include structured content blocks:

| Block Type | Fields | Description |
| --- | --- | --- |
| text | text | Plain text content |
| thinking | text | Model reasoning (extended thinking) |
| tool_use | tool_name, tool_use_id, input | Tool/function call |
| tool_result | tool_use_id, content, is_error | Tool response |
| image | source | Image (base64 or URL) |

Example with tool use:
{
  "events": [{
    "conversation_id": "conv-123",
    "role": "assistant",
    "message": "Let me search for that.",
    "content_blocks": [
      {"type": "text", "text": "Let me search for that."},
      {"type": "tool_use", "tool_name": "web_search", "tool_use_id": "toolu_abc", "input": {"query": "latest news"}}
    ]
  }]
}
Example with extended thinking:
{
  "events": [{
    "conversation_id": "conv-123",
    "role": "assistant",
    "message": "The answer is 42.",
    "reasoning_tokens": 150,
    "output_tokens": 10,
    "content_blocks": [
      {"type": "thinking", "text": "Let me think through this step by step..."},
      {"type": "text", "text": "The answer is 42."}
    ]
  }]
}

Example

curl https://moda-ingest.modas.workers.dev/v1/ingest \
  -H "Authorization: Bearer YOUR_MODA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "events": [
      {
        "conversation_id": "conv-abc123",
        "role": "user",
        "message": "Hello, how are you?"
      },
      {
        "conversation_id": "conv-abc123",
        "role": "assistant",
        "message": "I am doing well, thank you for asking!",
        "input_tokens": 8,
        "output_tokens": 12,
        "model": "gpt-4o",
        "provider": "openai"
      }
    ]
  }'
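
The same request in Python, as a minimal sketch using the requests library (reading the key from a MODA_API_KEY environment variable is an assumption, not a requirement):

import os
import requests

# Minimal sketch: send two events to the ingest endpoint and print the result.
response = requests.post(
    "https://moda-ingest.modas.workers.dev/v1/ingest",
    headers={"Authorization": f"Bearer {os.environ['MODA_API_KEY']}"},
    json={
        "events": [
            {
                "conversation_id": "conv-abc123",
                "role": "user",
                "message": "Hello, how are you?",
            },
            {
                "conversation_id": "conv-abc123",
                "role": "assistant",
                "message": "I am doing well, thank you for asking!",
                "input_tokens": 8,
                "output_tokens": 12,
                "model": "gpt-4o",
                "provider": "openai",
            },
        ]
    },
)
body = response.json()
print(body["success"], body["count"], body["requestId"])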

Response

Success response

{
  "success": true,
  "count": 2,
  "requestId": "550e8400-e29b-41d4-a716-446655440000"
}

Error response

{
  "success": false,
  "count": 0,
  "message": "Invalid or missing API key",
  "requestId": "550e8400-e29b-41d4-a716-446655440000",
  "retryable": false
}

| Field | Type | Description |
| --- | --- | --- |
| success | boolean | Whether the request succeeded |
| count | number | Number of events processed |
| requestId | string | Unique request ID for debugging |
| message | string | Error message (on failure) |
| retryable | boolean | Whether the error is temporary and should be retried |

Batch limits

| Limit | Value |
| --- | --- |
| Max events per request | 1,000 |
| Max message size | 100 KB |
| Max request size | 5 MB |
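
If you are backfilling history, a single export can exceed these limits. One straightforward approach, sketched below with an illustrative send_in_batches helper, is to split the event list into chunks of at most 1,000 events and send one request per chunk; very large messages may also require smaller chunks to stay under the 5 MB request limit.

import os
import requests

INGEST_URL = "https://moda-ingest.modas.workers.dev/v1/ingest"
HEADERS = {"Authorization": f"Bearer {os.environ['MODA_API_KEY']}"}  # assumes the key is in an env var

def send_in_batches(events, batch_size=1000):
    """Split a large event list into chunks within the per-request limit."""
    for start in range(0, len(events), batch_size):
        chunk = events[start:start + batch_size]
        resp = requests.post(INGEST_URL, headers=HEADERS, json={"events": chunk})
        resp.raise_for_status()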

Error handling

| Status | Meaning | Retryable |
| --- | --- | --- |
| 200 | Success | - |
| 400 | Invalid request format | No |
| 401 | Invalid or missing API key | No |
| 413 | Request too large | No |
| 503 | Service temporarily unavailable | Yes |

For 503 errors, use exponential backoff when retrying. Start with 1 second and double each retry, up to a maximum of 30 seconds.
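
A minimal sketch of that retry policy in Python (the post_with_retry helper and the cap of five retries are illustrative choices, not part of the API):

import time
import requests

def post_with_retry(url, headers, payload, max_retries=5, max_delay=30.0):
    """Retry 503 responses with exponential backoff: 1s, 2s, 4s, ... capped at 30s."""
    delay = 1.0
    for attempt in range(max_retries + 1):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 503:
            return resp  # success, or a non-retryable error (400, 401, 413)
        if attempt == max_retries:
            return resp  # out of retries; let the caller handle the 503
        time.sleep(delay)
        delay = min(delay * 2, max_delay)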

OTLP endpoint

For OpenTelemetry-native integrations, you can also use the OTLP/HTTP endpoint:

POST https://moda-ingest.modas.workers.dev/v1/traces

This accepts standard OTLP JSON trace data. See the OpenLLMetry integration for details on the expected format.