# What is ingestion?

Ingestion allows you to send LLM usage data to Moda from your existing applications. This is useful if you:

- Already have an application making direct calls to LLM providers
- Want to add analytics without changing how you call the APIs
## Ways to ingest data

### Moda Python SDK

Recommended for Python. Official SDK with automatic conversation threading for OpenAI and Anthropic.

### Moda Node.js SDK

Recommended for Node.js/TypeScript. Official SDK with automatic conversation threading.

### Direct API

Send LLM conversation data directly to the Moda API. Supports conversations across chat, email, voice, and more.

### Providers

Provider-specific setup guides for OpenAI, Anthropic, OpenRouter, Azure, and Bedrock.
## How it works

1. Your app makes calls to LLM providers as normal
2. The Moda SDK captures telemetry in the background
3. Telemetry is sent to Moda's ingestion API
4. Moda validates your API key and processes the data
5. View insights and analytics in the Moda dashboard
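The flow above can be sketched for a Direct API integration. This is a minimal illustration, not the SDK's implementation: the event key names, the `capture`/`send_in_background` helpers, and the Bearer-token header scheme are all assumptions.

```python
import json
import threading
import time
import urllib.request

INGEST_URL = "https://moda-ingest.modas.workers.dev/v1/ingest"
API_KEY = "your-moda-api-key"  # placeholder

def capture(conversation_id, user_message, assistant_response,
            model, provider, usage):
    """Step 2: assemble a telemetry event after the provider call
    returns. Key names here are assumed, not the documented schema."""
    return {
        "conversation_id": conversation_id,
        "user_message": user_message,
        "assistant_response": assistant_response,
        "model": model,
        "provider": provider,
        "timestamp": int(time.time()),
        "token_usage": usage,
    }

def send_in_background(event):
    """Steps 3-4: POST the event to the ingestion API off the request
    path; Moda validates the API key server-side."""
    def _post():
        req = urllib.request.Request(
            INGEST_URL,
            data=json.dumps({"events": [event]}).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_KEY}",  # scheme assumed
            },
            method="POST",
        )
        urllib.request.urlopen(req)
    threading.Thread(target=_post, daemon=True).start()
```

Sending on a daemon thread keeps ingestion out of your app's latency path, matching the "captured in the background" behavior described above.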
## What data is captured?
| Field | Description |
|---|---|
| Conversation ID | Groups related messages together |
| User message | What the user asked |
| Assistant response | What the AI replied |
| Model | Which model was used |
| Provider | LLM provider name (e.g., openai, anthropic) |
| Timestamp | When the interaction happened |
| Token usage | Input, output, and total tokens consumed |
| Reasoning tokens | Tokens used for extended thinking (e.g., Claude) |
| Content blocks | Structured content including tool use, thinking, and images |
| User ID | User identifier for per-user analytics |
| Environment | Deployment environment (development, staging, production) |
| Prompt tracking | Prompt template ID, name, and version |
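A single event covering every field in the table might look like the following. All JSON key names and values here are illustrative assumptions, not the documented schema.

```python
# Hypothetical fully-populated event; key names are assumptions.
event = {
    "conversation_id": "conv_8f2a",        # groups related messages
    "user_message": "Summarize this email thread.",
    "assistant_response": "Here is a summary: ...",
    "model": "claude-sonnet-4",
    "provider": "anthropic",
    "timestamp": "2024-01-15T10:30:00Z",
    "token_usage": {"input": 820, "output": 214, "total": 1034},
    "reasoning_tokens": 96,                # extended thinking tokens
    "content_blocks": [
        {"type": "thinking", "text": "..."},
        {"type": "text", "text": "Here is a summary: ..."},
    ],
    "user_id": "user_42",                  # enables per-user analytics
    "environment": "production",
    "prompt": {"id": "pmt_7", "name": "email-summary", "version": 3},
}
```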
## Endpoints

All ingestion endpoints are hosted at https://moda-ingest.modas.workers.dev.
| Endpoint | Use Case |
|---|---|
| `POST /v1/traces` | Used automatically by the Moda SDK |
| `POST /v1/ingest` | Direct API integrations (conversations across chat, email, voice, and standard completions) |
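A Direct API call against `/v1/ingest` can be built with the standard library alone. The `{"events": [...]}` body shape and the Bearer-token `Authorization` header are assumptions for illustration.

```python
import json
import urllib.request

BASE_URL = "https://moda-ingest.modas.workers.dev"

def build_ingest_request(events, api_key):
    """Build a POST to /v1/ingest carrying a batch of events.
    Body shape and auth header scheme are assumed conventions."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/ingest",
        data=json.dumps({"events": events}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_ingest_request([{"user_message": "hi"}], "your-moda-api-key")
# urllib.request.urlopen(req)  # uncomment to actually send
```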
## Batch Limits

These limits apply across all ingestion endpoints:

| Limit | Value |
|---|---|
| Max events per request | 1,000 |
| Max message size | 100 KB |
| Max request size | 5 MB |
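When sending large backfills, you can split events into batches that stay under the limits above. This helper is a sketch (the function name and batching strategy are ours, not part of the API); it enforces the 1,000-event and 5 MB request limits, while the 100 KB per-message limit would need a separate per-event check.

```python
import json

MAX_EVENTS_PER_REQUEST = 1_000
MAX_REQUEST_BYTES = 5 * 1024 * 1024   # 5 MB request limit
MAX_MESSAGE_BYTES = 100 * 1024        # 100 KB per-message limit (check separately)

def chunk_events(events,
                 max_events=MAX_EVENTS_PER_REQUEST,
                 max_bytes=MAX_REQUEST_BYTES):
    """Split events into batches that respect both the per-request
    event count and the total request size (measured as JSON)."""
    batches, current, current_bytes = [], [], 0
    for ev in events:
        size = len(json.dumps(ev).encode("utf-8"))
        # Start a new batch if adding this event would exceed a limit.
        if current and (len(current) >= max_events
                        or current_bytes + size > max_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(ev)
        current_bytes += size
    if current:
        batches.append(current)
    return batches
```

Each resulting batch can then be sent as one request to the ingestion API.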