What is ingestion?

Ingestion allows you to send LLM usage data to Moda from your existing applications. This is useful if you:
  • Already have an application making direct calls to LLM providers
  • Want to add observability without changing how you call provider APIs
  • Use OpenLLMetry or other OpenTelemetry-based instrumentation

How it works

Your App  -->  LLM Provider (OpenAI, etc.)
    |
    |  (telemetry)
    v
Moda Ingest  -->  Analytics & Dashboard
  1. Your app makes calls to LLM providers as normal
  2. Telemetry data is sent to Moda in the background (see the sketch after these steps)
  3. Moda processes and stores the data
  4. View insights in the Moda dashboard
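If your app already emits OpenTelemetry traces (for example via OpenLLMetry), routing them to Moda is typically a matter of pointing the exporter at an ingest endpoint. The sketch below uses the standard OpenTelemetry Python SDK; the endpoint URL and authorization header are placeholders, not Moda's documented configuration.

```python
# Minimal sketch: route OpenTelemetry traces to an ingest endpoint.
# The endpoint URL and Authorization header below are placeholders,
# not Moda's documented configuration.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://ingest.moda.example/v1/traces",   # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},   # placeholder auth
)

provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(BatchSpanProcessor(exporter))  # exports on a background thread
trace.set_tracer_provider(provider)

# Your existing LLM calls continue unchanged; instrumentation libraries
# such as OpenLLMetry attach their spans to this provider automatically.
```

Because the BatchSpanProcessor exports asynchronously, telemetry leaves the process in the background and the latency of your LLM calls is unaffected.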

What data is captured?

Field                 Description
Conversation ID       Groups related messages together
User message          What the user asked
Assistant response    What the AI replied
Model                 Which model was used
Timestamp             When the interaction happened
Token usage           How many tokens were consumed
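Put together, a single captured interaction corresponds roughly to a record like the one below. The field names are illustrative, derived from the table above rather than Moda's exact schema.

```python
# Illustrative record built from the fields above; the key names are
# hypothetical and may not match Moda's actual schema.
captured_interaction = {
    "conversation_id": "conv_12345",   # groups related messages together
    "user_message": "How do I reset my password?",
    "assistant_response": "You can reset it from the account settings page.",
    "model": "gpt-4o",
    "timestamp": "2024-01-15T10:32:00Z",
    "token_usage": {"prompt_tokens": 42, "completion_tokens": 58},
}
```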

Privacy

Moda stores conversation content to provide analytics. Make sure this aligns with your privacy policy and data handling requirements.
Do not send sensitive personal information (like passwords or credit card numbers) through LLM calls that are being logged.
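One way to keep obviously sensitive values out of logged conversations is to scrub them before the prompt is sent. The sketch below is a minimal, assumed approach using a simple pattern match; it is not a complete PII-redaction solution.

```python
import re

# Minimal redaction sketch: mask credit-card-like digit runs before the
# text reaches an LLM call that is being logged. Adapt the patterns to
# your own data; this is illustrative, not exhaustive.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    return CARD_PATTERN.sub("[REDACTED]", text)

print(redact("My card is 4111 1111 1111 1111, please update my billing."))
# -> My card is [REDACTED], please update my billing.
```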