What is the Moda Gateway?

The Moda Gateway is a proxy that sits between your application and LLM providers. Instead of making direct calls to OpenAI, Anthropic, or AWS Bedrock, you send requests to Moda, and Moda routes each one to the provider you specify.

Benefits

Single endpoint

One API endpoint for all providers. No need to manage multiple SDKs or endpoints.

Provider flexibility

Switch between models and providers by changing a single field in your request.

Automatic logging

Every request and response is logged for analysis and debugging.

Conversation tracking

Multi-turn conversations are automatically grouped together.

How it works

Your App  -->  Moda Gateway  -->  OpenAI / Anthropic / Bedrock
                    |
                    v
              Logs & Analytics
  1. Your application sends a request to the Moda endpoint
  2. Moda reads the model@@provider field to determine where to route it
  3. Moda forwards the request to the appropriate provider
  4. The response is streamed back to your application
  5. Both request and response are logged for observability
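Step 2 above can be sketched in a few lines. The helper name `parse_model_field` is hypothetical, not part of the Moda API; it just illustrates how a `model@@provider` value splits into a routing decision:

```python
def parse_model_field(model: str) -> tuple[str, str]:
    """Split a "model@@provider" value into (model, provider)."""
    model_name, _, provider = model.partition("@@")
    return model_name, provider

# "gpt-4o@@openai" means: send a gpt-4o request to OpenAI
print(parse_model_field("gpt-4o@@openai"))  # → ('gpt-4o', 'openai')
```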

Request format

Moda uses a format similar to OpenAI’s Chat Completions API:
{
  "model": "gpt-4o@@openai",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
}
The key difference is the model field, which includes the provider name after @@.
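Because the provider lives inside the model field, switching providers is a one-field change. Below is a minimal sketch of building and sending such a request; the endpoint URL is a placeholder and the `build_request` helper is illustrative, not part of any Moda SDK:

```python
import json
import urllib.request

MODA_URL = "https://moda.example.com/v1/chat/completions"  # placeholder endpoint

def build_request(model: str, provider: str, messages: list[dict]) -> dict:
    # The model field carries both model and provider, joined by "@@"
    return {"model": f"{model}@@{provider}", "messages": messages}

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# The same body shape works for any provider; only the model field changes
openai_payload = build_request("gpt-4o", "openai", messages)
anthropic_payload = build_request("claude-3-5-sonnet", "anthropic", messages)

# Sending it is an ordinary HTTPS POST (authentication headers omitted)
req = urllib.request.Request(
    MODA_URL,
    data=json.dumps(openai_payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a real endpoint and credentials
```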

Next steps