Overview

Moda automatically tracks your OpenAI API calls. Chat completions, streaming, function calling, and tool use are all captured with no additional code required.

Setup

pip install moda-ai openai

import moda
from openai import OpenAI

moda.init("YOUR_MODA_API_KEY")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

moda.flush()

Supported Features

Feature                     | Captured
Chat completions            | Yes
Streaming                   | Yes
Function calling / tool use | Yes (captured as content blocks)
Embeddings                  | Yes
Token usage                 | Yes (input, output, total)
Model name                  | Yes (request and response)
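Token usage arrives on each response's `usage` object (the standard OpenAI fields are `prompt_tokens`, `completion_tokens`, and `total_tokens`). If you want your own running total alongside what Moda records, a minimal sketch (the `add_usage` helper is hypothetical, not part of either SDK):

```python
def add_usage(total, usage):
    """Accumulate an OpenAI-style usage object into a running dict.
    `usage` may be a response.usage object or anything with the same attributes."""
    for field in ("prompt_tokens", "completion_tokens", "total_tokens"):
        total[field] = total.get(field, 0) + getattr(usage, field, 0)
    return total
```

Call it after each completion, e.g. `add_usage(totals, response.usage)`.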

Streaming

Streaming responses are automatically tracked. The SDK captures the complete response after the stream finishes:
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Count to 5"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
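Conceptually, reassembling the streamed text from the chunk deltas looks like the loop above; a minimal sketch of that accumulation (the `collect_stream_text` helper is hypothetical, illustrating the pattern rather than the SDK's internals):

```python
def collect_stream_text(chunks):
    """Join the text deltas from a chat-completions stream into the
    full response text. Chunks with a None delta are skipped."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:
            parts.append(delta.content)
    return "".join(parts)
```

This is also why the whole stream must be consumed: any chunks you never iterate over never contribute their deltas.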

Tool Use

Tool calls and function calling are automatically captured with full input/output details:
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                }
            }
        }
    }]
)
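When the model decides to call a tool, the assistant message carries `tool_calls` entries (each with an `id` and a `function` holding the name and JSON-encoded arguments) rather than plain text. A minimal sketch of executing those calls and building the follow-up `role: "tool"` messages (the `handle_tool_calls` helper and `tool_registry` mapping are hypothetical; the message shapes follow the standard OpenAI API):

```python
import json

def handle_tool_calls(message, tool_registry):
    """Run each tool call in an assistant message against a dict of
    local functions, returning the tool-result messages to send back."""
    results = []
    for call in message.tool_calls or []:
        fn = tool_registry[call.function.name]
        args = json.loads(call.function.arguments)
        results.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(fn(**args)),
        })
    return results
```

The returned messages are appended to the conversation and sent in a second `chat.completions.create` call; Moda would capture both rounds.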

Troubleshooting

Data not appearing?
  • Ensure moda.init() is called before creating the OpenAI client
  • Call moda.flush() (Python) or await Moda.flush() (Node.js) before exit
  • Verify your API key is correct
Streaming responses incomplete?
  • The SDK captures the full response after the stream ends. Ensure you consume the entire stream.
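One way to make the flush-before-exit rule hard to forget is to register it with `atexit` once at startup; a minimal sketch (the `register_flush` wrapper is hypothetical — with the Moda SDK you would pass `moda.flush`):

```python
import atexit

def register_flush(flush_fn):
    """Register a flush callback to run at normal interpreter exit.
    Returns the callback so it can still be invoked manually."""
    atexit.register(flush_fn)
    return flush_fn
```

Note that `atexit` handlers run only on normal interpreter shutdown, not on a hard kill, so long-running services should still flush periodically.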
For full SDK documentation, see the Python SDK or Node.js SDK guides.