SDK Documentation

Ambient, context-aware observability for agentic systems. Zero boilerplate.

Installation

pip install magenta

Quick Start

There are two ways to use the SDK, based on what you want to observe:

1. Track LLM Calls

“I want to see cost, tokens, and latency for my LLM usage.”

import magenta
from openai import OpenAI

magenta.init(api_key="...", project_id="...", user_id="...")
magenta.track_llm_calls()  # Enable automatic tracking

client = OpenAI()
response = client.chat.completions.create(...)  # Auto-tracked!

2. Track Agents

“I want to understand what each agent did, end-to-end.”

import magenta
from openai import OpenAI

magenta.init(api_key="...", project_id="...", user_id="...")
magenta.track_llm_calls()

client = OpenAI()

with magenta.run("Research Task"):
    with magenta.agent("Researcher", "gpt-4"):
        magenta.event("input", {"query": "AI trends"})

        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "AI trends 2024"}],
        )

        magenta.event("output", {"result": response.choices[0].message.content})

API Reference

magenta.init()

Initialize the SDK. Call once at startup.

magenta.init(
    api_key: str,
    project_id: str,
    user_id: str,
    endpoint: str = "localhost:3000",  # optional
    capture_llm_response: bool = False,  # optional
)

magenta.track_llm_calls()

Enable automatic LLM call tracking.

magenta.track_llm_calls()  # Enable globally
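Under the hood, automatic tracking of this kind typically works by wrapping the client library's request method so every call is recorded transparently. Here is a stdlib-only sketch of that idea; `FakeClient`, `track_calls`, and `recorded` are illustrative names, not part of the SDK:

```python
import functools

class FakeClient:
    """Stand-in for an LLM client; not a real provider SDK."""
    def create(self, prompt):
        return f"response to {prompt!r}"

recorded = []

def track_calls(cls, method_name):
    """Replace `method_name` on `cls` with a wrapper that records each call."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        result = original(self, *args, **kwargs)
        recorded.append({"method": method_name, "args": args})
        return result

    setattr(cls, method_name, wrapper)

track_calls(FakeClient, "create")
out = FakeClient().create("hello")  # recorded transparently; caller code is unchanged
```

The key property is that calling code does not change at all: instrumentation is applied once, at patch time.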

magenta.run(name, objective?)

Execute code in a run context.

with magenta.run("Task Name", objective="optional goal"):
    # All agent calls inside are traced
    pass

magenta.agent(name, model)

Execute code in an agent trace.

with magenta.agent("Coder", "gpt-4"):
    magenta.event("input", {"task": "Write code"})
    # ... do work ...
    magenta.event("output", {"code": "..."})

Nested agents are linked automatically.

magenta.event(type, data)

Log an event (context auto-detected).

magenta.event("input", {"query": "..."})
magenta.event("output", {"result": "..."})
magenta.event("attempt", {"tool": "search"})
magenta.event("state", {"status": "processing"})
magenta.event("control", {"decision": "retry"})

See Event Types under Semantic Attributes for the full list of valid values.

magenta.set_outcome(text)

Set the outcome of the current run.

magenta.set_outcome("Completed successfully with 3 results")

Multi-Agent Example

with magenta.run("Write Article"):
    with magenta.agent("Orchestrator", "gpt-4"):
        magenta.event("input", {"task": "Write about AI"})

        # Nested agents - auto-linked
        with magenta.agent("Researcher", "gpt-4"):
            magenta.event("input", {"query": "AI research"})
            client.chat.completions.create(...)

        with magenta.agent("Writer", "gpt-4"):
            magenta.event("input", {"context": "..."})
            client.chat.completions.create(...)

# Trace hierarchy:
# run: Write Article
# └── agent: Orchestrator
#     ├── agent: Researcher
#     └── agent: Writer

How It Works

  • contextvars: Context flows through async calls automatically
  • Auto-patching: track_llm_calls() patches supported LLM libraries
  • No ID Passing: Parent-child relationships detected from call stack

The result: roughly 60% less instrumentation code than manual span management, and zero manual ID bookkeeping.
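The contextvars mechanism can be illustrated with a stdlib-only sketch. The `current_agent` variable and `enter_agent` helper below are illustrative, not SDK internals:

```python
import contextvars
from contextlib import contextmanager

# A context variable holds the currently active agent span.
current_agent = contextvars.ContextVar("current_agent", default=None)

@contextmanager
def enter_agent(name):
    """Push `name` as the active agent; restore the parent on exit."""
    parent = current_agent.get()
    token = current_agent.set({"name": name, "parent": parent})
    try:
        yield
    finally:
        current_agent.reset(token)

with enter_agent("Orchestrator"):
    with enter_agent("Researcher"):
        span = current_agent.get()
        # The child sees its parent without any IDs being passed.
        lineage = (span["name"], span["parent"]["name"])

print(lineage)  # ('Researcher', 'Orchestrator')
```

Because contextvars propagate across `await` boundaries, the same parent-child linking works in async code without threading IDs through function signatures.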

Privacy & Data

  • Prompts/outputs are NOT logged by default - Only explicit magenta.event() calls capture data
  • LLM response capture is opt-in - Use capture_llm_response=True in init() to enable
  • No environment variables are logged - Only explicitly passed data is transmitted

Security

Reporting a Vulnerability

  • Do not open a public GitHub issue for security vulnerabilities
  • Email the security team with details of the vulnerability
  • Include steps to reproduce the issue if possible
  • Allow reasonable time for a fix before public disclosure

Security Considerations

Data Privacy:

  • No prompts/outputs logged by default - LLM response capture is opt-in via capture_llm_response=True
  • No environment variables logged - Only explicitly passed data is transmitted
  • Sensitive data is user-controlled - Use magenta.event() carefully to avoid logging secrets

Network Security:

  • All data is sent over HTTPS in production
  • API keys are passed via Authorization header, not logged
  • Failed requests do not expose sensitive data in error messages

Best Practices:

  • Never log secrets - Don't pass API keys, passwords, or tokens to magenta.event()
  • Review logged data - Be aware of what data your events contain
  • Use environment variables - Store API keys in environment variables, not code
  • Secure your endpoint - Ensure your backend endpoint uses HTTPS
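As a concrete illustration of the first three practices, keys can be read from the environment and event payloads screened before they are logged. `scrub` and `SENSITIVE_KEYS` are hypothetical helpers for the sketch, not part of the SDK:

```python
import os

# Read the key from the environment rather than hard-coding it.
# The variable name MAGENTA_API_KEY is an assumption for this example.
api_key = os.environ.get("MAGENTA_API_KEY", "")

SENSITIVE_KEYS = {"api_key", "password", "token", "secret"}

def scrub(payload: dict) -> dict:
    """Return a copy of `payload` with likely-sensitive fields redacted."""
    return {
        k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
        for k, v in payload.items()
    }

safe = scrub({"query": "AI trends", "token": "sk-123"})
```

A scrubber like this is a safety net, not a substitute for reviewing what your events contain.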

Semantic Attributes

The SDK uses OpenTelemetry-compatible semantic attributes with the magenta. prefix.

Resource Attributes

Set on the OpenTelemetry Resource and apply to all spans:

Attribute              Type            Description
magenta.project_id     string (UUID)   Project this telemetry belongs to
magenta.user_id        string (UUID)   User who initiated the run
magenta.sdk.version    string          SDK version (e.g., "0.1.0")
magenta.sdk.language   string          "python" or "typescript"

Run Attributes

Set on root spans that represent execution runs:

Attribute                    Type            Description
magenta.run.id               string (UUID)   Unique identifier for this run
magenta.run.name             string          Human-readable name for the run
magenta.run.objective        string          Goal or objective of the run
magenta.run.status           string          "running", "completed", or "failed"
magenta.run.actual_outcome   string          Actual result (set on completion)

Agent Attributes

Set on agent trace spans:

Attribute                      Type            Description
magenta.agent.id               string (UUID)   Unique identifier for the agent
magenta.agent.name             string          Human-readable agent name
magenta.model.id               string          Model identifier (e.g., "gpt-4")
magenta.trace.total_cost_usd   float           Accumulated cost in USD

Event Types

Valid values for magenta.event.type:

Type      Description
input     Initial input to an agent
output    Final output from an agent
attempt   An individual action or LLM call
state     A state change or observation
control   A flow control decision
human     Human-in-the-loop interaction
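A small sketch of validating event types against this table before building span attributes. `event_attributes` is a hypothetical helper, not SDK API:

```python
# The valid values from the Event Types table above.
VALID_EVENT_TYPES = {"input", "output", "attempt", "state", "control", "human"}

def event_attributes(event_type: str, data: dict) -> dict:
    """Build span attributes for an event, rejecting unknown types."""
    if event_type not in VALID_EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type!r}")
    return {"magenta.event.type": event_type, **data}

attrs = event_attributes("attempt", {"tool": "search"})
```

Validating early keeps malformed event types out of the telemetry pipeline, where they are harder to diagnose.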

Code of Conduct

We are committed to providing a welcoming and inclusive environment for everyone who wants to contribute to or use this project.

Our Standards

Positive behaviors:

  • Being respectful and considerate in communications
  • Providing constructive feedback
  • Focusing on what's best for the community
  • Showing empathy towards others

Unacceptable behaviors:

  • Harassment, discrimination, or offensive comments
  • Personal attacks or trolling
  • Publishing others' private information
  • Other conduct inappropriate for a professional setting

Enforcement

Project maintainers may remove, edit, or reject comments, commits, code, and other contributions that violate this Code of Conduct. Repeated violations may result in a ban from the project.

This is a v0.x release. The API may change in future versions.