
Python SDK

The ninetrix Python SDK lets you build AI agents programmatically. Define tools with @Tool, run agents with a single method call, compose workflows with checkpointing, and orchestrate teams with LLM-based routing.

Two ways to build agents
The CLI (agentfile.yaml + ninetrix build) is declarative — great for DevOps and deployment. The SDK (from ninetrix import Agent) is programmatic — great for embedding agents in your Python apps, notebooks, and custom pipelines. They share the same runtime.
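For comparison, a declarative agent equivalent to the SDK quick start below might look like this in agentfile.yaml. The field names here are illustrative, not a schema reference — see the CLI docs for the actual format:

```yaml
# agentfile.yaml — illustrative sketch, not the authoritative schema
name: assistant
provider: anthropic
model: claude-sonnet-4-6
instructions: You are a helpful assistant. Use tools when needed.
tools:
  - get_weather
```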

Installation

Bash
pip install ninetrix-sdk

# Optional extras
pip install "ninetrix-sdk[serve]"      # FastAPI server for agents
pip install "ninetrix-sdk[otel]"       # OpenTelemetry integration
pip install "ninetrix-sdk[providers]"  # All LLM provider adapters

Quick start

main.py
from ninetrix import Agent, Tool

@Tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

agent = Agent(
    name="assistant",
    provider="anthropic",
    model="claude-sonnet-4-6",
    tools=[get_weather],
    instructions="You are a helpful assistant. Use tools when needed.",
)

result = agent.run("What's the weather in San Francisco?")
print(result.output)       # "It's 72°F and sunny in San Francisco!"
print(result.cost_usd)     # 0.003
print(result.tokens_used)  # 247
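Broadly, a decorator like @Tool introspects the function it wraps — name, docstring, and parameter annotations — so the model knows what the tool does and how to call it. A minimal, self-contained sketch of that pattern (this is not ninetrix's actual implementation; `tool` and the `spec` attribute are illustrative names):

```python
import inspect
from typing import Any, Callable

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Attach a tool spec (name, description, parameters) to a function."""
    sig = inspect.signature(fn)
    fn.spec = {
        "name": fn.__name__,
        # The docstring becomes the tool description shown to the model.
        "description": (fn.__doc__ or "").strip(),
        # Parameter annotations become the declared parameter types.
        "parameters": {
            name: getattr(p.annotation, "__name__", str(p.annotation))
            for name, p in sig.parameters.items()
        },
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

print(get_weather.spec["name"])        # get_weather
print(get_weather.spec["parameters"])  # {'city': 'str'}
```

The decorated function stays directly callable; the spec rides along as metadata for the runtime to serialize.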

The Ninetrix factory

Use the Ninetrix factory to share defaults (provider, checkpointer, budget) across multiple agents:

Python
from ninetrix import Ninetrix, PostgresCheckpointer

ntx = Ninetrix(
    provider="anthropic",
    model="claude-sonnet-4-6",
    checkpointer=PostgresCheckpointer("postgresql://localhost/agents"),
)

researcher = ntx.agent(name="researcher", instructions="Find information.")
writer = ntx.agent(name="writer", instructions="Write clear prose.")
team = ntx.team(name="content-team", agents=[researcher, writer])
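The factory pattern here is ordinary default merging: the factory stores shared settings once, and each .agent() call layers per-agent overrides on top. A standalone sketch of the idea, using hypothetical names (`Factory`, `AgentConfig`) rather than ninetrix internals:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    name: str
    provider: str
    model: str
    instructions: str = ""

class Factory:
    """Holds shared defaults and stamps them onto every agent it creates."""
    def __init__(self, **defaults):
        self.defaults = defaults

    def agent(self, **overrides) -> AgentConfig:
        # Per-agent settings win over factory-wide defaults.
        return AgentConfig(**{**self.defaults, **overrides})

ntx = Factory(provider="anthropic", model="claude-sonnet-4-6")
researcher = ntx.agent(name="researcher", instructions="Find information.")
print(researcher.provider)  # anthropic
```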

What's in the SDK

Module          Description                                     Docs
Agent           Run agents with .run(), .arun(), .stream()      Agent
@Tool           Define tools with type hints                    @Tool
@Workflow       Compose durable multi-step workflows            Workflows
Team            LLM-routed multi-agent orchestration            Team
Checkpointer    Persistent memory and crash recovery            Persistence
output_type=    Structured output with Pydantic models          Structured Output
enable_debug()  Observability, OpenTelemetry, event streaming   Observability
Types           Type annotations and JSON Schema mapping        Type Support
Testing         MockTool, AgentSandbox, registry utilities      Testing
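The Types row covers mapping Python annotations to JSON Schema for tool parameters. A runnable sketch of how such a mapping typically works (the table of types and the `params_schema` helper are assumptions for illustration; the real ninetrix mapping may differ):

```python
import inspect

# Assumed annotation-to-JSON-Schema type table for illustration.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def params_schema(fn) -> dict:
    """Build a JSON-Schema-style object describing a function's parameters."""
    props = {}
    required = []
    for name, p in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(p.annotation, "string")}
        # Parameters without a default value are required.
        if p.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "F") -> str:
    """Get the current weather for a city."""
    return f"72°{units} and sunny in {city}"

print(params_schema(get_weather))
```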