
Quick Start

Build and run your first AI agent in 5 minutes.

Prerequisites

  • Docker Desktop must be running
  • Python 3.11+
  • An API key for your chosen LLM provider (Anthropic, OpenAI, Google, etc.)
1. Install the CLI

Bash
curl -fsSL https://install.ninetrix.io | sh

The install script auto-detects pipx, uv, or pip3 and picks the best option. Or install manually:

Bash
pip install ninetrix
# or:
uv tool install ninetrix

Verify the installation:

Bash
ninetrix --version
2. Start the local stack

ninetrix dev starts PostgreSQL and the API server (including the observability dashboard) in Docker. The MCP gateway and worker are opt-in — pass --mcp to include them.

Bash
ninetrix dev

The dashboard is available at http://localhost:8000/dashboard. Leave this running in the background.
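To start the opt-in MCP gateway and worker along with the core stack, add the `--mcp` flag mentioned above:

```shell
# Core stack (PostgreSQL + API server) plus the opt-in MCP gateway and worker
ninetrix dev --mcp
```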

3. Scaffold a new agent

Bash
ninetrix init --name my-agent --provider anthropic --yes

This creates agentfile.yaml in the current directory. In interactive mode (without --yes), it prompts for your API key and saves it to .env automatically.

Bash
# If you used --yes, create .env manually:
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
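If you scaffolded with a provider other than Anthropic, set that provider's key instead. A sketch of a `.env` file; `ANTHROPIC_API_KEY` comes from the example above, while the other variable names are conventional assumptions — check your provider's documentation:

```
# .env — set the key matching the provider in agentfile.yaml
ANTHROPIC_API_KEY=sk-ant-...    # Anthropic (the scaffolded default)
# OPENAI_API_KEY=sk-...         # OpenAI (assumed conventional name)
# GOOGLE_API_KEY=...            # Google (assumed conventional name)
```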
4. Review agentfile.yaml

The scaffolded file defines a single agent with a web search tool:

agentfile.yaml
agents:
  my-agent:
    metadata:
      role: Research assistant
      goal: Answer questions accurately using web search
      instructions: |
        Search the web to find accurate, up-to-date information.
        Always cite your sources.

    runtime:
      provider: anthropic
      model: claude-sonnet-4-6
      temperature: 0.3

    tools:
      - name: web_search
        source: mcp://brave-search

Need system packages (e.g. git, ffmpeg, chromium) in the container? Add a packages list under the agent:

YAML
agents:
  my-agent:
    packages:
      - git
      - ffmpeg
5. Build and run

Bash
ninetrix build
ninetrix run

Type a message at the prompt. Your agent will search the web and respond. Open the dashboard to see the full trace.

What just happened?

  • ninetrix build rendered your YAML into a Dockerfile + Python runtime and built a Docker image tagged ninetrix/my-agent:latest
  • ninetrix run started the container, connected it to your local API server, and streamed its output to your terminal
  • Every tool call and LLM response was checkpointed to PostgreSQL
  • The dashboard at localhost:8000/dashboard shows the full trace
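Because `ninetrix build` produces a plain Docker image (tagged `ninetrix/my-agent:latest`, per the first bullet above), you can inspect it with ordinary Docker commands:

```shell
# Confirm the built image exists
docker image ls ninetrix/my-agent
```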

Resume a session

Every run has a thread ID. Pass --thread-id to resume exactly where you left off:

Bash
ninetrix run --thread-id my-project
