
Quick Start

  1. Create a new project

    npx create-runcontext my-data
    cd my-data
  2. Run the setup wizard

    context setup

    A 5-step wizard opens in your browser:

    • Product name — name your data product (e.g. sales-analytics)
    • Owner — name, team, and email
    • Sensitivity — public, internal, confidential, or restricted
    • Database — auto-detected from environment, or enter a connection URL
    • Documentation — optionally upload SQL files, data dictionaries, or markdown
  3. Watch the pipeline run

    The wizard runs the build pipeline in real time:

    • Introspect schema → scaffold Bronze metadata
    • Auto-enrich toward Silver (glossary, lineage, sample values)
    • Lint and verify
  4. Curate to Gold

    Open the visual studio to add semantic roles, aggregation rules, guardrails, and golden queries:

    context dev --studio

    Or tell your AI agent: “Read context/AGENT_INSTRUCTIONS.md and get this project to Gold.”

  5. Serve to AI agents

    context serve --stdio

    Your MCP server is live. Add it to your AI tool’s config:

    {
      "mcpServers": {
        "contextkit": {
          "command": "npx",
          "args": ["@runcontext/cli", "serve", "--stdio"]
        }
      }
    }
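If you prefer to script this step, the sketch below merges the server entry into an existing JSON config using only Python's standard library. It is not part of the official CLI; the config file's location varies by tool, so the path is taken as a parameter rather than assumed.

```python
import json
from pathlib import Path

def add_contextkit_server(config_path: str) -> dict:
    """Merge the contextkit MCP server entry into a JSON config file.

    Illustrative helper, not part of @runcontext/cli. The config path
    differs per AI tool, so the caller supplies it.
    """
    path = Path(config_path)
    # Start from the existing config if present, otherwise an empty one.
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["contextkit"] = {
        "command": "npx",
        "args": ["@runcontext/cli", "serve", "--stdio"],
    }
    path.write_text(json.dumps(config, indent=2))
    return config
```

Because `setdefault` preserves any servers already registered, running this against a populated config only adds (or updates) the `contextkit` entry.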

In Claude Code, Cursor, or any agentic coding platform:

“Install @runcontext/cli and build a semantic layer for my database.”

The agent runs context setup, introspects your database, and starts asking you questions about metrics, ownership, and business rules. It reads the generated context/AGENT_INSTRUCTIONS.md and knows exactly what to do.

# Install
npm install -g @runcontext/cli
# Scaffold from a database
context introspect --db postgres://user:pass@host/db --select
# Auto-enrich to Silver
context enrich --target silver --apply
# Check tier progress
context tier
# Serve via MCP
context serve --stdio

Whether you used the wizard or the CLI, the setup pipeline performed these steps:

  1. Connected to your database (PostgreSQL, DuckDB, MySQL, SQL Server, SQLite, Snowflake, BigQuery, ClickHouse, or Databricks)
  2. Introspected every table — generated OSI YAML with columns, types, primary keys
  3. Scaffolded governance files — ownership, security classification, grain statements
  4. Enriched toward Silver — added lineage, sample values, glossary links, trust status
  5. Linted the result — flagged anything that still needs attention
  6. Generated AGENT_INSTRUCTIONS.md — a complete guide for any AI agent to continue curating

Your metadata now lives in YAML files under context/. Edit them in the visual studio, let AI agents curate them, or modify the YAML directly.
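As an illustration of what such a file might contain — the actual OSI field names are defined by the tool, so treat every key below as hypothetical — a curated table file could look roughly like:

```yaml
# context/tables/orders.yaml — illustrative sketch only;
# real OSI key names may differ from those shown here.
name: orders
owner:
  team: analytics
  email: data@example.com
sensitivity: internal
grain: one row per customer order
columns:
  - name: order_id
    type: integer
    primary_key: true
  - name: total_amount
    type: decimal
    semantic_role: measure        # Gold-tier curation (hypothetical key)
    sample_values: [19.99, 250.00]
golden_queries:
  - question: "What was total revenue last month?"
    sql: SELECT SUM(total_amount) FROM orders WHERE ...
```

The scaffolded Bronze fields (columns, types, primary keys, ownership, sensitivity, grain) come from introspection; Silver and Gold fields such as sample values, semantic roles, and golden queries are filled in by enrichment and curation.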

  • Check your tier: context tier to see Bronze/Silver/Gold scores
  • Open the studio: context dev --studio to curate in the browser
  • Export a blueprint: context blueprint for portable OSI YAML
  • Generate docs: context site for a browsable documentation site
  • Read the CLI reference for all 16 commands