Coding with a Tinker model using OpenCode

Prerequisites

- OpenCode installed
- A Tinker API key
- A sampler checkpoint from a Tinker training run (or a base model name; see below)

OpenCode can talk to any OpenAI-compatible endpoint. Tinker exposes one, so you can chat or code with a fine-tuned checkpoint directly from your terminal: no export or download needed.

Step 1: Get your checkpoint path

The checkpoint must be a sampler checkpoint (saved via save_weights_for_sampler, not a raw training checkpoint). After saving, you get a path like:

tinker://040ac13a-4dea-5e21-bee5-2d1e581ae9d4:train:0/sampler_weights/000043

This is the model ID you will use in the config.
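Sampler paths are easy to confuse with raw training-checkpoint paths. A quick sanity check you can run locally (a sketch; the regex only encodes the path shape shown above, not an official format guarantee):

```python
import re

# Matches the sampler-checkpoint shape shown above:
# tinker://<run-id>/sampler_weights/<step>
SAMPLER_PATH = re.compile(r"^tinker://[^/]+/sampler_weights/\d+$")

def is_sampler_checkpoint(path: str) -> bool:
    """Return True if `path` looks like a sampler checkpoint."""
    return bool(SAMPLER_PATH.match(path))

print(is_sampler_checkpoint(
    "tinker://040ac13a-4dea-5e21-bee5-2d1e581ae9d4:train:0/sampler_weights/000043"
))  # True
print(is_sampler_checkpoint("tinker://some-run/weights/000043"))  # False
```

If the check fails, re-save the checkpoint with save_weights_for_sampler before wiring it into OpenCode.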

Step 2: Add a provider to opencode.json

Create or edit opencode.json in your project root:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "tinker": {
      "env": ["TINKER_API_KEY"],
      "npm": "@ai-sdk/openai-compatible",
      "models": {
        "tinker://YOUR_CHECKPOINT_PATH": {
          "name": "My Fine-Tuned Model",
          "attachment": false,
          "reasoning": true,
          "temperature": true,
          "tool_call": true,
          "cost": { "input": 0, "output": 0 },
          "limit": {
            "context": 262144,
            "output": 8192
          },
          "options": {
            "separate_reasoning": true
          }
        }
      },
      "options": {
        "baseURL": "https://tinker.thinkingmachines.dev/services/tinker-prod/oai/api/v1",
        "apiKey": "{env:TINKER_API_KEY}"
      }
    }
  }
}

Replace tinker://YOUR_CHECKPOINT_PATH with the actual checkpoint path from Step 1.
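A typo in the config usually surfaces only as a cryptic failure inside OpenCode, so it can help to lint the file yourself first. A minimal sketch (the checks mirror the field descriptions below; the inline JSON stands in for reading opencode.json from disk, and the checkpoint path is the placeholder from Step 2):

```python
import json

# Inline copy of the provider block; in practice, read opencode.json instead.
CONFIG = json.loads("""
{
  "provider": {
    "tinker": {
      "env": ["TINKER_API_KEY"],
      "npm": "@ai-sdk/openai-compatible",
      "models": {
        "tinker://YOUR_CHECKPOINT_PATH": {"name": "My Fine-Tuned Model"}
      },
      "options": {
        "baseURL": "https://tinker.thinkingmachines.dev/services/tinker-prod/oai/api/v1",
        "apiKey": "{env:TINKER_API_KEY}"
      }
    }
  }
}
""")

def lint_provider(config: dict) -> list[str]:
    """Return a list of problems found in the tinker provider block."""
    problems = []
    tinker = config.get("provider", {}).get("tinker", {})
    if tinker.get("npm") != "@ai-sdk/openai-compatible":
        problems.append("npm must be @ai-sdk/openai-compatible")
    if not tinker.get("options", {}).get("baseURL", "").startswith("https://"):
        problems.append("options.baseURL missing or not https")
    for model_id in tinker.get("models", {}):
        if not model_id.startswith("tinker://"):
            problems.append(f"model key {model_id!r} is not a tinker:// path")
    return problems

print(lint_provider(CONFIG))  # [] when the block above is well-formed
```

An empty list means the structural checks pass; it does not verify that the checkpoint actually exists on Tinker's side.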

Config fields

| Field | Purpose |
| --- | --- |
| npm | Must be @ai-sdk/openai-compatible; tells OpenCode how to talk to the API |
| env | Lists required environment variables; OpenCode warns if they're missing |
| options.baseURL | Tinker's OpenAI-compatible endpoint |
| options.apiKey | Supports {env:VAR} substitution; never hardcode keys |
| models.<id> | The key must match the checkpoint path exactly |
| limit.context | Max input tokens (32768 for many Tinker models; the Kimi example here uses 262144) |
| limit.output | Max output tokens per response |
| options.separate_reasoning | Set to true if the model uses thinking tokens (e.g. the Kimi-K2 family) |

Step 3: Export your API key

export TINKER_API_KEY="your-api-key-here"

Step 4: Launch OpenCode

opencode

Select your model from the model picker (tinker/tinker://...). You're now chatting with your fine-tuned checkpoint.

Using a base model (no fine-tuning)

You can also point OpenCode at a base model served by Tinker's sampler; use the base model's name as the key:

"models": {
  "moonshotai/Kimi-K2.5": {
    "name": "Kimi K2.5",
    "reasoning": true,
    "limit": { "context": 262144, "output": 8192 },
    "options": { "separate_reasoning": true }
  }
}
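For context, this entry replaces the models block in the Step 2 config; everything else (npm, env, options) stays the same. A complete file would look like:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "tinker": {
      "env": ["TINKER_API_KEY"],
      "npm": "@ai-sdk/openai-compatible",
      "models": {
        "moonshotai/Kimi-K2.5": {
          "name": "Kimi K2.5",
          "reasoning": true,
          "limit": { "context": 262144, "output": 8192 },
          "options": { "separate_reasoning": true }
        }
      },
      "options": {
        "baseURL": "https://tinker.thinkingmachines.dev/services/tinker-prod/oai/api/v1",
        "apiKey": "{env:TINKER_API_KEY}"
      }
    }
  }
}
```

You can list both a fine-tuned checkpoint and a base model under models and switch between them in the model picker.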

Next steps