Then, create a `ServiceClient`. This lets you find out which base models are available for fine-tuning.
```python
import tinker
service_client = tinker.ServiceClient()
print("Available models:")
for item in service_client.get_server_capabilities().supported_models:
    print("- " + item.model_name)
```
You'll see a list of model names:
```
- meta-llama/Llama-3.1-70B
- meta-llama/Llama-3.1-8B
...
- Qwen/Qwen3-235B-A22B-Instruct-2507
- Qwen/Qwen3-30B-A3B-Base
```
We currently support models from the Qwen3 and Llama3 series. We're going to use Qwen3-30B-A3B-Base (a raw pre-trained model) for these examples. See [Available Models in Tinker](/model-lineup) for the full list.
Now we can create the `TrainingClient`:
```python
base_model = "Qwen/Qwen3-30B-A3B-Base"
training_client = service_client.create_lora_training_client(
base_model=base_model
)
```
Note that we've specified a *base model*, which is the model we'll initialize from. In this case, it's a raw pre-trained model, but most of the "base models" in Tinker are fine-tuned for chat/instruction-following. You should check the details of the model you're using in its model card.
## Preparing the training data
Now we can do training updates on the model. This quickstart example won't show best practices for LLM fine-tuning; it's just an API demo. Check out [Rendering](/rendering), [Supervised Fine-tuning](/supervised-learning) and the other Cookbook examples for guidance on how to use Tinker in real applications.
For this example, we'll train a model that translates English text into Pig Latin. The rules for Pig Latin are simple:
- If a word begins with a consonant, move it to the end and add "ay"
- If a word begins with a vowel, just add "way" to the end
Here's an example of the completions we'd like the model to perform, where the prompt ends after `Pig Latin:` and the rest is the model's completion:
```
English: hello world
Pig Latin: ello-hay orld-way
```
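If you want a precise statement of the transformation, here's a small reference implementation of the rules as written (a sketch; it moves the whole leading consonant cluster, so a couple of the handwritten training examples below handle clusters slightly differently):
```python
VOWELS = "aeiou"

def pig_latin_word(word: str) -> str:
    # Vowel-initial words just get "-way" appended
    if word[0] in VOWELS:
        return word + "-way"
    # Consonant-initial words: move the leading consonants to the end and add "ay"
    i = next((i for i, c in enumerate(word) if c in VOWELS), len(word))
    return f"{word[i:]}-{word[:i]}ay"

print(" ".join(pig_latin_word(w) for w in "hello world".split()))  # ello-hay orld-way
```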
Let's create some training examples and convert them to a format expected by Tinker.
```python
# Create some training examples
examples = [
    {"input": "banana split", "output": "anana-bay plit-say"},
    {"input": "quantum physics", "output": "uantum-qay ysics-phay"},
    {"input": "donut shop", "output": "onut-day op-shay"},
    {"input": "pickle jar", "output": "ickle-pay ar-jay"},
    {"input": "space exploration", "output": "ace-spay exploration-way"},
    {"input": "rubber duck", "output": "ubber-ray uck-day"},
    {"input": "coding wizard", "output": "oding-cay izard-way"},
]
# Convert examples into the format expected by the training client
from tinker import types
# Get the tokenizer from the training client
tokenizer = training_client.get_tokenizer()
def process_example(example: dict, tokenizer) -> types.Datum:
    # Format the input with an Input/Output template.
    # For most real use cases, you'll want to use a renderer / chat template
    # (see later docs), but here, we'll keep it simple.
    prompt = f"English: {example['input']}\nPig Latin:"
    prompt_tokens = tokenizer.encode(prompt, add_special_tokens=True)
    prompt_weights = [0] * len(prompt_tokens)
    # Add a space before the output string, and finish with double newline
    completion_tokens = tokenizer.encode(f" {example['output']}\n\n", add_special_tokens=False)
    completion_weights = [1] * len(completion_tokens)
    tokens = prompt_tokens + completion_tokens
    weights = prompt_weights + completion_weights
    input_tokens = tokens[:-1]
    target_tokens = tokens[1:]  # We're predicting the next token, so targets are shifted by one
    weights = weights[1:]
    # A Datum is a single training example for the loss function.
    # It has model_input, which is the input sequence that'll be passed into the LLM,
    # and loss_fn_inputs, which is a dictionary of extra inputs used by the loss function.
    return types.Datum(
        model_input=types.ModelInput.from_ints(tokens=input_tokens),
        loss_fn_inputs=dict(weights=weights, target_tokens=target_tokens)
    )
processed_examples = [process_example(ex, tokenizer) for ex in examples]
# Visualize the first example for debugging purposes
datum0 = processed_examples[0]
print(f"{'Input':<20} {'Target':<20} {'Weight':<10}")
print("-" * 50)
for inp, tgt, wgt in zip(
    datum0.model_input.to_ints(),
    datum0.loss_fn_inputs['target_tokens'].tolist(),
    datum0.loss_fn_inputs['weights'].tolist(),
):
    print(f"{repr(tokenizer.decode([inp])):<20} {repr(tokenizer.decode([tgt])):<20} {wgt:<10}")
```
The visualization looks like this (shown here for a different example, "English: I love tinkering"):
```
Input Target Weight
--------------------------------------------------
'English' ':' 0.0
':' ' I' 0.0
' I' ' love' 0.0
' love' ' tink' 0.0
' tink' 'ering' 0.0
'ering' '\n' 0.0
'\n' 'P' 0.0
'P' 'ig' 0.0
'ig' ' Latin' 0.0
' Latin' ':' 0.0
':' ' I' 1.0
' I' '-way' 1.0
'-way' ' o' 1.0
' o' 've' 1.0
've' '-l' 1.0
'-l' 'ay' 1.0
'ay' ' ink' 1.0
' ink' 'ering' 1.0
'ering' '-t' 1.0
'-t' 'ay' 1.0
'ay' '<|endoftext|>' 1.0
```
## Performing a training update
Now we can use this data to perform a training update. We'll do 6 updates on the same batch of data. (Note that this is not typically a good way to train!)
```python
import numpy as np
for _ in range(6):
    fwdbwd_future = training_client.forward_backward(processed_examples, "cross_entropy")
    optim_future = training_client.optim_step(types.AdamParams(learning_rate=1e-4))
    # Wait for the results
    fwdbwd_result = fwdbwd_future.result()
    optim_result = optim_future.result()
    # fwdbwd_result contains the logprobs of all the tokens we put in. Now we can
    # compute the weighted average log loss per token.
    logprobs = np.concatenate([output['logprobs'].tolist() for output in fwdbwd_result.loss_fn_outputs])
    weights = np.concatenate([example.loss_fn_inputs['weights'].tolist() for example in processed_examples])
    print(f"Loss per token: {-np.dot(logprobs, weights) / weights.sum():.4f}")
```
Note that the `forward_backward` and `optim_step` functions immediately return *futures*, which acknowledge that the task has been queued up by the server. For improved speed, we submitted both operations before waiting for the result by calling `result()` on the futures.
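For comparison, here's what one iteration would look like without pipelining (a sketch using the same calls as above); each `result()` call blocks, so we pay an extra round trip to the server per operation:
```python
# Slower: wait for forward_backward to complete before queuing optim_step
fwdbwd_result = training_client.forward_backward(processed_examples, "cross_entropy").result()
optim_result = training_client.optim_step(types.AdamParams(learning_rate=1e-4)).result()
```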
## Sampling from the model
Now we can test our model by sampling from it. In this case, we'll translate the phrase "coffee break" into Pig Latin.
```python
# First, create a sampling client. We need to transfer weights
sampling_client = training_client.save_weights_and_get_sampling_client(name='pig-latin-model')
# Now, we can sample from the model.
prompt = types.ModelInput.from_ints(tokenizer.encode("English: coffee break\nPig Latin:"))
params = types.SamplingParams(max_tokens=20, temperature=0.0, stop=["\n"])  # Greedy sampling
future = sampling_client.sample(prompt=prompt, sampling_params=params, num_samples=8)
result = future.result()
print("Responses:")
for i, seq in enumerate(result.sequences):
    print(f"{i}: {repr(tokenizer.decode(seq.tokens))}")
```
Since sampling is nondeterministic (sadly, even with temperature=0.0, [due to batching](https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/)), the output will be different each time. You should see something like this:
```
Responses:
0: ' offe-bay eak-bay\n\n'
1: ' offey-coy eak-bray\n\n'
2: ' offecay eakbray\n\n'
3: ' offeec-cay eak-brcay\n\n\n'
4: ' offecay akebay\n\n'
5: ' offee-Cay ake-bay\n\n\n'
6: ' offey-pay eak-bray\n\n'
7: ' offee β cay eak β bray\n\n'
```
---
## File: rendering.mdx
import { CookbookLink } from '../components/CookbookLink'
# Rendering
Rendering is the process of converting list-of-message datatypes into their token representations. It is roughly the same as [chat templates](https://huggingface.co/docs/transformers/en/chat_templating); however, the chat template functionality in other libraries is only well-suited for inference, whereas this library provides a more complete interface that handles supervised and reinforcement learning.
## Renderer Class
The Renderer class is the main interface used for rendering and can be found in `renderers.py`.
Let's use the following working example of a conversation between a user and an assistant.
```python
messages = [
    {'role': 'system', 'content': 'Answer concisely; at most one sentence per response'},
    {'role': 'user', 'content': 'What is the longest-lived rodent species?'},
    {'role': 'assistant', 'content': 'The naked mole rat, which can live over 30 years.'},
    {'role': 'user', 'content': 'How do they live so long?'},
    {'role': 'assistant', 'content': 'They evolved multiple protective mechanisms including special hyaluronic acid that prevents cancer, extremely stable proteins, and efficient DNA repair systems that work together to prevent aging.'}
]
```
## Generating messages
Our model maps tokens to tokens, but with the renderer, it can map messages to messages. To sample messages from the model, we need to use three methods from the renderer:
- `build_generation_prompt`
- `get_stop_sequences`
- `parse_response`
`build_generation_prompt` converts a conversation into a prompt that we can use to sample from the assistant. This is used during reinforcement learning and at deployment time.
Let's remove the last assistant message and call `build_generation_prompt` to get a prompt that we can use to sample an alternative response from the assistant.
```python
from tinker_cookbook import renderers, tokenizer_utils
tokenizer = tokenizer_utils.get_tokenizer('Qwen/Qwen3-30B-A3B')
renderer = renderers.get_renderer('qwen3', tokenizer)
prompt = renderer.build_generation_prompt(messages[:-1])
print(prompt)
print('-'*10)
print(tokenizer.decode(prompt.to_ints()))
```
First, you can see that the prompt is a `ModelInput` object, which here is a list of `EncodedTextChunk` objects (it can contain other chunk types for multi-modal data).
```
ModelInput(chunks=[EncodedTextChunk(tokens=[151644, 8948, 198, 2610, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 8948, 198, 16141, 3529, 285, 974, 26, 518, 1429, 825, 11652, 817, 2033, 151645, 198, 151644, 872, 198, 3838, 374, 279, 22032, 61854, 20589, 306, 9419, 30, 151645, 198, 151644, 77091, 198, 785, 19020, 34651, 11244, 11, 892, 646, 3887, 916, 220, 18, 15, 1635, 13, 151645, 198, 151644, 872, 198, 10234, 30, 151645, 198, 151644, 77091, 198], type='encoded_text')])
----------
<|im_start|>system
Answer concisely; at most one sentence per response<|im_end|>
<|im_start|>user
What is the longest-lived rodent species?<|im_end|>
<|im_start|>assistant
The naked mole rat, which can live over 30 years.<|im_end|>
<|im_start|>user
How do they live so long?<|im_end|>
<|im_start|>assistant
```
Given that we're providing messages as input, we probably want a message output, rather than a token output. For that, we can use `parse_response`.
```python
import tinker
from tinker.types import SamplingParams
service_client = tinker.ServiceClient()
sampling_client = service_client.create_sampling_client(base_model='Qwen/Qwen3-30B-A3B')
stop_sequences = renderer.get_stop_sequences()
print(f"Stop sequences: {stop_sequences}")
sampling_params = SamplingParams(max_tokens=100, temperature=0.5, stop=stop_sequences)
output = sampling_client.sample(prompt, sampling_params=sampling_params, num_samples=1).result()
print(f"Sampled tokens: {output.sequences[0].tokens}")
sampled_message, parse_success = renderer.parse_response(output.sequences[0].tokens)
print(f"Sampled message: {sampled_message}")
print(f"Parse success: {parse_success}")
```
We get the following output:
```
Stop sequences: [151645]
Sampled tokens: [45, 7741, 34651, 31410, 614, 4911, 76665, 11, 2670, 264, 7548, 11050, 22077, 1849, 323, 264, 1602, 3347, 40761, 4379, 11, 892, 16792, 311, 862, 57119, 13, 151645]
Sampled message: {'role': 'assistant', 'content': 'Naked mole rats have unique adaptations, including a highly efficient immune system and a very low metabolic rate, which contribute to their longevity.'}
Parse success: True
```
You can see that there is one stop sequence, 151645, which corresponds to the `<|im_end|>` token. The output is parsed successfully into a message.
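You can check this yourself by decoding the token (assuming the same tokenizer object as above):
```python
print(tokenizer.decode([151645]))  # '<|im_end|>'
```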
## Supervised learning
For supervised learning, along with some other algorithms like [DPO](/preferences/dpo-guide), we need different information from the renderer -- we want to provide a target assistant message, and the renderer needs to tell us which tokens are part of the prompt and completion.
To do this, we can use `build_supervised_example` as follows:
```python
tokens, weights = renderer.build_supervised_example(messages)
from tinker_cookbook.utils.format_colorized import format_colorized
print(format_colorized(tokens, weights, tokenizer))
```
We get the following output:
<|im_start|>system↵
Answer concisely; at most one sentence per response<|im_end|>↵
<|im_start|>user↵
What is the longest-lived rodent species?<|im_end|>↵
<|im_start|>assistant↵
The naked mole rat, which can live over 30 years.<|im_end|>↵
<|im_start|>user↵
How do they live so long?<|im_end|>↵
<|im_start|>assistant↵
They evolved multiple protective mechanisms including special hyaluronic acid that prevents cancer, extremely stable proteins, and efficient DNA repair systems that work together to prevent aging.<|im_end|>
The green text is part of the prompt (i.e. with `weight=0`) and red is part of the completion (i.e. with `weight=1`). Note that the ↵ symbols have been inserted just to make it clear whether each newline is part of the prompt or the completion; they are not actually part of the token sequence.
## Appendix: why don't you just use jinja-formatted chat templates?
In our experience, Jinja2 templates are harder to write than Python code -- especially when we need to get the whitespace exactly right. They are also unwieldy for supervised learning, where you need to put different labels on different tokens.
---
## File: completers.mdx
import { CookbookLink } from '../components/CookbookLink'
# Completers
The concept of policies is crucial to the RL training process. In the Tinker Cookbook, policies are implemented as `Completers`. Completers are abstractions that represent models or policies that can be sampled from, providing different levels of structure depending on your use case.
## Overview of Completer Types
The Tinker Cookbook provides two main types of completers, each designed for different use cases:
1. **TokenCompleter**: Operates on tokens and is used by RL algorithms
2. **MessageCompleter**: Operates on messages and needs to be used with a renderer
The choice between these depends on whether you're working at the token level for RL training or at the message level for interacting with and evaluating the model.
### TokenCompleter
The `TokenCompleter` is the foundational interface used by RL algorithms because they work directly with tokens.
```python
class TokenCompleter:
async def __call__(
self, model_input: types.ModelInput, stop: StopCondition
) -> TokensWithLogprobs:
```
This interface takes:
- `model_input`: The input to the model (of type `types.ModelInput`)
- `stop`: Stop conditions, either a list of strings or token IDs (combined into a `StopCondition` class). When training with reinforcement learning, this should be defined by the `initial_observation` function of the environment.
It returns a `TokensWithLogprobs` object containing:
- `tokens`: The generated token sequence
- `maybe_logprobs`: Optional log probabilities for each token
### MessageCompleter
The `MessageCompleter` operates at a higher level with structured messages, similarly to standard chat APIs. It takes a list of messages and returns a single assistant message response.
```python
class MessageCompleter:
async def __call__(self, messages: list[renderers.Message]) -> renderers.Message:
```
For training purposes, the `TokenCompleter` is the class we will use for RL training, since we need to optimize the same set of tokens during the update step that the model output during rollout. The `MessageCompleter` is useful for sampling where we need to use the model output for semantic purposes, such as judge models or multi-agent environments.
The Tinker Cookbook uses two concrete implementations of these interfaces, `TinkerTokenCompleter` and `TinkerMessageCompleter`, which are both wrappers around a `tinker.SamplingClient`. While the `TinkerTokenCompleter` operates directly on tokens, the `TinkerMessageCompleter` needs to be instantiated with a renderer to make it compatible with the inputs expected by the sampling client.
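As a rough sketch of how the two wrappers get constructed (the exact constructor arguments, particularly for `TinkerMessageCompleter`, are assumptions here; check `completers.py` for the real signatures):
```python
import tinker
from tinker_cookbook import renderers, tokenizer_utils
from tinker_cookbook.completers import TinkerMessageCompleter, TinkerTokenCompleter

service_client = tinker.ServiceClient()
sampling_client = service_client.create_sampling_client(base_model="Qwen/Qwen3-30B-A3B")

# Token-level completer: what the RL training code consumes
token_completer = TinkerTokenCompleter(sampling_client=sampling_client, max_tokens=512)

# Message-level completer: needs a renderer to translate between messages and tokens
tokenizer = tokenizer_utils.get_tokenizer("Qwen/Qwen3-30B-A3B")
renderer = renderers.get_renderer("qwen3", tokenizer)
message_completer = TinkerMessageCompleter(
    sampling_client=sampling_client, renderer=renderer, max_tokens=512
)
```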
---
## File: install.mdx
# Installing Tinker
There are two packages to install:
- [tinker](https://github.com/thinking-machines-lab/tinker), which performs basic low-level operations like `forward_backward`
- [tinker-cookbook](https://github.com/thinking-machines-lab/tinker-cookbook), which is a collection of training code and experiment tools built on top of Tinker
Both packages can be installed with `pip` from their GitHub repositories:
```bash
pip install git+https://github.com/thinking-machines-lab/tinker.git
pip install git+https://github.com/thinking-machines-lab/tinker-cookbook.git
```
However, for the Cookbook, we'd recommend doing a local editable install, as you'll probably want to browse and edit the code:
```bash
git clone https://github.com/thinking-machines-lab/tinker-cookbook.git
cd tinker-cookbook
pip install -e .
```
Tinker Cookbook also has a few optional dependencies, which you can install with the following command in an editable install:
```bash
pip install -e ".[envs]"
```
Or, for a non-editable install:
```bash
pip install "git+https://github.com/thinking-machines-lab/tinker-cookbook.git#egg=tinker-cookbook[envs]"
```
---
## File: rl.mdx
import { CookbookLink } from '../components/CookbookLink'
# Reinforcement learning
Reinforcement learning (RL) means learning from trial and error. Whereas in supervised learning, we're given input-output pairs, in RL, we're given inputs (prompts) and reward functions (i.e., a function for scoring candidate outputs). RL algorithms need to discover what good outputs look like.
Here are a few different types of RL training that we support in the Tinker Cookbook:
- *RL with Verifiable Rewards*: this is when we do RL on a reward function that checks model outputs using a program. Typically, the reward function checks the candidate answer against a reference answer, or, in coding cases, it may check if the candidate solution passes some unit tests. RLVR is especially suitable for teaching models to do reasoning (with chain-of-thought) and multi-step tool use (e.g., debugging and iterative modification of programs).
- *RL on Human Feedback*: here, we assume we have an objective that can't be calculated by a simple program, and it requires some human judgement. For example, we typically want to optimize our models for helpfulness, which includes being clear, informative, and interesting. For RLHF, we train a *preference model* using supervised learning to match human judgement, scoring or ranking candidate outputs. Then we do RL on the preference model's scores. See the [Preferences](/preferences) section for more details.
We'll first show how to do small RL runs in the RLVR setting, then we'll show you how to define your own RL environments and train on them, then we'll provide examples for larger-scale or more complicated training setups.
We anticipate that people will want to use Tinker for RL in a few different ways:
- Creating a specialist model that's SoTA at a specific skill, which existing models haven't been trained on. In this case, you'll want to start with a post-trained model that's already strong, and then do RL on an environment you've defined. See [RL Environments](/rl/rl-envs).
- Doing research on post-training pipelines. In this case, you'll probably want to chain together SL and RL runs with different data mixes, environments, and reward functions. See our [RLHF example](/preferences/rlhf-example).
- Doing research on RL algorithms. Here, you'll probably want to find some existing environments to use as benchmarks, and either modify our provided training code (rl/train.py) or write your own minimal training loop. We've provided a [minimal training loop](/rl/rl-loops) that you can use as a starting point.
---
## File: model-lineup.mdx
# Available Models in Tinker
The table below shows the models that are currently available in Tinker. We plan to update this list frequently as new models are released.
## What model should I use?
- In general, use MoE models, which are more cost-effective than the dense models.
- Use Base models only if you're doing research or are running the full post-training pipeline yourself
- If you want to create a model that is good at a specific task or domain, use an existing post-trained model, and fine-tune it on your own data or environment.
- If you care about latency, use one of the "Instruction" models, which will start outputting tokens without a chain-of-thought.
- If you care about intelligence and robustness, use one of the "Hybrid" models, which can use long chain-of-thought.
## Full Listing
| Model Name | Training Type | Architecture | Size |
|------------|--------------|--------------|------|
| [Qwen/Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) | Instruction | MoE | Large |
| [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) | Instruction | MoE | Medium |
| [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | Hybrid | MoE | Medium |
| [Qwen/Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) | Base | MoE | Medium |
| [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) | Hybrid | Dense | Medium |
| [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | Hybrid | Dense | Small |
| [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) | Base | Dense | Small |
| [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | Instruction | Dense | Compact |
| [meta-llama/Llama-3.3-70B](https://huggingface.co/meta-llama/Llama-3.3-70B) | Instruction | Dense | Large |
| [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) | Base | Dense | Large |
| [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | Base | Dense | Small |
| [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | Instruction | Dense | Small |
| [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) | Base | Dense | Compact |
| [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) | Base | Dense | Compact |
## Legend
### Training Types
- **Base**: Foundation models trained on raw text data, suitable for post-training research and custom fine-tuning
- **Instruction**: Models fine-tuned for following instructions and chat, optimized for fast inference
- **Hybrid**: Models that can operate in both thinking and non-thinking modes
### Architecture
- **Dense**: Standard transformer architecture with all parameters active
- **MoE**: Mixture of Experts architecture with sparse activation
### Model Sizes
- **Compact**: 1B-4B parameters
- **Small**: 8B parameters
- **Medium**: 30B-32B parameters
- **Large**: 70B+ parameters
Note that the MoE models are much more cost-effective than the dense models. E.g., the Qwen3-30B-A3B model has only 3B active parameters, so it'll cost around the same as a 3B dense model for training and inference.
---
## File: preferences/dpo-guide.mdx
import { Callout } from 'nextra/components'
import { CookbookLink } from '../../components/CookbookLink'
# Direct Preference Optimization (DPO)
Direct Preference Optimization (DPO) is a method for training language models to align with human preferences without requiring a separate reward model. Instead of using reinforcement learning with human feedback (RLHF), DPO directly optimizes the model to prefer chosen responses over rejected ones using a simple classification loss.
## DPO Algorithm Details
The core DPO loss is computed as:
$$
\mathcal{L}_{\theta} = -\mathbb{E}_{x, y_\text{chosen}, y_\text{rejected} \sim \mathcal{D}}\left[\log\sigma\left(\beta\log \frac{\pi_{\theta}(y_\text{chosen}|x)}{\pi_{\text{ref}}(y_\text{chosen}|x)} - \beta\log \frac{\pi_{\theta}(y_\text{rejected}|x)}{\pi_{\text{ref}}(y_\text{rejected}|x)}\right)\right]
$$
Where:
- $\pi_{\theta}$ is the current policy
- $\pi_{\text{ref}}$ is the reference model (typically the initial model before DPO training)
- $\beta$ is the DPO beta parameter
- $\mathcal{D}$ is a dataset of prompts $x$, chosen responses $y_{\text{chosen}}$, and rejected responses $y_{\text{rejected}}$
This optimizes the classical constrained RLHF objective, where the reference model constrains deviation from the initial distribution.
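To make the loss concrete, here's a minimal PyTorch sketch, assuming you've already summed the per-token logprobs of each response under the policy and the reference model (the names are illustrative, not the cookbook's API):
```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logp: torch.Tensor,    # log pi_theta(y_chosen | x), one entry per pair
    policy_rejected_logp: torch.Tensor,  # log pi_theta(y_rejected | x)
    ref_chosen_logp: torch.Tensor,       # log pi_ref(y_chosen | x)
    ref_rejected_logp: torch.Tensor,     # log pi_ref(y_rejected | x)
    beta: float = 0.1,
) -> torch.Tensor:
    # Implicit rewards: beta-scaled log-ratios against the reference model
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid(margin): minimized when chosen outscores rejected
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```
These implicit rewards are what the training metrics below report as `chosen_reward` and `rejected_reward`.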
**DPO vs RLHF**: DPO eliminates the need for a separate reward model by directly optimizing the policy to prefer chosen responses. This makes training simpler and computationally cheaper than classical RLHF.
## Running DPO Training
The implementation is in `train_dpo.py`, with a CLI interface in `train_dpo_cli.py`. You can run it from the command line:
```bash
python -m tinker_cookbook.preference.train_dpo_cli \
log_path=/tmp/dpo-hhh-experiment \
model_name=meta-llama/Llama-3.2-1B \
dataset=hhh \
renderer_name=role_colon \
learning_rate=1e-5 \
dpo_beta=0.1
```
### Key Parameters
- `log_path`: Directory where results and checkpoints are saved
- `model_name`: Base model used as initialization and for the reference policy
- `dataset`: Dataset name (`hhh`, `helpsteer3`, `ultrafeedback`)
- `renderer_name`: How conversations are formatted (see [Rendering](../rendering.mdx))
- `learning_rate`: Learning rate for optimization
- `dpo_beta`: DPO beta parameter (controls the strength of preference learning)
### Available Datasets
There are several pre-defined datasets:
- **`hhh`**: Anthropic's Helpful-Harmless-Honest dataset
- **`helpsteer3`**: NVIDIA's HelpSteer3 preference dataset
- **`ultrafeedback`**: UltraFeedback binarized preferences dataset
These are implemented as `DPODatasetBuilder` classes, and you can implement a custom dataset builder following the `tinker_cookbook.preference.preference_datasets` interface.
## Training Process
During training, you'll see output like this showing the DPO metrics:
```
Step 50
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Metric          ┃     Value ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ accuracy        │  0.568627 │
│ batch_time      │ 27.953704 │
│ chosen_reward   │  0.053621 │
│ dpo_loss        │  0.683825 │
│ learning_rate   │  0.000009 │
│ margin          │  0.002147 │
│ num_pairs       │       255 │
│ num_tokens      │    112638 │
│ progress        │  0.081210 │
│ rejected_reward │  0.032152 │
│ test/nll        │  1.871778 │
└─────────────────┴───────────┘
```
The key metrics are:
- **`dpo_loss`**: The DPO classification loss
- **`accuracy`**: Accuracy of the implicit reward model evaluated on the preference dataset
- **`margin`**: Average difference between chosen and rejected rewards
- **`chosen_reward`/`rejected_reward`**: Average rewards for chosen/rejected responses
## Evaluating DPO Models
After training, you can evaluate your DPO model using the inspect evaluation framework:
```bash
MODEL_PATH=tinker://YOUR_MODEL_PATH_HERE
python -m tinker_cookbook.run_inspect_evals \
model_path=$MODEL_PATH \
model_name=meta-llama/Llama-3.2-1B \
tasks=ifeval,paws \
renderer_name=role_colon
```
This will evaluate the model on various benchmarks to measure the impact of preference optimization.
## Tips for DPO Training
1. **Beta Parameter**: Start with `dpo_beta=0.1` and adjust based on your dataset.
2. **Learning Rate**: Use a lower learning rate than supervised fine-tuning (typically 1e-5 to 1e-6).
3. **Base Model**: The base model should already be in-distribution with the preference data. Either start with a light SFT phase or collect on-policy preferences. While training would still work, a sharp distribution mismatch will create strange model behaviors.
---
## File: preferences/rlhf-example.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Reinforcement Learning from Human Feedback
## Train a pairwise preference model
We can first use supervised learning to train a model to do pairwise comparisons. To do this, we format the comparisons so that the model sees the prompt and the two completions, and it learns to predict the label (which one was preferred). Here, we train on the [Anthropic HHH](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
```bash
model_name=meta-llama/Llama-3.1-8B-Instruct
python -m tinker_cookbook.supervised.train_cli \
log_path=/tmp/tinker-examples/hhh-pm \
dataset=hhh \
model_name=$model_name \
learning_rate=4e-4
```
After the training is done, we can make note of the path of the final checkpoint. This path is printed to the console, and it's also in the `metrics.jsonl` file.
## Do RL using the reward model
Next, we can run RL using this reward model. In `rl/train_cli.py`, we've hardcoded the tinker path of a Llama-3.1-8B-Instruct model that we trained on the dataset above. Here, we define the reward function by generating 4 completions per prompt, and doing 4x3=12 pairwise comparisons between these completions, using the reward model.
```bash
python -m tinker_cookbook.rl.train_cli \
env=hhh \
log_path=/tmp/tinker-examples/hhh-rl \
groups_per_batch=256 \
group_size=4 \
max_tokens=400 \
learning_rate=2e-5
```
TODO: evaluate with reward model and inspect evals
---
## File: rl/rl-basic.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Your First RL Run
We've provided a minimal script that runs RL on the [GSM8K dataset](https://huggingface.co/datasets/openai/gsm8k): rl_basic.py. To run this script, tinker-cookbook needs to be installed with the optional envs dependencies (`pip install -e ".[envs]"`). You can run the minimal RL script from the command line as follows:
```bash
python -m tinker_cookbook.recipes.rl_basic
```
This script will fine-tune the Llama-3.1-8B base (pretrained) model on this dataset with the following reward function:
$$
1[\text{answer is correct}] + 0.1 \times (1[\text{answer is formatted correctly}] - 1)
$$
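A hypothetical sketch of this reward in code (not the cookbook's exact implementation):
```python
def reward(answer_is_correct: bool, answer_is_formatted: bool) -> float:
    return float(answer_is_correct) + 0.1 * (float(answer_is_formatted) - 1.0)

assert reward(True, True) == 1.0     # correct and well-formatted
assert reward(True, False) == 0.9    # correct but badly formatted
assert reward(False, True) == 0.0    # wrong but well-formatted
assert reward(False, False) == -0.1  # wrong and badly formatted
```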
The training should take about 1 minute per iteration and climb to about 63% accuracy after 15 iterations (`env/all/correct`). You can look at the printouts for some other metrics of interest:
- `ac_tokens_per_turn`: the number of tokens in each generated completion
- `env/all/format`: the fraction of completions that are formatted correctly
- `env/all/reward/total`: mean total reward (combining format and correctness as defined above)
- `entropy`: per-token entropy (mean negative log-probability of sampled tokens)
- `kl_sample_train_{v1,v2}`: two different estimators of the KL divergence between the sampler's and learner's probability distributions (nonzero due to numerical differences and rounding noise)
- `progress/done_frac`: what fraction of the total number of iterations we've completed so far
- `time/...`: time for different parts of the training loop
You can also look at the `log_path` directory for more detailed metrics. There are several files of interest, which are mostly the same as in the [Supervised Learning](/supervised-learning/sl-basic) case.
---
## File: rl/rl-hyperparams.mdx
# RL Hyperparameters
This guide covers the key hyperparameters for reinforcement learning training, from core settings to advanced configurations.
## Core Hyperparameters
### Learning Rate
Similar to the [supervised learning setting](../supervised-learning/sl-hyperparams), the learning rate is the most critical hyperparameter choice. We recommend using the guidance presented there as a starting point for RL experiments as well.
### Batch and Group Sizes
As described in our [RL environments](../rl/rl-envs.mdx) documentation, we use two key parameters:
- **`batch_size`**: The number of unique environments or problems used for training
- **`group_size`**: The number of rollouts performed per unique environment
If you have limited environments or problems available for training, increase the `group_size` to generate more training data. While the total number of rollouts depends on both parameters, we recommend scaling the learning rate as $\text{LR} \propto \sqrt{\text{batch\_size}}$.
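For example, if you've tuned a LR at one batch size and want to change the batch size, a square-root rescaling looks like this (a sketch of the rule above):
```python
import math

def scale_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    # LR is proportional to sqrt(batch_size)
    return base_lr * math.sqrt(new_batch_size / base_batch_size)

print(scale_lr(2e-5, 128, 512))  # 4e-05: 4x the batch -> 2x the LR
```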
## Multiple Updates per Sampling Iteration
The `num_substeps` parameter controls how many policy weight updates are performed on data sampled from the last policy iteration, similar to PPO and GRPO.
### How it works:
- **`num_substeps = 1` (default)**: Each batch of collected trajectories is used for exactly one optimizer update
- **`num_substeps > 1`**: The batch of unique environments is split into `num_substeps` mini-batches, where each environment/problem has `group_size` rollouts (we pack all rollouts for a particular environment/problem in the same minibatch). We do a single update step on each mini-batch. Note that our implementation still takes only a single epoch through the data.
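Concretely, the splitting arithmetic works out like this (a sketch of the semantics described above):
```python
batch_size, group_size, num_substeps = 64, 32, 4
assert batch_size % num_substeps == 0
envs_per_substep = batch_size // num_substeps         # 16 environments per update
rollouts_per_substep = envs_per_substep * group_size  # 512 rollouts per update
```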
### Usage Guidelines:
- The batch size must be divisible by `num_substeps`
- Our experiments show that `num_substeps = 1` already gives decent performance, but if you would like to experiment with this parameter, we recommend starting with a low value of 2-4 and using the PPO objective.
- Higher values can lead to update steps that are too out-of-distribution for the policy. Consider limiting the number of updates or decreasing the learning rate when using multiple update steps.
## Advanced Training Configurations
β οΈ **Note**: These features are experimental and may be subject to instabilities. They are currently disabled by default.
### Streaming Minibatch Training
Enable streaming minibatch training by specifying the `StreamMinibatchConfig`. This approach overlaps trajectory sampling and model training, improving overall throughput by submitting training requests as soon as enough rollouts complete, without waiting for all sampling jobs to finish.
**Configuration Parameters:**
- **`groups_per_batch`**: Same as batch size (TODO: consider getting this from the dataset builder, also general naming around these is not great now)
- **`num_minibatches`**: Number of minibatches per substepβcontrols how many individual forward-backward requests we submit. This controls how the work is split (TODO: Why do we need this? We should just send a group as soon as it's ready).
**Important**: This remains on-policy training and is strictly a pipeline efficiency improvement.
### Async Off-Policy Training
Async training allows the model to train on trajectories generated with slightly older model versions, enabling higher throughput at the cost of some off-policy bias. While Tinker doesn't currently support in-flight weight changes, it supports the "off-by-K" async RL approach where multiple model iterations generate data simultaneously. Configure this by setting the `AsyncConfig` object.
**Configuration Parameters:**
- **`max_steps_off_policy`**: Maximum age (in training steps) of trajectories before they're discarded. Essentially, trajectories from policy iterations older than `max_steps_off_policy` steps will not be used.
- **`groups_per_batch`**: Number of new trajectory groups to accumulate (with a `group_size` number of rollouts each) before updating the current iteration of the model. Note: This is separate from the batch size used for dataset construction.
**Usage Guidelines:**
- Async RL is appropriate for applications with long and heterogeneous rollouts, such as very long CoT models, multi-hop tool use, or agentic workflows
- Start with a small value for `max_steps_off_policy` (less than 5)
## Monitoring and Run Health
Using policy-gradient algorithms with off-policy data can significantly degrade performance or even crash the policy, making monitoring essential during training.
### KL Divergence Monitoring
The current implementation logs the KL divergence between the data generation policy and the current learner: $\mathbb{D}_{KL}[\pi_{\text{sampler}}(\cdot|x)||\pi_{\theta}(\cdot|x)]$ using two separate [estimators](http://joschu.net/blog/kl-approx.html):
- `kl_sample_train_v1`
- `kl_sample_train_v2`
A few important notes to keep in mind:
- Even with full on-policy training, the divergence between sampling and learning policies will not be [exactly zero](https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/) due to implementation details
- In our experience, training is stable with KL divergence below 0.01
- If KL divergence crosses the recommended threshold, this indicates a numerical instability or potential issue with the training run
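For reference, here's a sketch of two estimators in the style of the linked post, computed from per-token logprobs of the sampled tokens under each model (which one corresponds to `v1` vs. `v2` in the cookbook is an assumption):
```python
import numpy as np

def kl_estimates(logp_sampler: np.ndarray, logp_learner: np.ndarray) -> tuple[float, float]:
    # Per-token log ratio log(pi_sampler / pi_theta), for tokens drawn from the sampler
    logr = logp_sampler - logp_learner
    k1 = logr.mean()                      # unbiased, but high variance (can be negative)
    k3 = (np.expm1(-logr) + logr).mean()  # always non-negative, lower variance
    return float(k1), float(k3)
```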
---
## File: rl/rl-loops.mdx
import { CookbookLink } from '../../components/CookbookLink'
# RL Training Loop
In this section, we will walk through the RL training loop using a minimal working example on the Hendrycks Math dataset.
## Dataset Construction
The Hendrycks Math environment uses the `MathDatasetBuilder` (an example of the dataset builders described in [RL Environments](/rl/rl-envs)) to create structured batches of mathematical problems:
```python
from tinker_cookbook.rl.math_env import MathDatasetBuilder
dataset_builder = MathDatasetBuilder(
    batch_size=64,    # 64 problem groups per training batch
    group_size=32,    # 32 solution attempts per problem
    model_name_for_tokenizer="meta-llama/Llama-3.1-8B",
    renderer_name="llama3",
    convo_prefix="standard",  # Includes few-shot examples
)
train_dataset, test_dataset = dataset_builder()
```
Here `MathDatasetBuilder` is an `RLDatasetBuilder` that creates an `RLDataset`, in which each batch contains 64 problems, each of which creates a group of 32 identical environments.
## Policy (Completer) Construction
Following the `Completers` section, the policy is implemented as a `TinkerTokenCompleter` that wraps the sampling client:
```python
import tinker
from tinker_cookbook.completers import TinkerTokenCompleter
service_client = tinker.ServiceClient()
training_client = await service_client.create_lora_training_client_async(
    base_model="meta-llama/Llama-3.1-8B", rank=32
)

async def get_policy(step: int):
    # Save a checkpoint that you can use for sampling
    sampling_path = training_client.save_weights_for_sampler(name="xxxx").result().path
    # Create a sampling client with that checkpoint
    sampling_client = service_client.create_sampling_client(model_path=sampling_path)
    # Wrap in the completer interface used by the RL algorithms
    return TinkerTokenCompleter(
        sampling_client=sampling_client,
        max_tokens=512,
    )
```
At each training step, the `get_policy` function will save the training weights and update the sampling client with the new weights to generate on-policy data.
## Rollout Collection Loop
The key step of the RL training loop is collecting new on-policy data after each policy update. For each training batch, we have:
```python
import asyncio
from tinker_cookbook.completers import TokenCompleter
from tinker_cookbook.rl.rollouts import do_group_rollout
async def generate_rollouts(step: int, policy: TokenCompleter):
    # Creates 64 builders, each building 32 MathEnv instances
    env_group_builders_P = train_dataset.get_batch(step)
    # Generate rollouts for each group of 32 environments
    trajectory_groups_P = await asyncio.gather(
        *[do_group_rollout(builder, policy) for builder in env_group_builders_P]
    )
    taglist_P = [builder.logging_tags() for builder in env_group_builders_P]
    return trajectory_groups_P, taglist_P
```
Here the `do_group_rollout` function runs the policy in the environments and returns a list of trajectory groups. A `TrajectoryGroup` is an object that consists of a list of trajectories (`Trajectory` objects), final rewards, and potentially other metadata. The rollout returns a list of `batch_size` `TrajectoryGroup` objects, each of which consists of `group_size` `Trajectory` objects.
## Data Processing
After we collect new data, we need to process it for training. We compute advantages by centering rewards within each problem group. At this step we can also optionally filter out groups with all successes or all failures as these have policy gradients of zero. Finally the `assemble_training_data` function converts the trajectories into token-level training examples.
```python
from typing import List
from tinker_cookbook.rl.types import TrajectoryGroup
from tinker_cookbook.rl.train import (
    _remove_constant_reward_groups,
    compute_advantages,
    assemble_training_data,
)

def process_trajectory_groups(trajectory_groups_P: List[TrajectoryGroup]):
    # (Optionally) remove groups with all successes or all failures
    filtered_trajectory_groups_P = _remove_constant_reward_groups(trajectory_groups_P)
    # Compute advantages for each trajectory in each group
    advantages_P = compute_advantages(filtered_trajectory_groups_P)
    # Convert trajectories to token-level training examples
    data_D, _metadata_D = assemble_training_data(filtered_trajectory_groups_P, advantages_P)
    return data_D, _metadata_D
```
This function will return a list of training data `types.Datum` objects, as well as a list of metadata dictionaries.
## Model Training Step
The model update uses an importance-sampling loss:
```python
from typing import List

import tinker
from tinker import types
from tinker_cookbook.rl.train import _remove_mask

async def train_step(
    data_D: List[types.Datum],
    training_client: tinker.TrainingClient,
    learning_rate: float,
) -> None:
    """Train the model on collected trajectories."""
    # Forward-backward pass on all data
    fwd_bwd_future = await training_client.forward_backward_async(
        list(map(_remove_mask, data_D)), loss_fn="importance_sampling"
    )
    # Optimizer step
    adam_params = types.AdamParams(learning_rate=learning_rate, beta1=0.9, beta2=0.95, eps=1e-8)
    optim_step_future = await training_client.optim_step_async(adam_params)
    # Wait for results
    fwd_bwd_result = await fwd_bwd_future.result_async()
    _optim_step_result = await optim_step_future.result_async()
Here we use the standard importance-sampling policy gradient loss function:
```math
\mathcal{L}_{\theta} = -\mathbb{E}_{x\sim \mathcal{D}, y \sim \pi_{\text{sampler}}(\cdot|x)}\left[\textbf{sg}\left(\frac{\pi_{\theta}(y|x)}{\pi_{\text{sampler}}(y|x)}\right)A(x, y)\log\pi_{\theta}(y|x)\right]
```
where we use the group-level advantages $A(x, y)$ to scale the policy gradient. Notice that even when we do exact on-policy RL, slight differences between the trainer and sampler models (even with the same weights) can introduce off-policy issues that can (severely) negatively affect learning.
## Complete Training Loop
Putting it all together, the complete training loop iterates between the following steps:
1. Create a policy with the current model weights
2. Generate rollouts
3. Process trajectory data into training examples
4. Update model parameters
```python
import time
from tinker_cookbook.utils import ml_log
from tinker_cookbook.utils.misc_utils import timed
from tinker_cookbook.rl.metric_util import compute_trajectory_metrics
async def rl_loop():
    """Complete RL training loop for Hendrycks Math problems."""
    # Setup dataset
    dataset_builder = MathDatasetBuilder(
        batch_size=64,
        group_size=32,
        model_name_for_tokenizer="meta-llama/Llama-3.1-8B",
        renderer_name="llama3",
        convo_prefix="standard",
    )
    train_dataset, test_dataset = dataset_builder()
    # Setup training client
    service_client = tinker.ServiceClient()
    training_client = await service_client.create_lora_training_client_async(
        "meta-llama/Llama-3.1-8B", rank=32
    )
    # Setup logging (ml_logger setup via tinker_cookbook.utils.ml_log is elided here)
    log_dir = "~/experiments/math-rl"
    num_batches = len(train_dataset)
    learning_rate = 1e-5
    # Main training loop
    for i_batch in range(num_batches):
        # Setup metrics for logging
        t_start = time.time()
        metrics = {
            "progress/batch": i_batch,
            "optim/lr": learning_rate,
            "progress/done_frac": (i_batch + 1) / num_batches,
        }
        # 1. Create policy with current weights
        policy = await get_policy(i_batch)
        # 2. Generate rollouts
        with timed("generate_rollouts", metrics):
            # Generate rollouts: 64 groups × 32 environments = 2,048 total rollouts
            trajectory_groups_P, taglist_P = await generate_rollouts(i_batch, policy)
        # Compute trajectory metrics
        metrics.update(compute_trajectory_metrics(trajectory_groups_P, taglist_P))
        # 3. Process trajectory data into training examples
        data_D, _metadata_D = process_trajectory_groups(trajectory_groups_P)
        # 4. Update model parameters
        await train_step(data_D, training_client, learning_rate)
        # Log metrics
        metrics["time/total"] = time.time() - t_start
        ml_logger.log_metrics(metrics, step=i_batch)
```
## Running RL training
To run a training job, you can use our basic RL training script.
---
## File: rl/rl-envs.mdx
import { CookbookLink } from '../../components/CookbookLink'
# RL Environments
Here, we'll explain how to create your own RL environments and train on them. First, let's look at the basic classes, which can be found in `tinker_cookbook.rl.types`. As you can see, there's an `Env` interface, corresponding to an RL environment. To write an environment, you need to implement two methods: `initial_observation` and `step`.
```python
class Env:
"""
Stateful environment that a single agent interacts with.
Discard after running for one episode.
"""
async def initial_observation(self) -> tuple[Observation, StopCondition]:
raise NotImplementedError
async def step(self, action: Action) -> StepResult:
raise NotImplementedError
```
Note that this `Env` operates on *tokens*, rather than strings or messages. Why define it this way, when it's usually more natural to define the logic in terms of strings or messages? We've defined `Env` this way because this interface is what's needed by the *training* code. The training code needs to know the exact tokens that were sampled, and their logprobs. It might be more convenient to write the logic in terms of strings or messages. See [Env Adapters](/rl/env-adapters) for more on how to write adapters between our `Env` class and other libraries.
We need to write two more small classes to use this environment in the RL training code. First, since the environment is discarded after a single episode, we need to be able to instantiate new environments in the training loop. We actually build a *group* of environments at a time, which enables multi-agent training or objectives that compare multiple samples (for example, a reward model that acts on a pair of samples).
```python
class EnvGroupBuilder:
"""
Builds a group of environments.
"""
async def make_envs(self) -> Sequence[Env]:
raise NotImplementedError
```
This object creates a group of environments. Often it does the trivial thing of returning a list of copies of the same environment.
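For instance, a trivial builder might look like this (a hypothetical sketch, with `MyEnv` standing in for your own `Env` subclass):
```python
from typing import Sequence

class MyEnvGroupBuilder(EnvGroupBuilder):
    def __init__(self, group_size: int):
        self.group_size = group_size

    async def make_envs(self) -> Sequence[Env]:
        # Trivial case: the group is N independent copies of the same environment
        return [MyEnv() for _ in range(self.group_size)]
```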
Finally, we need a dataset of these EnvGroupBuilders.
```python
class RLDataset:
"""
Dataset of EnvGroupBuilders.
"""
def get_batch(self, index: int) -> list[EnvGroupBuilder]:
raise NotImplementedError
```
That's a lot of classes! But their combination gives us a lot of flexibility. In previous implementations (like OpenAI Gym), the dataset is implicitly part of the environment; this structure is more modular and gives us more control over the data loading.
## Building a simple example
You can find an example of writing a new RL environment in the Twenty Questions directory.
Here, we define a multi-step environment, where we're training a question-asking agent, which asks questions to another agent to guess a hidden word.
In this case, the answerer model is fixed and is Llama-3.1-8B-Instruct.
The player model (which we fine-tune) is also based on that same model.
You can run the training script as follows:
```bash
uv run python -m tinker_cookbook.recipes.twenty_questions.train
```
---
## File: supervised-learning/sl-hyperparams.mdx
# Supervised Learning Hyperparameters
Successful fine-tuning of any LLM requires careful tuning of the hyperparameters. While the most accurate way to tune hyperparameters is to sweep over a range of values for each hyperparameter and pick the values that minimize the loss or maximize the evals, this is often time-consuming and expensive. We provide some starting recommendations for the most important hyperparameters.
## Learning rate
The most important hyperparameter is generally the learning rate (LR). Our current best estimate of optimal LR for a model $m$ is the following:
$$ LR(m) = lr_{base} \cdot M_{LoRA} \cdot \Big(\frac{2000}{H_m}\Big)^{P_m} $$
where $lr_{base}$ is a constant base lr, $M_{LoRA}$ is a multiplier applied when using LoRA (1 if using full-finetuning), $H_m$ is the hidden size of the model $m$, and $P_m$ is a model-specific exponent adjustment. Importantly, note that this function is independent of the LoRA rank.
Our current best estimates are the following: $lr_{base} = 5e-5$,
$M_{LoRA} = 10$, $P_m = 0.0775$ for Qwen models and $P_m = 0.781$ for Llama models.
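Written out as code, the formula above looks like this (an illustrative sketch; prefer the cookbook's `get_lr` helper shown below):
```python
def recommended_lr(hidden_size: int, model_family: str, lora: bool = True) -> float:
    lr_base = 5e-5              # base LR
    m_lora = 10 if lora else 1  # LoRA multiplier
    p_m = 0.0775 if model_family == "qwen" else 0.781  # model-family exponent
    return lr_base * m_lora * (2000 / hidden_size) ** p_m
```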
You can use the following function to get the recommended lr for any model:
```python
from tinker_cookbook.hyperparam_utils import get_lr
model_name = "meta-llama/Llama-3.2-1B"
print(get_lr(model_name))
```
We can define the regret of using any lr as the following:
$$\text{regret}(lr') = \frac{loss(lr') - \min_{lr} loss(lr)}{\min_{lr} loss(lr)}$$
We used our LR estimate to run SFT experiments across a variety of datasets, dataset sizes, batch sizes, and LoRA ranks, and found the regret to be \<0.5%.
## Batch size
The second most important hyperparameter is the batch size. For small batch sizes, there's a phenomenon of *perfect scaling*, where the LR and batch size should be varied together as $LR \propto \sqrt{B}$, and the learning curve only depends on $\frac{LR}{\sqrt{B}}$. See e.g., [Shallue et al. (2018)](https://arxiv.org/abs/1811.03600), in the training-from-scratch setting. When fine-tuning LLMs, we're often in a regime where smaller batch sizes give better performance, at the cost of longer training time; moreover, the $LR \propto \sqrt{B}$ scaling doesn't always hold. When doing SL fine-tuning, we recommend using smaller batch sizes like 128, depending on your tolerance for longer training time. For best results, you should aim for at least 100 steps of training (but usually get best results with 1000 or more). [NOTE: we are not confident about these recommendations and need to do further research.]
---
## File: supervised-learning/sl-basic.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Basic Supervised Learning
We've provided an implementation of supervised learning in train_cli.py. To use this training loop, you'll need to create a `Config` object with the data and parameters.
We've provided an example of a script that starts up an SL training run in sl_basic.py. You can run it from the command line as follows:
```bash
python -m tinker_cookbook.recipes.sl_basic
```
This script will fine-tune the Llama-3.1-8B base (pretrained) model on a small dataset called [NoRobots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), created by Hugging Face.
- At each step, you should see a printout of the train and test loss, along with other stats like timing.
- The training script will also print out what the data looks like, with predicted tokens (weight=1) in green and context tokens (weight=0) in yellow.
- The training script will write various logs and checkpoint info to the `log_path` directory, which is set to `/tmp/tinker-examples/sl_basic` in the example script.
Looking at this directory, there are several files of interest:
- `metrics.jsonl`: the training metrics that also were printed to the console. You can load and plot them like this:
```python
import pandas
import matplotlib.pyplot as plt
df = pandas.read_json("/tmp/tinker-examples/sl_basic/metrics.jsonl", lines=True)
plt.plot(df['train_mean_nll'], label='train_loss')
plt.plot(df['test/nll'].dropna(), label='test_loss')
plt.legend()
plt.show()
```
You should see a plot like this:

- `checkpoints.jsonl`: this file stores the checkpoints that were saved during training. Recall from [Saving and Loading](/save-load) that there are (currently) two kinds of checkpoints: one that has "/sampler_weights/" in the path (used for sampling), and the other that has "/weights/" in the path (includes full optimizer state, used for resuming training). If you interrupt the training script, then run it again, it will ask you if you want to resume training. If you choose to do so, it'll load the last (full state) checkpoint from this file.
- `config.json`: the configuration that you used for training.
In the `sl_basic` script, you'll see that there's also some disabled code (under `if 0:`) that shows how to use your own dataset, specified as a JSONL file, provided in the format of conversations.jsonl.
---
## File: supervised-learning/prompt-distillation.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Prompt Distillation
Prompt distillation is a training technique in which a model is optimized to behave as though it had been provided with a long and complex prompt, without requiring access to that prompt during inference.
At a high level, this procedure involves two main steps:
- **Creation of distillation data**: A teacher prompt, which is typically lengthy and highly detailed, provides explicit, step-by-step instructions. A teacher model uses this prompt to generate responses for a set of queries.
- **Training the student model**: A student model is then trained (or fine-tuned) on the distilled dataset, thereby learning to reproduce the essential behaviors and reasoning encoded in the teacherβs instructions.
---
## Overview
Let $f_T$ and $f_S$ denote the teacher and student models, respectively. Given an instruction prompt $P$ and a query $q_i$, the teacher model generates a response $r_i$:
$$
r_i = f_T([P, q_i])
$$
Here, the prompt $P$ and the query $q_i$ are concatenated to form the input to the teacher model $f_T$. For a dataset of queries $Q = \{q_i \mid 1 \leq i \leq D\}$, we obtain a corresponding set of teacher responses $R = \{r_i \mid 1 \leq i \leq D\}$.
The distillation training dataset is defined as the set of queryβresponse pairs (excluding the original prompt):
$$
T = \{(q_i, r_i) \mid 1 \leq i \leq D\}.
$$
The student model $f_S$ is then trained to minimize the cross-entropy loss:
$$
\ell(f_S(q_i), r_i) = \ell(f_S(q_i), f_T([P, q_i])).
$$
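In pseudocode, the data-creation step amounts to the following (hypothetical names; see the recipe below for the real implementation):
```python
def build_distillation_dataset(teacher, prompt_P: str, queries: list[str]) -> list[dict]:
    dataset = []
    for q in queries:
        # r_i = f_T([P, q_i]): the teacher sees the full instruction prompt
        r = teacher.generate(f"{prompt_P}\n{q}")
        # The student's training pair drops P entirely
        dataset.append({"input": q, "output": r})
    return dataset
```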
---
## Example
The Tinker Cookbook provides a prompt distillation recipe tailored for a language classification task. The objective is straightforward: given a text query, the model should predict a two-character code corresponding to the language of the input. The set of possible labels is:
```
ar (Arabic), de (German), el (Greek), en (English), es (Spanish), fr (French), hi (Hindi), ru (Russian), tr (Turkish), ur (Urdu), vi (Vietnamese), zh (Chinese - Simplified), ot (Other/Unknown).
```
The recipe in recipes/prompt_distillation/create_data.py also includes handling strategies for inputs containing code, numerical content, or multiple languages.
In the example below, the same model (`Qwen/Qwen3-30B-A3B`) is used as both teacher and student, though in general they need not be identical.
---
### Step 1: Generate Training Data
Create prompt distillation data using the teacher model using recipes/prompt_distillation/create_data.py:
```bash
python -m tinker_cookbook.recipes.prompt_distillation.create_data \
output_file=/tmp/tinker-datasets/prompt_distillation_lang.jsonl
```
This command will:
- Use the configured teacher model to generate language classification examples
- Save the distilled dataset to the specified output file
- Create diverse training examples suitable for student model fine-tuning
### Step 2: Train the Student Model
Fine-tune a student model on the distillation data using recipes/prompt_distillation/train.py:
```bash
python -m tinker_cookbook.recipes.prompt_distillation.train
```
The training script will:
- Load the generated distillation dataset
- Apply optimized training configurations
- Fine-tune the student model for language classification
### Step 3: Test Your Model
Once training is complete, you can test your distilled model by sampling from the trained model to verify its performance on language classification tasks.
## Advanced Configuration
The prompt distillation recipe can be customized for different scenarios:
- **Teacher model selection**: Choose different base models based on your requirements
- **Sampling strategies**: Adjust temperature and other generation parameters
- **Data volume**: Scale the number of generated examples based on your needs
- **Training hyperparameters**: Fine-tune learning rates and other training settings
---
## File: supervised-learning/sweep-case-study.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Sweep case study
In [Supervised Learning Hyperparameters](./sl-hyperparams), we introduced default hyperparameters as a starting point. While defaults are useful, optimal values are often task-specific. A hyperparameter sweep (systematically testing values across a range) is a more reliable way to identify the best hyperparameter values.
This page demonstrates how to sweep over the **learning rate (LR)** to find an optimal value.
## Setup
We use the simple supervised learning training loop in `sl_loop.py`, which trains a Llama-3.1-8B model.
To retrieve the model's default learning rate recommendation:
```python
from tinker_cookbook.hyperparam_utils import get_lr
print(get_lr("meta-llama/Llama-3.1-8B"))
```
This should output:
```
0.0002856415043086949  # ≈ 2.8e-4
```
This default value provides a baseline. A common best practice is to sweep one order of magnitude above and below the default. For this case, we sweep over $\mathrm{lr} \in \{10^{-5},\, 3\times10^{-5},\, 10^{-4},\, 3\times10^{-4},\, 10^{-3},\, 3\times10^{-3}\}$.
## Running the sweep
Launch experiments in separate terminal windows for each LR value. For example:
```bash
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.003 log_path=/tmp/sft-lr-sweep/lr-0.003
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.001 log_path=/tmp/sft-lr-sweep/lr-0.001
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.0003 log_path=/tmp/sft-lr-sweep/lr-0.0003
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.0001 log_path=/tmp/sft-lr-sweep/lr-0.0001
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.00003 log_path=/tmp/sft-lr-sweep/lr-0.00003
python -m tinker_cookbook.recipes.sl_loop learning_rate=0.00001 log_path=/tmp/sft-lr-sweep/lr-0.00001
```
You can also automate this process by writing a script that spawns multiple tmux windows and launches experiments programmatically.
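For example, a small launcher using only the standard library runs the same six commands in parallel:
```python
import subprocess

# Launch one sl_loop run per learning rate, each as its own process
learning_rates = ["0.003", "0.001", "0.0003", "0.0001", "0.00003", "0.00001"]
procs = [
    subprocess.Popen([
        "python", "-m", "tinker_cookbook.recipes.sl_loop",
        f"learning_rate={lr}",
        f"log_path=/tmp/sft-lr-sweep/lr-{lr}",
    ])
    for lr in learning_rates
]
for p in procs:
    p.wait()  # block until every run finishes
```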
## Collecting Results
After the experiments are complete, you can read the `metrics.jsonl` files from all the runs:
```python
from glob import glob
import pandas
import os
import json
data = []
for fname in sorted(glob(os.path.expanduser("/tmp/sft-lr-sweep/*/metrics.jsonl"))):
df = pandas.read_json(fname, lines=True)
# make sure the experiment is completed
if len(df) == 0 or df["progress"].iloc[-1] < 0.98:
continue
config_fname = fname.replace("metrics.jsonl", "config.json")
with open(config_fname, "rb") as f:
metadata = json.load(f)
data.append({
"fname": fname,
"learning_rate": metadata["learning_rate"],
"final_loss": df["train_mean_nll"].iloc[-1].item()
})
print(f"Read metrics for {len(data)} experiments")
```
If all the experiments are completed, the above code should print:
```
Read metrics for 6 experiments
```
## Visualizing the Sweep
Plot the `final_loss` as a function of `learning_rate`:
```python
import matplotlib.pyplot as plt
df = pandas.DataFrame(data)
plt.plot(df["learning_rate"], df["final_loss"], marker='o')
plt.axhline(y=df["final_loss"].min(), color="green", linestyle="--")
plt.ylim(1.65, 1.8)
plt.xscale("log")
plt.xlabel("Learning Rate (log scale)")
plt.ylabel("Final Loss")
plt.title("Final Loss vs Learning Rate")
plt.show()
```
You should see a U-shaped curve, similar to this:

If the full U-curve is not visible in your setting, expand the sweep range by adding more LR values.
## Determining the Optimal LR
The optimal learning rate is the one that minimizes the loss. The plot above shows that the optimal LR is `3e-4`, which you can also compute by finding the minimum:
```python
optimal_lr = df["learning_rate"][df["final_loss"].idxmin()]
print(f"The optimal LR is {optimal_lr:.2e}")
```
Expected output:
```
The optimal LR is 3.00e-04
```
Note that the optimal LR in our sweep (`3e-4`) is very close to the default LR (`2.8e-4`).
---
## File: supervised-learning/sl-loop.mdx
import { CookbookLink } from '../../components/CookbookLink'
# Supervised Learning Training Loop
We've provided a simple SL training loop in `sl_loop.py`, which avoids our dataset classes and instead defines the data loading in a more self-contained way. It's intended for people who want to write their own training loops or learn how things work under the hood. Our more performant implementation in `supervised/train.py` does essentially the same thing, with some performance optimizations and additional features such as periodic evals.
---
# PART 2: TYPE DEFINITIONS
Total types collected: 30
## Type: AdamParams
```python
class AdamParams(StrictBase):
learning_rate: float = 0.0001
"""Learning rate for the optimizer"""
beta1: float = 0.9
"""Coefficient used for computing running averages of gradient"""
beta2: float = 0.95
"""Coefficient used for computing running averages of gradient square"""
eps: float = 1e-12
"""Term added to the denominator to improve numerical stability"""
```
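In practice you usually only set the learning rate and keep the other defaults. A minimal sketch (the `optim_step` call is commented out and assumes a `training_client` created as in Part 1; check the training docs for the exact method name):
```python
from tinker import types

# Only override the learning rate; beta1/beta2/eps keep the defaults above
adam_params = types.AdamParams(learning_rate=3e-4)

# Hypothetical usage with a training client created as in Part 1:
# training_client.optim_step(adam_params)
```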
## Type: ComputeLogprobsResponse
```python
class ComputeLogprobsResponse(BaseModel):
logprobs: Sequence[Optional[float]]
type: Literal["compute_logprobs"] = "compute_logprobs"
```
## Type: CreateModelResponse
```python
class CreateModelResponse(BaseModel):
model_id: ModelID
type: Literal["create_model"] = "create_model"
```
## Type: Datum
```python
class Datum(StrictBase):
loss_fn_inputs: LossFnInputs
"""Dictionary mapping field names to tensor data"""
model_input: ModelInput
@model_validator(mode="before")
@classmethod
def convert_tensors(cls, data: Any) -> Any:
"""Convert torch.Tensor and numpy arrays to TensorData in loss_fn_inputs during construction."""
if isinstance(data, dict) and "loss_fn_inputs" in data:
loss_fn_inputs = data["loss_fn_inputs"]
if isinstance(loss_fn_inputs, dict):
converted_inputs = {}
for key, value in loss_fn_inputs.items():
converted_inputs[key] = cls._maybe_convert_array(key, value)
data = dict(data) # Make a copy
data["loss_fn_inputs"] = converted_inputs
return data
@classmethod
def _maybe_convert_array(cls, key: str, value: Any) -> Any:
"""Convert torch.Tensor, numpy array, or 1-D list to TensorData if needed."""
if _HAVE_TORCH and isinstance(value, torch.Tensor):
return TensorData.from_torch(value)
elif isinstance(value, np.ndarray):
return TensorData.from_numpy(value)
elif isinstance(value, list):
# assume it's 1d and infer the dtype from the key
return TensorData(data=value, dtype=_key_to_type[key], shape=[len(value)])
else:
return value
_key_to_type = {
"target_tokens": "int64",
"weights": "float32",
"advantages": "float32",
"logprobs": "float32",
}
```
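Because of the validator above, you can pass plain Python lists (or numpy arrays / torch tensors) in `loss_fn_inputs` and they are converted to `TensorData` using the `_key_to_type` table. A minimal construction:
```python
from tinker import types

tokens = [101, 42, 7, 9]  # toy token IDs
datum = types.Datum(
    model_input=types.ModelInput.from_ints(tokens[:-1]),
    loss_fn_inputs={
        "target_tokens": tokens[1:],   # list -> int64 TensorData
        "weights": [0.0, 1.0, 1.0],    # list -> float32 TensorData
    },
)
```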
## Type: EncodedTextChunk
```python
class EncodedTextChunk(StrictBase):
tokens: Sequence[int]
"""Array of token IDs"""
type: Literal["encoded_text"] = "encoded_text"
@property
def length(self) -> int:
return len(self.tokens)
```
## Type: ForwardBackwardInput
```python
class ForwardBackwardInput(StrictBase):
data: List[Datum]
"""Array of input data for the forward/backward pass"""
loss_fn: LossFnType
"""Fully qualified function path for the loss function"""
```
## Type: ForwardBackwardOutput
```python
class ForwardBackwardOutput(BaseModel):
loss_fn_output_type: str
"""The type of the ForwardBackward output. Can be one of [...] TODO"""
loss_fn_outputs: List[LossFnOutput]
"""Dictionary mapping field names to tensor data"""
metrics: Dict[str, float]
"""Training metrics as key-value pairs"""
```
## Type: GetInfoResponse
```python
class GetInfoResponse(BaseModel):
type: Optional[Literal["get_info"]] = None
model_data: ModelData
model_id: ModelID
is_lora: Optional[bool] = None
lora_rank: Optional[int] = None
model_name: Optional[str] = None
if PYDANTIC_V2:
# allow fields with a `model_` prefix
model_config = ConfigDict(protected_namespaces=tuple())
```
## Type: GetServerCapabilitiesResponse
```python
class GetServerCapabilitiesResponse(BaseModel):
supported_models: List[SupportedModel]
```
## Type: ImageAssetPointerChunk
```python
class ImageAssetPointerChunk(StrictBase):
format: Literal["png", "jpeg"]
"""Image format"""
height: int
"""Image height in pixels"""
location: str
"""Path or URL to the image asset"""
tokens: int
"""Number of tokens this image represents"""
width: int
"""Image width in pixels"""
type: Literal["image_asset_pointer"] = "image_asset_pointer"
@property
def length(self) -> int:
return self.tokens
```
## Type: LoadWeightsResponse
```python
class LoadWeightsResponse(BaseModel):
path: Optional[str] = None
"""A tinker URI for model weights at a specific step"""
type: Optional[Literal["load_weights"]] = None
```
## Type: LoraConfig
```python
class LoraConfig(StrictBase):
rank: int
"""LoRA rank (dimension of low-rank matrices)"""
seed: Optional[int] = None
"""Seed used for initialization of LoRA weights.
Useful if you need deterministic or reproducible initialization of weights.
"""
train_unembed: bool = True
"""Whether to add lora to the unembedding layer"""
train_mlp: bool = True
"""Whether to add loras to the MLP layers (including MoE layers)"""
train_attn: bool = True
"""Whether to add loras to the attention layers"""
```
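For example, assuming `LoraConfig` is exposed under `tinker.types` like the other types here, a config that applies LoRA only to the attention layers:
```python
from tinker import types

lora_config = types.LoraConfig(
    rank=32,
    seed=0,               # deterministic LoRA initialization
    train_attn=True,
    train_mlp=False,      # skip MLP/MoE layers
    train_unembed=False,  # skip the unembedding layer
)
```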
## Type: LossFnInputs
```python
LossFnInputs: TypeAlias = Dict[str, TensorData]
```
## Type: LossFnOutput
```python
LossFnOutput: TypeAlias = Dict[str, TensorData]
```
## Type: LossFnType
```python
LossFnType: TypeAlias = Literal["cross_entropy", "importance_sampling", "ppo"]
```
## Type: ModelData
```python
class ModelData(BaseModel):
arch: Optional[str] = None
model_name: Optional[str] = None
```
## Type: ModelID
```python
ModelID: TypeAlias = str
```
## Type: ModelInput
```python
class ModelInput(StrictBase):
chunks: List[ModelInputChunk]
"""Sequence of input chunks (formerly TokenSequence)"""
@classmethod
def from_ints(cls, tokens: List[int]) -> "ModelInput":
"""
Create a ModelInput from a list of ints (tokens).
"""
return cls(chunks=[EncodedTextChunk(tokens=tokens)])
def to_ints(self) -> List[int]:
"""
Convert the ModelInput to a list of ints (tokens)
Throws exception if there are any non-token chunks
"""
if not all(isinstance(chunk, EncodedTextChunk) for chunk in self.chunks):
raise ValueError(f"to_ints only supported for ModelInput with EncodedTextChunks, got {[type(chunk) for chunk in self.chunks]}")
return [token for chunk in self.chunks for token in chunk.tokens]
@property
def length(self) -> int:
"""
Return the total context length used by this ModelInput.
"""
return sum(chunk.length for chunk in self.chunks)
@classmethod
def empty(cls) -> "ModelInput":
"""
Create an empty ModelInput.
"""
return cls(chunks=[])
def append(self, chunk: ModelInputChunk) -> "ModelInput":
"""
Add a new chunk, return a new ModelInput.
"""
return ModelInput(chunks=self.chunks + [chunk])
def append_int(self, token: int) -> "ModelInput":
"""
Add a new token, return a new ModelInput.
"""
return self.append(EncodedTextChunk(tokens=[token]))
```
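Note that the builder methods are immutable: `append` and `append_int` return a new `ModelInput` rather than modifying the receiver. For example:
```python
from tinker import types

mi = types.ModelInput.from_ints([1, 2, 3])
mi2 = mi.append_int(4)        # new ModelInput; `mi` is unchanged
assert mi.length == 3 and mi2.length == 4
assert mi2.to_ints() == [1, 2, 3, 4]  # chunks flatten back to tokens
```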
## Type: ModelInputChunk
```python
ModelInputChunk: TypeAlias = Annotated[
Union[EncodedTextChunk, ImageAssetPointerChunk], PropertyInfo(discriminator="type")
]
```
## Type: OptimStepResponse
```python
class OptimStepResponse(BaseModel):
metrics: Optional[Dict[str, float]] = None
"""Optimization step metrics as key-value pairs"""
```
## Type: SampleResponse
```python
class SampleResponse(BaseModel):
sequences: Sequence[SampledSequence]
type: Literal["sample"] = "sample"
prompt_logprobs: Optional[List[Optional[float]]] = None
"""
If prompt_logprobs was set to true in the request, logprobs are computed for
every token in the prompt. The `prompt_logprobs` response contains a float32
value for every token in the prompt.
"""
```
## Type: SampledSequence
```python
class SampledSequence(BaseModel):
stop_reason: StopReason
"""Reason why sampling stopped"""
tokens: List[int]
"""List of generated token IDs"""
logprobs: Optional[List[float]] = None
"""Log probabilities for each token (optional)"""
```
## Type: SamplingParams
```python
class SamplingParams(BaseModel):
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate"""
seed: Optional[int] = None
"""Random seed for reproducible generation"""
stop: Union[str, Sequence[str], Sequence[int], None] = None
"""Stop sequences for generation"""
temperature: float = 1
"""Sampling temperature"""
top_k: int = -1
"""Top-k sampling parameter (-1 for no limit)"""
top_p: float = 1
"""Nucleus sampling probability"""
```
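A typical configuration for reproducible, bounded generation, using only the fields defined above:
```python
from tinker import types

params = types.SamplingParams(
    max_tokens=64,      # cap the completion length
    temperature=0.0,    # effectively greedy decoding
    stop=["\n\n"],      # stop at a blank line
    seed=0,             # reproducible sampling
)
```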
## Type: SaveWeightsForSamplerResponse
```python
class SaveWeightsForSamplerResponse(BaseModel):
path: str
"""A tinker URI for model weights for sampling at a specific step"""
type: Optional[Literal["save_weights_for_sampler"]] = None
```
## Type: SaveWeightsResponse
```python
class SaveWeightsResponse(BaseModel):
path: str
"""A tinker URI for model weights at a specific step"""
type: Optional[Literal["save_weights"]] = None
```
## Type: StopReason
```python
StopReason: TypeAlias = Literal["length", "stop"]
```
## Type: SupportedModel
```python
class SupportedModel(BaseModel):
model_name: Optional[str] = None
```
## Type: TensorData
```python
class TensorData(StrictBase):
data: Union[List[int], List[float]]
"""Flattened tensor data as array of numbers."""
dtype: TensorDtype
shape: Optional[List[int]] = None
"""Optional.
The shape of the tensor (see PyTorch tensor.shape). The shape of a
one-dimensional list of length N is `(N,)`. Can usually be inferred if not
provided, and is generally inferred as a 1D tensor.
"""
@classmethod
def from_numpy(cls, array: npt.NDArray[Any]) -> "TensorData":
return cls(
data=array.flatten().tolist(),
dtype=_convert_numpy_dtype_to_tensor(array.dtype),
shape=list(array.shape),
)
@classmethod
def from_torch(cls, tensor: "torch.Tensor") -> "TensorData":
return cls(
data=tensor.flatten().tolist(),
dtype=_convert_torch_dtype_to_tensor(tensor.dtype),
shape=list(tensor.shape),
)
def to_numpy(self) -> npt.NDArray[Any]:
"""Convert TensorData to numpy array."""
numpy_dtype = _convert_tensor_dtype_to_numpy(self.dtype)
arr = np.array(self.data, dtype=numpy_dtype)
if self.shape is not None:
arr = arr.reshape(self.shape)
return arr
def to_torch(self) -> "torch.Tensor":
"""Convert TensorData to torch tensor."""
if not _HAVE_TORCH:
raise ImportError("PyTorch is not installed. Cannot convert to torch tensor.")
torch_dtype = _convert_tensor_dtype_to_torch(self.dtype)
tensor = torch.tensor(self.data, dtype=torch_dtype)
if self.shape is not None:
tensor = tensor.reshape(self.shape)
return tensor
def tolist(self) -> List[Any]:
return self.to_numpy().tolist()
def _convert_tensor_dtype_to_numpy(dtype: TensorDtype) -> npt.DTypeLike:
"""Convert TensorDtype to numpy dtype-like."""
if dtype == "float32":
return np.float32
elif dtype == "int64":
return np.int64
else:
raise ValueError(f"Unsupported TensorDtype: {dtype}")
def _convert_tensor_dtype_to_torch(dtype: TensorDtype) -> "torch.dtype":
"""Convert TensorDtype to torch dtype."""
if not _HAVE_TORCH:
raise ImportError("PyTorch is not installed. Cannot convert to torch dtype.")
import torch
if dtype == "float32":
return torch.float32
elif dtype == "int64":
return torch.int64
else:
raise ValueError(f"Unsupported TensorDtype: {dtype}")
def _convert_numpy_dtype_to_tensor(dtype: np.dtype[Any]) -> TensorDtype:
"""Convert numpy dtype to TensorDtype."""
if dtype.kind == "f":
return "float32"
elif dtype.kind == "i":
return "int64"
else:
raise ValueError(f"Unsupported numpy dtype: {dtype}")
def _convert_torch_dtype_to_tensor(dtype: "torch.dtype") -> TensorDtype:
"""Convert torch dtype to TensorDtype."""
# torch.dtype objects have .is_floating_point
if getattr(dtype, "is_floating_point", False):
return "float32"
else:
return "int64"
```
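The converters round-trip between numpy/torch and the wire format, preserving dtype and shape. Assuming `TensorData` is importable from `tinker.types` like the other types here:
```python
import numpy as np
from tinker import types

arr = np.arange(6, dtype=np.int64).reshape(2, 3)
td = types.TensorData.from_numpy(arr)  # flattens data, records dtype/shape
assert td.dtype == "int64" and td.shape == [2, 3]
assert (td.to_numpy() == arr).all()    # shape restored on conversion back
```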
## Type: TensorDtype
```python
TensorDtype: TypeAlias = Literal["int64", "float32"]
```
## Type: UnloadModelResponse
```python
class UnloadModelResponse(BaseModel):
model_id: ModelID
type: Optional[Literal["unload_model"]] = None
```