Rendering

Rendering to tokens

Rendering converts list-of-message datatypes into their token representations for model training and inference. While similar to chat templates, Tinker's rendering system is designed for the full training lifecycle--not just inference--supporting supervised learning, reinforcement learning, and deployment.

The Renderer class

The Renderer class is the main interface used for rendering. It can be found in renderers.py.

Example conversation:

messages = [
    {'role': 'system', 'content': 'Answer concisely; at most one sentence per response'},
    {'role': 'user', 'content': 'What is the longest-lived rodent species?'},
    {'role': 'assistant', 'content': 'The naked mole rat, which can live over 30 years.'},
    {'role': 'user', 'content': 'How do they live so long?'},
    {'role': 'assistant', 'content': 'They evolved multiple protective mechanisms including special hyaluronic acid that prevents cancer, extremely stable proteins, and efficient DNA repair systems that work together to prevent aging.'}
]

We'll use this conversation throughout the examples below.

Inference: Generating messages

Our model maps tokens to tokens, but with the renderer, it can map messages to messages. To sample messages from the model, we need to use three methods from the renderer:

  • build_generation_prompt
  • get_stop_sequences
  • parse_response

build_generation_prompt converts a conversation into a prompt that we can use to sample from the assistant. This is used during reinforcement learning and at deployment time.

Example: Generate an alternative assistant response

Let's remove the last assistant message and call build_generation_prompt to get a prompt that we can use to sample an alternative response from the assistant:

from tinker_cookbook import renderers, tokenizer_utils
tokenizer = tokenizer_utils.get_tokenizer('Qwen/Qwen3-30B-A3B')
renderer = renderers.get_renderer('qwen3', tokenizer)
prompt = renderer.build_generation_prompt(messages[:-1])
print(prompt)
print('-'*10)
print(tokenizer.decode(prompt.to_ints()))

Output:

ModelInput(chunks=[EncodedTextChunk(tokens=[151644, 8948, 198, 2610, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 8948, 198, 16141, 3529, 285, 974, 26, 518, 1429, 825, 11652, 817, 2033, 151645, 198, 151644, 872, 198, 3838, 374, 279, 22032, 61854, 20589, 306, 9419, 30, 151645, 198, 151644, 77091, 198, 785, 19020, 34651, 11244, 11, 892, 646, 3887, 916, 220, 18, 15, 1635, 13, 151645, 198, 151644, 872, 198, 10234, 30, 151645, 198, 151644, 77091, 198], type='encoded_text')])
----------
<|im_start|>system
Answer concisely; at most one sentence per response<|im_end|>
<|im_start|>user
What is the longest-lived rodent species?<|im_end|>
<|im_start|>assistant
The naked mole rat, which can live over 30 years.<|im_end|>
<|im_start|>user
How do they live so long?<|im_end|>
<|im_start|>assistant

You can see that the prompt is a ModelInput object, which holds a list of EncodedTextChunk objects (for multi-modal data, it can contain other chunk types).
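If you want to inspect the prompt programmatically, you can walk its chunks or flatten it to token ids. Here is a minimal sketch that uses only the attributes shown in the output above (chunks, tokens, to_ints):

# Inspect the ModelInput returned by build_generation_prompt
for chunk in prompt.chunks:
    print(f"{type(chunk).__name__}: {len(chunk.tokens)} tokens")

# Flatten to a plain list of token ids, as passed to tokenizer.decode above
token_ids = prompt.to_ints()
print(f"Total tokens: {len(token_ids)}")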

Sampling and parsing the response:

Given that we're providing messages as input, we probably want a message output, rather than a token output. For that, we can use parse_response.

import tinker
from tinker.types import SamplingParams
service_client = tinker.ServiceClient()
sampling_client = service_client.create_sampling_client(base_model='Qwen/Qwen3-30B-A3B')
stop_sequences = renderer.get_stop_sequences()
print(f"Stop sequences: {stop_sequences}")
sampling_params = SamplingParams(max_tokens=100, temperature=0.5, stop=stop_sequences)
output = sampling_client.sample(prompt, sampling_params=sampling_params, num_samples=1).result()
print(f"Sampled tokens: {output.sequences[0].tokens}")
sampled_message, parse_success = renderer.parse_response(output.sequences[0].tokens)
print(f"Sampled message: {sampled_message}")
print(f"Parse success: {parse_success}")

Output:

Stop sequences: [151645]
Sampled tokens: [45, 7741, 34651, 31410, 614, 4911, 76665, 11, 2670, 264, 7548, 11050, 22077, 1849, 323, 264, 1602, 3347, 40761, 4379, 11, 892, 16792, 311, 862, 57119, 13, 151645]
Sampled message: {'role': 'assistant', 'content': 'Naked mole rats have unique adaptations, including a highly efficient immune system and a very low metabolic rate, which contribute to their longevity.'}
Parse success: True

You can see that there is one stop sequence, 151645, which you can verify is the <|im_end|> token. The output is parsed successfully into a message.
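As a quick sanity check, you can decode the stop token id with the same tokenizer used above:

print(tokenizer.decode([151645]))  # expected: <|im_end|>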

Training: Supervised learning

For supervised learning (and some other algorithms like DPO), we need to distinguish between prompt tokens (context) and completion tokens (what the model should learn to generate). We want to provide a target assistant message, and the renderer needs to tell us which tokens are part of the prompt and completion.

We can use build_supervised_example to get tokens with per-token loss weights:

tokens, weights = renderer.build_supervised_example(messages)
 
from tinker_cookbook.utils.format_colorized import format_colorized
print(format_colorized(tokens, weights, tokenizer))

We get the following output:

<|im_start|>system↵
Answer concisely; at most one sentence per response<|im_end|>↵
<|im_start|>user↵
What is the longest-lived rodent species?<|im_end|>↵
<|im_start|>assistant↵
The naked mole rat, which can live over 30 years.<|im_end|>↵
<|im_start|>user↵
How do they live so long?<|im_end|>↵
<|im_start|>assistant↵
They evolved multiple protective mechanisms including special hyaluronic acid that prevents cancer, extremely stable proteins, and efficient DNA repair systems that work together to prevent aging.<|im_end|>

The green text is part of the prompt (weight=0, so no loss is computed on these tokens) and the red text is part of the completion (weight=1, so the model is trained to predict these tokens). Note that the ↵ symbols have been inserted for clarity to mark newlines; the ↵ characters themselves are not part of the token sequence.

The key insight here is that only the final assistant message is treated as the completion. All previous context, including the first assistant response, is part of the prompt, so the model learns to continue conversations rather than just answer single questions.
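To make the role of the weights concrete, here is a minimal sketch of how per-token weights are typically applied in a weighted cross-entropy loss. This is illustrative only (it assumes PyTorch and a logits tensor you supply), not Tinker's actual training code:

import torch
import torch.nn.functional as F

def weighted_nll(logits, tokens, weights):
    # logits: [seq_len, vocab_size]; tokens, weights: lists of length seq_len
    # as returned by build_supervised_example. The target at position t is
    # tokens[t+1], so everything is shifted by one; weight 0 drops prompt
    # tokens from the loss entirely.
    targets = torch.tensor(tokens[1:])
    w = torch.tensor(weights[1:], dtype=torch.float32)
    logprobs = F.log_softmax(logits[:-1], dim=-1)
    target_logprobs = logprobs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -(target_logprobs * w).sum() / w.sum().clamp(min=1.0)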

Appendix: Why not Jinja templates?

In our experience, Jinja2 templates are harder to write than Python code, especially when you need to get the whitespace exactly right. They are also unwieldy for supervised learning, where you need to put different labels on different tokens.
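For illustration, here is a hypothetical, simplified rendering of a single message in the Qwen3 chat format shown above (this is not Tinker's actual renderer code, and it assumes a Hugging Face-style tokenizer with an encode method). In Python, the whitespace and per-token weights are explicit, which is awkward to express in a Jinja template:

def render_message(tokenizer, role, content, train_on=False):
    # Explicit control over special tokens and newlines
    text = f"<|im_start|>{role}\n{content}<|im_end|>\n"
    tokens = tokenizer.encode(text)
    # weight=1 marks tokens that contribute to the supervised loss
    weights = [1 if train_on else 0] * len(tokens)
    return tokens, weights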