tinker_cookbook.renderers.Renderer
class tinker_cookbook.renderers.Renderer(ABC)
Abstract base class for rendering message lists into training and sampling prompts.
Fields:
- tokenizer (Tokenizer)
- supports_streaming (bool) – Whether this renderer supports streaming response parsing. Renderers that set this to True get a default parse_response_streaming implementation using ReasoningStreamingParser. They must also define _end_message_token and _parse_response_for_streaming. Default: False.
property has_extension_property
Whether this renderer satisfies the sequence extension property.
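As an illustration of what such a property might mean in token terms (this toy sketch is an assumption, not the library's implementation): a renderer satisfies the extension property when rendering the first k messages of a conversation yields a token sequence that is a prefix of rendering the whole conversation, so appending messages only appends tokens.

```python
# Toy renderer: each message becomes a header token plus its word "tokens".
# Because rendering is purely append-only, conversation prefixes render to
# token prefixes -- the extension property sketched here.
def render_toy(messages: list[str]) -> list[str]:
    tokens = []
    for msg in messages:
        tokens.append("<|msg|>")    # header token
        tokens.extend(msg.split())  # body tokens
    return tokens

def has_extension_property_toy(messages: list[str]) -> bool:
    """Check that every conversation prefix renders to a token prefix."""
    full = render_toy(messages)
    return all(
        render_toy(messages[:k]) == full[: len(render_toy(messages[:k]))]
        for k in range(len(messages) + 1)
    )
```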
get_stop_sequences()
Return stop token IDs or strings that signal end-of-generation for the model.
render_message(message, ctx)
Render a single message into its header/output/stop_overlap components.
Parameters:
- message (Message) – The message to render.
- ctx (RenderContext) – Context about the message's position in the conversation, including index, is_last flag, and prev_message.
Returns: RenderedMessage: Container with header, output, and optionally stop_overlap components for loss masking.
parse_response(response)
Parse sampled tokens back into a Message.
Parameters:
- response (list[int]) – Token IDs returned from sampling.
Returns: tuple[Message, bool]: A (message, success) tuple. If success is False, the response could not be parsed (e.g., missing stop token), but a best-effort message is still returned.
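The (message, success) contract can be sketched with a toy parser; the vocabulary, the END_ID stop token, and the dict-shaped message below are illustrative stand-ins, not the library's actual types.

```python
# Hypothetical detokenizer state for illustration only.
END_ID = 0
VOCAB = {1: "Hello", 2: " world", 3: "!"}

def parse_response_toy(response: list[int]) -> tuple[dict, bool]:
    # Success requires the response to terminate with the stop token.
    success = bool(response) and response[-1] == END_ID
    body = response[:-1] if success else response
    text = "".join(VOCAB.get(t, "") for t in body)
    # A best-effort message is returned even when parsing fails.
    return {"role": "assistant", "content": text}, success
```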
parse_response_streaming(response)
Parse response tokens with streaming, yielding incremental deltas.
Parameters:
- response (list[int]) – Token IDs from the model.
Yields:
- StreamingMessageHeader – Once at the start of the message.
- StreamingTextDelta – Incremental text content.
- StreamingThinkingDelta – Incremental thinking/reasoning content.
- Message – The complete parsed message at the end.
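The documented yield order can be mimicked with a small generator; the dicts below stand in for StreamingMessageHeader, StreamingTextDelta, and Message, and the string chunks stand in for decoded token spans. This is an illustrative sketch, not the default implementation.

```python
# Toy streaming parser: yields a header first, then incremental text
# deltas, then the complete assembled message at the end.
def parse_response_streaming_toy(chunks: list[str]):
    yield {"kind": "header", "role": "assistant"}            # StreamingMessageHeader
    text = ""
    for chunk in chunks:
        text += chunk
        yield {"kind": "text_delta", "delta": chunk}         # StreamingTextDelta
    yield {"kind": "message", "role": "assistant", "content": text}  # Message
```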
to_openai_message(message)
Convert a Message to OpenAI chat completions API format.
Parameters:
- message (Message) – The Message to convert.
Returns: dict: A dict in OpenAI API message format.
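A minimal sketch of the conversion target, assuming renderer-internal fields (such as thinking content) are dropped: OpenAI chat-completions messages are dicts keyed by "role" and "content" (tool calls add further fields not shown here).

```python
# Illustrative conversion only; the real method may handle tool calls
# and other message kinds.
def to_openai_message_toy(message: dict) -> dict:
    # Keep only the keys the OpenAI chat format expects; internal fields
    # like "thinking" are assumed to be discarded.
    return {"role": message["role"], "content": message["content"]}
```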
create_conversation_prefix_with_tools(tools, system_prompt)
Create message(s) with tool specifications to prepend to conversations.
Parameters:
- tools (list[ToolSpec]) – List of tool specifications.
- system_prompt (str) – The system prompt content.
Returns: list[Message]: List of messages to prepend to the conversation.
Raises:
- NotImplementedError: If the renderer doesn't support tool calling.
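One plausible shape for this method, shown as a hedged sketch: fold the tool specs into a single system message prepended to the conversation. The JSON layout and the "Available tools" framing below are assumptions, not the library's actual format.

```python
import json

# Hypothetical prefix builder: one system message carrying the system
# prompt followed by one JSON line per tool spec.
def conversation_prefix_with_tools_toy(tools: list[dict], system_prompt: str) -> list[dict]:
    tool_block = "\n".join(json.dumps(t) for t in tools)
    content = f"{system_prompt}\n\nAvailable tools:\n{tool_block}"
    return [{"role": "system", "content": content}]
```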
build_generation_prompt(messages, role, prefill)
Convert a message list to a token prompt for sampling.
Parameters:
- messages (list[Message]) – A list of messages to render.
- role (Role) – The role of the partial message to be completed. Defaults to "assistant".
- prefill (str | None) – An optional string to prefill in the model's generation. Useful for constraining the start of the model's output.
Returns: tinker.ModelInput: A ModelInput containing the tokenized prompt.
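The role/prefill mechanics can be sketched at the token level; the header markers and string "tokens" below are illustrative, and the real method returns a tinker.ModelInput rather than a list.

```python
# Toy prompt builder: render each message, open a header for the role to
# be completed, then append the optional prefill so sampling continues
# from it.
def build_generation_prompt_toy(messages, role="assistant", prefill=None):
    tokens = []
    for m in messages:
        tokens += [f"<|{m['role']}|>", m["content"], "<|end|>"]
    tokens.append(f"<|{role}|>")  # open header: the model completes this message
    if prefill is not None:
        tokens.append(prefill)    # constrains the start of the model's output
    return tokens
```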
build_supervised_examples(messages, train_on_what)
Build tokens and per-token weights for supervised fine-tuning.
Parameters:
- messages (list[Message]) – The conversation to render.
- train_on_what (TrainOnWhat) – Which parts of the sequence to compute loss on.
Returns: list[tuple[tinker.ModelInput, torch.Tensor]]: A list of (ModelInput, weight_tensor) tuples for training.
build_supervised_example(messages, train_on_what)
Build tokens and per-token weights for a single supervised fine-tuning example.
Parameters:
- messages (list[Message]) – A list of messages to render.
- train_on_what (TrainOnWhat) – Controls which tokens receive non-zero training weight:
  - LAST_ASSISTANT_MESSAGE: Only the last assistant message
  - LAST_ASSISTANT_TURN: The last assistant message after the last user message
  - ALL_ASSISTANT_MESSAGES: All assistant messages
  - ALL_MESSAGES: All messages (but not headers)
  - ALL_TOKENS: Everything including headers
  - ALL_USER_AND_SYSTEM_MESSAGES: User and system messages only
  - CUSTOMIZED: Use the 'trainable' field on each message
Returns: tuple[tinker.ModelInput, torch.Tensor]: A (model_input, weights) tuple where weights is a 1-D float tensor with the same length as the total number of tokens.
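The weight-masking idea can be sketched for the LAST_ASSISTANT_MESSAGE case; words stand in for tokens and plain lists for the ModelInput and weight tensor, so this is illustrative only.

```python
# Toy supervised example builder: weight 1.0 on body tokens of the last
# assistant message, 0.0 on everything else (headers always get 0.0 here,
# matching the "but not headers" behavior of the message-level modes).
def supervised_example_toy(messages):
    last_assistant = max(
        (i for i, m in enumerate(messages) if m["role"] == "assistant"),
        default=-1,
    )
    tokens, weights = [], []
    for i, m in enumerate(messages):
        body = m["content"].split()
        tokens += [f"<|{m['role']}|>"] + body
        weights += [0.0] + [1.0 if i == last_assistant else 0.0] * len(body)
    return tokens, weights
```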