Tutorial: Publish to HuggingFace Hub
Prerequisites
Once you have a merged model or PEFT adapter on disk, you can upload it to HuggingFace Hub for sharing, deployment, or version control.
The publish workflow:
- Build your model (merged via `build_hf_model` or adapter via `build_lora_adapter`)
- Optionally configure a model card with training metadata
- Push to Hub with `publish_to_hf_hub`
You need a HuggingFace token with write access. Set it via the `HF_TOKEN` environment variable or by running `huggingface-cli login`.
Basic publish
The simplest case -- push a model directory to a Hub repository. Repositories are created as private by default.
```python
from tinker_cookbook import weights

url = weights.publish_to_hf_hub(
    model_path="./merged_model",
    repo_id="my-org/my-finetuned-qwen3",
)
print(f"Published to: {url}")
# -> Published to: https://huggingface.co/my-org/my-finetuned-qwen3
```
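Before pushing, a quick sanity check that the directory looks like a merged model can save a failed upload. A sketch under assumptions: the helper name and the expected-file list are illustrative, not part of tinker_cookbook:

```python
from pathlib import Path

def missing_model_files(model_path: str) -> list[str]:
    # Hypothetical helper: a merged HF model directory normally carries
    # a config.json; report any expected files that are absent.
    expected = ["config.json"]
    root = Path(model_path)
    return [name for name in expected if not (root / name).exists()]
```

An empty return value means the basic layout is in place and publishing can proceed.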
Custom model card
Use ModelCardConfig to auto-generate a README.md with HuggingFace metadata (base model, datasets, tags, license). The model card is created during upload.
```python
from tinker_cookbook.weights import ModelCardConfig

card_config = ModelCardConfig(
    base_model="Qwen/Qwen3.5-4B",
    datasets=["my-org/my-sft-dataset"],
    tags=["sft", "chat"],
    license="apache-2.0",
    language=["en"],
)

print("Model card config:")
print(f"  base_model: {card_config.base_model}")
print(f"  tags: {card_config.tags}")
print(f"  license: {card_config.license}")
```
Preview the model card
You can preview the generated model card without publishing by calling `generate_model_card` directly.

```python
from tinker_cookbook.weights import generate_model_card

_card = generate_model_card(
    config=card_config,
    repo_id="my-org/my-finetuned-qwen3",
)
print(str(_card))
```
Output
---
base_model: Qwen/Qwen3.5-4B
datasets:
- my-org/my-sft-dataset
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- tinker
- tinker-cookbook
- sft
- chat
---
# my-org/my-finetuned-qwen3
This model was fine-tuned from [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B) using [Tinker](https://thinkingmachines.ai/tinker) and [tinker-cookbook](https://github.com/thinking-machines-lab/tinker-cookbook).
## Model details
- **Base model:** [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B)
- **Format:** Merged model
## Usage
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my-org/my-finetuned-qwen3")
```
## Framework versions
- tinker-cookbook: 0.1.1.dev284+g225efe4a7.d20260327
- transformers: 5.3.0
- torch: 2.11.0
Publishing with a model card
Pass the config to `publish_to_hf_hub` and the model card is created automatically:

```python
url = weights.publish_to_hf_hub(
    model_path="./merged_model",
    repo_id="my-org/my-finetuned-qwen3",
    model_card=card_config,
)
# -> Published to: https://huggingface.co/my-org/my-finetuned-qwen3
```
Publishing a PEFT adapter
The same `publish_to_hf_hub` works for adapter directories too. When `model_path` contains `adapter_config.json`, the model card auto-detects the format.

```python
adapter_card = ModelCardConfig(
    base_model="Qwen/Qwen3.5-4B",
    tags=["sft"],
    license="apache-2.0",
)

url = weights.publish_to_hf_hub(
    model_path="./peft_adapter",
    repo_id="my-org/my-qwen3-lora",
    model_card=adapter_card,
    private=False,  # make public
)
# -> Published to: https://huggingface.co/my-org/my-qwen3-lora
```
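The format auto-detection described above can be sketched roughly as follows. This is a simplified illustration, not the actual tinker_cookbook implementation:

```python
from pathlib import Path

def detect_model_format(model_path: str) -> str:
    # Simplified sketch: the presence of adapter_config.json marks a
    # PEFT adapter directory; otherwise assume a merged model.
    if (Path(model_path) / "adapter_config.json").exists():
        return "PEFT adapter"
    return "Merged model"
```

This matches the "Format" line shown in the generated model card (e.g. "Merged model" in the preview output earlier).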
CLI alternative
You can also publish from the command line with the tinker CLI:
```bash
# Push a merged model
tinker checkpoint push-hf \
  --model-path ./merged_model \
  --repo-id my-org/my-finetuned-qwen3

# Push a PEFT adapter
tinker checkpoint push-hf \
  --model-path ./peft_adapter \
  --repo-id my-org/my-qwen3-lora \
  --public
```
The CLI supports the same options as the Python API (model card fields, privacy settings, custom HF tokens).
Next steps
- Export a Merged HuggingFace Model -- Merge LoRA into a standalone model
- Build a PEFT LoRA Adapter -- Convert to PEFT format for serving