Available Models in Tinker
The table below shows the models that are currently available in Tinker. We plan to update this list as new models are released.
What model should I use?
- In general, prefer MoE models, which are more cost-effective than dense models.
- Use 🐙 Base models only if you're doing research or running the full post-training pipeline yourself.
- If you want to create a model that is good at a specific task or domain, use an existing post-trained model and fine-tune it on your own data or environment (see the sketch after this list).
- If you care about latency, use one of the "⚡ Instruction" models, which will start outputting tokens without a chain-of-thought.
- If you care about intelligence and robustness, use one of the "🤔 Hybrid" or "💭 Reasoning" models, which can use long chain-of-thought.
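As a minimal sketch of putting this guidance into practice, the snippet below lists the models available on the service and then creates a fine-tuning client on a chosen base model. The `ServiceClient`, `get_server_capabilities`, and `create_lora_training_client` names reflect our reading of the current Python SDK, and the model name is illustrative; check the SDK reference for exact signatures.

```python
import tinker

# Connect to the Tinker service (assumes API credentials are configured
# in your environment).
service_client = tinker.ServiceClient()

# Print the models currently available on the service. The
# `get_server_capabilities()` call and its `supported_models` field are
# assumptions about the current SDK; consult the reference docs.
for model in service_client.get_server_capabilities().supported_models:
    print(model.model_name)

# Create a LoRA fine-tuning client on a post-trained MoE model
# (the model name here is illustrative; pick one from the listing below).
training_client = service_client.create_lora_training_client(
    base_model="Qwen/Qwen3-30B-A3B"
)
```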
Full Listing
Legend
Training Types
- 🐙 Base: Foundation models trained on raw text data, suitable for post-training research and custom fine-tuning.
- ⚡ Instruction: Models fine-tuned for following instructions and chat, optimized for fast inference.
- 💭 Reasoning: Models that always use chain-of-thought reasoning before their "visible" output that responds to the prompt.
- 🤔 Hybrid: Models that can operate in both thinking and non-thinking modes; the non-thinking mode requires a special renderer or argument that disables chain-of-thought.
Architecture
- 🧱 Dense: Standard transformer architecture with all parameters active
- 🔀 MoE: Mixture of Experts architecture with sparse activation
Model Sizes
- 🐣 Compact: 1B-4B parameters
- 🦆 Small: 8B parameters
- 🦅 Medium: 30B-32B parameters
- 🦖 Large: 70B+ parameters
Note that MoE models are much more cost-effective than dense models, since their cost is proportional to the number of active parameters rather than the total number of parameters.
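As a rough back-of-the-envelope illustration (the parameter counts below are hypothetical and not taken from the listing above):

```python
# Per-token compute cost scales with the parameters that are actually
# active for each token, not with the total parameter count.
dense_active = 70e9    # a 70B dense model activates all 70B parameters
moe_total = 235e9      # a hypothetical MoE with 235B total parameters...
moe_active = 22e9      # ...of which only ~22B are active per token

# Despite having ~3x the total parameters, the MoE model costs roughly
# a third as much per token as the 70B dense model.
print(f"relative cost: {moe_active / dense_active:.2f}")  # -> 0.31
```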