tinker_cookbook.hyperparam_utils

| Function | Description |
| --- | --- |
| `get_lora_lr_over_full_finetune_lr()` | Return the factor by which to scale the full fine-tuning learning rate to get the LoRA learning rate. |
| `get_lora_param_count()` | Get the number of parameters in the LoRA adapter. |
| `get_lr()` | Get a recommended learning rate for the given model. |
| `get_full_finetune_param_count()` | Get the total parameter count for a model by reading safetensors headers. |
| `get_full_finetune_lr_multiplier()` | Get a model-specific LR multiplier for full fine-tuning, proportional to 1/sqrt(…). |
| `get_lora_lr_multiplier()` | Get a model-specific multiplier for the LR when training with LoRA. |
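To make the relationship between these helpers concrete, here is a minimal, self-contained sketch of how a LoRA learning rate is derived from a full fine-tuning learning rate via a scale factor. The stub function and its return value are hypothetical stand-ins, not the actual `tinker_cookbook.hyperparam_utils` implementation or its recommended factor.

```python
def lora_lr_over_full_finetune_lr_stub() -> float:
    """Hypothetical stand-in for get_lora_lr_over_full_finetune_lr():
    the factor by which to scale the full fine-tuning LR to get the LoRA LR.
    The value 10.0 here is a placeholder, not the library's actual factor."""
    return 10.0


# Scale an assumed full fine-tuning learning rate to a LoRA learning rate.
full_finetune_lr = 2e-5  # example value, not a recommendation
lora_lr = full_finetune_lr * lora_lr_over_full_finetune_lr_stub()
print(f"LoRA LR: {lora_lr:.1e}")
```

In the real library, `get_lr()` would be the usual entry point for a recommended learning rate, with the multiplier helpers exposed for callers who want to apply the scaling themselves.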