tinker_cookbook.hyperparam_utils.get_lora_lr_over_full_finetune_lr

tinker_cookbook.hyperparam_utils.get_lora_lr_over_full_finetune_lr(model_name, lora_alpha)

Return the factor by which to scale the full fine-tuning learning rate to obtain the equivalent LoRA learning rate.

Parameters:

  • model_name (str) – HuggingFace model identifier (currently unused but kept for API consistency).
  • lora_alpha (int) – LoRA alpha scaling parameter (currently unused; the multiplier is fixed at 10).
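A minimal sketch of how this function might be applied. The stand-in below mirrors only the behavior documented above (a fixed multiplier of 10, with both arguments ignored); it is not the library's implementation, and the model name and learning rate are illustrative:

```python
def get_lora_lr_over_full_finetune_lr(model_name: str, lora_alpha: int) -> float:
    # Per the parameter notes above, both arguments are currently ignored
    # and the multiplier is fixed at 10.
    return 10.0

# Convert a known-good full fine-tuning LR into the equivalent LoRA LR.
full_ft_lr = 2e-5
factor = get_lora_lr_over_full_finetune_lr("meta-llama/Llama-3.1-8B", lora_alpha=32)
lora_lr = full_ft_lr * factor
```

In practice you would import the real function from `tinker_cookbook.hyperparam_utils` rather than defining it locally.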