Efficient Fine-Tuning of DeepScaleR-1.5B Without Increasing Parameters

#8 opened by HassanStar

What are the best methods for fine-tuning DeepScaleR-1.5B without increasing the number of parameters during inference? Would LoRA or other PEFT methods be effective, and what settings are recommended?

Agentica org

We don't have a specific recommendation here. For DeepScaleR's training we do full fine-tuning, but you can also try out LoRA and see if it works well.

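For anyone who wants to try this, here is a minimal sketch of a LoRA setup with Hugging Face's `peft` library. The repo id, target module names, and hyperparameters (`r`, `lora_alpha`, dropout) below are illustrative assumptions, not official Agentica recommendations. The relevant point for the original question is that LoRA adapters can be merged back into the base weights after training, so the parameter count at inference is unchanged.

```python
# Minimal LoRA fine-tuning sketch with the `peft` library.
# All hyperparameters are assumptions to illustrate the setup,
# not recommended settings for DeepScaleR-1.5B.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "agentica-org/DeepScaleR-1.5B-Preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                # adapter rank (assumed; tune for your task)
    lora_alpha=32,       # scaling factor, often set to ~2*r
    lora_dropout=0.05,
    # Attention projections in a Qwen2-style architecture; adjust if needed.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of 1.5B is trainable

# ... run training on `model` with your preferred trainer ...

# Merge the adapters back into the base weights so inference uses
# exactly the original parameter count, with no extra adapter layers.
merged = model.merge_and_unload()
merged.save_pretrained("deepscaler-1.5b-lora-merged")
```

After merging, the saved checkpoint loads like any regular `AutoModelForCausalLM`, so serving code does not need to change.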
Thanks, will try it soon!

Agentica org

Some people on Twitter have tried LoRA; apparently it does even better!

Could you provide a link?
