Let’s Fine-Tune your model for function-calling

We’re now ready to fine-tune our first model for function-calling 🔥.

How do we train our model for function-calling?

Answer: We need data.

Model training can be divided into 3 steps:

  1. The model is pretrained on a large quantity of data. The output of that step is a pre-trained model, for instance google/gemma-2-2b. It is a base model that only knows how to predict the next token, without strong instruction-following capabilities.

  2. The model then needs to be fine-tuned to follow instructions in order to be useful in a chat context. In this step, it can be trained by the model creators, the open-source community, you, or anyone else. For instance, google/gemma-2-2b-it is an instruction-tuned model trained by the Google team behind the Gemma project.

  3. The model can then be aligned to the creator’s preferences. For instance, a customer service chat model must never be impolite to customers.

Usually a complete product like Gemini or Mistral will go through all 3 steps, while the models you can find on Hugging Face may have passed through one or more of these steps.
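To make the difference concrete, here is a minimal sketch (the prompt and generation settings are illustrative) showing that a base checkpoint such as google/gemma-2-2b simply continues the input text rather than answering it:

```python
# Minimal sketch: a base model only predicts the next tokens of the input text.
# The prompt and generation settings below are illustrative.
from transformers import pipeline

base_pipe = pipeline("text-generation", model="google/gemma-2-2b")

prompt = "What is the capital of France?"
print(base_pipe(prompt, max_new_tokens=20)[0]["generated_text"])
# A base model tends to continue the text (for example with more questions)
# instead of answering, because it was never trained to follow instructions.
```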

In this tutorial, we will build a function-calling model based on google/gemma-2-2b-it. The base model is google/gemma-2-2b, which the Google team fine-tuned on instruction following, resulting in google/gemma-2-2b-it.

In this case we will take google/gemma-2-2b-it as our starting point rather than the base model, because the prior fine-tuning it has been through is important for our use case.

Since we want to interact with our model through conversations made of messages, starting from the base model would require more training so it could learn instruction following, chatting AND function-calling.

By starting from the instruct-tuned model, we minimize the amount of information that our model needs to learn.
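Concretely, the instruct-tuned checkpoint already comes with a chat template, so we can talk to it through a list of messages rather than raw text. The snippet below is a minimal sketch (the example message is arbitrary):

```python
# Minimal sketch: chatting with the instruct-tuned model through messages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": "What is the weather like in Paris?"},
]

# The chat template converts the messages into the prompt format Gemma was fine-tuned on.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```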

LoRA (Low-Rank Adaptation of Large Language Models)

LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters.

It works by inserting a smaller number of new weights as an adapter into the model to train. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share.

(Figure: LoRA inference)

LoRA works by adding pairs of rank decomposition matrices to Transformer layers, typically focusing on linear layers. During training, we will “freeze” the rest of the model and will only update the weights of those newly added adapters.

By doing so, the number of parameters that we need to train drops considerably as we only need to update the adapter’s weights.
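With the peft library, adding such adapters only takes a few lines. The hyperparameters below (rank, alpha, target modules) are illustrative, not the exact values used in the notebook:

```python
# Minimal sketch: wrapping the model with LoRA adapters using peft.
# The hyperparameters (r, lora_alpha, target_modules) are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

lora_config = LoraConfig(
    r=16,                     # rank of the decomposition matrices
    lora_alpha=32,            # scaling factor for the adapter updates
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # linear attention layers
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
# Only the adapter weights are trainable; the rest of the model is frozen.
peft_model.print_trainable_parameters()
```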

During inference, the input is passed through both the base model and the adapter, or the adapter weights can be merged with the base model, resulting in no additional latency overhead.
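With peft, both options look roughly like this (the adapter path is hypothetical):

```python
# Minimal sketch: using a trained LoRA adapter at inference time.
# "path/to/lora-adapter" is a hypothetical local or Hub path.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# Option 1: keep the adapter attached; the input flows through the base model and the adapter.
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")

# Option 2: fold the adapter weights into the base weights, so inference adds no extra latency.
model = model.merge_and_unload()
```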

LoRA is particularly useful for adapting large language models to specific tasks or domains while keeping resource requirements manageable. This helps reduce the memory required to train a model.

If you want to learn more about how LoRA works, you should check out this tutorial.

Fine-Tuning a model for Function-calling

You can access the tutorial notebook 👉 here.

Then, click on Open In Colab to be able to run it in a Colab Notebook.
