arXiv:2502.14502

How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?

Published on Feb 20 · Submitted by msalnikov on Feb 21
Abstract

The performance of Large Language Models (LLMs) on many tasks is greatly limited by the knowledge learned during pre-training and stored in the model's parameters. Low-rank adaptation (LoRA) is a popular and efficient training technique for updating LLMs or adapting them to specific domains. In this study, we investigate how new facts can be incorporated into an LLM using LoRA without compromising previously learned knowledge. We fine-tuned Llama-3.1-8B-Instruct using LoRA with varying amounts of new knowledge. Our experiments show that the best results are obtained when the training data contains a mixture of known and new facts. However, this approach is still potentially harmful because the model's performance on external question-answering benchmarks declines after such fine-tuning. When the training data is biased towards certain entities, the model tends to regress to a few overrepresented answers. In addition, we found that the model becomes more confident and refuses to provide an answer in only a few cases. These findings highlight the potential pitfalls of LoRA-based LLM updates and underscore the importance of training data composition and tuning parameters for balancing new knowledge integration with general model capabilities.
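As an illustration of the setup the abstract describes, the sketch below shows LoRA fine-tuning of Llama-3.1-8B-Instruct on a mixture of new and already-known facts with the Hugging Face peft and transformers libraries. The dataset contents, LoRA rank, target modules, and hyperparameters are assumptions for illustration, not values taken from the paper.

```python
# Hypothetical sketch: LoRA fine-tuning on a mix of new and known facts.
# All data and hyperparameters below are illustrative placeholders.
from datasets import Dataset, concatenate_datasets
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach a LoRA adapter to the attention projections (assumed rank/alpha).
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# Training mixture: new facts plus facts the model already answers correctly.
new_facts = Dataset.from_dict(
    {"text": ["Question: <fact the model does not know> Answer: <answer>"]})
known_facts = Dataset.from_dict(
    {"text": ["Question: What is the capital of France? Answer: Paris"]})
train_ds = concatenate_datasets([new_facts, known_facts]).shuffle(seed=0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-knowledge",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The authors' actual training code is linked in the comments below (https://github.com/AIRI-Institute/knowledge-packing).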

Community

Paper author · Paper submitter

Discover the fascinating interplay between knowledge integration and model performance in this paper, which explores how much new information can be packed into LoRA adapters for LLMs without compromising their core capabilities.

Github: https://github.com/AIRI-Institute/knowledge-packing


This is extremely important. The OpenAI team should read this, as should other AI companies.

What are the key takeaways?

Thanks

Very interesting

Sorry if I'm misrepresenting the figures, but my interpretation is that ideally you should NOT fine-tune only on data the model already knows, since that gave the worst performance. Ideally you should fine-tune with paraphrases of the training data, but fine-tuning without any information the model already knows is still better than fine-tuning with it, correct? Thanks for this research!


Thanks for your interest in our work and your comment!

Usually we fine-tune a model for a specific task, such as classification, and we don't care about performance on tasks other than our own unless we are adding something completely new. When it comes to adding something completely new, we show that it's crucial to add either paraphrases or HighlyKnown samples to the training set. The choice between the two depends on the other desired outcomes: adding paraphrases hurts reasoning abilities on external benchmarks less, while adding HighlyKnown samples makes the model forget fewer of the facts it knew before training. Either way, adding paraphrases or HighlyKnown samples is better than adding nothing at all.

As we state in the paper, the intuition behind this is as follows: when a model learns new singular knowledge as a simple sentence, it learns it without any "inner structure". But if we augment it with paraphrases or HighlyKnown samples, the model retains the new knowledge structurally, since it represents the HighlyKnown elements not as simple sentences like "Paris is the capital of France", but through its "inner space of capitals" and "inner space of countries". Adding new knowledge in this way is less disruptive than simply learning singular knowledge (Zeyuan Allen-Zhu and Yuanzhi Li. 2024. Physics of Language Models: Part 3.2, Knowledge Manipulation. Preprint, arXiv:2309.14402).
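A minimal sketch (not the authors' code) of the two mixing strategies described in this reply; `paraphrase_fn`, `highly_known`, and the 1:k mixing ratio are hypothetical placeholders.

```python
# Hypothetical sketch of composing the training set for new-knowledge injection.
import random

def build_train_set(new_facts, paraphrase_fn=None, highly_known=None, k=3):
    """Mix each new fact with k auxiliary samples (paraphrases or HighlyKnown facts)."""
    train = []
    for fact in new_facts:
        train.append(fact)
        if paraphrase_fn is not None:
            # Strategy 1: add paraphrases of the new fact
            # (hurts reasoning on external benchmarks less).
            train.extend(paraphrase_fn(fact, n=k))
        elif highly_known is not None:
            # Strategy 2: add facts the model already answers reliably
            # (makes the model forget fewer previously known facts).
            train.extend(random.sample(highly_known, min(k, len(highly_known))))
    random.shuffle(train)
    return train
```

Under this sketch, the paper's finding amounts to preferring the paraphrase strategy when external reasoning benchmarks matter most, and the HighlyKnown strategy when retaining previously known facts matters most.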
