---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
---

# Model Card for LoRI-S_code_llama3_rank_32

This model is part of [LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation](https://arxiv.org/abs/2504.07448).

**Abstract:** Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference.

## Model Details

### Model Description
LoRI (LoRA with Reduced Interference) is a simple yet effective variant of Low-Rank Adaptation (LoRA) for fine-tuning Large Language Models (LLMs). It improves efficiency and performance by freezing projection matrices (`A`) as random projections and sparsifying matrices (`B`) using task-specific masks. This design significantly reduces trainable parameters while maintaining strong task performance. LoRI also minimizes cross-task interference in adapter merging and supports continual learning by mitigating catastrophic forgetting through sparsity.
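
To make the idea concrete, the sketch below is an illustrative PyTorch-style rendering of a LoRI-adapted linear layer (frozen random `A`, sparsely masked `B`). It is a minimal sketch for explanation only, not the project's actual implementation; the class and variable names are made up for illustration.

```python
import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    """Illustrative LoRI-style layer: y = W x + scale * (mask * B) A x.

    `A` is a frozen random projection; only the unmasked entries of `B` are trained.
    Simplified sketch, not the official LoRI implementation.
    """

    def __init__(self, base: nn.Linear, rank: int = 32, scale: float = 1.0):
        super().__init__()
        self.base = base                      # frozen pretrained projection W
        self.base.requires_grad_(False)
        d_in, d_out = base.in_features, base.out_features

        # A: frozen random projection (never updated)
        self.A = nn.Parameter(torch.randn(rank, d_in) / rank ** 0.5, requires_grad=False)
        # B: trainable low-rank matrix, initialized to zero as in LoRA
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Task-specific binary mask over B (all ones here; LoRI-S would use a ~90%-sparse mask)
        self.register_buffer("mask", torch.ones_like(self.B))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (self.mask * self.B) @ self.A   # sparse low-rank update: (mask * B) A
        return self.base(x) + self.scale * (x @ delta.T)
```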

-   **Developed by:** Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein
-   **Shared by:** tomg-group-umd
-   **Model type:** Parameter-Efficient Fine-Tuning (PEFT) adapter (LoRA variant)
-   **Language(s) (NLP):** English
-   **License:** Apache-2.0
-   **Finetuned from model:** `meta-llama/Meta-Llama-3-8B`

### Model Sources
-   **Repository:** https://github.com/juzhengz/LoRI/
-   **Paper:** https://arxiv.org/abs/2504.07448
-   **Project Page:** https://juzhengz.github.io/
-   **Hugging Face Collection:** https://huggingface.co/collections/tomg-group-umd/lori-adapters-67f795549d792613e1290011

## Uses

### Direct Use
LoRI adapters can be directly loaded with a compatible base LLM (e.g., `meta-llama/Meta-Llama-3-8B`) using the `peft` library. This model, `LoRI-S_code_llama3_rank_32`, is specifically fine-tuned for code generation tasks. LoRI is designed for efficient fine-tuning across various tasks including natural language understanding, mathematical reasoning, code generation, and safety alignment, and supports effective adapter merging and continual learning.

### Downstream Use
LoRI can be integrated into larger AI systems and applications requiring efficient multi-task learning or continual adaptation of LLMs. Its reduced cross-task interference makes it suitable for complex scenarios where multiple capabilities are needed from a single adapter.

### Out-of-Scope Use
This model is designed for text-based generation and understanding tasks, specifically in the context of code generation. Using it for tasks outside of its trained modalities, for applications requiring very high precision in domains not covered by its training data, or for generating harmful content is not recommended.

## Bias, Risks, and Limitations
As an adapter fine-tuned on top of a large language model, this model may inherit biases present in the base model (`meta-llama/Meta-Llama-3-8B`) and in the fine-tuning datasets. Although the LoRI paper includes safety alignment tasks, comprehensive evaluation for all potential risks is still recommended.

### Recommendations
Users should be aware of the inherent biases and limitations of large language models. It is recommended to perform further evaluation in specific deployment contexts and to implement appropriate safeguards, especially in sensitive applications.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (e.g., Llama-3-8B)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Load the LoRI adapter on top of the base model
# This example loads the LoRI-S adapter for code generation, rank 32
adapter = PeftModel.from_pretrained(base_model, "tomg-group-umd/LoRI-S_code_llama3_rank_32")

# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Example usage (for text generation with the adapted model)
# from transformers import pipeline
# generator = pipeline("text-generation", model=adapter, tokenizer=tokenizer)
# print(generator("def fibonacci(n):", max_new_tokens=50))
```
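
If the base model weights are not quantized, the adapter can usually be merged into them so that inference incurs no extra adapter computation. This uses standard `peft` functionality (not LoRI-specific) and continues from the `adapter` object created above; the output directory name is illustrative:

```python
# Optional: merge the adapter weights into the base model for faster inference
merged_model = adapter.merge_and_unload()
merged_model.save_pretrained("llama3-8b-lori-code-merged")  # illustrative path
```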

## Training Details

### Training Data
LoRI adapters are trained on various datasets relevant to different tasks. This specific adapter (`LoRI-S_code_llama3_rank_32`) was trained for code generation using the **CodeAlpaca dataset**. Other datasets mentioned in the paper/repo include GSM8K (mathematical reasoning) and SaferPaca (safety alignment).

### Training Procedure
LoRI employs a two-stage training procedure:
1.  **LoRI-D (Discovery):** Initial training where projection matrices `A` are frozen as random projections, and matrices `B` are trained to discover task-specific masks.
2.  **LoRI-S (Sparse):** Continues training using the sparse masks extracted from LoRI-D, typically with 90% sparsity, further reducing trainable parameters.
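
The exact mask-selection criterion is defined in the paper and repository. As a rough illustration only (an assumption, not the official extraction code), a magnitude-based mask that keeps the largest 10% of entries in `B` could be built like this:

```python
import torch

def magnitude_mask(B: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Keep the largest-magnitude (1 - sparsity) fraction of entries in B.

    Illustrative only: the official LoRI-D -> LoRI-S mask extraction may differ.
    """
    k = max(1, int(round(B.numel() * (1.0 - sparsity))))               # entries to keep
    threshold = B.abs().flatten().kthvalue(B.numel() - k + 1).values   # k-th largest magnitude
    return (B.abs() >= threshold).to(B.dtype)

# Example: a 4096 x 32 matrix from LoRI-D, masked to 90% sparsity for LoRI-S
B = torch.randn(4096, 32)
mask = magnitude_mask(B, sparsity=0.9)
print(f"kept {int(mask.sum())} of {mask.numel()} entries")
```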

Training is implemented using [Fully Sharded Data Parallel (FSDP)](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html) to support multi-GPU environments.

#### Training Hyperparameters
-   **Adapter rank:** 32 for this model; LoRI adapters are also released at rank 64 (the rank recorded in the shipped adapter config can be inspected as shown after this list).
-   **Sparsity:** Up to 90% in LoRI-S stage.
-   **Training regime:** Mixed precision (e.g., fp16 or bf16).
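
To confirm the settings recorded in the released checkpoint, the adapter config can be inspected with `peft`, assuming the adapter ships a standard PEFT `adapter_config.json` (which is how it is loaded in the getting-started example):

```python
from peft import PeftConfig

# Inspect the adapter configuration shipped with the checkpoint
config = PeftConfig.from_pretrained("tomg-group-umd/LoRI-S_code_llama3_rank_32")
print(config)                       # full config, including the base model name
print(getattr(config, "r", None))   # LoRA-style rank field, if present
```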

#### Speeds, Sizes, Times
LoRI uses up to **95% fewer trainable parameters** than traditional LoRA while maintaining strong task performance.
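
As a rough back-of-the-envelope illustration (assumed numbers, not figures from the paper): for a single 4096x4096 projection at rank 32, LoRA trains both `A` and `B`, while LoRI-S trains only the ~10% of `B` left unmasked:

```python
d_in = d_out = 4096          # illustrative hidden size for one projection
rank, sparsity = 32, 0.9

lora_params = rank * (d_in + d_out)               # A and B both trained: 262,144
lori_params = int(rank * d_out * (1 - sparsity))  # only unmasked entries of B: ~13,107

print(f"LoRA: {lora_params:,}  LoRI-S: {lori_params:,}  "
      f"reduction: {1 - lori_params / lora_params:.0%}")  # roughly 95% fewer
```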

## Evaluation

### Testing Data, Factors & Metrics
#### Testing Data
For code generation tasks, evaluation was performed on the **HumanEval** benchmark. LoRI was also evaluated across natural language understanding, mathematical reasoning, and safety alignment tasks on various datasets.

#### Factors
The paper does not report evaluations disaggregated by specific factors (e.g., subpopulations).

#### Metrics
For code generation, the primary metric is typically **pass@k**, which measures the percentage of generated code samples that pass unit tests for a given problem.
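
For reference, pass@k is usually computed with the unbiased estimator from the HumanEval/Codex paper, given `n` samples per problem of which `c` pass. The snippet below is general background on that estimator, not the evaluation script used for LoRI:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per problem, 30 of them pass -> estimated pass@10
print(f"{pass_at_k(n=200, c=30, k=10):.3f}")
```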

### Results
LoRI consistently outperforms full fine-tuning and existing PEFT methods, while using significantly fewer trainable parameters. It also demonstrates reduced cross-task interference in adapter merging and improved resistance to catastrophic forgetting in continual learning. For detailed quantitative results, please refer to the [paper](https://arxiv.org/abs/2504.07448).

## Technical Specifications

### Model Architecture and Objective
LoRI modifies the standard LoRA architecture by freezing the projection matrices `A` as random projections and sparsifying the matrices `B` using task-specific masks. This design aims to achieve substantial reduction in trainable parameters, minimize cross-task interference between different adaptations, and support continual learning by mitigating catastrophic forgetting.

### Compute Infrastructure
#### Hardware
Training and inference are supported on multi-GPU environments, leveraging technologies like FSDP.

#### Software
The project builds on `PyTorch`, `transformers`, and `peft`.

## Citation
If you use LoRI in your work, please cite:

**BibTeX:**
```bibtex
@article{zhang2025lori,
  title={LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation},
  author={Zhang, Juzheng and You, Jiacheng and Panda, Ashwinee and Goldstein, Tom},
  journal={arXiv preprint arXiv:2504.07448},
  year={2025}
}
```

**APA:**
Zhang, J., You, J., Panda, A., & Goldstein, T. (2025). LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation. *arXiv preprint arXiv:2504.07448*.

## Model Card Authors
Niels Rogge (https://huggingface.co/nielsr)

## Model Card Contact
[email protected]

### Framework versions

- PEFT 0.12.0
- Transformers (compatible with recent versions)
- PyTorch (compatible with recent versions)