---
base_model:
- princeton-nlp/Llama-3-8B-ProLong-512k-Instruct
license: apache-2.0
language:
- en
datasets:
- chtmp223/CLIPPER
---

# ProLong-512k-8B-CLIPPER
ProLong-512k-8B-CLIPPER is a fine-tuned version of [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct), trained with supervised fine-tuning on the [chtmp223/CLIPPER](https://huggingface.co/datasets/chtmp223/CLIPPER) dataset.
Please see [our paper](https://arxiv.org/abs/2502.14854) for more details on the method.
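
As a quick-start sketch, the model can be loaded with πŸ€— Transformers as below. The repo id `chtmp223/ProLong-512k-8B-CLIPPER` is assumed from the model name, and the prompt is a placeholder; adjust both as needed.

```python
# Minimal loading sketch with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chtmp223/ProLong-512k-8B-CLIPPER"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within a single A100-80GB
    device_map="auto",
)

# Llama-3-style chat template; the exact prompt format from the paper may differ.
messages = [{"role": "user", "content": "Summarize the plot of the book above."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```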

## πŸ“’ Model Details

### Model Description

- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct)

### Model Sources

- **Repository:** [GitHub repository](https://github.com/chtmp223/CLIPPER)
- **Paper:** [https://arxiv.org/abs/2502.14854](https://arxiv.org/abs/2502.14854)


## πŸ’» Training Details

### Training Data

[chtmp223/CLIPPER](https://huggingface.co/datasets/chtmp223/CLIPPER)

### Training Procedure

| **Configuration**                 | **Value**   |
|-----------------------------------|-------------|
| Hardware (training and inference) | 8 × A100    |
| Tracking                          | wandb       |
| batch_size                        | 16          |
| gradient_checkpointing            | True        |
| learning_rate                     | 1.0e-6      |
| lr_scheduler_type                 | cosine      |
| max_length                        | 131072      |
| num_train_epochs                  | 1           |
| optim                             | adamw_torch |

#### Software

Training code is adapted from [https://github.com/princeton-nlp/ProLong](https://github.com/princeton-nlp/ProLong).
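
For readability only, the hyperparameters in the table above map onto Hugging Face `TrainingArguments` roughly as sketched below. This is not the actual training entry point (that is the adapted ProLong codebase), and the device/accumulation split and precision are assumptions.

```python
# Illustrative sketch: the table's hyperparameters as Transformers
# TrainingArguments. Treat this as a readable summary, not the exact setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="prolong-512k-8b-clipper",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=1.0e-6,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    gradient_checkpointing=True,
    # 8 GPUs x per-device batch 1 x grad accumulation 2 = global batch size 16
    # (the exact split across devices/accumulation is an assumption).
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    bf16=True,       # assumed precision on A100s
    report_to="wandb",
)
# max_length=131072 is enforced when tokenizing/packing the training
# examples, not via TrainingArguments.
```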

## πŸ€— Inference
Inference is done with [vLLM](https://github.com/vllm-project/vllm) on a single A100-80GB.
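
A minimal vLLM sketch along those lines is shown below. The repo id is assumed from the model name, and `max_model_len` is capped at the 131,072-token fine-tuning length rather than the base model's 512k maximum.

```python
# Minimal vLLM inference sketch for a single A100-80GB.
from vllm import LLM, SamplingParams

llm = LLM(
    model="chtmp223/ProLong-512k-8B-CLIPPER",  # assumed repo id
    max_model_len=131072,  # matches the fine-tuning max_length
)

sampling_params = SamplingParams(temperature=0.0, max_tokens=512)

# Placeholder prompt; in practice the long book text precedes the question.
prompt = "Based on the book above, is the following claim true or false? ..."
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```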


## πŸ“œ Citation 

```
@misc{pham2025clippercompressionenableslongcontext,
      title={CLIPPER: Compression enables long-context synthetic data generation}, 
      author={Chau Minh Pham and Yapei Chang and Mohit Iyyer},
      year={2025},
      eprint={2502.14854},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14854}, 
}
```