Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


TowerInstruct-7B-v0.1 - GGUF
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TowerInstruct-7B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q2_K.gguf) | Q2_K | 2.36GB |
| [TowerInstruct-7B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [TowerInstruct-7B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [TowerInstruct-7B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [TowerInstruct-7B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [TowerInstruct-7B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q3_K.gguf) | Q3_K | 3.07GB |
| [TowerInstruct-7B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [TowerInstruct-7B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [TowerInstruct-7B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [TowerInstruct-7B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB |
| [TowerInstruct-7B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [TowerInstruct-7B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [TowerInstruct-7B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q4_K.gguf) | Q4_K | 3.8GB |
| [TowerInstruct-7B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [TowerInstruct-7B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB |
| [TowerInstruct-7B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB |
| [TowerInstruct-7B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [TowerInstruct-7B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q5_K.gguf) | Q5_K | 4.45GB |
| [TowerInstruct-7B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [TowerInstruct-7B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB |
| [TowerInstruct-7B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf/blob/main/TowerInstruct-7B-v0.1.Q6_K.gguf) | Q6_K | 5.15GB |

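
As a minimal sketch (not part of the original card), here is one way a quant from the table above could be run locally with `llama-cpp-python`, assuming `pip install llama-cpp-python huggingface-hub` and enough RAM for the chosen file (about 3.8GB for Q4_K_M). The prompt uses the ChatML format described in the model card below.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quant files listed in the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/Unbabel_-_TowerInstruct-7B-v0.1-gguf",
    filename="TowerInstruct-7B-v0.1.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

# TowerInstruct was trained with the ChatML prompt format (see "Prompt Format" below).
prompt = (
    "<|im_start|>user\n"
    "Translate the following text from Portuguese into English.\n"
    "Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\n"
    "English:<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```

Smaller quants trade quality for memory; Q4_K_M is a common middle ground, while Q6_K stays closest to the original weights.
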
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---

# Model Card for TowerInstruct-7B-v0.1

## Model Details

### Model Description

TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report.

- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0; Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation (sentence- and paragraph-level)
- Automatic Post-Editing
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions

You can find the dataset and all of its data sources in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```

### Out-of-Scope Use

The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.

## Bias, Risks, and Limitations

TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

## Prompt Format

TowerInstruct-v0.1 was trained using the ChatML prompt template without any system prompts. An example follows:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```

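
As a small sketch (not from the original card), the tokenizer's built-in chat template reproduces this format for multi-turn conversations, so the tags never need to be written by hand; the example messages below are illustrative only.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Unbabel/TowerInstruct-7B-v0.1")
messages = [
    {"role": "user", "content": "Translate 'obrigado' into English."},
    {"role": "assistant", "content": "Thank you."},
    {"role": "user", "content": "And 'de nada'?"},
]
# add_generation_prompt=True appends the opening <|im_start|>assistant tag
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
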
### Supervised tasks

The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We used multiple prompt templates for each task. While different prompts may yield different outputs, differences in downstream performance should be minimal.

## Training Details

### Training Data

Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).

#### Training Hyperparameters

The following hyperparameters were used during training:

- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048

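
As a rough illustration only (an assumption, not the authors' configuration; the badge below indicates the actual run used Axolotl), these values map onto 🤗 Transformers' `TrainingArguments` roughly as follows. The device batch size / gradient accumulation split is hypothetical, chosen so the product matches the documented total of 256.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="towerinstruct-sft",    # hypothetical output path
    per_device_train_batch_size=8,     # assumption; only the total (256) is documented
    gradient_accumulation_steps=32,    # 8 * 32 = 256 total train batch size
    learning_rate=7e-6,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=4,
)
# max_seq_length (2048) is not a TrainingArguments field; it is typically set
# on the data pipeline or trainer (e.g., TRL's SFTTrainer max_seq_length).
```
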
## Citation

```bibtex
@misc{tower_llm_2024,
      title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
      author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
      year={2024},
      eprint={2402.17733},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)