Taka008 committed on
Commit a846574 · verified · 1 Parent(s): 723173e

Update README.md

Files changed (1)
  1. README.md +1 -160
README.md CHANGED
@@ -25,34 +25,7 @@ inference: false
 
 LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
 
- This repository provides **llm-jp-3-44---
- license: apache-2.0
- language:
- - en
- - ja
- programming_language:
- - C
- - C++
- - C#
- - Go
- - Java
- - JavaScript
- - Lua
- - PHP
- - Python
- - Ruby
- - Rust
- - Scala
- - TypeScript
- pipeline_tag: text-generation
- library_name: transformers
- inference: false
- ---
- # llm-jp-3-150m-instruct2
-
- LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
-
- This repository provides **llm-jp-3-150m-instruct2** model.
+ This repository provides **llm-jp-3-440m-instruct2** model.
 For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
 - [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
 - [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
@@ -176,138 +149,6 @@ The models released here are in the early stages of our research and development
 llm-jp(at)nii.ac.jp
 
 
- ## License
-
- [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
-
- ## Model Card Authors
-
- *The names are listed in alphabetical order.*
-
- Hirokazu Kiyomaru and Takashi Kodama.0m-instruct2** model.
- For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- - [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- - [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
-
-
- Checkpoints format: Hugging Face Transformers
-
-
- ## Required Libraries and Their Versions
-
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
-
- ## Usage
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-150m-instruct2")
- model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-150m-instruct2", device_map="auto", torch_dtype=torch.bfloat16)
- chat = [
- {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
- {"role": "user", "content": "自然言語処理とは何か"},
- ]
- tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
- with torch.no_grad():
- output = model.generate(
- tokenized_input,
- max_new_tokens=100,
- do_sample=True,
- top_p=0.95,
- temperature=0.7,
- repetition_penalty=1.05,
- )[0]
- print(tokenizer.decode(output))
- ```
-
-
- ## Model Details
-
- - **Model type:** Transformer-based Language Model
- - **Total seen tokens:** 2.1T tokens
-
- |Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- |150M|12|512|8|4096|101,874,688|50,344,448|
- |440M|16|1024|8|4096|203,749,376|243,303,424|
- |980M|20|1536|8|4096|305,624,064|684,258,816|
- |1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
- |3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
- |7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
- |13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
- |172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
-
- ## Tokenizer
-
- The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
- The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
- Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
-
- ## Datasets
-
- ### Pre-training
-
- The models have been pre-trained using a blend of the following datasets.
-
- | Language | Dataset | Tokens|
- |:---|:---|---:|
- |Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
- ||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
- ||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
- ||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
- ||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
- |English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
- ||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
- ||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
- ||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
- ||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
- ||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
- ||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
- |Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
- |Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
- |Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
-
- ### Post-training
-
- We have fine-tuned the pre-trained checkpoint with supervised fine-tuning.
-
- #### Supervised Fine-tuning
- The datasets used for supervised fine-tuning are as follows:
-
- | Language | Dataset | Description |
- |:---|:---|:---|
- |Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
- | |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
- | |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
- | |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
- | |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
- | |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
- | |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
- |English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
- | |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
- |Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
-
-
- ## Evaluation
-
- Detailed evaluation results are reported in this blog.
-
-
- ## Risks and Limitations
-
- The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
-
-
- ## Send Questions to
-
- llm-jp(at)nii.ac.jp
-
-
 ## License
 
 [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
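The usage snippet visible in the removed card body above is written against the 150m checkpoint id; the same call pattern presumably applies to the model this commit points the card at. A minimal sketch, assuming the Hub repository id `llm-jp/llm-jp-3-440m-instruct2` and the dependency versions listed in the card:

```python
# Minimal sketch mirroring the card's own usage example, with the repository id
# swapped to the 440m-instruct2 checkpoint this commit refers to (an assumption
# about the Hub id, not part of the diff itself).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-440m-instruct2")
model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-440m-instruct2", device_map="auto", torch_dtype=torch.bfloat16
)

# Build the prompt with the model's chat template. The system message says:
# "Below is an instruction that describes a task. Write a response that
# appropriately fulfills the request."; the user asks "What is natural language processing?"
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, tokenize=True, return_tensors="pt"
).to(model.device)

# Sample a response with the decoding settings shown in the card.
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```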
 