Ppoyaa committed (verified)
Commit e8a0e4f · 1 Parent(s): 0b66e57

Update README.md

Files changed (1): README.md +30 -41
README.md CHANGED
@@ -1,47 +1,36 @@
---
- base_model:
- - Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.3
- - ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
library_name: transformers
tags:
- mergekit
- merge
-
+ license: apache-2.0
---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.3](https://huggingface.co/Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.3)
- * [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
-         layer_range: [0, 32]
-       - model: Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.3
-         layer_range: [0, 32]
- merge_method: slerp
- base_model: Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.3
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5
- dtype: bfloat16
- ```
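For intuition about the `merge_method: slerp` line above: spherical linear interpolation blends each pair of corresponding weight tensors along the great circle between them instead of a straight line, and the per-filter `t` lists (e.g. `[0, 0.5, 0.3, 0.7, 1]`) are expanded into a per-layer gradient, so attention and MLP weights get different mixing ratios at different depths. A minimal sketch of the interpolation itself, illustrative rather than mergekit's actual code:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, measured on the unit sphere.
    cos_omega = torch.clamp(
        (a_flat / (a_flat.norm() + eps)) @ (b_flat / (b_flat.norm() + eps)), -1.0, 1.0
    )
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# t = 0 returns the first tensor unchanged and t = 1 the second;
# the config's trailing `- value: 0.5` is the default mix for all other tensors.
```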
 
+ # LlumiLuminRP-8B-Instruct-262k-v0.4
+ ![1715297915105.png](https://cdn-uploads.huggingface.co/production/uploads/65f158693196560d34495d54/SoMXojKFU1ZseLPfalfy0.png)
+ ***
+ ## Description
+ An update to v0.3 aimed at improving coherence and the roleplaying experience. This model is the result of merging several Llama-3-8B RP/ERP models and uses a 262k-token context window.
+ ***
+ ## 💻 Usage
+ ```python
+ # pip install -qU transformers accelerate
+
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.4"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ # Format the chat with the model's template, then generate with sampling.
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
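Since the card's headline claim is the 262k context, it can be worth confirming what the uploaded config actually declares before sending very long prompts. A small check, assuming only that the repo's `config.json` follows the usual Llama-3 layout:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Ppoyaa/LlumiLuminRP-8B-Instruct-262k-v0.4")
# A 262k-context variant is expected to report roughly 262144 here,
# but the authoritative value is whatever the repo's config.json says.
print(config.max_position_embeddings)
```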