Darkknight535 committed on
Commit
6885231
·
verified ·
1 Parent(s): fa128db

Upload 2 files

Files changed (3)
  1. .gitattributes +1 -0
  2. 1s.png +3 -0
  3. README.md +62 -23
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tokenizer.json filter=lfs diff=lfs merge=lfs -text
+1s.png filter=lfs diff=lfs merge=lfs -text
1s.png ADDED

Git LFS Details

  • SHA256: ba7fffff9fe0fec8f8ecf2945966206bd0a6e2b87ad977a37eaf8b92b987c51f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.91 MB
README.md CHANGED
@@ -1,30 +1,57 @@
 ---
-base_model:
-- IntervitensInc/Mistral-Nemo-Base-2407-chatml
-- inflatebot/MN-12B-Mag-Mell-R1
-- LatitudeGames/Wayfarer-12B
-- TheDrummer/Rocinante-12B-v1.1
 library_name: transformers
 tags:
-- mergekit
-- merge
-
+- not-for-all-audiences
+language:
+- en
 ---
-# merge
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [IntervitensInc/Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml) as a base.
-
-### Models Merged
-
-The following models were included in the merge:
-* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
-* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
-* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
+### MN-StarAI-RP-12B
+This is a merged language model.
+
+![IMG](https://huggingface.co/Darkknight535/MN-StarAI-RP-12B/resolve/main/1s.png)
+
+### Instruct Template
+Use the default ChatML instruct and context presets in SillyTavern.
+
+### Samplers
+
+## Creative
+```
+Temp : 1
+Min P : 0.05
+Repetition Penalty : 1.05
+
+[And everything else neutral]
+```
+
+## Normal
+```
+Temp : 0.6 - 0.8
+Min P : 0.1
+Repetition Penalty : 1.1
+
+[And everything else neutral]
+```
 
+### Key Points
 
+- Creative (swipes vary wildly)
+- Coherent
+- Has both negative and positive bias, with roughly a 50/50 chance of either
+- Follows prompts well
+- Context length: 16K max (since it is Nemo-based)
+- Summarizes and generates image prompts well (the prompt for the image above was written by this model during a roleplay), likely thanks to the Nemo-ChatML base model
 
+### Instruct Prompt
+```
+You are {{char}}. Following {{char}}'s personality: [{{personality}}]. Continue writing this story and portray characters realistically. Keep your responses short, 1-6 sentences. Do not write {{user}}'s actions or dialogue. Give characters emotional depth according to their roles. Prefix the names of additional characters before speaking as them.
+```
+
+### Feedback
+[Leave feedback here](https://huggingface.co/Darkknight535/MN-StarAI-RP-12B/discussions/1)
 
 ### Configuration
 
@@ -32,7 +59,11 @@ The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
-- model: TheDrummer/Rocinante-12B-v1.1
+- model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
+  parameters:
+    density: 0.7
+    weight: 0.5
+- model: LatitudeGames/Wayfarer-12B
   parameters:
     density: 0.7
     weight: 0.5
@@ -40,12 +71,20 @@ models:
   parameters:
     density: 0.9
     weight: 1
-- model: LatitudeGames/Wayfarer-12B
+- model: nothingiisreal/MN-12B-Celeste-V1.9
+  parameters:
+    density: 0.9
+    weight: 1
+- model: TheDrummer/UnslopNemo-12B-v4
   parameters:
     density: 0.5
     weight: 0.7
-merge_method: dare_ties
+merge_method: ties
 base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
 tokenizer_source: base
-
-```
+parameters:
+  int8_mask: true
+  rescale: true
+  normalize: false
+dtype: bfloat16
+```
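To reproduce the merge, the YAML in the diff above can be fed to mergekit's standard CLI. A sketch, assuming a local `config.yaml` copy of that configuration; the output directory name and the `--cuda` flag are illustrative choices, not from the commit:

```shell
# Save the YAML configuration from the README above as config.yaml, then:
pip install mergekit

# mergekit-yaml reads the config and writes the merged weights
# to the given output directory; --cuda enables GPU acceleration.
mergekit-yaml config.yaml ./MN-StarAI-RP-12B --cuda
```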
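The two sampler presets in the new README translate directly into generation parameters. A minimal sketch, assuming the common Hugging Face `generate()` keyword names (`min_p` support depends on the transformers version; the dictionary names are hypothetical):

```python
# "Creative" and "Normal" presets from the README, expressed as
# keyword arguments for a typical `model.generate(**preset)` call.
CREATIVE = {
    "temperature": 1.0,          # Temp: 1
    "min_p": 0.05,               # Min P: 0.05
    "repetition_penalty": 1.05,  # Repetition Penalty: 1.05
    "do_sample": True,           # everything else left neutral
}

NORMAL = {
    "temperature": 0.7,          # README gives a 0.6 - 0.8 range
    "min_p": 0.1,
    "repetition_penalty": 1.1,
    "do_sample": True,
}
```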
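Since the card recommends SillyTavern's default ChatML preset, the instruct prompt above ends up wrapped in ChatML turn markers before reaching the model. A minimal sketch of that wrapping; the helper function and the example strings are hypothetical, and a frontend would substitute `{{char}}`/`{{personality}}` before this step:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system prompt and a single user turn in ChatML
    markers, leaving the assistant turn open for the model."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# The system string would be the README's instruct prompt with the
# placeholders already filled in by the frontend.
prompt = chatml_prompt(
    "You are Alice. Continue writing this story and portray characters realistically.",
    "Hi there!",
)
```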