janboe91 committed · Commit 01dc5f1 · verified · 1 Parent(s): 9976f46

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +149 -0
README.md ADDED
@@ -0,0 +1,149 @@
---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
tags:
- merge
- mlx
- mlx-my-repo
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
pipeline_tag: text-generation
base_model: Epiculous/Violet_Twilight-v0.2
model-index:
- name: Violet_Twilight-v0.2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 45.32
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.94
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 2.72
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 2.13
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
---

# janboe91/Violet_Twilight-v0.2-Q8-mlx

The model [janboe91/Violet_Twilight-v0.2-Q8-mlx](https://huggingface.co/janboe91/Violet_Twilight-v0.2-Q8-mlx) was converted to MLX format from [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) using mlx-lm version **0.21.5**.
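
A conversion like this can be reproduced with mlx-lm's `convert` helper. The snippet below is a minimal sketch assuming the standard `mlx_lm.convert` API; the output directory name and quantization settings are assumptions inferred from the "Q8" suffix, not a record of the original command.

```python
# Sketch of an 8-bit MLX conversion with mlx-lm.
# The output path and quantization settings are assumptions,
# not the exact options used to produce this repository.
from mlx_lm import convert

convert(
    hf_path="Epiculous/Violet_Twilight-v0.2",  # source weights on the Hugging Face Hub
    mlx_path="Violet_Twilight-v0.2-Q8-mlx",    # local output directory (assumed name)
    quantize=True,                             # quantize during conversion
    q_bits=8,                                  # 8-bit weights, matching the "Q8" suffix
)
```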

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer from the Hub (or a local path).
model, tokenizer = load("janboe91/Violet_Twilight-v0.2-Q8-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# model sees the conversation format it expects.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
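
To control output length and sampling, `generate` accepts extra keyword arguments in recent mlx-lm releases. The sketch below assumes the `max_tokens` and `sampler` keywords and the `make_sampler` helper from `mlx_lm.sample_utils` are available in the installed version; treat it as a starting point rather than a guaranteed API.

```python
# Sketch of generation with an explicit token budget and sampling settings.
# Assumes mlx-lm's `make_sampler` helper and the `max_tokens`/`sampler`
# keyword arguments, as in recent releases; adjust to your installed version.
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("janboe91/Violet_Twilight-v0.2-Q8-mlx")

sampler = make_sampler(temp=0.7, top_p=0.9)  # temperature + nucleus sampling
response = generate(
    model,
    tokenizer,
    prompt="hello",
    max_tokens=256,   # cap the number of generated tokens
    sampler=sampler,
    verbose=True,
)
```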