Severian committed on
Commit 0b356d0 · verified · 1 Parent(s): 7cb371d

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -29,7 +29,7 @@ This model is based on **Mistral-Small-24b** and has been fine-tuned using **MLX
 
 [Computational-Model-for-Symbolic-Representations GitHub Repository](https://github.com/severian42/Computational-Model-for-Symbolic-Representations/tree/main)
 
-### GGUFs (thanks to the incredible Bartowski!) https://huggingface.co/bartowski/Severian_Glyphstral-24b-v1-GGUF *Note: The heavy fine-tuning with a new system instruction may have made the normal Mistral chat template less effective. Try the example system-instruction prompts from this repo for better results. This is all still a work in progress.*
+### GGUFs (thanks to the incredible Bartowski!) https://huggingface.co/bartowski/Severian_Glyphstral-24b-v1-GGUF *Note: The GGUF versions seem to have some errors baked in from the llama.cpp conversion (cause not yet confirmed), which cause gibberish outputs. The MLX version still works great, even at 8-bit, so I will keep investigating why the quants are odd. The heavy fine-tuning with a new system instruction may have made the normal Mistral chat template less effective. Try the example system-instruction prompts from this repo for better results. This is all still a work in progress.*
 
 **Key Features (Version 1 - Preview):**