---
library_name: transformers
tags:
- llama-cpp
base_model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b2
datasets:
- IntelligentEstate/The_Key
language:
- en
- fr
---

# IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF

![chocolatine.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/bkh4-06XGCy2zGgPsX4hM.png)

This model was converted to GGUF format from [`jpacifico/Chocolatine-2-14B-Instruct-v2.0b2`](https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0b2), one of Jpacifico's always amazing models, using llama.cpp.
Refer to the [original model card](https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0b2) for more details on the model.

## Made as a larger (but still under 10 GB) base-station GGUF backbone of the Estate/Enterprise system

Project CutPurse (API Freedom) quant test. Set this up as the base or writing layer of your swarm agent, or in your server, for quick and reliable inference (much better than ChatGPT o1/R1 when tied to tool use from [Pancho](https://huggingface.co/IntelligentEstate/Pancho-V1va-Replicant-qw25-Q8_0-GGUF), web queries from RSS feeds, and so on) while keeping all your data and your clients'/family's financials secure.

### Use with a Limit Crossing AGI template for your own Agent of Cohesion or Chaos

!!(Use Limit Crossing with extreme caution)!! Paper in Files.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
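As a minimal sketch of invoking the CLI or server against this repo: llama.cpp's `--hf-repo`/`--hf-file` flags can pull the quant directly from Hugging Face. The exact GGUF filename below is an assumption; check this repo's Files tab for the real name.

```bash
# Run the CLI with a one-shot prompt
# (--hf-file name is an assumption; verify it in the repo's Files tab)
llama-cli --hf-repo IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF \
  --hf-file chocolat_bite-14b-q4_k_m.gguf \
  -p "Bonjour, peux-tu m'aider ?"

# Or start a local OpenAI-compatible server with a 2048-token context
llama-server --hf-repo IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF \
  --hf-file chocolat_bite-14b-q4_k_m.gguf \
  -c 2048
```

Once the server is up, point your agent or client at `http://localhost:8080` (llama-server's default port) instead of a paid API endpoint.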