---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama
- conversational
license: llama3
---
# L3-Hecate-8B-v1.0

![Hecate](https://huggingface.co/Azazelle/L3-Hecate-8B-v1.0/resolve/main/IhBchsAoR4ao0D2C2AEKuw.jpg)

## About:

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

**Recommended Samplers:**

```
Temperature - 1.0
TFS - 0.85
Smoothing Factor - 0.3
Smoothing Curve - 1.1
Repetition Penalty - 1.1
```

### Merge Method

This model was produced through a series of model stock and LoRA merges, followed by ExPO (a sketch of the ExPO step follows the configuration below). It combines a mix of smart and roleplay-centered models to improve performance.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
---
models:
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Jellywibble/lora_120k_pref_data_ep2
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
merge_method: model_stock
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
dtype: float32
vocab_type: bpe
name: hq_rp
---
# ExPO
models:
  - model: hq_rp
    parameters:
      weight: 1.25
merge_method: task_arithmetic
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
```
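For intuition, the final ExPO stage above (`task_arithmetic` with a single model at `weight: 1.25` and `normalize: false`) amounts to extrapolating each weight tensor past the base model along the direction of the `hq_rp` merge. The following is a minimal PyTorch sketch of that per-tensor arithmetic only, not mergekit's actual implementation; the state-dict inputs are illustrative placeholders:

```python
import torch

def expo_extrapolate(base: dict, merged: dict, alpha: float = 1.25) -> dict:
    """Illustrative ExPO step: theta = base + alpha * (merged - base).

    With alpha > 1 the result is pushed past `merged`, extrapolating the
    direction learned by the model_stock merge relative to the base model.
    """
    out = {}
    for name, base_tensor in base.items():
        delta = merged[name].to(torch.float32) - base_tensor.to(torch.float32)
        out[name] = base_tensor.to(torch.float32) + alpha * delta
    return out

# Toy usage with random stand-ins for real checkpoint tensors.
base_sd = {"w": torch.zeros(4)}
merged_sd = {"w": torch.ones(4)}
print(expo_extrapolate(base_sd, merged_sd)["w"])  # tensor([1.25, 1.25, 1.25, 1.25])
```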
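Of the recommended sampler settings, temperature and repetition penalty map directly onto the `generate()` arguments in Hugging Face transformers; TFS and the smoothing factor/curve are backend-specific samplers (available in frontends such as text-generation-webui or SillyTavern) and are omitted here. A minimal loading-and-generation sketch, with the prompt as a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/L3-Hecate-8B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3-style chat formatting via the bundled chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,          # from the recommended samplers
    repetition_penalty=1.1,   # from the recommended samplers
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```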