---
base_model:
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
- vicgalle/Humanish-Roleplay-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2)
* [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
* [vicgalle/Humanish-Roleplay-Llama-3.1-8B](https://huggingface.co/vicgalle/Humanish-Roleplay-Llama-3.1-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
    parameters:
      weight: 0.55
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
    parameters:
      weight: 0.3
  - model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
    parameters:
      weight: 0.15
merge_method: linear     # You can change this based on the merge method you want to use
tokenizer_source: union  # Union tokenizer (adjust if necessary)
dtype: float16           # You can adjust dtype depending on your system
```
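
For reference, the [linear](https://arxiv.org/abs/2203.05482) method computes a weighted average of the source models' parameters, using the weights in the config above (0.55 + 0.3 + 0.15 = 1.0). The sketch below is a minimal illustration of that idea only, not mergekit's implementation: it assumes all three checkpoints expose identically named and shaped parameters, and it ignores the tokenizer/embedding handling implied by `tokenizer_source: union`. Loading three 8B checkpoints also requires considerable memory.

```python
# Illustrative sketch of a linear (weighted-average) merge, mirroring the
# weights in the YAML config above. NOT mergekit's implementation; assumes
# identical parameter names and shapes across the three checkpoints.
import torch
from transformers import AutoModelForCausalLM

WEIGHTS = {
    "DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B": 0.55,
    "ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2": 0.30,
    "vicgalle/Humanish-Roleplay-Llama-3.1-8B": 0.15,
}

merged = None
for repo_id, weight in WEIGHTS.items():
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)
    state = model.state_dict()
    if merged is None:
        # Accumulate in float32 to limit rounding error while summing.
        merged = {name: weight * tensor.float() for name, tensor in state.items()}
    else:
        for name, tensor in state.items():
            merged[name] += weight * tensor.float()
    del model, state  # release each checkpoint before loading the next

# The weights sum to 1.0, so no renormalization is needed; cast back to
# float16 to match the dtype in the config.
merged = {name: tensor.half() for name, tensor in merged.items()}
```

In practice, this model was produced by running mergekit on the YAML config above (mergekit's `mergekit-yaml` entry point takes a config file and an output directory). Once the merged weights are available, they load like any other Llama 3.1 model; the snippet below is a usage sketch in which `"path/or/repo-id-of-this-merge"` is a placeholder for the actual output directory or Hub repo id.

```python
# Minimal usage sketch; the model id below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/or/repo-id-of-this-merge"  # replace with the real location
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a short in-character greeting from a weary tavern keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```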