---
license: cc-by-nc-4.0
library_name: transformers
base_model: secretmoon/LoRA-Llama-3-MLP
language:
- en
pipeline_tag: text-generation
---
## Overview
GGUF quantizations of the **[secretmoon/LoRA-Llama-3-MLP](https://huggingface.co/secretmoon/LoRA-Llama-3-MLP)** LoRA adapter merged into its base model at LoRA Alpha=48. secretmoon/LoRA-Llama-3-MLP is an 8-bit LoRA adapter for the Llama-3-8B model, primarily designed to expand the model's knowledge of the MLP:FiM (My Little Pony: Friendship is Magic) universe. The adapter is well suited to generating fan fiction, role-playing scenarios, and other creative projects. The training data includes factual content from the Fandom wiki and canonical fan works that explore the universe in depth.

## Base Model
The base model for this adapter is **[Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)**, an excellent fine-tuned version of the original Llama-3-8B. It excels in story writing and role-playing without suffering from degradation due to overfitting.
## Training Details
- **Dataset:**
  1. A cleaned copy of the MLP Fandom wiki, excluding information about recent and side projects unrelated to MLP:FiM. (Alpaca format)
  2. Approximately 100 specially selected fan stories from FiMFiction. (raw text)
  3. Additional data to train the model as a personal assistant and to enhance its sensitivity to user emotions. (Alpaca format)
- **Training Duration:** 3 hours
- **Hardware:** 1 x NVIDIA RTX A6000 48GB
- **PEFT Type:** LoRA 8-bit
- **Sequence Length:** 6144
- **Batch Size:** 2
- **Num Epochs:** 3
- **Optimizer:** AdamW_BNB_8bit
- **Learning Rate Scheduler:** Cosine
- **Learning Rate:** 0.00033
- **LoRA R:** 256
- **Sample Packing:** True
- **LoRA Target Linear:** True (all linear layers targeted; see the config sketch below)
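
The list above reflects the Axolotl training setup. As a rough, illustrative sketch only (not the original config), an approximately equivalent PEFT `LoraConfig` might look like the following; the dropout value is an assumption, since it is not stated in this card:

```python
from peft import LoraConfig

# Illustrative PEFT analogue of the hyperparameters listed above.
# Sample packing, sequence length, batch size, the AdamW_BNB_8bit optimizer,
# and the cosine schedule live at the trainer/Axolotl level, not in this config.
lora_config = LoraConfig(
    r=256,                        # LoRA R
    lora_alpha=48,                # merge-time recommendation from this card; training alpha not stated
    target_modules="all-linear",  # "LoRA Target Linear: True" (needs a recent peft version)
    lora_dropout=0.05,            # assumption: dropout is not specified in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```
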
### Recommendations for LoRA Alpha (if you merge the LoRA into the base model yourself)
- **16:** Low influence
- **48:** Optimal value (recommended; used for the merged GGUF in this repo)
- **64:** High influence, significantly impacting model behavior
- **128:** Very high influence, drastically changing language model behavior (not recommended)
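
If you want to merge the adapter into the base model yourself, a minimal sketch with `transformers` and `peft` could look like this. The repo IDs are taken from this card; overriding `lora_alpha` by passing a modified adapter config is an assumption about your setup and peft version:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

BASE = "Sao10K/L3-8B-Stheno-v3.1"
ADAPTER = "secretmoon/LoRA-Llama-3-MLP"

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Pick the merge strength: the effective LoRA scaling is alpha / r,
# so with r=256 an alpha of 48 gives a scaling of 0.1875.
adapter_config = LoraConfig.from_pretrained(ADAPTER)
adapter_config.lora_alpha = 48

model = PeftModel.from_pretrained(base, ADAPTER, config=adapter_config)

# Fold the LoRA weights into the base weights and save a standalone model,
# which can then be converted to GGUF with llama.cpp's conversion script.
merged = model.merge_and_unload()
merged.save_pretrained("L3-8B-Stheno-MLP-merged")
tokenizer.save_pretrained("L3-8B-Stheno-MLP-merged")
```
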
## How to Use
- **[llama.cpp](https://github.com/ggerganov/llama.cpp)**
The open-source framework for running GGUF LLM models, on which the other interfaces below are built.
- **[koboldcpp](https://github.com/LostRuins/koboldcpp)**
A lightweight open-source fork of llama.cpp with a simple graphical interface and many additional features, optimized for RP.
- **[LM studio](https://lmstudio.ai/)**
A free proprietary application built on llama.cpp with a graphical interface.
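
For programmatic use, the GGUF files can also be loaded with `llama-cpp-python` (Python bindings for llama.cpp). A minimal sketch, assuming you have downloaded one of the quantizations from this repo (the filename below is a placeholder):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./model-file.gguf",  # placeholder: path to the quantization you downloaded
    n_ctx=6144,                      # matches the training sequence length
    n_gpu_layers=-1,                 # offload all layers to GPU if available
)

output = llm(
    "Write a short scene in which Twilight Sparkle studies an ancient spell.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```
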
## Other
You can contact me on Telegram (@monstor86) or Discord (@starlight2288).
You can also try some RP with this adapter for free in my Telegram bot, @Luna_Pony_bot.
Built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).