# Network Configuration Analysis LoRA Adapter
A LoRA (Low-Rank Adaptation) adapter for network configuration analysis, trained on the 4-bit quantized `unsloth/llama-3.2-3b-instruct-bnb-4bit` base model and optimized for memory efficiency.
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3.2-3b-instruct-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3.2-3b-instruct-bnb-4bit")

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "WaiLwin/network-model-adapter")
```
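Once loaded, the adapter can be used for generation through the standard `transformers` API. A minimal sketch; the prompt below is an illustrative assumption, not an example from the training data:

```python
# Hypothetical prompt; the chat template comes from the Llama 3.2 instruct base model
messages = [{"role": "user", "content": "Summarize the risks in this switch configuration: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```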
## Training Details
- Optimized for Google Colab memory constraints
- LoRA rank 8, alpha 8 (see the configuration sketch below)
- Maximum sequence length: 1024 tokens
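For reference, a hedged sketch of a `peft` `LoraConfig` matching the hyperparameters above; the target modules and dropout are assumptions, since this card does not list them:

```python
from peft import LoraConfig

# Sketch only: target_modules and lora_dropout are assumed, not stated in this card
lora_config = LoraConfig(
    r=8,                # LoRA rank, as listed above
    lora_alpha=8,       # LoRA scaling alpha, as listed above
    lora_dropout=0.05,  # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common choice for Llama-style models
    task_type="CAUSAL_LM",
)
```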