---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- oracle
- scm
- fusion-cloud
- adapter
- dora
- lora
language:
- en
pipeline_tag: text-generation
---
# Oracle Fusion Cloud SCM - DoRA Adapter
This is a **DoRA (Weight-Decomposed Low-Rank Adaptation)** adapter specialized for Oracle Fusion Cloud SCM topics.
## 🎯 Usage
### Merging in Google Colab:
```python
# 1. Load the base model
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
base_model_name = "unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit"
adapter_name = "ozkurt7/oracle-deepseek-r1-adapter"
# 2. Load the model and the adapter
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.float16,
device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_name)
# 3. Merge the adapter into the base model
merged_model = model.merge_and_unload()
# 4. Save
merged_model.save_pretrained("./oracle-merged")
tokenizer.save_pretrained("./oracle-merged")
# 5. Test
messages = [
{"role": "system", "content": "You are an Oracle Fusion Cloud SCM expert."},
{"role": "user", "content": "What is Oracle SCM Cloud?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(merged_model.device)  # match the model's device
outputs = merged_model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Local usage:
```bash
# Download the adapter
git clone https://huggingface.co/ozkurt7/oracle-deepseek-r1-adapter
# Merge in Python
python merge_adapter.py
```
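The repo does not ship a `merge_adapter.py`, so here is a minimal sketch of what such a script could look like (the argument names and defaults are assumptions that mirror the Colab example above):

```python
# merge_adapter.py -- hypothetical helper script (not shipped with this repo)
import argparse

def build_parser():
    # Defaults mirror the Colab example above; adjust paths as needed
    parser = argparse.ArgumentParser(description="Merge the DoRA adapter into its base model")
    parser.add_argument("--base", default="unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit")
    parser.add_argument("--adapter", default="ozkurt7/oracle-deepseek-r1-adapter")
    parser.add_argument("--out", default="./oracle-merged")
    parser.add_argument("--merge", action="store_true", help="actually run the merge")
    return parser

def run(args):
    # Heavy imports live inside run() so build_parser() stays cheap to import
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(args.base)
    base = AutoModelForCausalLM.from_pretrained(
        args.base, torch_dtype=torch.float16, device_map="auto"
    )
    merged = PeftModel.from_pretrained(base, args.adapter).merge_and_unload()
    merged.save_pretrained(args.out)
    tokenizer.save_pretrained(args.out)

if __name__ == "__main__":
    args, _ = build_parser().parse_known_args()
    if args.merge:  # explicit opt-in keeps importing the file side-effect free
        run(args)
```

Invoke it as `python merge_adapter.py --merge` once the dependencies (`transformers`, `peft`, `torch`) are installed.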
## 📊 Model Details
- **Base Model**: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
- **Technique**: DoRA (Weight-Decomposed Low-Rank Adaptation)
- **Domain**: Oracle Fusion Cloud SCM
- **Status**: Adapter only (merge required)
- **Memory**: ~500MB (adapter only)
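The weight decomposition behind DoRA can be illustrated numerically. This is a toy NumPy sketch of the merged weight `W' = m · (W + BA) / ||W + BA||`, not the PEFT implementation:

```python
import numpy as np

def dora_merge(W, A, B, m):
    """Toy DoRA merge: rescale the updated direction by a learned magnitude.

    W: frozen base weight (d_out, d_in); B @ A: low-rank update;
    m: learned per-column magnitude vector of shape (1, d_in).
    """
    V = W + B @ A                                        # low-rank directional update
    col_norm = np.linalg.norm(V, axis=0, keepdims=True)  # column-wise norms
    return m * (V / col_norm)                            # rescale by magnitude

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
B = rng.normal(size=(8, 2))          # rank r = 2
A = rng.normal(size=(2, 4))
m = np.abs(rng.normal(size=(1, 4)))  # magnitudes are non-negative

W_merged = dora_merge(W, A, B, m)
# Each column of the merged weight has exactly the learned magnitude m
print(np.allclose(np.linalg.norm(W_merged, axis=0, keepdims=True), m))  # → True
```

Because magnitude and direction are trained separately, the merged weight's column norms come entirely from `m`, which is what distinguishes DoRA from plain LoRA.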
## 🚀 Next Steps
1. Merge this adapter in Google Colab
2. Upload the merged model to a new repo
3. Convert it to GGUF format
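The GGUF conversion step can be sketched with llama.cpp's conversion script (the output filename and `q8_0` quantization type are assumptions; this expects the merged model from the steps above in `./oracle-merged`):

```shell
# Get llama.cpp and its conversion dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the merged HF checkpoint to GGUF (q8_0 shown as one option)
python llama.cpp/convert_hf_to_gguf.py ./oracle-merged \
  --outfile oracle-scm-q8_0.gguf --outtype q8_0
```

Other `--outtype` values (e.g. `f16`) trade file size against fidelity; finer-grained quantizations can be produced afterwards with llama.cpp's `llama-quantize` tool.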
## 🛠️ Troubleshooting
- **Memory Error**: Use Colab Pro or merge locally
- **Loading Error**: `trust_remote_code=True` ekleyin
- **CUDA Error**: `device_map="auto"` kullanın
**Created by**: Kaggle → Google Colab workflow
**Date**: 2025-08-12