---
license: llama3.2
datasets:
- CarrotAI/Magpie-Ko-Pro-AIR
- CarrotAI/Carrot
- CarrotAI/ko-instruction-dataset
language:
- ko
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
new_version: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct-2412
---

## Model Description
### Model Details
- **Name**: Carrot Llama-3.2 Rabbit Ko
- **Version**: 3B Instruct
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Languages**: Korean, English
- **Model Type**: Large Language Model (Instruction-tuned)
### Training Process
This model went through the following main training stage:
1. **SFT (Supervised Fine-Tuning)**
- The base model was fine-tuned on high-quality Korean and English datasets
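The SFT stage above can be sketched with Hugging Face TRL. This is a minimal, hypothetical outline under assumed settings (the actual training script, hyperparameters, and dataset column handling are not published in this card); it uses one of the datasets listed in the metadata and the base model named there.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One of the datasets listed in this card's metadata
# (its column layout and suitability for direct SFT are assumptions)
dataset = load_dataset("CarrotAI/Magpie-Ko-Pro-AIR", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",  # base model from the metadata above
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llama-3.2-rabbit-ko-sft",  # hypothetical output path
        max_seq_length=2048,                   # assumed context length for training
    ),
)
trainer.train()
```

Exact argument names vary across TRL versions, so treat this as a sketch rather than a drop-in script.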
### Limitations
- Limited performance on complex tasks due to the 3B parameter scale
- Lack of deep expertise in specialized domains
- Potential for bias and hallucination
### Ethics Statement
Ethical considerations were incorporated as much as possible during model development, but users should always review the model's outputs critically.
### How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
# Build a chat prompt with the model's template and generate a reply
messages = [{"role": "user", "content": "안녕하세요"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Score
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.6490|± |0.0131|
| | |strict-match | 5|exact_match|↑ |0.0023|± |0.0013|
|gsm8k-ko| 3|flexible-extract| 5|exact_match|↑ |0.3275|± |0.0134|
| | |strict-match | 5|exact_match|↑ |0.2737|± |0.0134|
|ifeval| 4|none | 5|inst_level_loose_acc |↑ |0.8058|± | N/A|
| | |none | 5|inst_level_strict_acc |↑ |0.7686|± | N/A|
| | |none | 5|prompt_level_loose_acc |↑ |0.7320|± |0.0191|
| | |none | 5|prompt_level_strict_acc|↑ |0.6858|± |0.0200|

| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-------------------------------|------:|------|-----:|--------|---|-----:|---|-----:|
|haerae | 1|none | |acc |↑ |0.4180|± |0.0148|
| | |none | |acc_norm|↑ |0.4180|± |0.0148|
| - haerae_general_knowledge | 1|none | 5|acc |↑ |0.3125|± |0.0350|
| | |none | 5|acc_norm|↑ |0.3125|± |0.0350|
| - haerae_history | 1|none | 5|acc |↑ |0.3404|± |0.0347|
| | |none | 5|acc_norm|↑ |0.3404|± |0.0347|
| - haerae_loan_word | 1|none | 5|acc |↑ |0.4083|± |0.0379|
| | |none | 5|acc_norm|↑ |0.4083|± |0.0379|
| - haerae_rare_word | 1|none | 5|acc |↑ |0.4815|± |0.0249|
| | |none | 5|acc_norm|↑ |0.4815|± |0.0249|
| - haerae_standard_nomenclature| 1|none | 5|acc |↑ |0.4771|± |0.0405|
| | |none | 5|acc_norm|↑ |0.4771|± |0.0405|

| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|----------------|------:|------|-----:|--------|---|-----:|---|------|
|kobest_boolq | 1|none | 5|acc |↑ |0.7664|± |0.0113|
| | |none | 5|f1 |↑ |0.7662|± | N/A|
|kobest_copa | 1|none | 5|acc |↑ |0.5620|± |0.0157|
| | |none | 5|f1 |↑ |0.5612|± | N/A|
|kobest_hellaswag| 1|none | 5|acc |↑ |0.3840|± |0.0218|
| | |none | 5|acc_norm|↑ |0.4900|± |0.0224|
| | |none | 5|f1 |↑ |0.3807|± | N/A|
|kobest_sentineg | 1|none | 5|acc |↑ |0.5869|± |0.0247|
| | |none | 5|f1 |↑ |0.5545|± | N/A|
|kobest_wic | 1|none | 5|acc |↑ |0.4952|± |0.0141|
| | |none | 5|f1 |↑ |0.4000|± | N/A|
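The tables above follow the output format of EleutherAI's lm-evaluation-harness. Assuming that harness was used, a run along these lines should produce comparable numbers (the exact task names, harness version, and flags are assumptions, not the authors' command):

```shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct \
  --tasks gsm8k,ifeval,haerae,kobest \
  --num_fewshot 5 \
  --batch_size auto
```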