Yi-Ko-6B-Instruct-v1.0

Model Details

Base Model

beomi/Yi-Ko-6B

Training Datasets

  1. kyujinpy/KOR-OpenOrca-Platypus-v3 πŸ™‡
  2. beomi/KoAlpaca-v1.1a πŸ™‡
  3. maywell/ko_wikidata_QA πŸ™‡
  4. AIHub MRC data, filtered and then converted to the instruction format before use

Benchmark Results

AI-Harness Evaluation

https://github.com/Beomi/ko-lm-evaluation-harness

Zero-shot:

| Model | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
|---|---|---|---|---|---|---|
| Yi-Ko-6B-Instruct-v1.0 | 0.6619 | 0.7794 | 0.4858 | 0.4589 | 0.3520 | 0.5545 |
| Yi-Ko-6B | 0.7070 | 0.7696 | 0.5009 | 0.4044 | 0.3828 | 0.5145 |
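
For reproduction, here is a minimal sketch using the upstream EleutherAI lm-evaluation-harness Python API (lm_eval.simple_evaluate). This is an assumption about tooling: the scores above were produced with the Beomi fork linked earlier, whose interface may differ, and the korunsmile / pawsx_ko tasks may only be available in that fork.

import lm_eval

# Zero-shot evaluation on the KoBEST tasks reported above
# (assumes the upstream lm-eval >= 0.4 Python API)
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wkshin89/Yi-Ko-6B-Instruct-v1.0,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
)
print(results["results"])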

Instruction Format

### User:
{instruction}

### Assistant:
{response}
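
A minimal sketch of a helper that fills this template (the build_prompt name and the example instruction are illustrative, not part of the model card):

def build_prompt(instruction: str) -> str:
    # Fill the instruction template above; the response is left empty
    # so the model completes the "### Assistant:" turn.
    return f"### User:\n{instruction}\n\n### Assistant:\n"

prompt = build_prompt("ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?")  # "What is the capital of Korea?"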

Loading the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in bfloat16, letting device_map="auto"
# place layers across the available GPUs/CPU
tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "wkshin89/Yi-Ko-6B-Instruct-v1.0",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
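
A short usage sketch combining the instruction format above with model.generate; the sampling parameters here are illustrative assumptions, not settings documented for this model:

prompt = "### User:\nν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,   # illustrative response-length cap
    do_sample=True,       # sampling settings are assumptions
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))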