devngho/ko_edu_classifier_v2_nlpai-lab_KoE5

The base model was trained with the query: and passage: prefixes, and this model was likewise trained with the passage: prefix. Be sure to prepend passage: to your input text.

This model is nlpai-lab/KoE5 with a classifier head. It is designed to evaluate the educational value of Korean web pages, as a Korean counterpart to HuggingFaceFW/fineweb-edu-classifier. The training data comes from the devngho/ko_llm_annotations dataset, which contains 500k samples extracted from blueapple8259/c4-ko-cleaned-2 and scored with Qwen/Qwen2.5-32B-Instruct.
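
For illustration, a minimal usage sketch in Python, assuming the classifier head is exposed through the standard transformers AutoModelForSequenceClassification API. The exact head shape (a single regression output as in fineweb-edu-classifier, or 5-way classification over the 0-4 scale) is not stated here, so the sketch handles both cases:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "devngho/ko_edu_classifier_v2_nlpai-lab_KoE5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "광합성은 빛 에너지를 화학 에너지로 바꾸는 과정이다."  # example input
# The "passage: " prefix is required: the model was trained with it.
inputs = tokenizer("passage: " + text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze()

# Single regression logit -> raw score; 5-way head -> argmax over classes 0-4.
score = logits.item() if logits.ndim == 0 else int(logits.argmax())
print(score)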

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). ⚡

Details

  • Developed by: devngho
  • Language(s): ko
  • License: mit
  • Base model: nlpai-lab/KoE5
  • Model size: 560M params (bf16)

Training details

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 2048 (512 * 4)
  • optimizer: adamw (b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01); see the sketch after this list
  • duration: 3h 21m
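
The training code itself is not shown here; as a hedged reconstruction, the reported optimizer and schedule would look roughly like this in PyTorch (model and total_steps are placeholders, and the actual TPU v4-8 training stack may differ):

import torch

def build_optimizer(model, total_steps):
    # Optimizer as reported: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=3e-4, betas=(0.9, 0.98), eps=1e-8, weight_decay=0.01
    )
    warmup_steps = int(0.1 * total_steps)  # warmup_ratio: 0.1
    # Linear warmup followed by cosine decay, per "3e-4 (cosine)".
    warmup = torch.optim.lr_scheduler.LinearLR(
        optimizer, start_factor=1e-8, total_iters=warmup_steps
    )
    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=total_steps - warmup_steps
    )
    scheduler = torch.optim.lr_scheduler.SequentialLR(
        optimizer, schedulers=[warmup, cosine], milestones=[warmup_steps]
    )
    return optimizer, scheduler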

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.66      0.33      0.44       198
           1       0.75      0.63      0.68      1553
           2       0.46      0.68      0.55      1159
           3       0.63      0.56      0.59       967
           4       0.62      0.26      0.36       219

    accuracy                           0.59      4096
   macro avg       0.62      0.49      0.52      4096
weighted avg       0.62      0.59      0.59      4096

Confusion Matrix:
[[ 66 116  16   0   0]
 [ 34 977 520  22   0]
 [  0 207 791 159   2]
 [  0  11 382 541  33]
 [  0   0  20 143  56]]

This model outperforms other small models, but its overall performance appears limited by the quality of Korean embeddings and by the evaluation limits of the Qwen2.5 32B annotator. When scores are binarized into 3 and above versus below 3, the F1 score is about 0.72.
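
For clarity, the binarized F1 above splits the 0-4 scale at a threshold of 3. A small sketch of that computation with scikit-learn (the values below are placeholders, not the actual validation data):

from sklearn.metrics import f1_score

y_true = [0, 1, 2, 3, 4, 3, 2, 1]   # placeholder gold scores (0-4)
y_pred = [0, 1, 2, 3, 3, 2, 2, 1]   # placeholder predicted scores

# Binarize at the threshold used in the card: score >= 3 is "educational".
true_bin = [int(y >= 3) for y in y_true]
pred_bin = [int(y >= 3) for y in y_pred]
print(f1_score(true_bin, pred_bin))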
