---
base_model: bigcode/starencoder
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: stack-edu-classifier-cpp
  results: []
---

# stack-edu-classifier-cpp

This is a classifier for scoring the educational value of code files in The Stack v2 dataset. It is a fine-tuned version of [bigcode/starencoder](https://huggingface.co/bigcode/starencoder) with a classification head, trained on code files annotated by Llama-3.1-70B-Instruct. We used this classifier to build the Stack-Edu dataset used for training SmolLM2; see the [paper](https://arxiv.org/pdf/2502.02737). Each classifier is trained on a single programming language.

### How to use in transformers

To load the classifier, use the following code (replace `REPO_NAME` with this model's repository id):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(REPO_NAME)
model = AutoModelForSequenceClassification.from_pretrained(REPO_NAME)

text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
# Single regression output: squeeze the label dimension and read the raw score
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
    "text": text,
    "score": score,
    "int_score": int(round(max(0, min(score, 5)))),
}

print(result)
# {'text': 'This is a test sentence.', 'score': 0.07964489609003067, 'int_score': 0}
```

## Intended uses & limitations

While the classifier performs well in distinguishing high-quality code in its target language (C++ in this case), there are some limitations:

- Scope: The model's performance might change on other datasets, in particular on out-of-distribution samples. The classifier's context is 1024 tokens, which might not be sufficient to assess the quality of some long code files.
- Bias: The model's performance depends on the quality and representativeness of the training data and of the LLM used for annotation. Biases in both can affect the classifier's judgments. It might overfit to thoroughly commented code.
- Context: The classifier evaluates individual code files without considering broader context, which might impact its effectiveness in certain scenarios.

The training and inference code is available on GitHub: https://github.com/huggingface/cosmopedia/tree/main/classification

## Training procedure

The classifier was trained on 500,000 pairs of code files and their scores from 0 to 5, generated by Llama 3.1. The samples were annotated based on their educational quality, with 1 being not educational and 5 being highly educational and relevant for teaching programming. You can find the prompt used to build the annotations in the appendix of the [SmolLM2 paper](https://arxiv.org/pdf/2502.02737).

We added a classification head with a single regression output to StarEncoder and trained the model for 20 epochs with a learning rate of 3e-4. During training, the embedding and encoder layers were frozen to focus on the classification head.
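The actual training code is in the cosmopedia repository linked above; the following is only a minimal sketch of this setup, assuming a standard `transformers` sequence-classification head on top of the encoder (the parameter-name prefix used for freezing is illustrative and may differ by backbone):

```python
import torch
from transformers import AutoModelForSequenceClassification

# Load StarEncoder with a single-output head; num_labels=1 together with
# problem_type="regression" gives a regression objective.
model = AutoModelForSequenceClassification.from_pretrained(
    "bigcode/starencoder",
    num_labels=1,
    problem_type="regression",
)

# Freeze everything except the classification head, mirroring the
# "embedding and encoder layers frozen" setup described above.
# (Assumption: the head's parameters are prefixed "classifier",
# as in BERT-style models; adjust for other backbones.)
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("classifier")

# Optimize only the trainable (head) parameters, with the
# learning rate quoted above.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=3e-4,
)
```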
The classifier achieves the following results on the evaluation set:

- Loss: 0.4333
- Precision: 0.4421
- Recall: 0.2865
- F1 Macro: 0.3036
- Accuracy: 0.5539
- F1 Binary Minimum3: 0.6643

While the macro F1 scores across the 1-5 rating scale are relatively low, due to the model's difficulty in distinguishing between higher-rated samples, the classifier performs well for our primary filtering task. When converting to binary classification, a threshold of 2 yields F1 scores between 0.8 and 0.9 for most Stack-Edu classifiers, whereas a threshold of 3 yields F1 scores between 0.5 and 0.8, with the highest being Python, SQL, C, and Rust, and the lowest being HTML, TypeScript, and C#.
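For filtering, the score from the usage snippet above can be binarized against the chosen threshold. A minimal sketch (the helper name and example scores are illustrative, using the threshold of 3 discussed above):

```python
THRESHOLD = 3  # the "Minimum3" threshold discussed above

def is_educational(score: float, threshold: int = THRESHOLD) -> bool:
    """Binarize a raw regression score, clamping to the 0-5 scale first."""
    int_score = int(round(max(0, min(score, 5))))
    return int_score >= threshold

print(is_educational(0.0796))  # False: low educational value
print(is_educational(3.6))     # True: passes the threshold
```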