---
widget:
  - text: >-
      Dih apaan banget dah buang sampah ke sungai begitu. Ada aktivis
      lingkungan yg sampe dipenjara karena menyuarakan peduli lingkungan. Ini
      pengangguran satu malah enak bener buang sampah sembarangan. Pantes lu
      susah, kelakuan lu nyusahin orang lain sih.
    example_title: Example 1
    output:
      - label: Disgust
        score: 0.672
      - label: Anger
        score: 0.282
      - label: Sadness
        score: 0.033
      - label: Joy
        score: 0.004
      - label: Surprise
        score: 0.003
      - label: Trust
        score: 0.003
      - label: Fear
        score: 0.002
      - label: Anticipation
        score: 0.001
  - text: >-
      Februari 2009, wartawan Jawa Pos Radar Bali dibunuh dengan keji karena
      berita korupsi. Januari 2019, Presiden memberikan grasi kepada otak
      pembunuhan Prabangsa, dari seumur hidup menjadi cuma 20 tahun penjara.
      Sebuah langkah mundur yang menyakitkan!
    example_title: Example 2
    output:
      - label: Sadness
        score: 0.604
      - label: Anger
        score: 0.194
      - label: Surprise
        score: 0.127
      - label: Joy
        score: 0.021
      - label: Fear
        score: 0.018
      - label: Disgust
        score: 0.018
      - label: Anticipation
        score: 0.016
      - label: Trust
        score: 0.003
library_name: transformers
license: mit
language:
  - id
---

## Model Details

### Model Description

NusaBERT-base-Indonesian-Plutchik-emotion-analysis-v2 identifies and analyzes emotions in Indonesian text according to Plutchik's eight basic emotions: Anticipation, Anger, Disgust, Fear, Joy, Sadness, Surprise, and Trust. The model was built by fine-tuning [NusaBERT-base](https://huggingface.co/LazarusNLP/NusaBERT-base) on Indonesian tweets labeled with these eight emotion categories. Its predictions can be used to analyze emotion in social media content, providing insight into users' emotional responses.

### Bias

Keep in mind that this model was trained on a specific dataset, which may introduce bias into its emotion classifications. It is therefore important to consider and account for such biases when using the model.

### Evaluation Results

Hyperparameters were tuned with Optuna, which ran five trials to find the best combination of learning rate (1e-6 to 1e-4) and weight decay (1e-6 to 1e-2). Each trial trained the BERT model with a different hyperparameter configuration on the training set and evaluated it on the validation set. Once all trials were complete, the best hyperparameter combination was used to train the final model, as sketched below.
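A minimal sketch of that search using `Trainer.hyperparameter_search` with the Optuna backend. The `model_init` helper, the dummy dataset, and the `TrainingArguments` values are illustrative assumptions, not the released training script:

```python
import optuna
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "LazarusNLP/NusaBERT-base"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# Placeholder splits standing in for the labeled tweet dataset (assumption).
dummy = Dataset.from_dict(
    {"text": ["contoh tweet"] * 8, "label": list(range(8))}
).map(tokenize, batched=True)
train_dataset, val_dataset = dummy, dummy

def model_init():
    # A fresh classification head per trial; 8 labels for Plutchik's emotions.
    return AutoModelForSequenceClassification.from_pretrained(
        BASE_MODEL, num_labels=8
    )

def hp_space(trial: optuna.Trial) -> dict:
    # Search ranges stated above: lr 1e-6..1e-4, weight decay 1e-6..1e-2.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "weight_decay": trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True),
    }

args = TrainingArguments(
    output_dir="nusabert-emotion-search",  # illustrative path
    eval_strategy="epoch",  # "evaluation_strategy" in older transformers versions
    num_train_epochs=5,
    report_to="none",
)

trainer = Trainer(model_init=model_init, args=args,
                  train_dataset=train_dataset, eval_dataset=val_dataset)

best = trainer.hyperparameter_search(
    direction="minimize",  # minimize validation loss
    backend="optuna",
    hp_space=hp_space,
    n_trials=5,            # five trials, as described above
)
print(best.hyperparameters)
```

The final model is then retrained with `best.hyperparameters`; the table below reports its per-epoch metrics.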
| Epoch | Training Loss | Validation Loss | Accuracy | F1       | Precision | Recall   |
|------:|--------------:|----------------:|---------:|---------:|----------:|---------:|
| 1     | 0.758400      | 0.583508        | 0.829932 | 0.830203 | 0.833136  | 0.829932 |
| 2     | 0.370100      | 0.394630        | 0.866213 | 0.865496 | 0.870364  | 0.866213 |
| 3     | 0.231500      | 0.355294        | 0.884354 | 0.884585 | 0.888140  | 0.884354 |
| 4     | 0.071000      | 0.322376        | 0.902494 | 0.902801 | 0.904842  | 0.902494 |
| 5     | 0.129900      | 0.308596        | 0.900227 | 0.900340 | 0.902132  | 0.900227 |
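### How to Use

A minimal inference sketch using the `transformers` pipeline API, assuming the repository id given in the citation below; `top_k=None` returns a score for every emotion, as in the widget examples above:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Aardiiiiy/NusaBERT-base-Indonesian-Plutchik-emotion-analysis-v2",
    top_k=None,  # return scores for all eight emotions, not just the top one
)

text = "Sebuah langkah mundur yang menyakitkan!"  # "A painful step backward!"
for prediction in classifier(text)[0]:
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```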
## Citation

```bibtex
@misc{Ardiyanto_Mikhael_2024,
  author    = {Mikhael Ardiyanto},
  title     = {NusaBERT-base-Indonesian-Plutchik-emotion-analysis-v2},
  year      = {2024},
  url       = {https://huggingface.co/Aardiiiiy/NusaBERT-base-Indonesian-Plutchik-emotion-analysis-v2},
  publisher = {Hugging Face}
}
```