Science Keyword Classification model

We have fine-tuned the INDUS model to classify science keywords for NASA's Common Metadata Repository (CMR). The project aims to improve the accessibility and organization of Earth observation metadata by predicting the keywords associated with each metadata record in an Extreme Multi-Label Classification setting.

Model Overview

  • Base Model: INDUS, fine-tuned for multi-label classification.
  • Loss Function: The model uses focal loss instead of traditional cross-entropy to address label imbalance by focusing training on difficult-to-classify examples (see the sketch after this list).
  • Dataset: NASA's CMR metadata, filtered to remove duplicates and irrelevant labels, resulting in a dataset of 42,474 records and 3,240 labels. You can find the dataset here.
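
Focal loss down-weights examples the model already classifies confidently, so training concentrates on the rare, hard keywords that dominate an extreme multi-label setting. The snippet below is a minimal multi-label sketch in PyTorch, assuming a sigmoid/binary-cross-entropy formulation without class weighting; the exact implementation used in training may differ.

import torch
import torch.nn.functional as F

def sigmoid_focal_loss(logits, targets, gamma=2.0):
    """Multi-label focal loss: per-label BCE scaled by (1 - p_t) ** gamma.

    logits  -- raw model outputs, shape (batch, num_labels)
    targets -- 0/1 label matrix of the same shape
    gamma   -- focusing parameter; larger values down-weight easy examples more
    """
    targets = targets.float()
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)  # probability assigned to the true label value
    return ((1 - p_t) ** gamma * bce).mean()

# Toy check: 4 records, 3,240 candidate keywords, a handful of positives each
logits = torch.randn(4, 3240)
labels = torch.zeros(4, 3240)
labels[:, :5] = 1.0
print(sigmoid_focal_loss(logits, labels, gamma=2.0))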

Key Features

  • Extreme Multi-Label Classification: Addresses classification with a vast number of potential labels (keywords) and imbalanced frequency.
  • Stratified Splitting: The dataset is split based on provider-id to maintain balanced representation across train, validation, and test sets (see the sketch after this list).
  • Improved Performance: Focal loss with different focusing parameters (γ) was evaluated, showing significant improvements in weighted precision, recall, F1 score, and Jaccard similarity over cross-entropy loss and previous models.
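
The exact split procedure is not reproduced here; the sketch below shows one way to stratify by provider-id with scikit-learn so that each provider keeps roughly the same share in every split. The toy records, provider names, and split fractions are illustrative.

from sklearn.model_selection import train_test_split

# Toy metadata records, each tagged with its CMR provider-id (illustrative values)
providers = ["GHRC_DAAC", "ORNL_CLOUD", "POCLOUD"] * 5
records = [{"id": i, "provider": p} for i, p in enumerate(providers)]
provider_ids = [r["provider"] for r in records]

# Hold out 40% of the records, preserving per-provider proportions,
# then split the hold-out evenly into validation and test sets.
train_recs, rest_recs, _, rest_prov = train_test_split(
    records, provider_ids, test_size=0.4, stratify=provider_ids, random_state=42
)
val_recs, test_recs = train_test_split(
    rest_recs, test_size=0.5, stratify=rest_prov, random_state=42
)
print(len(train_recs), len(val_recs), len(test_recs))  # 9 3 3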

Label Mapping During Inference

After obtaining predictions from the model, map the predicted label indices to their keyword names using the model.config.id2label dictionary:

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("nasa-impact/science-keyword-classification")

# Example usage: map the indices of the top-3 scoring labels back to keyword names
predicted_indices = [0, 2, 5]
predicted_labels = [model.config.id2label[idx] for idx in predicted_indices]
print(predicted_labels)
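
End to end, keyword names can be recovered from raw text roughly as follows. This sketch assumes the checkpoint name shown in this card and a sigmoid over the logits for multi-label scoring; the example text and top-k value are illustrative.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "nasa-impact/science-keyword-classification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "Sea surface temperature retrievals from MODIS over the Pacific Ocean."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

# Multi-label scoring: sigmoid per label, then keep the top-k keywords
probs = torch.sigmoid(logits)
top = torch.topk(probs, k=5)
for idx, score in zip(top.indices.tolist(), top.values.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")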

Experiments

  1. Baseline (alpha-1.0.1): Used cross-entropy loss.
  2. Experiment 2 (alpha-1.1.1): Focal loss with γ = 4.
  3. Experiment 3 (alpha-1.1.2): Focal loss with γ = 2.
  4. Final (alpha-1.2.1): Focal loss (γ = 2) with stratified splitting.

Results

The model with focal loss and stratified splitting (alpha-1.2.1) outperformed all other configurations and previous models in precision, recall, F1 score, and Jaccard similarity. Weighted metrics at various thresholds are shown in the figure below.

[Figure: weighted precision, recall, F1 score, and Jaccard similarity at various prediction thresholds]
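
For reference, these weighted metrics can be computed for any probability threshold along the following lines. This is a sketch assuming binary indicator matrices of true labels and thresholded predictions, not the exact evaluation script used for the figure.

import numpy as np
from sklearn.metrics import f1_score, jaccard_score, precision_score, recall_score

def weighted_metrics(y_true, probs, threshold=0.5):
    """Weighted multi-label metrics at a given probability threshold.

    y_true -- binary indicator matrix, shape (n_samples, n_labels)
    probs  -- predicted probabilities, same shape
    """
    y_pred = (probs >= threshold).astype(int)
    return {
        "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
        "jaccard": jaccard_score(y_true, y_pred, average="weighted"),
    }

# Toy example: 2 records, 3 labels
y_true = np.array([[1, 0, 1], [0, 1, 0]])
probs = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]])
print(weighted_metrics(y_true, probs, threshold=0.5))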

Please find the accompanying technical writeup here.
