---
base_model: sentence-transformers/all-mpnet-base-v2
language:
- en
license: apache-2.0
tags:
- noneconomic-attributes
- mention-classification
- mpnet-base-v2
- setfit
- multi-label-classification
model-index:
- name: all-mpnet-base-v2_noneconomic-attributes-classifier
results:
- task:
type: multi-label-classification
name: Multi-label classification
metrics:
- type: _tba_
value: -1.0
dataset:
type: custom
name: custom human-labeled multi-label annotation dataset
---
# Group mention non-economic attributes classifier
A multi-label classifier for detecting the **non-economic attribute** categories referred to in a social group mention, trained with `setfit` on top of the lightweight [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) sentence embedding model.
The non-economic attributes classified are:
| attribute | definition |
|:--------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| age | People referred to based on or categorized according to their age, generation, or cohort such as children, young people, old people, future generations. |
| family | People referred to based on or categorized according to their familial role such as fathers, mothers, parents. |
| gender/sexuality | People referred to based on or categorized according to their gender or sexuality such as men, women, or LGBTQI+ people. |
| place/location            | People referred to based on or categorized according to their place or location such as people from rural areas, urban centers, the global south, or the global north.                                               |
| nationality | People referred to based on or categorized according to their nationality such as natives or immigrants. |
| ethnicity                 | People referred to based on or categorized according to their ethnicity such as people of color or ethnic minorities.                                                                                               |
| religion                  | People referred to based on or categorized according to their religion or belief such as christians, jews, muslims, etc.                                                                                            |
| health | People referred to based on or categorized according to their health condition or relation to aspects of health such as disabled/handicapped people or chronically sick people. |
| crime | People referred to based on or categorized according to their relation to crime such as offenders/criminals or victims. |
| shared values/mentalities | People referred to based on or categorized according to their shared values or mentalities such as people with a growth mindset, meritocratic values, environmental or peace mentalities or a more equal society. |
## Model Details
### Model Description
Group mention non-economic attributes classifier
- **Developed by:** Hauke Licht
- **Model type:** mpnet
- **Language(s) (NLP):** English (`en`)
- **License:** apache-2.0
- **Finetuned from model:** sentence-transformers/all-mpnet-base-v2
- **Funded by:** The *Deutsche Forschungsgemeinschaft* (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2126/1 – 390838866
### Model Sources
- **Repository:** _tba_
- **Paper:** _tba_
- **Demo:** [More Information Needed]
## Uses
### Bias, Risks, and Limitations
- Evaluation on held-out data shows that the classifier makes classification errors.
- The model has been finetuned only on human-annotated labeled social group mentions recorded in sentences sampled from party manifestos of European parties (mostly far-right and Green parties). Applying the classifier in other domains can lead to higher error rates.
- The data used to finetune the model come from human annotators. Human annotators can be biased, and factors like gender and social background can influence their annotation judgments. This may lead to bias in the detection of specific social groups.
#### Recommendations
- Users who want to apply the model outside its training data domain should evaluate its performance in the target data.
- Users who want to apply the model outside its training data domain should continue to finetune this model on labeled data from the target domain.
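For the evaluation recommended above, standard multi-label metrics such as micro- and macro-averaged F1 can be computed with `scikit-learn`. A minimal sketch, using hypothetical multi-hot gold labels and predictions in place of real annotations and model output:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical gold labels and predictions for 4 mentions
# across 3 attribute categories (multi-hot encoding)
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 0]])

# Micro-F1 pools label decisions; macro-F1 averages per-label F1 scores
micro = f1_score(y_true, y_pred, average="micro")  # -> 0.75
macro = f1_score(y_true, y_pred, average="macro")
```

Reporting per-label F1 (`average=None`) as well is useful here, since rare attributes (e.g. crime) can score much lower than frequent ones.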
### How to Get Started with the Model
Use the code below to get started with the model.
## Usage
You can use the model with the [`setfit` Python library](https://github.com/huggingface/setfit) (>=1.1.0).

*Note:* For compatibility, it is recommended to use `transformers` >=4.5.5,<=5.0.0 and `sentence-transformers` >=4.0.1,<=5.1.0.
### Classification
```python
import torch
from setfit import SetFitModel

model_name = "haukelicht/all-mpnet-base-v2_noneconomic-attributes-classifier"
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

classifier = SetFitModel.from_pretrained(model_name)
classifier.to(device)

# Example mentions
mentions = ["working class people", "highly-educated professionals", "people without a stable job"]

# Get multi-hot predictions (one 0/1 vector per mention)
with torch.no_grad():
    predictions = classifier.predict(mentions)
print(predictions)

# Map multi-hot predictions to attribute labels
labels = [
    [classifier.id2label[i] for i, p in enumerate(pred) if p == 1]
    for pred in predictions
]
print(labels)
```
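If you need scores rather than hard labels, `SetFitModel.predict_proba` returns per-label probabilities that you can threshold yourself. A minimal sketch with a hypothetical probability matrix standing in for `classifier.predict_proba(mentions)` output (assumed shape: number of mentions × number of labels):

```python
import numpy as np

# Hypothetical per-label probabilities for 2 mentions and 3 labels,
# as would be returned by classifier.predict_proba(mentions)
probs = np.array([[0.92, 0.12, 0.55],
                  [0.08, 0.81, 0.49]])

# Convert probabilities to multi-hot label vectors
threshold = 0.5
multi_hot = (probs >= threshold).astype(int)  # -> [[1, 0, 1], [0, 1, 0]]
```

Adjusting the threshold per label (e.g. tuned on a dev split) can help when some attribute categories are much rarer than others.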
### Mention embedding
```python
import torch
from sentence_transformers import SentenceTransformer

model_name = "haukelicht/all-mpnet-base-v2_noneconomic-attributes-classifier"
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

# Load the sentence-transformer component of the pre-trained classifier
model = SentenceTransformer(model_name, device=device)

# Example mentions
mentions = ["working class people", "highly-educated professionals", "people without a stable job"]

# Compute mention embeddings (one vector per mention)
with torch.no_grad():
    embeddings = model.encode(mentions)
```
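The resulting embeddings can be compared with cosine similarity, e.g. to find or cluster semantically similar group mentions. A minimal sketch using small hypothetical vectors in place of real `model.encode(mentions)` output:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings standing in for model.encode(...) output
embeddings = np.array([[1.0, 0.0, 0.0, 0.0],
                       [1.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.0]])

# Normalize rows so that cosine similarity reduces to a dot product
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
unit = embeddings / norms

# Pairwise cosine similarity matrix (diagonal is 1.0)
sim = unit @ unit.T
```

`sentence_transformers.util.cos_sim` computes the same pairwise matrix directly from the raw embeddings.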
## Training Details
### Training Data
The train, dev, and test splits used for model finetuning and evaluation will be made available on Github upon publication of the associated research paper.
### Training Procedure
#### Training Hyperparameters
- num epochs: (1, 4) (sentence-transformer body, classification head)
- train batch sizes: (32, 4)
- body train max steps: 75
- head learning rate: 0.010
- L2 weight: 0.01
- warmup proportion: 0.15
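Assuming these hyperparameters map onto `setfit`'s `TrainingArguments` in the usual way, the configuration can be sketched as follows (a config fragment, not the exact training script used for this model):

```python
from setfit import TrainingArguments

# Hyperparameters from the list above; tuples are (embedding body, classification head)
args = TrainingArguments(
    num_epochs=(1, 4),
    batch_size=(32, 4),
    max_steps=75,             # body train max steps
    head_learning_rate=0.010,
    l2_weight=0.01,
    warmup_proportion=0.15,
)
```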
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The train, dev, and test splits used for model finetuning and evaluation will be made available on Github upon publication of the associated research paper.
## Citation
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Contact
[email protected]