Tags: Feature Extraction · Transformers · Safetensors · clip · zero-shot-image-classification
nielsr (HF Staff) committed · verified
Commit 5740e05 · 1 Parent(s): ebb4ec5

Add pipeline tag and library name


This PR improves the model card by adding the `pipeline_tag` and `library_name` metadata fields.

Files changed (1): README.md (+6 −3)
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
-license: mit
+base_model:
+- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
 datasets:
 - ILSVRC/imagenet-1k
 - mlfoundations/datacomp_small
-base_model:
-- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
+license: mit
+pipeline_tag: feature-extraction
+library_name: transformers
 ---
+
 [[Paper]](https://www.arxiv.org/abs/2506.03355)   [[Code]](https://github.com/LIONS-EPFL/LEAF)

 Model Initialized from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$ and semantic constraints.
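
Since the new metadata declares `library_name: transformers` and `pipeline_tag: feature-extraction`, a minimal usage sketch with the standard Transformers CLIP classes is given below. The repo id is a placeholder (the card's actual model id is not named in this diff); everything else follows the usual `CLIPModel`/`CLIPProcessor` API.

```python
# Minimal sketch, assuming the checkpoint loads with the standard CLIP classes.
# The repo id below is a placeholder -- replace it with this card's model id.
import torch
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "ORG/LEAF-CLIP-ViT-H-14"  # placeholder repo id (assumption)
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Example image and candidate captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# Pooled, projected features for downstream feature-extraction use.
image_embeds = outputs.image_embeds
text_embeds = outputs.text_embeds
```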