habibi26 committed
Commit f163b30 · verified · 1 Parent(s): c3b8bbd

Model save

Files changed (2)
  1. README.md +21 -17
  2. model.safetensors +1 -1
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.9857142857142858
+ value: 0.9428571428571428
   ---
 
   <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
   This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset.
   It achieves the following results on the evaluation set:
- - Loss: 0.0284
- - Accuracy: 0.9857
+ - Loss: 0.2587
+ - Accuracy: 0.9429
 
   ## Model description
 
@@ -60,25 +60,29 @@ The following hyperparameters were used during training:
   - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
   - lr_scheduler_type: linear
   - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 15
+ - num_epochs: 20
 
   ### Training results
 
   | Training Loss | Epoch | Step | Validation Loss | Accuracy |
   |:-------------:|:-------:|:----:|:---------------:|:--------:|
- | No log | 0.8421 | 4 | 0.5341 | 0.8286 |
- | No log | 1.8947 | 9 | 0.2827 | 0.8857 |
- | 0.6045 | 2.9474 | 14 | 0.4044 | 0.7857 |
- | 0.6045 | 4.0 | 19 | 0.2278 | 0.9143 |
- | 0.7684 | 4.8421 | 23 | 0.1033 | 0.9571 |
- | 0.7684 | 5.8947 | 28 | 0.1466 | 0.9714 |
- | 0.2323 | 6.9474 | 33 | 0.2253 | 0.9143 |
- | 0.2323 | 8.0 | 38 | 0.1232 | 0.9714 |
- | 0.0933 | 8.8421 | 42 | 0.0582 | 0.9714 |
- | 0.0933 | 9.8947 | 47 | 0.0609 | 0.9714 |
- | 0.067 | 10.9474 | 52 | 0.0453 | 0.9714 |
- | 0.067 | 12.0 | 57 | 0.0422 | 0.9857 |
- | 0.0835 | 12.6316 | 60 | 0.0284 | 0.9857 |
+ | No log | 0.8421 | 4 | 0.2112 | 0.9571 |
+ | No log | 1.8947 | 9 | 0.1227 | 0.9857 |
+ | 0.295 | 2.9474 | 14 | 0.1203 | 0.9571 |
+ | 0.295 | 4.0 | 19 | 0.0635 | 0.9714 |
+ | 0.0962 | 4.8421 | 23 | 0.2939 | 0.9429 |
+ | 0.0962 | 5.8947 | 28 | 0.2483 | 0.9286 |
+ | 0.163 | 6.9474 | 33 | 0.0712 | 0.9857 |
+ | 0.163 | 8.0 | 38 | 0.0474 | 0.9714 |
+ | 0.0646 | 8.8421 | 42 | 0.2012 | 0.9429 |
+ | 0.0646 | 9.8947 | 47 | 0.3587 | 0.9 |
+ | 0.1048 | 10.9474 | 52 | 0.0427 | 0.9857 |
+ | 0.1048 | 12.0 | 57 | 0.0149 | 0.9857 |
+ | 0.0519 | 12.8421 | 61 | 0.1616 | 0.9571 |
+ | 0.0519 | 13.8947 | 66 | 0.2286 | 0.9571 |
+ | 0.0151 | 14.9474 | 71 | 0.1369 | 0.9571 |
+ | 0.0151 | 16.0 | 76 | 0.2154 | 0.9571 |
+ | 0.0455 | 16.8421 | 80 | 0.2587 | 0.9429 |
 
 
  ### Framework versions
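
For reference, the hyperparameters touched by this diff map onto Hugging Face `TrainingArguments` roughly as sketched below. This is an illustrative sketch only, not the author's actual training script; `output_dir` is a hypothetical placeholder, and values such as the learning rate and batch sizes are defined elsewhere in the card and not visible in this hunk.

```python
# Illustrative sketch: the hyperparameters shown in this diff expressed as
# transformers.TrainingArguments. Placeholders are marked; this is not the
# author's original training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="clip-vit-base-patch32-finetuned",  # hypothetical output path
    num_train_epochs=20,          # from the diff: num_epochs changed 15 -> 20
    lr_scheduler_type="linear",   # from the card: lr_scheduler_type: linear
    warmup_ratio=0.1,             # from the card: lr_scheduler_warmup_ratio: 0.1
    adam_beta1=0.9,               # from the card: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,           # from the card: epsilon=1e-08
    # learning_rate and batch sizes appear earlier in the card but are not
    # part of this hunk, so they are intentionally left at their defaults here.
)
```
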
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9217367efe66f73b213b7ba341d3f9f82db99d8735aadf9639c182bd157b0f15
+ oid sha256:5971bd635e0f39449daf9128e9f2e74b05a7e70538103ee3f14c792a7b694072
  size 349854120