timm/mobilenetv3_large_100.ra_in1k
RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
Step (exponential decay w/ staircase) LR schedule with warmup
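These recipe fragments describe the training configuration, not anything needed at inference time. As a rough, hedged illustration only, the combination could be approximated with timm's own building blocks; the class names below exist in timm, but every hyperparameter value shown is a placeholder, not the value used for this checkpoint:

import timm
from timm.optim import RMSpropTF
from timm.scheduler import StepLRScheduler
from timm.utils import ModelEmaV2

model = timm.create_model('mobilenetv3_large_100', pretrained=False)

# RMSProp with TF 1.0 semantics (placeholder hyperparameters)
optimizer = RMSpropTF(model.parameters(), lr=0.064, alpha=0.9,
                      momentum=0.9, eps=1e-3, weight_decay=1e-5)

# EMA copy of the weights, updated after each optimizer step
model_ema = ModelEmaV2(model, decay=0.9999)

# step (exponential staircase) LR decay with a linear warmup
scheduler = StepLRScheduler(optimizer, decay_t=2, decay_rate=0.973,
                            warmup_t=3, warmup_lr_init=1e-6)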
Model Details
Model Type: Image classification / feature backbone
Model Stats:
Params (M): 5.5
GMACs: 0.2
Activations (M): 4.4
Image size: 224 x 224
Papers:
Searching for MobileNetV3: https://arxiv.org/abs/1905.02244
ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
Dataset: ImageNet-1k
Original: https://github.com/huggingface/pytorch-image-models
Model Usage
Image Classification

from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilenetv3_large_100.ra_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

# unsqueeze single image into batch of 1
output = model(transforms(img).unsqueeze(0))

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
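To see the predictions in readable form, the tensors returned above can be printed directly; this small follow-up assumes the top5_probabilities and top5_class_indices variables from the snippet above:

# inspect the top-5 predictions (indices are ImageNet-1k class ids)
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"class {idx.item():4d}: {prob.item():5.2f}%")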
Feature Map Extraction
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv3_large_100.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

# unsqueeze single image into batch of 1
output = model(transforms(img).unsqueeze(0))

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 16, 112, 112])
    #  torch.Size([1, 24, 56, 56])
    #  torch.Size([1, 40, 28, 28])
    #  torch.Size([1, 112, 14, 14])
    #  torch.Size([1, 960, 7, 7])
    print(o.shape)
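If only some feature levels are needed, timm's features_only mode also accepts an out_indices argument; a minimal sketch, where the chosen indices are just an example:

import timm

# only return the last two feature maps (indices refer to the default feature levels)
model = timm.create_model(
    'mobilenetv3_large_100.ra_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(3, 4),
)
print(model.feature_info.channels())  # channel count of each selected level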
Image Embeddings
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv3_large_100.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

# unsqueeze single image into batch of 1
output = model(transforms(img).unsqueeze(0))
# output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
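Embeddings produced this way can be compared directly with plain PyTorch; a minimal sketch, where the two tensors below are placeholders standing in for (1, num_features) outputs of the model above:

import torch
import torch.nn.functional as F

# placeholders; in practice use two embeddings returned by the model above
emb_a = torch.randn(1, 1280)
emb_b = torch.randn(1, 1280)

# cosine similarity in [-1, 1]; values closer to 1 indicate more similar images
similarity = F.cosine_similarity(emb_a, emb_b)
print(similarity.item())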
Model Comparison
Explore the dataset and runtime metrics of this model in timm model results.

Citation
@inproceedings{howard2019searching,
  title={Searching for mobilenetv3},
  author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others},
  booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
  pages={1314--1324},
  year={2019}
}

@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}

@inproceedings{wightman2021resnet,
  title={ResNet strikes back: An improved training procedure in timm},
  author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
  booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
DistilBERT base uncased finetuned SST-2
Table of Contents
Model Details
How to Get Started With the Model
Uses
Risks, Limitations and Biases
Training
Model Details
Model Description: This model is a fine-tuned checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the bert-base-uncased version reaches an accuracy of 92.7).
Developed by: Hugging Face
Model Type: Text Classification
Language(s): English
License: Apache-2.0
Parent Model: For more details about DistilBERT, we encourage users to check out this model card.
Resources for more information:
Model Documentation
DistilBERT paper
How to Get Started With the Model
Example of single-label classification:

import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
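For a higher-level interface, the same checkpoint can also be loaded through the transformers pipeline API; a short sketch:

from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Hello, my dog is cute"))
# e.g. [{'label': 'POSITIVE', 'score': ...}]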
Uses
Direct Use
This model can be used for single-label text classification, such as sentiment analysis.
The underlying DistilBERT base model can be used for masked language modeling or fine-tuned on other downstream tasks, but this checkpoint is intended to be used directly for sequence classification.
See the model hub to look for fine-tuned versions on a task that interests you.
Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
In addition, the model was not trained to be a factual or true representation of people or events, so using it to generate such content is out of scope for its abilities.
Risks, Limitations and Biases
Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like "This film was filmed in COUNTRY", this binary classification model gives radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan), even though nothing in the input indicates such a strong semantic shift.
In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.
We strongly advise users to thoroughly probe these aspects on their use cases in order to evaluate the risks of this model.
We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset.
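As a starting point for the kind of probing suggested above, a minimal sketch of the COUNTRY template probe using the transformers pipeline (the country list is only an example):

from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

# fill the template with different countries and compare positive-label scores
for country in ["France", "Afghanistan", "Japan", "Nigeria"]:
    result = classifier(f"This film was filmed in {country}")[0]
    print(country, result["label"], round(result["score"], 3))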
Training
Training Data
The authors used the Stanford Sentiment Treebank (SST-2) corpus to fine-tune the model.
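If you want to inspect the same data, the SST-2 task of GLUE is available through the datasets library; a minimal sketch:

from datasets import load_dataset

# SST-2 as distributed in the GLUE benchmark
sst2 = load_dataset("glue", "sst2")
print(sst2["train"][0])  # {'sentence': ..., 'label': 0 or 1, 'idx': ...}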
Training Procedure
Fine-tuning hyper-parameters
learning_rate = 1e-5
batch_size = 32
warmup = 600
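These hyper-parameters roughly correspond to a standard transformers fine-tuning setup; a hedged sketch of how they could be expressed with TrainingArguments (only the values listed above come from the card, everything else is a placeholder):

from transformers import TrainingArguments

# values from the card: learning rate, batch size, warmup steps
# output_dir and any other settings are placeholder assumptions
training_args = TrainingArguments(
    output_dir="sst2-distilbert",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    warmup_steps=600,
)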