Raihan Hidayatullah Djunaedi committed
Commit 2d694d9 · 1 Parent(s): 94028e0

Remove upload instructions and example usage scripts for zero-shot classification model

Files changed (2)
  1. UPLOAD_INSTRUCTIONS.md +0 -112
  2. example_usage.py +0 -95
UPLOAD_INSTRUCTIONS.md DELETED
@@ -1,112 +0,0 @@
# How to Upload Your Model to Hugging Face

Follow these steps to upload your zero-shot classification model to Hugging Face and make it available through the transformers library.

## Prerequisites

1. Install the required packages:

```bash
pip install huggingface_hub transformers
```

2. Create a Hugging Face account at https://huggingface.co/

3. Get your access token (you can then authenticate with the CLI in Method 2 or directly from Python, as sketched below):
   - Go to https://huggingface.co/settings/tokens
   - Create a new token with "Write" permissions
   - Copy the token (keep it secure!)

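If you prefer to authenticate from Python rather than the CLI, the `huggingface_hub` library provides a `login()` helper. A minimal sketch, assuming a recent `huggingface_hub` version; the token string is a placeholder, not a real credential:

```python
from huggingface_hub import login

# Authenticate this environment with the "Write" token created above.
# Replace the placeholder with your real token; never commit it to git.
login(token="hf_xxxxxxxxxxxxxxxxxxxx")
```
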
## Upload Steps

### Method 1: Using the Web Interface (Recommended for beginners)

1. Go to https://huggingface.co/new
2. Choose "Model" and give it a name (e.g., `zero-shot-classification`)
3. Set the visibility (Public/Private)
4. Click "Create model repository"
5. Upload files using the web interface:
   - Drag and drop all files from your model directory
   - Or use git (see Method 2)

### Method 2: Using Git/Command Line

1. Log in to the Hugging Face CLI:

```bash
huggingface-cli login
# Enter your token when prompted
```

2. Clone your repository:

```bash
git clone https://huggingface.co/YOUR_USERNAME/zero-shot-classification
cd zero-shot-classification
```

3. Copy your model files:

```bash
# Copy all files from your model directory to the cloned repository
cp /path/to/your/model/* .
```

4. Upload to Hugging Face:

```bash
git add .
git commit -m "Upload XLM-RoBERTa zero-shot classification model"
git push
```

### Method 3: Using the Python API

```python
from huggingface_hub import HfApi, create_repo

# Initialize the API client
api = HfApi()

# Create the repository (skip this step if it already exists)
repo_id = "YOUR_USERNAME/zero-shot-classification"
create_repo(repo_id, repo_type="model", private=False)

# Upload all files from your local model directory
api.upload_folder(
    folder_path="/path/to/your/model/directory",
    repo_id=repo_id,
    repo_type="model",
)
```

## Important Notes

1. **Replace placeholders**: Before uploading, replace `YOUR_USERNAME` in the README.md and example files with your actual Hugging Face username.

2. **Model card**: The README.md serves as your model card. Make sure it is complete and accurate.

3. **File size**: Large files (>10 MB) are handled automatically by Git LFS, which is already configured in .gitattributes.

4. **Testing**: After the upload, test your model (a short smoke test follows below):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="YOUR_USERNAME/zero-shot-classification")
```

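Once the pipeline loads, a single call is enough to confirm the model works end to end. The sentence and label set below are purely illustrative:

```python
# Quick smoke test: classify one sentence against a small label set.
result = classifier(
    "The new graphics card delivers excellent performance.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```
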
## Making Your Model Discoverable

1. Add relevant tags to the YAML frontmatter of your README.md (see the sketch below)
2. Add a good description
3. Include example usage
4. Consider adding a model card with performance metrics

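As one way to produce that frontmatter, `huggingface_hub` ships model-card helpers. A minimal sketch, assuming a recent `huggingface_hub` release; the language list, license, and tags are illustrative values, not requirements:

```python
from huggingface_hub import ModelCardData

# Metadata that becomes the YAML frontmatter at the top of README.md.
card_data = ModelCardData(
    language=["en", "es", "fr", "de"],
    license="mit",
    pipeline_tag="zero-shot-classification",
    tags=["zero-shot-classification", "xlm-roberta", "multilingual"],
)

# Paste this output between the leading "---" lines of README.md.
print(card_data.to_yaml())
```
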
## Troubleshooting

- **Authentication errors**: Make sure your token has write permissions
- **Large file errors**: Ensure Git LFS is properly configured
- **Model loading errors**: Check that all required files are present (config.json, model files, tokenizer files); the snippet below shows one way to verify this without cloning the repository

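A hedged sketch of that check using `HfApi.list_repo_files` (the expected file names are typical for an XLM-RoBERTa checkpoint and may differ depending on how your model was saved):

```python
from huggingface_hub import HfApi

api = HfApi()
files = set(api.list_repo_files("YOUR_USERNAME/zero-shot-classification"))
print(sorted(files))

# Typical files for a transformers checkpoint; adjust to your export format.
expected = {"config.json", "tokenizer_config.json", "sentencepiece.bpe.model"}
missing = expected - files
print("Missing files:", missing or "none")
```
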
After successful upload, your model will be available at:
`https://huggingface.co/YOUR_USERNAME/zero-shot-classification`

example_usage.py DELETED
@@ -1,95 +0,0 @@
#!/usr/bin/env python3
"""
Example script demonstrating how to use the XLM-RoBERTa Zero-Shot Classification model.
This script shows various use cases including multilingual classification.
"""

import torch
from transformers import pipeline


def main():
    print("Loading XLM-RoBERTa Zero-Shot Classification model...")

    # Initialize the zero-shot classification pipeline.
    # Replace 'YOUR_USERNAME/zero-shot-classification' with your actual model path.
    classifier = pipeline(
        "zero-shot-classification",
        model="YOUR_USERNAME/zero-shot-classification",
        device=0 if torch.cuda.is_available() else -1,  # Use GPU if available
    )

    print("Model loaded successfully!\n")

    # Example 1: English sentiment analysis
    print("Example 1: English Sentiment Analysis")
    text_en = "I love this new smartphone, it's absolutely amazing!"
    labels_en = ["positive", "negative", "neutral"]

    result = classifier(text_en, labels_en)
    print(f"Text: {text_en}")
    print(f"Predicted label: {result['labels'][0]} (score: {result['scores'][0]:.4f})")
    print()

    # Example 2: Multilingual topic classification
    print("Example 2: Multilingual Topic Classification")
    texts = [
        (
            "English",
            "The government announced new economic policies today.",
            ["politics", "sports", "technology", "entertainment"],
        ),
        (
            "Spanish",
            "El nuevo iPhone tiene características increíbles.",
            ["tecnología", "deportes", "política", "entretenimiento"],
        ),
        (
            "French",
            "Le match de football était très excitant hier soir.",
            ["sport", "politique", "technologie", "divertissement"],
        ),
        (
            "German",
            "Die neue KI-Technologie wird die Zukunft verändern.",
            ["Technologie", "Sport", "Politik", "Unterhaltung"],
        ),
    ]

    for language, text, labels in texts:
        result = classifier(text, labels)
        print(f"{language}: {text}")
        print(f"Predicted: {result['labels'][0]} (score: {result['scores'][0]:.4f})")
        print()

    # Example 3: Multi-label classification
    print("Example 3: Multi-label Classification")
    text_multi = "This movie has great action scenes and amazing special effects, but the story is quite boring."
    labels_multi = ["action", "drama", "comedy", "boring", "exciting", "visual effects"]

    result = classifier(text_multi, labels_multi, multi_label=True)
    print(f"Text: {text_multi}")
    print("All predictions:")
    for label, score in zip(result["labels"], result["scores"]):
        print(f"  {label}: {score:.4f}")
    print()

    # Example 4: Custom hypothesis template
    print("Example 4: Custom Hypothesis Template (Spanish)")
    text_es = "Esta película es realmente fantástica y emocionante."
    labels_es = ["positivo", "negativo", "neutro"]
    hypothesis_template = "Este texto es {}."

    result = classifier(text_es, labels_es, hypothesis_template=hypothesis_template)
    print(f"Text: {text_es}")
    print(f"Predicted: {result['labels'][0]} (score: {result['scores'][0]:.4f})")
    print(f"Using custom template: '{hypothesis_template}'")
    print()

    print(
        "Demo completed! You can now use this model for your own zero-shot classification tasks."
    )


if __name__ == "__main__":
    main()