codechrl committed
Commit feaddf8 · verified · 1 Parent(s): 7e8bae7

Training update: 6,737/238,451 rows (2.83%) | +26 new @ 2025-10-23 02:35:06

Files changed (5):
  1. README.md +8 -15
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
  5. training_metadata.json +7 -7
README.md CHANGED
@@ -9,60 +9,53 @@ tags:
 - cybersecurity
 - fill-mask
 - named-entity-recognition
+- transformers
+- tensorflow
+- pytorch
+- masked-language-modeling
 base_model: boltuix/bert-micro
 library_name: transformers
+pipeline_tag: fill-mask
 ---
 # bert-micro-cybersecurity
-
 ## 1. Model Details
 **Model description**
 "bert-micro-cybersecurity" is a compact transformer model adapted for cybersecurity text classification tasks (e.g., threat detection, incident reports, malicious vs. benign content).
-
 - Model type: fine-tuned lightweight BERT variant
 - Languages: English & Indonesian
 - Finetuned from: `boltuix/bert-micro`
-- Status: **Early version** — trained on **2.78%** of planned data.
-
+- Status: **Early version** — trained on **2.83%** of planned data.
 **Model sources**
 - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
 - Data: Cybersecurity Data
-
 ## 2. Uses
 ### Direct use
 You can use this model to classify cybersecurity-related text — for example, whether a given message, report, or log entry indicates malicious intent, abnormal behaviour, or threat presence.
-
 ### Downstream use
 - Embedding extraction for clustering or anomaly detection in security logs.
 - As part of a pipeline for phishing detection, malicious-email filtering, or incident triage.
 - As a feature extractor feeding a downstream system (e.g., alert generation, a SOC dashboard).
-
 ### Out-of-scope use
 - Not meant for high-stakes automated blocking decisions without human review.
 - Not optimized for languages other than English and Indonesian.
 - Not tested on non-cybersecurity domains or out-of-distribution data.
-
 ## 3. Bias, Risks, and Limitations
-Because the model is based on a small subset (2.78%) of the planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).
-
+Because the model is based on a small subset (2.83%) of the planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).
 - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, or vendor- and tooling-specific vocabulary.
 - Should not be used as the sole authority for incident decisions; only as an aid to human analysts.
-
 ## 4. How to Get Started with the Model
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
 tokenizer = AutoTokenizer.from_pretrained("codechrl/bert-micro-cybersecurity")
 model = AutoModelForSequenceClassification.from_pretrained("codechrl/bert-micro-cybersecurity")
-
 inputs = tokenizer("The server logged an unusual outbound connection to 123.123.123.123",
                    return_tensors="pt", truncation=True, padding=True)
 outputs = model(**inputs)
 logits = outputs.logits
 predicted_class = logits.argmax(dim=-1).item()
 ```
-
 ## 5. Training Details
-- **Trained records**: 6,637 / 238,430 (2.78%)
+- **Trained records**: 6,737 / 238,451 (2.83%)
 - **Learning rate**: 5e-05
 - **Epochs**: 3
 - **Batch size**: 16
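
The README also gains a `pipeline_tag: fill-mask` entry, while its quickstart exercises a classification head. For completeness, a minimal sketch of the fill-mask path, assuming the uploaded checkpoint still carries the base model's masked-language-modeling head (the card's tags suggest this, but the quickstart does not confirm it):

```python
from transformers import pipeline

# Assumes an MLM head is present, per the card's fill-mask tag.
fill_mask = pipeline("fill-mask", model="codechrl/bert-micro-cybersecurity")

# [MASK] is BERT's mask token; top completions come back scored.
for pred in fill_mask("The firewall blocked a suspicious [MASK] attempt."):
    print(pred["token_str"], round(pred["score"], 3))
```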
config.json CHANGED
@@ -17,7 +17,7 @@
   "num_hidden_layers": 2,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.57.1",
+  "transformers_version": "4.57.0",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d499c86d2c0ea3b54f75177498a11c5825510248cb35dd5973adec65d52440d6
+oid sha256:487e4f9052c100978689e6eade446eeda6f4c8fba1363b7a582d6079bb33374a
 size 17671560
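
A Git LFS pointer records the blob's SHA-256 (`oid`) and byte size rather than the file itself, so a downloaded copy can be checked against it. A standard-library sketch, assuming `model.safetensors` has already been fetched into the working directory (the path is hypothetical):

```python
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # hypothetical local copy of the weights

# The hex digest should equal the oid in the pointer after this commit.
digest = hashlib.sha256(path.read_bytes()).hexdigest()
print(digest == "487e4f9052c100978689e6eade446eeda6f4c8fba1363b7a582d6079bb33374a")
print(path.stat().st_size == 17671560)  # size field from the pointer
```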
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:32e79d532c7f5a7b4ae209d8f8bf38545aa96797d1142502dfe93c3287200d0a
+oid sha256:1f36e9bc24f3f3fbc56bfa76ad08bf8c96c15d398a5d494f6e83bfb7fc0d9d52
 size 5905
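
`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` writes next to a checkpoint. A sketch of inspecting a local copy; because the file is a pickle, recent PyTorch requires `weights_only=False`, which is only safe for trusted sources:

```python
import torch

# A pickle, not a tensor archive: weights_only=False is required on
# recent PyTorch and should only be used for files you trust.
args = torch.load("training_args.bin", weights_only=False)

print(args.learning_rate)                # 5e-05, matching the card
print(args.num_train_epochs)             # 3
print(args.per_device_train_batch_size)  # assumed home of "batch size: 16"
```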
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "trained_at": 1761184356.7112105,
-  "trained_at_readable": "2025-10-23 01:52:36",
-  "samples_this_session": 956,
-  "new_rows_this_session": 100,
-  "trained_rows_total": 6637,
-  "total_db_rows": 238430,
-  "percentage": 2.783626221532525,
+  "trained_at": 1761186906.6386719,
+  "trained_at_readable": "2025-10-23 02:35:06",
+  "samples_this_session": 26,
+  "new_rows_this_session": 26,
+  "trained_rows_total": 6737,
+  "total_db_rows": 238451,
+  "percentage": 2.825318409232924,
   "final_loss": 0,
   "epochs": 3,
   "learning_rate": 5e-05