codechrl committed
Commit 9ed22f8 · verified · 1 Parent(s): 9e0f81c

Training update: 163,080/164,080 rows (99.39%) | +14 new @ 2025-11-12 12:59:19

Files changed (4)
  1. README.md +5 -5
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
  4. training_metadata.json +7 -7
README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: fill-mask
  - Model type: fine-tuned lightweight BERT variant
  - Languages: English & Indonesia
  - Finetuned from: `boltuix/bert-micro`
- - Status: **Early version** — trained on **99.24%** of planned data.
+ - Status: **Early version** — trained on **99.39%** of planned data.
 
  **Model sources**
  - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
@@ -51,7 +51,7 @@ You can use this model to classify cybersecurity-related text — for example, w
  - Early classification of SIEM alert & events.
 
  ## 3. Bias, Risks, and Limitations
- Because the model is based on a small subset (99.24%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
+ Because the model is based on a small subset (99.39%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
  - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor or tooling-specific vocabulary.
  - **Should not be used as sole authority for incident decisions; only as an aid to human analysts.**
 
@@ -75,9 +75,9 @@ Since cybersecurity data often contains lengthy alert descriptions and execution
  - **LR scheduler**: Linear with warmup
 
  ### Training Data
- - **Total database rows**: 164,074
- - **Rows processed (cumulative)**: 162,825 (99.24%)
- - **Training date**: 2025-11-12 12:06:26
+ - **Total database rows**: 164,080
+ - **Rows processed (cumulative)**: 163,080 (99.39%)
+ - **Training date**: 2025-11-12 12:59:19
 
  ### Post-Training Metrics
  - **Final training loss**:
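The model card keeps the `fill-mask` pipeline tag, so the updated checkpoint loads like any other masked-LM fine-tune. A minimal sketch using the Transformers `pipeline` API; the repo id and the example sentence are placeholders, not taken from this commit:

```python
# Minimal sketch: load the checkpoint behind this commit through the fill-mask
# pipeline declared in the model card. The repo id below is a placeholder;
# substitute the actual Hub id of this model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="codechrl/your-model-id")  # hypothetical id

# Hypothetical SIEM-style prompt; [MASK] is the BERT mask token.
for pred in fill_mask("The SIEM flagged a possible [MASK] attack on the web server."):
    print(f"{pred['token_str']:>15}  score={pred['score']:.3f}")
```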
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:235d888928b7f139279de22b3a6a87bc3580dd576886416cc8a38c81ea4eed84
+ oid sha256:2c8c88d7b8e1c5a6bb92697ebebe4033fe560edfd1a14abd667de76bd4122345
  size 17671560
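This entry (and `training_args.bin` below) is a Git LFS pointer file: `oid sha256:` records the digest of the real payload and `size` its byte length. A standalone sketch that verifies a downloaded file against the new digest above; the local path is an assumption:

```python
# Standalone sketch, not part of the repo: recompute the SHA-256 of a downloaded
# LFS object and compare it with the `oid sha256:...` value from the pointer file.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()

expected = "2c8c88d7b8e1c5a6bb92697ebebe4033fe560edfd1a14abd667de76bd4122345"
print(sha256_of("model.safetensors") == expected)  # True for the new blob
```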
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ac78a4206886a076d2de4260d5e61a9f79bf5245fc923840906ecf3c293949be
+ oid sha256:fc86d10159644e71a2378dd3c3c100d67bae25d04340dec86285fe19e929ceaa
  size 5905
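The headline figure in the commit message (163,080/164,080 rows, 99.39%) is simply the ratio of `trained_rows_total` to `total_db_rows` recorded in the `training_metadata.json` diff below; a quick check reproduces it:

```python
# Recompute the `percentage` field from the new training_metadata.json values
# (see the diff below); plain arithmetic, not code from the repo.
trained_rows_total = 163_080
total_db_rows = 164_080

pct = trained_rows_total / total_db_rows * 100
print(f"{pct:.2f}%")  # 99.39%, matching the commit message
assert abs(pct - 99.39054119941491) < 1e-9  # value stored in the metadata
```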
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
  {
- "trained_at": 1762949186.5664735,
- "trained_at_readable": "2025-11-12 12:06:26",
- "samples_this_session": 1491,
- "new_rows_this_session": 255,
- "trained_rows_total": 162825,
- "total_db_rows": 164074,
- "percentage": 99.2387581213355,
+ "trained_at": 1762952359.7289188,
+ "trained_at_readable": "2025-11-12 12:59:19",
+ "samples_this_session": 1391,
+ "new_rows_this_session": 14,
+ "trained_rows_total": 163080,
+ "total_db_rows": 164080,
+ "percentage": 99.39054119941491,
  "final_loss": 0,
  "epochs": 3,
  "learning_rate": 5e-05,