--- OUTER FOLD 1/5 ---
INFO: Best params for fold 1: {'lr': 0.0006858999160561152, 'hidden_dim': 64, 'batch_size': 32}
INFO: Fold 1 Val RMSE: 1.5861, MAE: 1.3031

--- OUTER FOLD 2/5 ---
INFO: Best params for fold 2: {'lr': 0.000512061330949742, 'hidden_dim': 64, 'batch_size': 64}
INFO: Fold 2 Val RMSE: 1.4737, MAE: 1.1930

--- OUTER FOLD 3/5 ---
INFO: Best params for fold 3: {'lr': 0.000756755929227675, 'hidden_dim': 64, 'batch_size': 64}
INFO: Fold 3 Val RMSE: 1.4271, MAE: 1.1530

--- OUTER FOLD 4/5 ---
INFO: Best params for fold 4: {'lr': 0.0007303755012255117, 'hidden_dim': 128, 'batch_size': 32}
INFO: Fold 4 Val RMSE: 1.5594, MAE: 1.2839

--- OUTER FOLD 5/5 ---
INFO: Best params for fold 5: {'lr': 0.0007618320309633699, 'hidden_dim': 256, 'batch_size': 32}
INFO: Fold 5 Val RMSE: 1.4943, MAE: 1.2206
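
The five blocks above come from a nested cross-validation run: each outer fold holds out one validation split, runs an inner hyperparameter search on the remaining data only, and reports the winning configuration plus its outer-fold RMSE/MAE. Below is a minimal sketch of that structure; it assumes synthetic data, scikit-learn's MLPRegressor as a stand-in for the actual model, and a small random search over lr/hidden_dim/batch_size, since the real model, data, and tuner are not recorded in this log.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)  # stand-in train/val data

def sample_params():
    # Hypothetical search space; the real one is not shown in this log.
    return {"lr": float(10 ** rng.uniform(-4, -3)),
            "hidden_dim": int(rng.choice([64, 128, 256])),
            "batch_size": int(rng.choice([32, 64]))}

def fit_and_score(params, X_tr, y_tr, X_val, y_val):
    model = MLPRegressor(hidden_layer_sizes=(params["hidden_dim"],),
                         learning_rate_init=params["lr"],
                         batch_size=params["batch_size"],
                         max_iter=200, random_state=0)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_val)
    return mean_squared_error(y_val, pred) ** 0.5, mean_absolute_error(y_val, pred)

outer = KFold(n_splits=5, shuffle=True, random_state=0)
fold_rmses, fold_maes = [], []
for fold, (tr_idx, val_idx) in enumerate(outer.split(X), start=1):
    X_tr, y_tr, X_val, y_val = X[tr_idx], y[tr_idx], X[val_idx], y[val_idx]

    # Inner loop: random search scored by 3-fold CV on the outer-train split only.
    inner = KFold(n_splits=3, shuffle=True, random_state=fold)
    best_params, best_rmse = None, np.inf
    for _ in range(10):  # number of trials is an assumption
        params = sample_params()
        scores = [fit_and_score(params, X_tr[i_tr], y_tr[i_tr], X_tr[i_va], y_tr[i_va])[0]
                  for i_tr, i_va in inner.split(X_tr)]
        if np.mean(scores) < best_rmse:
            best_rmse, best_params = np.mean(scores), params

    # Refit with the winning params and score once on the held-out outer fold.
    rmse, mae = fit_and_score(best_params, X_tr, y_tr, X_val, y_val)
    fold_rmses.append(rmse)
    fold_maes.append(mae)
    print(f"--- OUTER FOLD {fold}/5 ---")
    print(f"INFO: Best params for fold {fold}: {best_params}")
    print(f"INFO: Fold {fold} Val RMSE: {rmse:.4f}, MAE: {mae:.4f}")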

------ Nested Cross-Validation Summary ------
Unbiased Validation RMSE: 1.5081 ± 0.0577
Unbiased Validation MAE:  1.2307 ± 0.0559
VAL FOLD RMSEs: [1.5861493, 1.4737211, 1.4270978, 1.5593641, 1.4943413]
VAL FOLD MAEs: [1.303138, 1.1929528, 1.1530348, 1.2838583, 1.2206385]
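
The summary lines are just the mean and spread of the per-fold values printed above; NumPy's default population standard deviation (ddof=0) reproduces the reported ±0.0577 and ±0.0559:

import numpy as np

fold_rmses = np.array([1.5861493, 1.4737211, 1.4270978, 1.5593641, 1.4943413])
fold_maes  = np.array([1.303138, 1.1929528, 1.1530348, 1.2838583, 1.2206385])

# np.std defaults to ddof=0 (population std), which matches the reported spreads.
print(f"Unbiased Validation RMSE: {fold_rmses.mean():.4f} ± {fold_rmses.std():.4f}")
print(f"Unbiased Validation MAE:  {fold_maes.mean():.4f} ± {fold_maes.std():.4f}")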

===== STEP 2: Final Model Training & Testing =====
INFO: Finding best hyperparameters on the FULL train/val set for final model...
INFO: Optimal hyperparameters for final model: {'lr': 0.000512061330949742, 'hidden_dim': 64, 'batch_size': 64}
INFO: Training final model...
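
Step 2 repeats the hyperparameter search once on the entire train/val set and then refits a single model with the winning configuration for use in Step 3. A minimal, self-contained sketch, again with MLPRegressor and synthetic data standing in for the real model and data:

from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

# Stand-in for the full train/val set.
X_trainval, y_trainval = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

best = {"lr": 0.000512061330949742, "hidden_dim": 64, "batch_size": 64}  # winner reported above
final_model = MLPRegressor(hidden_layer_sizes=(best["hidden_dim"],),
                           learning_rate_init=best["lr"],
                           batch_size=best["batch_size"],
                           max_iter=200, random_state=0).fit(X_trainval, y_trainval)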

===== STEP 3: Final Held-Out Test Evaluation =====
Test RMSE: 1.7620 (95% CI: [1.5127, 2.0372])
Test MAE:  1.3219 (95% CI: [1.2180, 1.4427])
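
The log does not record how the 95% intervals were obtained; a common choice for held-out test metrics is a percentile bootstrap over test examples, sketched below with placeholder targets and predictions.

import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample test examples with replacement and
    # recompute the metric on each resample.
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = np.array([metric(y_true[idx], y_pred[idx])
                      for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_pred), lo, hi

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

# Placeholder test targets and final-model predictions.
rng = np.random.default_rng(1)
y_test = rng.normal(size=200)
y_hat  = y_test + rng.normal(scale=1.5, size=200)

for name, metric in [("RMSE", rmse), ("MAE", mae)]:
    point, lo, hi = bootstrap_ci(y_test, y_hat, metric)
    print(f"Test {name}: {point:.4f} (95% CI: [{lo:.4f}, {hi:.4f}])")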