mnandwana committed
Commit eaf60d9 · verified · 1 parent: c45e9c5

Update README.md

Files changed (1): README.md (+4 −4)
@@ -30,13 +30,13 @@ RoGuard 1.0, a SOTA instruction fine-tuned LLM, is designed to help safeguard ou
 
 
 ## 📊 Model Benchmark Results
-We benchmark RoGuard 1.0 model on a comprehensive set of open-source datasets for both prompt and response, as well as on RoGuard-Eval. This allows us to evaluate our model on both in-domain and out-of-domain datasets. We report our results in terms of F-1 score for binary violating/non-violating classification. In the table below, we compare our performance with that of several well-known models. The RoGuard 1.0 outperforms other models while generalizing on out-of-domain datasets.
-
-- **Prompt Metrics**: These evaluate how well the model classifies or responds to potentially harmful **user inputs**
-- **Response Metrics**: These measure how well the model handles or generates **responses**, ensuring its outputs are safe and aligned.
-
-
-![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/667c4d7fb4244f6997563346/iqmZ9bZvvluBEqZiAJ7_0.jpeg)
+
+![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/667c4d7fb4244f6997563346/iqmZ9bZvvluBEqZiAJ7_0.jpeg)
+
+We benchmark the RoGuard 1.0 model on a comprehensive set of open-source datasets, covering both prompts and responses, as well as on RoGuard-Eval. This lets us evaluate the model on both in-domain and out-of-domain data. We report results as F1 scores for binary violating/non-violating classification. The table above compares our performance with that of several well-known models; RoGuard 1.0 outperforms them while generalizing to out-of-domain datasets.
+
+- **Prompt Metrics**: These evaluate how well the model classifies or responds to potentially harmful **user inputs**.
+- **Response Metrics**: These measure how well the model handles or generates **responses**, ensuring its outputs are safe and aligned.
 
 
 ## 🔗 GitHub Repository
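The F1 score that the updated README reports is the standard harmonic mean of precision and recall over the binary violating/non-violating labels. A minimal sketch of that computation (an illustrative helper, not part of the RoGuard repository; `1` is assumed to mean "violating"):

```python
def f1_binary(y_true, y_pred):
    """F1 score for binary labels: 1 = violating, 0 = non-violating."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        # No true positives: precision and recall are both 0 (or undefined).
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

In practice this matches `sklearn.metrics.f1_score` with its default binary averaging, which is what evaluation harnesses for this kind of moderation benchmark typically use.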