wanadzhar913 committed
Commit b1d1622 · verified · 1 Parent(s): 69ea740

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -16,9 +16,9 @@ This model was originally developed as part of the [AI Tinkerer's Hackathon in K
  for an LLM-as-a-Judge use case.
  
  It is a finetune of [mesolitica/malaysian-debertav2-base](https://huggingface.co/mesolitica/malaysian-debertav2-base).
- DeBERTa (Decoding-enhanced BERT with disentangled attention) for a natural language inference (NLI) task.
+ We're using DeBERTa (Decoding-enhanced BERT with disentangled attention) for a natural language inference (NLI) task.
  In our case, NLI is the task of determining whether a "hypothesis" is true (*entailment*) or false (*contradiction*)
- given a question-statement pair. DeBERTa is selected due to its
+ given a question-statement pair. DeBERTa was selected due to its
  [SOTA performance in comparison to other models like BERT and RoBERTa](https://wandb.ai/akshayuppal12/DeBERTa/reports/The-Next-Generation-of-Transformers-Leaving-BERT-Behind-With-DeBERTa--VmlldzoyNDM2NTk2#:~:text=What%20we%20do%20see%3A%20for,accuracy%20for%20the%20validation%20set.).
  
  ### Training Details
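
For context on how an NLI fine-tune like the one described in this diff is typically queried, here is a minimal sketch using the `transformers` library. The checkpoint id below is a placeholder (this commit does not name the fine-tuned repo), and the label names are whatever the checkpoint's own config defines; both are assumptions, not details from the commit.

```python
# Minimal sketch: scoring a question-statement pair with a DeBERTa NLI fine-tune.
# NOTE: "your-username/your-nli-finetune" is a placeholder, not the real repo id;
# the printed label names come from the checkpoint's own id2label config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/your-nli-finetune"  # placeholder checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

question = "Berapakah bilangan negeri di Malaysia?"
statement = "Malaysia mempunyai 13 negeri."

# Encode the pair as a single sequence: [CLS] question [SEP] statement [SEP]
inputs = tokenizer(question, statement, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the class logits (e.g. entailment vs. contradiction)
probs = logits.softmax(dim=-1).squeeze(0)
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```

In an LLM-as-a-Judge setting, the entailment probability can then serve as the judge's score for whether a model-generated statement is supported by the question's expected answer.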