Your employees were clearly bored #18
by ChuckMcSneed - opened
There are lots of joke answers, lots of false information, and lots of low-quality answers. I'm surprised that your models were functional after tuning on this.
Varied response quality, including low-quality examples, broadens Dolly 15k’s exposure to diverse scenarios and strengthens the model’s learning and resilience.
I'm okay with low-quality questions and a joke answer or two. I'm more concerned with the answers containing false information:
{"instruction": "Who became king of Holland in 1806?", "context": "", "response": "William I of the Netherlands became king of Holland in 1806.", "category": "open_qa"}
This is straight up false. The right answer is Louis Bonaparte. Another one:
{"instruction": "Which Dutch actress played Xenia Onatopp in the James Bond movie GoldenEye?", "context": "", "response": "Dutch actress Marijke Janssen played Xenia Onatopp in the James Bond movie GoldenEye.", "category": "open_qa"}
The right answer here is Famke Beumer Janssen. There are many more like those. If you teach a model to lie, it will come out as a liar. This is not a good thing.
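To make this concrete, here is a minimal sketch of how such records could be patched before fine-tuning. It assumes the dataset is a JSONL file with the instruction/context/response/category fields shown above; the file names and the hand-curated `corrections` table are hypothetical, not part of the actual Dolly 15k tooling.

```python
# Minimal sketch: patch known-bad Dolly 15k answers before fine-tuning.
# Assumptions: the input file name, output file name, and the corrections
# table below are all hypothetical examples.
import json

corrections = {
    "Who became king of Holland in 1806?":
        "Louis Bonaparte became king of Holland in 1806.",
    "Which Dutch actress played Xenia Onatopp in the James Bond movie GoldenEye?":
        "Dutch actress Famke Janssen played Xenia Onatopp in the James Bond movie GoldenEye.",
}

with open("databricks-dolly-15k.jsonl", encoding="utf-8") as src, \
     open("dolly-15k-patched.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        record = json.loads(line)
        # Replace the response if the instruction matches a known-bad entry.
        fixed = corrections.get(record["instruction"])
        if fixed is not None:
            record["response"] = fixed
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```

A lookup table like this only scales to the handful of errors someone has already spotted; a real cleanup pass would still need human review of the remaining answers.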