unable to quantize model for local use with ollama
#75 opened by adamjheard
➜ bert-base-uncased git:(main) docker run --rm -v .:/model ollama/quantize -q q4_K_M /model
unknown architecture BertForMaskedLM
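For context, the error suggests the ollama/quantize image is reading the `architectures` field from the model's config.json, and BERT checkpoints such as bert-base-uncased declare `BertForMaskedLM` there, which does not appear to be among the architectures the quantizer supports. A minimal sketch (an illustration, not part of the original report), assuming a local `transformers` install, to confirm what the checkpoint declares:

```python
# Inspect the architecture that config.json declares for the model.
# "bert-base-uncased" is the repo referenced above; a local path to the
# cloned repo works the same way.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.architectures)  # expected: ['BertForMaskedLM']
print(config.model_type)     # expected: 'bert'
```

If the printed architecture matches the one in the error message, the quantizer is rejecting the model type itself rather than anything about the local files.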