Implemented as a Multi-Layer Perceptron (MLP) to classify handwritten digits (0-9)
Model Architecture and Results
The model comprises a flattening layer followed by three linear layers with hidden dimensions of 256 and 64, using ReLU activations to introduce non-linearity. It achieves 95.6% accuracy after 15 training epochs with a batch size of 64. The training and test MNIST datasets are loaded with PyTorch DataLoaders.
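A minimal sketch of how such a model and training setup might look in PyTorch. The class and variable names (`MLP`, `train_loader`, etc.) and the choice of Adam as the optimizer are assumptions for illustration; only the layer sizes, activation, epoch count, and batch size come from the description above.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical reconstruction of the described architecture:
# Flatten -> Linear(784, 256) -> ReLU -> Linear(256, 64) -> ReLU -> Linear(64, 10)
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.layers(x)

# MNIST train/test datasets loaded with PyTorch DataLoaders (batch size 64)
transform = transforms.ToTensor()
train_data = datasets.MNIST(root="data", train=True, download=True, transform=transform)
test_data = datasets.MNIST(root="data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
test_loader = DataLoader(test_data, batch_size=64)

model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())  # optimizer choice is an assumption

# Train for 15 epochs
for epoch in range(15):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate accuracy on the held-out test set
correct = 0
with torch.no_grad():
    for images, labels in test_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
print(f"Test accuracy: {correct / len(test_data):.3f}")
```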