Can neural networks count digit frequency?




Neural Networks, Machine Learning, Decision Tree, Random Forest, Frequency, Sequence


In this research, we compare the performance of several classical machine learning models and neural networks at identifying the frequency of occurrence of each digit in a given number. This task has various applications in machine learning and computer vision, e.g., obtaining the frequency of a target object in a visual scene. We treat the problem as a hybrid of classification and regression tasks, and carefully construct our own datasets to observe systematic differences between methods. We evaluate each method using several metrics across multiple datasets: root mean squared error and mean absolute error for regression evaluation, and accuracy for classification evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and do not generalize well. We also observe that neural networks significantly outperform the classical machine learning models on both the regression and classification metrics for both the 6-digit and 10-digit number datasets.
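The task and metrics described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the target for each number is a 10-dimensional vector giving how many times each digit 0–9 occurs, and predictions are scored with MAE and RMSE as in the paper. The function names and the zero-padding convention are assumptions.

```python
def digit_frequency(n: int, width: int = 6) -> list[int]:
    """Return counts of digits 0-9 in the zero-padded decimal string of n."""
    s = str(n).zfill(width)
    counts = [0] * 10
    for ch in s:
        counts[int(ch)] += 1
    return counts

def mae(pred: list[float], target: list[int]) -> float:
    """Mean absolute error between a predicted and a true frequency vector."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def rmse(pred: list[float], target: list[int]) -> float:
    """Root mean squared error between a predicted and a true frequency vector."""
    return (sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)) ** 0.5
```

For example, `digit_frequency(112233)` yields `[0, 2, 2, 2, 0, 0, 0, 0, 0, 0]`, since the digits 1, 2, and 3 each appear twice in the 6-digit number.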





Author Biography

Viveka Kulharia, Cruise LLC

Viveka is currently a Senior Applied Research Scientist at Cruise LLC. He completed his PhD at the University of Oxford with Professor Phil Torr and Dr. Puneet Dokania.




How to Cite

Khandelwal, P., & Kulharia, V. (2023). Can neural networks count digit frequency? Journal of Student Research, 12(3).


