Comparison of chosen image classification methods on Android
Issue Vol. 36 (2025)
Mariusz Zapalski, Patryk Żabczyński, Paweł Powroźnik, pp. 342–349
Authors
mariusz.zapalski@pollub.edu.pl
patryk.zabczynski@pollub.edu.pl
Abstract
The authors compared three lightweight convolutional networks (MobileNet-V1, EfficientNet-Lite0 and ResNet-50) for image classification on Android smartphones, running inference with TensorFlow Lite on a multithreaded CPU. They measured inference time, CPU load and memory usage across several devices. EfficientNet-Lite0 proved the best compromise, providing high accuracy, short and consistent processing times, and moderate resource demands. MobileNet-V1 was the fastest but less accurate, while ResNet-50 achieved the highest accuracy at the expense of speed and memory. For practical use, EfficientNet-Lite0 is recommended, and further research into optimizations such as quantization, pruning and adaptive frame sampling is advised.
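The measurement methodology described above (repeated timed inference runs per model, with statistics over the samples) can be sketched as a small latency harness. This is an illustrative sketch, not the authors' measurement code: the `run_inference` callable stands in for a TensorFlow Lite interpreter invocation (`Interpreter.invoke()` wrapped with input/output handling), and the warmup/run counts are assumed values.

```python
import statistics
import time

def benchmark_inference(run_inference, warmup=5, runs=50):
    """Measure the latency of a single-image inference callable.

    Warmup iterations are discarded so that one-time initialisation
    (model loading, delegate setup) does not skew the statistics.
    Returns mean, 95th-percentile and standard deviation in ms.
    """
    for _ in range(warmup):
        run_inference()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "stdev_ms": statistics.stdev(samples),
    }

# Example with a dummy workload standing in for a model:
stats = benchmark_inference(lambda: sum(i * i for i in range(10_000)))
print(sorted(stats))  # ['mean_ms', 'p95_ms', 'stdev_ms']
```

Reporting a percentile alongside the mean captures the consistency of processing times, which the study highlights as a key difference between the three models.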
References
[1] A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems 25 (2012) 1097–1105.
[2] A. Ignatov, R. Timofte, W. Chou, K. Wang, M. Wu, T. Hartley, L. Van Gool, AI benchmark: running deep neural networks on Android smartphones, Proceedings of the European Conference on Computer Vision (ECCV) Workshops (2018) 288–314, https://doi.org/10.1007/978-3-030-11021-5_19.
[3] C. Luo, X. He, J. Zhan, L. Wang, W. Gao, J. Dai, Comparison and Benchmarking of AI Models and Frameworks on Mobile Devices (2020), https://arxiv.org/abs/2005.05085.
[4] Higher accuracy on vision models with EfficientNet-Lite, TensorFlow Blog, https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html, [accessed 01.05.2025].
[5] D. Baldota, S. Advani, S. Jaidhara, A. Hatekar, Object Recognition using TensorFlow and Voice Assistant, International Journal of Engineering Research & Technology 10(9) (2021) 359–362.
[6] S. A. Jakhete, P. Bagmar, A. Dorle, A. Rajurkar, P. Pimplikar, Object Recognition App for Visually Impaired, In 2019 IEEE Pune Section International Conference (PuneCon) (2019) 1-4, https://doi.org/10.1109/PuneCon46936.2019.9105670.
[7] O. Alsing, Mobile object detection using TensorFlow Lite and transfer learning, Master thesis, KTH Royal Institute of Technology, Stockholm, 2018.
[8] N. Khaled, S. Mohsen, K. El-Din, E. Akram, In-door assistant mobile application using CNN and TensorFlow, In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (2020) 1-6, https://doi.org/10.1109/ICECCE49384.2020.9179386.
[9] N. Parikh, I. Shah, S. Vahora, Android smartphone based visual object recognition for visually impaired using deep learning, In 2018 International Conference on Communication and Signal Processing (ICCSP) (2018) 420–425, https://doi.org/10.1109/ICCSP.2018.8524493.
[10] F. Ashiq, M. Asif, CNN-based object recognition and tracking system to assist visually impaired people, IEEE Access 10 (2022) 14819–14834, https://doi.org/10.1109/ACCESS.2022.3148036.
[11] G. Khekare, K. Solanki, Real time object detection with speech recognition using TensorFlow Lite, GIS Science Journal 9(3) (2022) 552-559.
[12] I. Martinez-Alpiste, G. Golcarenarenji, Q. Wang, J. M. Alcaraz-Calero, Smartphone-based real-time object recognition architecture for portable and constrained systems, Journal of Real-Time Image Processing 19 (2022) 103–115, https://doi.org/10.1007/s11554-021-01164-1.
[13] G. Demosthenous, V. Vassiliades, Continual Learning on the Edge with TensorFlow Lite (2021) 1-8, https://doi.org/10.48550/arXiv.2105.01946.
[14] L. Pellegrini, V. Lomonaco, G. Graffieti, D. Maltoni, Continual Learning at the Edge: Real-Time Training on Smartphone Devices, https://arxiv.org/abs/2105.13127.
[15] Z. Qin, Z. Li, Z. Zhang, Y. Bao, G. Yu, Y. Peng, J. Sun, ThunderNet: towards real-time generic object detection on mobile devices, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019) 6717–6726, https://doi.org/10.1109/ICCV.2019.00682.
[16] Android developer documentation, camera2 package, https://developer.android.com/reference/android/hardware/camera2/package-summary.html, [accessed 01.05.2025].
[17] Article on convolutional neural networks, IBM, https://www.ibm.com/think/topics/convolutional-neural-networks, [accessed 01.05.2025].
[18] Article on convolutional neural networks, GeeksforGeeks, https://www.geeksforgeeks.org/introduction-convolution-neural-network, [accessed 01.05.2025].
[19] P. Powroźnik, Polish emotional speech recognition using artificial neural network, Advances in Science and Technology Research Journal 8 (2014) 24-27, https://doi.org/10.12913/22998624/562.
[20] P. Powroźnik, P. Wójcicki, S. W. Przyłucki, Scalogram as a representation of emotional speech, IEEE Access 9 (2021) 154044-154057, https://doi.org/10.1109/ACCESS.2021.3127581.
[21] M. Skublewska-Paszkowska, P. Powroźnik, E. Łukasik, Attention temporal graph convolutional network for tennis groundstrokes phases classification, In 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) (2022) 1–8, https://doi.org/10.1109/FUZZ-IEEE55066.2022.9882822.
[22] K. Nowomiejska, P. Powroźnik, M. Skublewska-Paszkowska, K. Adamczyk, M. Concilio, L. Sereikaite, R. Zemaitiene, M. D. Toro, R. Rejdak, Residual Attention Network for distinction between visible optic disc drusen and healthy optic discs, Optics and Lasers in Engineering 176 (2024) 108056, https://doi.org/10.1016/j.optlaseng.2024.108056.
[23] MobileNet-V1 model repository, https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md?utm_source, [accessed 01.05.2025].
[24] EfficientNet-Lite0 model repository, https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/lite/README.md, [accessed 01.05.2025].
[25] ResNet-50 model repository, https://huggingface.co/qualcomm/ResNet50, [accessed 01.05.2025].

