APPLICATION OF RESNET-152 NEURAL NETWORKS TO ANALYZE IMAGES FROM UAV FOR FIRE DETECTION

Nataliia Stelmakh

n.stelmakh@kpi.ua
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” (Ukraine)
https://orcid.org/0000-0003-1876-2794

Svitlana Mandrovska


National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” (Ukraine)
https://orcid.org/0009-0005-2354-9965

Roman Galagan


National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” (Ukraine)
https://orcid.org/0000-0001-7470-8392

Abstract

Timely detection of fires in the natural environment (including fires on agricultural land) is an urgent task, as their uncontrolled development can cause significant damage. Today, the main approaches to fire detection are human visual analysis of a real-time video stream from unmanned aerial vehicles, or satellite image analysis. The first approach does not allow the fire detection process to be automated and is subject to human error, while the second does not allow fires to be detected in real time. This article addresses the use of neural networks to recognize and detect the seat of a fire based on the analysis of images obtained in real time from the cameras of small unmanned aerial vehicles. This automates fire detection, increases the efficiency of the process, and enables a rapid response to emerging fires, reducing their destructive consequences. In this paper, we propose to use the convolutional neural network ResNet-152. To test the performance of the trained neural network model, we deliberately used a limited test dataset with characteristics that differ significantly from those of the training and validation datasets, placing the trained network in deliberately difficult working conditions. Even so, we achieved a Precision of 84.6%, an Accuracy of 91%, and a Recall of 97.8%.
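The three reported metrics follow directly from a binary confusion matrix (fire vs. no fire). A minimal sketch, using hypothetical confusion-matrix counts chosen only to illustrate the arithmetic behind percentages like those reported (the actual test-set counts are not given in the abstract):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute precision, accuracy, and recall from confusion-matrix counts.

    tp: fire images correctly flagged; fp: non-fire images flagged as fire;
    fn: fire images missed; tn: non-fire images correctly passed.
    """
    precision = tp / (tp + fp)            # of all alarms, how many were real fires
    recall = tp / (tp + fn)               # of all real fires, how many were caught
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, accuracy, recall

# Hypothetical counts for illustration only (not the paper's test set):
p, a, r = classification_metrics(tp=88, fp=16, fn=2, tn=94)
print(f"Precision: {p:.1%}, Accuracy: {a:.1%}, Recall: {r:.1%}")
```

Note that for fire detection a high Recall is typically prioritized over Precision: a false alarm costs a manual check, while a missed fire can cost far more.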


Keywords:

UAV, neural network, ResNet-152, computer vision, artificial intelligence, fire detection




Published
2024-06-30

How to Cite

Stelmakh, N., Mandrovska, S., & Galagan, R. (2024). APPLICATION OF RESNET-152 NEURAL NETWORKS TO ANALYZE IMAGES FROM UAV FOR FIRE DETECTION. Informatyka, Automatyka, Pomiary W Gospodarce I Ochronie Środowiska, 14(2), 77–82. https://doi.org/10.35784/iapgos.5862

Authors

Nataliia Stelmakh 
n.stelmakh@kpi.ua
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” Ukraine
https://orcid.org/0000-0003-1876-2794

Ph.D., Associate Professor at the Department of Device Production, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”. Received a Ph.D. degree in Engineering Technology in 2010. Author of more than 50 scientific papers and 12 patents for utility models. Research interests: automation and computer-integrated technologies, assembly of devices and preparation of production, computer vision.



Svitlana Mandrovska 

National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” Ukraine
https://orcid.org/0009-0005-2354-9965

Svitlana Mandrovska received a master's degree in automation and computer-integrated technologies from the Faculty of Instrumentation Engineering, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” (Ukraine).

Research interests: study of computer vision for production processes, machine learning, neural networks.



Roman Galagan 

National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” Ukraine
https://orcid.org/0000-0001-7470-8392

Associate Professor at the Department of Automation and Non-Destructive Testing Systems, Faculty of Instrumentation Engineering, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”.

Author and co-author of more than 40 scientific papers, 1 monograph and 1 textbook. Research interests: programming, machine learning, non-destructive testing, computer vision, data analysis.





License


This work is licensed under a Creative Commons Attribution 4.0 International License.