APPLICATION OF RESNET-152 NEURAL NETWORKS TO ANALYZE IMAGES FROM UAV FOR FIRE DETECTION

Nataliia Stelmakh

n.stelmakh@kpi.ua

https://orcid.org/0000-0003-1876-2794
Svitlana Mandrovska

mandrovskasvitlana@gmail.com

https://orcid.org/0009-0005-2354-9965
Roman Galagan

r.galagan@kpi.ua

https://orcid.org/0000-0001-7470-8392

Abstract

Timely detection of fires in the natural environment (including fires on agricultural land) is an urgent task, as their uncontrolled development can cause significant damage. Today, the main approaches to fire detection are human visual analysis of a real-time video stream from unmanned aerial vehicles and analysis of satellite images. The first approach does not allow the fire detection process to be automated and is subject to human error, while the second does not allow fires to be detected in real time. The article addresses the relevance of using neural networks to recognize and localize fire seats by analyzing images obtained in real time from the cameras of small unmanned aerial vehicles. This automates fire detection, increases the efficiency of the process, and enables a rapid response to fire outbreaks, reducing their destructive consequences. In this paper, we propose to use the ResNet-152 convolutional neural network. To test the performance of the trained model, we deliberately used a limited test dataset whose characteristics differ significantly from those of the training and validation datasets, thus placing the trained network in deliberately difficult working conditions. Even so, we achieved a Precision of 84.6%, an Accuracy of 91%, and a Recall of 97.8%.
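The reported metrics follow from a binary (fire / no-fire) confusion matrix with "fire" as the positive class. The sketch below shows the standard definitions; the counts used in the example are hypothetical values chosen only to illustrate the formulas, not the paper's actual test-set numbers.

```python
# Standard definitions of the metrics reported in the abstract, with "fire"
# as the positive class. The confusion-matrix counts in the example are
# hypothetical illustrations, not the paper's actual test-set results.

def precision(tp: int, fp: int) -> float:
    """Share of frames flagged as fire that really contain fire."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Share of actual fire frames that were detected."""
    return tp / (tp + fn)

def accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    """Share of all frames classified correctly."""
    return (tp + tn) / (tp + fp + fn + tn)

# Illustrative counts: 88 detected fires, 16 false alarms,
# 2 missed fires, 94 correct rejections (200 frames total)
tp, fp, fn, tn = 88, 16, 2, 94
print(f"Precision: {precision(tp, fp):.1%}")         # 84.6%
print(f"Accuracy:  {accuracy(tp, fp, fn, tn):.1%}")  # 91.0%
print(f"Recall:    {recall(tp, fn):.1%}")            # 97.8%
```

Note the trade-off the abstract implies: a high Recall (few missed fires) matters most for early warning, even at the cost of some false alarms, which is reflected in the lower Precision.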

Keywords:

UAV, neural network, ResNet-152, computer vision, artificial intelligence, fire detection

Article Details

Stelmakh, N., Mandrovska, S., & Galagan, R. (2024). APPLICATION OF RESNET-152 NEURAL NETWORKS TO ANALYZE IMAGES FROM UAV FOR FIRE DETECTION. Informatyka, Automatyka, Pomiary W Gospodarce I Ochronie Środowiska, 14(2), 77–82. https://doi.org/10.35784/iapgos.5862
Author Biographies

Nataliia Stelmakh, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Ph.D., Associate Professor at the Department of Device Production, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute". Received a Ph.D. degree in the specialty "Engineering Technology" in 2010. Author of more than 50 scientific papers and 12 patents for utility models. Research interests: automation and computer-integrated technologies, assembly of devices and preparation of production, computer vision.

Svitlana Mandrovska, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Svitlana Mandrovska received a master's degree in automation and computer-integrated technologies from the National Technical University of Ukraine "Kyiv Polytechnic Institute", Faculty of Instrumentation (Ukraine).

Research interests: study of computer vision for production processes, machine learning, neural networks.

Roman Galagan, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Associate Professor at the Department of Automation and Non-Destructive Testing Systems, Faculty of Instrumentation Engineering, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute".

Author and co-author of more than 40 scientific papers, 1 monograph and 1 textbook. Research interests: programming, machine learning, non-destructive testing, computer vision, data analysis.