EFFICIENT LINE DETECTION METHOD BASED ON 2D CONVOLUTION FILTER
Paweł Kowalski
pawel.kowalski@pg.edu.pl
Gdansk University of Technology (Poland)
http://orcid.org/0000-0002-0913-1408
Piotr Tojza
Gdansk University of Technology (Poland)
http://orcid.org/0000-0002-0837-0976
Abstract
The article proposes an efficient line detection method based on a 2D convolution filter. The proposed method was compared with the Hough transform, the most popular method of straight-line detection. The developed method is suited to local detection of straight lines with a slope from -45° to 45°. It can also be used to detect curves whose shape is approximated by short straight sections. The new method is characterized by a constant computational cost regardless of the number of set pixels. The convolution is performed using logical conjunction and sum operations. Moreover, the design of the developed filter and the filtration method allow for parallelization. Owing to its constant computational cost, the new method is suitable for implementation in the hardware structure of real-time image processing systems.
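Only the abstract is reproduced on this page, so the authors' exact filter coefficients are not available here. As a rough sketch of the principle stated above, the fragment below scores each pixel of a binary image by convolving it with a small bank of short straight-segment kernels; on a {0, 1} image the multiply-accumulate inside the convolution reduces to the logical conjunction and sum mentioned in the abstract. The segment length, the slope sampling, the acceptance threshold, and the helper names (make_line_kernel, detect_lines) are illustrative assumptions, not the authors' design.

```python
import numpy as np
from scipy.signal import convolve2d

def make_line_kernel(length, slope):
    # Binary kernel: a `length`-pixel straight segment with the given
    # slope (|slope| <= 1, i.e. -45 to +45 degrees), centred in a
    # square window. Illustrative assumption, not the paper's filter.
    k = np.zeros((length, length), dtype=np.uint8)
    c = length // 2
    for x in range(-c, c + 1):
        y = int(round(x * slope))
        k[c + y, c + x] = 1
    return k

def detect_lines(binary_img, length=7, n_slopes=9, frac=1.0):
    # On a {0, 1} image the product inside the convolution acts as a
    # logical conjunction (AND) and the summation counts the matches,
    # so the response at each pixel equals the number of set pixels
    # along the candidate segment. The work per pixel is fixed by the
    # kernel size, independent of how many pixels are set.
    best = np.zeros(binary_img.shape, dtype=np.int64)
    for slope in np.linspace(-1.0, 1.0, n_slopes):
        k = make_line_kernel(length, slope)
        resp = convolve2d(binary_img.astype(np.uint8), k, mode="same")
        best = np.maximum(best, resp)
    # Keep pixels where at least `frac` of the segment is present.
    return best >= int(round(frac * length))

# Minimal usage example: a 45-degree diagonal segment.
img = np.zeros((32, 32), dtype=np.uint8)
d = np.arange(5, 25)
img[d, d] = 1
mask = detect_lines(img)  # True along the interior of the segment
```

Because every kernel has the same size, the per-pixel cost is constant, which mirrors the abstract's claim of a cost independent of the number of set pixels; the per-slope passes are mutually independent, so they parallelize naturally.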
Keywords:
image processing, real-time processing, Hough transform, straight-line detection
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.