Background subtraction and YOLO algorithm: two methods for the detection of people in uncontrolled environments
Universidad Francisco de Paula Santander, Facultad de Ingeniería
email: carlosvicentenr@ufps.edu.co
Universidad Francisco de Paula Santander, Facultad de Ingeniería
email: sergio.castroc@ufps.edu.co
Universidad Francisco de Paula Santander, Facultad de Ingeniería
email: byronmedina@ufps.edu.co
Introduction: This article is the result of the research project entitled “Signal processing system for the detection of people in agglomerations in areas of public space in the city of Cúcuta”, developed at the Universidad Francisco de Paula Santander in 2020.
Problem: The high percentage of false positives and false negatives in people-detection processes complicates decision making in video surveillance, tracking, and tracing applications.
Objective: To determine which people-detection technique yields better results in terms of response time and detection hit rate.
Methodology: Two techniques for the detection of people in uncontrolled environments, background subtraction and the YOLO algorithm, are validated in Python on videos recorded inside the Universidad Francisco de Paula Santander.
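To illustrate the two pipelines being compared, the following minimal sketch shows a background subtraction detector in Python with OpenCV. The MOG2 model, the morphological filtering step, the video file name, and the minimum contour area are assumptions chosen for illustration, not the exact parameters used in the study.

import cv2

# Minimal background subtraction sketch (assumed parameters, not the study's exact pipeline)
capture = cv2.VideoCapture("video.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Discard shadow pixels and noise before extracting contours
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to plausibly be a person (illustrative threshold)
    people = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1500]
    for (x, y, w, h) in people:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

capture.release()

For comparison, a YOLO detector can be run through OpenCV's DNN module, as sketched below. The yolov3.cfg and yolov3.weights file names, the 416x416 input size, and the confidence and NMS thresholds are assumptions for illustration and may differ from the configuration used in the study.

import cv2
import numpy as np

# Hedged sketch of YOLO person detection via OpenCV's DNN module
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed model files
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("frame.jpg")  # hypothetical test image
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

boxes, confidences = [], []
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if class_id == 0 and confidence > 0.5:  # class 0 = "person" in COCO
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)

# Non-maximum suppression removes overlapping detections of the same person
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
people = [boxes[i] for i in np.array(keep).flatten()]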
Results: The background subtraction technique achieved a hit rate of 84.07% and an average response time of 0.815 seconds, while the YOLO algorithm achieved a hit rate of 90% and an average response time of 4.59 seconds.
Conclusion: The background subtraction technique is suited to hardware platforms such as the Raspberry Pi 3B+ board for processes that prioritize real-time analysis of the information, while the YOLO algorithm offers the characteristics required when the information is analyzed after image acquisition.
Originality: This research analyzes the aspects required for the real-time analysis of information obtained in people-detection processes in uncontrolled environments.
Limitations: The analyzed videos were recorded only at the Universidad Francisco de Paula Santander. In addition, the Raspberry Pi 3B+ board overheats when processing the video images, because the task demands the full resources of the device.
Copyright (c) 2021 Ingeniería Solidaria

This work is licensed under a Creative Commons Attribution 4.0 International License.