Discriminación automática de tráfico urbano con técnicas de visión por computador

dc.contributor.advisorEspinosa Oviedo, Jairo Joséspa
dc.contributor.advisorEspinosa Oviedo, Jorge Ernestospa
dc.contributor.authorArroyo Jiménez, José Nicolásspa
dc.contributor.researchgroupGrupo de Automática de la Universidad Nacional GAUNALspa
dc.date.accessioned2021-02-26T14:06:17Zspa
dc.date.available2021-02-26T14:06:17Zspa
dc.date.issued2020-09-25spa
dc.description.abstractLa situación de movilidad actual en las ciudades conlleva problemas como alta congestión vehicular, contaminación ambiental y requerimientos de infraestructura. El uso de sistemas inteligentes de transporte basados en video puede mitigar estos efectos a un costo relativamente bajo. Para lograrlo, el tráfico urbano debe ser discriminado a partir de tomas de video. La discriminación de tráfico urbano se divide en tres tareas: detección, clasificación y seguimiento. Primero, se presenta un conjunto de datos que contiene anotaciones de distintos tipos de tráfico en tres escenarios. A continuación, se revisan técnicas de detección basadas en características de movimiento y apariencia, y se muestran los resultados de un experimento de detección de tráfico con estas. Luego, se revisan métodos de clasificación con algoritmos para la extracción de características y se muestran los resultados de un experimento con histogramas de gradientes orientados y máquinas de vectores de soporte. Después, se trata el tema de aprendizaje profundo para las tareas de detección y clasificación y se usan dos algoritmos en experimentos para detectar y clasificar distintos tipos de tráfico en diferentes escenarios urbanos. Finalmente, se revisan técnicas de seguimiento multiobjetivo y se experimenta con los resultados de detección obtenidos con los algoritmos de aprendizaje profundo.spa
dc.description.abstractThe current transportation situation in cities carries issues such as traffic jams, environmental pollution and infrastructure requirements. The use of intelligent transport systems based on video could mitigate these negative impacts at a relatively low cost. To accomplish this, urban traffic must be discriminated from video captures. Urban traffic discrimination is divided into three tasks: detection, classification and tracking. First, a dataset with annotations of different types of vehicles in three different scenarios is detailed. Next, detection techniques based on motion features and on appearance features are reviewed, and the results of an experiment using detection techniques based on motion features are shown. Then, classification methods that use feature extraction algorithms and classifiers are reviewed, and the results of an experiment that used histograms of oriented gradients and support vector machines for classification are shown. Afterwards, deep learning techniques for detection and classification are examined and evaluated with experiments using two algorithms on different urban scenarios. Finally, multi-objective tracking techniques are reviewed and tested on the detection results obtained with deep learning.spa
dc.description.additionalLínea de Investigación: Inteligencia artificial, Visión por computadorspa
dc.description.degreelevelMaestríaspa
dc.format.extent139spa
dc.format.mimetypeapplication/pdfspa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/79313
dc.language.isoengspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Medellínspa
dc.publisher.departmentDepartamento de Ingeniería Eléctrica y Automáticaspa
dc.publisher.programMedellín - Minas - Maestría en Ingeniería - Automatización Industrialspa
dc.relation.referencesK. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, pp. 2980–2988, 2017.spa
dc.relation.referencesT. Kadir and M. Brady, “Saliency, Scale and Image Description,” International Journal of Computer Vision, vol. 45, no. 2, pp. 83–105, 2001.spa
dc.relation.referencesJ. W. Woo, W. Lee, and M. Lee, “A traffic surveillance system using dynamic saliency map and SVM boosting,” International Journal of Control, Automation and Systems, vol. 8, no. 5, pp. 948–956, 2010.spa
dc.relation.referencesG. Lee and R. Mallipeddi, “A Genetic Algorithm-Based Moving Object Detection for Real-time Traffic Surveillance,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1619–1622, 2015. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7072530spa
dc.relation.referencesL.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle Detection Using Normalized Color and Edge Map,” IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 850–864, 2007.spa
dc.relation.referencesC. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.spa
dc.relation.referencesK. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.spa
dc.relation.referencesV. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” pp. 1–31, 2016. [Online]. Available: http://arxiv.org/abs/1603.07285spa
dc.relation.referencesN. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.spa
dc.relation.referencesY. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. [Online]. Available: http://ieeexplore.ieee.org/document/726791/spa
dc.relation.referencesK. Chellapilla, S. Puri, and P. Simard, “High Performance Convolutional Neural Networks for Document Processing,” 2006.spa
dc.relation.referencesA. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances In Neural Information Processing Systems, pp. 1–9, 2012.spa
dc.relation.referencesM. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8689 LNCS, no. PART 1, pp. 818–833, 2014.spa
dc.relation.referencesK. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 770–778, 2016.spa
dc.relation.referencesR. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.spa
dc.relation.referencesR. Girshick, “Fast R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2015 International, pp. 1440–1448, 2015.spa
dc.relation.referencesW. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.spa
dc.relation.referencesJ. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 779–788, 2016.spa
dc.relation.referencesS. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, vol. 2015-January, 2015, pp. 91–99. [Online]. Available: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84960980241&partnerID=40&md5=18aaa500235b11fb99e953f8b227f46dspa
dc.relation.referencesR. Rad and M. Jamzad, “Real time classification and tracking of multiple vehicles in highways,” Pattern Recognition Letters, vol. 26, no. 10, pp. 1597–1607, 2005.spa
dc.relation.referencesK. Zhang, L. Zhang, Q. Liu, D. Zhang, and M. H. Yang, “Fast visual tracking via dense spatio-temporal context learning,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8693 LNCS, no. PART 5, pp. 127–141, 2014.spa
dc.relation.referencesL. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, “Fully-convolutional siamese networks for object tracking,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9914 LNCS, pp. 850–865, 2016.spa
dc.relation.referencesY. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, and M. H. Yang, “Hedged Deep Tracking,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 4303–4311, 2016.spa
dc.relation.referencesS. Yun, J. Choi, Y. Yoo, K. Yun, and J. Y. Choi, “Action-decision networks for visual tracking with deep reinforcement learning,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 1349– 1358, 2017.spa
dc.relation.referencesJ. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P. H. Torr, “End-to-end representation learning for Correlation Filter based tracking,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017- January, pp. 5000–5008, 2017.spa
dc.relation.referencesN. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” Proceedings - International Conference on Image Processing, ICIP, vol. 2017-September, pp. 3645–3649, 2018.spa
dc.relation.referencesB. Tian, B. T. Morris, M. Tang, Y. Liu, Y. Yao, C. Gou, D. Shen, and S. Tang, “Hierarchical and Networked Vehicle Surveillance in ITS: A Survey,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 1, pp. 25–48, 2017.spa
dc.relation.referencesN. Buch, S. A. Velastin, and J. Orwell, “A review of computer vision techniques for the analysis of urban traffic,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 3, pp. 920–939, 2011.spa
dc.relation.referencesS. Sivaraman and M. M. Trivedi, “A general active-learning framework for on-road vehicle recognition and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 267–276, 2010.spa
dc.relation.referencesW. Zhang, Q. M. Wu, X. Yang, and X. Fang, “Multilevel framework to detect and handle vehicle occlusion,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 161–174, 2008.spa
dc.relation.referencesD. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.spa
dc.relation.referencesA. Ng and M. I. Jordan, “On generative vs. discriminative classifiers: A comparison of logistic regression and naive bayes,” Proceedings of Advances in Neural Information Processing, vol. 28, no. 3, pp. 841–848, 2001.spa
dc.relation.referencesY. Freund, “An adaptive version of the boost by majority algorithm,” Machine Learning, vol. 43, no. 3, pp. 293–318, 2001. [Online]. Available: http://doi.acm.org/10.1145/307400.307419spa
dc.relation.referencesG. Cybenko, “Approximation by Superpositions of a Sigmoidal Function,” Mathematics of Control, Signals, and Systems, vol. 2, pp. 303–314, 1989.spa
dc.relation.referencesN. S. Altman, “An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression,” The American Statistician, vol. 46, no. 3, pp. 175–185, 1992. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/00031305.1992.10475879spa
dc.relation.referencesD. C. Cireşan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “Flexible, High Performance Convolutional Neural Networks for Image Classification,” Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pp. 1237–1242, 2011. [Online]. Available: https://www.ijcai.org/Proceedings/11/Papers/210.pdfspa
dc.relation.referencesR. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.spa
dc.relation.referencesJ. Lou, H. Yang, W. Hu, and T. Tan, “Visual Vehicle Tracking Using an Improved EKF,” pp. 23–25, 2002.spa
dc.relation.referencesM. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking, vol. 18, no. 2, pp. 723–737, 2007.spa
dc.relation.referencesJ. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” CoRR, vol. abs/1804.02767, 2018. [Online]. Available: http://arxiv.org/abs/1804.02767spa
dc.relation.referencesN. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. I, no. 3, 2005, pp. 886–893.spa
dc.relation.referencesV. Mariano, J. Min, J.-H. Park, R. Kasturi, D. Mihalcik, H. Li, D. Doermann, and T. Drayer, “Performance evaluation of object detection algorithms,” pp. 965–969, 2003.spa
dc.relation.referencesJ. E. Espinosa, S. A. Velastin, and J. W. Branch, “Motorcycle detection and classification in urban Scenarios using a model based on Faster RCNN,” arXiv:1808.02299 [cs], Aug. 2018, arXiv: 1808.02299. [Online]. Available: http://arxiv.org/abs/1808.02299spa
dc.relation.referencesM. Piccardi, “Background subtraction techniques: A review,” in Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, vol. 4, 2004, pp. 3099–3104.spa
dc.relation.referencesD. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Towards Robust Automatic Traffic Scene Analysis in Real-Time,” pp. 3776–3781, 1994.spa
dc.relation.referencesC. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252, 1999.spa
dc.relation.referencesN. Friedman and S. Russell, “Image Segmentation in Video Sequences: A Probabilistic Approach,” Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence, pp. 175–181, 1997. [Online]. Available: http://arxiv.org/abs/1302.1539spa
dc.relation.referencesDempster, A. P., Laird, N. M., and Rubin, D. B., “Maximum Likelihood from Incomplete Data via the EM Algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977. [Online]. Available: http://www.jstor.org/stable/2984875spa
dc.relation.referencesA. Elgammal, D. Harwood, and L. Davis, “Non-parametric Model for Background Subtraction,” 2000, pp. 751–767.spa
dc.relation.referencesZ. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” Proceedings - International Conference on Pattern Recognition, vol. 2, pp. 28–31, 2004.spa
dc.relation.referencesB. Han, D. Comaniciu, and L. Davis, “Sequential kernel density approximation through mode propagation: Applications to background modeling,” vol. 32, no. 12, p. 16, 2004.spa
dc.relation.referencesM. Seki, T. Wada, H. Fujiwara, and K. Sumi, “Background subtraction based on cooccurrence of image variations,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2003.spa
dc.relation.referencesN. Oliver, B. Rosario, and A. Pentland, “A Bayesian computer vision system for modeling human interactions,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 1542, pp. 255–272, 1999.spa
dc.relation.referencesI. Sobel and G. Feldman, “A 3x3 isotropic gradient operator for image processing,” in R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, pp. 271–272, 1973.spa
dc.relation.referencesR. Cucchiara, C. Crana, M. Piccardi, A. Prati, and S. Sirotti, “Improving shadow suppression in moving object detection with HSV color information,” pp. 334–339, 2002.spa
dc.relation.referencesD. Comaniciu, V. Ramesh, and P. Meer, “Real-Time Tracking of Non-Rigid Objects Using Mean Shift,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 142–149, 2000.spa
dc.relation.referencesA. Kuehnle, “Symmetry-based recognition of vehicle rears,” Pattern Recognition Letters, vol. 12, no. 4, pp. 249–258, 1991.spa
dc.relation.referencesW. von Seelen, C. Curio, J. Gayko, U. Handmann, and T. Kalinke, “Scene analysis and organization of behavior in driver assistance systems,” pp. 524–527, 2002.spa
dc.relation.referencesR. O. Duda and P. E. Hart, Pattern classification and scene analysis, 1973. [Online]. Available: http://www.citeulike.org/group/1938/article/1055187spa
dc.relation.referencesJohn Canny, “A Computational Approach To Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–714, 1986.spa
dc.relation.referencesC. Goerick, D. Noll, and M. Werner, “Artificial neural networks in real-time car detection and tracking applications,” Pattern Recognition Letters, vol. 17, no. 4 SPEC. ISS., pp. 335–343, 1996.spa
dc.relation.referencesHsu-Yung Cheng, Chih-Chia Weng, and Yi-Ying Chen, “Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2011.spa
dc.relation.referencesC. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Procedings of the Alvey Vision Conference 1988, pp. 23.1–23.6, 1988. [Online]. Available: http://www.bmva.org/bmvc/1988/avc-88-023.htmlspa
dc.relation.referencesT. Kalinke, C. Tzomakas, and W. von Seelen, “A texture-based object detection and an adaptive model-based classification,” IEEE Intelligent Vehicles Symposium, pp. 341–346, 1998. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.526spa
dc.relation.referencesR. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural Features for Image Classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, 1973.spa
dc.relation.referencesH. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.spa
dc.relation.referencesT. Moranduzzo and F. Melgani, “Automatic car counting method for unmanned aerial vehicle images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 3, pp. 1635–1647, 2014.spa
dc.relation.referencesJ. W. Hsieh, L. C. Chen, and D. Y. Chen, “Symmetrical SURF and Its applications to vehicle detection and vehicle make and model recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 1, pp. 6–20, 2014.spa
dc.relation.referencesE. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 3951 LNCS, pp. 430–443, 2006.spa
dc.relation.referencesM. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6314 LNCS, no. PART 4, pp. 778–792, 2010.spa
dc.relation.referencesE. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2564–2571, 2011.spa
dc.relation.referencesJ. L. Buliali, C. Fatichah, D. Herumurti, D. Fenomena, H. Widyastuti, and M. Wallace, “Vehicle detection on images from satellite using oriented fast and rotated brief,” Journal of Engineering and Applied Sciences, vol. 12, no. 17, pp. 4500–4503, 2017.spa
dc.relation.referencesS. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, “Traffic monitoring and accident detection at intersections,” IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, vol. 1, no. 2, pp. 703–708, 1999.spa
dc.relation.referencesC. C. R. Wang and J. J. J. Lien, “Automatic vehicle detection using local features - A statistical approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 83–96, 2008.spa
dc.relation.referencesF. Leymarie and M. D. Levine, “Simulating the grassfire transform using an active contour model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 1, pp. 56–75, 1992.spa
dc.relation.referencesS. Baluja, “Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning,” Ph.D. dissertation, Carnegie Mellon University, 1994.spa
dc.relation.referencesP. Jaccard, “Étude comparative de la distribution florale dans une portion des Alpes et des Jura,” Bulletin de la Société Vaudoise des Sciences Naturelles, vol. 37, pp. 547–579, 1901.spa
dc.relation.referencesL. Mason, J. Baxter, P. Bartlett, and M. Frean, “Boosting algorithms as gradient descent,” Advances in Neural Information Processing Systems, pp. 512–518, 2000.spa
dc.relation.referencesQ. Zhu, S. Avidan, M. C. Yeh, and K. T. Cheng, “Fast human detection using a cascade of histograms of oriented gradients,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1491–1498, 2006.spa
dc.relation.referencesM. Pedersoli, J. Gonzàlez, and J. J. Villanueva, “High-speed human detection using a multiresolution cascade of histograms of oriented gradients,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5524 LNCS, pp. 48–55, 2009.spa
dc.relation.referencesB. Zhang, “Reliable classification of vehicle types based on cascade classifier ensembles,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 1, pp. 322–332, 2013.spa
dc.relation.referencesN. Buch, J. Orwell, and S. A. Velastin, “3D extended histogram of oriented gradients (3DHOG) for classification of road users in urban scenes,” British Machine Vision Conference, BMVC 2009 - Proceedings, 2009.spa
dc.relation.referencesT. Ojala, M. Pietikäinen, and D. Harwood, “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions,” Proceedings - International Conference on Pattern Recognition, vol. 3, pp. 582–585, 1994.spa
dc.relation.referencesY. Tang, C. Zhang, R. Gu, P. Li, and B. Yang, “Vehicle detection and recognition for intelligent traffic surveillance system,” Multimedia Tools and Applications, vol. 76, no. 4, pp. 5817–5832, 2017.spa
dc.relation.referencesO. Barkan, J. Weill, L. Wolf, and H. Aronowitz, “Fast high dimensional vector multiplication face recognition,” Proceedings of the IEEE International Conference on Computer Vision, pp. 1960–1967, 2013.spa
dc.relation.referencesJ. Trefný and J. Matas, “Extended set of local binary patterns for rapid object detection,” Computer Vision Winter Workshop, pp. 1–7, 2010. [Online]. Available: http://cmp.felk.cvut.cz/~matas/papers/trefny-lbp-cvww10.pdfspa
dc.relation.referencesX. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.spa
dc.relation.referencesT. Bouwmans, C. Silva, C. Marghes, M. S. Zitouni, H. Bhaskar, and C. Frelicot, “On the role and the importance of features for background modeling and foreground detection,” Computer Science Review, vol. 28, pp. 26–91, 2018.spa
dc.relation.referencesP. Dollár, Z. Tu, P. Perona, and S. Belongie, “Integral channel features,” British Machine Vision Conference, BMVC 2009 - Proceedings, pp. 1–11, 2009.spa
dc.relation.referencesA. K. Jain, “Data clustering: 50 years beyond K-means,” Pattern Recognition Letters, vol. 31, no. 8, pp. 651–666, 2010.spa
dc.relation.referencesJ. Schmidhuber, “Deep Learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85–117, 2015.spa
dc.relation.referencesW. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biology, vol. 52, no. 1, pp. 99–115, 1990.spa
dc.relation.referencesM. L. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry. MIT Press, 1972. [Online]. Available: https://books.google.com.co/books?id=Ow1OAQAAIAAJspa
dc.relation.referencesP. J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Harvard University, 1975. [Online]. Available: https://books.google.com.co/books?id=z81XmgEACAAJspa
dc.relation.referencesF. Rosenblatt, “The Perceptron - A Perceiving and Recognizing Automaton,” pp. 460– 1, 1957.spa
dc.relation.referencesL. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri, “Are Loss Functions All the Same?” Neural Computation, vol. 16, no. 5, pp. 1063–1076, 2004.spa
dc.relation.referencesJ. Espinosa, S. Velastin, and J. Branch, “Motorcycle Classification in Urban Scenarios using Convolutional Neural Networks for Feature Extraction,” in 8th International Conference of Pattern Recognition Systems (ICPRS 2017). Institution of Engineering and Technology, 2017. [Online]. Available: https://digital-library.theiet.org/content/conferences/10.1049/cp.2017.0155spa
dc.relation.referencesS. Stehman, “Selecting and interpreting measures of thematic classification accuracy.” Remote Sensing of Environment, vol. 62, p. 77, 1997.spa
dc.relation.referencesC. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.spa
dc.relation.referencesY. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.spa
dc.relation.referencesY. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.spa
dc.relation.referencesD. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning Internal Representations by Error Propagation. Morgan Kaufmann Publishers, Inc., 1988. [Online]. Available: http://dx.doi.org/10.1016/B978-1-4832-1446-7.50035-2spa
dc.relation.referencesK. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.spa
dc.relation.referencesX. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” Journal of Machine Learning Research, vol. 15, pp. 315–323, 2011.spa
dc.relation.referencesI. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.orgspa
dc.relation.referencesV. Nair and G. E. Hinton, “Rectified linear units improve Restricted Boltzmann machines,” ICML 2010 - Proceedings, 27th International Conference on Machine Learning, pp. 807–814, 2010.spa
dc.relation.referencesC. Lemaréchal, “Cauchy and the Gradient Method,” Documenta Mathematica, vol. ISMP, pp. 251–254, 2012. [Online]. Available: https://www.math.uni-bielefeld.de/documenta/vol-ismp/40_lemarechal-claude.pdfspa
dc.relation.referencesO. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.spa
dc.relation.referencesY. LeCun, L. Bottou, G. B. Orr, and K. R. Müller, “Efficient BackProp,” pp. 9–50, 1998.spa
dc.relation.referencesY. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio, “Identifying and attacking the saddle point problem in high-dimensional non-convex optimization,” Advances in Neural Information Processing Systems, vol. 4, no. January, pp. 2933–2941, 2014.spa
dc.relation.referencesJ. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” COLT 2010 - The 23rd Conference on Learning Theory, pp. 257–269, 2010.spa
dc.relation.referencesD. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pp. 1–15, 2015.spa
dc.relation.referencesS. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 32nd International Conference on Machine Learning, ICML 2015, vol. 1, pp. 448–456, 2015.spa
dc.relation.referencesC. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, no. 1, 2019. [Online]. Available: https://doi.org/10.1186/s40537-019-0197-0spa
dc.relation.referencesJ. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” Advances in Neural Information Processing Systems, vol. 4, no. January, pp. 3320–3328, 2014.spa
dc.relation.referencesD. H. Hubel and T. N. Wiesel, “Receptive fields of single neurons in the cat’s striate cortex,” Journal of Physiology, vol. 148, pp. 574–591, 1959.spa
dc.relation.referencesC. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 07- 12-June-2015, pp. 1–9, 2015.spa
dc.relation.referencesK. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pp. 1–14, 2015.spa
dc.relation.referencesP. Soviany and R. T. Ionescu, “Optimizing the trade-off between single-stage and twostage deep object detectors using image difficulty prediction,” Proceedings - 2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2018, pp. 209–214, 2018.spa
dc.relation.referencesJ. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, 2013.spa
dc.relation.referencesM. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.spa
dc.relation.referencesP. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.spa
dc.relation.referencesK. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” CoRR, vol. abs/1406.4729, 2014. [Online]. Available: http://arxiv.org/abs/1406.4729spa
dc.relation.referencesB. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” CoRR, vol. abs/1505.00853, 2015. [Online]. Available: http://arxiv.org/abs/1505.00853spa
dc.relation.referencesJ. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 6517–6525, 2017.spa
dc.relation.referencesM. Zhu, “Recall, precision and average precision,” Department of Statistics and Actuarial Science, . . . , pp. 1–11, 2004. [Online]. Available: http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:Recall+,+Precision+and+Average+Precision#0spa
dc.relation.referencesM. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.spa
dc.relation.referencesT. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: common objects in context,” CoRR, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312spa
dc.relation.referencesA. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Computing Surveys, vol. 38, no. 4, 2006.spa
dc.relation.referencesD. H. Ballard and C. M. Brown, Computer Vision. Englewood Cliffs, N.J.: Prentice-Hall, 1982.spa
dc.relation.referencesJ. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–714, 1986.spa
dc.relation.referencesS. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, “Detection and Classification of Vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37–47, 2002.spa
dc.relation.referencesK. Nummiaro, E. Koller-Meier, and L. Van Gool, “An adaptive color-based particle filter,” Image and Vision Computing, vol. 21, no. 1, pp. 99–110, 2003.spa
dc.relation.referencesK. Mu, F. Hui, and X. Zhao, “Multiple vehicle detection and tracking in highway traffic surveillance video based on sift feature matching,” Journal of Information Processing Systems, vol. 12, pp. 183–195, 01 2016.spa
dc.relation.referencesB. T. Morris and M. M. Trivedi, “Learning, modeling, and classification of vehicle track patterns from live video,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 425–437, 2008.spa
dc.relation.referencesA. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” Proceedings - International Conference on Image Processing, ICIP, vol. 2016-August, pp. 3464–3468, 2016.spa
dc.relation.referencesH. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.spa
dc.relation.referencesS. Messelodi, C. M. Modena, and M. Zanin, “A computer vision system for the detection and classification of vehicles at urban road intersections,” Pattern Analysis and Applications, vol. 8, no. 1-2, pp. 17–31, 2005.spa
dc.relation.referencesC. Montella, “The Kalman Filter and Related Algorithms A Literature Review,” Research Gate, no. May, pp. 1–17, 2014.spa
dc.relation.referencesN. J. Gordon, D. J. Salmond, and A. F. Smith, “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings, Part F: Radar and Signal Processing, vol. 140, no. 2, pp. 107–113, 1993.spa
dc.relation.referencesF. Bardet and T. Chateau, “MCMC particle filter for real-time visual tracking of vehicles,” IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, pp. 539–544, 2008.spa
dc.relation.referencesT. Mauthner, M. Donoser, and H. Bischof, “Robust tracking of spatial related components,” Proceedings - International Conference on Pattern Recognition, no. January, 2008.spa
dc.relation.referencesE. Maggio and A. Cavallaro, “Hybrid particle filter and mean shift tracker,” Proc. Int. Conf. Acoustics, Speech, and Signal Processing, no. 3, pp. 221–224, 2005. [Online]. Available: http://www.eecs.qmul.ac.uk/~andrea/papers/icassp05_maggio_cavallaro.pdfspa
dc.relation.referencesK. Fukunaga and L. D. Hostetler, “The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition,” IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975.spa
dc.relation.referencesA. Bhattacharyya, “On A Measure of Divergence Between Two Statistical Populations Defined by their Probability Distributions,” Bulletin of the Calcutta Mathematical Society, vol. 35, no. 1, pp. 99–109, 1943.spa
dc.relation.referencesD. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, “Visual object tracking using adaptive correlation filters,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2544–2550, 2010.spa
dc.relation.referencesJ. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7575 LNCS, no. PART 4, pp. 702–715, 2012.spa
dc.relation.referencesJ. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-Speed Tracking with Kernelized Correlation Filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015.spa
dc.relation.referencesJ. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah, “Signature Verification Using a “Siamese” Time Delay Neural Network,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 07, no. 04, pp. 669–688, 1993.spa
dc.relation.referencesK. Chaudhuri, Y. Freund, and D. Hsu, “A parameter-free hedging algorithm,” Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference, pp. 297–305, 2009.spa
dc.relation.referencesL. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian, “Mars: A video benchmark for large-scale person re-identification,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9910 LNCS, pp. 868–884, 2016.spa
dc.relation.referencesK. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: The CLEAR MOT metrics,” EURASIP Journal on Image and Video Processing, vol. 2008, 2008.spa
dc.rightsDerechos reservados - Universidad Nacional de Colombiaspa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-SinDerivadas 4.0 Internacionalspa
dc.rights.spaAcceso abiertospa
dc.rights.urihttp://creativecommons.org/licenses/by-nd/4.0/spa
dc.subject.ddc620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingenieríaspa
dc.subject.proposalTraffic discriminationeng
dc.subject.proposalDiscriminación de tráficospa
dc.subject.proposalDetección de vehículosspa
dc.subject.proposalVehicle detectioneng
dc.subject.proposalVehicle recognitioneng
dc.subject.proposalClasificación de vehículosspa
dc.subject.proposalSeguimiento de vehículosspa
dc.subject.proposalVehicle trackingeng
dc.subject.proposalAprendizaje profundospa
dc.subject.proposalMultiple Object Trackingeng
dc.subject.proposalRedes neuronales convolucionalesspa
dc.subject.proposalDeep Learningeng
dc.subject.proposalSistemas inteligentes de transportespa
dc.subject.proposalConvolutional neural networkeng
dc.subject.proposalIntelligent transport systemseng
dc.titleDiscriminación automática de tráfico urbano con técnicas de visión por computadorspa
dc.title.alternativeAutomatic urban traffic discrimination using computer vision techniquesspa
dc.typeTrabajo de grado - Maestríaspa
dc.type.coarhttp://purl.org/coar/resource_type/c_bdccspa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/masterThesisspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Files

Original bundle

Name: 1067906800.2020.pdf
Size: 14.25 MB
Format: Adobe Portable Document Format
Description: Master's Thesis in Engineering - Industrial Automation

License bundle

Name: license.txt
Size: 3.87 KB
Format: Item-specific license agreed upon to submission