dc.rights.license | Atribución-SinDerivadas 4.0 Internacional |
dc.contributor.advisor | Espinosa Oviedo, Jairo José |
dc.contributor.advisor | Espinosa Oviedo, Jorge Ernesto |
dc.contributor.author | Arroyo Jiménez, José Nicolás |
dc.date.accessioned | 2021-02-26T14:06:17Z |
dc.date.available | 2021-02-26T14:06:17Z |
dc.date.issued | 2020-09-25 |
dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/79313 |
dc.description.abstract | La situación de movilidad actual en las ciudades conlleva problemas como alta congestión vehicular, contaminación ambiental y requerimientos de infraestructura. El uso de sistemas inteligentes de transporte basados en video puede mitigar estos efectos a un costo relativamente bajo. Para lograrlo, el tráfico urbano debe ser discriminado a partir de tomas de video. La discriminación de tráfico urbano se divide en tres tareas: detección, clasificación y seguimiento. Primero, se presenta un conjunto de datos que contiene anotaciones de distintos tipos de tráfico en tres escenarios. A continuación, se revisan técnicas de detección basadas en características de movimiento y de apariencia, y se muestran los resultados de un experimento de detección de tráfico con ellas. Luego, se revisan métodos de clasificación con algoritmos para la extracción de características y se muestran los resultados de un experimento con histogramas de gradientes orientados y máquinas de vectores de soporte. Después, se trata el tema del aprendizaje profundo para las tareas de detección y clasificación y se usan dos algoritmos en experimentos para detectar y clasificar distintos tipos de tráfico en diferentes escenarios urbanos. Finalmente, se revisan técnicas de seguimiento de múltiples objetos y se experimenta con los resultados de detección obtenidos con los algoritmos de aprendizaje profundo. |
dc.description.abstract | The current transportation situation in cities carries issues such as traffic jams, environmental pollution and infrastructure requirements. The use of intelligent transport systems based on video could mitigate these negative impacts at a relatively low cost. To accomplish this, urban traffic must be discriminated from video captures. Urban traffic discrimination is divided into three tasks: detection, classification and tracking. First, a dataset with annotations of different types of vehicles in three different scenarios is detailed. Next, detection techniques based on motion features and on appearance features are reviewed, and the results of an experiment using the motion-based detection techniques are shown. Then, classification methods that use feature extraction algorithms and classifiers are reviewed, and the results of an experiment that used histograms of oriented gradients and support vector machines for classification are shown. Afterwards, deep learning techniques for detection and classification are examined and evaluated with experiments using two algorithms on different urban scenarios. Finally, multiple object tracking techniques are reviewed and tested on the detections obtained with the deep learning algorithms, and the results are shown. |
dc.format.extent | 139 |
dc.format.mimetype | application/pdf |
dc.language.iso | eng |
dc.rights | Derechos reservados - Universidad Nacional de Colombia |
dc.rights.uri | http://creativecommons.org/licenses/by-nd/4.0/ |
dc.subject.ddc | 620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería |
dc.title | Discriminación automática de tráfico urbano con técnicas de visión por computador |
dc.title.alternative | Automatic urban traffic discrimination using computer vision techniques |
dc.type | Otro |
dc.rights.spa | Acceso abierto |
dc.description.additional | Línea de Investigación: Inteligencia artificial, Visión por computador |
dc.type.driver | info:eu-repo/semantics/other |
dc.type.version | info:eu-repo/semantics/acceptedVersion |
dc.publisher.program | Medellín - Minas - Maestría en Ingeniería - Automatización Industrial |
dc.contributor.researchgroup | Grupo de Automática de la Universidad Nacional GAUNAL |
dc.description.degreelevel | Maestría |
dc.publisher.department | Departamento de Ingeniería Eléctrica y Automática |
dc.publisher.branch | Universidad Nacional de Colombia - Sede Medellín |
dc.relation.references | K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, pp. 2980–2988, 2017. |
dc.relation.references | T. Kadir and M. Brady, “Saliency, Scale and Image Description,” International Journal of Computer Vision, vol. 45, no. 2, pp. 83–105, 2001. |
dc.relation.references | J. W. Woo, W. Lee, and M. Lee, “A traffic surveillance system using dynamic saliency map and SVM boosting,” International Journal of Control, Automation and Systems, vol. 8, no. 5, pp. 948–956, 2010. |
dc.relation.references | G. Lee and R. Mallipeddi, “A Genetic Algorithm-Based Moving Object Detection for Real-time Traffic Surveillance,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1619–1622, 2015. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7072530 |
dc.relation.references | L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle Detection Using Normalized Color and Edge Map,” IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 850–864, 2007. |
dc.relation.references | C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995. |
dc.relation.references | K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980. |
dc.relation.references | V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” pp. 1–31, 2016. [Online]. Available: http://arxiv.org/abs/1603.07285 |
dc.relation.references | N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014. |
dc.relation.references | Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, 1998. [Online]. Available: http://ieeexplore.ieee.org/document/726791/#full-text-section |
dc.relation.references | K. Chellapilla, S. Puri, and P. Simard, “High Performance Convolutional Neural Networks for Document Processing,” in Tenth International Workshop on Frontiers in Handwriting Recognition, 2006. |
dc.relation.references | A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, pp. 1–9, 2012. |
dc.relation.references | M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8689 LNCS, no. PART 1, pp. 818–833, 2014. |
dc.relation.references | K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 770–778, 2016. |
dc.relation.references | R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014. |
dc.relation.references | R. Girshick, “Fast R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2015 Inter, pp. 1440–1448, 2015. |
dc.relation.references | W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016. |
dc.relation.references | J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 779–788, 2016. |
dc.relation.references | S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, vol. 2015-January, 2015, pp. 91–99. [Online]. Available: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84960980241&partnerID=40&md5=18aaa500235b11fb99e953f8b227f46d |
dc.relation.references | R. Rad and M. Jamzad, “Real time classification and tracking of multiple vehicles in highways,” Pattern Recognition Letters, vol. 26, no. 10, pp. 1597–1607, 2005. |
dc.relation.references | K. Zhang, L. Zhang, Q. Liu, D. Zhang, and M. H. Yang, “Fast visual tracking via dense spatio-temporal context learning,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8693 LNCS, no. PART 5, pp. 127–141, 2014. |
dc.relation.references | L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, “Fully-convolutional siamese networks for object tracking,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9914 LNCS, pp. 850–865, 2016. |
dc.relation.references | Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, and M. H. Yang, “Hedged Deep Tracking,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 4303–4311, 2016. |
dc.relation.references | S. Yun, J. Choi, Y. Yoo, K. Yun, and J. Y. Choi, “Action-decision networks for visual tracking with deep reinforcement learning,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 1349–1358, 2017. |
dc.relation.references | J. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P. H. Torr, “End-to-end representation learning for Correlation Filter based tracking,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 5000–5008, 2017. |
dc.relation.references | N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” Proceedings - International Conference on Image Processing, ICIP, vol. 2017-September, pp. 3645–3649, 2018. |
dc.relation.references | B. Tian, B. T. Morris, M. Tang, Y. Liu, Y. Yao, C. Gou, D. Shen, and S. Tang, “Hierarchical and Networked Vehicle Surveillance in ITS: A Survey,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 1, pp. 25–48, 2017. |
dc.relation.references | N. Buch, S. A. Velastin, and J. Orwell, “A review of computer vision techniques for the analysis of urban traffic,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 3, pp. 920–939, 2011. |
dc.relation.references | S. Sivaraman and M. M. Trivedi, “A general active-learning framework for on-road vehicle recognition and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 267–276, 2010. |
dc.relation.references | W. Zhang, Q. M. Wu, X. Yang, and X. Fang, “Multilevel framework to detect and handle vehicle occlusion,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 161–174, 2008. |
dc.relation.references | D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004. |
dc.relation.references | A. Ng and M. I. Jordan, “On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes,” Proceedings of Advances in Neural Information Processing Systems, vol. 28, no. 3, pp. 841–848, 2001. |
dc.relation.references | Y. Freund, “An adaptive version of the boost by majority algorithm,” in Machine Learning, ser. COLT '99, vol. 43, no. 3. New York, NY, USA: ACM, 2001, pp. 293–318. [Online]. Available: http://doi.acm.org/10.1145/307400.307419 |
dc.relation.references | G. Cybenko, “Approximation by Superpositions of a Sigmoidal Function,” Mathematics of Control, Signals, and Systems, vol. 2, pp. 303–314, 1989. |
dc.relation.references | N. S. Altman, “An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression,” The American Statistician, vol. 46, no. 3, pp. 175–185, 1992. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/00031305.1992.10475879 |
dc.relation.references | D. C. Cireşan, U. Meier, J. Masci, and L. M. Gambardella, “Flexible, High Performance Convolutional Neural Networks for Image Classification,” Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pp. 1237–1242, 2011. [Online]. Available: https://www.ijcai.org/Proceedings/11/Papers/210.pdf |
dc.relation.references | R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960. |
dc.relation.references | J. Lou, H. Yang, W. Hu, and T. Tan, “Visual Vehicle Tracking Using An Improved EKF,” pp. 23–25, 2002. |
dc.relation.references | M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking, vol. 18, no. 2, pp. 723–737, 2007. |
dc.relation.references | J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” CoRR, vol. abs/1804.02767, 2018. [Online]. Available: http://arxiv.org/abs/1804.02767 |
dc.relation.references | N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. I, no. 3, 2005, pp. 886–893. |
dc.relation.references | V. Mariano, J. Min, J.-H. Park, R. Kasturi, D. Mihalcik, H. Li, D. Doermann, and T. Drayer, “Performance evaluation of object detection algorithms,” pp. 965–969, 2003. |
dc.relation.references | J. E. Espinosa, S. A. Velastin, and J. W. Branch, “Motorcycle detection and classification in urban Scenarios using a model based on Faster R-CNN,” arXiv:1808.02299 [cs], Aug. 2018. [Online]. Available: http://arxiv.org/abs/1808.02299 |
dc.relation.references | M. Piccardi, “Background subtraction techniques: A review,” in Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, vol. 4, 2004, pp. 3099–3104. |
dc.relation.references | D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Towards Robust Automatic Traffic Scene Analysis in Real-Time,” pp. 3776–3781, 1994. |
dc.relation.references | C. Stauffer and W. Grimson, “Adaptive background mixture models for real-time tracking,” pp. 246–252, 1999. |
dc.relation.references | N. Friedman and S. Russell, “Image Segmentation in Video Sequences: A Probabilistic Approach,” Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence, pp. 175–181, 1997. [Online]. Available: http://arxiv.org/abs/1302.1539 |
dc.relation.references | A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum Likelihood from Incomplete Data via the EM Algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977. [Online]. Available: http://www.jstor.org/stable/2984875 |
dc.relation.references | A. Elgammal, D. Harwood, and L. Davis, “Non-parametric Model for Background Subtraction,” 2000, pp. 751–767. |
dc.relation.references | Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” Proceedings - International Conference on Pattern Recognition, vol. 2, pp. 28–31, 2004. |
dc.relation.references | B. Han, D. Comaniciu, and L. Davis, “Sequential kernel density approximation through mode propagation: Applications to background modeling,” vol. 32, no. 12, p. 16, 2004. |
dc.relation.references | M. Seki, T. Wada, H. Fujiwara, and K. Sumi, “Background subtraction based on co-occurrence of image variations,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2003. |
dc.relation.references | N. Oliver, B. Rosario, and A. Pentland, “A Bayesian computer vision system for modeling human interactions,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 1542, pp. 255–272, 1999. |
dc.relation.references | I. Sobel and G. Feldman, “A 3x3 isotropic gradient operator for image processing,” in R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, pp. 271–272, 1973. |
dc.relation.references | R. Cucchiara, C. Grana, M. Piccardi, A. Prati, and S. Sirotti, “Improving shadow suppression in moving object detection with HSV color information,” pp. 334–339, 2002. |
dc.relation.references | D. Comaniciu and V. Ramesh, “Real-Time Tracking of Non-Rigid Objects Using Mean Shift,” Computer Vision and Pattern Recognition, IEEE Conference on, vol. 2, no. 7, pp. 142–149, 2000. |
dc.relation.references | A. Kuehnle, “Symmetry-based recognition of vehicle rears,” Pattern Recognition Letters, vol. 12, no. 4, pp. 249–258, 1991. |
dc.relation.references | W. von Seelen, C. Curio, J. Gayko, U. Handmann, and T. Kalinke, “Scene analysis and organization of behavior in driver assistance systems,” pp. 524–527, 2002. |
dc.relation.references | R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, 1973. [Online]. Available: http://www.citeulike.org/group/1938/article/1055187 |
dc.relation.references | J. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986. |
dc.relation.references | C. Goerick, D. Noll, and M. Werner, “Artificial neural networks in real-time car detection and tracking applications,” Pattern Recognition Letters, vol. 17, no. 4, pp. 335–343, 1996. |
dc.relation.references | H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, “Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2011. |
dc.relation.references | C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proceedings of the Alvey Vision Conference 1988, pp. 23.1–23.6, 1988. [Online]. Available: http://www.bmva.org/bmvc/1988/avc-88-023.html |
dc.relation.references | N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, no. 3, pp. 886–893, 2005. [Online]. Available: http://eprints.pascal-network.org/archive/00000802/ |
dc.relation.references | T. Kalinke, C. Tzomakas, and W. von Seelen, “A texture-based object detection and an adaptive model-based classification,” IEEE Intelligent Vehicles Symposium, pp. 341–346, 1998. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.526 |
dc.relation.references | R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural Features for Image Classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, 1973. |
dc.relation.references | H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008. |
dc.relation.references | T. Moranduzzo and F. Melgani, “Automatic car counting method for unmanned aerial vehicle images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 3, pp. 1635–1647, 2014. |
dc.relation.references | J. W. Hsieh, L. C. Chen, and D. Y. Chen, “Symmetrical SURF and its applications to vehicle detection and vehicle make and model recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 1, pp. 6–20, 2014. |
dc.relation.references | E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 3951 LNCS, pp. 430–443, 2006. |
dc.relation.references | M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6314 LNCS, no. PART 4, pp. 778–792, 2010. |
dc.relation.references | E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2564–2571, 2011. |
dc.relation.references | J. L. Buliali, C. Fatichah, D. Herumurti, D. Fenomena, H. Widyastuti, and M. Wallace, “Vehicle detection on images from satellite using oriented FAST and rotated BRIEF,” Journal of Engineering and Applied Sciences, vol. 12, no. 17, pp. 4500–4503, 2017. |
dc.relation.references | S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, “Traffic monitoring and accident detection at intersections,” IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, vol. 1, no. 2, pp. 703–708, 1999. |
dc.relation.references | C. C. R. Wang and J. J. J. Lien, “Automatic vehicle detection using local features - A statistical approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 83–96, 2008. |
dc.relation.references | F. Leymarie and M. D. Levine, “Simulating the grassfire transform using an active contour model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 1, pp. 56–75, 1992. |
dc.relation.references | S. Baluja, “Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning,” Ph.D. dissertation, Carnegie Mellon University, 1994. |
dc.relation.references | P. Jaccard, “Étude comparative de la distribution florale dans une portion des Alpes et des Jura,” Bulletin de la Société Vaudoise des Sciences Naturelles, vol. 37, pp. 547–579, 1901. |
dc.relation.references | L. Mason, J. Baxter, P. Bartlett, and M. Frean, “Boosting algorithms as gradient descent,” Advances in Neural Information Processing Systems, pp. 512–518, 2000. |
dc.relation.references | Q. Zhu, S. Avidan, M. C. Yeh, and K. T. Cheng, “Fast human detection using a cascade of histograms of oriented gradients,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1491–1498, 2006. |
dc.relation.references | M. Pedersoli, J. Gonzàlez, and J. J. Villanueva, “High-speed human detection using a multiresolution cascade of histograms of oriented gradients,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5524 LNCS, pp. 48–55, 2009. |
dc.relation.references | B. Zhang, “Reliable classification of vehicle types based on cascade classifier ensembles,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 1, pp. 322–332, 2013. |
dc.relation.references | N. Buch, J. Orwell, and S. A. Velastin, “3D extended histogram of oriented gradients (3DHOG) for classification of road users in urban scenes,” British Machine Vision Conference, BMVC 2009 - Proceedings, 2009. |
dc.relation.references | T. Ojala, M. Pietikäinen, and D. Harwood, “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions,” Proceedings - International Conference on Pattern Recognition, vol. 3, pp. 582–585, 1994. |
dc.relation.references | Y. Tang, C. Zhang, R. Gu, P. Li, and B. Yang, “Vehicle detection and recognition for intelligent traffic surveillance system,” Multimedia Tools and Applications, vol. 76, no. 4, pp. 5817–5832, 2017. |
dc.relation.references | O. Barkan, J. Weill, L. Wolf, and H. Aronowitz, “Fast high dimensional vector multiplication face recognition,” Proceedings of the IEEE International Conference on Computer Vision, pp. 1960–1967, 2013. |
dc.relation.references | J. Trefný and J. Matas, “Extended set of local binary patterns for rapid object detection,” Computer Vision Winter Workshop, pp. 1–7, 2010. [Online]. Available: http://cmp.felk.cvut.cz/~matas/papers/trefny-lbp-cvww10.pdf |
dc.relation.references | X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010. |
dc.relation.references | T. Bouwmans, C. Silva, C. Marghes, M. S. Zitouni, H. Bhaskar, and C. Frelicot, “On the role and the importance of features for background modeling and foreground detection,” Computer Science Review, vol. 28, pp. 26–91, 2018. |
dc.relation.references | P. Dollár, Z. Tu, P. Perona, and S. Belongie, “Integral channel features,” British Machine Vision Conference, BMVC 2009 - Proceedings, pp. 1–11, 2009. |
dc.relation.references | A. K. Jain, “Data clustering: 50 years beyond K-means,” Pattern Recognition Letters, vol. 31, no. 8, pp. 651–666, 2010. |
dc.relation.references | J. Schmidhuber, “Deep Learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85–117, 2015. |
dc.relation.references | W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biology, vol. 52, pp. 99–115, 1990. |
dc.relation.references | M. L. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry. MIT Press, 1972. [Online]. Available: https://books.google.com.co/books?id=Ow1OAQAAIAAJ |
dc.relation.references | P. J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Harvard University, 1975. [Online]. Available: https://books.google.com.co/books?id=z81XmgEACAAJ |
dc.relation.references | F. Rosenblatt, “The Perceptron - A Perceiving and Recognizing Automaton,” Report 85-460-1, Cornell Aeronautical Laboratory, 1957. |
dc.relation.references | L. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri, “Are Loss Functions All the Same?” Neural Computation, vol. 16, no. 5, pp. 1063–1076, 2004. |
dc.relation.references | J. Espinosa, S. Velastin, and J. Branch, “Motorcycle Classification in Urban Scenarios using Convolutional Neural Networks for Feature Extraction,” in 8th International Conference of Pattern Recognition Systems (ICPRS 2017). Institution of Engineering and Technology, 2017, pp. 26 (6 pp.). [Online]. Available: https://digital-library.theiet.org/content/conferences/10.1049/cp.2017.0155 |
dc.relation.references | S. Stehman, “Selecting and interpreting measures of thematic classification accuracy,” Remote Sensing of Environment, vol. 62, p. 77, 1997. |
dc.relation.references | C. Bishop, “Pattern Recognition and Machine Learning,” Journal of Electronic Imaging, vol. 16, no. 4, p. 049901, Jan. 2007. [Online]. Available: http://electronicimaging.spiedigitallibrary.org/article.aspx?doi=10.1117/1.2819119 |
dc.relation.references | Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. |
dc.relation.references | Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” pp. 541–551, 1989. |
dc.relation.references | D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning Internal Representations by Error Propagation. Morgan Kaufmann Publishers, Inc., 1988. [Online]. Available: http://dx.doi.org/10.1016/B978-1-4832-1446-7.50035-2 |
dc.relation.references | K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989. |
dc.relation.references | X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” Journal of Machine Learning Research, vol. 15, pp. 315–323, 2011. |
dc.relation.references | I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.org |
dc.relation.references | V. Nair and G. E. Hinton, “Rectified linear units improve Restricted Boltzmann machines,” ICML 2010 - Proceedings, 27th International Conference on Machine Learning, pp. 807–814, 2010. |
dc.relation.references | C. Lemaréchal, “Cauchy and the Gradient Method,” Documenta Mathematica, vol. ISMP, pp. 251–254, 2012. [Online]. Available: https://www.math.uni-bielefeld.de/documenta/vol-ismp/40_lemarechal-claude.pdf |
dc.relation.references | O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015. |
dc.relation.references | Y. LeCun, L. Bottou, G. B. Orr, and K. R. Müller, “Efficient BackProp,” pp. 9–50, 1998. |
dc.relation.references | Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio, “Identifying and attacking the saddle point problem in high-dimensional non-convex optimization,” Advances in Neural Information Processing Systems, vol. 4, pp. 2933–2941, 2014. |
dc.relation.references | J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” COLT 2010 - The 23rd Conference on Learning Theory, pp. 257–269, 2010. |
dc.relation.references | D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pp. 1–15, 2015. |
dc.relation.references | S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 32nd International Conference on Machine Learning, ICML 2015, vol. 1, pp. 448–456, 2015. |
dc.relation.references | C. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, no. 1, 2019. [Online]. Available: https://doi.org/10.1186/s40537-019-0197-0 |
dc.relation.references | J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” Advances in Neural Information Processing Systems, vol. 4, pp. 3320–3328, 2014. |
dc.relation.references | D. H. Hubel and T. N. Wiesel, “Receptive fields of single neurons in the cat's striate cortex,” Journal of Physiology, vol. 148, pp. 574–591, 1959. |
dc.relation.references | C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 07-12-June-2015, pp. 1–9, 2015. |
dc.relation.references | K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pp. 1–14, 2015. |
dc.relation.references | P. Soviany and R. T. Ionescu, “Optimizing the trade-off between single-stage and two-stage deep object detectors using image difficulty prediction,” Proceedings - 2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2018, pp. 209–214, 2018. |
dc.relation.references | J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, 2013. |
dc.relation.references | M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html. |
dc.relation.references | P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010. |
dc.relation.references | K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” CoRR, vol. abs/1406.4729, 2014. [Online]. Available: http://arxiv.org/abs/1406.4729 |
dc.relation.references | B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” CoRR, vol. abs/1505.00853, 2015. [Online]. Available: http://arxiv.org/abs/1505.00853 |
dc.relation.references | J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 6517–6525, 2017. |
dc.relation.references | M. Zhu, “Recall, precision and average precision,” Department of Statistics and Actuarial Science, …, pp. 1–11, 2004. |
dc.relation.references | M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010. |
dc.relation.references | T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” CoRR, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312 |
dc.relation.references | A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Computing Surveys, vol. 38, no. 4, 2006. |
dc.relation.references | D. H. Ballard and C. M. Brown, Computer Vision. Englewood Cliffs, N.J.: Prentice-Hall, 1982. |
dc.relation.references | S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, “Detection and Classification of Vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37–47, 2002. |
dc.relation.references | K. Nummiaro, E. Koller-Meier, and L. Van Gool, “An adaptive color-based particle filter,” Image and Vision Computing, vol. 21, no. 1, pp. 99–110, 2003. |
dc.relation.references | K. Mu, F. Hui, and X. Zhao, “Multiple vehicle detection and tracking in highway traffic surveillance video based on SIFT feature matching,” Journal of Information Processing Systems, vol. 12, pp. 183–195, 2016. |
dc.relation.references | B. T. Morris and M. M. Trivedi, “Learning, modeling, and classification of vehicle track patterns from live video,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 425–437, 2008. |
dc.relation.references | A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” Proceedings - International Conference on Image Processing, ICIP, vol. 2016-August, pp. 3464–3468, 2016. |
dc.relation.references | H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955. |
dc.relation.references | S. Messelodi, C. M. Modena, and M. Zanin, “A computer vision system for the detection and classification of vehicles at urban road intersections,” Pattern Analysis and Applications, vol. 8, no. 1-2, pp. 17–31, 2005. |
dc.relation.references | C. Montella, “The Kalman Filter and Related Algorithms: A Literature Review,” ResearchGate, pp. 1–17, 2014. |
dc.relation.references | N. J. Gordon, D. J. Salmond, and A. F. Smith, “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings, Part F: Radar and Signal Processing, vol. 140, no. 2, pp. 107–113, 1993. |
dc.relation.references | F. Bardet and T. Chateau, “MCMC particle filter for real-time visual tracking of vehicles,” IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, pp. 539–544, 2008. |
dc.relation.references | T. Mauthner, M. Donoser, and H. Bischof, “Robust tracking of spatial related components,” Proceedings - International Conference on Pattern Recognition, 2008. |
dc.relation.references | E. Maggio and A. Cavallaro, “Hybrid particle filter and mean shift tracker,” Proc. Int. Conf. Acoustics, Speech, and Signal Processing, no. 3, pp. 221–224, 2005. [Online]. Available: http://www.eecs.qmul.ac.uk/~andrea/papers/icassp05_maggio_cavallaro.pdf |
dc.relation.references | K. Fukunaga and L. D. Hostetler, “The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition,” IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 32–40, 1975. |
dc.relation.references | A. Bhattacharyya, “On a Measure of Divergence Between Two Statistical Populations Defined by their Probability Distributions,” Bulletin of the Calcutta Mathematical Society, vol. 35, no. 1, pp. 99–109, 1943. |
dc.relation.references | D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, “Visual object tracking using adaptive correlation filters,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2544–2550, 2010. |
dc.relation.references | J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7575 LNCS, no. PART 4, pp. 702–715, 2012. |
dc.relation.references | J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-Speed Tracking with Kernelized Correlation Filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015. |
dc.relation.references | J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah, “Signature Verification Using a 'Siamese' Time Delay Neural Network,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 07, no. 04, pp. 669–688, 1993. |
dc.relation.references | K. Chaudhuri, Y. Freund, and D. Hsu, “A parameter-free hedging algorithm,” Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference, pp. 297–305, 2009. |
dc.relation.references | L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian, “MARS: A video benchmark for large-scale person re-identification,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9910 LNCS, pp. 868–884, 2016. |
dc.relation.references | K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: The CLEAR MOT metrics,” Eurasip Journal on Image and Video Processing, vol. 2008, 2008. |
dc.rights.accessrights | info:eu-repo/semantics/openAccess |
dc.subject.proposal | Traffic discrimination |
dc.subject.proposal | Discriminación de tráfico |
dc.subject.proposal | Detección de vehículos |
dc.subject.proposal | Vehicle detection |
dc.subject.proposal | Vehicle recognition |
dc.subject.proposal | Clasificación de vehículos |
dc.subject.proposal | Seguimiento de vehículos |
dc.subject.proposal | Vehicle tracking |
dc.subject.proposal | Aprendizaje profundo |
dc.subject.proposal | Multiple Object Tracking |
dc.subject.proposal | Redes neuronales convolucionales |
dc.subject.proposal | Deep Learning |
dc.subject.proposal | Sistemas inteligentes de transporte |
dc.subject.proposal | Convolutional neural network |
dc.subject.proposal | Intelligent transport systems |
dc.type.coar | http://purl.org/coar/resource_type/c_1843 |
dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa |
dc.type.content | Text |
oaire.accessrights | http://purl.org/coar/access_right/c_abf2 |