Modelo de aprendizaje profundo para cuantificar el daño causado por el Glaucoma en el nervio óptico

dc.contributor.advisorPerdomo Charry, Oscar Julian
dc.contributor.advisorGonzalez Osorio, Fabio Augusto
dc.contributor.authorBeltrán Barrera, Lillian Daniela
dc.contributor.researchgroupMindlabspa
dc.date.accessioned2023-05-25T15:38:35Z
dc.date.available2023-05-25T15:38:35Z
dc.date.issued2023-04
dc.descriptionilustraciones, fotografías, gráficasspa
dc.description.abstractEl glaucoma es una de las enfermedades de mayor prevalencia y gravedad en el mundo; se caracteriza por provocar una pérdida gradual de la visión periférica que, si no se trata a tiempo, puede ser irreversible y conducir a la pérdida total de la visión. Con el objetivo de facilitar la detección temprana de esta enfermedad, se han propuesto diversos modelos basados en aprendizaje profundo y redes neuronales convolucionales que permiten un diagnóstico automatizado. A pesar de su utilidad, estos modelos presentan algunas limitaciones, como la evaluación del ancho del borde neurorretiniano solamente de forma vertical y la asignación de una clasificación binaria para denotar la presencia o ausencia de la enfermedad, lo que dificulta la identificación de su estadio y del avance de la enfermedad en múltiples direcciones. Por tal motivo, este trabajo presenta un enfoque basado en aprendizaje profundo que toma como referencia la escala DDLS (Disc Damage Likelihood Scale) para detectar y conocer el avance del glaucoma en los pacientes. Para ello, se utilizó como insumo el conjunto de imágenes REFUGE (Retinal Fundus Glaucoma Challenge), identificando la región de interés (ROI, por sus siglas en inglés) mediante el algoritmo de detección de objetos YOLO (You Only Look Once). Después de esto, se procedió a realizar la medición del RDR (Rim-to-Disc Ratio) en cada grado en las imágenes segmentadas utilizando dos modelos previamente entrenados: uno para el disco y otro para la copa ocular. De esta manera, se logró asignar nuevas etiquetas a las imágenes con base en la escala DDLS. Luego, se entrenó un modelo base con las etiquetas originales, el cual se comparó con tres modelos entrenados mediante aprendizaje por transferencia con las etiquetas construidas. Estos modelos utilizaron diferentes técnicas para el procesamiento de las imágenes, incluyendo la conversión de coordenadas cartesianas a polares y el recorte de las imágenes en estéreo centradas en el nervio óptico a una dimensión de 224 × 224 píxeles para contar con mayor información de la imagen. Los mejores resultados fueron obtenidos por el modelo entrenado con las imágenes convertidas a coordenadas polares. (Texto tomado de la fuente)spa
dc.description.abstractGlaucoma is one of the most prevalent and severe diseases in the world, characterized by a gradual loss of peripheral vision that, if not treated in time, can be irreversible and lead to total vision loss. To facilitate early detection of this disease, various models based on deep learning and convolutional neural networks have been proposed that allow automated diagnosis. Despite their usefulness, these models present some limitations, such as evaluating the neuroretinal rim width only along the vertical axis and assigning a binary classification to denote the presence or absence of the disease, which makes it difficult to identify its stage and its progression in multiple directions. For this reason, this work presents a deep learning-based approach that uses the Disc Damage Likelihood Scale (DDLS) to detect and track the progression of glaucoma in patients. For this purpose, the REFUGE (Retinal Fundus Glaucoma Challenge) image set was used as input, identifying the region of interest (ROI) with the YOLO (You Only Look Once) object detection algorithm. The RDR (Rim-to-Disc Ratio) was then measured at each degree in the segmented images using two previously trained models: one for the optic disc and one for the optic cup. In this way, new labels were assigned to the images based on the DDLS scale. A baseline model was then trained with the original labels and compared with three models trained by transfer learning on the constructed labels. These models used different image-processing techniques, including the conversion from Cartesian to polar coordinates and the cropping of stereo images centered on the optic nerve to a size of 224 × 224 pixels to retain more information from the image. The best results were obtained by the model trained with the images converted to polar coordinates.eng
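The per-degree RDR measurement and DDLS-based labelling described in the abstract can be illustrated with a short sketch. This is not the thesis code: the function names, the centroid-based ray casting, and the staging thresholds are assumptions made only for illustration; the thesis derives its labels from the clinical DDLS definition and its own segmentation models.

import numpy as np

def radial_profiles(disc_mask, cup_mask, n_angles=360):
    # Per-degree optic-disc and optic-cup radii (in pixels), measured by casting
    # a ray from the disc centroid at each angle and keeping the furthest sample
    # that still falls inside each binary mask (boolean H x W arrays).
    h, w = disc_mask.shape
    ys, xs = np.nonzero(disc_mask)
    cy, cx = ys.mean(), xs.mean()
    radii = np.arange(int(np.hypot(h, w)))
    disc_r = np.zeros(n_angles)
    cup_r = np.zeros(n_angles)
    for deg in range(n_angles):
        t = np.deg2rad(deg)
        y = np.clip(np.round(cy + radii * np.sin(t)).astype(int), 0, h - 1)
        x = np.clip(np.round(cx + radii * np.cos(t)).astype(int), 0, w - 1)
        hit_disc = np.nonzero(disc_mask[y, x])[0]
        hit_cup = np.nonzero(cup_mask[y, x])[0]
        disc_r[deg] = hit_disc.max() if hit_disc.size else 0.0
        cup_r[deg] = hit_cup.max() if hit_cup.size else 0.0
    return disc_r, cup_r

def narrowest_rdr(disc_r, cup_r):
    # Rim width per degree, divided by the disc diameter along the same meridian
    # (radius at deg plus radius at deg + 180), keeping the narrowest value.
    rim = np.maximum(disc_r - cup_r, 0.0)
    diameter = disc_r + np.roll(disc_r, len(disc_r) // 2)
    rdr = rim / np.maximum(diameter, 1e-6)
    return rdr.min()

def ddls_style_label(min_rdr, thresholds=(0.4, 0.3, 0.2, 0.1)):
    # Coarse stage from the narrowest RDR. These cut-offs are placeholders:
    # the clinical DDLS also depends on disc size and, for advanced stages,
    # on the angular extent over which the rim is absent.
    for stage, th in enumerate(thresholds, start=1):
        if min_rdr >= th:
            return stage
    return len(thresholds) + 1

Given disc and cup masks predicted by the two segmentation models, a label would then be obtained as ddls_style_label(narrowest_rdr(*radial_profiles(disc, cup))). The polar-coordinate input variant mentioned in the abstract corresponds to resampling the ROI so that each image column covers one such angle (for example with OpenCV's warpPolar).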
dc.description.degreelevelMaestríaspa
dc.description.degreenameMagíster en Ingeniería - Ingeniería de Sistemas y Computaciónspa
dc.format.extentxiv, 59 páginasspa
dc.format.mimetypeapplication/pdfspa
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/83865
dc.language.isospaspa
dc.publisherUniversidad Nacional de Colombiaspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Bogotáspa
dc.publisher.facultyFacultad de Ingenieríaspa
dc.publisher.placeBogotá, Colombiaspa
dc.publisher.programBogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computaciónspa
dc.relation.references[25] M. N. Bajwa, G. A. P. Singh, W. Neumeier, M. I. Malik, A. Dengel, and S. Ahmed, “G1020: A benchmark retinal fundus image dataset for Computer-Aided Glaucoma Detection,” arXiv preprint, 2020.spa
dc.relation.references[26] E. L. Mayro, M. Wang, T. Elze, and L. R. Pasquale, “The impact of artificial intelligence in the diagnosis and management of glaucoma,” Eye (Basingstoke), vol. 34, no. 1, 2020. [Online]. Available: http://dx.doi.org/10.1038/s41433-019-0577-xspa
dc.relation.references[27] N. Mojab, V. Noroozi, P. S. Yu, and J. A. Hallak, “Deep multi-task learning for interpretable glaucoma detection,” Proceedings - 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science, IRI 2019, pp. 167–174, 2019.spa
dc.relation.references[28] J. Martins, J. S. Cardoso, and F. Soares, “Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices,” Computer Methods and Programs in Biomedicine, vol. 192, 2020.spa
dc.relation.references[29] U. Raghavendra, H. Fujita, S. V. Bhandary, A. Gudigar, J. H. Tan, and U. R. Acharya, “Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images,” Information Sciences, vol. 441, pp. 41–49, 2018. [Online]. Available: https://doi.org/10.1016/j.ins.2018.01.051spa
dc.relation.references[30] A. Thakur, M. Goldbaum, and S. Yousefi, “Predicting Glaucoma before Onset Using Deep Learning,” Ophthalmology. Glaucoma, vol. 3, no. 4, pp. 262–268, 2020.spa
dc.relation.references[31] V. G. Edupuganti, A. Chawla, and A. Kale, “Automatic optic disk and cup segmentation of fundus images using deep learning,” Proceedings - International Conference on Image Processing, ICIP, pp. 2227–2231, 2018spa
dc.relation.references[32] J. B. Jonas, A. Bergua, P. Schmitz-Valckenberg, K. I. Papastathopoulos, and W. M. Budde, “Ranking of optic disc variables for detection of glaucomatous optic nerve damage,” Investigative Ophthalmology & Visual Science, vol. 41, no. 7, pp. 1764–1773, 2000.spa
dc.relation.references[33] A. Pal, M. R. Moorthy, and A. Shahina, “G-Eyenet: A Convolutional Autoencoding Classifier Framework for the Detection of Glaucoma from Retinal Fundus Images,” Proceedings - International Conference on Image Processing, ICIP, pp. 2775–2779, 2018.spa
dc.relation.references[34] Z. Xiao, X. Zhang, L. Geng, F. Zhang, J. Wu, and Y. Liu, “Research on the method of color fundus image optic cup segmentation based on deep learning,” Symmetry, vol. 11, no. 7, 2019.spa
dc.relation.references[35] S. Yu, D. Xiao, S. Frost, and Y. Kanagasingam, “Robust optic disc and cup segmentation with deep learning for glaucoma detection,” Computerized Medical Imaging and Graphics, vol. 74, pp. 61–71, 2019. [Online]. Available: https://doi.org/10.1016/j.compmedimag.2019.02.005spa
dc.relation.references[36] G. L. Spaeth, J. Henderer, C. Liu, M. Kesen, U. Altangerel, A. Bayer, L. J. Katz, J. Myers, D. Rhee, and W. Steinmann, “The disc damage likelihood scale: reproducibility of a new method of estimating the amount of optic nerve damage caused by glaucoma.” Transactions of the American Ophthalmological Society, vol. 100, p. 181, 2002.spa
dc.relation.references[37] A. C. Thompson, A. A. Jammal, S. I. Berchuck, E. B. Mariottoni, and F. A. Medeiros, “Assessment of a Segmentation-Free Deep Learning Algorithm for Diagnosing Glaucoma from Optical Coherence Tomography Scans,” JAMA Ophthalmology, vol. 138, no. 4, pp. 333–339, 2020.spa
dc.relation.references[38] I. El Naqa and M. J. Murphy, “What is machine learning?” in Machine Learning in Radiation Oncology. Springer, 2015, pp. 3–11.spa
dc.relation.references[39] A. R. Pathak, M. Pandey, and S. Rautaray, “Deep learning approaches for detecting objects from images: a review,” Progress in Computing, Analytics and Networking, pp. 491–499, 2018.spa
dc.relation.references[40] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.spa
dc.relation.references[41] X. Wang, M. Yang, S. Zhu, and Y. Lin, “Regionlets for generic object detection,” in Proceedings of the IEEE international conference on computer vision, 2013, pp. 17–24.spa
dc.relation.references[42] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440–1448.spa
dc.relation.references[43] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real- time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.spa
dc.relation.references[44] P. Tyagi, T. Singh, R. Nayar, and S. Kumar, “Performance comparison and analysis of medical image segmentation techniques,” in 2018 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC). IEEE, 2018, pp. 1–6.spa
dc.relation.references[45] R. C. Gonzalez, Digital image processing. Pearson Education India, 2009.spa
dc.relation.references[46] W.-X. Kang, Q.-Q. Yang, and R.-P. Liang, “The comparative research on image segmentation algorithms,” in 2009 First international workshop on education technology and computer science, vol. 2. IEEE, 2009, pp. 703–707.spa
dc.relation.references[47] B. A. Skourt, A. El Hassani, and A. Majda, “Lung ct image segmentation using deep neural networks,” Procedia Computer Science, vol. 127, pp. 109–113, 2018.spa
dc.relation.references[48] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.spa
dc.relation.references[49] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan et al., “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.spa
dc.relation.references[50] J. Ding, B. Chen, H. Liu, and M. Huang, “Convolutional neural network with data augmentation for sar target recognition,” IEEE Geoscience and remote sensing letters, vol. 13, no. 3, pp. 364–368, 2016.spa
dc.relation.references[51] T. Rahman, M. E. Chowdhury, A. Khandakar, K. R. Islam, K. F. Islam, Z. B. Mahbub, M. A. Kadir, and S. Kashem, “Transfer learning with deep convolutional neural network (cnn) for pneumonia detection using chest x-ray,” Applied Sciences, vol. 10, no. 9, p. 3233, 2020.spa
dc.relation.references[52] S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” Advances in neural information processing systems, vol. 30, 2017.spa
dc.relation.references[53] M. Christopher, A. Belghith, C. Bowd, J. A. Proudfoot, M. H. Goldbaum, R. N. Weinreb, C. A. Girkin, J. M. Liebmann, and L. M. Zangwill, “Performance of Deep Learning Architectures and Transfer Learning for Detecting Glaucomatous Optic Neuropathy in Fundus Photographs,” Scientific Reports, vol. 8, no. 1, pp. 1–13, 2018.spa
dc.relation.references[54] A. Saxena, A. Vyas, L. Parashar, and U. Singh, “A Glaucoma Detection using Convolutional Neural Network,” Proceedings of the International Conference on Electronics and Sustainable Communication Systems, ICESC 2020, pp. 815–820, 2020.spa
dc.relation.references[55] K. Park, J. Kim, and J. Lee, “Automatic optic nerve head localization and cup-to-disc ratio detection using state-of-the-art deep-learning architectures,” Scientific reports, vol. 10, no. 1, pp. 1–10, 2020.spa
dc.relation.references[56] N. Shibata, M. Tanito, K. Mitsuhashi, Y. Fujino, M. Matsuura, H. Murata, and R. Asaoka, “Development of a deep residual learning algorithm to screen for glaucoma from fundus photography,” Scientific Reports, vol. 8, no. 1, pp. 1–9, 2018. [Online]. Available: http://dx.doi.org/10.1038/s41598-018-33013-wspa
dc.relation.references[57] L. Li, M. Xu, H. Liu, Y. Li, X. Wang, L. Jiang, Z. Wang, X. Fan, and N. Wang, “A Large-Scale Database and a CNN Model for Attention-Based Glaucoma Detection,” IEEE Transactions on Medical Imaging, vol. 39, no. 2, pp. 413–424, 2020.spa
dc.relation.references[58] B. Al-Bander, W. Al-Nuaimy, M. A. Al-Taee, and Y. Zheng, “Automated glaucoma diagnosis using deep learning approach,” 2017 14th International Multi-Conference on Systems, Signals and Devices, SSD 2017, vol. 2017-Janua, pp. 207–210, 2017.spa
dc.relation.references[59] A. Soltani, T. Battikh, I. Jabri, and N. Lakhoua, “A new expert system based on fuzzy logic and image processing algorithms for early glaucoma diagnosis,” Biomedical Signal Processing and Control, vol. 40, pp. 366–377, 2018. [Online]. Available: http://dx.doi.org/10.1016/j.bspc.2017.10.009spa
dc.relation.references[60] S. H. Lu, K. Y. Lee, J. I. T. Chong, A. K. Lam, J. S. Lai, and D. C. Lam, “Comparison of Ocular Biomechanical Machine Learning Classifiers for Glaucoma Diagnosis,” Proceedings - 2018 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2018, pp. 2539–2543, 2019.spa
dc.relation.references[61] Z.-H. Su and J.-C. Yen, “Cup-to-disk ratio detection of optic disk in fundus images based on yolov5,” in 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). IEEE, 2021, pp. 327–329.spa
dc.relation.references[62] N. Thakur and M. Juneja, “Classification of glaucoma using hybrid features with machine learning approaches,” Biomedical Signal Processing and Control, vol. 62, p. 102137, 2020.spa
dc.relation.references[63] L. K. Singh, M. Khanna, S. Thawkar, and R. Singh, “Nature-inspired computing and machine learning based classification approach for glaucoma in retinal fundus images,” Multimedia Tools and Applications, pp. 1–49, 2023.spa
dc.relation.references[64] J. I. Orlando, H. Fu, J. B. Breda, K. van Keer, D. R. Bathula, A. Diaz-Pinto, R. Fang, P.-A. Heng, J. Kim, J. Lee et al., “Refuge challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs,” Medical image analysis, vol. 59, p. 101570, 2020.spa
dc.relation.references[65] F. J. F. Batista, T. Diaz-Aleman, J. Sigut, S. Alayon, R. Arnay, and D. Angel-Pereira, “Rim-one dl: A unified retinal image database for assessing glaucoma using deep learning,” Image Analysis & Stereology, vol. 39, no. 3, pp. 161–167, 2020.spa
dc.relation.references[66] O. Kovalyk, J. Morales-Sánchez, R. Verdú-Monedero, I. Sellés-Navarro, A. Palazón-Cabanes, and J.-L. Sancho-Gómez, “Papila: Dataset with fundus images and clinical data of both eyes of the same patient for glaucoma assessment,” Scientific Data, vol. 9, no. 1, pp. 1–12, 2022.spa
dc.relation.references[67] G. Bradski and A. Kaehler, “Opencv,” Dr. Dobb’s journal of software tools, vol. 3, p. 120, 2000.spa
dc.relation.references[68] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.spa
dc.relation.references[69] N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access, vol. 9, pp. 82031–82057, 2021.spa
dc.relation.references[70] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Thirty-first AAAI conference on artificial intelligence, 2017.spa
dc.relation.references[71] H. Zhang, M. Tian, G. Shao, J. Cheng, and J. Liu, “Target detection of forward-looking sonar image based on improved yolov5,” IEEE Access, vol. 10, pp. 18023–18034, 2022.spa
dc.relation.references[1] N. Pardiñas Barón, F. Fernandez Fernández, F. Fondevila Camps, M. Giner Muñoz, and M. Ara Báguena, “Retinopatía de gran altura,” Archivos de la Sociedad Española de Oftalmología, vol. 87, no. 10, pp. 337–339, 2012.spa
dc.relation.references[2] M. Pahlitzsch, N. Torun, C. Erb, J. Bruenner, A. K. B. Maier, J. Gonnermann, E. Bertelmann, and M. K. Klamann, “Significance of the disc damage likelihood scale objectively measured by a non-mydriatic fundus camera in preperimetric glaucoma,” Clinical Ophthalmology (Auckland, NZ), vol. 9, p. 2147, 2015.spa
dc.relation.references[3] I. Katsamenis, E. E. Karolou, A. Davradou, E. Protopapadakis, A. Doulamis, N. Doulamis, and D. Kalogeras, “Tracon: A novel dataset for real-time traffic cones detection using deep learning,” arXiv preprint arXiv:2205.11830, 2022.spa
dc.relation.references[4] J. D. Henderer, “Disc damage likelihood scale,” British Journal of Ophthalmology, vol. 90, no. 4, pp. 395–396, 2006.spa
dc.relation.references[5] D. J. Smits, T. Elze, H. Wang, and L. R. Pasquale, “Machine Learning in the Detection of the Glaucomatous Disc and Visual Field,” Seminars in Ophthalmology, vol. 34, no. 4, pp. 232–242, 2019. [Online]. Available: https://doi.org/10.1080/08820538.2019.1620801spa
dc.relation.references[6] F. Li, L. Yan, Y. Wang, J. Shi, H. Chen, X. Zhang, M. Jiang, Z. Wu, and K. Zhou, “Deep learning-based automated detection of glaucomatous optic neuropathy on color fundus photographs,” Graefe’s Archive for Clinical and Experimental Ophthalmology, vol. 258, no. 4, pp. 851–867, 2020.spa
dc.relation.references[7] S. Borwankar, R. Sen, and B. Kakani, “Improved Glaucoma Diagnosis Using Deep Learning,” Proceedings of CONECCT 2020 - 6th IEEE International Conference on Electronics, Computing and Communication Technologies, pp. 2–5, 2020.spa
dc.relation.references[8] M. S. Haleem, L. Han, J. van Hemert, and B. Li, “Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: A review,” Computerized Medical Imaging and Graphics, vol. 37, no. 7-8, pp. 581–596, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.compmedimag.2013.09.005spa
dc.relation.references[9] M. Kim, J. C. Han, S. H. Hyun, O. Janssens, S. Van Hoecke, C. Kee, and W. De Neve, “Medinoid: Computer-aided diagnosis and localization of glaucoma using deep learning,” Applied Sciences (Switzerland), vol. 9, no. 15, 2019.spa
dc.relation.references[10] S. K. Devalla, Z. Liang, T. H. Pham, C. Boote, N. G. Strouthidis, A. H. Thiery, and M. J. Girard, “Glaucoma management in the era of artificial intelligence,” British Journal of Ophthalmology, vol. 104, no. 3, pp. 301–311, 2020.spa
dc.relation.references[11] S. Sreng, N. Maneerat, K. Hamamoto, and K. Y. Win, “Deep learning for optic disc segmentation and glaucoma diagnosis on retinal images,” Applied Sciences (Switzerland), vol. 10, no. 14, 2020.spa
dc.relation.references[12] H. Fu, F. Li, Y. Xu, J. Liao, J. Xiong, J. Shen, J. Liu, X. Zhang, C. Yang, F. Lin, H. Luo, H. Li, H. Che, N. Li, and Y. Fan, “A retrospective comparison of deep learning to manual annotations for optic disc and optic cup segmentation in fundus photos,” medRxiv, pp. 1–11, 2020.spa
dc.relation.references[13] S. Malik, N. Kanwal, M. N. Asghar, M. A. A. Sadiq, I. Karamat, and M. Fleury, “Data driven approach for eye disease classification with machine learning,” Applied Sciences (Switzerland), vol. 9, no. 14, 2019.spa
dc.relation.references[14] A. C. Thompson, A. A. Jammal, and F. A. Medeiros, “A review of deep learning for screening, diagnosis, and detection of glaucoma progression,” Translational Vision Science and Technology, vol. 9, no. 2, pp. 1–19, 2020.spa
dc.relation.references[15] H. Liu, L. Li, I. M. Wormstone, C. Qiao, C. Zhang, P. Liu, S. Li, H. Wang, D. Mou, R. Pang, D. Yang, L. M. Zangwill, S. Moghimi, H. Hou, C. Bowd, L. Jiang, Y. Chen, M. Hu, Y. Xu, H. Kang, X. Ji, R. Chang, C. Tham, C. Cheung, D. S. W. Ting, T. Y. Wong, Z. Wang, R. N. Weinreb, M. Xu, and N. Wang, “Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs,” JAMA Ophthalmology, vol. 137, no. 12, pp. 1353–1360, 2019.spa
dc.relation.references[16] Z. Li, S. Keel, C. Liu, and M. He, “Can artificial intelligence make screening faster, more accurate, and more accessible?” Asia-Pacific Journal of Ophthalmology, vol. 7, no. 6, pp. 436–441, 2018.spa
dc.relation.references[17] R. Kapoor, S. P. Walters, and L. A. Al-Aswad, “The current state of artificial intelligence in ophthalmology,” Survey of Ophthalmology, vol. 64, no. 2, pp. 233–240, 2019. [Online]. Available: https://doi.org/10.1016/j.survophthal.2018.09.002spa
dc.relation.references[18] I. J. MacCormick, B. M. Williams, Y. Zheng, K. Li, B. Al-Bander, S. Czanner, R. Cheeseman, C. E. Willoughby, E. N. Brown, G. L. Spaeth, and G. Czanner, “Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile,” PLoS ONE, vol. 14, no. 1, pp. 1–20, 2019. [Online]. Available: http://dx.doi.org/10.1371/journal.pone.0209409spa
dc.relation.references[19] Z. Tan, J. Scheetz, and M. He, “Artificial intelligence in ophthalmology: Accuracy, challenges, and clinical application,” Asia-Pacific Journal of Ophthalmology, vol. 8, no. 3, pp. 197–199, 2019.spa
dc.relation.references[20] A. R. Ran, C. C. Tham, P. P. Chan, C. Y. Cheng, Y. C. Tham, T. H. Rim, and C. Y. Cheung, “Deep learning in glaucoma with optical coherence tomography: a review,” Eye (Basingstoke), vol. 35, no. 1, pp. 188–201, 2021. [Online]. Available: http://dx.doi.org/10.1038/s41433-020-01191-5spa
dc.relation.references[21] F. Abdullah, R. Imtiaz, H. A. Madni, H. A. Khan, T. M. Khan, M. A. Khan, and S. S. Naqvi, “A Review on Glaucoma Disease Detection using Computerized Techniques,” IEEE Access, pp. 37311–37333, 2021.spa
dc.relation.references[22] M. Alghamdi and M. Abdel-Mottaleb, “A Comparative Study of Deep Learning Models for Diagnosing Glaucoma From Fundus Images,” IEEE Access, vol. 9, pp. 23894–23906, 2021.spa
dc.relation.references[23] A. M. Stefan, E. A. Paraschiv, S. Ovreiu, and E. Ovreiu, “A review of glaucoma detection from digital fundus images using machine learning techniques,” 2020 8th E-Health and Bioengineering Conference, EHB 2020, pp. 20–23, 2020.spa
dc.relation.references[24] D. Mirzania, A. C. Thompson, and K. W. Muir, “Applications of deep learning in detection of glaucoma: A systematic review,” European Journal of Ophthalmology, 2020.spa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-NoComercial 4.0 Internacionalspa
dc.rights.urihttp://creativecommons.org/licenses/by-nc/4.0/spa
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computaciónspa
dc.subject.decsEnfermedades del Nervio Ópticospa
dc.subject.decsOptic Nerve Diseaseseng
dc.subject.proposalGlaucomaspa
dc.subject.proposalGlaucomaeng
dc.subject.proposalEscala DDLSspa
dc.subject.proposalDDLS scaleeng
dc.subject.proposalRDRspa
dc.subject.proposalRDReng
dc.subject.proposalYOLOspa
dc.subject.proposalYOLOeng
dc.subject.proposalAprendizaje por transferenciaspa
dc.subject.proposalTransfer learningeng
dc.subject.proposalRedes neuronales convolucionalesspa
dc.subject.proposalConvolutional neural networkseng
dc.subject.proposalModelo de clasificaciónspa
dc.subject.proposalClassification modeleng
dc.subject.proposalModelo de segmentaciónspa
dc.subject.proposalSegmentation modeleng
dc.subject.unescoModelo de simulación
dc.subject.unescoSimulation models
dc.titleModelo de aprendizaje profundo para cuantificar el daño causado por el Glaucoma en el nervio ópticospa
dc.title.translatedDeep learning model to quantify the damage caused by Glaucoma in the optic nerveeng
dc.typeTrabajo de grado - Maestríaspa
dc.type.coarhttp://purl.org/coar/resource_type/c_bdccspa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/masterThesisspa
dc.type.redcolhttp://purl.org/redcol/resource_type/TMspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
dcterms.audience.professionaldevelopmentInvestigadoresspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Archivos

Bloque original

Nombre: 1026591991.2023.pdf
Tamaño: 28.12 MB
Formato: Adobe Portable Document Format
Descripción: Tesis de Maestría en Ingeniería de Sistemas y Computación
