Diagnóstico de severidad de edema pulmonar asistido por multimodal learning
| dc.contributor.advisor | González Osorio, Fabio Augusto | spa |
| dc.contributor.advisor | Toledo Cortés, Santiago | spa |
| dc.contributor.author | Neiza Mejia, Juan Sebastian | spa |
| dc.contributor.cvlac | Neiza, Juan Sebastian [1032436240] | spa |
| dc.contributor.orcid | Neiza Mejia, Juan Sebastian [0009000851299731] | spa |
| dc.contributor.referee | Perdomo Charry, Oscar Julian | spa |
| dc.contributor.referee | Gomez Jaramillo, Francisco Albeiro | spa |
| dc.contributor.researchgroup | Mindlab | |
| dc.date.accessioned | 2025-12-05T13:03:35Z | |
| dc.date.available | 2025-12-05T13:03:35Z | |
| dc.date.issued | 2025-11-12 | |
| dc.description | ilustraciones, fotografías, gráficas, tablas | spa |
| dc.description.abstract | El diagnóstico del edema pulmonar a partir del análisis automático de radiografías de tórax y reportes clínicos representa un desafío relevante en el desarrollo de sistemas de apoyo a la decisión médica. Este problema se ve agravado por dos factores principales: la escasez de anotaciones clínicas de alta calidad y la dificultad de integrar modalidades heterogéneas como imágenes e informes textuales. Para abordar esta tarea, en esta tesis se estudiaron y compararon distintos enfoques unimodales (imagen y texto por separado) y multimodales (fusión de ambas modalidades), incorporando tanto modelos convencionales como modelos fundacionales. El flujo de trabajo experimental incluye tres componentes principales: (i) el uso de extractores visuales como DenseNet121 y DINOv2; (ii) la representación textual mediante modelos como BERT Medical y MedCPT, aplicados tanto a reportes redactados por especialistas como a reportes sintéticos generados automáticamente con ContactDoctor; y (iii) la comparación de diferentes mecanismos de fusión multimodal: un perceptrón multicapa (MLP), la unidad multimodal con compuertas (Gated Multimodal Unit, GMU) y el marco de Kernel Density Matrices (KDM). La metodología se evaluó en una tarea de clasificación ordinal con cuatro niveles de severidad del edema pulmonar, utilizando el conjunto de datos MIMIC-CXR. Los resultados muestran que: (i) los modelos basados en imágenes superan a los textuales cuando se consideran de manera aislada, alcanzando un macro F1-score máximo de 0.45 con DINOv2; (ii) los reportes generados automáticamente aportan uniformidad y pueden mejorar el desempeño frente a los reportes humanos en ciertos escenarios multimodales; y (iii) la fusión con KDM alcanza el mejor resultado global, logrando un macro F1-score de 0.48, lo que confirma la utilidad de la integración multimodal frente a cualquier modalidad aislada. Los resultados demuestran que el aprovechamiento de modelos fundacionales y de mecanismos de fusión probabilística como KDM mejora el rendimiento en la predicción de la severidad del edema pulmonar. Estos hallazgos sugieren que la combinación de información visual y textual puede potenciar la capacidad diagnóstica en entornos clínicos con datos limitados. (Texto tomado de la fuente). | spa |
| dc.description.abstract | The diagnosis of pulmonary edema from the automatic analysis of chest radiographs and clinical reports represents a major challenge in the development of medical decision-support systems. This problem is aggravated by two main factors: the scarcity of high-quality clinical annotations and the difficulty of integrating heterogeneous modalities such as images and textual reports. To address this task, this thesis studied and compared different unimodal approaches (image and text separately) and multimodal approaches (fusion of both modalities), incorporating both conventional models and foundation models. The experimental workflow included three main components: (i) the use of visual extractors such as DenseNet121 and DINOv2; (ii) textual representation through models such as BERT Medical and MedCPT, applied both to reports written by specialists and to synthetic reports automatically generated with ContactDoctor; and (iii) the comparison of different multimodal fusion mechanisms: a multilayer perceptron (MLP), the Gated Multimodal Unit (GMU), and the Kernel Density Matrices (KDM) framework. The methodology was evaluated on an ordinal classification task with four levels of pulmonary edema severity, using the MIMIC-CXR dataset. The results show that: (i) image-based models outperform text-based ones when considered in isolation, reaching a maximum macro F1-score of 0.45 with DINOv2; (ii) automatically generated reports provide uniformity and can improve performance compared to human-written reports in certain multimodal scenarios; and (iii) fusion with KDM achieves the best overall result, reaching a macro F1-score of 0.48, confirming the utility of multimodal integration over any isolated modality. The results demonstrate that leveraging foundation models and probabilistic fusion mechanisms such as KDM improves performance in predicting pulmonary edema severity. These findings suggest that the combination of visual and textual information can enhance diagnostic capacity in clinical environments with limited data. | eng |
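The abstracts above describe a two-stage pipeline: unimodal feature extraction followed by multimodal fusion. The sketches below make those stages concrete. They are minimal illustrations under stated assumptions, not the thesis implementation: the checkpoint IDs (`facebook/dinov2-base`, `ncbi/MedCPT-Query-Encoder`), embedding dimensions, and hidden sizes are assumptions chosen for a runnable example, and the DenseNet121 / BERT Medical extractors and the MLP / KDM fusion variants are omitted.

```python
# Sketch 1: unimodal embeddings for one chest X-ray / report pair.
# Checkpoint IDs are assumptions; the thesis evaluated fine-tuned variants.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

# Visual extractor: a DINOv2 backbone.
img_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
img_encoder = AutoModel.from_pretrained("facebook/dinov2-base")

# Textual extractor: the MedCPT query encoder cited in the references.
tokenizer = AutoTokenizer.from_pretrained("ncbi/MedCPT-Query-Encoder")
txt_encoder = AutoModel.from_pretrained("ncbi/MedCPT-Query-Encoder")

@torch.no_grad()
def embed_study(image: Image.Image, report: str):
    """Return a (1, 768) visual and a (1, 768) textual embedding."""
    pix = img_processor(images=image, return_tensors="pt")
    v = img_encoder(**pix).last_hidden_state[:, 0]  # DINOv2 [CLS] token
    tok = tokenizer(report, truncation=True, max_length=512, return_tensors="pt")
    t = txt_encoder(**tok).last_hidden_state[:, 0]  # MedCPT [CLS] token
    return v, t
```

For fusion, the Gated Multimodal Unit of Arevalo et al. (cited in the references) projects each modality through a tanh layer and lets a sigmoid gate, computed from both inputs, decide how much each modality contributes to the fused representation.

```python
# Sketch 2: GMU fusion with a 4-way head for the severity grades.
# Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GMU(nn.Module):
    """h = z * tanh(Wv v) + (1 - z) * tanh(Wt t), with z = sigmoid(Wz [v; t])."""
    def __init__(self, dim_v: int, dim_t: int, dim_h: int):
        super().__init__()
        self.proj_v = nn.Linear(dim_v, dim_h)
        self.proj_t = nn.Linear(dim_t, dim_h)
        self.gate = nn.Linear(dim_v + dim_t, dim_h)

    def forward(self, v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        hv = torch.tanh(self.proj_v(v))
        ht = torch.tanh(self.proj_t(t))
        z = torch.sigmoid(self.gate(torch.cat([v, t], dim=-1)))
        return z * hv + (1.0 - z) * ht

class SeverityClassifier(nn.Module):
    """GMU fusion followed by logits over the four edema severity grades."""
    def __init__(self, dim_v=768, dim_t=768, dim_h=256, n_grades=4):
        super().__init__()
        self.fusion = GMU(dim_v, dim_t, dim_h)
        self.head = nn.Linear(dim_h, n_grades)

    def forward(self, v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.head(self.fusion(v, t))
```

The macro F1-score quoted in the abstracts averages per-class F1 over the four grades, e.g. `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`, so rare severity levels weigh as much as common ones.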
| dc.description.degreelevel | Maestría | spa |
| dc.description.degreename | Magíster en Ingeniería - Ingeniería de Sistemas y Computación | spa |
| dc.description.researcharea | Sistemas inteligentes | spa |
| dc.format.extent | x, 43 páginas | spa |
| dc.format.mimetype | application/pdf | |
| dc.identifier.instname | Universidad Nacional de Colombia | spa |
| dc.identifier.reponame | Repositorio Institucional Universidad Nacional de Colombia | spa |
| dc.identifier.repourl | https://repositorio.unal.edu.co/ | spa |
| dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/89182 | |
| dc.language.iso | spa | |
| dc.publisher | Universidad Nacional de Colombia | spa |
| dc.publisher.branch | Universidad Nacional de Colombia - Sede Bogotá | spa |
| dc.publisher.department | Departamento de Ingeniería de Sistemas e Industrial | spa |
| dc.publisher.faculty | Facultad de Ingeniería | spa |
| dc.publisher.place | Bogotá, Colombia | spa |
| dc.publisher.program | Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación | spa |
| dc.relation.indexed | Bireme | spa |
| dc.relation.references | A. K. Dubey, M. T. Young, C. Stanley, D. Lunga, and J. Hinkle, “Computer-aided abnormality detection in chest radiographs in a clinical setting via domain-adaptation,” 2020. | |
| dc.relation.references | A. Poernama, I. Soesanti, and O. Wahyunggoro, “Feature extraction and feature selection methods in classification of brain mri images: A review,” 10 2019, pp. 58–63. | |
| dc.relation.references | A. Schaumberg, W. Juarez-Nicanor, S. Choudhury, L. Pastrián, B. Pritt, M. P. Pozuelo, R. S. Sánchez, K. Ho, N. Zahra, B. Sener, M. Aly, and T. Fuchs, “Interpretable multimodal deep learning for real-time pan-tissue pan-disease pathology search on social media,” Modern Pathology, 2020. | |
| dc.relation.references | A. Siamak, R. Sadeghian, I. Abdellatif, and S. Nwoji, “Diagnosing heart disease types from chest x-rays using a deep learning approach,” 12 2019, pp. 910–913. | |
| dc.relation.references | A. Taleb, W. Loetzsch, N. Danz, J. Severin, and T. Gaertner, “3d self-supervised methods for medical imaging,” Digital Health & Machine Learning, Hasso-Plattner-Institute, Potsdam University, Germany, pp. 1–17, 2020. | |
| dc.relation.references | A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. | |
| dc.relation.references | A. Chavarro, J. Camargo, and F. González, “Visualizing multimodal image collections,” 09 2013, pp. 1–6. | |
| dc.relation.references | Appinventiv, “Transformer vs rnn: Which is better for nlp tasks?” Appinventiv Blog, 2024. | |
| dc.relation.references | “Bert medical ner,” 2024. [Online]. Available: https://huggingface.co/medical-ner-proj/bert-medical-ner-proj | |
| dc.relation.references | C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual representation learning by context prediction,” 2016. | |
| dc.relation.references | C. Lin, P. Hu, H. Su, S. Li, J. Mei, J. Zhou, and H. Leung, “Sensemood: Depression detection on social media,” 2020, pp. 407–411. | |
| dc.relation.references | C. Q. Chen, L. Wang, L. Wang, Z. Deng, J. Zhang, and Y. Zhu, “Glioma grade prediction using wavelet scattering-based radiomics,” IEEE Access, vol. 8, pp. 106564–106575, 2020. | |
| dc.relation.references | “Contactdoctor-bio-medical-multimodal-llama-3-8b-v1: A high-performance biomedical multimodal llm,” https://huggingface.co/ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1, 2024. | |
| dc.relation.references | “Dinov2 base fine-tuned on chest x-ray,” 2024. [Online]. Available: https://huggingface.co/Haaaaaaaaaax/dinov2-Base-finetuned-chest_xray | |
| dc.relation.references | D. Engemann, O. Kozynets, D. Sabbagh, G. Lemaître, G. Varoquaux, F. Liem, and A. Gramfort, “Combining magnetoencephalography with magnetic resonance imaging enhances learning of surrogate-biomarkers,” eLife, vol. 9, p. e54055, 05 2020. | |
| dc.relation.references | D. Ramachandram and G. W. Taylor, “Deep multimodal learning: A survey on recent advances and trends,” IEEE Signal Processing Magazine, vol. 34, pp. 96–108, 11 2017. | |
| dc.relation.references | D. Sun, M. Wang, and A. Li, “A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. PP, pp. 1–1, 02 2018. | |
| dc.relation.references | D.-i. Eun, R. Jang, W. Ha, H. Lee, S. Jung, and N. Kim, “Deep-learning-based image quality enhancement of compressed sensing magnetic resonance imaging of vessel wall: comparison of self-supervised and unsupervised approaches,” Scientific Reports, vol. 10, p. 13950, 08 2020. | |
| dc.relation.references | F. A. González, R. Ramos-Pollán, and J. A. Gallego-Mejia, “Quantum kernel mixtures for probabilistic deep learning,” 2023. | |
| dc.relation.references | G. Tadesse, H. Javed, N. Thanh, H. Thi, L. Tan, L. Thwaites, D. Clifton, and T. Zhu, “Multi-modal diagnosis of infectious diseases in the developing world,” IEEE Journal of Biomedical and Health Informatics, vol. 24, pp. 2131–2141, 2020. | |
| dc.relation.references | H. Chen, M. Gao, Y. Zhang, W. Liang, and X. Zou, “Attention-based multi-nmf deep neural network with multimodality data for breast cancer prognosis model,” BioMed Research International, vol. 2019, pp. 1–11, 05 2019. | |
| dc.relation.references | H. Liang, X. Sun, Y. Sun, and Y. Gao, “Text feature extraction based on deep learning: a review,” EURASIP Journal on Wireless Communications and Networking, vol. 2017, 12 2017. | |
| dc.relation.references | H. Wang and B. Raj, “On the Origin of Deep Learning,” pp. 1–72, 2017. [Online]. Available: http://arxiv.org/abs/1702.07800 | |
| dc.relation.references | A. S. Hervella, J. Rouco, J. Novo, and M. Ortega, “Self-supervised multimodal reconstruction of retinal images over paired datasets,” Expert Systems with Applications, vol. 161, 2020. | |
| dc.relation.references | I. Qureshi, J. Ma, and Q. Abbas, “Diabetic retinopathy detection and stage classification in eye fundus images using active deep learning,” Multimedia Tools and Applications, vol. 80, 03 2021. | |
| dc.relation.references | J. Arevalo, T. Solorio, M. M. y Gómez, and F. A. González, “Gated multimodal units for information fusion,” 2 2017. [Online]. Available: http://arxiv.org/abs/1702.01992 | |
| dc.relation.references | J. Ben-Abdallah, J. Caicedo, F. González, and O. Nasraoui, “Multimodal image annotation using non-negative matrix factorization,” vol. 1, 08 2010, pp. 128–135. | |
| dc.relation.references | J. C. Caicedo, J. BenAbdallah, F. A. González, and O. Nasraoui, “Multimodal representation, indexing, automated annotation and retrieval of image collections via non-negative matrix factorization,” Neurocomputing, vol. 76, no. 1, pp. 50–60, 2012, seventh International Symposium on Neural Networks (ISNN 2010) Advances in Web Intelligence. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231211004048 | |
| dc.relation.references | J. C. Caicedo and F. A. González, “Image retrieval using multimodal data,” 2014. [Online]. Available: http://paradigma.uniandes.edu.co/images/sampledata/PARADIGMA/ediciones/Edicion3/Numero3/Articulo7/j-caicedo-7.pdf | |
| dc.relation.references | J. Camargo and F. González, “Multimodal latent topic analysis for image collection summarization,” Information Sciences, vol. 328, pp. 270–287, 01 2016. | |
| dc.relation.references | J. Lara Ramírez, V. Contreras, J. Otálora Montenegro, H. Müller, and F. González, “Multimodal latent semantic alignment for automated prostate tissue classification and retrieval,” 09 2020. | |
| dc.relation.references | J. Peeken, T. Goldberg, T. Pyka, M. Bernhofer, B. Wiestler, K. Kessel, P. Tafti, F. Nuesslin, A. Braun, C. Zimmer, B. Rost, and S. Combs, “Combining multimodal imaging and treatment features improves machine learning-based prognostic assessment in patients with glioblastoma multiforme,” Cancer Medicine, vol. 8, pp. 1–9, 12 2018. | |
| dc.relation.references | J. Yuan, X. Ran, K. Liu, C. Yao, Y. Yao, H. Wu, and Q. Liu, “Machine learning applications on neuroimaging for diagnosis and prognosis of epilepsy: A review,” 2021. | |
| dc.relation.references | “Keras applications: Densenet,” 2024. [Online]. Available: https://keras.io/api/applications/densenet/ | |
| dc.relation.references | Kolena, “Transformer vs rnn: 4 key differences and how to choose,” Kolena Guides, 2024. | |
| dc.relation.references | L. Jing and Y. Tian, “Self-supervised visual feature learning with deep neural networks: A survey,” 2019. | |
| dc.relation.references | Q. Jin, W. Kim, Q. Chen, D. C. Comeau, L. Yeganova, W. J. Wilbur, and Z. Lu, “MedCPT: Contrastive pre-trained transformers with large-scale PubMed search logs for zero-shot biomedical information retrieval,” Bioinformatics, vol. 39, no. 11, 2023. | |
| dc.relation.references | L. Lazli, M. Boukadoum, and O. Mohamed, “A survey on computer-aided diagnosis of brain disorders through mri based on machine learning and data mining methodologies with an emphasis on alzheimer disease diagnosis and the contribution of the multimodal fusion,” Applied Sciences (Switzerland), vol. 10, 2020. | |
| dc.relation.references | L. Vale-Silva and K. Rohr, “Pan-cancer prognosis prediction using multimodal deep learning,” 04 2020, pp. 568–571. | |
| dc.relation.references | M. Elbattah, C. Loughnane, J.-L. Guerin, R. Carette, F. Cilia, and G. Dequen, “Variational autoencoder for image-based augmentation of eye-tracking data,” Journal of Imaging, vol. 7, p. 83, 05 2021. | |
| dc.relation.references | M. J. Horry, S. Chakraborty, M. Paul, A. Ulhaq, B. Pradhan, M. Saha, and N. Shukla, “Covid-19 detection through transfer learning using multimodal imaging data,” IEEE Access, vol. 8, pp. 149808–149824, 2020. | |
| dc.relation.references | M. Kazmierski, M. Welch, S. Kim, C. McIntosh, Princess Margaret Head and Neck Cancer Group, K. Rey-McIntyre, S. H. Huang, T. Patel, T. Tadic, M. Milosevic, F.-F. Liu, A. Hope, S. Bratman, and B. Haibe-Kains, “A machine learning challenge for prognostic modelling in head and neck cancer using multi-modal data,” 2021. | |
| dc.relation.references | M. Li, N. Arun, M. Gidwani, K. Chang, F. Deng, B. Little, D. Mendoza, M. Lang, O. Vtc Lee, A. O’Shea, A. Parakh, P. Singh, and J. Kalpathy-Cramer, “Automated assessment and tracking of covid-19 pulmonary disease severity on chest radiographs using convolutional siamese neural networks,” Radiology: Artificial Intelligence, vol. 2, p. e200079, 07 2020. | |
| dc.relation.references | “Medcpt query encoder (qenc),” 2024. [Online]. Available: https://huggingface.co/ncbi/MedCPT-Query-Encoder | |
| dc.relation.references | M. Oquab, T. Darcet, T. Moutakanni, H. V. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al., “Dinov2: Learning robust visual features without supervision,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 19867–19877. | |
| dc.relation.references | N. Koutsouleris, D. B. Dwyer, F. Degenhardt, C. Maj, M. F. Urquijo-Castro, R. Sanfelici, D. Popovic, O. Oeztuerk, S. S. Haas, J. Weiske, A. Ruef, L. Kambeitz-Ilankovic, L. A. Antonucci, S. Neufang, C. Schmidt-Kraepelin, S. Ruhrmann, N. Penzel, J. Kambeitz, T. K. Haidl, M. Rosen, K. Chisholm, A. Riecher-Rössler, L. Egloff, A. Schmidt, C. Andreou, J. Hietala, T. Schirmer, G. Romer, P. Walger, M. Franscini, N. Traber-Walker, B. G. Schimmelmann, R. Flückiger, C. Michel, W. Rössler, O. Borisov, P. M. Krawitz, K. Heekeren, R. Buechler, C. Pantelis, P. Falkai, R. K. R. Salokangas, R. Lencer, A. Bertolino, S. Borgwardt, M. Noethen, P. Brambilla, S. J. Wood, R. Upthegrove, F. Schultze-Lutter, A. Theodoridou, E. Meisenzahl, and PRONIA Consortium, “Multimodal machine learning workflows for prediction of psychosis in patients with clinical high-risk syndromes and recent-onset depression,” JAMA Psychiatry, vol. 78, no. 2, pp. 195–209, Feb. 2021. | |
| dc.relation.references | O. Razeghi, J. Solis-Lemus, A. Lee, R. Karim, C. Corrado, C. Roney, A. De Vecchi, and S. Niederer, “Cemrgapp: An interactive medical imaging application with image processing, computer vision, and machine learning toolkits for cardiovascular research,” SoftwareX, vol. 12, p. 100570, 07 2020. | |
| dc.relation.references | P. Kellmeyer, “Artificial intelligence in basic and clinical neuroscience: Scientific opportunities and ethical challenges,” Neuroforum, 2019. | |
| dc.relation.references | P. Khosla, P. Teterwak, C. Wang et al., “Supervised contrastive learning,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 18661–18673. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2020/file/d89a66c7c80a29b1bdbab0f2a1a94af8-Paper.pdf | |
| dc.relation.references | R. Bommasani, D. A. Hudson et al., “On the opportunities and risks of foundation models,” Stanford University, Tech. Rep., 2021. | |
| dc.relation.references | R. Liao, J. Rubin, G. Lam, S. Berkowitz, S. Dalal, W. Wells, S. Horng, and P. Golland, “Semi-supervised learning for quantification of pulmonary edema in chest x-ray images,” 2019. | |
| dc.relation.references | R. Liao, G. Chauhan, P. Golland, S. Berkowitz, and S. Horng, “Pulmonary edema severity grades based on mimic-cxr (version 1.0.1),” PhysioNet, 2021, RRID:SCR_007345. [Online]. Available: https://physionet.org/content/mimic-cxr-pe-severity/1.0.1/ | |
| dc.relation.references | S. Horng, R. Liao, X. Wang, S. Dalal, P. Golland, and S. Berkowitz, “Deep learning to quantify pulmonary edema in chest radiographs,” Radiology: Artificial Intelligence, vol. 3, p. e190228, 01 2021. | |
| dc.relation.references | S. Liang, R. Zhang, D. Liang, T. Song, T. Ai, C. Xia, L. Xia, and Y. Wang, “Multimodal 3d densenet for idh genotype prediction in gliomas,” Genes, vol. 9, 2018. | |
| dc.relation.references | S. Maharjan, M. Montes, F. González, and T. Solorio, “A genre-aware attention model to improve the likability prediction of books,” 11 2018. | |
| dc.relation.references | S. Sierra and F. A. González, “Combining textual and visual representations for multimodal author profiling: Notebook for PAN at CLEF 2018,” in Working Notes of CLEF 2018 - Conference and Labs of the Evaluation Forum, Avignon, France, September 10-14, 2018, ser. CEUR Workshop Proceedings, L. Cappellato, N. Ferro, J. Nie, and L. Soulier, Eds., vol. 2125. CEUR-WS.org, 2018. [Online]. Available: https://ceur-ws.org/Vol-2125/paper_219.pdf | |
| dc.relation.references | S. Tabarestani, M. Aghili, M. Eslami, M. Cabrerizo, A. Barreto, N. Rishe, R. Curiel, D. Loewenstein, R. Duara, and M. Adjouadi, “A distributed multitask multimodal approach for the prediction of alzheimer’s disease in a longitudinal study,” NeuroImage, vol. 206, 2020. | |
| dc.relation.references | S. Toledo-Cortés, D. H. Useche, and F. A. González, “Prostate tissue grading with deep quantum measurement ordinal regression,” CoRR, vol. abs/2103.03188, 2021. [Online]. Available: https://arxiv.org/abs/2103.03188 | |
| dc.relation.references | W. Wang, D. Tran, and M. Feiszli, “What makes training multi-modal classification networks hard?” 2020. | |
| dc.relation.references | Y. Khare, V. Bagal, M. Mathew, A. Devi, U. D. Priyakumar, and C. Jawahar, “Mmbert: Multimodal bert pretraining for improved medical vqa,” 2021. | |
| dc.relation.references | Y. Li, H. Wang, and Y. Luo, “A comparison of pre-trained vision-and-language models for multimodal representation learning across medical images and reports,” 2020. | |
| dc.relation.references | J. Yap, W. Yolland, and P. Tschandl, “Multimodal skin lesion classification using deep learning,” Experimental Dermatology, vol. 27, pp. 1261–1267, 2018. | |
| dc.relation.references | Z. Hu, Z. Zhang, H. Yang, Q. Chen, and D. Zuo, “A deep learning approach for predicting the quality of online health expert question-answering services,” Journal of Biomedical Informatics, vol. 71, pp. 241–253, 2017. | |
| dc.relation.references | Z. Tang, Y. Xu, Z. Jiao, J. Lu, L. Jin, A. Aibaidula, J. Wu, Q. Wang, and H. Zhang, “Pre-operative overall survival time prediction for glioblastoma patients using deep learning on both imaging phenotype and genotype,” vol. 11764, 10 2019, pp. 415–422. | |
| dc.rights.accessrights | info:eu-repo/semantics/openAccess | |
| dc.rights.license | Atribución-NoComercial 4.0 Internacional | |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | |
| dc.subject.ddc | 000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación | spa |
| dc.subject.ddc | 610 - Medicina y salud::616 - Enfermedades | spa |
| dc.subject.decs | Edema Pulmonar | spa |
| dc.subject.decs | Pulmonary Edema | eng |
| dc.subject.decs | Redes Neuronales Convolucionales | spa |
| dc.subject.decs | Convolutional Neural Networks | eng |
| dc.subject.decs | Algoritmos | spa |
| dc.subject.decs | Algorithms | eng |
| dc.subject.proposal | Edema pulmonar | spa |
| dc.subject.proposal | Aprendizaje multimodal | spa |
| dc.subject.proposal | Fusión de características | spa |
| dc.subject.proposal | Modelos fundacionales | spa |
| dc.subject.proposal | Kernel Density matrices | spa |
| dc.subject.proposal | Enriquecimiento semántico | spa |
| dc.subject.proposal | Diagnóstico asistido por computador | spa |
| dc.subject.proposal | Pulmonary edema | eng |
| dc.subject.proposal | Multimodal learning | eng |
| dc.subject.proposal | Feature fusion | eng |
| dc.subject.proposal | Foundation models | eng |
| dc.subject.proposal | Kernel Density matrices | eng |
| dc.subject.proposal | Semantic enrichment | eng |
| dc.subject.proposal | Computer-aided diagnosis | eng |
| dc.title | Diagnóstico de severidad de edema pulmonar asistido por multimodal learning | spa |
| dc.title.translated | Diagnosis of pulmonary edema severity assisted by multimodal learning | eng |
| dc.type | Trabajo de grado - Maestría | spa |
| dc.type.coar | http://purl.org/coar/resource_type/c_bdcc | |
| dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa | |
| dc.type.content | Text | |
| dc.type.driver | info:eu-repo/semantics/masterThesis | |
| dc.type.redcol | http://purl.org/redcol/resource_type/TM | |
| dc.type.version | info:eu-repo/semantics/acceptedVersion | |
| dcterms.audience.professionaldevelopment | Investigadores | spa |
| oaire.accessrights | http://purl.org/coar/access_right/c_abf2 |
Archivos
Bloque original
- Nombre: DocumentoFinalTesis1032436240.pdf
- Tamaño: 1.62 MB
- Formato: Adobe Portable Document Format
- Descripción: Tesis de Maestría en Ingeniería - Ingeniería de Sistemas y Computación