
Diagnóstico de severidad de edema pulmonar asistido por multimodal learning

dc.contributor.advisor: González Osorio, Fabio Augusto (spa)
dc.contributor.advisor: Toledo Cortés, Santiago (spa)
dc.contributor.author: Neiza Mejia, Juan Sebastian (spa)
dc.contributor.cvlac: Neiza, Juan Sebastian [1032436240] (spa)
dc.contributor.orcid: Neiza Mejia, Juan Sebastian [0009000851299731] (spa)
dc.contributor.referee: Perdomo Charry, Oscar Julian (spa)
dc.contributor.referee: Gomez Jaramillo, Francisco Albeiro (spa)
dc.contributor.researchgroup: Mindlab
dc.date.accessioned: 2025-12-05T13:03:35Z
dc.date.available: 2025-12-05T13:03:35Z
dc.date.issued: 2025-11-12
dc.description: ilustraciones, fotografías, gráficas, tablas (spa)
dc.description.abstract: El diagnóstico del edema pulmonar a partir del análisis automático de radiografías de tórax y reportes clínicos representa un desafío relevante en el desarrollo de sistemas de apoyo a la decisión médica. Este problema se ve agravado por dos factores principales: la escasez de anotaciones clínicas de alta calidad y la dificultad de integrar modalidades heterogéneas como imágenes e informes textuales. Para abordar esta tarea, en esta tesis se estudiaron y compararon distintos enfoques unimodales (imagen y texto por separado) y multimodales (fusión de ambas modalidades), incorporando tanto modelos convencionales como modelos fundacionales. El flujo de trabajo experimental incluye tres componentes principales: (i) el uso de extractores visuales como DenseNet121 y DINOv2; (ii) la representación textual mediante modelos como BERT Medical y MedCPT, aplicados tanto a reportes redactados por especialistas como a reportes sintéticos generados automáticamente con ContactDoctor; y (iii) la comparación de diferentes mecanismos de fusión multimodal: un perceptrón multicapa (MLP), la unidad multimodal con compuertas (Gated Multimodal Unit, GMU) y el marco de Kernel Density Matrices (KDM). La metodología se evaluó en una tarea de clasificación ordinal con cuatro niveles de severidad del edema pulmonar, utilizando el conjunto de datos MIMIC-CXR. Los resultados muestran que: (i) los modelos basados en imágenes superan a los textuales cuando se consideran de manera aislada, alcanzando un macro F1-score máximo de 0.45 con DINOv2; (ii) los reportes generados automáticamente aportan uniformidad y pueden mejorar el desempeño frente a los reportes humanos en ciertos escenarios multimodales; y (iii) la fusión con KDM alcanza el mejor resultado global, logrando un macro F1-score de 0.48, lo que confirma la utilidad de la integración multimodal frente a cualquier modalidad aislada. Los resultados demuestran que el aprovechamiento de modelos fundacionales y de mecanismos de fusión probabilística como KDM mejora el rendimiento en la predicción de la severidad del edema pulmonar. Estos hallazgos sugieren que la combinación de información visual y textual puede potenciar la capacidad diagnóstica en entornos clínicos con datos limitados. (Texto tomado de la fuente). (spa)
dc.description.abstract: The diagnosis of pulmonary edema from the automatic analysis of chest radiographs and clinical reports represents a major challenge in the development of medical decision-support systems. This problem is aggravated by two main factors: the scarcity of high-quality clinical annotations and the difficulty of integrating heterogeneous modalities such as images and textual reports. To address this task, this thesis studied and compared different unimodal approaches (image and text separately) and multimodal approaches (fusion of both modalities), incorporating both conventional models and foundation models. The experimental workflow included three main components: (i) the use of visual extractors such as DenseNet121 and DINOv2; (ii) textual representation through models such as BERT Medical and MedCPT, applied both to reports written by specialists and to synthetic reports automatically generated with ContactDoctor; and (iii) the comparison of different multimodal fusion mechanisms: a multilayer perceptron (MLP), the Gated Multimodal Unit (GMU), and the Kernel Density Matrices (KDM) framework. The methodology was evaluated on an ordinal classification task with four levels of pulmonary edema severity, using the MIMIC-CXR dataset. The results show that: (i) image-based models outperform text-based ones when considered in isolation, reaching a maximum macro F1-score of 0.45 with DINOv2; (ii) automatically generated reports provide uniformity and can improve performance compared to human-written reports in certain multimodal scenarios; and (iii) fusion with KDM achieves the best overall result, reaching a macro F1-score of 0.48, confirming the utility of multimodal integration over any isolated modality. The results demonstrate that leveraging foundation models and probabilistic fusion mechanisms such as KDM improves performance in predicting pulmonary edema severity. These findings suggest that the combination of visual and textual information can enhance diagnostic capacity in clinical environments with limited data. (eng)
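
The abstracts describe gated fusion of image and text embeddings via the Gated Multimodal Unit (Arevalo et al., cited among the references of this record). A minimal PyTorch sketch of such a bimodal gate follows; the 768-dimensional embeddings, the variable names, and the four-way linear severity head are illustrative assumptions for exposition, not the configuration evaluated in the thesis.

import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    """Bimodal GMU: a learned gate z mixes nonlinear projections of the two
    modalities, h = z * tanh(Wv xv) + (1 - z) * tanh(Wt xt)."""

    def __init__(self, dim_visual: int, dim_text: int, dim_hidden: int):
        super().__init__()
        self.proj_visual = nn.Linear(dim_visual, dim_hidden)
        self.proj_text = nn.Linear(dim_text, dim_hidden)
        self.gate = nn.Linear(dim_visual + dim_text, dim_hidden)

    def forward(self, x_visual: torch.Tensor, x_text: torch.Tensor) -> torch.Tensor:
        h_v = torch.tanh(self.proj_visual(x_visual))   # visual branch
        h_t = torch.tanh(self.proj_text(x_text))       # textual branch
        z = torch.sigmoid(self.gate(torch.cat([x_visual, x_text], dim=-1)))
        return z * h_v + (1.0 - z) * h_t               # gated mixture

# Hypothetical sizes: 768-d image features (e.g., from a DINOv2-style
# backbone) and 768-d report features (e.g., from a MedCPT-style encoder),
# fused to 256-d and mapped to the four edema severity levels.
fusion = GatedMultimodalUnit(dim_visual=768, dim_text=768, dim_hidden=256)
severity_head = nn.Linear(256, 4)
x_img, x_txt = torch.randn(8, 768), torch.randn(8, 768)
logits = severity_head(fusion(x_img, x_txt))  # shape: (8, 4)

The gate z lets the network weight, per example and per dimension, how much of the prediction comes from the image versus the report, which is one way a fused model can outperform either modality alone.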
dc.description.degreelevel: Maestría (spa)
dc.description.degreename: Magíster en Ingeniería - Ingeniería de Sistemas y Computación (spa)
dc.description.researcharea: Sistemas inteligentes (spa)
dc.format.extent: x, 43 páginas (spa)
dc.format.mimetype: application/pdf
dc.identifier.instname: Universidad Nacional de Colombia (spa)
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia (spa)
dc.identifier.repourl: https://repositorio.unal.edu.co/ (spa)
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/89182
dc.language.iso: spa
dc.publisher: Universidad Nacional de Colombia (spa)
dc.publisher.branch: Universidad Nacional de Colombia - Sede Bogotá (spa)
dc.publisher.department: Departamento de Ingeniería de Sistemas e Industrial (spa)
dc.publisher.faculty: Facultad de Ingeniería (spa)
dc.publisher.place: Bogotá, Colombia (spa)
dc.publisher.program: Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación (spa)
dc.relation.indexed: Bireme (spa)
dc.relation.references: A. K. Dubey, M. T. Young, C. Stanley, D. Lunga, and J. Hinkle, “Computer-aided abnormality detection in chest radiographs in a clinical setting via domain-adaptation,” 2020.
dc.relation.references: A. Poernama, I. Soesanti, and O. Wahyunggoro, “Feature extraction and feature selection methods in classification of brain MRI images: A review,” 10 2019, pp. 58–63.
dc.relation.references: A. Schaumberg, W. Juarez-Nicanor, S. Choudhury, L. Pastrián, B. Pritt, M. P. Pozuelo, R. S. Sánchez, K. Ho, N. Zahra, B. Sener, M. Aly, and T. Fuchs, “Interpretable multimodal deep learning for real-time pan-tissue pan-disease pathology search on social media,” Modern Pathology, 2020.
dc.relation.references: A. Siamak, R. Sadeghian, I. Abdellatif, and S. Nwoji, “Diagnosing heart disease types from chest X-rays using a deep learning approach,” 12 2019, pp. 910–913.
dc.relation.references: A. Taleb, W. Loetzsch, N. Danz, J. Severin, and T. Gaertner, “3D self-supervised methods for medical imaging,” Digital Health & Machine Learning, Hasso-Plattner-Institute, Potsdam University, Berlin, Germany, pp. 1–17, 2020.
dc.relation.references: A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
dc.relation.references: A. Chavarro, J. Camargo, and F. González, “Visualizing multimodal image collections,” 09 2013, pp. 1–6.
dc.relation.references: Appinventiv, “Transformer vs RNN: Which is better for NLP tasks?” Appinventiv Blog, 2024.
dc.relation.references: “BERT Medical NER,” 2024. [Online]. Available: https://huggingface.co/medical-ner-proj/bert-medical-ner-proj
dc.relation.references: C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual representation learning by context prediction,” 2016.
dc.relation.references: C. Lin, P. Hu, H. Su, S. Li, J. Mei, J. Zhou, and H. Leung, “SenseMood: Depression detection on social media,” 2020, pp. 407–411.
dc.relation.references: C. Q. Chen, L. Wang, L. Wang, Z. Deng, J. Zhang, and Y. Zhu, “Glioma grade prediction using wavelet scattering-based radiomics,” IEEE Access, vol. 8, pp. 106564–106575, 2020.
dc.relation.references: “ContactDoctor Bio-Medical-MultiModal-Llama-3-8B-V1: A high-performance biomedical multimodal LLM,” 2024. [Online]. Available: https://huggingface.co/ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
dc.relation.references: “DINOv2 Base fine-tuned on chest X-ray,” 2024. [Online]. Available: https://huggingface.co/Haaaaaaaaaax/dinov2-Base-finetuned-chest_xray
dc.relation.references: D. Engemann, O. Kozynets, D. Sabbagh, G. Lemaître, G. Varoquaux, F. Liem, and A. Gramfort, “Combining magnetoencephalography with magnetic resonance imaging enhances learning of surrogate-biomarkers,” eLife, vol. 9, p. e54055, 05 2020.
dc.relation.references: D. Ramachandram and G. W. Taylor, “Deep multimodal learning: A survey on recent advances and trends,” IEEE Signal Processing Magazine, vol. 34, pp. 96–108, 11 2017.
dc.relation.references: D. Sun, M. Wang, and A. Li, “A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. PP, pp. 1–1, 02 2018.
dc.relation.references: D.-i. Eun, R. Jang, W. Ha, H. Lee, S. Jung, and N. Kim, “Deep-learning-based image quality enhancement of compressed sensing magnetic resonance imaging of vessel wall: comparison of self-supervised and unsupervised approaches,” Scientific Reports, vol. 10, p. 13950, 08 2020.
dc.relation.references: F. A. González, R. Ramos-Pollán, and J. A. Gallego-Mejia, “Quantum kernel mixtures for probabilistic deep learning,” 2023.
dc.relation.references: G. Tadesse, H. Javed, N. Thanh, H. Thi, L. Tan, L. Thwaites, D. Clifton, and T. Zhu, “Multi-modal diagnosis of infectious diseases in the developing world,” IEEE Journal of Biomedical and Health Informatics, vol. 24, pp. 2131–2141, 2020.
dc.relation.references: H. Chen, M. Gao, Y. Zhang, W. Liang, and X. Zou, “Attention-based multi-NMF deep neural network with multimodality data for breast cancer prognosis model,” BioMed Research International, vol. 2019, pp. 1–11, 05 2019.
dc.relation.references: H. Liang, X. Sun, S. Yunlei, and Y. Gao, “Text feature extraction based on deep learning: a review,” EURASIP Journal on Wireless Communications and Networking, vol. 2017, 12 2017.
dc.relation.references: H. Wang and B. Raj, “On the Origin of Deep Learning,” pp. 1–72, 2017. [Online]. Available: http://arxiv.org/abs/1702.07800
dc.relation.references: Hervella, J. Rouco, J. Novo, and M. Ortega, “Self-supervised multimodal reconstruction of retinal images over paired datasets,” Expert Systems with Applications, vol. 161, 2020.
dc.relation.references: I. Qureshi, J. Ma, and Q. Abbas, “Diabetic retinopathy detection and stage classification in eye fundus images using active deep learning,” Multimedia Tools and Applications, vol. 80, 03 2021.
dc.relation.references: J. Arevalo, T. Solorio, M. Montes-y-Gómez, and F. A. González, “Gated multimodal units for information fusion,” 2 2017. [Online]. Available: http://arxiv.org/abs/1702.01992
dc.relation.references: J. Ben-Abdallah, J. Caicedo, F. González, and O. Nasraoui, “Multimodal image annotation using non-negative matrix factorization,” vol. 1, 08 2010, pp. 128–135.
dc.relation.references: J. C. Caicedo, J. BenAbdallah, F. A. González, and O. Nasraoui, “Multimodal representation, indexing, automated annotation and retrieval of image collections via non-negative matrix factorization,” Neurocomputing, vol. 76, no. 1, pp. 50–60, 2012, Seventh International Symposium on Neural Networks (ISNN 2010) Advances in Web Intelligence. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231211004048
dc.relation.references: J. C. Caicedo and F. A. González, “Image retrieval using multimodal data,” 2014. [Online]. Available: http://paradigma.uniandes.edu.co/images/sampledata/PARADIGMA/ediciones/Edicion3/Numero3/Articulo7/j-caicedo-7.pdf
dc.relation.references: J. Camargo and F. González, “Multimodal latent topic analysis for image collection summarization,” Information Sciences, vol. 328, pp. 270–287, 01 2016.
dc.relation.references: J. Lara Ramírez, V. Contreras, J. Otálora Montenegro, H. Müller, and F. González, “Multimodal latent semantic alignment for automated prostate tissue classification and retrieval,” 09 2020.
dc.relation.references: J. Peeken, T. Goldberg, T. Pyka, M. Bernhofer, B. Wiestler, K. Kessel, P. Tafti, F. Nuesslin, A. Braun, C. Zimmer, B. Rost, and S. Combs, “Combining multimodal imaging and treatment features improves machine learning-based prognostic assessment in patients with glioblastoma multiforme,” Cancer Medicine, vol. 8, pp. 1–9, 12 2018.
dc.relation.references: J. Yuan, X. Ran, K. Liu, C. Yao, Y. Yao, H. Wu, and Q. Liu, “Machine learning applications on neuroimaging for diagnosis and prognosis of epilepsy: A review,” 2021.
dc.relation.references: “Keras Applications: DenseNet,” 2024. [Online]. Available: https://keras.io/api/applications/densenet/
dc.relation.references: Kolena, “Transformer vs RNN: 4 key differences and how to choose,” Kolena Guides, 2024.
dc.relation.references: L. Jing and Y. Tian, “Self-supervised visual feature learning with deep neural networks: A survey,” 2019.
dc.relation.references: L. Jin, Z. Zhao, A. Doshi, Y. Genc, C. Dan, G. S. Corrado, and Y. Liu, “MedCPT: Contrastive pre-trained transformers for medical text,” in Findings of the Association for Computational Linguistics: NAACL 2023. Association for Computational Linguistics, 2023, pp. 1118–1131.
dc.relation.references: L. Lazli, M. Boukadoum, and O. Mohamed, “A survey on computer-aided diagnosis of brain disorders through MRI based on machine learning and data mining methodologies with an emphasis on Alzheimer disease diagnosis and the contribution of the multimodal fusion,” Applied Sciences (Switzerland), vol. 10, 2020.
dc.relation.references: L. Vale-Silva and K. Rohr, “Pan-cancer prognosis prediction using multimodal deep learning,” 04 2020, pp. 568–571.
dc.relation.references: M. Elbattah, C. Loughnane, J.-L. Guerin, R. Carette, F. Cilia, and G. Dequen, “Variational autoencoder for image-based augmentation of eye-tracking data,” Journal of Imaging, vol. 7, p. 83, 05 2021.
dc.relation.references: M. J. Horry, S. Chakraborty, M. Paul, A. Ulhaq, B. Pradhan, M. Saha, and N. Shukla, “COVID-19 detection through transfer learning using multimodal imaging data,” IEEE Access, vol. 8, pp. 149808–149824, 2020.
dc.relation.references: M. Kazmierski, M. Welch, S. Kim, C. McIntosh, Princess Margaret Head and Neck Cancer Group, K. Rey-McIntyre, S. H. Huang, T. Patel, T. Tadic, M. Milosevic, F.-F. Liu, A. Hope, S. Bratman, and B. Haibe-Kains, “A machine learning challenge for prognostic modelling in head and neck cancer using multi-modal data,” 2021.
dc.relation.references: M. Li, N. Arun, M. Gidwani, K. Chang, F. Deng, B. Little, D. Mendoza, M. Lang, O. Vtc Lee, A. O’Shea, A. Parakh, P. Singh, and J. Kalpathy-Cramer, “Automated assessment and tracking of COVID-19 pulmonary disease severity on chest radiographs using convolutional Siamese neural networks,” Radiology: Artificial Intelligence, vol. 2, p. e200079, 07 2020.
dc.relation.references: “MedCPT Query Encoder (QEnc),” 2024. [Online]. Available: https://huggingface.co/ncbi/MedCPT-Query-Encoder
dc.relation.references: M. Oquab, T. Darcet, T. Moutakanni, H. V. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al., “DINOv2: Learning robust visual features without supervision,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 19867–19877.
dc.relation.references: N. Koutsouleris, D. B. Dwyer, F. Degenhardt, C. Maj, M. F. Urquijo-Castro, R. Sanfelici, D. Popovic, O. Oeztuerk, S. S. Haas, J. Weiske, A. Ruef, L. Kambeitz-Ilankovic, L. A. Antonucci, S. Neufang, C. Schmidt-Kraepelin, S. Ruhrmann, N. Penzel, J. Kambeitz, T. K. Haidl, M. Rosen, K. Chisholm, A. Riecher-Rössler, L. Egloff, A. Schmidt, C. Andreou, J. Hietala, T. Schirmer, G. Romer, P. Walger, M. Franscini, N. Traber-Walker, B. G. Schimmelmann, R. Flückiger, C. Michel, W. Rössler, O. Borisov, P. M. Krawitz, K. Heekeren, R. Buechler, C. Pantelis, P. Falkai, R. K. R. Salokangas, R. Lencer, A. Bertolino, S. Borgwardt, M. Noethen, P. Brambilla, S. J. Wood, R. Upthegrove, F. Schultze-Lutter, A. Theodoridou, E. Meisenzahl, and PRONIA Consortium, “Multimodal machine learning workflows for prediction of psychosis in patients with clinical high-risk syndromes and recent-onset depression,” JAMA Psychiatry, vol. 78, no. 2, pp. 195–209, Feb. 2021.
dc.relation.references: O. Razeghi, J. Solis-Lemus, A. Lee, R. Karim, C. Corrado, C. Roney, A. De Vecchi, and S. Niederer, “CemrgApp: An interactive medical imaging application with image processing, computer vision, and machine learning toolkits for cardiovascular research,” SoftwareX, vol. 12, p. 100570, 07 2020.
dc.relation.references: P. Kellmeyer, “Artificial intelligence in basic and clinical neuroscience: Scientific opportunities and ethical challenges,” Neuroforum, 2019.
dc.relation.references: P. Khosla, P. Teterwak, and Wang, “Supervised contrastive learning,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 18661–18673. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2020/file/d89a66c7c80a29b1bdbab0f2a1a94af8-Paper.pdf
dc.relation.references: R. Bommasani, D. A. Hudson, et al., “On the opportunities and risks of foundation models,” Stanford University, Tech. Rep., 2021.
dc.relation.references: R. Liao, J. Rubin, G. Lam, S. Berkowitz, S. Dalal, W. Wells, S. Horng, and P. Golland, “Semi-supervised learning for quantification of pulmonary edema in chest x-ray images,” 2019.
dc.relation.references: R. Liao, G. Chauhan, P. Golland, S. Berkowitz, and S. Horng, “Pulmonary edema severity grades based on MIMIC-CXR (version 1.0.1),” PhysioNet, 2021, RRID:SCR_007345. [Online]. Available: https://physionet.org/content/mimic-cxr-pe-severity/1.0.1/
dc.relation.references: S. Horng, R. Liao, X. Wang, S. Dalal, P. Golland, and S. Berkowitz, “Deep learning to quantify pulmonary edema in chest radiographs,” Radiology: Artificial Intelligence, vol. 3, p. e190228, 01 2021.
dc.relation.references: S. Liang, R. Zhang, D. Liang, T. Song, T. Ai, C. Xia, L. Xia, and Y. Wang, “Multimodal 3D DenseNet for IDH genotype prediction in gliomas,” Genes, vol. 9, 2018.
dc.relation.references: S. Maharjan, M. Montes, F. González, and T. Solorio, “A genre-aware attention model to improve the likability prediction of books,” 11 2018.
dc.relation.references: S. Sierra and F. A. González, “Combining textual and visual representations for multimodal author profiling: Notebook for PAN at CLEF 2018,” in Working Notes of CLEF 2018 - Conference and Labs of the Evaluation Forum, Avignon, France, September 10-14, 2018, ser. CEUR Workshop Proceedings, L. Cappellato, N. Ferro, J. Nie, and L. Soulier, Eds., vol. 2125. CEUR-WS.org, 2018. [Online]. Available: https://ceur-ws.org/Vol-2125/paper_219.pdf
dc.relation.references: S. Tabarestani, M. Aghili, M. Eslami, M. Cabrerizo, A. Barreto, N. Rishe, R. Curiel, D. Loewenstein, R. Duara, and M. Adjouadi, “A distributed multitask multimodal approach for the prediction of Alzheimer’s disease in a longitudinal study,” NeuroImage, vol. 206, 2020.
dc.relation.references: S. Toledo-Cortés, D. H. Useche, and F. A. González, “Prostate tissue grading with deep quantum measurement ordinal regression,” CoRR, vol. abs/2103.03188, 2021. [Online]. Available: https://arxiv.org/abs/2103.03188
dc.relation.references: W. Wang, D. Tran, and M. Feiszli, “What makes training multi-modal classification networks hard?” 2020.
dc.relation.references: Y. Khare, V. Bagal, M. Mathew, A. Devi, U. D. Priyakumar, and C. Jawahar, “MMBERT: Multimodal BERT pretraining for improved medical VQA,” 2021.
dc.relation.references: Y. Li, H. Wang, and Y. Luo, “A comparison of pre-trained vision-and-language models for multimodal representation learning across medical images and reports,” 2020.
dc.relation.references: J. Yap, W. Yolland, and P. Tschandl, “Multimodal skin lesion classification using deep learning,” Experimental Dermatology, vol. 27, pp. 1261–1267, 2018.
dc.relation.references: Z. Hu, Z. Zhang, H. Yang, Q. Chen, and D. Zuo, “A deep learning approach for predicting the quality of online health expert question-answering services,” Journal of Biomedical Informatics, vol. 71, pp. 241–253, 2017.
dc.relation.references: Z. Tang, Y. Xu, Z. Jiao, J. Lu, L. Jin, A. Aibaidula, J. Wu, Q. Wang, and H. Zhang, “Pre-operative overall survival time prediction for glioblastoma patients using deep learning on both imaging phenotype and genotype,” vol. 11764, 10 2019, pp. 415–422.
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.rights.license: Atribución-NoComercial 4.0 Internacional
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject.ddc: 000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación (spa)
dc.subject.ddc: 610 - Medicina y salud::616 - Enfermedades (spa)
dc.subject.decs: Edema Pulmonar (spa)
dc.subject.decs: Pulmonary Edema (eng)
dc.subject.decs: Redes Neuronales Convolucionales (spa)
dc.subject.decs: Convolutional Neural Networks (eng)
dc.subject.decs: Algoritmos (spa)
dc.subject.decs: Algorithms (eng)
dc.subject.proposal: Edema pulmonar (spa)
dc.subject.proposal: Aprendizaje multimodal (spa)
dc.subject.proposal: Fusión de características (spa)
dc.subject.proposal: Modelos fundacionales (spa)
dc.subject.proposal: Kernel Density matrices (spa)
dc.subject.proposal: Enriquecimiento semántico (spa)
dc.subject.proposal: Diagnóstico asistido por computador (spa)
dc.subject.proposal: Pulmonary edema (eng)
dc.subject.proposal: Multimodal learning (eng)
dc.subject.proposal: Feature fusion (eng)
dc.subject.proposal: Foundation models (eng)
dc.subject.proposal: Kernel Density matrices (eng)
dc.subject.proposal: Semantic enrichment (eng)
dc.subject.proposal: Computer-aided diagnosis (eng)
dc.title: Diagnóstico de severidad de edema pulmonar asistido por multimodal learning (spa)
dc.title.translated: Diagnosis of pulmonary edema severity assisted by multimodal learning (eng)
dc.type: Trabajo de grado - Maestría (spa)
dc.type.coar: http://purl.org/coar/resource_type/c_bdcc
dc.type.coarversion: http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.content: Text
dc.type.driver: info:eu-repo/semantics/masterThesis
dc.type.redcol: http://purl.org/redcol/resource_type/TM
dc.type.version: info:eu-repo/semantics/acceptedVersion
dcterms.audience.professionaldevelopment: Investigadores (spa)
oaire.accessrights: http://purl.org/coar/access_right/c_abf2

Files

Original bundle

Name: DocumentoFinalTesis1032436240.pdf
Size: 1.62 MB
Format: Adobe Portable Document Format
Description: Master's thesis (Maestría en Ingeniería - Ingeniería de Sistemas y Computación)

License bundle

Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission