Multimodal explainability using class activation maps and canonical correlation analysis for EEG-based motor imagery classification enhancement

dc.contributor.advisor: Alvarez Meza, Andres Marino
dc.contributor.advisor: Castellanos Dominguez, Cesar German
dc.contributor.author: Loaiza Arias, Marcos
dc.contributor.cvlac: Loaiza Arias, Marcos [0001836881] (spa)
dc.contributor.googlescholar: Loaiza Arias, Marcos [7UcdYLQAAAAJ] (spa)
dc.contributor.orcid: Loaiza Arias, Marcos [0000000337575089] (spa)
dc.contributor.researchgroup: Grupo de Control y Procesamiento Digital de Señales (spa)
dc.date.accessioned: 2025-07-15T20:18:36Z
dc.date.available: 2025-07-15T20:18:36Z
dc.date.issued: 2025-01-10
dc.description: gráficas, tablas (spa)
dc.description.abstract: Brain-Computer Interfaces (BCIs) are essential to advancing medical diagnosis and treatment by providing non-invasive tools to assess neurological states. Among these, Motor Imagery (MI), in which patients mentally simulate motor tasks without physical movement, has proven to be an effective paradigm for diagnosing and monitoring neurological conditions. Electroencephalography (EEG) is widely used for MI data collection due to its high temporal resolution, cost-effectiveness, and portability. However, EEG signals are susceptible to noise from several sources, including physiological artifacts and electromagnetic interference, and they vary from person to person, which complicates feature extraction and signal interpretation. This variability, influenced by genetic and cognitive factors, also presents challenges for developing subject-independent solutions. To address these limitations, this work presents a Multimodal and Explainable Deep Learning (MEDL) approach for MI-EEG classification and physiological interpretability. Our approach involves: i) evaluating different deep learning (DL) models for subject-dependent MI-EEG discrimination; ii) employing Class Activation Mapping (CAM) to visualize relevant MI-EEG features; and iii) applying a Questionnaire-MI Performance Canonical Correlation Analysis (QMIP-CCA) to provide multidomain interpretability. On the GIGAScience MI dataset, experiments show that shallow neural networks classify MI-EEG data effectively, while the CAM-based method uncovers spatio-frequency patterns. Moreover, the QMIP-CCA framework successfully correlates physiological data with MI-EEG performance, offering an enhanced, interpretable solution for BCIs (Texto tomado de la fuente). (eng)
dc.description.abstract: Las interfaces cerebro-computador (BCI) son esenciales para avanzar en el diagnóstico y el tratamiento médicos al proporcionar herramientas no invasivas para evaluar estados neurológicos. Entre ellas, la Imaginación Motora (IM), en la que los pacientes simulan mentalmente tareas motoras sin movimiento físico, ha demostrado ser un paradigma eficaz para diagnosticar y monitorizar afecciones neurológicas. La electroencefalografía (EEG) se utiliza ampliamente para la recopilación de datos de IM debido a su alta resolución temporal, rentabilidad y portabilidad. Sin embargo, las señales de EEG pueden ser ruidosas debido a varias fuentes, como los artefactos fisiológicos y las interferencias electromagnéticas. También pueden variar de una persona a otra, lo que dificulta la extracción de características y la comprensión de las señales. Además, esta variabilidad, influida por factores genéticos y cognitivos, plantea retos para el desarrollo de soluciones independientes del sujeto. Para abordar estas limitaciones, este trabajo presenta un enfoque de Aprendizaje Profundo Multimodal y Explicable (MEDL) para la clasificación MI-EEG y la interpretabilidad fisiológica. Nuestro enfoque implica: i) evaluar diferentes modelos de aprendizaje profundo (DL) para la discriminación MI-EEG dependiente del sujeto; ii) emplear el mapeo de activación de clase (CAM) para visualizar características MI-EEG relevantes; y iii) utilizar un análisis de correlación canónica de rendimiento de cuestionario-MI (QMIP-CCA) para proporcionar interpretabilidad multidominio. En el conjunto de datos MI de GIGAScience, los experimentos muestran que las redes neuronales poco profundas son buenas para clasificar los datos MI-EEG, mientras que el método basado en CAM encuentra patrones de espacio-frecuencia. Además, el marco QMIP-CCA correlaciona con éxito los datos fisiológicos con el rendimiento MI-EEG, ofreciendo una solución mejorada e interpretable para BCI. (spa)
dc.description.curriculararea: Eléctrica, Electrónica, Automatización Y Telecomunicaciones. Sede Manizales (spa)
dc.description.degreelevel: Maestría (spa)
dc.description.degreename: Magíster en Ingeniería - Automatización Industrial (spa)
dc.description.sponsorship: Minciencias (spa)
dc.format.extent: xi, 82 páginas (spa)
dc.format.mimetype: application/pdf (spa)
dc.identifier.instname: Universidad Nacional de Colombia (spa)
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia (spa)
dc.identifier.repourl: https://repositorio.unal.edu.co/ (spa)
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/88342
dc.language.iso: eng (spa)
dc.publisher: Universidad Nacional de Colombia (spa)
dc.publisher.branch: Universidad Nacional de Colombia - Sede Manizales (spa)
dc.publisher.faculty: Facultad de Ingeniería y Arquitectura (spa)
dc.publisher.place: Manizales, Colombia (spa)
dc.publisher.program: Manizales - Ingeniería y Arquitectura - Maestría en Ingeniería - Automatización Industrial (spa)
dc.rights.accessrights: info:eu-repo/semantics/openAccess (spa)
dc.rights.license: Reconocimiento 4.0 Internacional (spa)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ (spa)
dc.subject.ddc: 000 - Ciencias de la computación, información y obras generales (spa)
dc.subject.proposal: Multimodal (eng)
dc.subject.proposal: Electroencephalography (eng)
dc.subject.proposal: Class Activation Maps (eng)
dc.subject.proposal: Interpretability (eng)
dc.subject.proposal: Canonical Correlation Analysis (eng)
dc.subject.proposal: Kernel Methods (eng)
dc.subject.proposal: Deep Learning (eng)
dc.subject.proposal: Convolutional Neural Networks (eng)
dc.subject.proposal: Topomaps (eng)
dc.subject.proposal: Frequency Analysis (eng)
dc.subject.proposal: Multimodal (spa)
dc.subject.proposal: Electroencefalografía (spa)
dc.subject.proposal: Mapas de Activación de Clase (spa)
dc.subject.proposal: Interpretabilidad (spa)
dc.subject.proposal: Correlación Canónica (spa)
dc.subject.proposal: Métodos Kernel (spa)
dc.subject.proposal: Aprendizaje Profundo (spa)
dc.subject.proposal: Redes Convolucionales (spa)
dc.subject.proposal: Topomapas (spa)
dc.subject.proposal: Análisis de Frecuencia (spa)
dc.subject.unesco: Interacción hombre-máquina
dc.subject.unesco: Human machine interaction
dc.subject.unesco: Neurobiology
dc.subject.unesco: Neurobiología
dc.title: Multimodal explainability using class activation maps and canonical correlation analysis for EEG-based motor imagery classification enhancement (eng)
dc.title.translated: Explicabilidad multimodal mediante mapas de activación de clases y análisis de correlación canónica para la mejora de la clasificación de imaginación motora basada en EEG (spa)
dc.type: Trabajo de grado - Maestría (spa)
dc.type.coar: http://purl.org/coar/resource_type/c_bdcc (spa)
dc.type.coarversion: http://purl.org/coar/version/c_ab4af688f83e57aa (spa)
dc.type.content: Text (spa)
dc.type.driver: info:eu-repo/semantics/masterThesis (spa)
dc.type.version: info:eu-repo/semantics/acceptedVersion (spa)
dcterms.audience.professionaldevelopment: Bibliotecarios (spa)
dcterms.audience.professionaldevelopment: Estudiantes (spa)
dcterms.audience.professionaldevelopment: Investigadores (spa)
dcterms.audience.professionaldevelopment: Público general (spa)
oaire.accessrights: http://purl.org/coar/access_right/c_abf2 (spa)
oaire.awardtitle: Sistema de monitoreo automático para la evaluación clínica de infantes con alteraciones neurológicas motoras mediante el análisis de volumetría cerebral y patrón de marcha (Código 111089784907) (spa)
oaire.fundername: Minciencias (spa)

Files

Original bundle
Name: Tesis_Marcos.pdf
Size: 6.39 MB
Format: Adobe Portable Document Format
Description: Master's thesis, Maestría en Ingeniería - Automatización Industrial

License bundle
Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission