Development of algorithms to improve the technical efficiency of capturing, processing, and identification of EEG signals in the word imagery task

dc.contributor.advisorBacca Rodríguez, Janspa
dc.contributor.authorVillamizar Delgado, Sergio Ivánspa
dc.contributor.researchgroupGrupo de investigación en electrónica de alta frecuencia y telecomunicaciones (cmun)spa
dc.date.accessioned2020-07-23T17:38:31Zspa
dc.date.available2020-07-23T17:38:31Zspa
dc.date.issued2020-02-07spa
dc.description.abstractA Brain-Computer Interface (BCI) system is a powerful tool that decodes signals from the brain and translates them into commands that software can use to perform a specific task. Through a BCI, a person with a disability can communicate with the world using a speller or move a hook prosthesis just by thinking of a movement or a word. BCI systems comprise three main stages: data capture, signal processing and decoding, and translation of the extracted features into a pattern for a control system. Among the prostheses offered on the market, e.g., for the upper limb, the most popular is the electromyographic prosthesis, which is controlled through muscle activity. There are also neuroprostheses that capture brain activity with sensors implanted in the cortex through a surgical procedure. Finally, within a new and growing line of research, this work considers those that use electroencephalography (EEG) to capture data from people's mental tasks. This research presents an improvement in the technical efficiency of the capture, processing, and identification of silent-speech EEG signals of vowels, syllables, and words. First, for the signal acquisition stage, novel electrode locations are proposed to maximize the capture of the brain signals produced by language processing, in contrast with the 10-20 system. For the second and third stages, four novel methodologies were implemented, each with its pros and cons. However, considering the goal of scaling the method to an online application in the future, only the third algorithm excels in high discriminability, reliable label prediction, robustness to noisy data and inter-subject variability, and low computational resource consumption, which reduces processing time. For the preprocessing stage, a novel artifact-cleaning solution is proposed, based on an algorithm called "Singular Vector Decomposition Multivariate Empirical Mode Decomposition" (SVD-MEMD), which uses the singular vector decomposition to project the data into a new dimensional space and separate useful data from noise. The output of this stage is a cleaned signal matrix with the same dimensions as the input, whose significant power remains in the range of 18 Hz to 50 Hz. The algorithm performs outstandingly on noisy, non-linear, and non-stationary data. Moreover, compared with MEMD, its computational cost is lower and its processing time is shorter. The main differences among the first three proposed algorithms lie in the feature extraction stage; the fourth methodology slightly changes the conception of signal analysis by creating images from the captured data, which are then classified. The first proposed methodology uses the singular vector decomposition technique to extract discriminative features, which are then classified by an Extremely Randomized Trees (ET) classifier, achieving an overall accuracy of 0.79 ± 0.07 in a five-class classifier using the Neurophysiology database (NDB). The second algorithm combines a non-parametric modeling technique called Multivariate Adaptive Regression Splines (MARS) with the Maximum Relevance Minimum Common Redundancy (mRMR) dimensionality reduction technique to obtain the feature vectors, which are then labeled by an AdaBoost classifier, yielding an average accuracy of 0.84 ± 0.03, and 0.77 ± 0.04 for the ET, on the KARA ONE database in a five-class classifier.
The third proposal combines the Phase Locking Value (PLV) for feature extraction with Linear Discriminant Analysis (LDA) as a dimensionality reduction technique to increase discriminability, and uses the ET to classify the data. The implementation of the third proposal delivers a light, adaptive, and flexible methodology that achieves an average accuracy of 0.86 ± 0.04 in a five-class classifier with low processing time using the December Database (DDB). The fourth methodology combines spatial, frequency, and time information in a pseudo-image, which is then classified using a convolutional neural network. The best subject yields an average accuracy of 0.51 ± 0.045 in a five-class classifier using the DDB as input. Considering the outstanding results of the third proposal, it was implemented on a portable device. The PYNQ-Z2 FPGA board hosts the third algorithm, which, after several tests, delivers a prediction in only 380 ms ± 9.69 ms per loop using the DDB, which has a sampling rate of 128 Hz and fourteen electrodes. In addition, several test trials were sent to the FPGA to simulate the capture process, achieving high accuracy. These results allow us to conclude that it is possible to implement an algorithm that discriminates silent-speech EEG signals on portable hardware at high processing speeds (on the order of milliseconds in the first processing tests) without losing accuracy.spa
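To make the third proposal more concrete, the following is a minimal, illustrative Python sketch of a PLV, LDA, and Extremely Randomized Trees pipeline of the kind described above. It is not the thesis implementation: the hyperparameters, the whole-trial PLV computed from Hilbert-transform phases, and the helper names (plv_features, build_classifier) are assumptions; the array shapes (fourteen channels at 128 Hz, five classes) simply mirror the DDB description in the abstract.

```python
# Illustrative sketch only: PLV features -> LDA projection -> Extra Trees classifier.
# Shapes and hyperparameters are assumptions, not the thesis' actual settings.
import numpy as np
from scipy.signal import hilbert
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.pipeline import make_pipeline

def plv_features(trial):
    """Pairwise phase-locking values for one trial shaped (channels, samples)."""
    phases = np.angle(hilbert(trial, axis=1))        # instantaneous phase per channel
    n_ch = trial.shape[0]
    feats = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            dphi = phases[i] - phases[j]
            feats.append(np.abs(np.mean(np.exp(1j * dphi))))   # PLV in [0, 1]
    return np.asarray(feats)

def build_classifier(trials, labels):
    """trials: (n_trials, 14, n_samples) EEG at 128 Hz; labels: five imagined-speech classes."""
    X = np.vstack([plv_features(t) for t in trials])
    model = make_pipeline(
        LinearDiscriminantAnalysis(n_components=4),  # at most n_classes - 1 = 4 axes
        ExtraTreesClassifier(n_estimators=200, random_state=0),
    )
    return model.fit(X, labels)
```

Here the LDA step acts as a supervised dimensionality reduction (at most four components for five classes) before the tree ensemble, which is consistent with the light, low-dimensional pipeline the abstract attributes to the third proposal.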
dc.description.abstractUna interfaz cerebro-computadora (BCI) es una herramienta poderosa que decodifica las señales del cerebro y las traduce en códigos que el software entiende para realizar una tarea específica. A través de una BCI, una persona discapacitada puede comunicarse con el mundo usando un deletreador o mover una prótesis de gancho simplemente pensando en un movimiento o una palabra. Tres etapas principales componen los sistemas BCI: la etapa de captura de datos, el procesamiento y decodificación de las señales, y la traducción de las características en un patrón para un sistema de control. Entre las posibilidades que ofrece el mercado para una prótesis, por ejemplo, de la extremidad superior, la más popular es la electromiográfica, que utiliza los músculos para controlarla. Además, existen las neuroprótesis que capturan la actividad cerebral mediante sensores implantados en la corteza a través de un procedimiento quirúrgico. Finalmente, en una nueva línea de investigación en crecimiento, esta investigación considera las que utilizan la técnica de electroencefalografía (EEG) para capturar datos de las tareas mentales de las personas. En esta investigación se presenta una mejora de la eficiencia técnica del proceso de captura, procesamiento e identificación de las señales EEG de habla silenciosa de vocales, sílabas y palabras. Primero, para la etapa de adquisición de señal, se proponen nuevas ubicaciones para los electrodos con el fin de maximizar la captura de las señales cerebrales asociadas al proceso del lenguaje, en contraste con el sistema 10-20. Para la segunda y tercera etapas se implementaron cuatro nuevas metodologías, cada una con sus pros y sus contras. Sin embargo, considerando el propósito de escalar el método en el futuro a una aplicación en línea, solo el tercer algoritmo sobresale por su alta discriminabilidad, la confiabilidad en la predicción de etiquetas, la robustez frente a datos ruidosos y a la variabilidad entre sujetos, y el bajo consumo de recursos computacionales, lo que reduce el tiempo de procesamiento. Para la etapa de preprocesamiento se propone una solución novedosa para la limpieza de artefactos, basada en un algoritmo llamado "Descomposición en Vectores Singulares - Descomposición Empírica Multivariante en Modos" (SVD-MEMD), que utiliza la descomposición en vectores singulares para proyectar los datos en un nuevo espacio dimensional y separar los datos útiles del ruido. La salida de esta etapa es una matriz de señal limpia con las mismas dimensiones que la entrada, cuya potencia significativa permanece en el rango de 18 Hz a 50 Hz. El algoritmo exhibe rendimientos sobresalientes frente a datos ruidosos, no lineales y no estacionarios. Además, en comparación con el MEMD, los costos computacionales son bajos y el tiempo de procesamiento es rápido. Las principales diferencias entre los tres primeros algoritmos propuestos radican en la etapa de extracción de características. La cuarta metodología cambia un poco la concepción del análisis de señales creando imágenes a partir de los datos capturados que luego se clasifican. La primera metodología propuesta utiliza la técnica de descomposición en vectores singulares para extraer las características discriminativas, que luego son clasificadas por un árbol extremadamente aleatorizado (ET), logrando una precisión general de 0.79 ± 0.07 en un clasificador de cinco clases utilizando la base de datos de Neurofisiología (NDB).
El segundo algoritmo utiliza una combinación de un modelado no paramétrico llamado Splines de Regresión Adaptativa Multivariante (MARS) con la técnica de reducción dimensional de Máxima Relevancia y Mínima Redundancia Común (mRMR) para obtener los vectores de características, que luego son etiquetados por un clasificador AdaBoost, obteniendo una precisión promedio de 0.84 ± 0.03, y de 0.77 ± 0.04 para el ET, usando la base de datos KARA ONE en un clasificador de cinco clases. La tercera propuesta combina el valor de bloqueo de fase (PLV) para la extracción de características con el análisis discriminante lineal (LDA) como técnica de reducción dimensional para aumentar la discriminabilidad. El algoritmo usa el ET para clasificar los datos. La implementación de la tercera propuesta ofrece una metodología ligera, adaptativa y flexible, que logra una precisión promedio de 0.86 ± 0.04 en un clasificador de cinco clases con bajo tiempo de procesamiento utilizando la base de datos de diciembre (DDB). La cuarta metodología tiene como objetivo combinar en una pseudoimagen información espacial, de frecuencia y de tiempo, que luego se discrimina utilizando una red neuronal convolucional. El mejor sujeto produce una precisión promedio de 0.51 ± 0.045 en un clasificador de cinco clases utilizando como entrada la base de datos DDB. Teniendo en cuenta los excelentes resultados de la tercera propuesta, se decidió implementarla en un dispositivo portátil. La placa FPGA PYNQ-Z2 aloja el tercer algoritmo, el cual, después de varias pruebas, entrega una predicción en solo 380 ms ± 9.69 ms por ciclo utilizando la base de datos DDB, que tiene una frecuencia de muestreo de 128 Hz y catorce electrodos. Además, se enviaron varios ensayos de prueba al FPGA simulando el proceso de captura, logrando resultados de alta precisión. Lo anterior nos permite concluir que es posible implementar un algoritmo que discrimine las señales EEG de habla silenciosa en hardware portátil y que permita alcanzar altas velocidades de procesamiento (del orden de milisegundos en las primeras pruebas de procesamiento) sin perder precisión.spa
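As a complement, the SVD-based artifact separation mentioned in both abstracts can be sketched as follows. This is only an illustration of projecting the data onto a low-rank subspace and discarding the rest; it is not the thesis' SVD-MEMD algorithm, since the MEMD stage is omitted and the retained rank (and the helper name svd_clean) are assumptions for the example.

```python
# Illustrative SVD-based cleaning of a multichannel EEG matrix (channels x samples).
# Not the thesis' SVD-MEMD algorithm: the MEMD stage is omitted and the rank is assumed.
import numpy as np

def svd_clean(eeg, rank=4):
    """Keep the leading singular components and reconstruct a matrix of the same size."""
    U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
    s_kept = np.zeros_like(s)
    s_kept[:rank] = s[:rank]          # dominant components are treated as useful signal
    return (U * s_kept) @ Vt          # same dimensions as the input, as stated in the abstract
```

A typical use under these assumptions would be cleaned = svd_clean(trial) on a (14, n_samples) trial before feature extraction.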
dc.description.degreelevelDoctoradospa
dc.description.sponsorshipColcienciasspa
dc.format.extent154spa
dc.format.mimetypeapplication/pdfspa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/77829
dc.language.isoengspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Bogotáspa
dc.publisher.programBogotá - Ingeniería - Doctorado en Ingeniería - Ingeniería Eléctricaspa
dc.relation.references[1] Presidencia de la República de Colombia, “Dirección contra minas,” 2019. [Online]. Available: http://www.accioncontraminas.gov.co/estadisticas/Paginas/victimas-minas-antipersonal.aspx.spa
dc.relation.references[2] W. H. Organization, Neurological Disorders: Public Health Challenges. World Health Organization, 2006.spa
dc.relation.references[3] D. Farina et al., “The Extraction of Neural Information from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 4, pp. 797–809, 2014.spa
dc.relation.references[4] V. Kaiser et al., “Hybrid brain–computer interfaces and hybrid neuroprostheses for restoration of upper limb functions in individuals with high-level spinal cord injury,” Artif. Intell. Med., vol. 59, no. 2, pp. 133–142, 2013spa
dc.relation.references[5] N. Kaongoen and S. Jo, “A novel hybrid auditory BCI paradigm combining ASSR and P300,” J. Neurosci. Methods, vol. 279, pp. 44–51, Mar. 2017.spa
dc.relation.references[6] B. Zhao, X. Chen, X. Gao, S. Xu, and Y. Wang, “Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI,” Int. J. Neural Syst., vol. 28, no. 08, p. 1850018, 2018.spa
dc.relation.references[7] J. Kevric and A. Subasi, “Comparison of signal decomposition methods in classification of EEG signals for motor-imagery BCI system,” Biomed. Signal Process. Control, vol. 31, pp. 398–406, Jan. 2017.spa
dc.relation.references[8] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, “Single-trial classification of vowel speech imagery using common spatial patterns,” Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.spa
dc.relation.references[9] S. Deng, R. Srinivasan, T. Lappas, and M. D’Zmura, “EEG classification of imagined syllable rhythm using Hilbert spectrum methods,” J. Neural Eng., vol. 7, no. 4, 2010.spa
dc.relation.references[10] K. Brigham and B. V. K. V. Kumar, “Imagined speech classification with EEG signals for silent communication: A preliminary investigation into synthetic telepathy,” 2010 4th Int. Conf. Bioinforma. Biomed. Eng. iCBBE 2010, pp. 1–4, 2010.spa
dc.relation.references[11] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, “Inter comparison of classification techniques for vowel speech imagery using EEG sensors,” 2014 2nd Int. Conf. Syst. Informatics, ICSAI 2014, no. Icsai, pp. 712–717, 2015.spa
dc.relation.references[12] S. Iqbal and O. Farooq, “Classification of Imagined Vowel Sounds by EEG analysis,” 2015 2nd Int. Conf. Comput. Sustain. Glob. Dev., pp. 1591–1594, 2015.spa
dc.relation.references[13] S. D. S. Kamalakkannan R., Rajkumar R., Madan Raj. M., “Imagined Speech Classification using EEG Signals,” Adv. Biomed. Sci. Eng., vol. 1, no. 2, pp. 20–32, 2014.spa
dc.relation.references[14] C. M. Spooner, E. Viirre, and B. Chase, “From Explicit to Implicit Speech Recognition,” in Foundations of Augmented Cognition, 2013, pp. 502–511.spa
dc.relation.references[15] M. J. Aguila, H. D. V. Basilio, P. V. C. Suarez, J. P. E. Dueñas, and S. V. Prado, “Comparative Study of Linear and Nonlinear Features Used in Imagined Vowels Classification Using a Backpropagation Neural Network Classifier,” in Proceedings of the 7th International Conference on Bioscience, Biochemistry and Bioinformatics - ICBBB ’17, 2017, pp. 7–11.spa
dc.relation.references[16] S. Zhao and F. Rudzicz, “Classifying phonological categories in imagined and articulated speech,” ICASSP, IEEE Int. Conf. Acoust. Speech Signal Process. - Proc., vol. 2015-Augus, pp. 992–996, 2015.spa
dc.relation.references[17] B. Min, J. Kim, H. Park, and B. Lee, “Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram,” Biomed Res. Int., vol. 2016, pp. 1–11, 2016.spa
dc.relation.references[18] N. Yoshimura, A. Satsuma, C. S. Dasalla, T. Hanakawa, M. A. Sato, and Y. Koike, “Usability of EEG cortical currents in classification of vowel speech imagery,” 2011 Int. Conf. Virtual Rehabil. ICVR 2011, pp. 1–2, 2011.spa
dc.relation.references[19] A. Rezazadeh Sereshkeh, R. Trott, A. Bricout, and T. Chau, “EEG Classification of Covert Speech Using Regularized Neural Networks,” IEEE/ACM Trans. Audio Speech Lang. Process., vol. 25, no. 12, pp. 2292–2300, 2017.spa
dc.relation.references[20] X. Chi, J. B. Hagedorn, D. Schoonover, and M. D. Zmura, “EEG-Based Discrimination of Imagined Speech Phonemes,” Int. J. Bioelectromagn., vol. 13, no. 4, pp. 201–206, 2011.spa
dc.relation.references[21] L. C. Sarmiento Vela, Reconocimiento del habla silenciosa con señales electroencefalográficas (EEG) para interfaces cerebro-computador. Bogota: Universidad Pedagógica Nacional, 2019.spa
dc.relation.references[22] J. Kim, S. K. Lee, and B. Lee, “EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition,” J. Neural Eng., vol. 11, no. 3, 2014.spa
dc.relation.references[23] M. Matsumoto and J. Hori, “Classification of silent speech using support vector machine and relevance vector machine,” Appl. Soft Comput. J., vol. 20, pp. 95–102, 2014.spa
dc.relation.references[24] L. Wang, X. Zhang, X. Zhong, and Y. Zhang, “Analysis and classification of speech imagery EEG for BCI,” Biomed. Signal Process. Control, vol. 8, no. 6, pp. 901–908, 2013.spa
dc.relation.references[25] C. J. Stam, “Nonlinear dynamical analysis of EEG and MEG: Review of an emerging field,” Clin. Neurophysiol., vol. 116, pp. 2266–2301, 2005.spa
dc.relation.references[26] L. Wang, X. Liu, Z. Liang, Z. Yang, and X. Hu, “Analysis and classification of hybrid BCI based on motor imagery and speech imagery,” Measurement, 2019.spa
dc.relation.references[27] J. H. Friedman, “Multivariate Adaptive Regression Splines,” Ann. Statist., vol. 19, no. 1, pp. 1–67, 1991.spa
dc.relation.references[28] R. N. Harner, “Singular Value Decomposition—A general linear model for analysis of multivariate structure in the electroencephalogram,” Brain Topogr., vol. 3, no. 1, pp. 43–47, 1990.spa
dc.relation.references[29] J. Che, Y. Yang, L. Li, X. Bai, S. Zhang, and C. Deng, “Maximum relevance minimum common redundancy feature selection for nonlinear data,” Inf. Sci. (Ny)., vol. 409–410, pp. 68–86, 2017.spa
dc.relation.references[30] T. Li, S. Zhu, and M. Ogihara, “Using discriminant analysis for multi-class classification: An experimental investigation,” Knowl. Inf. Syst., vol. 10, no. 4, pp. 453–472, 2006.spa
dc.relation.references[31] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Mach. Learn., vol. 63, no. 1, pp. 3–42, 2006.spa
dc.relation.references[32] C. Gonzales and J. Gomez, “Información estadística de la discapacidad,” Bogotá, Colombia, 2004.spa
dc.relation.references[33] R. A. Miranda et al., “DARPA-funded efforts in the development of novel brain–computer interface technologies,” J. Neurosci. Methods, vol. 244, pp. 52–67, 2015.spa
dc.relation.references[34] M. Kurzynski, “Multiclassifier System with Dynamic Model of Classifier Competence Applied to the Control of Bioprosthetic Hand,” vol. 36, pp. 163–175, 2015.spa
dc.relation.references[35] S. P. Beeby, “Control strategies for a multiple degree of freedom prosthetic hand,” IET Conf. Proc., pp. 211-218(7), Jan. 2006.spa
dc.relation.references[36] H. Lee and D. J. Roberson, “A systemic view of an upper extremity prosthesis,” in System of Systems Engineering -SoSE’07. IEEE International Conference, 2007, pp. 1–6.spa
dc.relation.references[37] R. Rupp and G. R. Müller-Putz, “BCI-Controlled Grasp Neuroprosthesis in High Spinal Cord Injury,” in Converging Clinical and Engineering Research on Neurorehabilitation, 2013, pp. 1253–1258.spa
dc.relation.references[38] D. D’Croz-Baron, J. M. Ramirez, M. Baker, V. Alarcon-Aquino, and O. Carrera, “A BCI motor imagery experiment based on parametric feature extraction and Fisher Criterion,” in CONIELECOMP 2012, 22nd International Conference on Electrical Communications and Computers, 2012, pp. 257–261.spa
dc.relation.references[39] Z. Tang, C. Li, and S. Sun, “Single-trial EEG classification of motor imagery using deep convolutional neural networks,” Optik (Stuttg)., vol. 130, pp. 11–18, Feb. 2017.spa
dc.relation.references[40] N. E. Huang and S. S. P. Shen, Eds., Hilbert-Huang Transform and Its Applications. Singapore: World Scientific Publishing Co. Pte. Ltd., 2005.spa
dc.relation.references[41] T.-W. Lee, “Independent Component Analysis,” in Independent Component Analysis: Theory and Applications, Boston, MA: Springer US, 1998, pp. 27–66.spa
dc.relation.references[42] S. P. Mishra et al., “Multivariate Statistical Data Analysis- Principal Component Analysis (PCA),” IJCAI Int. Jt. Conf. Artif. Intell., no. August 2018, pp. 2936–2942, 2017.spa
dc.relation.references[43] A. A. Torres-García, C. A. Reyes-García, L. Villaseñor-Pineda, and G. García-Aguilar, “Implementing a fuzzy inference system in a multi-objective EEG channel selection model for imagined speech classification,” Expert Syst. Appl., vol. 59, pp. 1–12, 2016.spa
dc.relation.references[44] A. A. Torres-García, C. A. Reyes-García, L. Villaseñor-Pineda, and J. M. Ramírez-Cortés, “Análisis de señales electroencefalográficas para la clasificación de habla imaginada,” Rev. Mex. Ing. Biomed., vol. 34, no. 1, pp. 23–39, 2013.spa
dc.relation.references[45] E. F. González-Castañeda, A. A. Torres-García, C. A. Reyes-García, and L. Villaseñor-Pineda, “Sonification and textification: Proposing methods for classifying unspoken words from EEG signals,” Biomed. Signal Process. Control, vol. 37, pp. 82–91, 2017.spa
dc.relation.references[46] M. Salama, L. Elsherif, H. Lashin, and T. Gamal, “Recognition of Unspoken Words Using Electrode Electroencephalograhic Signals,” Sixth Int. Conf. Adv. Cogn. Technol. Appl., no. c, pp. 51–55, 2014.spa
dc.relation.references[47] K. Chekima, R. Alfred, and K. O. Chin, “Word-Based Classification of Imagined Speech Using EEG,” Lect. Notes Electr. Eng., vol. 488, pp. 172–185, 2018.spa
dc.relation.references[48] C. H. Nguyen, G. K. Karavas, and P. Artemiadis, “Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features,” J. Neural Eng., vol. 15, no. 1, 2018.spa
dc.relation.references[49] M. D’Zmura, S. Deng, T. Lappas, S. Thorpe, and R. Srinivasan, “Toward eeg sensing of imagined speech,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 5610 LNCS, no. PART 1, pp. 40–48, 2009.spa
dc.relation.references[50] L. C. Sarmiento, P. Lorenzana, C. J. Cortés, W. J. Arcos, J. A. Bacca, and A. Tovar, “Brain computer interface (BCI) with EEG signals for automatic vowel recognition based on articulation mode,” ISSNIP Biosignals Biorobotics Conf. BRC, 2014.spa
dc.relation.references[51] A. Jahangiri and F. Sepulveda, “The contribution of different frequency bands in class separability of covert speech tasks for BCIs,” Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBS, pp. 2093–2096, 2017.spa
dc.relation.references[52] “American Electroencephalographic Society Guidelines for Standard Electrode Position Nomenclature,” J. Clin. Neurophysiol., vol. 8, no. 2, 1991.spa
dc.relation.references[53] S. J. Luck, An Introduction to the Event-Related Potential Technique. MIT Press, 2014.spa
dc.relation.references[54] B. Stemmer and H. A. Whitaker, Handbook of the Neuroscience of Language. Elsevier Science, 2008.spa
dc.relation.references[55] A. R. Braun, A. Guillemin, L. Hosey, and M. Varga, “The neural organization of discourse: an H2(15)O-PET study of narrative production in English and American Sign Language,” 2001.spa
dc.relation.references[56] B. Horwitz and A. R. Braun, “Brain network interactions in auditory, visual and linguistic processing,” Brain Lang., vol. 89, no. 2, pp. 377–384, 2004.spa
dc.relation.references[57] J. S. García-Salinas, L. Villaseñor-Pineda, C. A. Reyes-García, and A. A. Torres-García, “Transfer learning in imagined speech EEG-based BCIs,” Biomed. Signal Process. Control, vol. 50, pp. 151–157, 2019.spa
dc.relation.references[58] L. Wang, X. Zhang, and Y. Zhang, “Extending motor imagery by speech imagery for brain-computer interface,” Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBS, pp. 7056–7059, 2013.spa
dc.relation.references[59] A. Porbadnigk, M. Wester, J.-P. Calliess, and T. Schultz, “EEG-based Speech Recognition - Impact of Temporal Effects.,” BIOSIGNALS 2009 - Proc. Int. Conf. Bio-inspired Syst. Signal Process. Porto, Port. January 14-17, 2009, pp. 376–381, 2009.spa
dc.relation.references[60] J. Gu and R. Ward, “Novel feature generation and classification for a 2-state Self-paced Brain Computer Interface system,” in 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2012, pp. 1–4.spa
dc.relation.references[61] Z. Zhou, B. Wan, D. Ming, and H. Qi, “A novel technique for phase synchrony measurement from the complex motor imaginary potential of combined body and limb action,” J. Neural Eng., vol. 7, p. 46008, Aug. 2010.spa
dc.relation.references[62] Real Academia Española, 1999.spa
dc.relation.references[63] H. Schnelle, Language in the Brain. Cambridge: Cambridge University Press, 2010.spa
dc.relation.references[64] A. A. Sleeper, Speech and language. New York: Chelsea House, 2006.spa
dc.relation.references[65] G. Hickok and D. Poeppel, “The cortical organization of speech processing,” Nat. Rev. Neurosci., vol. 8, p. 393, Apr. 2007.spa
dc.relation.references[66] N. Geschwind, “Disconnexion syndromes in animals and man,” Brain, vol. 88, pp. 237–294, 1965.spa
dc.relation.references[67] D. of C. S. University of Toronto, “Computational Linguistics,” 2012. [Online]. Available: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.spa
dc.relation.references[68] A. Hemakom, V. Goverdovsky, D. Looney, and D. P. Mandic, “Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications,” Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., vol. 374, no. 2065, 2016.spa
dc.relation.references[69] J. Zhu, H. Zou, S. Rosset, and T. Hastie, “Multi-class AdaBoost,” Stat. Interface, vol. 2, pp. 349–360, 2009.spa
dc.relation.references[70] Saeid Sanei and J.A. Chambers, EEG SIGNAL PROCESSING. Cardiff University, UK: John Wiley & Sons Ltd, 2007.spa
dc.relation.references[71] A. Gogolou, T. Tsandilas, T. Palpanas, and A. Bezerianos, “Comparing Similarity Perception in Time Series Visualizations,” IEEE Trans. Vis. Comput. Graph., vol. PP, no. c, p. 1, 2018.spa
dc.relation.references[72] H. Ding, G. Trajcevski, P. Scheuermann, X. Wang, and E. Keogh, “Querying and Mining of Time Series Data: Experimental Comparison of Representations and Distance Measures,” Proc. VLDB Endow., vol. 1, pp. 1542–1552, 2008.spa
dc.relation.references[73] G. E. A. P. A. Batista, E. J. Keogh, O. M. Tataw, and V. M. A. de Souza, “CID: an efficient complexity-invariant distance for time series,” Data Min. Knowl. Discov., 2013.spa
dc.relation.references[74] N. Rehman and D. P. Mandic, “Multivariate empirical mode decomposition,” Proc. R. Soc. A, vol. 466, pp. 1291–1302, 2010.spa
dc.relation.references[75] G. H. Golub and C. Reinsch, “Singular value decomposition and least squares solutions,” Numer. Math., vol. 14, no. 5, pp. 403–420, 1970.spa
dc.relation.references[76] J. Demmel and W. Kahan, “Accurate Singular Values of Bidiagonal Matrices,” SIAM J. Sci. Stat. Comput., vol. 11, no. 5, pp. 873–912, 1990.spa
dc.relation.references[77] S. Chandrasekaran and M. Gu, “A divide-and-conquer algorithm for the eigendecomposition of symmetric block-diagonal plus semiseparable matrices,” Numer. Math., vol. 96, no. 4, pp. 723–731, 2004.spa
dc.relation.references[78] B. Großer and B. Lang, “An O(n2) algorithm for the bidiagonal SVD,” Linear Algebra Appl., vol. 358, no. 1, pp. 45–70, 2003.spa
dc.relation.references[79] L. Hogben, Handbook of Linear Algebra. CRC Press, 2006.spa
dc.relation.references[80] J. R. Quinlan, “Induction of decision trees,” Mach. Learn., vol. 1, pp. 81–106, 1986.spa
dc.relation.references[81] C.-W. Hsu and C.-J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Trans. Neural Networks, vol. 13, no. 2, pp. 415–425, 2002.spa
dc.relation.references[82] D.-G. Chen, H.-Y. Wang, and E. C. C. Tsang, “Generalized Mercer theorem and its application to feature space related to indefinite kernels,” in 2008 International Conference on Machine Learning and Cybernetics, 2008, vol. 2, pp. 774–777.spa
dc.relation.references[83] G. H. Golub, M. Heath, and G. Wahba, “Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter,” Technometrics, vol. 21, no. 2, pp. 215–223, May 1979.spa
dc.relation.references[84] H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226–1238, 2005.spa
dc.relation.references[85] T. Klikauer, “Scikit-learn: Machine Learning in Python,” TripleC, vol. 14, no. 1, pp. 260–264, 2016.spa
dc.relation.references[86] M. Rosenblum, A. Pikovsky, C. Schäfer, P. A. Tass, and J. Kurths, “Phase Synchronization: From Theory to Data Analysis,” Handbook of Biological Physics, vol. 4, pp. 279–321, 1999.spa
dc.relation.references[87] S. Aydore, D. Pantazis, and R. M. Leahy, “A note on the phase locking value and its properties,” Neuroimage, vol. 74, pp. 231–244, 2013.spa
dc.relation.references[88] I. Guyon and A. Elisseeff, “An Introduction to Variable and Feature Selection,” J. Mach. Learn. Res., vol. 3, pp. 1157–1182, Mar. 2003.spa
dc.relation.references[89] M. T. McCann, D. E. Thompson, Z. H. Syed, and J. E. Huggins, “Electrode subset selection methods for an EEG-based P300 brain-computer interface,” Disabil. Rehabil. Assist. Technol., vol. 10, no. 3, pp. 216–220, May 2015.spa
dc.relation.references[90] Siuly, Y. Li, and P. (Paul) Wen, “Modified CC-LR algorithm with three diverse feature sets for motor imagery tasks classification in EEG based brain–computer interface,” Comput. Methods Programs Biomed., vol. 113, no. 3, pp. 767–780, 2014.spa
dc.relation.references[91] A. Atyabi, M. H. Luerssen, and D. M. W. Powers, “PSO-based dimension reduction of EEG recordings: Implications for subject transfer in BCI,” Neurocomputing, vol. 119, pp. 319–331, 2013.spa
dc.relation.references[92] I. Rejer and K. Lorenz, “Genetic algorithm and forward method for feature selection in EEG feature space,” vol. 7, no. 2, pp. 72–82, 2013.spa
dc.relation.references[93] A. V Oppenheim, A. S. Willsky, and S. H. Nawab, Signals & Systems. Prentice-Hall International, 1997.spa
dc.relation.references[94] J. Allen, “Short term spectral analysis, synthesis, and modification by discrete Fourier transform,” IEEE Trans. Acoust., vol. 25, no. 3, pp. 235–238, 1977.spa
dc.relation.references[95] E. D. Übeyli, “Analysis of EEG signals by implementing eigenvector methods/recurrent neural networks,” Digit. Signal Process., vol. 19, no. 1, pp. 134–143, 2009.spa
dc.relation.references[96] L. Lab, “Convolutional Neural Networks (LeNet) - DeepLearning 0.1 documentation.” [Online]. Available: http://deeplearning.net/tutorial/lenet.html. [Accessed: 12-Nov-2019].spa
dc.relation.references[97] J. Johnson, “CS231n: Convolutional Neural Networks for Visual Recognition,” 2019. [Online]. Available: http://cs231n.github.io/.spa
dc.relation.references[98] S. Millborrow, “earth: Multivariate Adaptive Regression Spline Models.” CRAN, 2012.spa
dc.relation.references[99] Ottobock, “Myo terminal device Digital Twin system electric hand,” Austria, 2018.spa
dc.relation.references[100] T. Klikauer, “Scikit-learn: Machine Learning in Python,” TripleC, vol. 14, no. 1, pp. 260–264, 2016.spa
dc.relation.references[101] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Sep. 2014.spa
dc.rightsDerechos reservados - Universidad Nacional de Colombiaspa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-SinDerivadas 4.0 Internacionalspa
dc.rights.spaAcceso abiertospa
dc.rights.urihttp://creativecommons.org/licenses/by-nd/4.0/spa
dc.subject.ddc620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingenieríaspa
dc.subject.proposalBCIspa
dc.subject.proposalBCIeng
dc.subject.proposalEEGspa
dc.subject.proposalEEGeng
dc.subject.proposalSilent speecheng
dc.subject.proposalhabla silenciosaspa
dc.subject.proposalreconocimiento de patronesspa
dc.subject.proposalpattern recognitioneng
dc.subject.proposaltratamiento de señalesspa
dc.subject.proposalsignal enhancementeng
dc.titleDevelopment of algorithms to improve the technical efficiency of capturing, processing, and identification of EEG signals in the word imagery taskspa
dc.title.alternativeDesarrollo de una Interfaz Cerebro Computador con señales electroencefalográficas (EEG) que utilice el pensamiento del lenguaje para el control de una prótesis de miembro superior con aplicación a personas discapacitadas con amputaciones debidas al conflicto armado colombianospa
dc.typeTrabajo de grado - Doctoradospa
dc.type.coarhttp://purl.org/coar/resource_type/c_db06spa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/doctoralThesisspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Files

Original bundle
Name: Phd_thesis_Sergio_Villamizar_repositorio.pdf
Size: 9.57 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 3.9 KB
Format: Item-specific license agreed upon to submission