Show simple item record

dc.rights.license: Atribución-NoComercial 4.0 Internacional
dc.contributor.advisor: Castellanos-Dominguez, German
dc.contributor.advisor: Alvarez-Meza, Andres
dc.contributor.author: Pérez Nastar, Hernán Darío
dc.date.accessioned: 2023-11-28T16:16:22Z
dc.date.available: 2023-11-28T16:16:22Z
dc.date.issued: 2023
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/85004
dc.description: gráficas, tablas
dc.description.abstract: Esta tesis de maestría presenta una metodología de aprendizaje profundo multimodal innovadora que fusiona un modelo de clasificación de emociones con un generador musical, con el propósito de crear música a partir de señales de electroencefalografía (EEG), profundizando así en la interconexión entre emociones y música. Los resultados alcanzan tres objetivos específicos. Primero, ya que el rendimiento de los sistemas de interfaz cerebro-computadora varía considerablemente entre sujetos, se introduce un enfoque basado en la transferencia de conocimiento entre sujetos para mejorar el rendimiento de individuos con dificultades en sistemas de interfaz cerebro-computadora basados en el paradigma de imaginación motora. Este enfoque combina datos de EEG etiquetados con datos estructurados, como cuestionarios psicológicos, mediante un método de "Kernel Matching CKA". Utilizamos una red neuronal profunda (Deep&Wide) para la clasificación de la imaginación motora. Los resultados destacan su potencial para mejorar las habilidades motoras en interfaces cerebro-computadora. Segundo, proponemos una técnica innovadora llamada "Labeled Correlation Alignment" (LCA) para sonificar respuestas neurales a estímulos representados en datos no estructurados, como música afectiva. Esto genera características musicales basadas en la actividad cerebral inducida por las emociones. LCA aborda la variabilidad entre sujetos y dentro de cada sujeto mediante el análisis de correlación, lo que permite la creación de envolventes acústicas y la distinción entre diferente información sonora. Esto convierte a LCA en una herramienta prometedora para interpretar la actividad neuronal y su reacción a estímulos auditivos. Finalmente, desarrollamos una metodología de aprendizaje profundo de extremo a extremo para generar contenido musical MIDI (datos simbólicos) a partir de señales de actividad cerebral inducidas por música con etiquetas afectivas. Esta metodología abarca el preprocesamiento de datos, el entrenamiento de modelos de extracción de características y un proceso de emparejamiento de características mediante Deep Centered Kernel Alignment, lo que permite la generación de música a partir de señales EEG. En conjunto, estos logros representan avances significativos en la comprensión de la relación entre emociones y música, así como en la aplicación de la inteligencia artificial a la generación musical a partir de señales cerebrales, y ofrecen nuevas perspectivas y herramientas para la creación musical y la investigación en neurociencia emocional. Para llevar a cabo nuestros experimentos utilizamos bases de datos públicas como GigaScience, Affective Music Listening y DEAP. (Texto tomado de la fuente)
dc.description.abstract: This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals and thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly across subjects, an approach based on knowledge transfer among subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in applying artificial intelligence to music generation from brain signals, and they offer new perspectives and tools for musical creation and research in emotional neuroscience. To conduct our experiments, we used the public GigaScience, Affective Music Listening, and DEAP datasets.
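Note on the alignment machinery named in the abstract: Centered Kernel Alignment (CKA) scores how similarly two feature sets arrange the same paired trials, and it is the ingredient shared by the "Kernel Matching CKA" and Deep CKA steps. The Python sketch below shows only plain linear CKA as an illustration, not the thesis's implementation; the function name, array shapes, and toy data are hypothetical.

import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between paired feature matrices.

    X: (n_trials, p) array, e.g. EEG feature embeddings.
    Y: (n_trials, q) array, e.g. questionnaire scores or music features.
    Rows of X and Y are the same trials. Returns a value in [0, 1];
    1 means both views arrange the trials identically.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature column
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2          # cross-view similarity
    return cross / (np.linalg.norm(X.T @ X, "fro")
                    * np.linalg.norm(Y.T @ Y, "fro"))    # self-similarities

# Toy check: a view built from the EEG features scores higher than pure noise.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((128, 64))
related = eeg[:, :10] + 0.5 * rng.standard_normal((128, 10))
noise = rng.standard_normal((128, 10))
print(f"CKA(eeg, related) = {linear_cka(eeg, related):.3f}")
print(f"CKA(eeg, noise)   = {linear_cka(eeg, noise):.3f}")

In the transfer-learning step described above, a high CKA between EEG embeddings and questionnaire features would indicate that the two modalities can be matched across subjects.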
dc.format.extent: viii, 81 páginas
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Universidad Nacional de Colombia
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject.ddc: 600 - Tecnología (Ciencias aplicadas)::607 - Educación, investigación, temas relacionados
dc.title: Brain Music: Sistema compositivo, gráfico y sonoro creado a partir del comportamiento frecuencial de las señales cerebrales
dc.type: Trabajo de grado - Maestría
dc.type.driver: info:eu-repo/semantics/masterThesis
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.publisher.program: Manizales - Ingeniería y Arquitectura - Maestría en Ingeniería - Automatización Industrial
dc.contributor.researchgroup: Grupo de Control y Procesamiento Digital de Señales
dc.description.degreelevel: Maestría
dc.description.degreename: Magíster en Ingeniería - Automatización Industrial
dc.description.researcharea: Investigación en Aprendizaje Profundo y señales Biológicas
dc.identifier.instname: Universidad Nacional de Colombia
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl: https://repositorio.unal.edu.co/
dc.publisher.faculty: Facultad de Ingeniería y Arquitectura
dc.publisher.place: Manizales, Colombia
dc.publisher.branch: Universidad Nacional de Colombia - Sede Manizales
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.subject.proposal: Aprendizaje profundo
dc.subject.proposal: Señales EEG
dc.subject.proposal: Clasificación de emociones
dc.subject.proposal: Generación de música
dc.subject.proposal: Interfaz cerebro-computadora (BCI)
dc.subject.proposal: Aprendizaje multimodal
dc.subject.proposal: Generación de música simbólica
dc.subject.proposal: Piano roll
dc.subject.proposal: Inteligencia artificial
dc.subject.proposal: Deep learning
dc.subject.proposal: EEG signals
dc.subject.proposal: Emotion classification
dc.subject.proposal: Music generation
dc.subject.proposal: Brain-Computer Interface (BCI)
dc.subject.proposal: Multimodal learning
dc.subject.proposal: Symbolic music generation
dc.subject.proposal: Piano roll
dc.subject.proposal: Artificial intelligence
dc.title.translated: Brain Music: Generative system for symbolic music creation from affective neural responses
dc.type.coar: http://purl.org/coar/resource_type/c_bdcc
dc.type.coarversion: http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.content: Text
oaire.accessrights: http://purl.org/coar/access_right/c_abf2
oaire.fundername: Minciencias
dcterms.audience.professionaldevelopment: Bibliotecarios
dcterms.audience.professionaldevelopment: Estudiantes
dcterms.audience.professionaldevelopment: Investigadores
dcterms.audience.professionaldevelopment: Maestros
dcterms.audience.professionaldevelopment: Público general
dc.description.curriculararea: Eléctrica, Electrónica, Automatización y Telecomunicaciones - Sede Manizales

