Brain Music : Sistema compositivo, gráfico y sonoro creado a partir del comportamiento frecuencial de las señales cerebrales
dc.rights.license | Atribución-NoComercial 4.0 Internacional |
dc.contributor.advisor | Castellanos-Dominguez, German |
dc.contributor.advisor | Alvarez-Meza, Andres |
dc.contributor.author | Pérez Nastar, Hernán Darío |
dc.date.accessioned | 2023-11-28T16:16:22Z |
dc.date.available | 2023-11-28T16:16:22Z |
dc.date.issued | 2023 |
dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/85004 |
dc.description | gráficas, tablas |
dc.description.abstract | Esta tesis de maestría presenta una metodología de aprendizaje profundo multimodal innovadora que fusiona un modelo de clasificación de emociones con un generador musical, con el propósito de crear música a partir de señales de electroencefalografía, profundizando así en la interconexión entre emociones y música. Los resultados alcanzan tres objetivos específicos: Primero, ya que el rendimiento de los sistemas de interfaz cerebro-computadora varía considerablemente entre diferentes sujetos, se introduce un enfoque basado en la transferencia de conocimiento entre sujetos para mejorar el rendimiento de individuos con dificultades en sistemas de interfaz cerebro-computadora basados en el paradigma de imaginación motora. Este enfoque combina datos de EEG etiquetados con datos estructurados, como cuestionarios psicológicos, mediante un método de "Kernel Matching CKA". Utilizamos una red neuronal profunda (Deep&Wide) para la clasificación de la imaginación motora. Los resultados destacan su potencial para mejorar las habilidades motoras en interfaces cerebro-computadora. Segundo, proponemos una técnica innovadora llamada "Labeled Correlation Alignment" (LCA) para sonificar respuestas neurales a estímulos representados en datos no estructurados, como música afectiva. Esto genera características musicales basadas en la actividad cerebral inducida por las emociones. LCA aborda la variabilidad entre sujetos y dentro de cada sujeto mediante el análisis de correlación, lo que permite la creación de envolventes acústicas y la distinción entre diferente información sonora. Esto convierte a LCA en una herramienta prometedora para interpretar la actividad neuronal y su reacción a estímulos auditivos. Finalmente, en otro capítulo, desarrollamos una metodología de aprendizaje profundo de extremo a extremo para generar contenido musical MIDI (datos simbólicos) a partir de señales de actividad cerebral inducidas por música con etiquetas afectivas. 
Esta metodología abarca el preprocesamiento de datos, el entrenamiento de modelos de extracción de características y un proceso de emparejamiento de características mediante Deep Centered Kernel Alignment, lo que permite la generación de música a partir de señales EEG. En conjunto, estos logros representan avances significativos en la comprensión de la relación entre emociones y música, así como en la aplicación de la inteligencia artificial en la generación musical a partir de señales cerebrales. Ofrecen nuevas perspectivas y herramientas para la creación musical y la investigación en neurociencia emocional. Para llevar a cabo nuestros experimentos, utilizamos bases de datos públicas como GigaScience, Affective Music Listening y el conjunto de datos DEAP (Texto tomado de la fuente) |
dc.description.abstract | This master’s thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives: First, since the performance of brain-computer interface systems varies significantly among different subjects, an approach based on knowledge transfer among subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses variability among subjects and within subjects through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, in another chapter, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. 
Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence in musical generation from brain signals. They offer new perspectives and tools for musical creation and research in emotional neuroscience. To conduct our experiments, we utilized public databases such as GigaScience, Affective Music Listening, and the DEAP dataset |
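The Centered Kernel Alignment (CKA) matching named in the abstract can be illustrated with a minimal sketch of the standard linear CKA similarity between two feature sets, for example EEG features and questionnaire or musical features per trial. This is an assumption for illustration only: the thesis's "Kernel Matching CKA" and Deep Centered Kernel Alignment build on this measure, and the function and variable names here are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1), e.g. EEG features per trial.
    Y: (n_samples, d2), e.g. questionnaire or musical features per trial.
    Returns a similarity in [0, 1]; 1 means the two representations
    are identical up to rotation and isotropic scaling.
    """
    # Center every feature column (the "Centered" in CKA).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

In a matching setup of the kind the abstract describes, a score like this (or its deep, kernelized variant) serves as the alignment objective between the two modalities; its invariance to rotation and scaling is the property that makes it usable across subjects.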
dc.format.extent | viii, 81 páginas |
dc.format.mimetype | application/pdf |
dc.language.iso | eng |
dc.publisher | Universidad Nacional de Colombia |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ |
dc.subject.ddc | 600 - Tecnología (Ciencias aplicadas)::607 - Educación, investigación, temas relacionados |
dc.title | Brain Music : Sistema compositivo, gráfico y sonoro creado a partir del comportamiento frecuencial de las señales cerebrales |
dc.type | Trabajo de grado - Maestría |
dc.type.driver | info:eu-repo/semantics/masterThesis |
dc.type.version | info:eu-repo/semantics/acceptedVersion |
dc.publisher.program | Manizales - Ingeniería y Arquitectura - Maestría en Ingeniería - Automatización Industrial |
dc.contributor.researchgroup | Grupo de Control y Procesamiento Digital de Señales |
dc.description.degreelevel | Maestría |
dc.description.degreename | Magíster en Ingeniería - Automatización Industrial |
dc.description.researcharea | Investigación en Aprendizaje Profundo y Señales Biológicas |
dc.identifier.instname | Universidad Nacional de Colombia |
dc.identifier.reponame | Repositorio Institucional Universidad Nacional de Colombia |
dc.identifier.repourl | https://repositorio.unal.edu.co/ |
dc.publisher.faculty | Facultad de Ingeniería y Arquitectura |
dc.publisher.place | Manizales, Colombia |
dc.publisher.branch | Universidad Nacional de Colombia - Sede Manizales |
dc.relation.references | ADOLPHS, Ralph ; ANDERSON, David: The neuroscience of emotion: A new synthesis. Princeton University Press, 2018 |
dc.relation.references | AGUIRRE-ARANGO, Juan C. ; ÁLVAREZ-MEZA, Andrés M. ; CASTELLANOS-DOMINGUEZ, German: Feet Segmentation for Regional Analgesia Monitoring Using Convolutional RFF and Layer-Wise Weighted CAM Interpretability. En: Computation 11 (2023), Nr. 6, p. 113 |
dc.relation.references | ALARCAO, Soraia M. ; FONSECA, Manuel J.: Emotions recognition using EEG signals: A survey. En: IEEE Transactions on Affective Computing 10 (2017), Nr. 3, p. 374–393 |
dc.relation.references | ALVAREZ-MEZA, A ; CARDENAS-PENA, D ; CASTELLANOS-DOMINGUEZ, G: Unsupervised kernel function building using maximization of information potential variability. En: Iberoamerican Congress on Pattern Recognition Springer, 2014, p. 335–342 |
dc.relation.references | ALVAREZ-MEZA, A. M. ; OROZCO-GUTIERREZ, A. ; CASTELLANOS-DOMINGUEZ, G.: Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns. En: Frontiers in Neuroscience 11 (2017), p. 550. – ISSN 1662–453X |
dc.relation.references | ÁLVAREZ-MEZA, Andrés M. ; TORRES-CARDONA, Héctor F. ; OROZCO-ALZATE, Mauricio ; PÉREZ-NASTAR, Hernán D. ; CASTELLANOS-DOMINGUEZ, German: Affective Neural Responses Sonified through Labeled Correlation Alignment. En: Sensors 23 (2023), Nr. 12, p. 5574 |
dc.relation.references | ANDREW, G ; ARORA, R ; BILMES, J ; LIVESCU, K: Deep canonical correlation analysis. En: International conference on machine learning PMLR, 2013, p. 1247–1255 |
dc.relation.references | ANOWAR, F. ; SADAOUI, S. ; SELIM, B.: Conceptual and empirical comparison of dimensionality reduction algorithms (PCA, KPCA, LDA, MDS, SVD, LLE, ISOMAP, LE, ICA, t-SNE). En: Computer Science Review 40 (2021), p. 100378. – ISSN 1574–0137 |
dc.relation.references | ANTOL, Stanislaw ; AGRAWAL, Aishwarya ; LU, Jiasen ; MITCHELL, Margaret ; BATRA, Dhruv ; ZITNICK, C L. ; PARIKH, Devi: Vqa: Visual question answering. En: Proceedings of the IEEE international conference on computer vision, 2015, p. 2425–2433 |
dc.relation.references | APPRIOU, Aurelien ; CICHOCKI, Andrzej ; LOTTE, Fabien: Modern machine-learning algorithms: for classifying cognitive and affective states from electroencephalography signals. En: IEEE Systems, Man, and Cybernetics Magazine 6 (2020), Nr. 3, p. 29–38 |
dc.relation.references | ASGHAR, Muhammad A. ; KHAN, Muhammad J. ; FAWAD ; AMIN, Yasar ; RIZWAN, Muhammad ; RAHMAN, MuhibUr ; BADNAVA, Salman ; MIRJAVADI, Seyed S.: EEG-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach. En: Sensors 19 (2019), Nr. 23, p. 5218 |
dc.relation.references | DE AZEVEDO SANTOS, L R. ; SILLA JR, Carlos N. ; COSTA-ABREU, MD: A methodology for procedural piano music composition with mood templates using genetic algorithms. (2021) |
dc.relation.references | BAGHERZADEH, S ; MAGHOOLI, K ; SHALBAF, A ; MAGHSOUDI, A: Recognition of emotional states using frequency effective connectivity maps through transfer learning approach from electroencephalogram signals. En: Biomedical Signal Processing and Control 75 (2022), p. 103544 |
dc.relation.references | BAHMANI, M ; BABAK, M ; LAND, W ; HOWARD, J ; DIEKFUSS, J ; ABDOLLAHIPOUR, R: Children’s motor imagery modality dominance modulates the role of attentional focus in motor skill learning. En: Human movement science 75 (2020), p. 102742 |
dc.relation.references | BASSO, J ; SATYAL, M ; RUGH, R: Dance on the Brain: Enhancing Intra- and Inter-Brain Synchrony. En: Frontiers in Human Neuroscience 14 (2021), p. 586 |
dc.relation.references | BHATTACHARJEE, M ; MAHADEVA, P ; GUHA, P: Time-Frequency Audio Features for Speech-Music Classification. En: ArXiv (2018), p. 1–5 |
dc.relation.references | BITTNER, Rachel M. ; BOSCH, Juan J. ; RUBINSTEIN, David ; MESEGUER-BROCAL, Gabriel ; EWERT, Sebastian: A lightweight instrument-agnostic model for polyphonic note transcription and multipitch estimation. En: ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) IEEE, 2022, p. 781–785 |
dc.relation.references | BODNER, Mark ; MUFTULER, L T. ; NALCIOGLU, Orhan ; SHAW, Gordon L.: FMRI study relevant to the Mozart effect: brain areas involved in spatial–temporal reasoning. En: Neurological research 23 (2001), Nr. 7, p. 683–690 |
dc.relation.references | BRIOT, J ; PACHET, F: Music Generation by Deep Learning - Challenges and Directions. En: ArXiv 1712.04371 (2017), p. 1–17 |
dc.relation.references | BRIOT, Jean-Pierre ; HADJERES, Gaëtan ; PACHET, François-David: Deep learning techniques for music generation–a survey. En: arXiv preprint arXiv:1709.01620 (2017) |
dc.relation.references | BRIOT, Jean-Pierre ; HADJERES, Gaëtan ; PACHET, François-David: Deep learning techniques for music generation. Vol. 1. Springer, 2020 |
dc.relation.references | CARDONA, L ; VARGAS-CARDONA, H ; NAVARRO, P ; CARDENAS-PEÑA, D ; OROZCO-GUTIÉRREZ, A: Classification of Categorical Data Based on the Chi-Square Dissimilarity and t-SNE. En: Computation 8 (2020), Nr. 4. – ISSN 2079–3197 |
dc.relation.references | CHEN, Yu-An ; WANG, Ju-Chiang ; YANG, Yi-Hsuan ; CHEN, Homer: Linear regression-based adaptation of music emotion recognition models for personalization. En: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) IEEE, 2014, p. 2149–2153 |
dc.relation.references | DE CHEVEIGNÉ, A ; WONG, D ; DI LIBERTO, G ; HJORTKJAER, J ; SLANEY, M ; LALOR, E: Decoding the auditory brain with canonical component analysis. En: NeuroImage 172 (2018), p. 206–216 |
dc.relation.references | CHO, H ; AHN, M ; AHN, S ; KWON, M ; JUN, S: EEG datasets for motor imagery brain-computer interface. En: GigaScience 6 (2017), 05, Nr. 7 |
dc.relation.references | CICCARELLI, G ; NOLAN, M ; PERRICONE, J ; CALAMIA, P ; HARO, S ; O’SULLIVAN, J ; MESGARANI, N ; QUATIERI, T ; SMALT, C: Comparison of two-talker attention decoding from EEG with nonlinear neural networks and linear methods. En: Scientific reports 9 (2019), Nr. 1, p. 1–10 |
dc.relation.references | COLE, Ross: The problem with AI music: song and cyborg creativity in the digital age. En: Popular Music 39 (2020), Nr. 2, p. 332–338 |
dc.relation.references | COLLAZOS-HUERTAS, D ; ALVAREZ-MEZA, A ; CASTELLANOS-DOMINGUEZ, G: Image-Based Learning Using Gradient Class Activation Maps for Enhanced Physiological Interpretability of Motor Imagery Skills. En: Applied Sciences 12 (2022), Nr. 3, p. 1695 |
dc.relation.references | COLLAZOS-HUERTAS, D. ; CAICEDO-ACOSTA, J. ; CASTAÑO DUQUE, G. A. ; ACOSTA-MEDINA, C. D.: Enhanced Multiple Instance Representation Using Time-Frequency Atoms in Motor Imagery Classification. En: Frontiers in Neuroscience 14 (2020), p. 155 |
dc.relation.references | COLLAZOS-HUERTAS, DF ; ÁLVAREZ-MEZA, AM ; ACOSTA-MEDINA, CD ; CASTAÑO-DUQUE, GA ; CASTELLANOS-DOMINGUEZ, G: CNN-based framework using spatial dropping for enhanced interpretation of neural activity in motor imagery classification. En: Brain Informatics 7 (2020), Nr. 1, p. 1–13 |
dc.relation.references | COLLAZOS-HUERTAS, D.F. ; ÁLVAREZ-MEZA, A.M. ; CASTELLANOS-DOMINGUEZ, G.: Spatial interpretability of time-frequency relevance optimized in motor imagery discrimination using Deep&Wide networks. En: Biomedical Signal Processing and Control 68 (2021), p. 102626. – ISSN 1746–8094 |
dc.relation.references | COLLAZOS-HUERTAS, Diego F. ; VELASQUEZ-MARTINEZ, Luisa F. ; PEREZ-NASTAR, Hernan D. ; ALVAREZ-MEZA, Andres M. ; CASTELLANOS-DOMINGUEZ, German: Deep and wide transfer learning with kernel matching for pooling data from electroencephalography and psychological questionnaires. En: Sensors 21 (2021), Nr. 15, p. 5105 |
dc.relation.references | COLLET, C. ; HAJJ, M. E. ; CHAKER, Rawad ; BUI-XUAN, B. ; LEHOT, J. ; HOYEK, N.: Effect of motor imagery and actual practice on learning professional medical skills. En: BMC Medical Education 21 (2021) |
dc.relation.references | CUI, Xu ; WU, Yongrong ; WU, Jipeng ; YOU, Zhiyu ; XIAHOU, Jianbing ; OUYANG, Menglin: A review: Music-emotion recognition and analysis based on EEG signals. En: Frontiers in Neuroinformatics 16 (2022), p. 997282 |
dc.relation.references | DADEBAYEV, Didar ; GOH, Wei W. ; TAN, Ee X.: EEG-based emotion recognition: Review of commercial EEG devices and machine learning techniques. En: Journal of King Saud University-Computer and Information Sciences 34 (2022), Nr. 7, p. 4385–4401 |
dc.relation.references | DAI, C ; WANG, Z ; WEI, L ; CHEN, G ; CHEN, B ; ZUO, F ; LI, Y: Combining early post-resuscitation EEG and HRV features improves the prognostic performance in cardiac arrest model of rats. En: The American Journal of Emergency Medicine 36 (2018), Nr. 12, p. 2242–2248. – ISSN 0735–6757 |
dc.relation.references | DAI, Shuqi ; YU, Huiran ; DANNENBERG, Roger B.: What is missing in deep music generation? a study of repetition and structure in popular music. En: arXiv preprint arXiv:2209.00182 (2022) |
dc.relation.references | DALY, I ; NICOLAOU, N ; WILLIAMS, D ; HWANG, F ; KIRKE, A ; MIRANDA, E ; NASUTO, S: Neural and physiological data from participants listening to affective music. En: Scientific Data 7 (2020), Nr. 1, p. 1–7 |
dc.relation.references | DAS, Abhishek ; KOTTUR, Satwik ; GUPTA, Khushi ; SINGH, Avi ; YADAV, Deshraj ; MOURA, José MF ; PARIKH, Devi ; BATRA, Dhruv: Visual dialog. En: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, p. 326–335 |
dc.relation.references | DAS, P ; GUPTA, S ; NEOGI, B: Measurement of effect of music on human brain and consequent impact on attentiveness and concentration during reading. En: Procedia Computer Science 172 (2020), p. 1033–1038 |
dc.relation.references | DASH, Adyasha ; AGRES, Kat R.: Ai-based affective music generation systems: a review of methods, and challenges. En: arXiv preprint arXiv:2301.06890 (2023) |
dc.relation.references | DAVIS, Steven ; MERMELSTEIN, Paul: Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. En: IEEE transactions on acoustics, speech, and signal processing 28 (1980), Nr. 4, p. 357–366 |
dc.relation.references | DHARIWAL, P ; JUN, H ; PAYNE, C ; KIM, J ; RADFORD, A ; SUTSKEVER, I: Jukebox: A generative model for music. En: arXiv:2005.00341 (2020), p. 1–20 |
dc.relation.references | DI-LIBERTO, G ; MARION, G ; SHAMMA, S: The Music of Silence: Part II: Music Listening Induces Imagery Responses. En: Journal of Neuroscience 41 (2021), Nr. 35, p. 7449–7460 |
dc.relation.references | DING, Yi ; ROBINSON, Neethu ; ZHANG, Su ; ZENG, Qiuhao ; GUAN, Cuntai: Tsception: Capturing temporal dynamics and spatial asymmetry from EEG for emotion recognition. En: arXiv preprint arXiv:2104.02935 (2021) |
dc.relation.references | DING, Yi ; ROBINSON, Neethu ; ZHANG, Su ; ZENG, Qiuhao ; GUAN, Cuntai: Tsception: Capturing temporal dynamics and spatial asymmetry from EEG for emotion recognition. En: IEEE Transactions on Affective Computing (2022) |
dc.relation.references | DONAHUE, C ; MAO, H ; LI, Y ; COTTRELL, G ; MCAULEY, J: LakhNES: Improving Multi-instrumental Music Generation with Cross-domain Pre-training. En: ISMIR, 2019, p. 1–8 |
dc.relation.references | DUBUS, G ; BRESIN, R: A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities. En: PLoS ONE 8 (2013) |
dc.relation.references | DÄHNE, S ; BIESSMANN, F ; SAMEK, W ; HAUFE, S ; GOLTZ, D ; GUNDLACH, C ; VILLRINGER, A ; FAZLI, S ; MÜLLER, K: Multivariate Machine Learning Methods for Fusing Multimodal Functional Neuroimaging Data. En: Proceedings of the IEEE 103 (2015), Nr. 9, p. 1507–1530 |
dc.relation.references | EBCIOĞLU, Kemal: An expert system for harmonizing four-part chorales. En: Computer Music Journal 12 (1988), Nr. 3, p. 43–51 |
dc.relation.references | FERREIRA, Lucas N. ; WHITEHEAD, Jim: Learning to generate music with sentiment. En: arXiv preprint arXiv:2103.06125 (2021) |
dc.relation.references | FIEBRINK, Rebecca ; CARAMIAUX, Baptiste: The machine learning algorithm as creative musical tool. En: arXiv preprint arXiv:1611.00379 (2016) |
dc.relation.references | FLEURY, Mathis ; FIGUEIREDO, Patrícia ; VOURVOPOULOS, Athanasios ; LÉCUYER, Anatole: Two is better? Combining EEG and fMRI for BCI and Neurofeedback: A systematic review. (2023) |
dc.relation.references | FREER, D ; YANG, G: Data augmentation for self-paced motor imagery classification with C-LSTM. En: Journal of neural engineering 17 (2020), Nr. 1, p. 016041 |
dc.relation.references | FURUI, Sadaoki: Speaker-independent isolated word recognition based on emphasized spectral dynamics. En: ICASSP’86. IEEE International Conference on Acoustics, Speech, and Signal Processing Vol. 11 IEEE, 1986, p. 1991–1994 |
dc.relation.references | GARCIA-MURILLO, D ; ALVAREZ-MEZA, A ; CASTELLANOS-DOMINGUEZ, G: Single-Trial Kernel-Based Functional Connectivity for Enhanced Feature Extraction in Motor-Related Tasks. En: Sensors 21 (2021), Nr. 8 |
dc.relation.references | GOMEZ, Patrick ; DANUSER, Brigitta: Relationships between musical structure and psychophysiological measures of emotion. En: Emotion 7 (2007), Nr. 2, p. 377 |
dc.relation.references | HE, Qun ; FENG, Lufeng ; JIANG, Guoqian ; XIE, Ping: Multimodal multitask neural network for motor imagery classification with EEG and fNIRS signals. En: IEEE Sensors Journal 22 (2022), Nr. 21, p. 20695–20706 |
dc.relation.references | HE, Zhipeng ; LI, Zina ; YANG, Fuzhou ; WANG, Lei ; LI, Jingcong ; ZHOU, Chengju ; PAN, Jiahui: Advances in multimodal emotion recognition based on brain–computer interfaces. En: Brain sciences 10 (2020), Nr. 10, p. 687 |
dc.relation.references | HERNANDEZ-OLIVAN, Carlos ; BELTRAN, Jose R.: Music composition with deep learning: A review. En: Advances in speech and music technology: computational aspects and applications (2022), p. 25–50 |
dc.relation.references | HERREMANS, D ; CHUAN, C ; CHEW, E: A Functional Taxonomy of Music Generation Systems. En: ACM Computing Surveys (CSUR) 50 (2017), p. 1–30 |
dc.relation.references | HILDT, E: Affective Brain-Computer Music Interfaces – Drivers and Implications. En: Frontiers in Human Neuroscience 15 (2021) |
dc.relation.references | HOUSSEIN, Essam H. ; HAMMAD, Asmaa ; ALI, Abdelmgeid A.: Human emotion recognition from EEG-based brain–computer interface using machine learning: a comprehensive review. En: Neural Computing and Applications 34 (2022), Nr. 15, p. 12527–12557 |
dc.relation.references | HUANG, Chih-Fang ; HUANG, Cheng-Yuan: Emotion-based AI music generation system with CVAE-GAN. En: 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE) IEEE, 2020, p. 220–222 |
dc.relation.references | HUI, K ; GANAA, E ; ZHAN, Y ; SHEN, X: Robust deflated canonical correlation analysis via feature factoring for multi-view image classification. En: Multimedia Tools and Applications 80 (2021), Nr. 16, p. 24843–24865 |
dc.relation.references | HUNG, Hsiao-Tzu ; CHING, Joann ; DOH, Seungheon ; KIM, Nabin ; NAM, Juhan ; YANG, Yi-Hsuan: Emopia: A multi-modal pop piano dataset for emotion recognition and emotion-based music generation. En: arXiv preprint arXiv:2108.01374 (2021) |
dc.relation.references | HUTCHINGS, Patrick E. ; MCCORMACK, Jon: Adaptive music composition for games. En: IEEE Transactions on Games 12 (2019), Nr. 3, p. 270–280 |
dc.relation.references | JAMES, C ; ZUBER, S ; DUPUIS LOZERON, E ; ABDILI, L ; GERVAISE, D ; KLIEGEL, M: How Musicality, Cognition and Sensorimotor Skills Relate in Musically Untrained Children. En: Swiss Journal of Psychology 79 (2020), Nr. 3-4, p. 101–112 |
dc.relation.references | JEON, E ; KO, W ; YOON, J ; SUK, H.: Mutual Information-driven Subject-invariant and Class-relevant Deep Representation Learning in BCI. 2020 |
dc.relation.references | JI, Shulei ; YANG, Xinyu ; LUO, Jing: A Survey on Deep Learning for Symbolic Music Generation: Representations, Algorithms, Evaluations, and Challenges. En: ACM Computing Surveys (2023) |
dc.relation.references | JUSLIN, P ; VÄSTFJÄLL, D: Emotional responses to music: The need to consider underlying mechanisms. En: Behavioral and brain sciences 31 (2008), Nr. 5, p. 559–575 |
dc.relation.references | KANT, P ; LASKAR, S ; HAZARIKA, J ; MAHAMUNE, R: CWT Based Transfer Learning for Motor Imagery Classification for Brain computer Interfaces. En: Journal of Neuroscience Methods 345 (2020), p. 108886. – ISSN 0165–0270 |
dc.relation.references | KATTHI, J ; GANAPATHY, S: Deep Correlation Analysis for Audio-EEG Decoding. En: IEEE Trans Neural Syst Rehabil Eng 29 (2021), p. 2742–2753 |
dc.relation.references | KINGMA, D ; WELLING, M: An Introduction to Variational Autoencoders. En: Foundations and Trends in Machine Learning 12 (2019), Nr. 4, p. 307–392 |
dc.relation.references | KO, W ; JEON, E ; JEONG, S ; SUK, H.: Multi-Scale Neural network for EEG Representation Learning in BCI. 2020 |
dc.relation.references | KOCTÚROVÁ, M ; JUHÁR, J: A Novel approach to EEG Speech activity detection with visual stimuli and mobile BCI. En: Applied Sciences 11 (2021), Nr. 2, p. 674 |
dc.relation.references | KOELSTRA, Sander ; MUHL, Christian ; SOLEYMANI, Mohammad ; LEE, Jong-Seok ; YAZDANI, Ashkan ; EBRAHIMI, Touradj ; PUN, Thierry ; NIJHOLT, Anton ; PATRAS, Ioannis: Deap: A database for emotion analysis; using physiological signals. En: IEEE transactions on affective computing 3 (2011), Nr. 1, p. 18–31 |
dc.relation.references | KÜHL, N ; GOUTIER, M ; HIRT, R ; SATZGER, G: Machine Learning in Artificial Intelligence: Towards a Common Understanding. En: HICSS, 2019, p. 1–10 |
dc.relation.references | KUMAR, S ; SHARMA, A ; TSUNODA, T: Brain wave classification using long short-term memory network based OPTICAL predictor. En: Scientific Reports 9 (2019), 12, p. 1–13 |
dc.relation.references | LADDA, A ; LEBON, F ; LOTZE, M: Using motor imagery practice for improving motor performance - A review. En: Brain and Cognition 150 (2021), p. 105705 |
dc.relation.references | LAWHERN, Vernon J. ; SOLON, Amelia J. ; WAYTOWICH, Nicholas R. ; GORDON, Stephen M. ; HUNG, Chou P. ; LANCE, Brent J.: EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. En: Journal of neural engineering 15 (2018), Nr. 5, p. 056013 |
dc.relation.references | LEE, M ; KWON, O ; KIM, Y ; KIM, H ; LEE, Y ; WILLIAMSON, J ; FAZLI, S ; LEE, S: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. En: GigaScience 8 (2019), 01, Nr. 5. – ISSN 2047–217X |
dc.relation.references | LEE, M ; YOON, J ; LEE, S: Predicting Motor Imagery Performance From Resting-State EEG Using Dynamic Causal Modeling. En: Frontiers in Human Neuroscience 14 (2020), p. 321. – ISSN 1662–5161 |
dc.relation.references | LEIPOLD, S ; GREBER, M ; SELE, S. ; JÄNCKE, L: Neural patterns reveal single-trial information on absolute pitch and relative pitch perception. En: NeuroImage 200 (2019), p. 132–141 |
dc.relation.references | LI, C ; WANG, B ; ZHANG, S ; LIU, Y ; SONG, R ; CHENG, J ; CHEN, X: Emotion recognition from EEG based on multi-task learning with capsule network and attention mechanism. En: Comput. Biol. Med 143 (2022), p. 105303 |
dc.relation.references | LI, Xiang ; SONG, Dawei ; ZHANG, Peng ; ZHANG, Yazhou ; HOU, Yuexian ; HU, Bin: Exploring EEG features in cross-subject emotion recognition. En: Frontiers in neuroscience 12 (2018), p. 162 |
dc.relation.references | LI, Xiang ; ZHANG, Yazhou ; TIWARI, Prayag ; SONG, Dawei ; HU, Bin ; YANG, Meihong ; ZHAO, Zhigang ; KUMAR, Neeraj ; MARTTINEN, Pekka: EEG based emotion recognition: A tutorial and review. En: ACM Computing Surveys 55 (2022), Nr. 4, p. 1–57 |
dc.relation.references | LI, Xiaowei ; HU, Bin ; ZHU, Tingshao ; YAN, Jingzhi ; ZHENG, Fang: Towards affective learning with an EEG feedback approach. En: Proceedings of the first ACM international workshop on Multimedia technologies for distance learning, 2009, p. 33–38 |
dc.relation.references | LIEBMAN, E ; STONE, P: Artificial Musical Intelligence: A Survey. En: ArXiv 2006.10553 (2020) |
dc.relation.references | LIOI, G ; CURY, C ; PERRONNET, L ; MANO, M ; BANNIER, E ; LÉCUYER, A ; BARILLOT, C: Simultaneous MRI-EEG during a motor imagery neurofeedback task: an open access brain imaging dataset for multi-modal data integration. En: bioRxiv (2019), p. 862375 |
dc.relation.references | LONG, Y ; KONG, W ; JIN, X ; SHANG, J ; YANG, C: Visualizing Emotional States: A Method Based on Human Brain Activity. En: ZENG, A (Ed.) ; PAN, D (Ed.) ; HAO, T (Ed.) ; ZHANG, D (Ed.) ; SHI, Y (Ed.) ; SONG, X (Ed.): Human Brain and Artificial Intelligence, 2019, p. 248–258 |
dc.relation.references | LOUI, P: Neuroscience of Musical Improvisation. En: Handbook of Artificial Intelligence for Music. Springer, 2021, p. 97–115 |
dc.relation.references | MAMMONE, N ; IERACITANO, C ; MORABITO, F: A deep CNN approach to decode motor preparation of upper limbs from time–frequency maps of EEG signals at source level. En: Neural Networks 124 (2020), p. 357–372. – ISSN 0893–6080 |
dc.relation.references | MARTIN, Rod A. ; BERRY, Glen E. ; DOBRANSKI, Tobi ; HORNE, Marilyn ; DODGSON, Philip G.: Emotion perception threshold: Individual differences in emotional sensitivity. En: Journal of Research in Personality 30 (1996), Nr. 2, p. 290–305 |
dc.relation.references | MCAVINUE, L ; ROBERTSON, I: Measuring motor imagery ability: A review. En: European Journal of Cognitive Psychology 20 (2008), Nr. 2, p. 232–251 |
dc.relation.references | MCFARLAND, D. ; MINER, L. ; VAUGHAN, T. ; WOLPAW, J: Mu and Beta Rhythm Topographies During Motor Imagery and Actual Movements. En: Brain Topography 12 (2004), p. 177–186 |
dc.relation.references | MILAZZO, M ; BUEHLER, B: Designing and fabricating materials from fire using sonification and deep learning. En: iScience 24 (2021), Nr. 8, p. 102873 |
dc.relation.references | MIRZAEI, S ; GHASEMI, P: EEG motor imagery classification using dynamic connectivity patterns and convolutional autoencoder. En: Biomedical Signal Processing and Control 68 (2021), p. 102584. – ISSN 1746–8094 |
dc.relation.references | MISHRA, S ; ASIF, M ; TIWARY, U: Dataset on Emotions using Naturalistic Stimuli (DENS). En: bioRxiv (2021), p. 1–13 |
dc.relation.references | MIYAMOTO, K ; TANAKA, H ; NAKAMURA, S: Emotion Estimation from EEG Signals and Expected Subjective Evaluation. En: 2021 9th International Winter Conference on Brain-Computer Interface (BCI) IEEE, 2021, p. 1–6 |
dc.relation.references | MIYAMOTO, Kana ; TANAKA, Hiroki ; NAKAMURA, Satoshi: Online EEG-based emotion prediction and music generation for inducing affective states. En: IEICE TRANSACTIONS on Information and Systems 105 (2022), Nr. 5, p. 1050–1063 |
dc.relation.references | MOORE, F R.: The dysfunctions of MIDI. En: Computer music journal 12 (1988), Nr. 1, p. 19–28 |
dc.relation.references | MORI, K: Decoding peak emotional responses to music from computational acoustic and lyrical features. En: Cognition 222 (2022), p. 105010 |
dc.relation.references | MOU, Luntian ; ZHAO, Yiyuan ; HAO, Quan ; TIAN, Yunhan ; LI, Juehui ; LI, Jueying ; SUN, Yiqi ; GAO, Feng ; YIN, Baocai: Memomusic version 2.0: Extending personalized music recommendation with automatic music generation. En: 2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW) IEEE, 2022, p. 1–6 |
dc.relation.references | MUHAMED, A ; LI, L ; SHI, X ; YADDANAPUDI, S ; CHI, W ; JACKSON, D ; SURESH, R ; LIPTON, Z ; SMOLA, A: Symbolic Music Generation with Transformer-GANs. En: Proceedings of the AAAI Conference on Artificial Intelligence Vol. 31, 2021, p. 408–417 |
dc.relation.references | MUSALLAM, Yazeed K. ; ALFASSAM, Nasser I. ; MUHAMMAD, Ghulam ; AMIN, Syed U. ; ALSULAIMAN, Mansour ; ABDUL, Wadood ; ALTAHERI, Hamdi ; BENCHERIF, Mohamed A. ; ALGABRI, Mohammed: Electroencephalography-based motor imagery classification using temporal convolutional network fusion. En: Biomedical Signal Processing and Control 69 (2021), p. 102826 |
dc.relation.references | NAFEA, Mohamed S. ; ISMAIL, Zool H.: Supervised machine learning and deep learning techniques for epileptic seizure recognition using EEG signals – A systematic literature review. En: Bioengineering 9 (2022), Nr. 12, p. 781 |
dc.relation.references | NATSIOU, A ; O’LEARY, S: Audio representations for deep learning in sound synthesis: A review. En: ArXiv 2201.02490 (2022), p. 1–8 |
dc.relation.references | NIRANJAN, D ; BURUNAT, I ; TOIVIAINEN, P ; ALLURI, V: Influence of musical expertise on the processing of musical features in a naturalistic setting. En: Conference on Cognitive Computational Neuroscience, 2019, p. 655–658 |
dc.relation.references | NORDSTRÖM, Henrik ; LAUKKA, Petri: The time course of emotion recognition in speech and music. En: The Journal of the Acoustical Society of America 145 (2019), Nr. 5, p. 3058–3074 |
dc.relation.references | ORLANDI, S ; HOUSE, S ; KARLSSON, P ; SAAB, R ; CHAU, T: Brain-Computer Interfaces for Children With Complex Communication Needs and Limited Mobility: A Systematic Review. En: Frontiers in Human Neuroscience 15 (2021) |
dc.relation.references | PANDEY, P ; AHMAD, N ; MIYAPURAM, K ; LOMAS, D: Predicting Dominant Beat Frequency from Brain Responses While Listening to Music. En: 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) IEEE, 2021, p. 3058–3064 |
dc.relation.references | PAPADOPOULOS, Alexandre ; ROY, Pierre ; PACHET, François: Assisted lead sheet composition using flowcomposer. En: Principles and Practice of Constraint Programming: 22nd International Conference, CP 2016, Toulouse, France, September 5-9, 2016, Proceedings 22 Springer, 2016, p. 769–785 |
dc.relation.references | PARCALABESCU, Letitia ; TROST, Nils ; FRANK, Anette: What is multimodality? En: arXiv preprint arXiv:2103.06304 (2021) |
dc.relation.references | PATNAIK, Suprava: Speech emotion recognition by using complex MFCC and deep sequential model. En: Multimedia Tools and Applications 82 (2023), Nr. 8, p. 11897–11922 |
dc.relation.references | PEKSA, Janis ; MAMCHUR, Dmytro: State-of-the-Art on Brain-Computer Interface Technology. En: Sensors 23 (2023), Nr. 13, p. 6001 |
dc.relation.references | PICARD, Rosalind ; SU, David ; LIU, Yan: AMAI: Adaptive music for affect improvement. (2018) |
dc.relation.references | PICARD, Rosalind W.: Building HAL: Computers that sense, recognize, and respond to human emotion. En: Human Vision and Electronic Imaging VI Vol. 4299 SPIE, 2001, p. 518–523 |
dc.relation.references | PICARD, Rosalind W.: Affective computing: challenges. En: International Journal of Human-Computer Studies 59 (2003), Nr. 1-2, p. 55–64 |
dc.relation.references | PLUMMER, Bryan A. ; WANG, Liwei ; CERVANTES, Chris M. ; CAICEDO, Juan C. ; HOCKENMAIER, Julia ; LAZEBNIK, Svetlana: Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. En: Proceedings of the IEEE international conference on computer vision, 2015, p. 2641–2649 |
dc.relation.references | PODOBNIK, B ; STANLEY, H: Detrended cross-correlation analysis: a new method for analyzing two nonstationary time series. En: Physical review letters 100 (2008), Nr. 8, p. 084102 |
dc.relation.references | PURWINS, H ; LI, B ; VIRTANEN, T ; SCHLÜTER, J ; CHANG, S ; SAINATH, T: Deep learning for audio signal processing. En: IEEE Journal of Selected Topics in Signal Processing 13 (2019), Nr. 2, p. 206–219 |
dc.relation.references | PURWINS, Hendrik ; BLANKERTZ, Benjamin ; OBERMAYER, Klaus: A new method for tracking modulations in tonal music in audio data format. En: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium Vol. 6 IEEE, 2000, p. 270–275 |
dc.relation.references | RAFFEL, Colin ; ELLIS, Daniel P.: Intuitive analysis, creation and manipulation of MIDI data with pretty_midi. En: 15th international society for music information retrieval conference late breaking and demo papers, 2014, p. 84–93 |
dc.relation.references | RAHMAN, M ; SARKAR, A ; HOSSAIN, A ; HOSSAIN, S ; ISLAM, R ; HOSSAIN, B ; QUINN, J ; MONI, M: Recognition of human emotions using EEG signals: A review. En: Computers in Biology and Medicine 136 (2021), p. 104696 |
dc.relation.references | RAMÍREZ, A ; HORNERO, G ; ROYO, D ; AGUILAR, A ; CASAS, O: Assessment of Emotional States Through Physiological Signals and Its Application in Music Therapy for Disabled People. En: IEEE access 8 (2020), p. 127659–127671 |
dc.relation.references | RIMBERT, S ; GAYRAUD, N ; BOUGRAIN, L ; CLERC, M ; FLECK, S: Can a Subjective Questionnaire Be Used as Brain-Computer Interface Performance Predictor? En: Frontiers in Human Neuroscience 12 (2019), p. 529. – ISSN 1662–5161 |
dc.relation.references | RUSSELL, James A.: A circumplex model of affect. En: Journal of personality and social psychology 39 (1980), Nr. 6, p. 1161 |
dc.relation.references | SANNELLI, C ; VIDAURRE, C ; MULLER, K ; BLANKERTZ, B: A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity. En: PLOS ONE 14 (2019), 01, Nr. 1, p. 1–37 |
dc.relation.references | SANYAL, S ; NAG, S ; BANERJEE, A ; SENGUPTA, R ; GHOSH, D: Music of brain and music on brain: a novel EEG sonification approach. En: Cognitive neurodynamics 13 (2019), Nr. 1, p. 13–31 |
dc.relation.references | SCHIRRMEISTER, Robin T. ; SPRINGENBERG, Jost T. ; FIEDERER, Lukas Dominique J. ; GLASSTETTER, Martin ; EGGENSPERGER, Katharina ; TANGERMANN, Michael ; HUT- TER, Frank ; BURGARD, Wolfram ; BALL, Tonio: Deep learning with convolutional neural networks for EEG decoding and visualization. En: Human brain mapping 38 (2017), Nr. 11, p. 5391–5420 |
dc.relation.references | SHAMSI, F ; HADDAD, A ; NAJAFIZADEH, L: Early classification of motor tasks using dynamic functional connectivity graphs from EEG. En: Journal of neural engineering 18 (2021), Nr. 1, p. 016015 |
dc.relation.references | SIMPSON, T ; ELLISON, P ; CARNEGIE, E ; MARCHANT, D: A systematic review of motivational and attentional variables on children's fundamental movement skill development: The OPTIMAL theory. En: International Review of Sport and Exercise Psychology (2020), p. 1–47 |
dc.relation.references | SINGH, A ; HUSSAIN, A ; LAL, S ; GUESGEN, H: A Comprehensive Review on Critical Issues and Possible Solutions of Motor Imagery Based Electroencephalography Brain-Computer Interface. En: Sensors 21 (2021), Nr. 6 |
dc.relation.references | SINGH, Yeshwant ; BISWAS, Anupam: Robustness of musical features on deep learning models for music genre classification. En: Expert Systems with Applications 199 (2022), p. 116879 |
dc.relation.references | SONG, L. ; FUKUMIZU, K. ; GRETTON, A.: Kernel Embeddings of Conditional Distributions: A Unified Kernel Framework for Nonparametric Inference in Graphical Models. En: IEEE Signal Processing Magazine 30 (2013), Nr. 4, p. 98–111 |
dc.relation.references | SONG, XinWang ; YAN, DanDan ; ZHAO, LuLu ; YANG, LiCai: LSDD-EEGNet: An efficient end-to-end framework for EEG-based depression detection. En: Biomedical Signal Processing and Control 75 (2022), p. 103612 |
dc.relation.references | SOROUSH, M ; MAGHOOLI, K ; SETAREHDAN, S ; NASRABADI, A: A review on EEG signals based emotion recognition. En: International Clinical Neuroscience Journal 4 (2017), Nr. 4, p. 118 |
dc.relation.references | SOUTO, D ; CRUZ, T ; FONTES, P ; BATISTA, R ; HAASE, V: Motor Imagery Development in Children: Changes in Speed and Accuracy With Increasing Age. En: Frontiers in Pediatrics 8 (2020), p. 100 |
dc.relation.references | SOYSA, Amani I. ; LOKUGE, Kulari: Interactive machine learning for incorporating user emotions in automatic music harmonization. En: 2010 Fifth International Conference on Information and Automation for Sustainability IEEE, 2010, p. 114–118 |
dc.relation.references | STEEDMAN, Mark J.: A generative grammar for jazz chord sequences. En: Music Perception 2 (1984), Nr. 1, p. 52–77 |
dc.relation.references | SUBRAMANI, K ; RAO, P: HpRNet : Incorporating Residual Noise Modeling for Violin in a Variational Parametric Synthesizer. En: ArXiv 2008.08405 (2020), p. 1–7 |
dc.relation.references | SUGGATE, S ; MARTZOG, P: Screen-time influences children's mental imagery performance. En: Developmental Science 23 (2020), Nr. 6, p. e12978 |
dc.relation.references | TAN, C ; SUN, F ; KONG, T ; ZHANG, W ; YANG, C ; LIU, C: A Survey on Deep Transfer Learning. En: KŮRKOVÁ, Věra (Ed.) ; MANOLOPOULOS, Yannis (Ed.) ; HAMMER, Barbara (Ed.) ; ILIADIS, Lazaros (Ed.) ; MAGLOGIANNIS, Ilias (Ed.): Artificial Neural Networks and Machine Learning – ICANN 2018. Cham : Springer International Publishing, 2018, p. 270–279 |
dc.relation.references | TOBÓN-HENAO, Mateo ; ÁLVAREZ-MEZA, Andrés M. ; CASTELLANOS-DOMINGUEZ, Cesar G.: Kernel-Based Regularized EEGNet Using Centered Alignment and Gaussian Connectivity for Motor Imagery Discrimination. En: Computers 12 (2023), Nr. 7, p. 145 |
dc.relation.references | TORRES-CARDONA, Hector F. ; AGUIRRE-GRISALES, Catalina ; CASTRO-LONDOÑO, Victor H. ; RODRIGUEZ-SOTELO, Jose L.: Interpolation, a model for sound representation based on BCI. En: Augmented Cognition: 13th International Conference, AC 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings 21 Springer, 2019, p. 471–483 |
dc.relation.references | VÄRBU, Kaido ; MUHAMMAD, Naveed ; MUHAMMAD, Yar: Past, present, and future of EEG-based BCI applications. En: Sensors 22 (2022), Nr. 9, p. 3331 |
dc.relation.references | VASILYEV, A ; LIBURKINA, S ; YAKOVLEV, L ; PEREPELKINA, O ; KAPLAN, A: Assessing motor imagery in brain-computer interface training: Psychological and neurophysiological correlates. En: Neuropsychologia 97 (2017), p. 56–65. – ISSN 0028–3932 |
dc.relation.references | VELASQUEZ, L. ; CAICEDO, J. ; CASTELLANOS-DOMINGUEZ, G.: Entropy-Based Es- timation of Event-Related De/Synchronization in Motor Imagery Using Vector-Quantized Patterns. En: Entropy 22 (2020), Nr. 6, p. 703 |
dc.relation.references | VEMPATI, Raveendrababu ; SHARMA, Lakhan D.: A Systematic Review on Automated Human Emotion Recognition using Electroencephalogram Signals and Artificial Intelligence. En: Results in Engineering (2023), p. 101027 |
dc.relation.references | VISHESH, P ; PAVAN, A ; VASIST, Samarth G. ; RAO, Sindhu ; SRINIVAS, KS: DeepTunes - Music Generation based on Facial Emotions using Deep Learning. En: 2022 IEEE 7th International conference for Convergence in Technology (I2CT) IEEE, 2022, p. 1–6 |
dc.relation.references | VUILLEUMIER, Patrik ; TROST, Wiebke: Music and emotions: from enchantment to entrainment. En: Annals of the New York Academy of Sciences 1337 (2015), Nr. 1, p. 212–222 |
dc.relation.references | WALLIS, Isaac ; INGALLS, Todd ; CAMPANA, Ellen ; GOODMAN, Janel: A rule-based generative music system controlled by desired valence and arousal. En: Proceedings of 8th international sound and music computing conference (SMC), 2011, p. 156–157 |
dc.relation.references | WAN, Z. ; YANG, R. ; HUANG, M. ; ZENG, N. ; LIU, X: A review on transfer learning in EEG signal analysis. En: Neurocomputing 421 (2021), p. 1–14. – ISSN 0925–2312 |
dc.relation.references | WANG, J ; XUE, F ; LI, H: Simultaneous channel and feature selection of fused EEG features based on sparse group lasso. En: BioMed research international 2015 (2015) |
dc.relation.references | WANG, N ; XU, H ; XU, F ; CHENG, L: The algorithmic composition for music copyright protection under deep learning and blockchain. En: Applied Soft Computing 112 (2021), p. 107763 |
dc.relation.references | WANG, Yan ; SONG, Wei ; TAO, Wei ; LIOTTA, Antonio ; YANG, Dawei ; LI, Xinlei ; GAO, Shuyong ; SUN, Yixuan ; GE, Weifeng ; ZHANG, Wei [u. a.]: A systematic review on affective computing: Emotion models, databases, and recent advances. En: Information Fusion 83 (2022), p. 19–52 |
dc.relation.references | WEI, X ; ORTEGA, P ; FAISAL, A: Inter-subject Deep Transfer Learning for Motor Imagery EEG Decoding. 2021 |
dc.relation.references | WEINECK, K ; WEN, Olivia X. ; HENRY, Molly J.: Neural entrainment is strongest to the spectral flux of slow music and depends on familiarity and beat salience. En: bioRxiv (2021) |
dc.relation.references | WHISSELL, Cynthia M.: The dictionary of affect in language. En: The measurement of emotions. Elsevier, 1989, p. 113–131 |
dc.relation.references | WILSON, J ; STERLING, A ; REWKOWSKI, N ; LIN, M: Glass half full: sound synthesis for fluid–structure coupling using added mass operator. En: The Visual Computer 33 (2017), Nr. 6, p. 1039–1048 |
dc.relation.references | WU, D: Hearing the Sound in the Brain: Influences of Different EEG References. En: Frontiers in Neuroscience 12 (2018) |
dc.relation.references | XU, J ; ZHENG, H ; WANG, J ; LI, D ; FANG, X: Recognition of EEG Signal Motor Imagery Intention Based on Deep Multi-View Feature Learning. En: Sensors (Basel, Switzerland) 20 (2020) |
dc.relation.references | YANG, Li-Chia ; LERCH, Alexander: On the evaluation of generative models in music. En: Neural Computing and Applications 32 (2020), Nr. 9, p. 4773–4784 |
dc.relation.references | YANG, X ; LIU, W ; LIU, W ; TAO, D: A survey on canonical correlation analysis. En: IEEE Transactions on Knowledge and Data Engineering 33 (2019), Nr. 6, p. 2349–2368 |
dc.relation.references | YOON, J ; LEE, M: Effective Correlates of Motor Imagery Performance based on Default Mode Network in Resting-State. En: 2020 8th International Winter Conference on Brain-Computer Interface (BCI), 2020, p. 1–5 |
dc.relation.references | YOU, Y ; CHEN, W ; ZHANG, T: Motor imagery EEG classification based on flexible analytic wavelet transform. En: Biomedical Signal Processing and Control 62 (2020), p. 102069. – ISSN 1746–8094 |
dc.relation.references | YU, C ; QIN, Z ; MARTIN-MARTINEZ, J ; BUEHLER, M: A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence. En: ACS Nano 13 (2019), Nr. 7, p. 7471–7482 |
dc.relation.references | ZELLERS, Rowan ; BISK, Yonatan ; FARHADI, Ali ; CHOI, Yejin: From recognition to cognition: Visual commonsense reasoning. En: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, p. 6720–6731 |
dc.relation.references | ZHANG, J ; YIN, Z ; CHEN, P ; NICHELE, S: Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. En: Information Fusion 59 (2020), p. 103–126 |
dc.relation.references | ZHANG, K ; XU, G ; CHEN, L ; TIAN, P ; HAN, C ; ZHANG, S ; DUAN, N: Instance transfer subject-dependent strategy for motor imagery signal classification using deep convolutional neural networks. En: Computational and Mathematical Methods in Medicine 2020 (2020) |
dc.relation.references | ZHANG, R ; ZONG, Q ; DOU, L ; ZHAO, X ; TANG, Y ; LI, Z: Hybrid deep neural network using transfer learning for EEG motor imagery decoding. En: Biomedical Signal Processing and Control 63 (2021), p. 102144. – ISSN 1746–8094 |
dc.relation.references | ZHANG, Y ; ZHOU, G ; JIN, J ; WANG, X ; CICHOCKI, A: Optimizing spatial patterns with sparse filter bands for motor-imagery based brain–computer interface. En: Journal of Neuroscience Methods 255 (2015), p. 85–91. – ISSN 0165–0270 |
dc.relation.references | ZHAO, Kun ; LI, Siqi ; CAI, Juanjuan ; WANG, Hui ; WANG, Jingling: An emotional symbolic music generation system based on lstm networks. En: 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) IEEE, 2019, p. 2039–2043 |
dc.relation.references | ZHAO, X ; ZHAO, J ; LIU, C ; CAI, W: Deep Neural Network with Joint Distribution Matching for Cross-Subject Motor Imagery Brain-Computer Interfaces. En: BioMed Research International 2020 (2020), p. 1–15 |
dc.relation.references | ZHENG, Kaitong ; MENG, Ruijie ; ZHENG, Chengshi ; LI, Xiaodong ; SANG, Jinqiu ; CAI, Juanjuan ; WANG, Jie: EmotionBox: a music-element-driven emotional music generation system using Recurrent Neural Network. En: arXiv preprint arXiv:2112.08561 (2021) |
dc.relation.references | ZHENG, M ; YANG, B ; GAO, S ; MENG, X: Spatio-time-frequency joint sparse optimization with transfer learning in motor imagery-based brain-computer interface system. En: Biomedical Signal Processing and Control 68 (2021), p. 102702. – ISSN 1746–8094 |
dc.relation.references | ZHU, J ; WEI, Y ; FENG, Y ; ZHAO, X ; GAO, Y: Physiological Signals-based Emotion Recognition via High-order Correlation Learning. En: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 15 (2019), Nr. 3s, p. 1–18 |
dc.relation.references | ZHUANG, M ; WU, Q ; WAN, F ; HU, Y: State-of-the-art non-invasive brain–computer interface for neural rehabilitation: A review. En: Journal of Neurorestoratology 8 (2020), Nr. 1, p. 12 |
dc.relation.references | ZHUANG, Y ; LIN, L ; TONG, R ; LIU, J ; IWAMOT, Y ; CHEN, Y: G-gcsn: Global graph convolution shrinkage network for emotion perception from gait. En: Proceedings of the Asian Conference on Computer Vision, 2020 |
dc.rights.accessrights | info:eu-repo/semantics/openAccess |
dc.subject.proposal | Aprendizaje profundo |
dc.subject.proposal | Señales EEG |
dc.subject.proposal | Clasificación de emociones |
dc.subject.proposal | Generación de música |
dc.subject.proposal | Interfaz cerebro-computadora (BCI) |
dc.subject.proposal | Aprendizaje multimodal |
dc.subject.proposal | Generación de música simbólica |
dc.subject.proposal | Piano roll |
dc.subject.proposal | Inteligencia artificial |
dc.subject.proposal | Deep learning |
dc.subject.proposal | EEG signals |
dc.subject.proposal | Emotion classification |
dc.subject.proposal | Music generation |
dc.subject.proposal | Brain-Computer Interface (BCI) |
dc.subject.proposal | Multimodal learning |
dc.subject.proposal | Symbolic music generation |
dc.subject.proposal | Piano roll |
dc.subject.proposal | Artificial intelligence |
dc.title.translated | Brain Music : Generative system for symbolic music creation from affective neural responses |
dc.type.coar | http://purl.org/coar/resource_type/c_bdcc |
dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa |
dc.type.content | Text |
oaire.accessrights | http://purl.org/coar/access_right/c_abf2 |
oaire.fundername | Minciencias |
dcterms.audience.professionaldevelopment | Bibliotecarios |
dcterms.audience.professionaldevelopment | Estudiantes |
dcterms.audience.professionaldevelopment | Investigadores |
dcterms.audience.professionaldevelopment | Maestros |
dcterms.audience.professionaldevelopment | Público general |
dc.description.curriculararea | Eléctrica, Electrónica, Automatización Y Telecomunicaciones.Sede Manizales |