GPU/CUDA-Based parallelization of the ECSAGO evolutionary algorithm

dc.contributor.advisorLeón Guzmán, Elizabeth
dc.contributor.authorTamayo Rivera, Joan Sebastian
dc.contributor.googlescholarElizabeth León Guzmán [FZ1l9jMAAAAJ&hl]
dc.contributor.orcidTamayo Rivera, Joan Sebastian [0009-0003-3251-8197]
dc.contributor.orcidLeón Guzmán, Elizabeth [0009-0004-3485-4976]
dc.contributor.researchgateElizabeth León Guzmán [Elizabeth-Leon-6]
dc.contributor.researchgroupMidas: Grupo de Investigación en Minería de Datos
dc.date.accessioned2026-02-20T16:07:53Z
dc.date.available2026-02-20T16:07:53Z
dc.date.issued2025-09
dc.descriptionilustraciones a color, diagramasspa
dc.description.abstractEl crecimiento exponencial de los volúmenes de datos y de la complejidad computacional ha generado grandes desafíos para los algoritmos de clustering tradicionales al procesar conjuntos de datos de gran escala y alta dimensionalidad. Este trabajo presenta el diseño e implementación de una versión paralela del algoritmo Evolutionary Clustering with Self-Adaptive Genetic Operators (ECSAGO), acelerada con GPU mediante CUDA, con el objetivo de superar las limitaciones de escalabilidad sin perder las capacidades de autoadaptación del algoritmo. La implementación se basa en un modelo maestro–esclavo que distribuye las tareas entre la CPU y la GPU: la CPU gestiona la lógica evolutiva, mientras que la GPU se encarga de la evaluación de aptitud, altamente demandante en cómputo, mediante kernels CUDA especializados. Los experimentos, realizados con conjuntos de datos sintéticos y reales, muestran que la aceleración con GPU depende en gran medida de la densidad computacional, alcanzando mejoras de hasta 9.6× en problemas de alta dimensionalidad y estableciendo un umbral práctico cercano a 500.000 para obtener beneficios claros. La versión paralela mantiene la calidad del agrupamiento en todas las pruebas, con puntuaciones de silueta comparables a las de la versión secuencial y, en algunos casos, con una exploración mejorada en escenarios complejos. En conjunto, este trabajo convierte a ECSAGO en una alternativa viable para aplicaciones de agrupamiento a gran escala en entornos intensivos en datos y ofrece orientaciones prácticas para profesionales que buscan aprovechar la aceleración con GPU. (Texto tomado de la fuente)spa
dc.description.abstractThe exponential growth of data volumes and computational complexity has created substantial challenges for traditional clustering algorithms when processing large-scale, high-dimensional datasets. This work presents the design and implementation of a GPU-accelerated parallel version of the Evolutionary Clustering with Self-Adaptive Genetic Operators (ECSAGO) algorithm using CUDA, addressing fundamental scalability limitations while preserving the algorithm's sophisticated self-adaptive capabilities. The implementation employs a master-slave architectural model that strategically distributes computational responsibilities between CPU and GPU components, with the CPU managing evolutionary control logic and the GPU executing computationally intensive fitness evaluation through specialized CUDA kernels. Comprehensive experimental evaluation across synthetic and real-world datasets demonstrates that GPU acceleration effectiveness correlates strongly with computational density, achieving substantial speedup improvements of up to 9.6× for high-dimensional problems while establishing a clear computational density threshold of approximately 500,000 for beneficial acceleration. The parallel implementation successfully preserves clustering quality across all experimental conditions, with silhouette scores remaining within acceptable bounds of their sequential counterparts and evidence of enhanced exploration capabilities in complex datasets. This work transforms ECSAGO into a viable solution for large-scale, high-dimensional clustering applications in contemporary data-intensive domains while providing practical guidelines for practitioners regarding GPU acceleration deployment decisions.eng
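The master-slave split described in the abstract, with the CPU driving the evolutionary loop and the device evaluating fitness for the whole population in bulk, can be sketched as follows. This is a minimal illustration only: NumPy stands in for the CUDA kernels, and the toy density-style fitness and all function names are assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness_batch(centers, data, scale=1.0):
    """Density-style fitness for a whole population of candidate cluster
    centers at once. This batched distance computation is the kind of work
    offloaded to GPU kernels; NumPy broadcasting stands in for device code."""
    # squared distances, shape (population, n_points)
    d2 = ((centers[:, None, :] - data[None, :, :]) ** 2).sum(axis=2)
    # Gaussian-weighted point density around each candidate center
    return np.exp(-d2 / (2 * scale**2)).sum(axis=1)

def evolve(data, pop_size=20, generations=30, sigma=0.3):
    """Master loop on the CPU: selection, mutation, replacement."""
    dim = data.shape[1]
    pop = rng.uniform(data.min(0), data.max(0), size=(pop_size, dim))
    for _ in range(generations):
        fit = fitness_batch(pop, data)          # "slave": bulk evaluation
        order = np.argsort(fit)[::-1]           # "master": rank and select
        parents = pop[order[: pop_size // 2]]   # keep best half (elitist)
        children = parents + rng.normal(0, sigma, parents.shape)
        pop = np.vstack([parents, children])    # replacement
    fit = fitness_batch(pop, data)
    return pop[np.argmax(fit)]

# two tight Gaussian blobs; the best individual should settle near one center
data = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (200, 2)),
    rng.normal([3.0, 3.0], 0.1, (200, 2)),
])
best = evolve(data)
```

Swapping `fitness_batch` for a CuPy or CUDA implementation leaves the master loop untouched, which is the point of the architecture: only the computationally dense evaluation step needs to change.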
dc.description.degreelevelMaestría
dc.description.degreenameMagíster en Ingeniería de Sistemas y Computación
dc.description.methodsThe experimental evaluation of the GPU-accelerated ECSAGO implementation was designed to comprehensively assess both computational performance and clustering quality across diverse dataset characteristics and problem scales. The methodology employed a systematic approach to evaluate scalability patterns, speedup achievements, and algorithmic integrity preservation under parallel execution conditions.
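One practical outcome of this evaluation is the computational-density threshold of roughly 500,000 below which GPU offloading does not pay off. A hedged sketch of how a practitioner might encode that dispatch decision is shown below; taking density as the points × dimensions product is an assumption for illustration, not necessarily the exact metric used in the experiments.

```python
def use_gpu(n_points: int, n_dims: int, threshold: int = 500_000) -> bool:
    """Dispatch heuristic: offload fitness evaluation to the GPU only when
    the problem's computational density clears the ~500,000 threshold
    reported in the experiments. Density = points * dimensions here is an
    illustrative assumption."""
    return n_points * n_dims >= threshold

# small, low-dimensional data: transfer and launch overhead dominates
print(use_gpu(10_000, 10))     # False
# large, high-dimensional data: dense enough to benefit from the GPU
print(use_gpu(1_000_000, 50))  # True
```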
dc.description.researchareaParallel Computing and Evolutionary Clustering Algorithms
dc.description.technicalinfoHardware Platform and System Configuration Performance evaluation utilized Google Colaboratory Pro with NVIDIA A100 GPU hardware, providing access to cutting-edge computational resources representative of high performance computing environments. The A100 GPU, built on NVIDIA's Ampere architecture, features 6,912 CUDA cores with 40 GB high-bandwidth memory, offering substantial parallel processing capability and memory capacity essential for large-scale evolutionary clustering experiments. The system configuration included approximately 83 GB system RAM, enabling comprehensive dataset loading and preprocessing operations without memory constraints. The A100 architecture provides significant advancement over previous GPU generations, with up to 20-fold performance improvements for machine learning workloads compared to earlier architectures. This hardware platform enables realistic assessment of GPU acceleration potential for evolutionary clustering while representing computational resources increasingly available in cloud computing and high-performance computing environments. The CUDA programming environment (version 11.8) integrated seamlessly with the experimental framework, providing access to optimized libraries and development tools essential for efficient parallel implementation.eng
dc.description.technicalinfoPlataforma de hardware y configuración del sistema La evaluación del rendimiento utilizó Google Colaboratory Pro con hardware GPU NVIDIA A100, lo que proporcionó acceso a recursos computacionales de vanguardia representativos de entornos informáticos de alto rendimiento. La GPU A100, basada en la arquitectura Ampere de NVIDIA, cuenta con 6912 núcleos CUDA con 40 GB de memoria de alto ancho de banda, lo que ofrece una capacidad sustancial de procesamiento paralelo y memoria esencial para experimentos de agrupamiento evolutivo a gran escala. La configuración del sistema incluyó aproximadamente 83 GB de RAM, lo que permitió la carga integral de conjuntos de datos y operaciones de preprocesamiento sin restricciones de memoria. La arquitectura A100 ofrece un avance significativo con respecto a las generaciones anteriores de GPU, con mejoras de rendimiento hasta 20 veces superiores para cargas de trabajo de aprendizaje automático en comparación con arquitecturas anteriores. Esta plataforma de hardware permite una evaluación realista del potencial de aceleración de la GPU para el agrupamiento evolutivo, a la vez que representa los recursos computacionales cada vez más disponibles en entornos de computación en la nube y de alto rendimiento. El entorno de programación CUDA (versión 11.8) se integró a la perfección con el marco experimental, proporcionando acceso a bibliotecas optimizadas y herramientas de desarrollo esenciales para una implementación paralela eficiente.spa
dc.description.technicalinfoGitHub Repo: https://github.com/pwnaoj/pyecsago.giteng
dc.format.extent67 páginas
dc.format.mimetypeapplication/pdf
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/89615
dc.language.isoeng
dc.publisherUniversidad Nacional de Colombia
dc.publisher.branchUniversidad Nacional de Colombia - Sede Bogotá
dc.publisher.facultyFacultad de Ingeniería
dc.publisher.placeBogotá, Colombia
dc.publisher.programBogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.relation.referencesBikov, Dusan ; Bouyukliev, Ilija ; Stojanova, Aleksandra: Benefit of Using Shared Memory in Implementation of Parallel FWT Algorithm with CUDA C on GPUs, 2016
dc.relation.referencesCelebi, M. E. ; Kingravi, Hassan A. ; Vela, Patricio A.: A comparative study of efficient initialization methods for the k-means clustering algorithm. En: Expert Systems with Applications 40 (2013), Januar, Nr. 1, p. 200–210.– ISSN 0957–4174
dc.relation.referencesChoi, Hyeonseong ; Lee, Jaehwan: Efficient Use of GPU Memory for Large-Scale Deep Learning Model Training. En: Applied Sciences 11 (2021), Nr. 21.– ISSN 2076–3417
dc.relation.referencesColomba, Luca ; Cagliero, Luca ; Garza, Paolo: Density-Based Clustering by Means of Bridge Point Identification. En: IEEE Transactions on Knowledge and Data Engineering 35 (2023), Nr. 11, p. 11274–11287
dc.relation.referencesIn: Corne, David ; Lones, Michael A.: Evolutionary Algorithms. Springer International Publishing, 2018, p. 1–22.– ISBN 9783319071534
dc.relation.referencesDe Rango, Alessio ; Furnari, Luca ; Senatore, Alfonso ; Mendicino, Giuseppe ; Giordano, Andrea ; Macri, Davide ; Utrera, Gladys ; D’Ambrosio, Donato: Performance Analysis and Optimization of the CUDA Implementation of the Three Dimensional Subsurface XCA-Flow Cellular Automaton. En: 2023 31st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), 2023, p. 263–270
dc.relation.referencesDmitruk, Beata ; Stpiczynski, Przemyslaw: Improving accuracy of summation using parallel vectorized Kahan’s and Gill-Møller algorithms. En: Concurrency and Computation: Practice and Experience 35 (2023), 05
dc.relation.referencesEscobar, Juan J. ; Ortega, Julio ; Díaz, Antonio F. ; González, Jesús ; Damas, Miguel: Assessing Energy Consumption and Runtime Efficiency of Master-Worker Parallel Evolutionary Algorithms in CPU-GPU Systems, 2018
dc.relation.referencesEster, Martin ; Kriegel, Hans-Peter ; Sander, Jörg ; Xu, Xiaowei: A density-based algorithm for discovering clusters in large spatial databases with noise. En: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, AAAI Press, 1996 (KDD’96), p. 226–231
dc.relation.referencesGalan, Severino F. ; Mengshoel, Ole J.: Generalized crowding for genetic algorithms. En: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation. New York, NY, USA : Association for Computing Machinery, 2010 (GECCO ’10).– ISBN 9781450300728, p. 775–782
dc.relation.referencesGarcía-Calvo, Raúl ; Guisado, JL ; del Rio, Fernando D. ; Córdoba, Antonio ; Jiménez-Morales, Francisco: Graphics Processing Unit–Enhanced Genetic Algorithms for Solving the Temporal Dynamics of Gene Regulatory Networks. En: Evolutionary Bioinformatics 14 (2018), p. 1176934318767889.– PMID: 29662297
dc.relation.referencesGhorpade, Jayshree: GPGPU Processing in CUDA Architecture. En: Advanced Computing: An International Journal 3 (2012), Januar, Nr. 1, p. 105–120.– ISSN 2229–726X
dc.relation.referencesGomez, J.: Self adaptation of operator rates for multimodal optimization. En: Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753) Vol. 2, 2004, p. 1720–1726 Vol.2
dc.relation.referencesGomez, Jonatan: Self Adaptation of Operator Rates in Evolutionary Algorithms, 2004.– ISBN 978–3–540–22344–3, p. 1162–1173
dc.relation.referencesHanawa, Yuki ; Harada, Tomohiro ; Miura, Yukiya: Analysis of the Impact of Prediction Accuracy on Search Performance in Surrogate-assisted Evolutionary Algorithms. En: 2024 IEEE Congress on Evolutionary Computation (CEC), 2024, p. 1–8
dc.relation.referencesHendricks, Dieter ; Gebbie, Tim ; Wilcox, Diane: High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm. En: South African Journal of Science 112 (2016), Februar, Nr. 1/2, p. 9.– ISSN 1996–7489
dc.relation.referencesHuang, Beichen ; Cheng, Ran ; Li, Zhuozhao ; Jin, Yaochu ; Tan, Kay C. EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation. 2024
dc.relation.referencesHuang, Bin ; Liu, Anjun ; Tian, Min ; Pan, Jingshan ; Zhang, Yu: Parallel Performance and Optimization of the Lattice Boltzmann Method Software Palabos Using CUDA. En: Proceedings of the 6th International Conference on High Performance Compilation, Computing and Communications. New York, NY, USA : Association for Computing Machinery, 2022 (HP3C ’22).– ISBN 9781450396295, p. 91–98
dc.relation.referencesHuang, Yi-bin ; Li, Kang ; Wang, Ge ; Cao, Min ; Li, Pin ; Zhang, Yu-jia. Recognition of convolutional neural network based on CUDA Technology. 2018
dc.relation.referencesIkotun, Abiodun M. ; Ezugwu, Absalom E. ; Abualigah, Laith ; Abuhaija, Belal ; Heming, Jia: K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data. En: Information Sciences 622 (2023), p. 178–210.– ISSN 0020–0255
dc.relation.referencesJanssen, Dylan ; Pullan, Wayne ; Liew, Alan Wee-Chung. GPU Based Differential Evolution: New Insights and Comparative Study. 2024
dc.relation.referencesJeon, Se H. ; Hong, Seungwoo ; Lee, Ho J. ; Khazoom, Charles ; Kim, Sangbae. CusADi: A GPU Parallelization Framework for Symbolic Expressions and Optimal Control. 2024
dc.relation.referencesKarunarathne, Wathsala ; Bala, Indu ; Chauhan, Dikshit ; Roughan, Matthew ; Mitchell, Lewis. Modified CMA-ES Algorithm for Multi-Modal Optimization: Incorporating Niching Strategies and Dynamic Adaptation Mechanism. 2024
dc.relation.referencesKortelainen, Matti J. ; Kwok, Martin ; on behalf of the CMS Collaboration: Performance of CUDA Unified Memory in CMS Heterogeneous Pixel Reconstruction. En: EPJ Web Conf. 251 (2021), p. 03035
dc.relation.referencesKrömer, Pavel ; Platos, Jan ; Snásel, Václav: Evolutionary Clustering on CUDA. En: De Raedt, Luc (Ed.) [et al.]: ECAI 2012 – 20th European Conference on Artificial Intelligence Vol. 242, IOS Press, 2012, p. 909–910
dc.relation.referencesLee, D. ; Dinov, I. ; Dong, B. ; Gutman, B. ; Yanovsky, I. ; Toga, A. W.: CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms. En: Computer Methods and Programs in Biomedicine 106 (2012), Nr. 3, p. 175–187
dc.relation.referencesLee, Sunjung ; Hwang, Seunghwan ; Kim, Michael J. ; Choi, Jaewan ; Ahn, Jung H.: Future Scaling of Memory Hierarchy for Tensor Cores and Eliminating Redundant Shared Memory Traffic Using Inter-Warp Multicasting. En: IEEE Transactions on Computers 71 (2022), Nr. 12, p. 3115–3126
dc.relation.referencesLeón, E. ; Nasraoui, O. ; Gómez, J.: ECSAGO: Evolutionary Clustering with Self Adaptive Genetic Operators. En: 2006 IEEE International Conference on Evolutionary Computation, 2006, p. 1768–1775
dc.relation.referencesLeón, Elizabeth ; Gómez, Jonatan ; Nasraoui, Olfa: A Genetic Niching Algorithm with Self-Adaptating Operator Rates for Document Clustering. En: 2012 Eighth Latin American Web Congress, 2012, p. 79–86
dc.relation.referencesLeón, Elizabeth ; Nasraoui, Olfa ; Gómez, Jonatan: Scalable evolutionary clustering algorithm with Self Adaptive Genetic Operators. En: IEEE Congress on Evolutionary Computation, 2010, p. 1–8
dc.relation.referencesLi, Hao ; Liang, Zhenyu ; Cheng, Ran: GPU-accelerated Evolutionary Many-objective Optimization Using Tensorized NSGA-III. En: 2025 IEEE Congress on Evolutionary Computation (CEC), 2025, p. 1–8
dc.relation.referencesLi, Shuijia ; Wang, Rui ; Gong, Wenyin ; Liao, Zuowen ; Wang, Ling: A Co-Evolutionary Dual Niching Differential Evolution Algorithm for Nonlinear Equation Systems Optimization. En: IEEE Transactions on Emerging Topics in Computational Intelligence 9 (2025), Nr. 1, p. 109–118
dc.relation.referencesLiang, Zhenyu ; Jiang, Tao ; Sun, Kebin ; Cheng, Ran: GPU-accelerated Evolutionary Multiobjective Optimization Using Tensorized RVEA. En: Proceedings of the Genetic and Evolutionary Computation Conference. New York, NY, USA : Association for Computing Machinery, 2024 (GECCO ’24).– ISBN 9798400704949, p. 566–575
dc.relation.referencesLiang, Zhenyu ; Li, Hao ; Yu, Naiwei ; Sun, Kebin ; Cheng, Ran: Bridging Evolutionary Multiobjective Optimization and GPU Acceleration via Tensorization. En: IEEE Transactions on Evolutionary Computation (2025), p. 1–1
dc.relation.referencesLin, Huanxin ; Wang, Cho-Li ; Liu, Hongyuan: On-GPU Thread-Data Remapping for Branch Divergence Reduction. En: ACM Trans. Archit. Code Optim. 15 (2018), Oktober, Nr. 3.– ISSN 1544–3566
dc.relation.referencesLiu, Yan ; Qiao, Hong ; Wang, Junbin ; Jiang, Yunfei: Influencing mechanism of the intellectual capability of big data analytics on the operational performance of enterprises. En: Heliyon 10 (2024), Nr. 3, p. e25032.– ISSN 2405–8440
dc.relation.referencesMatanga, Yves ; Owolawi, Pius ; Du, Chunling ; van Wyk, Etienne: Niching Global Optimisation: Systematic Literature Review. En: Algorithms 17 (2024), Nr. 10.– ISSN 1999–4893
dc.relation.referencesMeng, Xiang. Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization. 2024
dc.relation.referencesMing, Yuewei ; Zhu, En ; Wang, Mao ; Liu, Qiang ; Liu, Xinwang ; Yin, Jianping: Scalable k-means for large-scale clustering. En: Intelligent Data Analysis 23 (2019), Nr. 4, p. 825–838
dc.relation.referencesMonko, Gloriana ; Kimura, Masaomi: Optimized DBSCAN Parameter Selection: Stratified Sampling for Epsilon and GridSearch for Minimum Samples. En: Computer Science & Information Technology (CS & IT) (2023), October, p. 43–61.– Available at SSRN
dc.relation.referencesMuravyov, Sergey ; Antipov, Denis ; Buzdalova, Arina ; Filchenkov, Andrey: Efficient Computation of Fitness Function for Evolutionary Clustering. En: MENDEL 25 (2019), Jun., Nr. 1, p. 87–94
dc.relation.referencesMurtagh, Fionn ; Legendre, Pierre: Ward’s hierarchical agglomerative clustering method: which algorithms implement it correctly? En: Journal of Classification 31 (2014), Nr. 3, p. 274–295
dc.relation.referencesNasraoui, O. ; Krishnapuram, R.: A novel approach to unsupervised robust clustering using genetic niching. En: Ninth IEEE International Conference on Fuzzy Systems. FUZZ-IEEE 2000 (Cat. No.00CH37063) Vol. 1, 2000, p. 170–175 vol.1
dc.relation.referencesNasraoui, O. ; León, Elizabeth: Scalable and adaptive evolutionary clustering for noisy and dynamic data, 2005
dc.relation.referencesIn: Nasraoui, Olfa ; Leon, Elizabeth ; Krishnapuram, Raghu: Unsupervised Niche Clustering: Discovering an Unknown Number of Clusters in Noisy Data Sets. Berlin, Heidelberg : Springer Berlin Heidelberg, 2005, p. 157–188.– ISBN 978–3–540–32358–7
dc.relation.referencesNVIDIA Corporation: CUDA Programming Guide. 2024.– Version 12.3
dc.relation.referencesNVIDIA Corporation: CUDA Zone. https://developer.nvidia.com/ cuda-zone. 2024.– Accessed: 2025-07-13
dc.relation.referencesOkuta, Ryosuke ; Unno, Yuya ; Nishino, Daisuke ; Hido, Shohei ; Crissman: CuPy : A NumPy-Compatible Library for NVIDIA GPU Calculations, 2017
dc.relation.referencesOsuna, Edgar C. ; Sudholt, Dirk: Runtime analysis of probabilistic crowding and restricted tournament selection for bimodal optimisation. En: Proceedings of the Genetic and Evolutionary Computation Conference, ACM, Juli 2018 (GECCO ’18), p. 929–936
dc.relation.referencesPark, Bumgyu ; Park, Jonglae ; Joo, Hyunwook ; Park, Choonghoon ; Lee, Daeyeong ; Jo, Chulmin ; Hur, Woonhaing: DVFS method of memory hierarchy based on CPU microarchitectural information. En: 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2022, p. 1–4
dc.relation.referencesPham, Vu Hong S. ; Nguyen Dang, Nghiep T. ; Nguyen, Van N.: Enhancing engineering optimization using hybrid sine cosine algorithm with Roulette wheel selection and opposition-based learning. En: Scientific Reports 14 (2024), Nr. 1, p. 694.– ISSN 2045–2322
dc.relation.referencesRaju, Vadicherla ; Supreethi, K. P.: Subspace Clustering for High-Dimensional Data: A Survey of Methods, Challenges, and Conceptual Frameworks for Future Research. En: Journal of Information Systems Engineering and Management 10 (2025), Nr. 35s.– Published April 11, 2025; accessed 2025-07-13
dc.relation.referencesRen, Yuhong ; Tang, Jiafu ; Yu, Yang ; Li, Xiaolong: A two-stage stochastic programming model and parallel Master–Slave adaptive GA for flexible Seru system formation. En: International Journal of Production Research 62 (2024), Nr. 4, p. 1144–1161
dc.relation.referencesRichter, Samuel N.: Evolved parameterized selection for evolutionary algorithms, 2019
dc.relation.referencesRico, Noelia ; Díaz, Irene: A more informed clustering algorithm through the aggregation of linkage methods. En: 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2022, p. 1–8
dc.relation.referencesSaumya, Charitha ; Sundararajah, Kirshanthan ; Kulkarni, Milind. DARM: Control-Flow Melding for SIMT Thread Divergence Reduction– Extended Version. 2022
dc.relation.referencesSembach, Lena ; Burgard, Jan P. ; Schulz, Volker: A Riemannian Newton Trust-Region Method for Fitting Gaussian Mixture Models. En: Statistics and Computing 32 (2021), Nr. 1, p. 8.– Published 17 December 2021; Open access; accessed 2025-07-13
dc.relation.referencesShah, Nasir A. ; Lazarescu, Mihai T. ; Quasso, Roberto ; Lavagno, Luciano: CUDA-Optimized GPU Acceleration of 3GPP 3D Channel Model Simulations for 5G Network Planning. En: Electronics 12 (2023), Nr. 15.– ISSN 2079–9292
dc.relation.referencesShi, Jinliang ; Li, Shigang ; Xu, Youxuan ; Fu, Rongtian ; Wang, Xueying ; Wu, Tong. FlashSparse: Minimizing Computation Redundancy for Fast Sparse Matrix Multiplications on Tensor Cores. 2024
dc.relation.referencesda Silva Reis, César Augusto B. ; Botezelli, Daniel ; de Azevedo, Arthur M. ; dos Santos Magalhães, Elisan ; da Silveira Neto, Aristeu: Accelerating Conjugate Heat Transfer Simulations in Squared Heated Cavities through Graphics Processing Unit (GPU) Computing. En: Computation 12 (2024), Nr. 5.– ISSN 2079–3197
dc.relation.referencesSitchinava, Nodari ; Weichert, Volker. Bank Conflict Free Comparison-based Sorting On GPUs. 2016
dc.relation.referencesSkorpil, Vladimir ; Oujezsky, Vaclav: Parallel Genetic Algorithms’ Implementation Using a Scalable Concurrent Operation in Python. En: Sensors (Basel, Switzerland) 22 (2022), Nr. 6, p. 2389.– ISSN 1424–8220
dc.relation.referencesSong, Dongmei: Optimization and acceleration of image processing algorithms based on CUDA parallel architecture. En: Zhang, Jie (Ed.) ; Sun, Ning (Ed.): Third International Conference on Electronic Information Engineering, Big Data, and Computer Technology (EIBDCT 2024) Vol. 13181 International Society for Optics and Photonics, SPIE, 2024, p. 131811X
dc.relation.referencesStahl, Daniel ; Sallis, Hannah: Model-based cluster analysis. En: Wiley Interdisciplinary Reviews: Computational Statistics 4 (2012), Juli, Nr. 4, p. 341–358
dc.relation.referencesSun, Zhuoran ; Liu, Ying Y. ; Thulasiraman, Parimala ; Thulasiram, Ruppa: Parallel Co-Evolutionary Algorithm and Implementation on CPU-GPU Multicore. En: Proceedings of the Genetic and Evolutionary Computation Conference Companion. New York, NY, USA : Association for Computing Machinery, 2024 (GECCO ’24 Companion).– ISBN 9798400704956, p. 109–110
dc.relation.referencesTang, Yujin ; Tian, Yingtao ; Ha, David: EvoJAX: hardware-accelerated neuroevolution. En: Proceedings of the Genetic and Evolutionary Computation Conference Companion, ACM, Juli 2022 (GECCO ’22), p. 308–311
dc.relation.referencesThamma, Sankara R.: Reimagining Credit Underwriting with Agentic AI : Real-Time Decisions, Transparent Scoring, and Inclusive Automation. En: International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10 (2024), Dec., Nr. 6, p. 390–395
dc.relation.referencesThanh, Pham D. ; Dung, Dinh A. ; Tien, Tran N. ; Binh, Huynh Thi T.: An Effective Representation Scheme in Multifactorial Evolutionary Algorithm for Solving Cluster Shortest-Path Tree Problem. En: 2018 IEEE Congress on Evolutionary Computation (CEC), 2018, p. 1–8
dc.relation.referencesTian, Ye ; Chen, Haowen ; Xiang, Xiaoshu ; Jiang, Hao ; Zhang, Xingyi: A Comparative Study on Evolutionary Algorithms and Mathematical Programming Methods for Continuous Optimization. En: 2022 IEEE Congress on Evolutionary Computation (CEC), 2022, p. 1–8
dc.relation.referencesToklu, Nihat E. ; Atkinson, Timothy ; Micka, Vojtěch ; Liskowski, Pawel ; Srivastava, Rupesh K. EvoTorch: Scalable Evolutionary Computation in Python. 2023
dc.relation.referencesValkovič, Patrik ; Pilát, Martin: Implementing and evaluating parallel evolutionary algorithms in modern GPU computing libraries. En: Proceedings of the Genetic and Evolutionary Computation Conference Companion. New York, NY, USA : Association for Computing Machinery, 2022 (GECCO ’22).– ISBN 9781450392686, p. 506–509
dc.relation.referencesWang, Boqun ; Zhang, Hailong ; Nie, Jun ; Wang, Jie ; Ye, Xinchen ; Ergesh, Toktonur ; Zhang, Meng ; Li, Jia ; Wang, Wanqiong: Multipopulation Genetic Algorithm Based on GPU for Solving TSP Problem. En: Mathematical Problems in Engineering 2020 (2020), August, p. 1398595
dc.relation.referencesWang, Hu-Long ; Yang, Qiang ; Gao, Xu-Dong ; Lu, Zhen-Yu: Population-Level Hybridization between Roulette Wheel Selection and Tournament Selection for Particle Swarm Optimization. En: 2024 11th International Conference on Machine Intelligence Theory and Applications (MiTA), 2024, p. 1–8
dc.relation.referencesWani, Abdul A.: Comprehensive Analysis of Clustering Algorithms: Exploring Limi tations and Innovative Solutions. En: PeerJ Computer Science 10 (2024), p. e2286. Accessed: 2025-07-13
dc.relation.referencesWEN, Tong ; HUA, Ji-xue ; YANG, Jin-shuai ; ZHAI, Xi-yang: Memetic Differential Evolution with Baldwin Effect and Opposition-Based Learning. En: DEStech Transactions on Computer Science and Engineering (2017), 07
dc.relation.referencesXiao, Ethan: Comprehensive K-Means Clustering. En: Journal of Computer and Communications 12 (2024), Nr. 3, p. 146–159.– Accessed: 2025-07-13
dc.relation.referencesXie, Songjie ; Wu, Youlong ; Liao, Kewen ; Chen, Lu ; Liu, Chengfei ; Shen, Haifeng ; Tang, MingJian ; Sun, Lu: Fed-SC: One-Shot Federated Subspace Clustering over High-Dimensional Data. En: 2023 IEEE 39th International Conference on Data Engineering (ICDE), 2023, p. 2905–2918
dc.relation.referencesXiong, Q. Y. ; Huang, S. Y. ; Yuan, Z. G. ; Jiang, K. ; Wei, Y. Y. ; Xu, S. B. ; Zhang, J. ; Wang, Z. ; Lin, R. T. ; Yu, L.: A Scheme of Full Kinetic Particle-in-cell Algorithms for GPU Acceleration Using CUDA Fortran Programming. En: The Astrophysical Journal Supplement Series 264 (2022), dec, Nr. 1, p. 3
dc.relation.referencesXu, Zhenhao ; Cao, Yu ; Zhou, Zhiwei ; Li, Yiyuan ; Shi, Yaoyao ; Zhao, Jinhui: An evolutionary multi-objective energy management method for PV-battery-diesel microgrids under uncertain scenarios. En: IET Generation, Transmission & Distribution 18 (2024), Nr. 13, p. 2905–2917
dc.relation.referencesYang, Xin-She. Biology-Derived Algorithms in Engineering Optimization. 2010
dc.relation.referencesYuan, X. ; Zhang, T. ; Dai, X. [et al.]: Master–slave model-based parallel chaos optimization algorithm for parameter identification problems. En: Nonlinear Dynamics 83 (2016), p. 1727–1741
dc.relation.referencesZăvoianu, Alexandru-Ciprian ; Lughofer, Edwin ; Koppelstätter, Werner ; Weidenholzer, Günther ; Amrhein, Wolfgang ; Klement, Erich P.: On the Performance of Master-Slave Parallelization Methods for Multi-Objective Evolutionary Algorithms. En: Rutkowski, Leszek (Ed.) ; Korytkowski, Marcin (Ed.) ; Scherer, Rafal (Ed.) ; Tadeusiewicz, Ryszard (Ed.) ; Zadeh, Lotfi A. (Ed.) ; Zurada, Jacek M. (Ed.): Artificial Intelligence and Soft Computing. Berlin, Heidelberg : Springer Berlin Heidelberg, 2013.– ISBN 978–3–642–38610–7, p. 122–134
dc.relation.referencesZhang, Shaojie: Time Predictable Modeling Method for GPU Architecture with SIMT and Cache Miss Awareness. En: Journal of Electronic Research and Application 8 (2024), Nr. 2, p. 109–114.– ISSN 2208–3510 (Online), 2208–3502 (Print)
dc.relation.referencesZhang, Weixiang ; Xie, Shuzhao ; Ren, Chengwei ; Xie, Siyi ; Tang, Chen ; Ge, Shijia ; Wang, Mingzi ; Wang, Zhi. EVOS: Efficient Implicit Neural Training via EVOlutionary Selector. 2025
dc.relation.referencesZubanovic, D. ; Hidic, A. ; Hajdarevic, A. ; Nosovic, N. ; Konjicija, S.: Performance analysis of parallel master-slave Evolutionary strategies (,) model Python implementation for CPU and GPGPU. En: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014, p. 1609–1613
dc.rights.accessrightsinfo:eu-repo/semantics/openAccess
dc.rights.licenseReconocimiento 4.0 Internacional
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computación
dc.subject.lembALGORITMOS (COMPUTADORES)spa
dc.subject.lembComputer algorithmseng
dc.subject.lembPROGRAMACION (COMPUTADORES ELECTRONICOS)spa
dc.subject.lembProgramming (electronic computer)eng
dc.subject.lembANALISIS DE ENVOLVIMIENTO DE DATOSspa
dc.subject.lembData envelopment analysiseng
dc.subject.lembPROGRAMACION EVOLUTIVA (COMPUTACION)spa
dc.subject.lembEvolutionary programming (Computer science)eng
dc.subject.proposalGpu parallelizationeng
dc.subject.proposalCudaeng
dc.subject.proposalEvolutionary clusteringeng
dc.subject.proposalECSAGO algorithmeng
dc.subject.proposalHigh-performance computingeng
dc.subject.proposalMaster-slave architectureeng
dc.subject.proposalScalability optimizationeng
dc.subject.proposalParalelización gpuspa
dc.subject.proposalAlgoritmo ECSAGOspa
dc.subject.proposalComputación de alto rendimientospa
dc.subject.proposalOptimización de escalabilidadspa
dc.subject.proposalAgrupamiento evolutivospa
dc.subject.proposalArquitectura maestro-esclavospa
dc.titleGPU/CUDA-Based parallelization of the ECSAGO evolutionary algorithmspa
dc.title.translatedParalelización del algoritmo evolutivo ECSAGO usando GPU/CUDAeng
dc.typeTrabajo de grado - Maestría
dc.type.coarhttp://purl.org/coar/resource_type/c_bdcc
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.contentText
dc.type.driverinfo:eu-repo/semantics/masterThesis
dc.type.redcolhttp://purl.org/redcol/resource_type/TM
dc.type.versioninfo:eu-repo/semantics/acceptedVersion
dcterms.audience.professionaldevelopmentEstudiantes
dcterms.audience.professionaldevelopmentMaestros
dcterms.audience.professionaldevelopmentInvestigadores
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2

Files

Original bundle
Name: Paralelización del Algoritmo Evolutivo ECSAGO usando GPU-CUDA.pdf
Size: 5.36 MB
Format: Adobe Portable Document Format
Description: Master's thesis in Systems and Computing Engineering

License bundle
Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission