An explainable Kernel-driven approach for reliable supervised learning
| dc.contributor.advisor | Álvarez Meza, Andrés Marino | |
| dc.contributor.author | Lugo Rojas, Juan Camilo | |
| dc.contributor.cvlac | Lugo Rojas, Juan Camilo [0002054445] | |
| dc.contributor.googlescholar | Lugo Rojas, Juan Camilo [WE7a9rAAAAAJ] | |
| dc.contributor.orcid | Lugo Rojas, Juan Camilo [0009000299456421] | |
| dc.contributor.researchgroup | Grupo de Control y Procesamiento Digital de Señales | |
| dc.date.accessioned | 2026-02-27T12:28:04Z | |
| dc.date.available | 2026-02-27T12:28:04Z | |
| dc.date.issued | 2025 | |
| dc.description | gráficas, tablas | spa |
| dc.description.abstract | Kernel methods provide a principled and mathematically rigorous framework for machine learning, offering strong generalization guarantees and natural mechanisms for uncertainty quantification. Despite these advantages, their adoption in modern large-scale and heterogeneous scenarios has been hindered by computational limitations, sensitivity to noisy or inconsistent supervision, and difficulties in extending them to structured or non-Euclidean data. Addressing these challenges is crucial to preserve the theoretical strengths of kernel methods while making them viable for contemporary applications in artificial intelligence. The main problem tackled in this thesis is the limited practicality of kernel methods in real-world machine learning tasks, where three factors are especially restrictive: the computational cost of scaling to large datasets, the fragility of kernel-based models under noisy or multi-annotator supervision, and the difficulty of interpreting or applying them in settings where structure and reliability are central. This thesis addresses these issues through three complementary research directions: scalability, robustness, and interpretability. First, we introduce CRFFDT-Net, a scalable architecture based on Convolutional Random Fourier Features, which achieves competitive accuracy for Automatic Modulation Classification while significantly reducing computational cost. Second, we propose the MAR-CCGP framework, a robust Gaussian process model that captures inter-annotator correlations and input-dependent noise, enabling reliable regression under heterogeneous and noisy supervision in both semi-synthetic and real sensory datasets. Third, we extend interpretability in kernel-based learning by analyzing GradCAM++ maps for deep architectures in signal classification and by deriving localized trustworthiness measures for multi-annotator regression, thus providing new perspectives on model reasoning and annotator reliability. Overall, this thesis demonstrates that kernel methods, when enhanced with scalable approximations, robust probabilistic formulations, and principled interpretability tools, remain a powerful paradigm for machine learning in modern contexts. The proposed contributions bridge theoretical elegance and practical applicability, offering efficient, reliable, and interpretable solutions. These results open promising directions for future research, including advanced random feature variants, richer Gaussian process formulations, and new interpretability frameworks that connect kernel-based representations with deep architectures in increasingly complex domains (Text taken from the source). | eng |
| dc.description.abstract | Los métodos kernel proporcionan un marco teórico sólido y matemáticamente riguroso para el aprendizaje automático, ofreciendo fuertes garantías de generalización y mecanismos naturales para la cuantificación de la incertidumbre. A pesar de estas ventajas, su adopción en escenarios modernos a gran escala y heterogéneos se ha visto limitada por restricciones computacionales, sensibilidad a supervisión ruidosa o inconsistente, y dificultades para extenderlos a datos estructurados o no euclidianos. Abordar estos desafíos es fundamental para preservar las fortalezas teóricas de los métodos kernel y hacerlos viables en aplicaciones contemporáneas de inteligencia artificial. El problema principal abordado en esta tesis es la limitada aplicabilidad práctica de los métodos kernel en tareas reales de aprendizaje automático, donde tres factores resultan especialmente restrictivos: el costo computacional al escalar a grandes conjuntos de datos, la fragilidad de los modelos basados en kernels frente a supervisión ruidosa o proveniente de múltiples anotadores, y la dificultad de interpretarlos o aplicarlos en contextos donde la estructura y la confiabilidad son centrales. Esta tesis aborda estas limitaciones a través de tres líneas de investigación complementarias: escalabilidad, robustez e interpretabilidad. En primer lugar, se introduce CRFFDT-Net, una arquitectura escalable basada en Características Aleatorias de Fourier Convolucionales, que alcanza un rendimiento competitivo en Clasificación Automática de Modulación mientras reduce significativamente el costo computacional. En segundo lugar, se propone el marco MAR-CCGP, un modelo robusto de procesos gaussianos que captura correlaciones entre anotadores y ruido dependiente de la entrada, permitiendo regresión confiable bajo supervisión heterogénea y ruidosa tanto en conjuntos de datos semisintéticos como en datos sensoriales reales. En tercer lugar, se amplía la interpretabilidad en el aprendizaje basado en kernels mediante el análisis de mapas GradCAM++ en arquitecturas profundas para clasificación de señales y mediante la derivación de medidas localizadas de confiabilidad en regresión con múltiples anotadores, proporcionando nuevas perspectivas sobre el razonamiento del modelo y la fiabilidad de los anotadores. En conjunto, esta tesis demuestra que los métodos kernel, cuando se potencian con aproximaciones escalables, formulaciones probabilísticas robustas y herramientas de interpretabilidad fundamentadas, siguen siendo un paradigma poderoso para el aprendizaje automático en contextos modernos. Las contribuciones propuestas conectan la elegancia teórica con la aplicabilidad práctica, ofreciendo soluciones eficientes, confiables e interpretables. Estos resultados abren direcciones prometedoras para futuras investigaciones, incluyendo variantes avanzadas de características aleatorias, formulaciones más ricas de procesos gaussianos y nuevos marcos de interpretabilidad que conecten representaciones basadas en kernels con arquitecturas profundas en dominios cada vez más complejos. | spa |
| dc.description.curriculararea | Eléctrica, Electrónica, Automatización y Telecomunicaciones. Sede Manizales | |
| dc.description.degreelevel | Maestría | |
| dc.description.degreename | Magíster en Ingeniería - Automatización Industrial | |
| dc.format.extent | xxv, 131 páginas | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.instname | Universidad Nacional de Colombia | spa |
| dc.identifier.reponame | Repositorio Institucional Universidad Nacional de Colombia | spa |
| dc.identifier.repourl | https://repositorio.unal.edu.co/ | spa |
| dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/89695 | |
| dc.language.iso | eng | |
| dc.publisher | Universidad Nacional de Colombia | |
| dc.publisher.branch | Universidad Nacional de Colombia - Sede Manizales | |
| dc.publisher.faculty | Facultad de Ingeniería y Arquitectura | |
| dc.publisher.place | Manizales, Colombia | |
| dc.publisher.program | Manizales - Ingeniería y Arquitectura - Maestría en Ingeniería - Automatización Industrial | |
| dc.relation.indexed | Agrosavia | |
| dc.relation.indexed | Bireme | |
| dc.relation.indexed | RedCol | |
| dc.relation.indexed | LaReferencia | |
| dc.relation.indexed | Agrovoc | |
| dc.relation.references | [Abedsoltan et al., 2023] Abedsoltan, A., Belkin, M., and Pandit, P. (2023). Toward large kernel models. In International Conference on Machine Learning, pages 61–78. PMLR | |
| dc.relation.references | [Achatz et al., 2025] Achatz, J., Sailer, P., Mayer, S., and Schubert, M. (2025). An explainable segmentation decision tree model for enhanced decision support in roundwood sorting. Knowledge-Based Systems, page 113814 | |
| dc.relation.references | [Adebayo et al., 2018] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. (2018). Sanity checks for saliency maps. Advances in neural information processing systems, 31 | |
| dc.relation.references | [Afoakwa et al., 2008] Afoakwa, E. O., Paterson, A., Fowler, M., and Ryan, A. (2008). Flavor formation and character in cocoa and chocolate: a critical review. Critical reviews in food science and nutrition, 48(9):840–857 | |
| dc.relation.references | [Agarwal et al., 2021] Agarwal, R., Melnick, L., Frosst, N., Zhang, X., Lengerich, B., Caruana, R., and Hinton, G. E. (2021). Neural additive models: Interpretable machine learning with neural nets. Advances in neural information processing systems, 34:4699–4711 | |
| dc.relation.references | [Alaoui and Mahoney, 2015] Alaoui, A. and Mahoney, M. W. (2015). Fast randomized kernel ridge regression with statistical guarantees. Advances in neural information processing systems, 28 | |
| dc.relation.references | [Algikar and Mili, 2023] Algikar, P. and Mili, L. (2023). Robust gaussian process regression with huber likelihood. arXiv preprint arXiv:2301.07858 | |
| dc.relation.references | [Altamirano et al., 2023] Altamirano, M., Briol, F.-X., and Knoblauch, J. (2023). Robust and conjugate gaussian process regression. arXiv preprint arXiv:2311.00463 | |
| dc.relation.references | [Alvarez Melis and Jaakkola, 2018] Alvarez Melis, D. and Jaakkola, T. (2018). Towards robust interpretability with self-explaining neural networks. Advances in neural information processing systems, 31 | |
| dc.relation.references | [Ament et al., 2024] Ament, S., Santorella, E., Eriksson, D., Letham, B., Balandat, M., and Bakshy, E. (2024). Robust gaussian processes via relevance pursuit. Advances in Neural Information Processing Systems, 37:61700–61734 | |
| dc.relation.references | [AOAC International, 2019a] AOAC International (2019a). Aoac official method 931.04: Moisture in cocoa products. Available at: https://www.aoac.org | |
| dc.relation.references | [AOAC International, 2019b] AOAC International (2019b). Aoac official method 963.15: Fat (crude) in cacao products. Available at: https://www.aoac.org | |
| dc.relation.references | [Aprotosoaie et al., 2016] Aprotosoaie, A. C., Luca, S. V., and Miron, A. (2016). Flavor chemistry of cocoa and cocoa products—an overview. Comprehensive Reviews in Food Science and Food Safety, 15(1):73–91 | |
| dc.relation.references | [Araujo et al., 2023] Araujo, S., Peres, R., Ramalho, J., Lidon, F., and Barata, J. (2023). Machine learning applications in agriculture: Current trends, challenges, and future perspectives. Agronomy, 13(12):2976 | |
| dc.relation.references | [Asif et al., 2021] Asif, N., Sarker, Y., Chakrabortty, R., and Ryan, M. (2021). Graph neural network: A comprehensive review on non-euclidean space. IEEE Access | |
| dc.relation.references | [Azur et al., 2011] Azur, M. J., Stuart, E. A., Frangakis, C., and Leaf, P. J. (2011). Multiple imputation by chained equations: what is it and how does it work? International journal of methods in psychiatric research, 20(1):40–49 | |
| dc.relation.references | [Bach et al., 2015] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140 | |
| dc.relation.references | [Baruah and Mazumder, 2025] Baruah, S. and Mazumder, D. (2025). A review on application of machine learning techniques coupled with e-nose in healthcare, agriculture and allied domains. IEEE Xplore | |
| dc.relation.references | [Beckett et al., 2017] Beckett, S. T., Fowler, M. S., and Ziegler, G. R. (2017). Beckett’s industrial chocolate manufacture and use. John Wiley & Sons | |
| dc.relation.references | [Bibal et al., 2022] Bibal, A., Cardon, R., Alfter, D., Wilkens, R., Wang, X., François, T., and Watrin, P. (2022). Is attention explanation? an introduction to the debate. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3889–3900 | |
| dc.relation.references | [Boyne et al., 2025] Boyne, T., Folch, J., Lee, R., and Shafei, B. (2025). Bark: A fully bayesian tree kernel for black-box optimization. arXiv preprint arXiv:2503.05574 | |
| dc.relation.references | [Breiman et al., 2017] Breiman, L., Friedman, J., Olshen, R. A., and Stone, C. J. (2017). Classification and regression trees. Chapman and Hall/CRC | |
| dc.relation.references | [Bronstein et al., 2021] Bronstein, M. M., Bruna, J., Cohen, T., and Veličković, P. (2021). Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478 | |
| dc.relation.references | [Campese et al., 2022] Campese, S., Agostini, F., and Pazzini, J. (2022). Beyond transformers: fault type detection in maintenance tickets with kernel methods, boost decision trees and neural networks. In IEEE International Joint Conference on Neural Networks (IJCNN) | |
| dc.relation.references | [Carneiro, 2024] Carneiro, G. (2024). Machine Learning with Noisy Labels: Definitions, Theory, Techniques and Solutions. Springer | |
| dc.relation.references | [Chami et al., 2022] Chami, I., Abu-El-Haija, S., Perozzi, B., and Ré, C. (2022). Machine learning on graphs: A model and comprehensive taxonomy. Journal of Machine Learning Research, 23(234):1–92 | |
| dc.relation.references | [Chang and Shahrampour, 2022] Chang, T.-J. and Shahrampour, S. (2022). Rfn: A random-feature based newton method for empirical risk minimization in reproducing kernel hilbert spaces. IEEE Transactions on Signal Processing, 70:5308–5319 | |
| dc.relation.references | [Chattopadhay et al., 2018] Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V. N. (2018). Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 839–847. IEEE | |
| dc.relation.references | [Chattopadhyay et al., 2017] Chattopadhyay, A., Sarkar, A., Howlader, P., and Balasubramanian, V. N. (2017). Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. CoRR, abs/1710.11063 | |
| dc.relation.references | [Chefer et al., 2021] Chefer, H., Gur, S., and Wolf, L. (2021). Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 782–791 | |
| dc.relation.references | [Chen et al., 2019] Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., and Su, J. K. (2019). This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems, 32 | |
| dc.relation.references | [Chen et al., 2021a] Chen, L., Zhou, S., Ma, J., and Xu, M. (2021a). Fast kernel kmeans clustering using incomplete cholesky factorization. Applied Mathematics and Computation, 402:126037 | |
| dc.relation.references | [Chen et al., 2025] Chen, Y., Epperly, E. N., Tropp, J. A., and Webber, R. J. (2025). Randomly pivoted cholesky: Practical approximation of a kernel matrix with few entry evaluations. Communications on Pure and Applied Mathematics, 78(5):995–1041 | |
| dc.relation.references | [Chen et al., 2021b] Chen, Z., Wang, H., Sun, H., Chen, P., Han, T., Liu, X., and Yang, J. (2021b). Structured probabilistic end-to-end learning from crowds. In Proceedings of the twenty-ninth international conference on international joint conferences on artificial intelligence, pages 1512–1518 | |
| dc.relation.references | [Chu et al., 2021] Chu, Z., Ma, J., and Wang, H. (2021). Learning from crowds by modeling common confusions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 5832–5840 | |
| dc.relation.references | [Colonges et al., 2022] Colonges, K., Seguine, E., Saltos, A., Davrieux, F., Minier, J., Jimenez, J.-C., Lahon, M.-C., Calderon, D., Subia, C., Sotomayor, I., et al. (2022). Diversity and determinants of bitterness, astringency, and fat content in cultivated nacional and native amazonian cocoa accessions from ecuador. The Plant Genome, 15(4):e20218 | |
| dc.relation.references | [Covert et al., 2021a] Covert, I., Lundberg, S., and Lee, S.-I. (2021a). Explaining by removing: A unified framework for model explanation. Journal of Machine Learning Research, 22(209):1–90 | |
| dc.relation.references | [Covert et al., 2021b] Covert, I., Lundberg, S., and Lee, S.-I. (2021b). Explaining by removing: A unified framework for model explanation. Journal of Machine Learning Research, 22(209):1–90 | |
| dc.relation.references | [Dai et al., 2014] Dai, B., Xie, B., He, N., Liang, Y., Raj, A., Balcan, M.-F., and Song, L. (2014). Scalable kernel methods via doubly stochastic gradients. Advances in neural information processing systems, 27 | |
| dc.relation.references | [Dai et al., 2020] Dai, Y., Zhang, T., Lin, Z., Yin, F., Theodoridis, S., and Cui, S. (2020). An interpretable and sample efficient deep kernel for gaussian process. In Conference on Uncertainty in Artificial Intelligence, pages 759–768. PMLR | |
| dc.relation.references | [Dalton et al., 2024] Dalton, D., Lazarus, A., Gao, H., and Husmeier, D. (2024). Boundary constrained gaussian processes for robust physics-informed machine learning of linear partial differential equations. Journal of Machine Learning Research, 25 | |
| dc.relation.references | [Davani et al., 2022] Davani, A. M., Díaz, M., and Prabhakaran, V. (2022). Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110 | |
| dc.relation.references | [Dawid and Skene, 1979] Dawid, A. P. and Skene, A. M. (1979). Maximum likelihood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20–28 | |
| dc.relation.references | [Doshi-Velez and Kim, 2017] Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 | |
| dc.relation.references | [Duque et al., 2022] Duque, A. F., Morin, S., and Wolf, G. (2022). Geometry regularized autoencoders. IEEE Transactions on Pattern Analysis and Machine Intelligence | |
| dc.relation.references | [Elias et al., 2022] Elias, V. R. M., Gogineni, V. C., Martins, W. A., and Werner, S. (2022). Kernel regression over graphs using random fourier features. IEEE Transactions on Signal Processing, 70:936–949 | |
| dc.relation.references | [Fang et al., 2021] Fang, J., Zhang, Q., Meng, Z., and Liang, S. (2021). Structure-aware random fourier kernel for graphs. Advances in Neural Information Processing Systems, 34:17681–17694 | |
| dc.relation.references | [Fine and Scheinberg, 2001] Fine, S. and Scheinberg, K. (2001). Efficient svm training using low-rank kernel representations. Journal of Machine Learning Research, 2(Dec):243–264 | |
| dc.relation.references | [Frénay and Verleysen, 2014] Frénay, B. and Verleysen, M. (2014). Classification in the presence of label noise: a survey. IEEE Transactions on Neural Networks and Learning Systems, 25(5):845–869 | |
| dc.relation.references | [Frittoli et al., 2022] Frittoli, L., Carrera, D., and Boracchi, G. (2022). Nonparametric and online change detection in multivariate datastreams using quanttree. IEEE Transactions on Knowledge and Data Engineering | |
| dc.relation.references | [Fu, 2025] Fu, Y. (2025). Error bounds estimate for gaussian processes and kernel methods and application in stability analysis. https://www.uwaterloo.ca/applied-mathematics/sites/default/files/uploads/documents/yirun-fu-reaserch-paper-.pdf | |
| dc.relation.references | [Gal and van der Wilk, 2014] Gal, Y. and van der Wilk, M. (2014). Variational inference in sparse gaussian process regression and latent variable models – a gentle tutorial. arXiv:1402.6842 | |
| dc.relation.references | [Gallup, 2024] Gallup (2024). Ai at work has nearly doubled in two years. https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx. Accessed: 2025-09-06 | |
| dc.relation.references | [Ghorbani et al., 2019] Ghorbani, A., Abid, A., and Zou, J. (2019). Interpretation of neural networks is fragile. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3681–3688 | |
| dc.relation.references | [Gil-González et al., 2025] Gil-González, J., Daza-Santacoloma, G., Cárdenas-Peña, D., Orozco-Gutiérrez, A., and Álvarez-Meza, A. (2025). Generalized cross-entropy for learning from crowds based on correlated chained gaussian processes. Results in Engineering, 25:103863 | |
| dc.relation.references | [Gil-Gonzalez et al., 2021a] Gil-Gonzalez, J., Giraldo, J.-J., Álvarez-Meza, A. M., Orozco-Gutierrez, A., and Álvarez, M. A. (2021a). Correlated chained gaussian processes for datasets with multiple annotators. IEEE Transactions on Neural Networks and Learning Systems, 34(8):4514–4528 | |
| dc.relation.references | [Gil-Gonzalez et al., 2021b] Gil-Gonzalez, J., Giraldo, J.-J., and Orozco-Gutierrez, A. (2021b). Correlated chained gaussian processes for datasets with multiple annotators. IEEE Transactions on Neural Networks and Learning Systems, 34(8):4514–4528 | |
| dc.relation.references | [Gil-Gonzalez et al., 2021c] Gil-Gonzalez, J., Orozco-Gutierrez, A., and Álvarez-Meza, A. (2021c). Learning from multiple inconsistent and dependent annotators to support classification tasks. Neurocomputing, 423:236–247 | |
| dc.relation.references | [Gogineni et al., 2022] Gogineni, V. C., Sambangi, R., Alex, D., Mula, S., and Werner, S. (2022). Algorithm and architecture design of random fourier features-based kernel adaptive filters. IEEE Transactions on Circuits and Systems I: Regular Papers, 70(2):833–845 | |
| dc.relation.references | [Goldberg et al., 1997] Goldberg, P., Williams, C., and Bishop, C. (1997). Regression with input-dependent noise: A gaussian process treatment. Advances in neural information processing systems, 10 | |
| dc.relation.references | [Gordon et al., 2021] Gordon, M. L., Zhou, K., Patel, K., Hashimoto, T., and Bernstein, M. S. (2021). The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 chi conference on human factors in computing systems, pages 1–14 | |
| dc.relation.references | [Grand View Research, 2024] Grand View Research (2024). Artificial intelligence market size, share & trends report 2024–2030. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market. Accessed: 2025-09-06 | |
| dc.relation.references | [Guidotti et al., 2018] Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):1–42 | |
| dc.relation.references | [Guo et al., 2023a] Guo, H., Wang, B., and Yi, G. (2023a). Label correction of crowdsourced noisy annotations with an instance-dependent noise transition model. Advances in Neural Information Processing Systems, 36:347–386 | |
| dc.relation.references | [Guo et al., 2023b] Guo, J., Sun, S., and Xu, W. (2023b). Modeling annotator expertise with auxiliary metadata for robust crowd labeling. In Advances in Neural Information Processing Systems | |
| dc.relation.references | [Han et al., 2020] Han, B., Yao, Q., Liu, T., Niu, G., Tsang, I. W., and Kwok, J. T. (2020). A survey of label-noise representation learning: Past, present and future. arXiv preprint arXiv:2011.04406 | |
| dc.relation.references | [Hastie, 2017] Hastie, T. J. (2017). Generalized additive models. Statistical models in S, pages 249–307 | |
| dc.relation.references | [He et al., 2024a] He, J., Zhang, X., and Liu, Y. (2024a). Efficient random feature methods for asymmetric and non-stationary kernels. In Proceedings of the AAAI Conference on Artificial Intelligence | |
| dc.relation.references | [He et al., 2024b] He, M., He, F., Liu, F., and Huang, X. (2024b). Random fourier features for asymmetric kernels. Machine Learning, 113(11):8459–8485 | |
| dc.relation.references | [Healy and McInnes, 2024] Healy, J. and McInnes, L. (2024). Uniform manifold approximation and projection. Nature Reviews Methods Primers, 4(1):82 | |
| dc.relation.references | [Hensman et al., 2015] Hensman, J., Matthews, A., and Ghahramani, Z. (2015). Scalable variational gaussian process classification. In Artificial intelligence and statistics, pages 351–360. PMLR | |
| dc.relation.references | [Herde et al., 2024] Herde, M., Lührs, L., Huseljic, D., and Sick, B. (2024). Annotmix: Learning with noisy class labels from multiple annotators via a mixup extension. arXiv preprint arXiv:2405.03386 | |
| dc.relation.references | [Hjort et al., 2024] Hjort, A., Scheel, I., Sommervoll, D. E., and Pensar, J. (2024). Locally interpretable tree boosting: An application to house price prediction. Decision Support Systems, 178:114106 | |
| dc.relation.references | [Huber, 1992] Huber, P. J. (1992). Robust estimation of a location parameter. In Breakthroughs in statistics: Methodology and distribution, pages 492–518. Springer | |
| dc.relation.references | [Humbert et al., 2022] Humbert, P., Le Bars, B., and Minvielle, L. (2022). Robust kernel density estimation with median-of-means principle. In International Conference on Machine Learning, pages 9444–9465. PMLR | |
| dc.relation.references | [ICONTEC, 2004] ICONTEC (2004). Ntc 3932: Sensory analysis – identification and selection of descriptors to establish a sensory profile using a multidimensional approach. Available at: https://www.icontec.org | |
| dc.relation.references | [International Office of Cocoa, Chocolate and Confectionery, 2000] International Office of Cocoa, Chocolate and Confectionery (2000). Ioccc analytical method 46: Viscosity of cocoa and chocolate. Available at: https://www.ioccc.org | |
| dc.relation.references | [International Organization for Standardization, 2020] International Organization for Standardization (2020). Iso 13320:2020 particle size analysis—laser diffraction methods. Available at: https://www.iso.org | |
| dc.relation.references | [Jain and Wallace, 2019] Jain, S. and Wallace, B. C. (2019). Attention is not explanation. arXiv preprint arXiv:1902.10186 | |
| dc.relation.references | [Jiang et al., 2023a] Jiang, J., Leofante, F., Rago, A., and Toni, F. (2023a). Formalising the robustness of counterfactual explanations for neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pages 14901–14909 | |
| dc.relation.references | [Jiang et al., 2023b] Jiang, W., Huang, Z., Wang, K., and Zhang, Q. (2023b). Counterfactual explanations with robustness guarantees. In Proceedings of the International Conference on Learning Representations | |
| dc.relation.references | [Johnson et al., 2020] Johnson, J. E., Laparra, V., Pérez-Suay, A., Mahecha, M. D., and Camps-Valls, G. (2020). Kernel methods and their derivatives: Concept and perspectives for the earth system sciences. Plos one, 15(10):e0235885 | |
| dc.relation.references | [Jonscher et al., 2024] Jonscher, C., Möller, S., Liesecke, L., Hofmeister, B., Grießmann, T., and Rolfes, R. (2024). Heteroscedastic gaussian processes for data normalisation in probabilistic novelty detection of a wind turbine. Engineering Structures, 305:117786 | |
| dc.relation.references | [Jylänki et al., 2011] Jylänki, P., Vanhatalo, J., and Vehtari, A. (2011). Robust gaussian process regression with a student-t likelihood. Journal of Machine Learning Research, 12(11) | |
| dc.relation.references | [Kaltiokallio et al., 2025] Kaltiokallio, O., Hostettler, R., Talvitie, J., and Valkama, M. (2025). Heteroscedastic gaussian process model for received signal strength based device-free localization. In 2025 IEEE/ION Position, Location and Navigation Symposium (PLANS), pages 980–991. IEEE | |
| dc.relation.references | [Kaur et al., 2020] Kaur, H., Nori, H., Jenkins, S., Caruana, R., Wallach, H., and Wortman Vaughan, J. (2020). Interpreting interpretability: understanding data scientists’ use of interpretability tools for machine learning. In Proceedings of the 2020 CHI conference on human factors in computing systems, pages 1–14 | |
| dc.relation.references | [Kersting et al., 2007] Kersting, K., Plagemann, C., Pfaff, P., and Burgard, W. (2007). Most likely heteroscedastic gaussian process regression. In Proceedings of the 24th international conference on Machine learning, pages 393–400 | |
| dc.relation.references | [Kim et al., 2016] Kim, B., Khanna, R., and Koyejo, O. O. (2016). Examples are not enough, learn to criticize! criticism for interpretability. Advances in neural information processing systems, 29 | |
| dc.relation.references | [Kim and Ghahramani, 2012] Kim, H.-C. and Ghahramani, Z. (2012). Bayesian classifier combination. In Artificial Intelligence and Statistics, pages 619–627. PMLR | |
| dc.relation.references | [Koh and Liang, 2017] Koh, P. W. and Liang, P. (2017). Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR | |
| dc.relation.references | [Koh et al., 2020] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., and Liang, P. (2020). Concept bottleneck models. In International conference on machine learning, pages 5338–5348. PMLR | |
| dc.relation.references | [Lázaro-Gredilla and Titsias, 2011] Lázaro-Gredilla, M. and Titsias, M. K. (2011). Variational heteroscedastic gaussian process regression. In ICML, pages 841–848 | |
| dc.relation.references | [Le et al., 2014] Le, Q. V., Sarlós, T., and Smola, A. J. (2014). Fastfood: Approximate kernel expansions in loglinear time. arXiv preprint arXiv:1408.3060 | |
| dc.relation.references | [Leem and Seo, 2024] Leem, S. and Seo, H. (2024). Attention guided cam: visual explanations of vision transformer guided by self-attention. In Proceedings of the AAAI conference on artificial intelligence, volume 38, pages 2956–2964 | |
| dc.relation.references | [Lei and Tao, 2023] Lei, S. and Tao, D. (2023). A comprehensive survey of dataset distillation. IEEE Transactions on Pattern Analysis and Machine Intelligence | |
| dc.relation.references | [Li et al., 2023a] Li, B., Zhao, X., and Liu, Q. (2023a). Multi-annotator learning with instance-dependent reliability. In Proceedings of the 40th International Conference on Machine Learning | |
| dc.relation.references | [Li et al., 2023b] Li, J., Liu, Y., and Wang, W. (2023b). Optimal convergence rates for distributed nyström approximation. Journal of Machine Learning Research, 24(141):1–39 | |
| dc.relation.references | [Li et al., 2023c] Li, J., Sun, H., and Li, J. (2023c). Beyond confusion matrix: learning from multiple annotators with awareness of instance features. Machine Learning, 112(3):1053–1075 | |
| dc.relation.references | [Li and Zhu, 2024] Li, M. and Zhu, C. (2024). Noisy label processing for classification: A survey. arXiv preprint arXiv:2404.04159 | |
| dc.relation.references | [Li et al., 2023d] Li, R., John, S., and Solin, A. (2023d). Improving hyperparameter learning under approximate inference in gaussian process models. In International Conference on Machine Learning, pages 19595–19615. PMLR | |
| dc.relation.references | [Li et al., 2023e] Li, S., Li, T., Sun, C., Yan, R., and Chen, X. (2023e). Multilayer gradcam: An effective tool towards explainable deep neural networks for intelligent fault diagnosis. Journal of manufacturing systems, 69:20–30 | |
| dc.relation.references | [Li et al., 2021a] Li, S., Yang, Z., and Gao, J. (2021a). Towards a unified analysis of random fourier features. Journal of Machine Learning Research, 22(139):1–38 | |
| dc.relation.references | [Li, 2022] Li, Z. (2022). Sharp analysis of random fourier features in classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 7444–7452 | |
| dc.relation.references | [Li et al., 2021b] Li, Z., Ton, J.-F., Oglic, D., and Sejdinovic, D. (2021b). Towards a unified analysis of random fourier features. Journal of Machine Learning Research, 22(108):1–51 | |
| dc.relation.references | [Liang et al., 2021a] Liang, J., Wu, Y., Xu, D., and Honavar, V. G. (2021a). Longitudinal deep kernel gaussian process regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8556–8564 | |
| dc.relation.references | [Liang et al., 2021b] Liang, J., Zhang, Y., and Wilson, A. G. (2021b). Longitudinal deep kernel gaussian processes. In Proceedings of the 38th International Conference on Machine Learning | |
| dc.relation.references | [Liang et al., 2022] Liang, X., Zhang, Z., Chen, X., and Jian, L. (2022). Kernel learning with nonconvex ramp loss. Statistical Analysis and Data Mining: The ASA Data Science Journal, 15(6):751–765 | |
| dc.relation.references | [Likhosherstov et al., 2022a] Likhosherstov, V., Choromanski, K., Pacchiano, A., and Weller, A. (2022a). Chefs’ random tables: Nontrivial spectral feature maps and approximate kernel methods. In Advances in Neural Information Processing Systems | |
| dc.relation.references | [Likhosherstov et al., 2022b] Likhosherstov, V., Choromanski, K. M., Dubey, K. A., Liu, F., Sarlos, T., and Weller, A. (2022b). Chefs’ random tables: Nontrigonometric random features. Advances in Neural Information Processing Systems, 35:34559–34573 | |
| dc.relation.references | [Liu et al., 2021a] Liu, F., Huang, X., Chen, Y., and Suykens, J. A. (2021a). Random features for kernel approximation: A survey on algorithms, theory, and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10):7128–7148 | |
| dc.relation.references | [Liu et al., 2023] Liu, H., Chen, J., Dy, J., and Fu, Y. (2023). Transforming complex problems into k-means solutions. IEEE Transactions on Pattern Analysis and Machine Intelligence | |
| dc.relation.references | [Liu et al., 2021b] Liu, H., Liu, T., Wang, Y., and Sun, P. (2021b). Random features for kernel approximation: A survey on algorithms, theory, and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(12):4233–4254 | |
| dc.relation.references | [Liu et al., 2021c] Liu, H., Thekinen, J., Mollaoglu, S., Tang, D., Yang, J., Cheng, Y., Liu, H., and Tang, J. (2021c). Toward annotator group bias in crowdsourcing. arXiv preprint arXiv:2110.08038 | |
| dc.relation.references | [Liu, 2021] Liu, Y. (2021). Refined learning bounds for kernel and approximate k-means. Advances in neural information processing systems, 34:6142–6154 | |
| dc.relation.references | [López-Pérez et al., 2021] López-Pérez, M., Amgad, M., Morales-Álvarez, P., Ruiz, P., Cooper, L. A., Molina, R., and Katsaggelos, A. K. (2021). Learning from crowds in digital pathology using scalable variational gaussian processes. Scientific reports, 11(1):11612 | |
| dc.relation.references | [López-Pérez et al., 2023] López-Pérez, M., Morales-Álvarez, P., Cooper, L. A., Molina, R., and Katsaggelos, A. K. (2023). Deep gaussian processes for classification with multiple noisy annotators. application to breast cancer tissue classification. IEEE Access, 11:6922–6934 | |
| dc.relation.references | [Lu et al., 2016] Lu, J., Hoi, S. C., Wang, J., Zhao, P., and Liu, Z.-Y. (2016). Large scale online kernel learning. Journal of Machine Learning Research, 17(47):1–43 | |
| dc.relation.references | [Lu et al., 2023a] Lu, R., Nguyen, T., and Teh, Y. W. (2023a). Robust and scalable gaussian process regression via structured variational inference. Journal of Machine Learning Research, 24(234):1–37 | |
| dc.relation.references | [Lu et al., 2023b] Lu, Y., Ma, J., Fang, L., Tian, X., and Jiang, J. (2023b). Robust and scalable gaussian process regression and its applications. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21950–21959 | |
| dc.relation.references | [Lundberg and Lee, 2017a] Lundberg, S. M. and Lee, S.-I. (2017a). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30 | |
| dc.relation.references | [Lundberg and Lee, 2017b] Lundberg, S. M. and Lee, S.-I. (2017b). A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 4768–4777 | |
| dc.relation.references | [Majdi and Rodriguez, 2023] Majdi, M. S. and Rodriguez, J. J. (2023). Crowdcertain: Label aggregation in crowdsourced and ensemble learning classification. arXiv preprint arXiv:2310.16293 | |
| dc.relation.references | [Marcinkevičs and Vogt, 2023] Marcinkevičs, R. and Vogt, J. E. (2023). Interpretable and explainable machine learning: A methods-centric overview with concrete examples. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 13(3):e1493 | |
| dc.relation.references | [McKinsey & Company, 2024] McKinsey & Company (2024). The state of ai in 2024: Generative ai’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai | |
| dc.relation.references | [Meanti et al., 2020] Meanti, G., Carratino, L., Rosasco, L., and Rudi, A. (2020). Kernel methods through the roof: handling billions of points efficiently. Advances in Neural Information Processing Systems, 33:14410–14422 | |
| dc.relation.references | [Meza et al., 2018] Meza, B. E., Carboni, A. D., and Peralta, J. M. (2018). Water adsorption and rheological properties of full-fat and low-fat cocoa-based confectionery coatings. Food and Bioproducts Processing, 110:16–25 | |
| dc.relation.references | [Mishra et al., 2021] Mishra, S., Dutta, S., Long, J., and Magazzeni, D. (2021). A survey on the robustness of feature importance and counterfactual explanations. arXiv preprint arXiv:2111.00358 | |
| dc.relation.references | [Mokhberian et al., 2023] Mokhberian, N., Marmarelis, M. G., Hopp, F. R., Basile, V., Morstatter, F., and Lerman, K. (2023). Capturing perspectives of crowdsourced annotators in subjective learning tasks. arXiv preprint arXiv:2311.09743 | |
| dc.relation.references | [Molnar, 2020] Molnar, C. (2020). Interpretable machine learning. Lulu.com | |
| dc.relation.references | [Mothilal et al., 2020a] Mothilal, R. K., Sharma, A., and Tan, C. (2020a). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 607–617. | |
| dc.relation.references | [Mothilal et al., 2020b] Mothilal, R. K., Sharma, A., and Tan, C. (2020b). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607–617 | |
| dc.relation.references | [Murphy, 2022] Murphy, K. P. (2022). Probabilistic machine learning: an introduction. MIT press | |
| dc.relation.references | [Musco and Musco, 2017] Musco, C. and Musco, C. (2017). Recursive sampling for the nyström method. In Advances in Neural Information Processing Systems, volume 30 | |
| dc.relation.references | [Narayanan and Subbiah, 2024] Narayanan, L. and Subbiah, P. (2024). Future professions in agriculture, medicine, education, fitness, r&d, transport, and communication. In Envisioning Future Professions. Wiley | |
| dc.relation.references | [Netguru, 2025] Netguru (2025). Ai adoption statistics 2025. https://www.netguru.com/blog/ai-adoption-statistics. Accessed: 2025-09-06 | |
| dc.relation.references | [Nguyen et al., 2022] Nguyen, V.-A., Shi, P., Ramakrishnan, J., Torabi, N., Arora, N. S., Weinsberg, U., and Tingley, M. (2022). Crowdsourcing with contextual uncertainty. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3645–3655 | |
| dc.relation.references | [Niroomand et al., 2023] Niroomand, M., Dicks, L., and Pyzer-Knapp, E. (2023). Physics-inspired approaches toward understanding gaussian processes. arXiv preprint arXiv:2305.10748. | |
| dc.relation.references | [Noack et al., 2024] Noack, M. M., Luo, H., and Risser, M. D. (2024). A unifying perspective on non-stationary kernels for deeper gaussian processes. APL Machine Learning, 2(1) | |
| dc.relation.references | [Ober et al., 2021] Ober, S. W., Rasmussen, C. E., and van der Wilk, M. (2021). The promises and pitfalls of deep kernel learning. In Uncertainty in Artificial Intelligence, pages 1206–1216. PMLR | |
| dc.relation.references | [O’Shea and West, 2016] O’Shea, T. J. and West, N. (2016). Radio machine learning dataset generation with gnu radio. In Proceedings of the GNU radio conference, volume 1 | |
| dc.relation.references | [Pallathadka et al., 2023] Pallathadka, H., Mustafa, M., and Sanchez, D. (2023). Impact of machine learning on management, healthcare and agriculture. Elsevier (Preprint via ResearchGate) | |
| dc.relation.references | [Paral et al., 2023] Paral, P., Chatterjee, A., Rakshit, A., and Pal, S. K. (2023). Extended target tracking in human–robot coexisting environments via multisensor information fusion: A heteroscedastic gaussian process regression-based approach. IEEE transactions on industrial informatics, 19(9):9877–9886 | |
| dc.relation.references | [Prado-Romero and Stilo, 2022] Prado-Romero, M. A. and Stilo, G. (2022). Gretel: Graph counterfactual explanation evaluation framework. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 4389–4393 | |
| dc.relation.references | [Principato et al., 2024] Principato, L., Carullo, D., Gruppi, A., Lambri, M., Bassani, A., and Spigno, G. (2024). Correlation of rheology and oral tribology with sensory perception of commercial hazelnut and cocoa-based spreads. Journal of Texture Studies, 55(4):e12850 | |
| dc.relation.references | [Rahimi and Recht, 2007] Rahimi, A. and Recht, B. (2007). Random features for large-scale kernel machines. Advances in neural information processing systems, 20 | |
| dc.relation.references | [Rajaraman and Shanmugam, 2023] Rajaraman, P. and Shanmugam, U. (2023). Explainable ai for medical imaging: Advancing transparency and trust in diagnostic decision-making. In 2023 Innovations in Power and Advanced Computing Technologies (i-PACT), pages 1–6. IEEE | |
| dc.relation.references | [Ribeiro et al., 2016a] Ribeiro, M. T., Singh, S., and Guestrin, C. (2016a). “why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144 | |
| dc.relation.references | [Ribeiro et al., 2016b] Ribeiro, M. T., Singh, S., and Guestrin, C. (2016b). “why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144 | |
| dc.relation.references | [Rodrigues and Pereira, 2018] Rodrigues, F. and Pereira, F. (2018). Deep learning from crowds. In Proceedings of the AAAI conference on artificial intelligence, volume 32 | |
| dc.relation.references | [Rodrigues et al., 2014] Rodrigues, F., Pereira, F., and Ribeiro, B. (2014). Gaussian process classification and active learning with multiple annotators. In International conference on machine learning, pages 433–441. PMLR | |
| dc.relation.references | [Rossi et al., 2021] Rossi, S., Heinonen, M., Bonilla, E., Shen, Z., and Filippone, M. (2021). Sparse gaussian processes revisited: Bayesian approaches to inducing-variable approximations. In International Conference on Artificial Intelligence and Statistics, pages 1837–1845. PMLR | |
| dc.relation.references | [Roy et al., 2021] Roy, S. K., Hong, D., Kar, P., Wu, X., Liu, X., and Zhao, D. (2021). Lightweight heterogeneous kernel convolution for hyperspectral image classification with noisy labels. IEEE Geoscience and Remote Sensing Letters, 19:1–5 | |
| dc.relation.references | [Rudi et al., 2017] Rudi, A., Carratino, L., and Rosasco, L. (2017). Falkon: An optimal large scale kernel method. Advances in neural information processing systems, 30 | |
| dc.relation.references | [Rudin, 2019] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence, 1(5):206–215 | |
| dc.relation.references | [Ruiz et al., 2023] Ruiz, P., Morales-Álvarez, P., Coughlin, S., Molina, R., and Katsaggelos, A. K. (2023). Probabilistic fusion of crowds and experts for the search of gravitational waves. Knowledge-Based Systems, 261:110183 | |
| dc.relation.references | [Salimy et al., 2022] Salimy, A., Mitiche, I., Boreham, P., Nesbitt, A., and Morison, G. (2022). Dynamic noise reduction with deep residual shrinkage networks for online fault classification. Sensors, 22(2):515 | |
| dc.relation.references | [Saul et al., 2016] Saul, A. D., Hensman, J., Vehtari, A., and Lawrence, N. D. (2016). Chained gaussian processes. In Artificial intelligence and statistics, pages 1431–1440. PMLR | |
| dc.relation.references | [Scampicchio et al., 2025] Scampicchio, A., Arcari, E., and Lahr, A. (2025). Gaussian processes for dynamics learning in model predictive control. arXiv preprint arXiv:2502.02310 | |
| dc.relation.references | [Schioppa et al., 2023a] Schioppa, A., Filippova, K., Titov, I., and Zablotskaia, P. (2023a). Theoretical and practical perspectives on what influence functions do. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S., editors, Advances in Neural Information Processing Systems, volume 36, pages 27560–27581. Curran Associates, Inc. | |
| dc.relation.references | [Schioppa et al., 2022a] Schioppa, A., Hohman, F., and Ribeiro, M. T. (2022a). Scaling influence functions to modern deep learning. In Proceedings of the 39th International Conference on Machine Learning | |
| dc.relation.references | [Schioppa et al., 2023b] Schioppa, A., Ribeiro, M. T., and Hohman, F. (2023b). Influence functions revisited: Improving and extending influence for modern architectures. In Advances in Neural Information Processing Systems | |
| dc.relation.references | [Schioppa et al., 2022b] Schioppa, A., Zablotskaia, P., Vilar, D., and Sokolov, A. (2022b). Scaling up influence functions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8179–8186 | |
| dc.relation.references | [Scholkopf and Smola, 2018] Scholkopf, B. and Smola, A. J. (2018). Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press | |
| dc.relation.references | [Selvaraju et al., 2017] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626 | |
| dc.relation.references | [Shah et al., 2014] Shah, A., Wilson, A., and Ghahramani, Z. (2014). Student-t processes as alternatives to gaussian processes. In Artificial intelligence and statistics, pages 877–885. PMLR | |
| dc.relation.references | [Shrikumar et al., 2017] Shrikumar, A., Greenside, P., and Kundaje, A. (2017). Learning important features through propagating activation differences. In International conference on machine learning, pages 3145–3153. PMLR | |
| dc.relation.references | [Simonyan et al., 2013] Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 | |
| dc.relation.references | [Singh et al., 2024] Singh, M., Joshi, M., and Tyagi, K. (2024). Future professions in agriculture, medicine, education, fitness, research and development, transport, and communication. In Emerging Technologies and Their Applications. Wiley | |
| dc.relation.references | [Sorek and Todros, 2024] Sorek, Y. and Todros, K. (2024). Robust regression analysis based on the k-divergence. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 9511–9515. IEEE | |
| dc.relation.references | [Stanford HAI, 2025] Stanford HAI (2025). Ai index report 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report. Accessed: 2025-09-06 | |
| dc.relation.references | [Sundararajan et al., 2017] Sundararajan, M., Taly, A., and Yan, Q. (2017). Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR | |
| dc.relation.references | [Sutherland and Schneider, 2015] Sutherland, D. J. and Schneider, J. (2015). On the error of random fourier features. arXiv preprint arXiv:1506.02785 | |
| dc.relation.references | [Szczepankiewicz et al., 2023] Szczepankiewicz, K., Popowicz, A., Charkiewicz, K., Nałęcz-Charkiewicz, K., Szczepankiewicz, M., Lasota, S., Zawistowski, P., and Radlak, K. (2023). Ground truth based comparison of saliency maps algorithms. Scientific Reports, 13(1):16887 | |
| dc.relation.references | [Teh et al., 2005] Teh, Y. W., Seeger, M., and Jordan, M. I. (2005). Semiparametric latent factor models. In International Workshop on Artificial Intelligence and Statistics, pages 333–340. PMLR | |
| dc.relation.references | [Thunderbit, 2025] Thunderbit (2025). Top artificial intelligence statistics & trends in 2025. https://thunderbit.com/blog/top-artificial-intelligence-stats. Accessed: 2025-09-06 | |
| dc.relation.references | [Titsias, 2009] Titsias, M. (2009). Variational learning of inducing variables in sparse gaussian processes. In Artificial intelligence and statistics, pages 567–574. PMLR | |
| dc.relation.references | [Triana-Martinez et al., 2023] Triana-Martinez, J. C., Gil-González, J., Fernandez-Gallego, J. A., Álvarez-Meza, A. M., and Castellanos-Dominguez, C. G. (2023). Chained deep learning using generalized cross-entropy for multiple annotators classification. Sensors, 23(7):3518 | |
| dc.relation.references | [Tripp et al., 2023a] Tripp, A., Bacallado, S., Singh, S., and Hernández-Lobato, J. M. (2023a). Tanimoto random features for scalable molecular machine learning. Advances in Neural Information Processing Systems, 36:33656–33686 | |
| dc.relation.references | [Tripp et al., 2023b] Tripp, J., Müller, K., and Borgwardt, K. (2023b). Random fourier features for tanimoto kernels. Journal of Machine Learning Research, 24(77):1–28 | |
| dc.relation.references | [Uhrenholt et al., 2021] Uhrenholt, A. K., Charvet, V., and Jensen, B. S. (2021). Probabilistic selection of inducing points in sparse gaussian processes. In Uncertainty in Artificial Intelligence, pages 1035–1044. PMLR | |
| dc.relation.references | [Vakili et al., 2021] Vakili, S., Moss, H., Artemev, A., Dutordoir, V., and Picheny, V. (2021). Scalable thompson sampling using sparse gaussian process models. Advances in neural information processing systems, 34:5631–5643 | |
| dc.relation.references | [van Beek et al., 2021] van Beek, A., Ghumman, U. F., Munshi, J., Tao, S., Chien, T., Balasubramanian, G., Plumlee, M., Apley, D., and Chen, W. (2021). Scalable adaptive batch sampling in simulation-based design with heteroscedastic noise. Journal of Mechanical Design, 143(3):031709 | |
| dc.relation.references | [Villacampa-Calvo et al., 2021] Villacampa-Calvo, C., Zaldívar, B., Garrido-Merchán, E. C., and Hernández-Lobato, D. (2021). Multi-class gaussian process classification with noisy inputs. Journal of Machine Learning Research, 22(36):1–52 | |
| dc.relation.references | [Virgolin and Fracaros, 2023] Virgolin, M. and Fracaros, S. (2023). On the robustness of sparse counterfactual explanations to adverse perturbations. Artificial Intelligence, 316:103840 | |
| dc.relation.references | [Vo et al., 2024] Vo, H.-T. V., Thien, N. N., Mui, K. C., and Tien, P. P. (2024). Enhancing confidence in brain tumor classification models with grad-cam and grad-cam++. Indonesian Journal of Electrical Engineering and Informatics (IJEEI), 12(4):926–939 | |
| dc.relation.references | [Wachter et al., 2017] Wachter, S., Mittelstadt, B., and Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841 | |
| dc.relation.references | [Wacker and Filippone, 2022] Wacker, J. and Filippone, M. (2022). Local random feature approximations of the gaussian kernel. Procedia Computer Science, 207:987–996 | |
| dc.relation.references | [Wang et al., 2023] Wang, C., Gao, Y., Fan, C., Hu, J., Lam, T. L., Lane, N. D., and Bianchi-Berthouze, N. (2023). Learn2agree: Fitting with multiple annotators without objective ground truth. In International Workshop on Trustworthy Machine Learning for Healthcare, pages 147–162. Springer | |
| dc.relation.references | [Wang et al., 2024a] Wang, T., Lai, X., and Cao, J. (2024a). A highly efficient admm-based algorithm for outlier-robust regression with huber loss. Applied Intelligence, 54(6):5147–5166 | |
| dc.relation.references | [Wang and Lin, 2021] Wang, T. and Lin, Q. (2021). Hybrid predictive models: When an interpretable model collaborates with a black-box model. Journal of Machine Learning Research, 22(137):1–38 | |
| dc.relation.references | [Wang et al., 2021] Wang, T., Xu, L., and Li, J. (2021). Sdcrkl-gp: Scalable deep convolutional random kernel learning in gaussian process for image recognition. Neurocomputing, 456:288–298 | |
| dc.relation.references | [Wang et al., 2024b] Wang, W., Khalil, M. M. Y., and Bayisa, L. Y. (2024b). Online variational gaussian process for time series data. Journal of Big Data, 11(1):174 | |
| dc.relation.references | [Wang et al., 2022] Wang, Z., Xing, W., Kirby, R., and Zhe, S. (2022). Physics informed deep kernel learning. In International Conference on Artificial Intelligence and Statistics, pages 1206–1218. PMLR | |
| dc.relation.references | [Wei et al., 2024] Wei, Y., Zhuang, V., Soedarmadji, S., and Sui, Y. (2024). Scalable bayesian optimization via focalized sparse gaussian processes. Advances in Neural Information Processing Systems, 37:120443–120467 | |
| dc.relation.references | [Wen et al., 2025] Wen, H., Betken, A., and Koolen, W. (2025). On the robustness of kernel ridge regression using the cauchy loss function. arXiv preprint arXiv:2503.20120 | |
| dc.relation.references | [Whitehill et al., 2009] Whitehill, J., Wu, T.-f., Bergsma, J., Movellan, J., and Ruvolo, P. (2009). Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in neural information processing systems, 22 | |
| dc.relation.references | [Williams and Rasmussen, 1995] Williams, C. and Rasmussen, C. (1995). Gaussian processes for regression. Advances in neural information processing systems, 8 | |
| dc.relation.references | [Williams and Seeger, 2000] Williams, C. and Seeger, M. (2000). Using the nyström method to speed up kernel machines. Advances in neural information processing systems, 13 | |
| dc.relation.references | [Williams and Rasmussen, 2006] Williams, C. K. and Rasmussen, C. E. (2006). Gaussian processes for machine learning, volume 2. MIT press Cambridge, MA | |
| dc.relation.references | [Wilson et al., 2016a] Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016a). Deep kernel learning. In Artificial intelligence and statistics, pages 370–378. PMLR | |
| dc.relation.references | [Wilson et al., 2016b] Wilson, A. G., Hu, Z., Salakhutdinov, R. R., and Xing, E. P. (2016b). Stochastic variational deep kernel learning. Advances in neural information processing systems, 29 | |
| dc.relation.references | [Wu et al., 2022a] Wu, Y., Song, Z., and Ma, H. (2022a). Model explanation with causal adjustment. In Proceedings of the 39th International Conference on Machine Learning | |
| dc.relation.references | [Wu et al., 2022b] Wu, Y.-X., Wang, X., Zhang, A., Hu, X., Feng, F., He, X., and Chua, T.-S. (2022b). Deconfounding to explanation evaluation in graph neural networks. arXiv preprint arXiv:2201.08802 | |
| dc.relation.references | [Xu and Pan, 2024] Xu, H. and Pan, J. (2024). Hhd-gp: Incorporating helmholtz-hodge decomposition into gaussian processes for learning dynamical systems. In NeurIPS 2024 | |
| dc.relation.references | [Yao et al., 2023a] Yao, J., Erichson, N. B., and Lopes, M. E. (2023a). Error estimation for random fourier features. In International Conference on Artificial Intelligence and Statistics, pages 2348–2364. PMLR | |
| dc.relation.references | [Yao et al., 2023b] Yao, Z., Xu, Y., and Cheng, G. (2023b). Adaptive random features via bootstrap aggregation. In Proceedings of the 40th International Conference on Machine Learning | |
| dc.relation.references | [Yasui and Sato, 2025] Yasui, D. and Sato, H. (2025). Improving local fidelity and interpretability of lime by replacing only the sampling process with cvae. IEEE Access | |
| dc.relation.references | [Ye et al., 2022] Ye, K., Zhao, J., Duan, N., and Zhang, Y. (2022). Physics-informed sparse gaussian process for probabilistic stability analysis of large-scale power system with dynamic pvs and loads. IEEE Transactions on Power Systems | |
| dc.relation.references | [Yeh et al., 2019] Yeh, C.-K., Hsieh, C.-Y., Suggala, A., Inouye, D. I., and Ravikumar, P. K. (2019). On the (in) fidelity and sensitivity of explanations. Advances in neural information processing systems, 32 | |
| dc.relation.references | [Zhang et al., 2025] Zhang, L., Lian, Z., Liu, H., Takebe, T., and Nakashima, Y. (2025). Qumatl: Query-based multi-annotator tendency learning. arXiv preprint arXiv:2503.15237 | |
| dc.relation.references | [Zhao and Chen, 2024a] Zhao, J. and Chen, X. (2024a). Nested heteroscedastic gaussian process for simulation metamodeling. In 2024 Winter Simulation Conference (WSC), pages 419–430. IEEE | |
| dc.relation.references | [Zhao and Chen, 2024b] Zhao, W. and Chen, Y. (2024b). Nested heteroscedastic gaussian processes for robust regression. Pattern Recognition | |
| dc.relation.references | [Zinage et al., 2024a] Zinage, A., Raza, M., and Wen, Y. (2024a). Kolmogorov-arnold networks meet gaussian processes: A new deep kernel learning framework. In Proceedings of the International Conference on Learning Representations | |
| dc.relation.references | [Zinage et al., 2024b] Zinage, S., Mondal, S., and Sarkar, S. (2024b). Dkl-kan: Scalable deep kernel learning using kolmogorov-arnold networks. arXiv preprint arXiv:2407.21176 | |
| dc.rights.accessrights | info:eu-repo/semantics/openAccess | |
| dc.rights.license | Reconocimiento 4.0 Internacional | |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | |
| dc.subject.ddc | 510 - Matemáticas::519 - Probabilidades y matemáticas aplicadas | |
| dc.subject.proposal | Kernel Methods | eng |
| dc.subject.proposal | Random Fourier Features | eng |
| dc.subject.proposal | Gaussian Processes | eng |
| dc.subject.proposal | Automatic Modulation Classification | eng |
| dc.subject.proposal | Crowdlearning | eng |
| dc.subject.proposal | Interpretability | eng |
| dc.subject.proposal | Métodos Kernel | spa |
| dc.subject.proposal | Características Aleatorias de Fourier | spa |
| dc.subject.proposal | Procesos Gaussianos | spa |
| dc.subject.proposal | Clasificación Automática de Modulación | spa |
| dc.subject.proposal | Aprendizaje Colaborativo | spa |
| dc.subject.proposal | Interpretabilidad | spa |
| dc.subject.unesco | Inteligencia artificial | |
| dc.subject.unesco | Artificial intelligence | |
| dc.subject.unesco | Aprendizaje | |
| dc.subject.unesco | Learning | |
| dc.subject.unesco | Procesamiento de datos | |
| dc.subject.unesco | Data processing | |
| dc.title | An explainable Kernel-driven approach for reliable supervised learning | eng |
| dc.title.translated | Un enfoque explicable basado en Kernels para un aprendizaje supervisado confiable | spa |
| dc.type | Trabajo de grado - Maestría | |
| dc.type.coar | http://purl.org/coar/resource_type/c_bdcc | |
| dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa | |
| dc.type.content | Text | |
| dc.type.driver | info:eu-repo/semantics/masterThesis | |
| dc.type.version | info:eu-repo/semantics/acceptedVersion | |
| dcterms.audience.professionaldevelopment | Bibliotecarios | |
| dcterms.audience.professionaldevelopment | Estudiantes | |
| dcterms.audience.professionaldevelopment | Investigadores | |
| dcterms.audience.professionaldevelopment | Maestros | |
| oaire.accessrights | http://purl.org/coar/access_right/c_abf2 |
Files

Original bundle
- Name: Tesis de Maestría en Ingeniería - Automatización Industrial.pdf
- Size: 5.82 MB
- Format: Adobe Portable Document Format
- Description: Tesis de Maestría en Ingeniería - Automatización Industrial

License bundle
- Name: license.txt
- Size: 5.74 KB
- Format: Item-specific license agreed upon to submission