Aprendizaje profundo para la predicción de temperatura en las paredes refractarias de un horno de arco eléctrico

dc.contributor.advisor: Tibaduiza Burgos, Diego Alexander
dc.contributor.advisor: Leon-Medina, Jersson Xavier
dc.contributor.author: Godoy Rojas, Diego Fernando
dc.contributor.orcid: Diego F. Godoy-Rojas [0000-0002-1639-7992]
dc.contributor.researchgroup: Grupo de Investigación en Electrónica de Alta Frecuencia y Telecomunicaciones (Cmun)
dc.date.accessioned: 2023-06-02T14:26:53Z
dc.date.available: 2023-06-02T14:26:53Z
dc.date.issued: 2022
dc.description: ilustraciones, gráficas
dc.description.abstract: En el presente documento se detalla el flujo de trabajo llevado a cabo para el desarrollo de modelos de aprendizaje profundo para la estimación de la temperatura de pared media en dos hornos de arco eléctrico pertenecientes a la empresa Cerro Matoso S.A. El documento inicia con una introducción al contexto bajo el cual se desarrolló el trabajo final de maestría, dando paso a la descripción teórica de todos los aspectos relevantes y generalidades sobre el funcionamiento de la planta, las series de tiempo y el aprendizaje profundo requeridas durante el desarrollo del proyecto. El flujo de trabajo se divide en una metodología de tres pasos: empieza por el estudio y la preparación del conjunto de datos brindado por CMSA; sigue con el desarrollo, entrenamiento y selección de diversos modelos de aprendizaje profundo, usados en predicciones sobre un conjunto de prueba con errores RMSE de entre 1 y 2 °C; y finaliza con una etapa de validación que estudia el desempeño de los modelos obtenidos frente a variaciones en las condiciones de los parámetros de entrenamiento. (Texto tomado de la fuente)
dc.description.abstract: This document details the workflow followed to develop deep learning models for estimating the mean wall temperature in two electric arc furnaces belonging to the company Cerro Matoso S.A. The document begins by establishing the context in which the final master's degree project was developed, and then gives a theoretical description of the relevant aspects and generalities of the plant's operation, the time series, and the deep learning techniques required during the project. The workflow is divided into a three-step methodology: it starts with the study and preparation of the data set provided by CMSA; continues with the development, training, and selection of several deep learning models, whose predictions on a test set achieved RMSE errors between 1 and 2 °C; and ends with a validation stage that studies the performance of the resulting models under variations in the training parameter conditions.
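The RMSE figure quoted in the abstract can be made concrete with a minimal sketch of how such an error is computed between measured and predicted wall temperatures. This is an illustrative example only; the temperature values and variable names below are hypothetical and do not come from the CMSA data set.

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical side-wall temperature readings (°C) and model predictions.
observed  = [351.2, 352.0, 353.1, 352.6, 351.8]
predicted = [350.1, 351.2, 351.9, 351.1, 350.3]

print(round(rmse(observed, predicted), 2))  # → 1.25
```

With these made-up values the error lands at about 1.25 °C, i.e. inside the 1–2 °C band the thesis reports for its test set.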
dc.description.degreelevel: Maestría
dc.description.degreename: Magíster en Ingeniería - Automatización Industrial
dc.description.notes: Contiene diagramas, fórmulas, ilustraciones y tablas.
dc.description.notes: El presente trabajo fue realizado dentro del marco de la colaboración entre la Universidad Nacional de Colombia y Cerro Matoso S.A., financiada por el Ministerio Colombiano de Ciencia mediante la convocatoria 786: “Convocatoria para el registro de proyectos que aspiran a obtener beneficios tributarios por inversión en CTeI”. La totalidad de los registros empleados en el presente proyecto son de carácter privado y pertenecen a Cerro Matoso S.A. Dichos registros no pueden ser publicados, compartidos o reproducidos total o parcialmente sin el conocimiento y la expresa autorización de Cerro Matoso S.A.
dc.description.researcharea: Automatización de Procesos y Máquinas
dc.format.extent: xvi, 90 páginas
dc.format.mimetype: application/pdf
dc.identifier.instname: Universidad Nacional de Colombia
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl: https://repositorio.unal.edu.co/
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/83956
dc.language.iso: spa
dc.publisher: Universidad Nacional de Colombia
dc.publisher.branch: Universidad Nacional de Colombia - Sede Bogotá
dc.publisher.faculty: Facultad de Ingeniería
dc.publisher.place: Bogotá, Colombia
dc.publisher.program: Bogotá - Ingeniería - Maestría en Ingeniería - Automatización Industrial
dc.relation.references: D. Tibaduiza et al., “Structural Health Monitoring System for Furnace Refractory Wall Thickness Measurements at Cerro Matoso SA”, Lecture Notes in Civil Engineering, pp. 414-423, 2021. DOI: 10.1007/978-3-030-64594-6_41.
dc.relation.references: F. Pozo et al., “Structural health monitoring and condition monitoring applications: sensing, distributed communication and processing”, International Journal of Distributed Sensor Networks, vol. 16, no. 9, pp. 1-3, 2020. DOI: 10.1177/1550147720963270.
dc.relation.references: J. Birat, “A futures study analysis of the technological evolution of the EAF by 2010”, Revue de Métallurgie, vol. 97, no. 11, pp. 1347-1363, 2000. DOI: 10.1051/metal:2000114.
dc.relation.references: “Redes neuronales profundas - Tipos y Características”, Código Fuente, 2021. [Online]. Available: https://www.codigofuente.org/redes-neuronales-profundas-tipos-caracteristicas/. [Accessed: 17-Jul-2021].
dc.relation.references: “Illustrated Guide to LSTM’s and GRU’s: A step by step explanation”, Medium, 2021. [Online]. Available: https://towardsdatascience.com/illustrated-guide-to-lstms-and-grus-a-step-by-step-explanation-44e9eb85bf21. [Accessed: 17-Jul-2021].
dc.relation.references: “Major Mines & Projects | Cerro Matoso Mine”, Miningdataonline.com, 2021. [Online]. Available: https://miningdataonline.com/property/336/Cerro-Matoso-Mine.aspx. [Accessed: 25-Nov-2021].
dc.relation.references: J. Janzen, T. Gerritsen, N. Voermann, E. R. Veloza and R. C. Delgado, “Integrated Furnace Controls: Implementation on a Covered-Arc (Shielded Arc) Furnace at Cerro Matoso”, in Proceedings of the 10th International Ferroalloys Congress, Cape Town, South Africa, 1-4 Feb. 2004, pp. 659-669.
dc.relation.references: R. Garcia-Segura, J. Vázquez Castillo, F. Martell-Chavez, O. Longoria-Gandara and J. Ortegón Aguilar, “Electric Arc Furnace Modeling with Artificial Neural Networks and Arc Length with Variable Voltage Gradient”, Energies, vol. 10, no. 9, p. 1424, Sep. 2017.
dc.relation.references: C. Chen, Y. Liu, M. Kumar and J. Qin, “Energy Consumption Modelling Using Deep Learning Technique — A Case Study of EAF”, Procedia CIRP, vol. 72, pp. 1063-1068, 2018. DOI: 10.1016/j.procir.2018.03.095.
dc.relation.references: S. Ismaeel, A. Miri, A. Sadeghian and D. Chourishi, “An Extreme Learning Machine (ELM) Predictor for Electric Arc Furnaces’ v-i Characteristics”, 2015 IEEE 2nd International Conference on Cyber Security and Cloud Computing, 2015, pp. 329-334. DOI: 10.1109/CSCloud.2015.94.
dc.relation.references: J. Mesa Fernández, V. Cabal, V. Montequin and J. Balsera, “Online estimation of electric arc furnace tap temperature by using fuzzy neural networks”, Engineering Applications of Artificial Intelligence, vol. 21, no. 7, pp. 1001-1012, 2008. DOI: 10.1016/j.engappai.2007.11.008.
dc.relation.references: M. Kordos, M. Blachnik and T. Wieczorek, “Temperature Prediction in Electric Arc Furnace with Neural Network Tree”, Lecture Notes in Computer Science, pp. 71-78, 2011. DOI: 10.1007/978-3-642-21738-8_10.
dc.relation.references: J. Camacho et al., “A Data Cleaning Approach for a Structural Health Monitoring System in a 75 MW Electric Arc Ferronickel Furnace”, Proceedings of the 7th International Electronic Conference on Sensors and Applications, 2020. DOI: 10.3390/ecsa-7-08245.
dc.relation.references: J. Leon-Medina et al., “Deep Learning for the Prediction of Temperature Time Series in the Lining of an Electric Arc Furnace for Structural Health Monitoring at Cerro Matoso S.A. (CMSA)”, Proceedings of the 7th International Electronic Conference on Sensors and Applications, 2020. DOI: 10.3390/ecsa-7-08246.
dc.relation.references: J. Leon-Medina et al., “Temperature Prediction Using Multivariate Time Series Deep Learning in the Lining of an Electric Arc Furnace for Ferronickel Production”, Sensors, vol. 21, no. 20, p. 6894, 2021. DOI: 10.3390/s21206894.
dc.relation.references: R. Wan, S. Mei, J. Wang, M. Liu and F. Yang, “Multivariate Temporal Convolutional Network: A Deep Neural Networks Approach for Multivariate Time Series Forecasting”, Electronics, vol. 8, no. 8, p. 876, 2019. DOI: 10.3390/electronics8080876.
dc.relation.references: S. Shih, F. Sun and H. Lee, “Temporal pattern attention for multivariate time series forecasting”, Machine Learning, vol. 108, no. 8-9, pp. 1421-1441, 2019. DOI: 10.1007/s10994-019-05815-0.
dc.relation.references: S. Du, T. Li, Y. Yang and S. Horng, “Multivariate time series forecasting via attention-based encoder–decoder framework”, Neurocomputing, vol. 388, pp. 269-279, 2020. DOI: 10.1016/j.neucom.2019.12.118.
dc.relation.references: S. Huang, D. Wang, X. Wu and A. Tang, “DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting”, Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019. DOI: 10.1145/3357384.3358132.
dc.relation.references: CMSA, PR032018OP - Manual del Sistema de Control Estructural del Horno Eléctrico 412-FC-01, 2nd ed., 2017.
dc.relation.references: D. F. Godoy-Rojas et al., “Attention-Based Deep Recurrent Neural Network to Forecast the Temperature Behavior of an Electric Arc Furnace Side-Wall”, Sensors, vol. 22, no. 4, p. 1418, Feb. 2022. DOI: 10.3390/s22041418.
dc.relation.references: American Petroleum Institute (API), “API RP 551 - Process Measurement”, 2nd ed., pp. 30-36, Feb. 2016. Available: https://standards.globalspec.com/std/9988220/API%20RP
dc.relation.references: “Specification for temperature-electromotive force (EMF) tables for standardized thermocouples”, ASTM E230/E230M-17. DOI: 10.1520/e0230_e0230m-17.
dc.relation.references: W. W. S. Wei, “Time Series Analysis”, Oxford Handbooks Online, pp. 458-485, 2013.
dc.relation.references: J. D. Hamilton, “Time Series Analysis”, Princeton, NJ: Princeton University Press, 2020.
dc.relation.references: P. P. Shinde and S. Shah, “A Review of Machine Learning and Deep Learning Applications”, 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1-6. DOI: 10.1109/ICCUBEA.2018.8697857.
dc.relation.references: Y. LeCun, Y. Bengio and G. Hinton, “Deep learning”, Nature, vol. 521, pp. 436-444, 2015. DOI: 10.1038/nature14539.
dc.relation.references: L. Zhang, J. Tan, D. Han and H. Zhu, “From machine learning to deep learning: Progress in machine intelligence for rational drug discovery”, Drug Discovery Today, vol. 22, no. 11, pp. 1680-1685, 2017.
dc.relation.references: C. M. Bishop, “Neural networks and their applications”, Review of Scientific Instruments, vol. 65, no. 6, pp. 1803-1832, 1994.
dc.relation.references: K. Suzuki, Ed., “Artificial Neural Networks - Architectures and Applications”, Jan. 2013. DOI: 10.5772/3409.
dc.relation.references: C. Zanchettin and T. B. Ludermir, “A methodology to train and improve artificial neural networks weights and connections”, The 2006 IEEE International Joint Conference on Neural Network Proceedings, 2006.
dc.relation.references: P. Sibi, S. A. Jones and P. Siddarth, “Analysis of different activation functions using back propagation neural networks”, Journal of Theoretical and Applied Information Technology, vol. 47, no. 3, pp. 1264-1268, 2013.
dc.relation.references: A. D. Rasamoelina, F. Adjailia and P. Sincák, “A Review of Activation Function for Artificial Neural Network”, 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herlany, Slovakia, 2020, pp. 281-286. DOI: 10.1109/SAMI48414.2020.9108717.
dc.relation.references: S. Sharma, S. Sharma and A. Athaiya, “Activation functions in neural networks”, Towards Data Science, vol. 6, no. 12, pp. 310-316, 2017.
dc.relation.references: D. L. Elliott, “A better activation function for artificial neural networks”, 1993.
dc.relation.references: J. Xu et al., “A semantic loss function for deep learning with symbolic knowledge”, International Conference on Machine Learning, PMLR, 2018, pp. 5502-5511.
dc.relation.references: T.-H. Lee, “Loss functions in time series forecasting”, International Encyclopedia of the Social Sciences, 2008, pp. 495-502.
dc.relation.references: T. O. Hodson, “Root-mean-square error (RMSE) or mean absolute error (MAE): when to use them or not”, Geoscientific Model Development, vol. 15, no. 14, pp. 5481-5487, 2022.
dc.relation.references: C. Alippi, “Weight update in back-propagation neural networks: The role of activation functions”, 1991 IEEE International Joint Conference on Neural Networks, 1991.
dc.relation.references: D. Svozil, V. Kvasnicka and J. Pospichal, “Introduction to multi-layer feed-forward neural networks”, Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp. 43-62, 1997.
dc.relation.references: S. Ruder, “An overview of gradient descent optimization algorithms”, arXiv:1609.04747.
dc.relation.references: S. Amari, “Backpropagation and stochastic gradient descent method”, Neurocomputing, vol. 5, no. 4-5, pp. 185-196, 1993.
dc.relation.references: L. N. Smith, “A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay”, arXiv:1803.09820, 2018.
dc.relation.references: N. Bacanin, T. Bezdan, E. Tuba, I. Strumberger and M. Tuba, “Optimizing convolutional neural network hyperparameters by enhanced swarm intelligence metaheuristics”, Algorithms, vol. 13, no. 3, p. 67, 2020.
dc.relation.references: M. Kuan and K. Hornik, “Convergence of learning algorithms with constant learning rates”, IEEE Transactions on Neural Networks, vol. 2, no. 5, pp. 484-489, Sept. 1991. DOI: 10.1109/72.134285.
dc.relation.references: D. R. Wilson and T. R. Martinez, “The need for small learning rates on large problems”, IJCNN’01 International Joint Conference on Neural Networks, Washington, DC, USA, 2001, vol. 1, pp. 115-119. DOI: 10.1109/IJCNN.2001.939002.
dc.relation.references: Y. Bengio, “Gradient-based optimization of hyperparameters”, Neural Computation, vol. 12, no. 8, pp. 1889-1900, 2000.
dc.relation.references: “Recurrent neural networks architectures”, Wiley Series in Adaptive and Learning Systems for Signal Processing, Communications and Control, pp. 69-89.
dc.relation.references: A. L. Caterini and D. E. Chang, “Recurrent neural networks”, Deep Neural Networks in a Mathematical Framework, pp. 59-79, 2018.
dc.relation.references: I. Sutskever, “Training recurrent neural networks”, PhD thesis, University of Toronto, Toronto, ON, Canada, 2013.
dc.relation.references: A. Sherstinsky, “Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network”, Physica D: Nonlinear Phenomena, vol. 404, p. 132306, 2020.
dc.relation.references: S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory”, Neural Computation, vol. 9, no. 8, pp. 1735-1780, Nov. 1997. DOI: 10.1162/neco.1997.9.8.1735.
dc.relation.references: Y. Hua, Z. Zhao, R. Li, X. Chen, Z. Liu and H. Zhang, “Deep Learning with Long Short-Term Memory for Time Series Prediction”, IEEE Communications Magazine, vol. 57, no. 6, pp. 114-119, June 2019. DOI: 10.1109/MCOM.2019.1800155.
dc.relation.references: X. Song, Y. Liu, L. Xue, J. Wang, J. Zhang, J. Wang, L. Jiang and Z. Cheng, “Time-series well performance prediction based on Long Short-Term Memory (LSTM) neural network model”, Journal of Petroleum Science and Engineering, vol. 186, p. 106682, 2020.
dc.relation.references: R. Dey and F. M. Salem, “Gate-variants of Gated Recurrent Unit (GRU) neural networks”, 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 2017, pp. 1597-1600. DOI: 10.1109/MWSCAS.2017.8053243.
dc.relation.references: Y. Wang, W. Liao and Y. Chang, “Gated recurrent unit network-based short-term photovoltaic forecasting”, Energies, vol. 11, no. 8, p. 2163, 2018.
dc.relation.references: H. Lin, A. Gharehbaghi, Q. Zhang, S. S. Band, H. T. Pai, K.-W. Chau and A. Mosavi, “Time series-based groundwater level forecasting using gated recurrent unit deep neural networks”, Engineering Applications of Computational Fluid Mechanics, vol. 16, no. 1, pp. 1655-1672, 2022.
dc.relation.references: S. H. Park, B. Kim, C. M. Kang, C. C. Chung and J. W. Choi, “Sequence-to-Sequence Prediction of Vehicle Trajectory via LSTM Encoder-Decoder Architecture”, 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 2018, pp. 1672-1678. DOI: 10.1109/IVS.2018.8500658.
dc.relation.references: R. Laubscher, “Time-series forecasting of coal-fired power plant reheater metal temperatures using encoder-decoder recurrent neural networks”, Energy, vol. 189, p. 116187, 2019.
dc.relation.references: C. Olah and S. Carter, “Attention and augmented recurrent neural networks”, Distill, 08-Sep-2016. [Online]. Available: https://distill.pub/2016/augmented-rnns/. [Accessed: 15-Sep-2022].
dc.relation.references: Z. Niu, G. Zhong and H. Yu, “A review on the attention mechanism of deep learning”, Neurocomputing, vol. 452, pp. 48-62, 2021.
dc.relation.references: Y. Qin, D. Song, H. Chen, W. Cheng, G. Jiang and G. W. Cottrell, “A dual-stage attention-based recurrent neural network for time series prediction”, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017.
dc.relation.references: IT-0003-A28-C3-V1-18.11.2019 - Informe preliminar con análisis estadístico de datos y correlaciones posibles.
dc.relation.references: IT-O3O4-C15C34.2.3-V1-17.06.2020 - Informe técnico de caracterización e identificación de variables del horno línea 1 FC01.
dc.relation.references: IT-O3O4.C38.2.1-V1-04.10.2021 - Informe técnico de caracterización e identificación de variables del horno línea 2 FC150.
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.rights.license: Atribución-NoComercial 4.0 Internacional
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject.ddc: 600 - Tecnología (Ciencias aplicadas)
dc.subject.lemb: Machine learning
dc.subject.lemb: APRENDIZAJE AUTOMATICO (INTELIGENCIA ARTIFICIAL)
dc.subject.proposal: Aprendizaje profundo
dc.subject.proposal: Deep Learning
dc.subject.proposal: GRU
dc.subject.proposal: Inteligencia artificial
dc.subject.proposal: LSTM
dc.subject.proposal: Mecanismos de atención
dc.subject.proposal: Attention Mechanisms
dc.subject.proposal: Redes Neuronales
dc.subject.proposal: Neural Networks
dc.subject.proposal: Series de tiempo
dc.subject.proposal: Time Series forecasting
dc.subject.proposal: Salud estructural
dc.title: Aprendizaje profundo para la predicción de temperatura en las paredes refractarias de un horno de arco eléctrico
dc.title.translated: Deep learning for temperature prediction in the refractory walls of an electric arc furnace
dc.type: Trabajo de grado - Maestría
dc.type.coar: http://purl.org/coar/resource_type/c_bdcc
dc.type.coarversion: http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.content: Text
dc.type.driver: info:eu-repo/semantics/masterThesis
dc.type.redcol: http://purl.org/redcol/resource_type/TP
dc.type.version: info:eu-repo/semantics/acceptedVersion
dcterms.audience.professionaldevelopment: Estudiantes
dcterms.audience.professionaldevelopment: Investigadores
dcterms.audience.professionaldevelopment: Maestros
dcterms.audience.professionaldevelopment: Público general
oaire.accessrights: http://purl.org/coar/access_right/c_abf2

Files

Original bundle
Name: 1031173820.2023.pdf
Size: 5.52 MB
Format: Adobe Portable Document Format
Description: Master's thesis in Industrial Automation

License bundle
Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission