Análisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteos

dc.contributor.advisorCabarcas Jaramillo, Daniel
dc.contributor.advisorGonzáles Alvarez, Nelfi Gertrudis
dc.contributor.authorBetancur Rodríguez, Daniel
dc.date.accessioned2024-04-16T15:44:15Z
dc.date.available2024-04-16T15:44:15Z
dc.date.issued2024-04-16
dc.descriptionIlustracionesspa
dc.description.abstractEl pronóstico de series de tiempo de conteos es un caso particular de interés para la asignación óptima de capacidades e inventarios acorde a la demanda esperada, entre otras aplicaciones. Para abordar el pronóstico de las series de tiempo de conteos se han propuesto modelos estadísticos como los modelos autorregresivos para series de conteo o los modelos dinámicos generalizados. Por otro lado, se han aplicado metodologías basadas en algoritmos de machine learning apalancándose en la creciente potencia computacional, como las redes neuronales recurrentes y las arquitecturas basadas en algoritmos de atención, llamadas Transformers. El presente trabajo explora el problema del pronóstico paralelo de múltiples series de conteo, aplicando metodologías propias de la estadística y el machine learning en diversos escenarios de simulación en los cuales se compara la calidad de pronóstico, el tiempo computacional demandado y el esfuerzo para adaptar las metodologías a casos reales (texto tomado de la fuente)spa
dc.description.abstractForecasting time series of counts, with support on the non-negative integers, is a case of particular interest for optimal job assignment and inventory allocation according to expected demand, among other applications. To address the problem of forecasting time series of counts, statistical models such as autoregressive models for count data or dynamic generalized models have been proposed. On the other hand, methodologies based on machine learning algorithms have been applied, leveraging the increasing computational power available, such as recurrent neural networks, LSTM network architectures and architectures based on attention algorithms, called Transformers. This study explores the problem of parallel forecasting of multiple time series of counts, applying statistical and machine learning methodologies to various simulation scenarios in which the forecasting performance, the computational time demanded, and the effort to adapt each methodology to real cases are comparedeng
dc.description.curricularareaÁrea Curricular Estadísticaspa
dc.description.degreelevelMaestríaspa
dc.description.degreenameMagíster en Estadísticaspa
dc.description.researchareaAnalíticaspa
dc.description.researchareaProcesos estocásticosspa
dc.format.extent1 recurso en línea (167 páginas)spa
dc.format.mimetypeapplication/pdfspa
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/85925
dc.language.isospaspa
dc.publisherUniversidad Nacional de Colombiaspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Medellínspa
dc.publisher.facultyFacultad de Cienciasspa
dc.publisher.placeMedellín, Colombiaspa
dc.publisher.programMedellín - Ciencias - Maestría en Ciencias - Estadísticaspa
dc.relation.indexedLaReferenciaspa
dc.relation.referencesAghababaei Jazi, M., & Alamatsaz, M. (2012). Two new thinning operators and their applications. Global Journal of Pure and Applied Mathematics, 8, 13-28spa
dc.relation.referencesAllende, H., Moraga, C., & Salas, R. (2002). Artificial neural networks in time series forecasting: a comparative analysis. Kybernetika, 38(6), 685-707spa
dc.relation.referencesBa, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer Normalizationspa
dc.relation.referencesBahdanau, D., Cho, K., & Bengio, Y. (2016a). Neural Machine Translation by Jointly Learning to Align and Translatespa
dc.relation.referencesBahdanau, D., Cho, K., & Bengio, Y. (2016b). Neural Machine Translation by Jointly Learning to Align and Translatespa
dc.relation.referencesBandara, K., Shi, P., Bergmeir, C., Hewamalage, H., Tran, Q., & Seaman, B. (2019). Sales Demand Forecast in E-commerce Using a Long Short-Term Memory Neural Network Methodology. En T. Gedeon, K. W. Wong & M. Lee (Eds.), Neural Information Processing (pp. 462-474). Springer International Publishingspa
dc.relation.referencesByrd, R. H., Schnabel, R. B., & Shultz, G. A. (1987). A Trust Region Algorithm for Nonlinearly Constrained Optimization. SIAM Journal on Numerical Analysis, 24(5), 1152-1170. Consultado el 7 de mayo de 2023, desde http://www.jstor.org/stable/2157645spa
dc.relation.referencesChollet, F. (2017). Deep Learning with Python (1st). Manning Publications Cospa
dc.relation.referencesChristou, V., & Fokianos, K. (2015). On count time series prediction. Journal of Statistical Computation and Simulation, 85(2), 357-373. https://doi.org/10.1080/00949655.2013.823612spa
dc.relation.referencesDavis, R. A., Fokianos, K., Holan, S. H., Joe, H., Livsey, J., Lund, R., Pipiras, V., & Ravishanker, N. (2021). Count Time Series: A Methodological Review. Journal of the American Statistical Association, 116, 1533-1547. https://doi.org/10.1080/01621459.2021.1904957spa
dc.relation.referencesDufour, J.-M. (2008). Estimation of ARMA models by maximum likelihood. https://jeanmariedufour.github.io/ResE/Dufour 2008 C TS ARIMA Estimation.pdfspa
dc.relation.referencesDunsmuir, W. T. (2016). Generalized Linear Autoregressive Moving Average Models. En R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Pressspa
dc.relation.referencesExcoffier, M., Gicquel, C., & Jouini, O. (2016). A joint chance-constrained programming approach for call center workforce scheduling under uncertain call arrival forecasts. Computers & Industrial Engineering, 96, 16-30. https://doi.org/10.1016/j.cie.2016.03.013spa
dc.relation.referencesFarsani, R., Pazouki, E., & Jecei, J. (2021). A Transformer Self-Attention Model for Time Series Forecasting. Journal of Electrical and Computer Engineering Innovations, 9, 1-10. https://doi.org/10.22061/JECEI.2020.7426.391spa
dc.relation.referencesFearnhead, P. (2011). MCMC for State Space Models. En S. Brooks, A. Gelman, G. Jones & X.-L. Meng (Eds.). Chapman & Hall/CRCspa
dc.relation.referencesFeng, C., Li, L., & Sadeghpour, A. (2020). A comparison of residual diagnosis tools for diagnosing regression models for count data. BMC Medical Research Methodology, 20, 1-21. https://doi.org/10.1186/s12874-020-01055-2spa
dc.relation.referencesFerland, R., Latour, A., & Oraichi, D. (2006). Integer-Valued GARCH Process. Journal of Time Series Analysis, 27(6), 923-942. https://doi.org/10.1111/j.1467-9892.2006.00496.xspa
dc.relation.referencesFokianos, K. (2012). Count Time Series Models. Handbook of Statistics, 30, 315-347. https://doi.org/10.1016/B978-0-444-53858-1.00012-0spa
dc.relation.referencesFokianos, K., Rahbek, A., & Tjøstheim, D. (2009). Poisson Autoregression. Journal of the American Statistical Association, 104(488), 1430-1439. Consultado el 15 de marzo de 2023, desde http://www.jstor.org/stable/40592351spa
dc.relation.referencesFokianos, K., & Tjøstheim, D. (2011). Log-linear Poisson autoregression. Journal of Multivariate Analysis, 102(3), 563-578. https://doi.org/10.1016/j.jmva.2010.11.002spa
dc.relation.referencesGamerman, D., Abanto-Valle, C., Silva, R., & Martins, T. (2016). Dynamic Bayesian Models for Discrete-Valued Time Series. En R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Pressspa
dc.relation.referencesGoodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning [http://www.deeplearningbook.org]. MIT Pressspa
dc.relation.referencesHe, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778. https://doi.org/10.1109/CVPR.2016.90spa
dc.relation.referencesHewamalage, H., Bergmeir, C., & Bandara, K. (2022). Global models for time series forecasting: A Simulation study. Pattern Recognition, 124, 108441. https://doi.org/10.1016/j.patcog.2021.108441spa
dc.relation.referencesHoppe, R. W. (2006). Chapter 4 Sequential Quadratic Programming. https://www.math.uh.edu/~rohop/fall 06/Chapter4.pdfspa
dc.relation.referencesHyndman, R. J. Focused Workshop: Synthetic Data — Generating time series. En: 2021. https://www.youtube.com/watch?v=F3lWECtFa44&ab_channel=AustralianDataScienceNetworkspa
dc.relation.referencesHyndman, R. J., & Athanasopoulos, G. (2021). Forecasting: Principles and Practice (3rd). OTexts.spa
dc.relation.referencesHyndman, R. J., Kang, Y., Montero-Manso, P., O’Hara-Wild, M., Talagala, T., Wang, E., & Yang, Y. (2023). tsfeatures: Time Series Feature Extraction [https://pkg.robjhyndman.com/tsfeatures/, https://github.com/robjhyndman/tsfeatures]spa
dc.relation.referencesHyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679-688. https://doi.org/10.1016/j.ijforecast.2006.03.001spa
dc.relation.referencesJia, Y. (2018). Some Models for Count Time Series (Tesis doctoral). Clemson University. 105 Sikes Hall, Clemson, SC 29634, Estados Unidos. https://tigerprints.clemson.edu/all dissertations/2213spa
dc.relation.referencesKang, Y., Hyndman, R. J., & Li, F. (2020). GRATIS: GeneRAting TIme Series with diverse and controllable characteristics. Statistical Analysis and Data Mining: The ASA Data Science Journal, 13(4), 354-376. https://doi.org/10.1002/sam.11461spa
dc.relation.referencesLiboschik, T., Fokianos, K., & Fried, R. (2017). tscount: An R Package for Analysis of Count Time Series Following Generalized Linear Models. Journal of Statistical Software, 82(5), 1-51. https://doi.org/10.18637/jss.v082.i05spa
dc.relation.referencesLund, R., & Livsey, J. (2016). Renewal-Based Count Time Series. En R. A. Davis, S. H. Holan, R. Lund & N. Ravishanker (Eds.). CRC Pressspa
dc.relation.referencesMakridakis, S. (1993). Accuracy measures: theoretical and practical concerns. International Journal of Forecasting, 9(4), 527-529. https://doi.org/10.1016/0169-2070(93)90079-3spa
dc.relation.referencesMakridakis, S., Spiliotis, E., & Assimakopoulos, V. (2018). Statistical and Machine Learning forecasting methods: Concerns and ways forward. PLoS ONE. https://doi.org/10.1371/journal.pone.0194889spa
dc.relation.referencesMakridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020). The M4 Competition: 100,000 time series and 61 forecasting methods [M4 Competition]. International Journal of Forecasting, 36(1), 54-74. https://doi.org/10.1016/j.ijforecast.2019.04.014spa
dc.relation.referencesMartín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Jia, Y., Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, ... Xiaoqiang Zheng. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems [Software available from tensorflow.org]. https://www.tensorflow.org/spa
dc.relation.referencesMcCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5. https://doi.org/10.1007/BF02478259spa
dc.relation.referencesMikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Spacespa
dc.relation.referencesMontero-Manso, P., & Hyndman, R. J. (2021). Principles and algorithms for forecasting groups of time series: Locality and globality. International Journal of Forecasting, 37(4), 1632-1653. https://doi.org/10.1016/j.ijforecast.2021.03.004spa
dc.relation.referencesNariswari, R., & Pudjihastuti, H. (2019). Bayesian Forecasting for Time Series of Count Data [The 4th International Conference on Computer Science and Computational Intelligence (ICCSCI 2019) : Enabling Collaboration to Escalate Impact of Research Results for Society]. Procedia Computer Science, 157, 427-435. https://doi.org/10.1016/j.procs.2019.08.235spa
dc.relation.referencesNelder, J. A., & Wedderburn, R. W. M. (1972). Generalized Linear Models. Journal of the Royal Statistical Society. Series A (General), 135(3), 370-384. Consultado el 13 de enero de 2024, desde http://www.jstor.org/stable/2344614spa
dc.relation.referencesNg, A. Y., Katanforoosh, K., & Mourri, Y. B. (2023). Neural Networks and Deep Learning [MOOC]. Coursera. https://www.coursera.org/learn/neural-networks-deep-learningspa
dc.relation.referencesNie, Y., Nguyen, N. H., Sinthong, P., & Kalagnanam, J. (2023). A Time Series is Worth 64 Words: Long-term Forecasting with Transformersspa
dc.relation.referencesNielsen, M. A. (2015). Neural Networks and Deep Learning. Determination Pressspa
dc.relation.referencesNew York City Department of Transportation. (2017). Bicycle Counts for East River Bridges (Historical) [Daily total of bike counts conducted monthly on the Brooklyn Bridge, Manhattan Bridge, Williamsburg Bridge, and Queensboro Bridge. https://data.cityofnewyork.us/Transportation/Bicycle-Counts-for-East-River-Bridges-Historical-/gua4-p9wg]spa
dc.relation.referencesParr, T., & Howard, J. (2018). The Matrix Calculus You Need For Deep Learningspa
dc.relation.referencesPedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825-2830spa
dc.relation.referencesPhuong, M., & Hutter, M. (2022). Formal Algorithms for Transformersspa
dc.relation.referencesR Core Team. (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria. https://www.R-project.org/spa
dc.relation.referencesRuder, S. (2016). An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747. http://arxiv.org/abs/1609.04747spa
dc.relation.referencesRue, H., Martino, S., & Chopin, N. (2009). Approximate Bayesian Inference for Latent Gaussian models by using Integrated Nested Laplace Approximations. Journal of the Royal Statistical Society Series B: Statistical Methodology, 71(2), 319-392. https://doi.org/10.1111/j.1467-9868.2008.00700.xspa
dc.relation.referencesSathish, V., Mukhopadhyay, S., & Tiwari, R. (2020). ARMA Models for Zero Inflated Count Time Series. https://doi.org/10.48550/ARXIV.2004.10732spa
dc.relation.referencesSchmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117. https://doi.org/10.1016/j.neunet.2014.09.003spa
dc.relation.referencesSeabold, S., & Perktold, J. (2010). statsmodels: Econometric and statistical modeling with python. 9th Python in Science Conferencespa
dc.relation.referencesShenstone, L., & Hyndman, R. J. (2005). Stochastic models underlying Croston’s method for intermittent demand forecasting. Journal of Forecasting, 24(6), 389-402. https://doi.org/10.1002/for.963spa
dc.relation.referencesShmueli, G., Bruce, P. C., Yahav, I., Patel, N. R., & Lichtendahl Jr., K. C. (2018). Data mining for business analytics. Wileyspa
dc.relation.referencesShmueli, G., & Lichtendahl, K. C. (2018). Practical time series forecasting with R. Axelrod Schnall Publishersspa
dc.relation.referencesShrivastava, S. (2020). Cross Validation in Time Series. https://medium.com/@soumyachess1496/cross-validation-in-time-series-566ae4981ce4spa
dc.relation.referencesSmith, T. G. (2017). pmdarima: ARIMA estimators for Pythonspa
dc.relation.referencesSrivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(56), 1929-1958. http://jmlr.org/papers/v15/srivastava14a.htmlspa
dc.relation.referencesTerven, J., Cordova-Esparza, D. M., Ramirez-Pedraza, A., & Chavez-Urbiola, E. A. (2023). Loss Functions and Metrics in Deep Learningspa
dc.relation.referencesVaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 5998-6008. http://arxiv.org/abs/1706.03762spa
dc.relation.referencesVirtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., ... SciPy 1.0 Contributors. (2020). SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17, 261-272. https://doi.org/10.1038/s41592-019-0686-2spa
dc.relation.referencesWen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., & Sun, L. (2023). Transformers in Time Series: A Surveyspa
dc.relation.referencesZeng, A., Chen, M.-H., Zhang, L., & Xu, Q. (2022). Are Transformers Effective for Time Series Forecasting? AAAI Conference on Artificial Intelligence. https://api.semanticscholar.org/CorpusID:249097444spa
dc.relation.referencesZhang, A., Lipton, Z. C., Li, M., & Smola, A. J. (2021). Dive into Deep Learning. arXiv preprint arXiv:2106.11342spa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-NoComercial 4.0 Internacionalspa
dc.rights.urihttp://creativecommons.org/licenses/by-nc/4.0/spa
dc.subject.ddc510 - Matemáticas::519 - Probabilidades y matemáticas aplicadasspa
dc.subject.lembAnálisis de series de tiempo
dc.subject.lembProcesos de Poisson
dc.subject.lembRedes neuronales (computadores)
dc.subject.lembAprendizaje automático (inteligencia artificial)
dc.subject.proposalModelos lineales generalizadosspa
dc.subject.proposalpredicciónspa
dc.subject.proposaldatos de conteosspa
dc.subject.proposalregresión Poissonspa
dc.subject.proposalseries de tiempospa
dc.subject.proposalredes neuronales recurrentesspa
dc.subject.proposaltransformersspa
dc.subject.proposalGeneralized linear modelseng
dc.subject.proposalPredictioneng
dc.subject.proposalCount dataeng
dc.subject.proposalPoisson regressioneng
dc.subject.proposalState space modelseng
dc.subject.proposalTime serieseng
dc.subject.proposalNeural networkseng
dc.subject.proposalRecurrent neural networkseng
dc.subject.proposalTransformerseng
dc.titleAnálisis comparativo de metodologías de pronóstico para múltiples series de tiempo de conteosspa
dc.title.translatedComparative analysis of forecasting methodologies for multiple time series of countseng
dc.typeTrabajo de grado - Maestríaspa
dc.type.coarhttp://purl.org/coar/resource_type/c_bdccspa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/masterThesisspa
dc.type.redcolhttp://purl.org/redcol/resource_type/TMspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
dcterms.audience.professionaldevelopmentAdministradoresspa
dcterms.audience.professionaldevelopmentEstudiantesspa
dcterms.audience.professionaldevelopmentInvestigadoresspa
dcterms.audience.professionaldevelopmentMaestrosspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Files

Original bundle

Name: 1152456210.2024.pdf
Size: 3.23 MB
Format: Adobe Portable Document Format
Description: Tesis Maestría en Ciencias - Estadística

License bundle

Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission