Exploración de las redes neuronales para la proyección de la máxima pérdida esperada de una póliza de seguros: aplicación para un seguro previsional

dc.contributor.advisorGómez Vélez, César Augusto
dc.contributor.authorOrtega Alzate, Santiago
dc.date.accessioned2022-03-16T19:57:53Z
dc.date.available2022-03-16T19:57:53Z
dc.date.issued2021-03-11
dc.description.abstractDurante la definición del precio comercial de una póliza de seguros, las aseguradoras deben cuantificar el riesgo asumido para constituir reservas suficientes y hacer frente a las reclamaciones futuras. Durante ese proceso, es determinante estimar el número de siniestros a pagar por la póliza (frecuencia de la póliza). En este trabajo de grado se estima la frecuencia para una póliza del seguro previsional con vigencia en el año 2020, para la cobertura de sobrevivencia. Esto se realiza mediante el ajuste de un modelo de regresión cuasi-Poisson y el entrenamiento de una red neuronal recurrente. Se requirió el histórico de siniestros pagados en los t = 10 años anteriores al 2020. Debido a que el pago de los siniestros cubiertos por el seguro previsional puede realizarse en cualquier momento posterior a la ocurrencia de estos, es necesario el cálculo intermedio de una variable conocida como Siniestros Incurridos pero no Reportados (Incurred But Not Reported - IBNR, por sus siglas en inglés): un siniestro ocurrido en 2019 puede ser avisado y pagado en el año 2024, por lo que aún se desconoce. De esta manera, primero se estimó la IBNR para las pólizas 2010 − 2019 partiendo de un triángulo de siniestros incurridos. Para el cálculo de la IBNR y la frecuencia existen metodologías con bases teóricas bien establecidas, tales como la regresión cuasi-Poisson. Sin embargo, es común encontrar en la literatura avances sobre el uso de inteligencia artificial en este tipo de aplicaciones. Por lo tanto, se aplican las redes neuronales recurrentes - RNN con el propósito de contrastar su desempeño con el de la regresión cuasi-Poisson y contribuir con este esfuerzo. En general, los resultados de IBNR y frecuencia son inferiores cuando se utiliza la RNN, en comparación con los resultados vía regresión cuasi-Poisson, sobre todo en las pólizas más recientes (2015 − 2019). Una intuición primaria sería que la RNN tiende a subestimar el riesgo a cubrir por la póliza.
Probablemente, se necesitan más datos para la estimación (como, por ejemplo, características de los asegurados). En conclusión, el uso de RNN se presenta como una alternativa prometedora para el pronóstico de la frecuencia de una póliza. En futuros estudios, se recomienda incluir, durante el entrenamiento de esta, variables adicionales vinculadas al riesgo que se busca tarifar y que describan mejor a la población asegurada. (Texto tomado de la fuente)spa
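The abstract's intermediate IBNR step starts from a triangle of cumulative incurred claims and develops each origin year to ultimate. A minimal Chain-Ladder sketch in Python, using a hypothetical toy triangle (illustrative values only, not the thesis data or code):

```python
def chain_ladder_ibnr(triangle):
    """Chain-Ladder IBNR from a cumulative incurred-claims triangle.

    triangle: list of rows; row i (origin year i) holds the n - i observed
    cumulative values, so the last entry of each row is the latest diagonal.
    Returns (age-to-age development factors, IBNR per origin year).
    """
    n = len(triangle)
    # Development factor f_j = sum of column j+1 over sum of column j,
    # restricted to rows where both columns are observed.
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    ibnr = []
    for row in triangle:
        # Develop the latest observed value to ultimate with the
        # remaining factors; IBNR is the difference.
        ultimate = row[-1]
        for j in range(len(row) - 1, n - 1):
            ultimate *= factors[j]
        ibnr.append(ultimate - row[-1])
    return factors, ibnr
```

Each origin year's latest diagonal value is multiplied by the product of its remaining development factors; the excess over the latest observed value is the IBNR reserve for that year.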
dc.description.abstractIn the pricing of an insurance policy, insurance companies must quantify the value of the assumed risk in order to constitute sufficient reserves and face future claims. During that process, it is highly important to estimate the number of claims to be paid by the policy (frequency of the policy). In this thesis, the frequency is estimated for a pension insurance policy effective in the year 2020, specifically for survival coverage. This is done by fitting a quasi-Poisson regression model and training a recurrent neural network. The input is the historical claims count of the t = 10 years prior to 2020. Because the payment of the claims covered by pension insurance can be made at any time after their occurrence, the intermediate calculation of a variable known as Incurred but Not Reported claims - IBNR is required (a loss that occurred in 2019 can be notified and paid in the year 2024, so it is still unknown). Thus, the IBNR was first estimated for the 2010 − 2019 policies using a triangle of incurred claims. For the calculation of the IBNR and the frequency, there are methodologies with well-established theoretical bases, such as quasi-Poisson regression. However, advances in the use of artificial intelligence are common in the literature for this type of application. Therefore, recurrent neural networks - RNN are applied with the purpose of contrasting their performance with that of quasi-Poisson regression and contributing to this effort. In general, the IBNR and frequency estimates are lower when using the RNN than when using quasi-Poisson regression, especially for the more recent policies (2015 − 2019). Since the quasi-Poisson regression model produced higher estimates, it can be inferred that the RNN tends to underestimate the risk to be covered by the policy.
The data considered in the quasi-Poisson estimation (the number of insured people) probably provide more information to the model than the sequential input patterns used by the RNN. In conclusion, the use of RNN is presented as a promising alternative for forecasting the frequency of a policy. In future studies, it is recommended to include in the neural network training additional variables that are linked to the risk being priced and that better describe the insured population.eng
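The quasi-Poisson model referenced in both abstracts keeps the Poisson mean structure but estimates a dispersion parameter from the Pearson statistic, so that standard errors can be inflated when claim counts are overdispersed. A minimal sketch of that dispersion estimate (illustrative numbers, not the thesis data):

```python
def pearson_dispersion(y, mu, n_params):
    """Quasi-Poisson dispersion estimate: Pearson chi-square divided by the
    residual degrees of freedom. Close to 1 under an ordinary Poisson model;
    values well above 1 signal overdispersion.

    y: observed claim counts; mu: fitted Poisson means; n_params: number of
    fitted regression coefficients.
    """
    chi2 = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    return chi2 / (len(y) - n_params)
```

Under quasi-Poisson, the regression coefficients are identical to the ordinary Poisson fit; only their standard errors are multiplied by the square root of this dispersion estimate.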
dc.description.curricularareaÁrea Curricular Estadísticaspa
dc.description.degreelevelMaestríaspa
dc.description.degreenameMagíster en Ciencias - Estadísticaspa
dc.description.researchareaMétodos estadísticos en finanzas y actuariaspa
dc.format.extentxvi, 101 páginasspa
dc.format.mimetypeapplication/pdfspa
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/81256
dc.language.isospaspa
dc.publisherUniversidad Nacional de Colombiaspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Medellínspa
dc.publisher.departmentEscuela de Estadísticaspa
dc.publisher.facultyFacultad de Cienciasspa
dc.publisher.placeMedellín, Colombiaspa
dc.publisher.programMedellín - Ciencias - Maestría en Ciencias - Estadísticaspa
dc.relation.referencesAckley, D. H., Hinton, G. E., y Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive science, 9 (1), 147–169.spa
dc.relation.referencesAgarap, A. F. (2018). Deep learning using rectified linear units (ReLu). arXiv preprint arXiv:1803.08375 .spa
dc.relation.referencesAI in the 1980s and Beyond: An MIT Survey. (s.f.).spa
dc.relation.referencesAnderson, J. A., Silverstein, J. W., Ritz, S. A., y Jones, R. S. (1977). Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychological review, 84 (5), 413.spa
dc.relation.referencesBianchi, F. M., Maiorino, E., Kampffmeyer, M. C., Rizzi, A., y Jenssen, R. (2017). Recurrent neural networks for short-term load forecasting: an overview and comparative analysis.spa
dc.relation.referencesInternational Accounting Standards Board. (2015). Proyecto de Norma Marco Conceptual para la Información Financiera.spa
dc.relation.referencesBornhuetter, R., y Ferguson, R. (1972). The actuary and IBNR. Proceedings of the Casualty Actuarial Society, 59, 181–195.spa
dc.relation.referencesCarpenter, G. A., y Grossberg, S. (1985). Category learning and adaptive pattern recognition: A neural network model. En Proceedings, Third Army Conference on Applied Mathematics and Computing, ARO Report (pp. 86–1).spa
dc.relation.referencesCaruana, R. (1997). Multitask learning. Machine learning, 28 (1), 41–75.spa
dc.relation.referencesCastañer, A., y Claramunt, M. (2017). Solvencia II (2ªed.). Barcelona, España: Departamento de Matemática Económica, Financiera y Actuarial de la Universidad de Barcelona.spa
dc.relation.referencesCayre, M., Malaterre, J., Scotto-Lomassese, S., Strambi, C., y Strambi, A. (2002). The common properties of neurogenesis in the adult brain: from invertebrates to vertebrates. Comparative Biochemistry and Physiology Part B: Biochemistry and Molecular Biology, 132 (1), 1–15.spa
dc.relation.referencesCharpentier, A. (2014). Computational actuarial science with R. CRC press.spa
dc.relation.referencesChe, Z., Purushotham, S., Cho, K., Sontag, D., y Liu, Y. (2018). Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8 (1), 1–12.spa
dc.relation.referencesCho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., y Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.spa
dc.relation.referencesCiaburro, G., y Venkateswaran, B. (2017). Neural networks with R: Smart models using CNN, RNN, deep learning, and artificial intelligence principles. Packt Publishing Ltd.spa
dc.relation.referencesClaramunt, M., y Costa, T. (2003). Matemática actuarial no vida: Un enfoque práctico. En Colección de publicaciones (Vol. 63). Barcelona, España: Departamento de Matemática Económica, Financiera y Actuarial de la Universidad de Barcelona.spa
dc.relation.referencesConnor, J. T., Martin, R. D., y Atlas, L. E. (1994). Recurrent neural networks and robust time series prediction. IEEE transactions on neural networks, 5 (2), 240–254.spa
dc.relation.referencesConsul, P., y Famoye, F. (1992). Generalized Poisson regression model. Communications in Statistics-Theory and Methods, 21 (1), 89–109.spa
dc.relation.referencesDecreto 2973. (2013). Bogotá DC, Colombia: Ministerio de Hacienda y Crédito Público.spa
dc.relation.referencesDey, R., y Salem, F. M. (2017). Gate-variants of gated recurrent unit (GRU) neural networks. En 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS).spa
dc.relation.referencesDuan, J.-C., y Yu, M.-T. (1999). Capital standard, forbearance and deposit insurance pricing under GARCH. Journal of banking & finance, 23 (11), 1691–1706.spa
dc.relation.referencesDubey, A. K., y Jain, V. (2019). Comparative study of convolution neural network’s relu and leaky-relu activation functions. En Applications of Computing, Automation and Wireless Systems in Electrical Engineering (pp. 873–880). Springer.spa
dc.relation.referencesEckle, K., y Schmidt-Hieber, J. (2019). A comparison of deep networks with relu activation function and linear spline-type methods. Neural Networks, 110 , 232–242.spa
dc.relation.referencesEngland, P., y Verrall, R. (1998). Analytic and bootstrap estimates of prediction errors in claims reserving. Insurance: Mathematics and Economics, 25 (3), 459–478.spa
dc.relation.referencesEngland, P., y Verrall, R. (2002). Stochastic claims reserving in general insurance. British Actuarial Journal, 8 (3), 443-518.spa
dc.relation.referencesFausett, L. V. (2006). Fundamentals of neural networks: architectures, algorithms and applications. Pearson Education India.spa
dc.relation.referencesFernandez-Arjona, L. (2021). A neural network model for solvency calculations in life insurance. Annals of Actuarial Science, 15 (2), 259–275.spa
dc.relation.referencesFodor, J. A., y Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28 (1-2), 3–71.spa
dc.relation.referencesForsyth, R. (1989). Neural Works Professional II. Expert Systems, 6 (2), 116–120.spa
dc.relation.referencesFrees, E. W. (2009). Regression modeling with actuarial and financial applications. Cambridge University Press.spa
dc.relation.referencesFreund, Y., y Schapire, R. E. (1999). Large margin classification using the perceptron algorithm. Machine learning, 37 (3), 277–296.spa
dc.relation.referencesFriedland, J. (2010). Estimating Unpaid Claims Using Basic Techniques. Virginia, United States: Casualty Actuarial Society.spa
dc.relation.referencesFukushima, K. (1975). Cognitron: A self-organizing multilayered neural network. Biological cybernetics, 20 (3), 121–136.spa
dc.relation.referencesFukushima, K. (1988). Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural networks, 1 (2), 119–130.spa
dc.relation.referencesGabrielli, A. (2020). A neural network boosted double overdispersed Poisson claims reserving model. ASTIN Bulletin: The Journal of the IAA, 50 (1), 25–60.spa
dc.relation.referencesGabrielli, A., Richman, R., y Wüthrich, M. V. (2020). Neural network embedding of the over-dispersed Poisson reserving model. Scandinavian Actuarial Journal, 2020 (1), 1–29.spa
dc.relation.referencesGallant, S. I., y cols. (1990). Perceptron-based learning algorithms. IEEE Transactions on neural networks, 1 (2), 179–191.spa
dc.relation.referencesGarzón Tafur, L. C., y Torres Aya, A. L. (2019). Análisis de la metodología Chain Ladder para el cálculo de la reserva técnica de siniestros no avisados (IBNR) del seguro obligatorio de accidentes de tránsito (SOAT) en Colombia (Tesis Doctoral no publicada).spa
dc.relation.referencesGlorot, X., y Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. En Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249–256).spa
dc.relation.referencesGoldberg, D. E., y Holland, J. H. (1988). Genetic algorithms and machine learning.spa
dc.relation.referencesGourieroux, C., Holly, A., y Monfort, A. (1982). Likelihood ratio test, Wald test, and Kuhn-Tucker test in linear models with inequality constraints on the regression parameters. Econometrica: journal of the Econometric Society, 63–80.spa
dc.relation.referencesGreff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., y Schmidhuber, J. (2016). LSTM: A search space odyssey. IEEE transactions on neural networks and learning systems, 28 (10), 2222–2232.spa
dc.relation.referencesGrossberg, S. (1982). Classical and instrumental learning by neural networks. Studies of Mind and Brain, 65–156.spa
dc.relation.referencesGuardiola, A. (1990). Manual de introducción al seguro. Editorial MAPFRE, S. A.spa
dc.relation.referencesGuzmán Gutiérrez, C. S., y cols. (2019). Sistema Pensional Colombiano: implicaciones de la educación financiera sobre las decisiones de traslado de los individuos.spa
dc.relation.referencesHachemeister, C. A., y Stanard, J. N. (1975). IBNR claims count estimation with static lag functions. En Astin colloquium, portimao, portugal.spa
dc.relation.referencesHebb, D. O. (2005). The organization of behavior: A neuropsychological theory. Psychology Press.spa
dc.relation.referencesHochreiter, S. (1998). The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6 (02), 107–116.spa
dc.relation.referencesHochreiter, S., y Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9 (8), 1735–1780.spa
dc.relation.referencesHopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79 (8), 2554–2558.spa
dc.relation.referencesHorn, R. A. (1990). The Hadamard product. En Proc. Symp. Appl. Math (Vol. 40, pp. 87–169).spa
dc.relation.referencesIsmail, N., y Jemain, A. A. (2007). Handling overdispersion with negative binomial and generalized Poisson regression models. En Casualty Actuarial Society Forum (Vol. 2007, pp. 103–58).spa
dc.relation.referencesJong, P., y Heller, G. (2008). Generalized linear models for insurance data. Cambridge: Cambridge University Press.spa
dc.relation.referencesJozefowicz, R., Zaremba, W., y Sutskever, I. (2015). An empirical exploration of recurrent network architectures. En International conference on machine learning (pp. 2342–2350).spa
dc.relation.referencesKaas, R., Goovaerts, M., Dhaene, J., y Denuit, M. (2008). Modern actuarial risk theory: using r (Vol. 128). Springer Science & Business Media.spa
dc.relation.referencesKohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological cybernetics, 43 (1), 59–69.spa
dc.relation.referencesKohonen, T., Lehtiö, P., Rovamo, J., Hyvärinen, J., Bry, K., y Vainio, L. (1977). A principle of neural associative memory. Neuroscience, 2 (6), 1065–1076.spa
dc.relation.referencesKremer, E. (1982). IBNR claims and the two-way model of ANOVA. Scandinavian Actuarial Journal, 1982 (1), 47-55.spa
dc.relation.referencesKvålseth, T. O. (1985). Cautionary note about R². The American Statistician, 39 (4), 279–285.spa
dc.relation.referencesLey número 100. (1993). Bogotá DC, Colombia: Congreso de la República de Colombia.spa
dc.relation.referencesLi, Q., Li, Y., Gao, J., Su, L., Zhao, B., Demirbas, M., . . . Han, J. (2014). A confidence-aware approach for truth discovery on long-tail data. Proceedings of the VLDB Endowment, 8 (4), 425–436.spa
dc.relation.referencesLin, T., Horne, B. G., Tino, P., y Giles, C. L. (1996). Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7 (6), 1329–1338.spa
dc.relation.referencesLópez, R. F., y Fernández, J. M. F. (2008). Las redes neuronales artificiales. Netbiblo.spa
dc.relation.referencesMcCullagh, P., y Nelder, J. (1983). Generalized linear models. Chapman and Hall.spa
dc.relation.referencesMack, T. (1982). Measuring the variability of Chain Ladder reserve estimates. Insurance: Mathematics and Economics, 15 (1), 133-138.spa
dc.relation.referencesMack, T. (1991). A simple parametric model for rating automobile insurance or estimating IBNR claims reserves. ASTIN Bulletin: The Journal of the IAA, 21 (1), 93–109.spa
dc.relation.referencesMack, T. (1993). Distribution-free calculation of the standard error of Chain Ladder reserve estimates. ASTIN Bulletin, 23 (2), 213-225.spa
dc.relation.referencesMack, T. (1999). The standard error of Chain Ladder reserve estimates: Recursive calculation and inclusion of a tail factor. ASTIN Bulletin: The Journal of the IAA, 29 (2), 361–366.spa
dc.relation.referencesMcCulloch, W. S., y Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5 (4), 115–133.spa
dc.relation.referencesMcFadden, D. (1974). The measurement of urban travel demand. Journal of public economics, 3 (4), 303–328.spa
dc.relation.referencesMedsker, L. R., y Jain, L. (2001). Recurrent neural networks. Design and Applications, 5, 64–67.spa
dc.relation.referencesMora, H. E. B. (2018). Cálculo de IBNR, un ejemplo práctico. En Colección de publicaciones. Colombia: Universidad Antonio Nariño.spa
dc.relation.referencesMurphy, D. M. (1994). Unbiased loss development factors. En CAS Forum (Vol. 1, p. 183).spa
dc.relation.referencesNakisa, B., Rastgoo, M. N., Rakotonirainy, A., Maire, F., y Chandran, V. (2018). Long short term memory hyperparameter optimization for a neural network based emotion recognition framework. IEEE Access, 6 , 49325–49338.spa
dc.relation.referencesNorberg, R. (1986). A contribution to modelling of IBNR claims. Scandinavian Actuarial Journal, 1986 (3-4), 155-203.spa
dc.relation.referencesOlazaran, M. (1993). A sociological history of the neural network controversy. Advances in computers, 37 , 335–425.spa
dc.relation.referencesOlbricht, W. (2012). Tree-based methods: a useful tool for life insurance. European Actuarial Journal volume, 2 , 129–147.spa
dc.relation.referencesPascanu, R., Mikolov, T., y Bengio, Y. (2013). On the difficulty of training recurrent neural networks. En International conference on machine learning (pp. 1310–1318).spa
dc.relation.referencesPicard, R. R., y Cook, R. D. (1984). Cross-validation of regression models. Journal of the American Statistical Association, 79 (387), 575–583.spa
dc.relation.referencesPlatt, J. (1991). A resource-allocating network for function interpolation. Neural Computation, 3 (2), 213–225.spa
dc.relation.referencesPröhl, C., y Schmidt, K. D. (2005). Multivariate Chain-Ladder. Dresdner Schriften zur Versicherungsmathematik.spa
dc.relation.referencesQuinlan, P. T. (2003). Connectionist models of development: Developmental processes in real and artificial neural networks. Taylor & Francis.spa
dc.relation.referencesRamachandran, P., Zoph, B., y Le, Q. V. (2017). Searching for activation functions. arXiv preprint arXiv:1710.05941 .spa
dc.relation.referencesRather, A. M., Agarwal, A., y Sastry, V. (2015). Recurrent neural network and a hybrid model for prediction of stock returns. Expert Systems with Applications, 42 (6), 3234–3241.spa
dc.relation.referencesRenato, S. (2003). Neurons and Synapses: The History of Its Discovery. Brain & Mind Magazine, 17 .spa
dc.relation.referencesRenshaw, A., y Verrall, R. (1998). A stochastic model underlying the Chain Ladder technique. British Actuarial Journal, 4 (4), 903–923.spa
dc.relation.referencesRiedmiller, M. (1994). Advanced supervised learning in multi-layer perceptrons—from backpropagation to adaptive learning algorithms. Computer Standards & Interfaces, 16 (3), 265–278.spa
dc.relation.referencesRocco, I., Arandjelovic, R., y Sivic, J. (2017). Convolutional neural network architecture for geometric matching. En Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6148–6157).spa
dc.relation.referencesRogers, T. T., y McClelland, J. L. (2014). Parallel distributed processing at 25: Further explorations in the microstructure of Cognition. Cognitive science, 38 (6), 1024–1077.spa
dc.relation.referencesRosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65 (6), 386.spa
dc.relation.referencesRumelhart, D. E., Durbin, R., Golden, R., y Chauvin, Y. (1995). Backpropagation: The basic theory. Backpropagation: Theory, architectures and applications, 1–34.spa
dc.relation.referencesSahoo, B. B., Jha, R., Singh, A., y Kumar, D. (2019). Long short-term memory (LSTM) recurrent neural network for low-flow hydrological time series forecasting. Acta Geophysica, 67 (5), 1471–1481.spa
dc.relation.referencesSak, H., Senior, A. W., y Beaufays, F. (2014). Long short-term memory recurrent neural network architectures for large scale acoustic modeling.spa
dc.relation.referencesSchiegl, M. (2008). Parameter estimation for a stochastic claim reserving model. Presented at DPG Conference, Berlin.spa
dc.relation.referencesSchiegl, M. (2009). A three dimensional stochastic model for claim reserving. Proceedings 39th ASTIN Colloquium.spa
dc.relation.referencesSchmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with relu activation function. The Annals of Statistics, 48 (4), 1875–1897.spa
dc.relation.referencesShanmuganathan, S. (2016). Artificial neural network modelling: An introduction. En Artificial neural network modelling (pp. 1–14). Springer.spa
dc.relation.referencesSousa, D. A. (2014). Neurociencia educativa: Mente, cerebro y educación (Vol. 131). Narcea Ediciones.spa
dc.relation.referencesSrivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., y Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15 (1), 1929–1958.spa
dc.relation.referencesSuzuki, K. (2013). Artificial neural networks: Architectures and applications. BoD–Books on Demand.spa
dc.relation.referencesTaylor, G. (1986). Claims reserving in non-life insurance. Elsevier Science.spa
dc.relation.referencesTaylor, G. (2000). Loss Reserving: An Actuarial Perspective. Nueva York, Estados Unidos de América: Springer.spa
dc.relation.referencesTaylor, G., y Ashe, F. (1983). Second moments of estimates of outstanding claims. Journal of Econometrics, 23 , 37–61.spa
dc.relation.referencesTsantekidis, A., Passalis, N., Tefas, A., Kanniainen, J., Gabbouj, M., y Iosifidis, A. (2017). Forecasting stock prices from the limit order book using convolutional neural networks. En 2017 IEEE 19th conference on business informatics (CBI) (Vol. 1, pp. 7–12).spa
dc.relation.referencesVenkatesh, Y., y Raja, S. K. (2003). On the classification of multispectral satellite images using the multilayer perceptron. Pattern Recognition, 36 (9), 2161–2175.spa
dc.relation.referencesVerrall, R. (1989). A state space representation of the Chain Ladder linear model. Journal of the Institute of Actuaries, 116 , 589-609.spa
dc.relation.referencesVerrall, R. (1994). Statistical methods for the Chain Ladder technique. Casualty Actuarial Society (1), 393–445.spa
dc.relation.referencesVerrall, R., Nielsen, J. P., y Jessen, A. H. (2010). Prediction of rbns and ibnr claims using claim amounts and claim counts. ASTIN Bulletin: The Journal of the IAA, 40 (2), 871–887.spa
dc.relation.referencesVogl, T. P., Mangis, J., Rigler, A., Zink, W., y Alkon, D. (1988). Accelerating the convergence of the back-propagation method. Biological cybernetics, 59 (4), 257–263.spa
dc.relation.referencesVon der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik , 14 (2), 85–100.spa
dc.relation.referencesWerbos, P. J. (1988). Generalization of backpropagation with application to a recurrent gas market model. Neural networks, 1 (4), 339–356.spa
dc.relation.referencesWerbos, P. J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78 (10), 1550–1560.spa
dc.relation.referencesWhite, H., y cols. (1992). Artificial neural networks. Blackwell Cambridge, Mass.spa
dc.relation.referencesWidrow, B., y Hoff, M. E. (1960). Adaptive switching circuits (Inf. Tec.). Stanford Univ Ca Stanford Electronics Labs.spa
dc.relation.referencesWüthrich, M. (2018). Neural networks applied to Chain-Ladder reserving. European Actuarial Journal, 8, 407–436.spa
dc.relation.referencesWüthrich, M. (2019). Bias regularization in neural network models for general insurance pricing. European Actuarial Journal, 1–24.spa
dc.relation.referencesWüthrich, M., y Merz, M. (2008). Stochastic Claims Reserving Methods in Insurance. New York, United States: Wiley Finance.spa
dc.relation.referencesYegnanarayana, B. (2009). Artificial neural networks. PHI Learning Pvt. Ltd.spa
dc.relation.referencesZhang, Y. (2010). A general multivariate Chain Ladder model. Insurance: Mathematics and Economics, 46 (3), 588-599.spa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-NoComercial-SinDerivadas 4.0 Internacionalspa
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/4.0/spa
dc.subject.ddc510 - Matemáticas::519 - Probabilidades y matemáticas aplicadasspa
dc.subject.lembProcesos de Poisson
dc.subject.lembPoisson processes
dc.subject.lembRedes neuronales (Computación)
dc.subject.lembRiesgo (Seguros)
dc.subject.proposalIBNReng
dc.subject.proposalRNNeng
dc.subject.proposalPóliza de seguro previsionalspa
dc.subject.proposalRedes neuronales artificialesspa
dc.subject.proposalRed neuronal recurrentespa
dc.subject.proposalArtificial neural networkseng
dc.subject.proposalClaims reserveeng
dc.subject.proposalPension insurance policyeng
dc.subject.proposalPoisson regressioneng
dc.subject.proposalRecurrent neural networks - RNNeng
dc.titleExploración de las redes neuronales para la proyección de la máxima pérdida esperada de una póliza de seguros: aplicación para un seguro previsionalspa
dc.title.translatedExploration of neural networks for the projection of the maximum expected loss of an insurance policy: application for pension insurance
dc.typeTrabajo de grado - Maestríaspa
dc.type.coarhttp://purl.org/coar/resource_type/c_bdccspa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/masterThesisspa
dc.type.redcolhttp://purl.org/redcol/resource_type/TMspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
dcterms.audience.professionaldevelopmentEstudiantesspa
dcterms.audience.professionaldevelopmentInvestigadoresspa
dcterms.audience.professionaldevelopmentMaestrosspa
dcterms.audience.professionaldevelopmentPúblico generalspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Archivos

Bloque original
Nombre: 1036952578.2021.pdf
Tamaño: 2.5 MB
Formato: Adobe Portable Document Format
Descripción: Tesis de Maestría en Ciencias - Estadística

Bloque de licencias
Nombre: license.txt
Tamaño: 3.98 KB
Formato: Item-specific license agreed upon to submission