Deep learning for top tagging

dc.contributor.advisorSandoval Usme, Carlos Eduardo
dc.contributor.authorRiaño Reyes, Diana Catalina
dc.contributor.researchgroupGrupo de Partículas Fenyx-Un
dc.date.accessioned2026-02-03T13:34:10Z
dc.date.available2026-02-03T13:34:10Z
dc.date.issued2025
dc.descriptionIlustraciones, diagramas, gráficosspa
dc.description.abstractEl jet tagging, una tarea de clasificación crucial en la física de altas energías, se ha beneficiado cada vez más de la aplicación del deep learning. Mientras que los enfoques anteriores han representado los jets como imágenes o secuencias, los métodos modernos aprovechan las representaciones de nubes de partículas invariantes a la permutación con arquitecturas como el Particle Transformer (ParT). Este trabajo presenta una investigación del modelo ParT, comenzando con un análisis exploratorio de datos (EDA) de las características a nivel de jet y de partícula, seguido del entrenamiento del modelo adaptado a los recursos computacionales disponibles y una rigurosa evaluación de su rendimiento. El análisis revela que el rendimiento superior de arquitecturas como ParT no depende únicamente de la complejidad del modelo, sino que se ve significativamente potenciado por la integración de características informadas por la física. Esto subraya la importancia primordial de la calidad de las características y el conocimiento específico del dominio para garantizar que los modelos de deep learning puedan capturar eficazmente las relaciones físicas subyacentes en tareas de alta discriminación. La ingeniería de features es el factor más crítico, elevando el potencial de descubrimiento en más de un 300 % para jets de heavy flavor. Seguidamente, la arquitectura del modelo es decisiva, con ParT mejorando el descubrimiento en un 60 % en promedio. El tamaño del dataset tiene un impacto secundario. (Texto tomado de la fuente)spa
dc.description.abstractJet tagging, a critical classification task in high-energy physics, has increasingly benefited from the application of deep learning. While previous approaches have represented jets as images or sequences, modern methods leverage permutation-invariant particle cloud representations with architectures like the Particle Transformer (ParT). This work presents an investigation of the ParT model, beginning with an exploratory data analysis (EDA) of jet and particle-level features, followed by model training adapted to available computational resources and a rigorous performance evaluation. The analysis reveals that the superior performance of architectures like ParT is not solely dependent on the complexity of the model but is significantly enhanced by the integration of physics-informed features. This underscores the paramount importance of feature quality and domain-specific knowledge in ensuring that deep learning models can effectively capture the underlying physical relationships in high-discrimination tasks. Comprehensive feature engineering is the most critical performance driver, elevating discovery potential by over 300% for heavy-flavor jets. Model architecture is the second decisive factor; the ParT model increases the average discovery potential by 60%. In comparison, dataset size has a secondary, more modest impact.eng
dc.description.degreelevelMaestría
dc.description.degreenameMagíster en Estadística
dc.description.researchareaFísica de Altas Energías
dc.format.extentxiii, 73 páginas
dc.format.mimetypeapplication/pdf
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/89373
dc.language.isoeng
dc.publisherUniversidad Nacional de Colombia
dc.publisher.branchUniversidad Nacional de Colombia - Sede Bogotá
dc.publisher.facultyFacultad de Ciencias
dc.publisher.placeBogotá, Colombia
dc.publisher.programBogotá - Ciencias - Maestría en Ciencias - Estadística
dc.relation.referencesA. R. Webb and David Lowe. A theorem connecting adaptive feed-forward layered networks and nonlinear discriminant analysis. Memorandum 4209, Royal Signals and Radar Establishment, 1988.
dc.relation.referencesG. James, D. Witten, T. Hastie, R. Tibshirani, and J. Taylor. An Introduction to Statistical Learning: with Applications in Python. Springer Texts in Statistics. Springer International Publishing, 2023.
dc.relation.referencesFionn Murtagh. Multilayer perceptrons for classification and regression. Neurocomputing, 2(5):183–197, 1991.
dc.relation.referencesR. Paul Gorman and Terrence J. Sejnowski. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1(1):75–89, 1988.
dc.relation.referencesKurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
dc.relation.referencesD. P. Casasent and E. Barnard. Adaptive-clustering optical neural net. Applied Optics, 29(17):2603–2615, Jun 1990.
dc.relation.referencesEgor Dyukarev. Comparison of artificial neural network and regression models for filling temporal gaps of meteorological variables time series. Applied Sciences, 13(4), 2023.
dc.relation.referencesP. Smyth and J. Mellstrom. Initial results on fault diagnosis of DSN antenna control assemblies using pattern recognition techniques. TDA Progress Report, 42:136–151, 1990.
dc.relation.referencesKevin P. Murphy. Probabilistic Machine Learning: An introduction. MIT Press, 2022.
dc.relation.referencesIan Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
dc.relation.referencesY. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, volume 2, pages 396–404. Morgan Kaufmann, 1990. NIPS 1989, Denver, CO, USA.
dc.relation.referencesY. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages II–97, Washington, DC, USA, 2004. IEEE.
dc.relation.referencesY. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 253–256, Paris, France, 2010. IEEE.
dc.relation.referencesYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
dc.relation.referencesLaith Alzubaidi, Jinglan Zhang, Amjad J Humaidi, Ayad Al-Dujaili, Ye Duan, Omran Al-Shamma, José Santamaría, Mohammed A Fadhel, Muthana Al-Amidie, and Laith Farhan. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1):53, 2021.
dc.relation.referencesDavid H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160:106–154, 1962.
dc.relation.referencesDaniel J. Felleman and David C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1):1–47, 1991.
dc.relation.referencesJ. Fieres, J. Schemmel, and K. Meier. Training convolutional networks of threshold neurons suited for low-power hardware implementation. In Proceedings of the International Joint Conference on Neural Networks (IJCNN'06), pages 21–28. IEEE, 2006.
dc.relation.referencesC. Nebauer. Evaluation of convolutional neural networks for visual recognition. IEEE Transactions on Neural Networks, 9(4):685–696, 1998.
dc.relation.referencesI. Arel, D. C. Rose, and T. P. Karnowski. Deep machine learning—a new frontier in artificial intelligence research [research frontier]. IEEE Computational Intelligence Magazine, 5(4):13–18, 2010.
dc.relation.referencesE. A. Smirnov, D. M. Timoshenko, and S. N. Andrianov. Comparison of regularization methods for ImageNet classification with deep convolutional neural networks. AASRI Procedia, 6:89–94, 2014.
dc.relation.referencesY. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
dc.relation.referencesF. H. C. Tivive and A. Bouzerdoum. Efficient training algorithms for a class of shunting inhibitory convolutional neural networks. IEEE Transactions on Neural Networks, 16(3):541–556, 2005.
dc.relation.referencesA. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
dc.relation.referencesM. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. arXiv preprint, arXiv:1301.3557, 2013.
dc.relation.referencesJ. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pages 647–655, 2014.
dc.relation.referencesC. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In Advances in Neural Information Processing Systems, pages 2553–2561, 2013.
dc.relation.referencesD. Timoshenko and V. Grishkin. Composite face detection method for automatic moderation of user avatars. In Computer Science and Information Technologies (CSIT'13), 2013.
dc.relation.referencesT. N. Sainath, B. Kingsbury, A. R. Mohamed, G. E. Dahl, G. Saon, H. Soltau, and B. Ramabhadran. Improvements to deep convolutional neural networks for LVCSR. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 315–320, 2013.
dc.relation.referencesX. Luo, R. Shen, J. Hu, J. Deng, L. Hu, and Q. Guan. A deep convolution neural network model for vehicle recognition and face recognition. Procedia Computer Science, 107:715–720, 2017.
dc.relation.referencesH. Pratt, F. Coenen, D. M. Broadbent, S. P. Harding, and Y. Zheng. Convolutional neural networks for diabetic retinopathy. Procedia Computer Science, 90:200–205, 2016.
dc.relation.referencesA. Uçar. Deep convolutional neural networks for facial expression recognition. In Proceedings of the IEEE International Conference on Innovations in Intelligent Systems and Applications (INISTA), pages 371–375, July 2017.
dc.relation.referencesSakshi Indolia, Anil Kumar Goswami, S.P. Mishra, and Pooja Asopa. Conceptual understanding of convolutional neural network- a deep learning approach. Procedia Computer Science, 132:679–688, 2018. International Conference on Computational Intelligence and Data Science.
dc.relation.referencesDavid E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
dc.relation.referencesSepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, TU München, 1991.
dc.relation.referencesYoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5:157–166, 1994.
dc.relation.referencesSepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9:1735–1780, 1997.
dc.relation.referencesCorentin Tallec and Yann Ollivier. Can recurrent neural networks warp time?, 2018.
dc.relation.referencesShivangi Mahto, Vy A. Vo, Javier S. Turek, and Alexander G. Huth. Multi-timescale representation learning in lstm language models, 2021.
dc.relation.referencesDaniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased lstm: Accelerating recurrent network training for long or event-based sequences, 2016.
dc.relation.referencesJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling, 2014.
dc.relation.referencesKhaled Alomar, Halil Ibrahim Aysel, and Xiaohao Cai. Rnns, cnns and transformers in human action recognition: A survey and a hybrid model, 2024.
dc.relation.referencesDrew Linsley, Junkyung Kim, Vijay Veerabadran, and Thomas Serre. Learning long- range spatial dependencies with horizontal gated-recurrent units, 2019.
dc.relation.referencesWeichuan Zhang, Jiale Wang, Tao Lei, Zicheng Pan, Mohammad Aminul Islam, Yongsheng Gao, and Changming Sun. A lightweight transformer guided by features from multiple receptive fields for few-shot fine-grained image classification, 2024.
dc.relation.referencesYuxu Peng, Xin Yi, Dengyong Zhang, Lebing Zhang, Yuehong Tian, and Zhifeng Zhou. Convmedsegnet: A multi-receptive field depthwise convolutional neural network for medical image segmentation. Computers in Biology and Medicine, 176:108559, 2024.
dc.relation.referencesXu Ma, Mengsheng Chen, Junhui Zhang, Lijuan Song, Fang Du, and Zhenhua Yu. Parf-net: integrating pixel-wise adaptive receptive fields into hybrid transformer-cnn network for medical image segmentation, 2025.
dc.relation.referencesIbomoiye Domor Mienye, Theo G. Swart, and George Obaido. Recurrent neural networks: A comprehensive review of architectures, variants, and applications. Information, 15(9), 2024.
dc.relation.referencesAlex Graves. Generating sequences with recurrent neural networks, 2013.
dc.relation.referencesDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate, 2014.
dc.relation.referencesIlya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27, pages 3104–3112. Curran Associates, Inc., 2014.
dc.relation.referencesMinh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation, 2015.
dc.relation.referencesGianni Brauwers and Flavius Frasincar. A general survey on attention mechanisms in deep learning. IEEE Transactions on Knowledge and Data Engineering, 35(4):3279–3298, 2021.
dc.relation.referencesY. Hu, Y. Wong, W. Wei, Y. Du, M. Kankanhalli, and W. Geng. A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition. PLoS ONE, 13(10):e0206049, 2018.
dc.relation.referencesAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30, pages 5998–6008. Curran Associates, Inc., 2017.
dc.relation.referencesGuy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding space, 2022.
dc.relation.referencesJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.
dc.relation.referencesAston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola. Dive into deep learning, 2023.
dc.relation.referencesDan Guest, Kyle Cranmer, and Daniel Whiteson. Deep Learning and Its Application to LHC Physics. Annu. Rev. Nucl. Part. Sci., 68:161–181, 2018.
dc.relation.referencesHuilin Qu and Loukas Gouskos. Jet tagging via particle clouds. Phys. Rev. D, 101(5):056019, 2020.
dc.relation.referencesJoep Geuskens, Nishank Gite, Michael Krämer, Vinicius Mikuni, Alexander Mück, Benjamin Nachman, and Humberto Reyes-González. The Fundamental Limit of Jet Tagging. 2025.
dc.relation.referencesATLAS Collaboration. Higgs into fermions. https://atlas.cern/updates/news/higgs-fermions, November 2013. ATLAS experiment preliminary results showing evidence that the Higgs boson decays to two taus.
dc.relation.referencesDaniel Guest, Julian Collado, Pierre Baldi, Shih-Chieh Hsu, Gregor Urban, and Daniel Whiteson. Jet Flavor Classification in High-Energy Physics with Deep Neural Networks. Phys. Rev. D, 94(11):112002, 2016.
dc.relation.referencesHuilin Qu, Congqiao Li, and Sitian Qian. Particle Transformer for Jet Tagging. 2022. ICML 2022.
dc.relation.referencesAndrew J. Larkoski, Ian Moult, and Benjamin Nachman. Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning. Phys. Rept., 841:1–63, 2020.
dc.relation.referencesAntimo Cagnotta, Francesco Carnevali, and Agostino De Iorio. Machine Learning Applications for Jet Tagging in the CMS Experiment. Appl. Sci., 12(20):10574, 2022.
dc.relation.referencesJ. A. Aguilar-Saavedra and B. Zaldívar. Jet tagging made easy. 2020.
dc.relation.referencesHuilin Qu. Jet tagging with machine learning. In Machine Learning at HEP Workshop 2024, High Energy Accelerator Research Organization (KEK), Japan, January 2024. KEK. Presentation slides.
dc.relation.referencesCharles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
dc.relation.referencesYue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph cnn for learning on point clouds. ACM Trans. Graph., 38(5), October 2019.
dc.relation.referencesFrédéric A. Dreyer and Huilin Qu. Jet tagging in the lund plane with graph networks, 2021.
dc.relation.referencesG. Aad et al. Measurement of the flavour composition of dijet events in pp collisions at √s = 7 TeV with the ATLAS detector. Eur. Phys. J. C, 75(1):17, 2015.
dc.relation.referencesALICE Collaboration. Heavy-flavor jets in heavy-ion collisions. EPJ Web Conf., 276:02014, 2023.
dc.relation.referencesCMS Collaboration. Jet flavor tagging. CMS Open Data Workshop, 2024.
dc.relation.referencesG. Aad et al. Performance of b-Jet Identification in the ATLAS Experiment. J. Instrum., 11:P04008, 2016.
dc.relation.referencesAndrew J. Larkoski, Simone Marzani, Gregory Soyez, and Jesse Thaler. Soft Drop. JHEP, 05:146, 2014.
dc.relation.referencesFrédéric A. Dreyer, Lina Necib, Gregory Soyez, and Jesse Thaler. The jet mass distribution after Soft Drop. JHEP, 06:093, 2018.
dc.relation.referencesJesse Thaler and Ken Van Tilburg. Identifying Boosted Objects with N-subjettiness. JHEP, 03:015, 2011.
dc.relation.referencesJesse Thaler and Ken Van Tilburg. Maximizing Boosted Top Identification by Minimizing N-subjettiness. JHEP, 02:093, 2012.
dc.relation.referencesG. Aad et al. Computing N-subjettiness for boosted jets. 2018.
dc.relation.referencesHuilin Qu, Congqiao Li, and Sitian Qian. JetClass: A large-scale dataset for deep learning in jet physics, jun 2022.
dc.relation.referencesCatalina Riaño and Carlos Sandoval. Particle transformer review. https://github.com/dcrianor/particle_transformer_v2, 2025. Fork from https://github.com/jet-universe/particle_transformer.
dc.relation.referencesM.K. Singh, H.B. Li, H.T. Wong, V. Sharma, and L. Singh. Projections of discovery potentials from expected background. Physical Review D, 109(3), February 2024.
dc.relation.referencesSteven Weinberg. Half a Century of the Standard Model. Phys. Rev. Lett., 121:220001, 2018.
dc.relation.referencesThe Editors of Encyclopaedia Britannica. Standard model. https://www.britannica.com/science/standard-model, 2024. Accessed: 2025-09-09.
dc.relation.referencesAidan Robson. The Standard Model of Particle Physics. https://indico.cern.ch/event/528094/contributions/2171249/attachments/1319109/1977592/ASPStandardModelSmall.pdf, 2016. Lecture at the African School of Fundamental Physics and Applications 2016, Accessed: 2025-09-09.
dc.relation.referencesDavid Tong. The Standard Model. https://www.damtp.cam.ac.uk/user/tong/sm/standardmodel.pdf, 2022. Lecture Notes, University of Cambridge, Accessed: 2025-09-09.
dc.relation.referencesTa-Pei Cheng and Ling-Fong Li. Gauge Theory of Elementary Particle Physics. Oxford University Press, Oxford, 1984.
dc.relation.referencesJ. J. Bevelacqua. Standard model of particle physics—a health physics perspective. Health Physics, 99(5), 2010.
dc.relation.referencesW. N. Cottingham and D. A. Greenwood. An Introduction to the Standard Model of Particle Physics. Cambridge University Press, 2nd edition, 2007.
dc.relation.referencesL. Evans. The Large Hadron Collider. In Proceedings of the 2006 SLAC Summer Institute on Particle Physics (SSI 2006), 2006. eConf C060717.
dc.relation.referencesHeather M. Gray. LHC Experiments. PoS, TASI2022:004, 2024.
dc.relation.referencesCMS Collaboration. The CMS Detector. https://cms.cern/detector, 2025. Accessed: 2025-09-09.
dc.relation.referencesCMS Collaboration. Identifying Tracks. https://cms.cern/detector/identifying-tracks, 2025. Accessed: 2025-09-09.
dc.relation.referencesATLAS Collaboration. Operation and performance of the ATLAS semiconductor tracker in LHC Run 2. JINST, 17:P01013, 2022.
dc.relation.referencesCMS Collaboration. The CMS experiment at the CERN LHC. JINST, 3:S08004, 2008.
dc.relation.referencesJohn Alison. The Road to Discovery: Detector Alignment, Electron Identification, Particle Misidentification, WW Physics, and the Discovery of the Higgs Boson. PhD thesis, University of Pennsylvania, 2012. Chapter 3: The ATLAS Experiment.
dc.relation.referencesCMS Collaboration. Performance of the CMS muon detector and muon reconstruction with proton-proton collisions at √s = 13 TeV. JINST, 13:P06015, 2018.
dc.relation.referencesATLAS Collaboration. The ATLAS Experiment at the CERN Large Hadron Collider. JINST, 3:S08003, 2008.
dc.relation.referencesLorenzo Rossini. Modeling Radiation Damage to Pixel Sensors in the ATLAS Detector. PoS, EPS-HEP2019:126, 2020.
dc.relation.referencesATLAS Collaboration. ATLAS schematics. https://atlas.cern/Resources/Schematics, 2024. Accessed: 2025-09-14.
dc.relation.referencesCMS Collaboration. The CMS detector. https://cms.cern/detector, 2024. Accessed: 2025-09-14.
dc.relation.referencesRyan Atkin. Review of jet reconstruction algorithms. J. Phys. Conf. Ser., 645(1):012008, 2015.
dc.relation.referencesMatteo Cacciari. Jet algorithms and jet substructure. https://indico.ihep.ac.cn/event/5966/contributions/78775/attachments/40666/46951/jets.pdf, 2016. Lecture at the 2016 CTEQ-IHEP Summer School on QCD and Electroweak Theory, Accessed: 2025-09-09.
dc.relation.referencesMatteo Cacciari, Gavin P. Salam, and Gregory Soyez. The anti-k_t jet clustering algorithm. JHEP, 04:063, 2008.
dc.rights.accessrightsinfo:eu-repo/semantics/openAccess
dc.rights.licenseReconocimiento 4.0 Internacional
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subject.blaaAprendizaje profundo (Inteligencia artificial)spa
dc.subject.blaaFísica de partículasspa
dc.subject.blaaColisiones (Física nuclear)spa
dc.subject.ddc530 - Física::539 - Física moderna
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
dc.subject.lembRedes neuronales (Computadores)spa
dc.subject.lembNeural networks (Computer science)eng
dc.subject.proposalHEPeng
dc.subject.proposalDeep Learningeng
dc.subject.proposalAttention mechanismseng
dc.subject.proposalStatistical learningeng
dc.subject.proposalJet taggingeng
dc.subject.proposalFísica de partículasspa
dc.subject.proposalEtiquetado de jetsspa
dc.subject.proposalAprendizaje Profundospa
dc.subject.proposalMecanismos de atenciónspa
dc.subject.proposalAprendizaje estadísticospa
dc.titleDeep learning for top taggingeng
dc.title.translatedAprendizaje automático para etiquetado de quarks Topspa
dc.typeTrabajo de grado - Maestría
dc.type.coarhttp://purl.org/coar/resource_type/c_bdcc
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.contentText
dc.type.driverinfo:eu-repo/semantics/masterThesis
dc.type.redcolhttp://purl.org/redcol/resource_type/TM
dc.type.versioninfo:eu-repo/semantics/acceptedVersion
dcterms.audience.professionaldevelopmentInvestigadores
dcterms.audience.professionaldevelopmentEstudiantes
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2

Files

Original bundle

Name: Tesis_Final_Repositorio.pdf
Size: 6.4 MB
Format: Adobe Portable Document Format
Description: Master's thesis in Statistics (Tesis de Maestría en Estadística)

License bundle

Name: license.txt
Size: 5.74 KB
Format: Item-specific license agreed upon to submission