dc.rights.license Atribución-NoComercial-CompartirIgual 4.0 Internacional
dc.contributor.advisor González Osorio, Fabio Augusto
dc.contributor.advisor Cruz Roa, Ángel Alfonso
dc.contributor.author Medina Carrillo, Sebastian Rodrigo
dc.date.accessioned 2024-07-17T02:18:49Z
dc.date.available 2024-07-17T02:18:49Z
dc.date.issued 2024
dc.identifier.uri https://repositorio.unal.edu.co/handle/unal/86504
dc.description.abstract Gleason grading is recognized as the standard method for diagnosing prostate cancer. However, it is subject to significant inter-observer variability because it relies on subjective visual assessment. Current deep learning approaches to grading often require exhaustive pixel-level annotations and are generally limited to patch-level predictions, which do not incorporate slide-level information. Recently, weakly-supervised techniques have shown promise in generating whole-slide label predictions from pathology report labels, which are more readily available. However, these methods frequently lack visual and quantitative interpretability, reinforcing the black-box nature of deep learning models and hindering their clinical adoption. This thesis introduces WiSDoM, a novel weakly-supervised and interpretable approach that leverages attention mechanisms and Kernel Density Matrices for the grading of prostate cancer on whole slides. The method is adaptable to varying levels of supervision. WiSDoM provides multi-scale interpretability through several features: detailed heatmaps that give granular visual insight by highlighting critical morphological features without requiring tissue annotations; example-based phenotypical prototypes that illustrate the internal representation learned by the model, aiding clinical verification; and visual-quantitative measures of model uncertainty, which make the model's decision-making process more transparent, a crucial factor for clinical use. WiSDoM has been validated on core-needle biopsies from two different institutions, demonstrating robust agreement with the reference standard (quadratically weighted kappa of 0.93). WiSDoM achieves state-of-the-art inter-observer agreement on the publicly available PANDA Challenge dataset while remaining clinically interpretable.
dc.description.abstract La clasificación de Gleason se reconoce como el método estándar para diagnosticar el cáncer de próstata. Sin embargo, está sujeta a una variabilidad significativa entre observadores debido a su dependencia de la evaluación visual subjetiva. Los enfoques actuales de aprendizaje profundo para la gradación a menudo requieren anotaciones exhaustivas a nivel de píxel y generalmente se limitan a predicciones a nivel de parche, que no incorporan información a nivel de lámina. Recientemente, las técnicas débilmente supervisadas se han mostrado prometedoras para generar predicciones de etiquetas de láminas completas utilizando etiquetas de informes de patología, que están más fácilmente disponibles. Sin embargo, estos métodos frecuentemente carecen de interpretabilidad visual y cuantitativa, lo que refuerza la naturaleza de caja negra de los modelos de aprendizaje profundo y dificulta su adopción clínica. Esta tesis introduce WiSDoM, un enfoque novedoso, interpretable y débilmente supervisado que aprovecha los mecanismos de atención y las matrices de densidad kernel para la gradación del cáncer de próstata en láminas completas. Este método se adapta a distintos niveles de supervisión. WiSDoM facilita la interpretabilidad a múltiples escalas a través de varias características: mapas de calor detallados que brindan información visual granular al resaltar características morfológicas críticas sin requerir anotaciones de tejido; prototipos fenotípicos basados en ejemplos que ilustran la representación interna aprendida por el modelo, ayudando en la verificación clínica; y medidas visual-cuantitativas de incertidumbre del modelo, que mejoran la transparencia del proceso de toma de decisiones, un factor crucial para el uso clínico. WiSDoM se ha validado en biopsias con aguja gruesa de dos instituciones diferentes, demostrando una sólida concordancia con el estándar de referencia (kappa ponderado cuadráticamente de 0,93). WiSDoM logra un rendimiento de estado del arte en concordancia entre observadores en el conjunto de datos públicamente disponible del PANDA Challenge, además de ser clínicamente interpretable. (Texto tomado de la fuente).
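The abstracts above quote agreement with the reference standard as a quadratically weighted kappa of 0.93. As a minimal illustrative sketch (not the thesis code), the metric can be computed from two raters' ordinal labels; the assumption here is integer grade labels 0..n_classes-1 (e.g., ISUP grade groups):

```python
# Illustrative sketch of quadratically weighted Cohen's kappa, the
# agreement metric quoted in the abstract. Not the thesis implementation;
# labels are assumed to be integers 0..n_classes-1 (e.g., ISUP grades).

from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=6):
    """Cohen's kappa with quadratic disagreement weights over ordinal labels."""
    n = len(rater_a)
    observed = Counter(zip(rater_a, rater_b))  # confusion counts
    marg_a = Counter(rater_a)                  # per-rater marginals
    marg_b = Counter(rater_b)
    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted disagreement expected by chance
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * observed[(i, j)]
            den += w * marg_a[i] * marg_b[j] / n
    return 1.0 - num / den

# Perfect agreement yields kappa = 1.0; chance-level agreement yields 0.0.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]))  # → 1.0
```

Because the weights grow quadratically with the distance between the two grades, the metric penalizes large ordinal disagreements (e.g., grade 1 vs. grade 5) far more than adjacent-grade disagreements, which suits ordered grading scales.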
dc.description.sponsorship Research reported in this publication was partially supported by project BPIN 2019000100-060, "Implementation of a Network for Research, Technological Development and Innovation in Digital Pathology (RedPat) supported by Industry 4.0 technologies", funded with FCTeI resources of the SGR and approved by the OCAD of the FCTeI and MinCiencias, and by project 110192092354, entitled "Program for the Early Detection of Premalignant Lesions and Gastric Cancer in urban, rural and dispersed areas in the Department of Nariño", of MinCiencias call No. 920 of 2022.
dc.format.extent xi, 53 páginas
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Universidad Nacional de Colombia
dc.rights.uri http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject.ddc 610 - Medicina y salud::616 - Enfermedades
dc.subject.ddc 000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
dc.title A Deep Learning model for automatic grading of prostate cancer histopathology images
dc.type Trabajo de grado - Maestría
dc.type.driver info:eu-repo/semantics/masterThesis
dc.type.version info:eu-repo/semantics/acceptedVersion
dc.publisher.program Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.contributor.referee Romero, Eduardo
dc.contributor.referee Tabares Soto, Reinel
dc.contributor.researchgroup Mindlab
dc.description.degreelevel Maestría
dc.description.degreename Magíster en Ingeniería - Ingeniería de Sistemas y Computación
dc.description.researcharea Sistemas Inteligentes
dc.identifier.instname Universidad Nacional de Colombia
dc.identifier.reponame Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl https://repositorio.unal.edu.co/
dc.publisher.faculty Facultad de Ingeniería
dc.publisher.place Bogotá, Colombia
dc.publisher.branch Universidad Nacional de Colombia - Sede Bogotá
dc.relation.references Bulten et al. Artificial intelligence for diagnosis and Gleason grading of prostate cancer: the PANDA challenge. Nature Medicine, 28(1):154–163, Jan 2022. ISSN 1546-170X. doi: 10.1038/s41591-021-01620-2.
dc.relation.references Hyuna Sung, Jacques Ferlay, Rebecca L Siegel, Mathieu Laversanne, Isabelle Soerjomataram, Ahmedin Jemal, and Freddie Bray. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3):209–249, 2021.
dc.relation.references Prashanth Rawla. Epidemiology of prostate cancer. World Journal of Oncology, 10(2):63, 2019.
dc.relation.references Donald F Gleason and George T Mellinger. Prediction of prognosis for prostatic adenocarcinoma by combined histological grading and clinical staging. The Journal of Urology, 111(1):58–64, 1974.
dc.relation.references Mark S Litwin and Hung-Jui Tan. The diagnosis and treatment of prostate cancer: a review. JAMA, 317(24):2532–2542, 2017.
dc.relation.references Jonathan I Epstein, Lars Egevad, Mahul B Amin, Brett Delahunt, John R Srigley, and Peter A Humphrey. The 2014 International Society of Urological Pathology (ISUP) consensus conference on Gleason grading of prostatic carcinoma. The American Journal of Surgical Pathology, 40(2):244–252, 2016.
dc.relation.references Tayyar A Ozkan, Ahmet T Eruyar, Oguz O Cebeci, Omur Memik, Levent Ozcan, and Ibrahim Kuskonmaz. Interobserver variability in Gleason histological grading of prostate cancer. Scandinavian Journal of Urology, 50(6):420–424, 2016.
dc.relation.references Patricia Raciti, Jillian Sue, Rodrigo Ceballos, Ran Godrich, Jeremy D Kunz, Supriya Kapur, Victor Reuter, Leo Grady, Christopher Kanan, David S Klimstra, et al. Novel artificial intelligence system increases the detection of prostate cancer in whole slide images of core needle biopsies. Modern Pathology, 33(10):2058–2066, 2020.
dc.relation.references Wouter Bulten, Hans Pinckaers, Hester van Boven, Robert Vink, Thomas de Bel, Bram van Ginneken, Jeroen van der Laak, Christina Hulsbergen-van de Kaa, and Geert Litjens. Automated deep-learning system for Gleason grading of prostate cancer using biopsies: a diagnostic study. The Lancet Oncology, 21(2):233–241, 2020.
dc.relation.references Lars Egevad, T Granfors, L Karlberg, A Bergh, and Per Stattin. Prognostic value of the Gleason score in prostate cancer. BJU International, 89(6):538–542, 2002.
dc.relation.references Matthew R. Cooperberg, Jeanette M. Broering, and Peter R. Carroll. Time trends and local variation in primary treatment of localized prostate cancer. Journal of Clinical Oncology, 28(7):1117–1123, 2010. doi: 10.1200/JCO.2009.26.0133. URL https://doi.org/10.1200/JCO.2009.26.0133. PMID: 20124165.
dc.relation.references Jonathan I Epstein. An update of the Gleason grading system. The Journal of Urology, 183(2):433–440, 2010.
dc.relation.references Lars Egevad, Amar S Ahmad, Ferran Algaba, Daniel M Berney, Liliane Boccon-Gibod, Eva Compérat, Andrew J Evans, David Griffiths, Rainer Grobholz, Glen Kristiansen, Cord Langner, Antonio Lopez-Beltran, Rodolfo Montironi, Sue Moss, Pedro Oliveira, Ben Vainer, Murali Varma, and Philippe Camparo. Standardization of Gleason grading among 337 European pathologists. Histopathology, 62(2):247–256, 2013. doi: 10.1111/his.12008. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/his.12008.
dc.relation.references Kaustav Bera, Kurt A Schalper, David L Rimm, Vamsidhar Velcheti, and Anant Madabhushi. Artificial intelligence in digital pathology—new tools for diagnosis and precision oncology. Nature Reviews Clinical Oncology, 16(11):703–715, 2019.
dc.relation.references Anant Madabhushi, Michael D Feldman, and Patrick Leo. Deep-learning approaches for Gleason grading of prostate biopsies. The Lancet Oncology, 21(2):187–189, 2020.
dc.relation.references Eirini Arvaniti et al. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Scientific Reports, 8(1):12054, Aug 2018. ISSN 2045-2322. doi: 10.1038/s41598-018-30535-1. URL https://doi.org/10.1038/s41598-018-30535-1.
dc.relation.references Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering, 5(6):555–570, 2021.
dc.relation.references Fabio A. González, Raúl Ramos-Pollán, and Joseph A. Gallego-Mejia. Kernel density matrices for probabilistic deep learning, 2023.
dc.relation.references Mohamed Slaoui and Laurence Fiette. Histopathology procedures: from tissue sampling to histopathological evaluation. Drug Safety Evaluation: Methods and Protocols, pages 69–82, 2011.
dc.relation.references Geert Litjens, Clara I. Sánchez, Nadya Timofeeva, Meyke Hermsen, Iris Nagtegaal, Iringo Kovacs, Christina Hulsbergen van de Kaa, Peter Bult, Bram van Ginneken, and Jeroen van der Laak. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific Reports, 6(1):26286, May 2016. ISSN 2045-2322. doi: 10.1038/srep26286. URL https://doi.org/10.1038/srep26286.
dc.relation.references William C Allsbrook, Jr, Kathy A Mangold, Maribeth H Johnson, Roger B Lane, Cynthia G Lane, and Jonathan I Epstein. Interobserver reproducibility of Gleason grading of prostatic carcinoma: General pathologist. Hum. Pathol., 32(1):81–88, January 2001.
dc.relation.references William C Allsbrook, Jr, Kathy A Mangold, Maribeth H Johnson, Roger B Lane, Cynthia G Lane, Mahul B Amin, David G Bostwick, Peter A Humphrey, Edward C Jones, Victor E Reuter, Wael Sakr, Isabell A Sesterhenn, Patricia Troncoso, Thomas M Wheeler, and Jonathan I Epstein. Interobserver reproducibility of Gleason grading of prostatic carcinoma: Urologic pathologists. Hum. Pathol., 32(1):74–80, January 2001.
dc.relation.references Karolina Cyll, Elin Ersvær, Ljiljana Vlatkovic, Manohar Pradhan, Wanja Kildal, Marte Avranden Kjær, Andreas Kleppe, Tarjei S. Hveem, Birgitte Carlsen, Silje Gill, Sven Löffeler, Erik Skaaheim Haug, Håkon Wæhre, Prasanna Sooriakumaran, and Håvard E. Danielsen. Tumour heterogeneity poses a significant challenge to cancer biomarker research. British Journal of Cancer, 117(3):367–375, Jul 2017. ISSN 1532-1827. doi: 10.1038/bjc.2017.171. URL https://doi.org/10.1038/bjc.2017.171.
dc.relation.references Arpit Aggarwal, Sirvan Khalighi, Deepak Babu, Haojia Li, Sepideh Azarianpour-Esfahani, Germán Corredor, Pingfu Fu, Mojgan Mokhtari, Tilak Pathak, Elizabeth Thayer, Susan Modesitt, Haider Mahdi, Stefanie Avril, and Anant Madabhushi. Computational pathology identifies immune-mediated collagen disruption to predict clinical outcomes in gynecologic malignancies. Communications Medicine, 4(1):2, Jan 2024. ISSN 2730-664X. doi: 10.1038/s43856-023-00428-0. URL https://doi.org/10.1038/s43856-023-00428-0.
dc.relation.references Cristian Barrera, Germán Corredor, Vidya Sankar Viswanathan, Ruiwen Ding, Paula Toro, Pingfu Fu, Christina Buzzy, Cheng Lu, Priya Velu, Philipp Zens, et al. Deep computational image analysis of immune cell niches reveals treatment-specific outcome associations in lung cancer. NPJ Precision Oncology, 7(1):52, 2023.
dc.relation.references Jeroen Van der Laak, Geert Litjens, and Francesco Ciompi. Deep learning in histopathology: the path to the clinic. Nature Medicine, 27(5):775–784, 2021.
dc.relation.references Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88, 2017.
dc.relation.references George Lee, Robert W Veltri, Guangjing Zhu, Sahirzeeshan Ali, Jonathan I Epstein, and Anant Madabhushi. Nuclear shape and architecture in benign fields predict biochemical recurrence in prostate cancer patients following radical prostatectomy: preliminary findings. European Urology Focus, 3(4-5):457–466, 2017.
dc.relation.references Cheng Lu, David Romo-Bucheli, Xiangxue Wang, Andrew Janowczyk, Shridar Ganesan, Hannah Gilmore, David Rimm, and Anant Madabhushi. Nuclear shape and orientation features from H&E images predict survival in early-stage estrogen receptor-positive breast cancers. Laboratory Investigation, 98(11):1438–1448, 2018.
dc.relation.references Germán Corredor, Xiangxue Wang, Yu Zhou, Cheng Lu, Pingfu Fu, Konstantinos Syrigos, David L Rimm, Michael Yang, Eduardo Romero, Kurt A Schalper, et al. Spatial architecture and arrangement of tumor-infiltrating lymphocytes for predicting likelihood of recurrence in early-stage non–small cell lung cancer. Clinical Cancer Research, 25(5):1526–1534, 2019.
dc.relation.references Andrew Janowczyk and Anant Madabhushi. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. Journal of Pathology Informatics, 7(1):29, 2016. ISSN 2153-3539. doi: 10.4103/2153-3539.186902.
dc.relation.references Maschenka C.A. Balkenhol, David Tellez, Willem Vreuls, Pieter C. Clahsen, Hans Pinckaers, Francesco Ciompi, Peter Bult, and Jeroen A.W.M. van der Laak. Deep learning assisted mitotic counting for breast cancer. Laboratory Investigation, 99(11):1596–1606, 2019. ISSN 0023-6837. doi: 10.1038/s41374-019-0275-0.
dc.relation.references Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie NC Shih, John Tomaszewski, Fabio A González, and Anant Madabhushi. Accurate and reproducible invasive breast cancer detection in whole-slide images: A deep learning approach for quantifying tumor extent. Scientific Reports, 7(1):46450, 2017.
dc.relation.references Haibo Wang, Angel Cruz-Roa, Ajay Basavanhally, Hannah Gilmore, Natalie Shih, Mike Feldman, John Tomaszewski, Fabio Gonzalez, and Anant Madabhushi. Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features. Journal of Medical Imaging, 1(3):034003, 2014.
dc.relation.references Yousef Al-Kofahi, Wiem Lassoued, William Lee, and Badrinath Roysam. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Transactions on Biomedical Engineering, 57(4):841–852, 2009.
dc.relation.references Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442, 2017.
dc.relation.references George Lee, Rachel Sparks, Sahirzeeshan Ali, Natalie NC Shih, Michael D Feldman, Elaine Spangler, Timothy Rebbeck, John E Tomaszewski, and Anant Madabhushi. Co-occurring gland angularity in localized subgraphs: predicting biochemical recurrence in intermediate-risk prostate cancer patients. PLoS ONE, 9(5):e97954, 2014.
dc.relation.references Xiangxue Wang, Andrew Janowczyk, Yu Zhou, Rajat Thawani, Pingfu Fu, Kurt Schalper, Vamsidhar Velcheti, and Anant Madabhushi. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Scientific Reports, 7(1):13543, 2017.
dc.relation.references Frederick M Howard, James Dolezal, Sara Kochanny, Galina Khramtsova, Jasmine Vickery, Andrew Srisuwananukorn, Anna Woodard, Nan Chen, Rita Nanda, Charles M Perou, et al. Integration of clinical features and deep learning on pathology for the prediction of breast cancer recurrence assays and risk of recurrence. NPJ Breast Cancer, 9(1):25, 2023.
dc.relation.references Kimmo Kartasalo, Wouter Bulten, Brett Delahunt, Po-Hsuan Cameron Chen, Hans Pinckaers, Henrik Olsson, Xiaoyi Ji, Nita Mulliqi, Hemamali Samaratunga, Toyonori Tsuzuki, et al. Artificial intelligence for diagnosis and Gleason grading of prostate cancer in biopsies—current status and next steps. European Urology Focus, 7(4):687–691, 2021.
dc.relation.references Rose S. George et al. Artificial intelligence in prostate cancer: Definitions, current research, and future directions. Urologic Oncology: Seminars and Original Investigations, 40(6):262–270, 2022. ISSN 1078-1439. doi: 10.1016/j.urolonc.2022.03.003.
dc.relation.references Marit Lucas et al. Deep learning for automatic Gleason pattern classification for grade group determination of prostate biopsies. Virchows Archiv, 475(1):77–83, Jul 2019. ISSN 1432-2307. doi: 10.1007/s00428-019-02577-x.
dc.relation.references Peter Ström et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study. Lancet Oncol, 21(2):222–232, January 2020.
dc.relation.references Gabriele Campanella et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature Medicine, 25(8):1301–1309, Aug 2019. ISSN 1546-170X. doi: 10.1038/s41591-019-0508-1. URL https://doi.org/10.1038/s41591-019-0508-1.
dc.relation.references Gabriele Campanella, Vitor Werneck Krauss Silva, and Thomas J Fuchs. Terabyte-scale deep multiple instance learning for classification and localization in pathology. arXiv preprint arXiv:1805.06983, 2018.
dc.relation.references Hans Pinckaers, Wouter Bulten, Jeroen van der Laak, and Geert Litjens. Detection of prostate cancer in whole-slide images through end-to-end training with image-level labels. IEEE Trans Med Imaging, 40(7):1817–1826, June 2021.
dc.relation.references Kunal Nagpal et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer. npj Digital Medicine, 2(1):48, Jun 2019. ISSN 2398-6352. doi: 10.1038/s41746-019-0112-2.
dc.relation.references Geert Litjens, Peter Bandi, Babak Ehteshami Bejnordi, Oscar Geessink, Maschenka Balkenhol, Peter Bult, Altuna Halilovic, Meyke Hermsen, Rob van de Loo, Rob Vogels, Quirine F Manson, Nikolas Stathonikos, Alexi Baidoshvili, Paul van Diest, Carla Wauters, Marcory van Dijk, and Jeroen van der Laak. 1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset. GigaScience, 7(6), 05 2018. ISSN 2047-217X. doi: 10.1093/gigascience/giy065.
dc.relation.references Péter Bándi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, Quanzheng Li, Farhad Ghazvinian Zanjani, Svitlana Zinger, Keisuke Fukuta, Daisuke Komura, Vlado Ovtcharov, Shenghua Cheng, Shaoqun Zeng, Jeppe Thagaard, Anders B. Dahl, Huangjing Lin, Hao Chen, Ludwig Jacobsson, Martin Hedlund, Melih Çetin, Eren Halıcı, Hunter Jackson, Richard Chen, Fabian Both, Jörg Franke, Heidi Küsters-Vandevelde, Willem Vreuls, Peter Bult, Bram van Ginneken, Jeroen van der Laak, and Geert Litjens. From detection of individual metastases to classification of lymph node status at the patient level: The CAMELYON17 challenge. IEEE Transactions on Medical Imaging, 38(2):550–560, 2019. doi: 10.1109/TMI.2018.2867350.
dc.relation.references Richard J Chen, Tong Ding, Ming Y Lu, Drew FK Williamson, Guillaume Jaume, Bowen Chen, Andrew Zhang, Daniel Shao, Andrew H Song, Muhammad Shaban, et al. A general-purpose self-supervised model for computational pathology. arXiv preprint arXiv:2308.15474, 2023.
dc.relation.references Amitojdeep Singh, Sourya Sengupta, and Vasudevan Lakshminarayanan. Explainable deep learning models in medical image analysis. Journal of Imaging, 6(6):52, 2020.
dc.relation.references Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: Deep learning for interpretable image recognition. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
dc.relation.references Shancheng Jiang, Huichuan Li, and Zhi Jin. A visually interpretable deep learning framework for histopathological image-based skin cancer diagnosis. IEEE Journal of Biomedical and Health Informatics, 25(5):1483–1494, 2021.
dc.relation.references Jie Hao, Sai Chandra Kosaraju, Nelson Zange Tsaku, Dae Hyun Song, and Mingon Kang. PAGE-Net: interpretable and integrative deep learning for survival analysis using histopathological images and genomic data. In Pacific Symposium on Biocomputing 2020, pages 355–366. World Scientific, 2019.
dc.relation.references Guangli Li, Chuanxiu Li, Guangting Wu, Donghong Ji, and Hongbin Zhang. Multi-view attention-guided multiple instance detection network for interpretable breast cancer histopathological image diagnosis. IEEE Access, 9:79671–79684, 2021.
dc.relation.references Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, and Eric Granger. Deep interpretable classification and weakly-supervised segmentation of histology images via max-min uncertainty. IEEE Transactions on Medical Imaging, 41(3):702–714, 2021.
dc.relation.references Angel Alfonso Cruz-Roa, John Edison Arevalo Ovalle, Anant Madabhushi, and Fabio Augusto González Osorio. A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2013: 16th International Conference, Nagoya, Japan, September 22-26, 2013, Proceedings, Part II 16, pages 403–410. Springer, 2013.
dc.relation.references Gang Xu, Zhigang Song, Zhuo Sun, Calvin Ku, Zhe Yang, Cancheng Liu, Shuhao Wang, Jianpeng Ma, and Wei Xu. CAMEL: A weakly supervised learning framework for histopathology image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
dc.relation.references Zohaib Salahuddin, Henry C. Woodruff, Avishek Chatterjee, and Philippe Lambin. Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Computers in Biology and Medicine, 140:105111, 2022. ISSN 0010-4825. doi: 10.1016/j.compbiomed.2021.105111.
dc.relation.references Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.
dc.relation.references Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, et al. TransMIL: Transformer based correlated multiple instance learning for whole slide image classification. Advances in Neural Information Processing Systems, 34:2136–2147, 2021.
dc.relation.references Syed Ashar Javed, Dinkar Juyal, Harshith Padigela, Amaro Taylor-Weiner, Limin Yu, and Aaditya Prakash. Additive MIL: intrinsically interpretable multiple instance learning for pathology. Advances in Neural Information Processing Systems, 35:20689–20702, 2022.
dc.relation.references Santiago Toledo-Cortés, Diego H. Useche, Henning Müller, and Fabio A. González. Grading diabetic retinopathy and prostate cancer diagnostic images with deep quantum ordinal regression. Computers in Biology and Medicine, 145:105472, 2022. ISSN 0010-4825. doi: 10.1016/j.compbiomed.2022.105472.
dc.relation.references Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2010.
dc.relation.references Fabio A. González, Raúl Ramos-Pollán, and Joseph A. Gallego-Mejia. Quantum kernel mixtures for probabilistic deep learning, 2023.
dc.relation.references Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. CoRR, abs/2201.03545, 2022.
dc.relation.references Mingxing Tan and Quoc V. Le. EfficientNetV2: Smaller models and faster training, 2021.
dc.relation.references Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, and Cho-Jui Hsieh. DynamicViT: Efficient vision transformers with dynamic token sparsification. CoRR, abs/2106.02034, 2021.
dc.relation.references Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
dc.relation.references Hemamali Samaratunga, Lars Egevad, John Yaxley, Joanna Perry-Keene, Ian Le Fevre, James Kench, Admire Matsika, David Bostwick, Kenneth Iczkowski, and Brett Delahunt. Gleason score 3+3=6 prostatic adenocarcinoma is not benign and the current debate is unhelpful to clinicians and patients. Pathology, 2023. ISSN 0031-3025. doi: 10.1016/j.pathol.2023.10.005.
dc.relation.references Shiwen Shen, Simon X Han, Denise R Aberle, Alex A Bui, and William Hsu. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. Expert Systems with Applications, 128:84–95, 2019. ISSN 0957-4174. doi: 10.1016/j.eswa.2019.01.048.
dc.relation.references Eunji Kim, Siwon Kim, Minji Seo, and Sungroh Yoon. XProtoNet: Diagnosis in chest radiography with global and local explanations. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15714–15723, 2021. doi: 10.1109/CVPR46437.2021.01546.
dc.relation.references Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018. ISBN 978-1-57735-800-8.
dc.relation.references Cher Bass, Mariana da Silva, Carole Sudre, Logan ZJ Williams, Petru-Daniel Tudosiu, Fidel Alfaro-Almagro, Sean P Fitzgibbon, Matthew F Glasser, Stephen M Smith, and Emma C Robinson. ICAM-reg: Interpretable classification and regression with feature attribution for mapping neurological phenotypes in individual scans. arXiv preprint arXiv:2103.02561, 2021.
dc.relation.references Christian F. Baumgartner, Lisa M. Koch, Kerem Can Tezcan, Jia Xi Ang, and Ender Konukoglu. Visual feature attribution using Wasserstein GANs. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8309–8319, 2018. doi: 10.1109/CVPR.2018.00867.
dc.relation.references Joona Pohjonen and Valeria Ariotta. HistoPrep: Preprocessing large medical images for machine learning made easy! https://github.com/jopo666/HistoPrep, 2022.
dc.rights.accessrights info:eu-repo/semantics/openAccess
dc.subject.decs Aprendizaje Profundo
dc.subject.decs Deep Learning
dc.subject.decs Neoplasias de la Próstata/diagnóstico por imagen
dc.subject.decs Prostatic Neoplasms/diagnostic imaging
dc.subject.decs Patología
dc.subject.decs Pathology
dc.subject.proposal Prostate cancer
dc.subject.proposal Histopathology
dc.subject.proposal Deep Learning
dc.subject.proposal Cancer grading
dc.subject.proposal Density matrix
dc.subject.proposal Interpretability
dc.subject.proposal Cáncer de próstata
dc.subject.proposal Histopatología
dc.subject.proposal Aprendizaje automático
dc.subject.proposal Gradación de cáncer
dc.subject.proposal Matriz de densidad
dc.subject.proposal Interpretabilidad
dc.title.translated Modelo de Deep Learning para la gradación automática de imágenes histopatológicas de cáncer de próstata
dc.type.coar http://purl.org/coar/resource_type/c_bdcc
dc.type.coarversion http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.content Text
dc.type.redcol http://purl.org/redcol/resource_type/TM
oaire.accessrights http://purl.org/coar/access_right/c_abf2
dcterms.audience.professionaldevelopment Investigadores
dcterms.audience.professionaldevelopment Público general

