Method for the segmentation of brain magnetic resonance images using a neural network architecture based on attention models

dc.contributor.advisorSanchez Torres, German
dc.contributor.advisorBranch Bedoya, John Willian
dc.contributor.authorLaiton Bonadiez, Camilo Andres
dc.contributor.researchgroupGidia: Grupo de Investigación y Desarrollo en Inteligencia Artificialspa
dc.date.accessioned2022-10-19T15:23:04Z
dc.date.available2022-10-19T15:23:04Z
dc.date.issued2022-07-31
dc.descriptionilustraciones, diagramas, tablasspa
dc.description.abstractIn recent years, the use of deep learning-based models for developing advanced healthcare systems has been growing due to the results they can achieve. However, most of the proposed deep learning models rely largely on convolutional and pooling operations, causing a loss of valuable data and a focus on local information. In this thesis, we propose a deep learning-based approach that uses both the global and the local features that matter in the medical image segmentation process. To train the architecture, we extracted three-dimensional (3D) blocks from the full-resolution magnetic resonance images and sent them through a set of successive convolutional neural network (CNN) layers free of pooling operations to extract local information. We then fed the resulting feature maps to successive layers of self-attention modules to obtain the global context, whose output was dispatched to a decoder pipeline composed mostly of upsampling layers. The model was trained on the Mindboggle-101 dataset. The experimental results showed that the self-attention modules allow segmentation with a higher Mean Dice Score (0.90 ± 0.036) than other UNet-based approaches, with an average segmentation time of approximately 0.032 s per brain structure. The proposed model tackles the brain structure segmentation task properly: exploiting the global context that the self-attention modules incorporate allows for more precise and faster segmentation. We segmented 37 brain structures, which, to the best of our knowledge, is the largest number of structures addressed with a 3D approach using attention mechanisms.eng
dc.description.abstractEn los últimos años, el uso de modelos basados en aprendizaje profundo para el desarrollo de sistemas de salud avanzados ha ido en aumento debido a los excelentes resultados que pueden alcanzar. Sin embargo, la mayoría de los modelos de aprendizaje profundo propuestos utilizan, en gran medida, operaciones convolucionales y de pooling, lo que provoca una pérdida de datos valiosos al centrarse principalmente en la información local. En esta tesis, proponemos un enfoque basado en el aprendizaje profundo que utiliza características globales y locales que son importantes en el proceso de segmentación de imágenes médicas. Para entrenar la arquitectura, utilizamos bloques tridimensionales (3D) extraídos de la resolución completa de la imagen de resonancia magnética. Estos se enviaron a través de un conjunto de capas sucesivas de redes neuronales convolucionales (CNN) libres de operaciones de pooling para extraer información local. Luego, enviamos los mapas de características resultantes a capas sucesivas de módulos de autoatención para obtener el contexto global, cuya salida se envió más tarde a la canalización del decodificador compuesta principalmente por capas de upsampling. El modelo fue entrenado usando el conjunto de datos Mindboggle-101. Los resultados experimentales mostraron que los módulos de autoatención permiten la segmentación con un Mean Dice Score de 0,90 ± 0,036, mayor en comparación con otros enfoques basados en UNet. El tiempo medio de segmentación fue de aproximadamente 0,032 s por estructura cerebral. El modelo propuesto permite abordar adecuadamente la tarea de segmentación de estructuras cerebrales. Así mismo, permite aprovechar el contexto global que incorporan los módulos de autoatención logrando una segmentación más precisa y rápida. En este trabajo segmentamos 37 estructuras cerebrales y, según nuestro conocimiento, es el mayor número de estructuras bajo un enfoque 3D utilizando mecanismos de atención.
(Texto tomado de la fuente)spa
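The abstract describes feature maps from the pooling-free convolutional encoder being passed through self-attention modules to capture global context. As a rough illustration only (not the thesis's implementation; the function name, single head, identity projections, and shapes are all assumptions made for brevity), scaled dot-product self-attention over a flattened set of feature vectors can be sketched as:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention with identity
    query/key/value projections (illustrative simplification)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarities between all positions
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax: attention weights
    return weights @ x                              # every output mixes information from all positions

# Hypothetical encoder output, flattened to (positions, channels)
feats = np.random.default_rng(0).normal(size=(8, 16))
out = self_attention(feats)
print(out.shape)  # (8, 16)
```

Because every output row is a weighted combination of all input rows, each position sees the whole input at once, which is the global-context property the thesis contrasts with the local receptive fields of convolutions.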
dc.description.curricularareaÁrea Curricular de Ingeniería de Sistemas e Informáticaspa
dc.description.degreelevelMaestríaspa
dc.description.degreenameMagíster en Ingeniería - Ingeniería de Sistemasspa
dc.description.methodsLa metodología constó de cuatro fases. La primera se concentró en determinar el conjunto de imágenes de resonancia magnética que formaron parte del conjunto de datos y en seleccionar el conjunto de estructuras anatómicas a segmentar. La segunda fase estuvo dirigida a definir el conjunto de técnicas a emplear en el preprocesamiento de las imágenes de resonancia magnética definidas. La tercera fase se concentró en diseñar la arquitectura de red neuronal para la segmentación de las estructuras anatómicas cerebrales y, por último, la cuarta fase estuvo dirigida a evaluar y comparar el modelo de red neuronal implementado con aquellos existentes en el estado del arte.spa
dc.description.researchareaVisión artificial y aprendizaje automáticospa
dc.format.extentxi, 54 páginasspa
dc.format.mimetypeapplication/pdfspa
dc.identifier.instnameUniversidad Nacional de Colombiaspa
dc.identifier.reponameRepositorio Institucional Universidad Nacional de Colombiaspa
dc.identifier.repourlhttps://repositorio.unal.edu.co/spa
dc.identifier.urihttps://repositorio.unal.edu.co/handle/unal/82378
dc.language.isoengspa
dc.publisherUniversidad Nacional de Colombiaspa
dc.publisher.branchUniversidad Nacional de Colombia - Sede Medellínspa
dc.publisher.departmentDepartamento de la Computación y la Decisiónspa
dc.publisher.facultyFacultad de Minasspa
dc.publisher.placeMedellín, Colombiaspa
dc.publisher.programMedellín - Minas - Maestría en Ingeniería - Ingeniería de Sistemasspa
dc.relation.referencesM. M. Miller-Thomas and T. L. Benzinger, “Neurologic applications of pet/mr imaging,” Magnetic Resonance Imaging Clinics of North America, vol. 25, no. 2, pp. 297–313, 2017. Hybrid PET/MR Imaging.spa
dc.relation.referencesW. Mier and D. Mier, “Advantages in functional imaging of the brain,” Frontiers in Human Neuroscience, vol. 9, 2015.spa
dc.relation.referencesH. Neeb, K. Zilles, and N. J. Shah, “Fully-automated detection of cerebral water content changes: Study of age- and gender-related h2o patterns with quantitative mri,” NeuroImage, vol. 29, no. 3, pp. 910–922, 2006.spa
dc.relation.referencesM. E. Shenton, C. C. Dickey, M. Frumin, and R. W. McCarley, “A review of mri findings in schizophrenia,” Schizophrenia Research, vol. 49, no. 1, pp. 1–52, 2001.spa
dc.relation.referencesG. Widmann, B. Henninger, C. Kremser, and W. Jaschke, “Sequences in head & neck radiology – state of the art mri,” 2017.spa
dc.relation.referencesH. Yu, L. T. Yang, Q. Zhang, D. Armstrong, and M. J. Deen, “Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives,” Neurocomputing, vol. 444, pp. 92–110, 2021.spa
dc.relation.referencesX. Xie, J. Niu, X. Liu, Z. Chen, S. Tang, and S. Yu, “A survey on incorporating domain knowledge into deep learning for medical image analysis,” Medical Image Analysis, vol. 69, p. 101985, 2021.spa
dc.relation.referencesU. Ilhan and A. Ilhan, “Brain tumor segmentation based on a new threshold approach,” Procedia Computer Science, vol. 120, pp. 580–587, 2017. 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception, ICSCCW 2017, 22-23 August 2017, Budapest, Hungary.spa
dc.relation.referencesW. Deng, W. Xiao, H. Deng, and J. Liu, “Mri brain tumor segmentation with region growing method based on the gradients and variances along and inside of the boundary curve,” in 2010 3rd International Conference on Biomedical Engineering and Informatics, vol. 1, pp. 393–396, 2010.spa
dc.relation.referencesJ. Ashburner and K. J. Friston, “Unified segmentation,” NeuroImage, vol. 26, no. 3, pp. 839–851, 2005.spa
dc.relation.referencesJ. Liu and L. Guo, “A new brain mri image segmentation strategy based on k-means clustering and svm,” in 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 2, pp. 270–273, 2015.spa
dc.relation.referencesX. Zhao and X.-M. Zhao, “Deep learning of brain magnetic resonance images: A brief review,” Methods, vol. 192, pp. 131–140, 2021. Deep networks and network representation in bioinformatics.spa
dc.relation.referencesW. T. Le, F. Maleki, F. P. Romero, R. Forghani, and S. Kadoury, “Overview of machine learning: Part 2: Deep learning for medical image analysis,” Neuroimaging Clinics of North America, vol. 30, no. 4, pp. 417–431, 2020. Machine Learning and Other Artificial Intelligence Applications.spa
dc.relation.referencesX. Liu, H. Wang, Z. Li, and L. Qin, “Deep learning in ecg diagnosis: A review,” Knowledge-Based Systems, vol. 227, p. 107187, 2021.spa
dc.relation.referencesA. Nogales, Álvaro J. García-Tejedor, D. Monge, J. S. Vara, and C. Antón, “A survey of deep learning models in medical therapeutic areas,” Artificial Intelligence in Medicine, vol. 112, p. 102020, 2021.spa
dc.relation.referencesJ. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen, “Recent advances in convolutional neural networks,” Pattern Recognition, vol. 77, pp. 354–377, 2018.spa
dc.relation.referencesL. Alzubaidi, J. Zhang, A. J. Humaidi, A. Al-Dujaili, Y. Duan, O. Al-Shamma, J. Santamaría, M. A. Fadhel, M. Al-Amidie, and L. Farhan, “Review of deep learning: concepts, cnn architectures, challenges, applications, future directions,” Journal of Big Data, vol. 8, p. 53, Mar 2021.spa
dc.relation.referencesM. Kwabena Patrick, A. Felix Adekoya, A. Abra Mighty, and B. Y. Edward, “Capsule networks – a survey,” Journal of King Saud University - Computer and Information Sciences, vol. 34, no. 1, pp. 1295–1310, 2022.spa
dc.relation.referencesS. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, (Red Hook, NY, USA), p. 3859–3869, Curran Associates Inc., 2017.spa
dc.relation.referencesA. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” 2017.spa
dc.relation.referencesA. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” 2020.spa
dc.relation.referencesJ. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou, “Transunet: Transformers make strong encoders for medical image segmentation,” 2021.spa
dc.relation.referencesB. Fischl, D. H. Salat, E. Busa, M. Albert, M. Dieterich, C. Haselgrove, A. van der Kouwe, R. Killiany, D. Kennedy, S. Klaveness, A. Montillo, N. Makris, B. Rosen, and A. M. Dale, “Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain,” Neuron, vol. 33, pp. 341–355, Jan. 2002.spa
dc.relation.referencesD. W. Shattuck and R. M. Leahy, “Brainsuite: An automated cortical surface identification tool,” Medical Image Analysis, vol. 6, no. 2, pp. 129–142, 2002.spa
dc.relation.referencesM. Jenkinson, C. F. Beckmann, T. E. Behrens, M. W. Woolrich, and S. M. Smith, “Fsl,” NeuroImage, vol. 62, no. 2, pp. 782–790, 2012. 20 YEARS OF fMRI.spa
dc.relation.referencesO. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for bio- medical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds.), (Cham), pp. 234–241, Springer International Publishing, 2015.spa
dc.relation.referencesI. Despotović, B. Goossens, and W. Philips, “Mri segmentation of the human brain: Challenges, methods, and applications,” Computational and Mathematical Methods in Medicine, vol. 2015, p. 450341, Mar 2015.spa
dc.relation.referencesP. A. Yushkevich, Y. Gao, and G. Gerig, “ITK-SNAP: An interactive tool for semi-automatic segmentation of multi-modality biomedical images,” Annu Int Conf IEEE Eng Med Biol Soc, vol. 2016, pp. 3342–3345, Aug. 2016.spa
dc.relation.referencesS. Pieper, M. Halle, and R. Kikinis, “3d slicer,” in 2004 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), pp. 632–635 Vol. 1, 2004.spa
dc.relation.referencesC. Qin, Y. Wu, W. Liao, J. Zeng, S. Liang, and X. Zhang, “Improved u-net3+ with stage residual for brain tumor segmentation,” BMC Medical Imaging, vol. 22, p. 14, Jan 2022.spa
dc.relation.referencesJ. Sun, Y. Peng, Y. Guo, and D. Li, “Segmentation of the multimodal brain tumor image used the multi-pathway architecture method based on 3d fcn,” Neurocomputing, vol. 423, pp. 34–45, 2021.spa
dc.relation.referencesW. Dai, B. Woo, S. Liu, M. Marques, F. Tang, S. Crozier, C. Engstrom, and S. Chandra, “Can3d: Fast 3d knee mri segmentation via compact context aggregation,” in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 1505–1508, 2021.spa
dc.relation.referencesT. S. Deepthi Murthy and G. Sadashivappa, “Brain tumor segmentation using thresholding, morphological operations and extraction of features of tumor,” in 2014 International Conference on Advances in Electronics Computers and Communications, pp. 1–6, 2014.spa
dc.relation.referencesW. Polakowski, D. Cournoyer, S. Rogers, M. DeSimio, D. Ruck, J. Hoffmeister, and R. Raines, “Computer-aided breast cancer detection and diagnosis of masses using difference of gaussians and derivative-based feature saliency,” IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 811–819, 1997.spa
dc.relation.referencesM. Wani and B. Batchelor, “Edge-region-based segmentation of range images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 3, pp. 314–319, 1994.spa
dc.relation.referencesJ. Wu, F. Ye, J.-L. Ma, X.-P. Sun, J. Xu, and Z.-M. Cui, “The segmentation and visualization of human organs based on adaptive region growing method,” in 2008 IEEE 8th International Conference on Computer and Information Technology Workshops, pp. 439–443, 2008.spa
dc.relation.referencesN. Passat, C. Ronse, J. Baruthio, J.-P. Armspach, C. Maillot, and C. Jahn, “Region- growing segmentation of brain vessels: An atlas-based automatic approach,” Journal of Magnetic Resonance Imaging, vol. 21, no. 6, pp. 715–725, 2005.spa
dc.relation.referencesP. Gibbs, D. L. Buckley, S. J. Blackband, and A. Horsman, “Tumour volume determination from MR images by morphological segmentation,” Physics in Medicine and Biology, vol. 41, pp. 2437–2446, Nov 1996.spa
dc.relation.referencesS. Pohlman, K. A. Powell, N. A. Obuchowski, W. A. Chilcote, and S. Grundfest-Broniatowski, “Quantitative classification of breast tumors in digitized mammograms,” Medical Physics, vol. 23, no. 8, pp. 1337–1345, 1996.spa
dc.relation.referencesE. A. A. Maksoud, M. Elmogy, and R. M. Al-Awadi, “Mri brain tumor segmentation system based on hybrid clustering techniques,” in Advanced Machine Learning Technologies and Applications (A. E. Hassanien, M. F. Tolba, and A. Taher Azar, eds.), (Cham), pp. 401–412, Springer International Publishing, 2014.spa
dc.relation.referencesX. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz-de Solorzano, “Combination strategies in multi-atlas image segmentation: Application to brain mr data,” IEEE Transactions on Medical Imaging, vol. 28, no. 8, pp. 1266–1277, 2009.spa
dc.relation.referencesP. Coupe, J. V. Manjon, V. Fonov, J. Pruessner, M. Robles, and D. L. Collins, “Patch-based segmentation using expert priors: Application to hippocampus and ventricle segmentation,” NeuroImage, vol. 54, no. 2, pp. 940–954, 2011.spa
dc.relation.referencesH. Wang, J. W. Suh, S. R. Das, J. B. Pluta, C. Craige, and P. A. Yushkevich, “Multi-atlas segmentation with joint label fusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 611–623, 2013.spa
dc.relation.referencesG. Wu, Q. Wang, D. Zhang, F. Nie, H. Huang, and D. Shen, “A generative probability model of joint label fusion for multi-atlas based brain segmentation,” Medical Image Analysis, vol. 18, no. 6, pp. 881–890, 2014. Sparse Methods for Signal Reconstruction and Medical Image Analysis.spa
dc.relation.referencesM. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1, pp. 321–331, Jan 1988.spa
dc.relation.referencesZ. Wu, Y. Guo, S. H. Park, Y. Gao, P. Dong, S.-W. Lee, and D. Shen, “Robust brain roi segmentation by deformation regression and deformable shape model,” Medical Image Analysis, vol. 43, pp. 198–213, 2018.spa
dc.relation.referencesA. Rajendran and R. Dhanasekaran, “Fuzzy clustering and deformable model for tumor segmentation on mri brain image: A combined approach,” Procedia Engineering, vol. 30, pp. 327–333, 2012. International Conference on Communication Technology and System Design 2011.spa
dc.relation.referencesH. Khotanlou, J. Atif, O. Colliot, and I. Bloch, “3d brain tumor segmentation using fuzzy classification and deformable models,” in Fuzzy Logic and Applications (I. Bloch, A. Petrosino, and A. G. B. Tettamanzi, eds.), (Berlin, Heidelberg), pp. 312–318, Springer Berlin Heidelberg, 2006.spa
dc.relation.referencesS. S. Tng, N. Q. K. Le, H.-Y. Yeh, and M. C. H. Chua, “Improved prediction model of protein lysine crotonylation sites using bidirectional recurrent neural networks,” Journal of Proteome Research, vol. 21, pp. 265–273, Jan 2022.spa
dc.relation.referencesN. Q. K. Le and Q.-T. Ho, “Deep transformers and convolutional neural network in identifying dna n6-methyladenine sites in cross-species genomes,” Methods, 2021.spa
dc.relation.referencesD. Bank, N. Koenigstein, and R. Giryes, “Autoencoders,” 2020.spa
dc.relation.referencesG. Montufar, “Restricted boltzmann machines: Introduction and review,” 2018.spa
dc.relation.referencesA. Sherstinsky, “Fundamentals of recurrent neural network (rnn) and long short-term memory (lstm) network,” Physica D: Nonlinear Phenomena, vol. 404, p. 132306, 2020.spa
dc.relation.referencesY. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, vol. 1, pp. 541–551, 12 1989.spa
dc.relation.referencesK. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, pp. 193–202, Apr 1980.spa
dc.relation.referencesD. Nie, L. Wang, Y. Gao, and D. Shen, “Fully convolutional networks for multi-modality isointense infant brain image segmentation,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 1342–1345, 2016.spa
dc.relation.referencesS. Bao and A. C. S. Chung, “Multi-scale structured cnn with label consistency for brain mr image segmentation,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 6, no. 1, pp. 113–117, 2018.spa
dc.relation.referencesL. Henschel, S. Conjeti, S. Estrada, K. Diers, B. Fischl, and M. Reuter, “Fastsurfer - a fast and accurate deep learning based neuroimaging pipeline,” NeuroImage, vol. 219, p. 117012, 2020.spa
dc.relation.referencesT. Brosch, L. Y. W. Tang, Y. Yoo, D. K. B. Li, A. Traboulsee, and R. Tam, “Deep 3d convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1229–1239, 2016.spa
dc.relation.referencesS. Valverde, M. Cabezas, E. Roura, S. Gonzalez-Villa, D. Pareto, J. C. Vilanova, L. Ramio-Torrenta, A. Rovira, A. Oliver, and X. Lladó, “Improving automated multiple sclerosis lesion segmentation with a cascaded 3d convolutional neural network approach,” NeuroImage, vol. 155, pp. 159–168, 2017.spa
dc.relation.referencesR. E. Gabr, I. Coronado, M. Robinson, S. J. Sujit, S. Datta, X. Sun, W. J. Allen, F. D. Lublin, J. S. Wolinsky, and P. A. Narayana, “Brain and lesion segmentation in multiple sclerosis using fully convolutional neural networks: A large-scale study,” Multiple Sclerosis Journal, vol. 26, no. 10, pp. 1217–1226, 2020. PMID: 31190607.spa
dc.relation.referencesM. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin, and H. Larochelle, “Brain tumor segmentation with deep neural networks,” Medical Image Analysis, vol. 35, pp. 18–31, 2017.spa
dc.relation.referencesM. Havaei, F. Dutil, C. Pal, H. Larochelle, and P.-M. Jodoin, “A convolutional neural network approach to brain tumor segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (A. Crimi, B. Menze, O. Maier, M. Reyes, and H. Handels, eds.), (Cham), pp. 195–208, Springer International Publishing, 2016.spa
dc.relation.referencesL. Chen, P. Bentley, and D. Rueckert, “Fully automatic acute ischemic lesion segmentation in dwi using convolutional neural networks,” NeuroImage: Clinical, vol. 15, pp. 633–643, 2017.spa
dc.relation.referencesZ. Akkus, I. Ali, J. Sedlar, T. L. Kline, J. P. Agrawal, I. F. Parney, C. Giannini, and B. J. Erickson, “Predicting 1p19q chromosomal deletion of low-grade gliomas from mr images using deep learning,” 2016.spa
dc.relation.referencesP. Kumar, P. Nagar, C. Arora, and A. Gupta, “U-segnet: Fully convolutional neural network based automated brain tissue segmentation tool,” in 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 3503–3507, 2018.spa
dc.relation.referencesZ. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: Redesigning skip connections to exploit multiscale features in image segmentation,” IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 1856–1867, 2020.spa
dc.relation.referencesN. Ibtehaz and M. S. Rahman, “Multiresunet : Rethinking the u-net architecture for multimodal biomedical image segmentation,” Neural Networks, vol. 121, pp. 74–87, 2020.spa
dc.relation.referencesH. Salehinejad, S. Sankar, J. Barfett, E. Colak, and S. Valaee, “Recent advances in recurrent neural networks,” 2018.spa
dc.relation.referencesJ. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” 2018.spa
dc.relation.referencesS. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. Xiang, P. H. S. Torr, and L. Zhang, “Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers,” 2020.spa
dc.relation.referencesJ. M. J. Valanarasu, P. Oza, I. Hacihaliloglu, and V. M. Patel, “Medical transformer: Gated axial-attention for medical image segmentation,” 2021.spa
dc.relation.referencesA. Klein and J. Tourville, “101 labeled brain images and a consistent human cortical labeling protocol,” Frontiers in Neuroscience, vol. 6, 2012.spa
dc.relation.referencesH. Z. U. Rehman, H. Hwang, and S. Lee, “Conventional and deep learning methods for skull stripping in brain mri,” Applied Sciences, vol. 10, no. 5, 2020.spa
dc.relation.referencesG. Fein, B. Landman, H. Tran, J. Barakos, K. Moon, V. Di Sclafani, and R. Shumway, “Statistical parametric mapping of brain morphology: Sensitivity is dramatically increased by using brain-extracted images as inputs,” NeuroImage, vol. 30, no. 4, pp. 1187–1195, 2006.spa
dc.relation.referencesJ. Acosta-Cabronero, G. B. Williams, J. M. Pereira, G. Pengas, and P. J. Nestor, “The impact of skull-stripping and radio-frequency bias correction on grey-matter segmentation for voxel-based morphometry,” NeuroImage, vol. 39, no. 4, pp. 1654–1665, 2008.spa
dc.relation.referencesP. A. Taylor, G. Chen, D. R. Glen, J. K. Rajendra, R. C. Reynolds, and R. W. Cox, “Fmri processing with afni: Some comments and corrections on “exploring the impact of analysis software on task fmri results”,” bioRxiv, 2018.spa
dc.relation.referencesB. B. Avants, N. J. Tustison, G. Song, P. A. Cook, A. Klein, and J. C. Gee, “A reproducible evaluation of ants similarity metric performance in brain image registration,” NeuroImage, vol. 54, pp. 2033–2044, Feb 2011. 20851191[pmid].spa
dc.relation.referencesD. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M. Leahy, “Magnetic resonance image tissue classification using a partial volume model,” Neuroimage, vol. 13, pp. 856–876, May 2001.spa
dc.relation.referencesS. M. Smith, “Fast robust automated brain extraction,” Human Brain Mapping, vol. 17, no. 3, pp. 143–155, 2002.spa
dc.relation.referencesB. Puccio, J. P. Pooley, J. S. Pellman, E. C. Taverna, and R. C. Craddock, “The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data,” GigaScience, vol. 5, 10 2016. s13742-016-0150-5.spa
dc.relation.referencesM. Yi-de, L. Qing, and Q. Zhi-bai, “Automated image segmentation using improved pcnn model based on cross-entropy,” in Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, 2004., pp. 743–746, 2004.spa
dc.relation.referencesS. Jadon, “A survey of loss functions for semantic segmentation,” in 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), IEEE, oct 2020.spa
dc.relation.referencesT. Sugino, T. Kawase, S. Onogi, T. Kin, N. Saito, and Y. Nakajima, “Loss weightings for improving imbalanced brain structure segmentation using fully convolutional networks,” Healthcare, vol. 9, no. 8, 2021.spa
dc.relation.referencesC. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. Jorge Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (M. J. Cardoso, T. Arbel, G. Carneiro, T. Syeda-Mahmood, J. M. R. Tavares, M. Moradi, A. Bradley, H. Greenspan, J. P. Papa, A. Madabhushi, J. C. Nascimento, J. S. Cardoso, V. Belagiannis, and Z. Lu, eds.), (Cham), pp. 240–248, Springer International Publishing, 2017.spa
dc.relation.referencesT.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” 2017.spa
dc.relation.referencesE. Castro, J. S. Cardoso, and J. C. Pereira, “Elastic deformations for data augmentation in breast cancer mass detection,” in 2018 IEEE EMBS International Conference on Biomedical Health Informatics (BHI), pp. 230–234, 2018.spa
dc.relation.referencesG. Valvano, N. Martini, A. Leo, G. Santini, D. Della Latta, E. Ricciardi, and D. Chiappino, “Training of a skull-stripping neural network with efficient data augmentation,” 2018.spa
dc.relation.referencesA. L. Maas, “Rectifier nonlinearities improve neural network acoustic models,” 2013.spa
dc.relation.referencesK. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” 2015.spa
dc.relation.referencesC. Wachinger, M. Reuter, and T. Klein, “Deepnat: Deep convolutional neural network for segmenting neuroanatomy,” NeuroImage, vol. 170, pp. 434–445, 2018. Segmenting the Brain.spa
dc.relation.referencesA. Guha Roy, S. Conjeti, N. Navab, and C. Wachinger, “Quicknat: A fully convolutional network for quick and accurate segmentation of neuroanatomy,” NeuroImage, vol. 186, pp. 713–727, 2019.spa
dc.rights.accessrightsinfo:eu-repo/semantics/openAccessspa
dc.rights.licenseAtribución-NoComercial-SinDerivadas 4.0 Internacionalspa
dc.rights.urihttp://creativecommons.org/licenses/by-nc/4.0/spa
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::003 - Sistemasspa
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadoresspa
dc.subject.ddc000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computaciónspa
dc.subject.ddc610 - Medicina y salud::611 - Anatomía humana, citología, histologíaspa
dc.subject.lembMagnetic resonance imagingeng
dc.subject.lembResonancia magnética en imágenesspa
dc.subject.lembDiagnóstico por imágenesspa
dc.subject.lembDiagnostic imagingeng
dc.subject.proposalMedical image segmentationeng
dc.subject.proposalDeep learningeng
dc.subject.proposalTransformerseng
dc.subject.proposalConvolutional neural networkseng
dc.subject.proposalBrain structureseng
dc.subject.proposalSegmentación de imágenes médicasspa
dc.subject.proposalAprendizaje profundospa
dc.subject.proposalRedes neuronales convolucionalesspa
dc.subject.proposalTransformersspa
dc.subject.proposalEstructuras cerebralesspa
dc.titleMethod for the segmentation of brain magnetic resonance images using a neural network architecture based on attention modelseng
dc.title.translatedMétodo para la segmentación de imágenes de resonancia magnética cerebrales usando una arquitectura de red neuronal basada en modelos de atenciónspa
dc.typeTrabajo de grado - Maestríaspa
dc.type.coarhttp://purl.org/coar/resource_type/c_bdccspa
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aaspa
dc.type.contentTextspa
dc.type.driverinfo:eu-repo/semantics/masterThesisspa
dc.type.redcolhttp://purl.org/redcol/resource_type/TMspa
dc.type.versioninfo:eu-repo/semantics/acceptedVersionspa
dcterms.audience.professionaldevelopmentEstudiantesspa
dcterms.audience.professionaldevelopmentInvestigadoresspa
dcterms.audience.professionaldevelopmentMaestrosspa
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2spa

Archivos

Bloque original
1016093055.2022.pdf — 13.46 MB, Adobe Portable Document Format. Descripción: Tesis de Maestría en Ingeniería - Ingeniería de Sistemas.

Bloque de licencias
license.txt — 5.74 KB, Item-specific license agreed upon to submission.