Show simple item record

dc.rights.license Reconocimiento 4.0 Internacional
dc.contributor.advisor Gómez-Mendoza, Juan Bernardo
dc.contributor.author Vargas López, Julián David
dc.date.accessioned 2022-06-13T20:56:36Z
dc.date.available 2022-06-13T20:56:36Z
dc.date.issued 2022
dc.identifier.uri https://repositorio.unal.edu.co/handle/unal/81576
dc.description gráficos, ilustraciones, tablas.
dc.description.abstract El aprendizaje profundo ha tenido un impacto notable en el análisis de imágenes médicas. Desde clasificar tejidos hasta localizar áreas anormales en una región, herramientas como las redes neuronales convolucionales (CNNs) y sus múltiples arquitecturas han mostrado resultados prometedores en esta área de la medicina. En patología digital, estos modelos neuronales se están convirtiendo cada vez más en una herramienta vital en el apoyo diagnóstico y pronóstico para los patólogos. Actualmente, múltiples instituciones médicas utilizan CNNs en sus laboratorios para optimizar el tiempo de búsqueda de regiones anormales en imágenes médicas digitales - como lo son las muestras de biopsias -, generando automáticamente información relevante en el diagnóstico y pronóstico de un paciente. Su aplicabilidad se ha logrado en gran medida gracias a la existencia de habilitadores tecnológicos, como hardware especializado (p. ej., procesadores gráficos o GPUs), que permiten manipular y procesar grandes cantidades de datos de manera simultánea. Sin embargo, en algunos casos las GPUs no pueden procesar las imágenes debido a su tamaño. Las imágenes histopatológicas son un ejemplo de este tipo de imágenes: su tamaño puede ser del orden de hasta 25.000 x 30.000 píxeles. Se han diseñado estrategias que permiten manipular este tipo de imágenes, desde optimizar la forma de entrenar las CNNs hasta dividir la imagen en parches con un tamaño manejable. Sin embargo, analizar la biopsia, elegir las áreas de interés y crear las etiquetas correspondientes son procesos que se realizan de forma manual y resultan dispendiosos para el especialista. Por lo tanto, es necesario desarrollar nuevas estrategias para apoyar al patólogo en estas tareas. En este documento se plantean tres metodologías que permiten apoyar al patólogo en el análisis de imágenes histopatológicas de tejido prostático. El primer diseño emplea transformaciones de color que proporcionan información adicional sobre la imagen. Se mostró cómo estas técnicas mejoran y resaltan las estructuras presentes en el tejido (se logra una mejor definición de los núcleos, aumenta el contraste en el estroma y las células epiteliales, etc.). Estas transformaciones de color tienen la ventaja de que su implementación no genera un costo computacional considerable, permitiendo manipular la imagen de forma rápida, incluso en ordenadores que no posean un hardware especializado. El segundo diseño analiza el proceso de segmentación de una imagen con redes neuronales convolucionales. Se expuso el problema que se genera cuando se trata de clasificar estas imágenes dividiéndolas en pequeños parches, en donde el tiempo de segmentación por imagen puede llegar a las 24 horas o más. En consecuencia, se diseña una estrategia para mitigar este problema empleando un porcentaje de los píxeles de la imagen para segmentarla. Esta técnica permite disminuir el tiempo de segmentación a solo 5 minutos por imagen. Además, se logró demostrar experimentalmente que la información que se pierde a medida que se disminuye el porcentaje de píxeles es muy pequeña (cerca del 5%), en comparación con el proceso en donde se emplean todos los píxeles de la imagen. Finalmente, nuestro tercer diseño consiste en crear una metodología que permite localizar las áreas sospechosas en imágenes de cáncer de próstata utilizando redes neuronales convolucionales.
Empleando los resultados de la etapa anterior, se diseña una red neuronal convolucional que posee una cantidad pequeña de parámetros de entrenamiento (cerca de 50 mil). Esta red realiza dos tareas distintas: segmentar el estroma y segmentar el tejido sospechoso. Uniendo estos dos resultados y descartando los píxeles que pertenecen al estroma segmentado, se logra localizar zonas sospechosas en imágenes de tejido prostático. Adicionalmente, esta red se diseñó pensando en el costo computacional que generan algunas redes en el estado del arte, y en el sobredimensionamiento del problema que puede surgir al emplear dichas redes. (Texto tomado de la fuente)
dc.description.abstract Deep learning has had a noticeable impact on medical image analysis. From classifying tissues to locating abnormal areas in a region, CNNs and their multiple architectures have shown promising results in this area of medicine. In digital pathology, these neural models are increasingly becoming a vital tool in diagnostic and prognostic support for pathologists. Currently, multiple medical institutions use CNNs in their laboratories to optimize the search time for abnormal regions of a complete tissue slide (biopsy sample), automatically generating information relevant to a patient's diagnosis and prognosis. This success of CNNs was achieved mainly by using specialized hardware (GPUs) that allows large amounts of data to be manipulated and processed. However, analyzing the biopsy, choosing the areas of interest and creating the corresponding labels are processes that are carried out manually and are costly for the specialist. Therefore, it is necessary to develop new strategies to support the pathologist in these tasks. In this document, three methodologies are proposed that allow the pathologist to be supported in the analysis of histopathological images of prostate tissue. The first design employs color transformations that provide additional information about the image. It was shown how these techniques improve and highlight the structures present in the tissue (the shape of the nuclei was better defined, the contrast increased in the stroma and epithelial cells, etc.). These color transformations have the advantage that their implementation does not generate a considerable computational cost, allowing the image to be manipulated quickly, even on computers that do not have specialized hardware. The second design analyzes the segmentation process of an image with convolutional neural networks. The problem generated when we try to classify these images by dividing them into small patches, where the segmentation time per image can reach 24 hours or more, was exposed. Consequently, we designed a strategy to mitigate this problem by using a percentage of pixels in the image to segment it. This technique allowed the segmentation time to be reduced to only 5 minutes per image. In addition, we were able to demonstrate experimentally that the information lost as we decrease the percentage of pixels is very small (about 5%), compared to the process where all the pixels of the image are used. Finally, our third design creates a methodology that locates suspicious areas in prostate cancer images using convolutional neural networks. Using the previous stage results, we design a convolutional neural network with a small number of training parameters (about 50 thousand). This network performs two distinct tasks: segmenting the stroma and segmenting the suspect tissue. Combining these two results and discarding the pixels that belong to the segmented stroma, it is possible to locate suspicious areas in images of prostate tissue. Additionally, this network was designed considering the computational cost generated by some networks in the state of the art and the over-sizing of the problem that can arise when using these networks.
dc.format.extent lxvi páginas
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Universidad Nacional de Colombia
dc.rights.uri http://creativecommons.org/licenses/by/4.0/
dc.subject.ddc 000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación
dc.title Aplicación de técnicas de preprocesamiento y segmentación de imágenes para el apoyo diagnóstico en la detección de cáncer de próstata
dc.type Trabajo de grado - Maestría
dc.type.driver info:eu-repo/semantics/masterThesis
dc.type.version info:eu-repo/semantics/acceptedVersion
dc.publisher.program Manizales - Ingeniería y Arquitectura - Maestría en Ingeniería - Automatización Industrial
dc.contributor.researchgroup Soft and Hard Applied Computing (SHAC)
dc.description.degreelevel Maestría
dc.description.degreename Magíster en Ingeniería - Automatización Industrial
dc.description.researcharea Industrial Automation
dc.identifier.instname Universidad Nacional de Colombia
dc.identifier.reponame Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl https://repositorio.unal.edu.co/
dc.publisher.department Departamento de Ingeniería Eléctrica y Electrónica
dc.publisher.faculty Facultad de Ingeniería y Arquitectura
dc.publisher.place Manizales, Colombia
dc.publisher.branch Universidad Nacional de Colombia - Sede Manizales
dc.relation.references Justin Streicher, Brian Lee Meyerson, Vidhya Karivedu, and Abhinav Sidana. A review of optimal prostate biopsy: indications and techniques. Therapeutic Advances in Urology, 11:175628721987007, 2019.
dc.relation.references Eric Swanson and W. Dean Wallace. Histopathology Methods and Protocols. Methods in Molecular Biology, 1180:283-291, 2014.
dc.relation.references Ni Chen and Qiao Zhou. The evolving Gleason grading system. Chinese Journal of Cancer Research, 28(1):58-64, 2016.
dc.relation.references John Murtagh. Gleason score.
dc.relation.references James A. Diao, Richard J. Chen, and Joseph C. Kvedar. Efficient cellular annotation of histopathology slides with real-time AI augmentation. npj Digital Medicine, 4(1):1-2, 2021.
dc.relation.references Shivang Naik, Anant Madabhushi, John Tomaszeweski, and Michael D. Feldman. A quantitative exploration of efficacy of gland morphology in prostate cancer grading. Proceedings of the IEEE Annual Northeast Bioengineering Conference, NEBEC, 08854:58-59, 2007.
dc.relation.references Scott Doyle, Mark Hwang, Kinsuk Shah, Anant Madabhushi, Michael Feldman, and John Tomaszeweski. Automated grading of prostate cancer using architectural and textural image features. 2007 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro - Proceedings, pages 1284-1287, 2007.
dc.relation.references Shivang Naik, Scott Doyle, Anant Madabhushi, John Tomaszewski, and Michael Feldman. Automated nuclear and gland segmentation and Gleason grading of prostate histology by integrating low-, high-level and domain specific information. In: Special Workshop on Computational Histopathology (CHIP), in conjunction with the 5th IEEE International Symposium on Biomedical Imaging, pages 1-8, 2008.
dc.relation.references Kien Nguyen, Anil K. Jain, and Ronald L. Allen. Automated gland segmentation and classification for Gleason grading of prostate tissue images. In 2010 20th International Conference on Pattern Recognition, pages 1497-1500, 2010.
dc.relation.references Kien Nguyen, Anindya Sarkar, and Anil K. Jain. Prostate cancer grading: Use of graph cut and spatial arrangement of nuclei. IEEE Transactions on Medical Imaging, 33(12):2254-2270, 2014.
dc.relation.references Kien Nguyen, Anil K. Jain, and Ronald L. Allen. Automated gland segmentation and classification for Gleason grading of prostate tissue images. Proceedings - International Conference on Pattern Recognition, pages 1497-1500, 2010.
dc.relation.references Tian Xia, Yizhou Yu, and Jing Hua. Automatic detection of malignant prostatic gland units in cross-sectional microscopic images. In Proceedings of the 2010 IEEE 17th International Conference on Image Processing (ICIP), Hong Kong, pages 1057-1060, 2010.
dc.relation.references Hadi Rezaeilouyeh, Mohammad H. Mahoor, Francisco G. La Rosa, and Jun Jason Zhang. Prostate cancer detection and Gleason grading of histological images using shearlet transform. Conference Record - Asilomar Conference on Signals, Systems and Computers, pages 268-272, 2013.
dc.relation.references Ali Tabesh, Mikhail Teverovskiy, Ho Yuen Pang, Vinay P. Kumar, David Verbel, Angeliki Kotsianti, and Olivier Saidi. Multifeature prostate cancer diagnosis and Gleason grading of histological images. IEEE Transactions on Medical Imaging, 26(10):1366-1378, 2007.
dc.relation.references Ali Tabesh, Vinay P. Kumar, Ho-Yuen Pang, David Verbel, Angeliki Kotsianti, Mikhail Teverovskiy, and Olivier Saidi. Automated prostate cancer diagnosis and Gleason grading of tissue microarrays. Medical Imaging 2005: Image Processing, 5747:58, 2005.
dc.relation.references Siyamalan Manivannan, Wenqi Li, Jianguo Zhang, Emanuele Trucco, and Stephen J. McKenna. Structure Prediction for Gland Segmentation with Hand-Crafted and Deep Convolutional Features. IEEE Transactions on Medical Imaging, 37(1):210-221, 2018.
dc.relation.references Yan Xu, Yang Li, Yipei Wang, Mingyuan Liu, Yubo Fan, Maode Lai, and Eric I-Chao Chang. Gland Instance Segmentation Using Deep Multichannel Neural Networks. IEEE Transactions on Biomedical Engineering, 64(12):2901-2912, 2017.
dc.relation.references Philipp Kainz, Michael Pfeiffer, and Martin Urschler. Semantic Segmentation of Colon Glands with Deep Convolutional Neural Networks and Total Variation Segmentation. pages 1-15, 2015.
dc.relation.references Safiyeh Rezaei, Ali Emami, Nader Karimi, and Shadrokh Samavi. Gland Segmentation in Histopathological Images by Deep Neural Network. 2020 25th International Computer Conference, Computer Society of Iran, CSICC 2020, pages 1-5, 2020.
dc.relation.references Christophe Avenel, Anna Tolf, Anca Dragomir, and Ingrid B. Carlbom. Glandular Segmentation of Prostate Cancer: An Illustration of How the Choice of Histopathological Stain Is One Key to Success for Computational Pathology. Frontiers in Bioengineering and Biotechnology, 7(July):1-11, 2019.
dc.relation.references Hao Chen, Xiaojuan Qi, Lequan Yu, and Pheng Ann Heng. DCAN: Deep Contour-Aware Networks for Accurate Gland Segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-December:2487-2496, 2016.
dc.relation.references Yuanping Zhou, Changqin Shi, Bingyan Lai, and Giorgos Jimenez. Contrast enhancement of medical images using a new version of the World Cup Optimization algorithm. Quantitative Imaging in Medicine and Surgery, 9(9):1528-1547, 2019.
dc.relation.references Yawu Li, Ning Li, Xiang Yu, Kai Huang, Ting Zheng, Xiaofeng Cheng, Shaoqun Zeng, and Xiuli Liu. Hematoxylin and eosin staining of intact tissues via delipidation and ultrasound. Scientific Reports, 8(1):1-8, 2018.
dc.relation.references Krishnan Nallaperumal, Muthukumar Subramanyam, Ravi Subban, Pasupathi Perumalsamy, Shashikala Durairaj, S. Gayathri Devi, and S. Selva Kumar. An analysis of suitable color space for visually plausible shadow-free scene reconstruction from single image. 2013 IEEE International Conference on Computational Intelligence and Computing Research, IEEE ICCIC 2013, (October 2014), 2013.
dc.relation.references Arash Abadpour. Color Image Processing Using Principal Component Analysis. (July):697, 2005.
dc.relation.references Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. Pearson Education, United States, 2008.
dc.relation.references Nicholas McCarthy, Padraig Cunningham, and Gillian O'Hurley. The contribution of morphological features in the classification of prostate carcinoma in digital pathology images. Proceedings - International Conference on Pattern Recognition, pages 3269-3273, 2014.
dc.relation.references Shivang Naik, Scott Doyle, Anant Madabhushi, John Tomaszewski, and Michael Feldman. Automated nuclear and gland segmentation and Gleason grading of prostate histology by integrating low-, high-level and domain specific information. In: Special Workshop on Computational Histopathology (CHIP), in conjunction with the 5th IEEE International Symposium on Biomedical Imaging, pages 1-8, 2008.
dc.relation.references Michaela Weingant, Hayley M. Reynolds, Annette Haworth, Catherine Mitchell, Scott Williams, and Matthew D. DiFranco. Ensemble prostate tumor classification in H&E whole slide imaging via stain normalization and cell density estimation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9352(October):280-287, 2015.
dc.relation.references Patel Janakkumar Baldevbhai. Color Image Segmentation for Medical Images using L*a*b* Color Space. IOSR Journal of Electronics and Communication Engineering, 1(2):24-45, 2012.
dc.relation.references J. P. Gasparri, A. Bouchet, G. Abras, V. Ballarin, and J. I. Pastore. Medical image segmentation using the HSI color space and Fuzzy Mathematical Morphology. Journal of Physics: Conference Series, 332(1), 2011.
dc.relation.references Wurood A. Jbara and Rafah A. Jaafar. MRI Medical Images Enhancement based on Histogram Equalization and Adaptive Histogram Equalization. International Journal of Computer Trends and Technology, 50(2):91-93, 2017.
dc.relation.references Gurleen Singh and Sukhpreet Kaur. Combination of Brightness Preserving Bi-Histogram Equalization and Discrete Wavelet Transform using LUV Color Space for Image Enhancement. International Journal of Computer Applications, 148(13):26-30, 2016.
dc.relation.references Saul McLeod. Likert scale examples and how to analyze data from a Likert scale. Simply Psychology, pages 1-3, 2008.
dc.relation.references William K. Pratt. Digital Image Processing: PIKS Inside, volume 5. 2007.
dc.relation.references Laith Alzubaidi, Jinglan Zhang, Amjad J. Humaidi, Ayad Al-Dujaili, Ye Duan, Omran Al-Shamma, J. Santamaría, Mohammed A. Fadhel, Muthana Al-Amidie, and Laith Farhan. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions, volume 8. Springer International Publishing, 2021.
dc.relation.references Guifang Lin and Wei Shen. Research on convolutional neural network based on improved ReLU piecewise activation function. Procedia Computer Science, 131:977-984, 2018.
dc.relation.references Yu Han Liu. Feature Extraction and Image Recognition with Convolutional Neural Networks. Journal of Physics: Conference Series, 1087(6), 2018.
dc.relation.references Iqbal H. Sarker. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Computer Science, 2(3):1-21, 2021.
dc.relation.references Foroogh Sharifzadeh, Gholamreza Akbarizadeh, and Yousef Seifi Kavian. Ship Classification in SAR Images Using a New Hybrid CNN-MLP Classifier. Journal of the Indian Society of Remote Sensing, 47(4):551-562, 2019.
dc.relation.references Guangyu Jia, Hak Keung Lam, and Yujia Xu. Classification of COVID-19 chest X-Ray and CT images using a type of dynamic CNN modification method. Computers in Biology and Medicine, 134(April):104425, 2021.
dc.relation.references Dimpy Varshni, Kartik Thakral, Lucky Agarwal, Rahul Nijhawan, and Ankush Mittal. Pneumonia Detection Using CNN based Feature Extraction. Proceedings of 2019 3rd IEEE International Conference on Electrical, Computer and Communication Technologies, ICECCT 2019, 2019.
dc.relation.references Connor Shorten and Taghi M. Khoshgoftaar. A survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6(1), 2019.
dc.relation.references Lukasz Raczkowski, Marcin Możejko, Joanna Zambonelli, and Ewa Szczurek. ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning. Scientific Reports, 9(1):1-12, 2019.
dc.relation.references Shereen Fouad, David Randell, Antony Galton, Hisham Mehanna, and Gabriel Landini. Epithelium and stroma identification in histopathological images using unsupervised and semi-supervised superpixel-based segmentation. Journal of Imaging, 3(4):1-18, 2017.
dc.relation.references Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. 2017.
dc.relation.references Julián David Vargas López, Nicolás Toro-García, Juan Bernardo Gomez Mendoza, Paula Andrea Toro-Castaño, Rafael Pava-Marín, and Álex Enrique Pava-Ripoll. Histopathology color image processing in prostate carcinoma. page 16, 2020.
dc.relation.references Yash Sharma, Lubaina Ehsan, Sana Syed, and Donald E. Brown. HistoTransfer: Understanding Transfer Learning for Histopathology. pages 1-4, 2021.
dc.relation.references Eirini Arvaniti, Kim S. Fricker, Michael Moret, Niels Rupp, Thomas Hermanns, Christian Fankhauser, Norbert Wey, Peter J. Wild, Jan H. Rüschoff, and Manfred Claassen. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Scientific Reports, 8(1):1-11, 2018.
dc.relation.references Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work? pages 5231-5247, 2020.
dc.relation.references Zabit Hameed, Sofia Zahia, Begonya Garcia-Zapirain, José Javier Aguirre, and Ana María Vanegas. Breast cancer histopathology image classification using an ensemble of deep learning models. Sensors (Switzerland), 20(16):1-17, 2020.
dc.relation.references Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pages 1-14, 2015.
dc.relation.references Chiranjibi Sitaula and Sunil Aryal. Fusion of whole and part features for the classification of histopathological image of breast tissue. Health Information Science and Systems, 8(1):1-12, 2020.
dc.relation.references Wenchao Han, Carol Johnson, Mena Gaed, José A. Gómez, Madeleine Moussa, Joseph L. Chin, Stephen Pautler, Glenn S. Bauman, and Aaron D. Ward. Histologic tissue components provide major cues for machine learning-based prostate cancer detection and grading on prostatectomy specimens. Scientific Reports, 10(1):1-12, 2020.
dc.relation.references Shidan Wang, Alyssa Chen, Lin Yang, Ling Cai, Yang Xie, Junya Fujimoto, Adi Gazdar, and Guanghua Xiao. Comprehensive analysis of lung cancer pathology images to discover tumor shape and boundary features that predict survival outcome. Scientific Reports, 8(1):1-9, 2018.
dc.relation.references Weiwei Shi, Yihong Gong, Xiaoyu Tao, Jinjun Wang, and Nanning Zheng. Improving CNN performance accuracies with min-max objective. IEEE Transactions on Neural Networks and Learning Systems, 29(7):2872-2885, 2018.
dc.relation.references Pedro Porto Buarque de Gusmão, Gianluca Francini, Skjalg Lepsøy, and Enrico Magli. Fast Training of Convolutional Neural Networks via Kernel Rescaling. pages 1-13, 2016.
dc.relation.references Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, and Yida Wang. Optimizing CNN model inference on CPUs. Proceedings of the 2019 USENIX Annual Technical Conference, USENIX ATC 2019, pages 1025-1039, 2019.
dc.relation.references Samuel S. Ogden and Tian Guo. Characterizing the Deep Neural Networks Inference Performance of Mobile Applications. 2019.
dc.relation.references Marc Macenko, Marc Niethammer, J. S. Marron, David Borland, John T. Woosley, Xiaojun Guan, Charles Schmitt, and Nancy E. Thomas. A method for normalizing histology slides for quantitative analysis. Proceedings - 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2009, pages 1107-1110, 2009.
dc.rights.accessrights info:eu-repo/semantics/openAccess
dc.subject.lemb Diagnóstico por imágenes -- Innovaciones tecnológicas
dc.subject.proposal Tejido de próstata
dc.subject.proposal Redes neuronales convolucionales
dc.subject.proposal Optimización
dc.subject.proposal Segmentación
dc.subject.proposal Balance de blancos
dc.subject.proposal Prostate tissue
dc.subject.proposal Convolutional neural networks
dc.subject.proposal Optimization
dc.subject.proposal Segmentation
dc.subject.proposal White balance technique
dc.title.translated Application of image preprocessing and segmentation techniques for diagnostic support in the detection of prostate cancer.
dc.type.coar http://purl.org/coar/resource_type/c_bdcc
dc.type.coarversion http://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.content Image
dc.type.content Text
oaire.accessrights http://purl.org/coar/access_right/c_abf2
dcterms.audience.professionaldevelopment Bibliotecarios
dcterms.audience.professionaldevelopment Estudiantes
dcterms.audience.professionaldevelopment Investigadores
dcterms.audience.professionaldevelopment Público general
dc.description.curriculararea Eléctrica, Electrónica, Automatización y Telecomunicaciones
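
Illustrative sketches (not taken from the thesis). The abstract above describes three stages: low-cost color transformations that highlight tissue structures, patch-based segmentation run on only a fraction of the pixels, and a small two-task CNN whose stroma mask is discarded from the suspicious-tissue mask. The two Python sketches below show, under stated assumptions, how the first two ideas could look in code; the libraries (scikit-image, SciPy, NumPy) and every file, function, and model name are hypothetical choices made for the example, not the author's implementation.

```python
# Sketch 1: color-space transforms on an H&E patch. The abstract notes these are
# cheap enough to run without specialized hardware. The input file name is hypothetical.
from skimage import io, color, exposure

patch = io.imread("he_patch.png")[..., :3] / 255.0   # RGB patch scaled to [0, 1]

lab = color.rgb2lab(patch)    # L*a*b*: the a* channel tends to separate eosin-stained stroma
hsv = color.rgb2hsv(patch)    # HSV: the saturation channel often emphasizes nuclei

# Contrast-limited adaptive histogram equalization on the luminance channel only,
# then back to RGB, as one example of enhancing structure at low computational cost.
lab_eq = lab.copy()
lab_eq[..., 0] = exposure.equalize_adapthist(lab[..., 0] / 100.0) * 100.0
enhanced = color.lab2rgb(lab_eq)
```

```python
# Sketch 2: segmenting a large image from a percentage of its pixels. A patch
# classifier is evaluated only at randomly sampled positions and every remaining
# pixel takes the label of its nearest classified neighbour. `model` is a
# hypothetical patch classifier with a Keras-style predict(); sizes are examples.
import numpy as np
from scipy.ndimage import distance_transform_edt

def subsampled_segmentation(image, model, fraction=0.05, patch=32, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    half = patch // 2
    labels = np.full((h, w), -1, dtype=int)          # -1 = not yet classified

    # Classify patches centred on a random subset of interior pixels.
    n = int(fraction * h * w)
    ys = rng.integers(half, h - half, size=n)
    xs = rng.integers(half, w - half, size=n)
    crops = np.stack([image[y - half:y + half, x - half:x + half]
                      for y, x in zip(ys, xs)])
    labels[ys, xs] = model.predict(crops).argmax(axis=-1)

    # Propagate each missing label from the nearest classified pixel.
    _, idx = distance_transform_edt(labels < 0, return_indices=True)
    return labels[idx[0], idx[1]]
```

In these sketch terms, the third stage described in the abstract reduces to masking: keep the pixels labelled as suspect tissue by one task and drop those labelled as stroma by the other, which yields the suspicious regions of the prostate tissue image.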

