A deep learning approach for 3D reconstruction of indoor scenes
dc.contributor.advisor | Prieto Ortiz, Flavio Augusto | spa |
dc.contributor.author | Gómez, Diego | spa |
dc.date.accessioned | 2025-02-25T14:25:29Z | |
dc.date.available | 2025-02-25T14:25:29Z | |
dc.date.issued | 2024 | |
dc.description | ilustraciones, diagramas, fotografías, tablas | spa |
dc.description.abstract | La presente tesis de maestría expone el fundamento, la experimentación, los resultados y el análisis del entrenamiento y la evaluación de métodos de reconstrucción 3D implícita, específicamente Neural Radiance Fields (NeRF), mediante diferentes acercamientos para los conjuntos de datos, que se refieren a las imágenes originales y a técnicas de aumentación, con el propósito de establecer el impacto de la aumentación de imágenes en el rendimiento de los métodos NeRF y seleccionar los acercamientos más viables. Los conjuntos de datos originales fueron recolectados manualmente para seis escenas categorizadas en dos variantes, tres de objetos específicos y tres de vistas amplias, donde un proceso de aumentación con transformaciones de color y geométricas resultó en 18 conjuntos finales generados con el software COLMAP, el cual calculó las poses de las cámaras y los puntos clave de las escenas. Si bien se probó un acercamiento para aumentar imágenes con una red generativa antagónica dual, mediante una WGAN-GP para generar nuevas muestras y una SRGAN para incrementar la resolución, el resultado no fue satisfactorio dadas las inconsistencias en las vistas de las cámaras y las distorsiones en las escenas. A partir de una revisión de la literatura y teniendo en cuenta las limitaciones de hardware, se seleccionaron cuatro métodos NeRF (i.e. 3D Gaussian Splatting, Instant-NGP, Nerfacto, zip-NeRF) para el entrenamiento y la evaluación con los 18 conjuntos de datos, lo que resultó en 72 modelos y un tiempo total de más de 101 horas para ambos procesos. Según las métricas de evaluación y los resultados visuales, la aumentación de color mostró un incremento en los resultados con respecto a las imágenes originales, mientras que las transformaciones geométricas generaron el efecto contrario. Asimismo, mediante un extenso análisis y discusión, se llegó a la selección del acercamiento de aumentación de color y de 3D Gaussian Splatting como el método NeRF. 
El documento está dividido en seis capítulos, que contienen la introducción, explicación teórica de la reconstrucción 3D y la aumentación de imágenes, procesos de experimentación, resultados, análisis, conclusiones y posibles trabajos futuros (Texto tomado de la fuente). | spa |
dc.description.abstract | This master’s thesis presents the foundation, experimentation, results and analysis for the training and evaluation of implicit 3D reconstruction methods, specifically Neural Radiance Fields (NeRF), using different dataset approaches that refer to the original images and to augmentation techniques, with the purpose of identifying the impact of image augmentation on the performance of NeRF-based methods and selecting the most feasible approaches. Original image datasets were manually collected for six scenes and categorized into two types, three of specific objects and three of wide views, where an augmentation process with color and geometric transformations resulted in 18 final datasets generated with the COLMAP software, which calculated the camera poses and keypoints of the scenes. While a dual generative adversarial network (dual-GAN) approach was tested to augment images, with a WGAN-GP to generate new samples and an SRGAN to increase resolution, it turned out not to be a feasible alternative given the inconsistencies across camera views and the distortions in the scenes. Based on a literature review and taking into account hardware limitations, four NeRF-based methods were selected (i.e. 3D Gaussian Splatting, Instant-NGP, Nerfacto, zip-NeRF) for training and evaluation with the 18 datasets, which resulted in 72 models and a total time of more than 101 hours for both processes. According to the evaluation metrics and visual results, color augmentation improved results with respect to the original data, while geometric transformations had the opposite effect. Likewise, an extensive analysis and discussion led to the selection of the color augmentation approach for increasing image data, and of 3D Gaussian Splatting as the NeRF-based method. 
The document is divided into six chapters, which contain the introduction, the theoretical explanation of 3D reconstruction and image augmentation, the experimentation processes, results and analysis, conclusions and possible future work. | eng |
dc.description.degreelevel | Maestría | spa |
dc.description.degreename | Maestría en Ingeniería - Ingeniería de Sistemas y Computación | spa |
dc.description.researcharea | Sistemas inteligentes (Intelligent systems) | spa |
dc.format.extent | 130 páginas | spa |
dc.format.mimetype | application/pdf | spa |
dc.identifier.instname | Universidad Nacional de Colombia | spa |
dc.identifier.reponame | Repositorio Institucional Universidad Nacional de Colombia | spa |
dc.identifier.repourl | https://repositorio.unal.edu.co/ | spa |
dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/87552 | |
dc.language.iso | eng | spa |
dc.publisher | Universidad Nacional de Colombia | spa |
dc.publisher.branch | Universidad Nacional de Colombia - Sede Bogotá | spa |
dc.publisher.faculty | Facultad de Ingeniería | spa |
dc.publisher.place | Bogotá, Colombia | spa |
dc.publisher.program | Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación | spa |
dc.relation.references | Henrik Aanæs et al. “Large-Scale Data for Multiple-View Stereopsis”. In: International Journal of Computer Vision 120 (Nov. 2016), pp. 153–168. DOI: 10.1007/s11263-016-0902-9. URL: https://doi.org/10.1007/s11263-016-0902-9. | spa |
dc.relation.references | William A. Adkins and Mark G. Davidson. “The Laplace Transform”. In: Ordinary Differential Equations. New York, NY: Springer New York, 2012, pp. 101–202. ISBN: 978-1-4614-3618-8. DOI: 10.1007/978-1-4614-3618-8_2. URL: https://doi.org/10.1007/978-1-4614-3618-8_2. | spa |
dc.relation.references | Charu C. Aggarwal. “An Introduction to Neural Networks”. In: Neural Networks and Deep Learning: A Textbook. Cham: Springer International Publishing, 2018, pp. 1–52. ISBN: 978-3-319-94463-0. DOI: 10.1007/978-3-319-94463-0_1. URL: https://doi.org/10.1007/978-3-319-94463-0_1. | spa |
dc.relation.references | Mitko Aleksandrov, Sisi Zlatanova, and David J. Heslop. “Voxelisation Algorithms and Data Structures: A Review”. In: Sensors 21.24 (2021). ISSN: 1424-8220. DOI: 10.3390/s21248241. URL: https://www.mdpi.com/1424-8220/21/24/8241. | spa |
dc.relation.references | Laith Alzubaidi et al. “Review of deep learning: concepts, CNN architectures, challenges, applications, future directions”. In: Journal of Big Data 8 (2021). ISSN: 2196-1115. DOI: 10.1186/s40537-021-00444-8. URL: https://doi.org/10.1186/s40537-021-00444-8. | spa |
dc.relation.references | Martin Arjovsky, Soumith Chintala, and Léon Bottou. “Wasserstein Generative Adversarial Networks”. In: Proceedings of the 34th International Conference on Machine Learning. Vol. 70. Proceedings of Machine Learning Research. PMLR, 2017, pp. 214–223. URL: https://proceedings.mlr.press/v70/arjovsky17a.html. | spa |
dc.relation.references | Zhenyu Bao et al. 3D Reconstruction and New View Synthesis of Indoor Environments based on a Dual Neural Radiance Field. 2024. arXiv: 2401.14726 [cs.CV]. URL: https://arxiv.org/abs/2401.14726. | spa |
dc.relation.references | Jonathan T. Barron et al. “Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 2022, pp. 5470–5479. | spa |
dc.relation.references | Jonathan T. Barron et al. “Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields”. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV). 2021, pp. 5835–5844. DOI: 10.1109/ICCV48922.2021.00580. | spa |
dc.relation.references | Jonathan T. Barron et al. “Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields”. In: 2023 IEEE/CVF International Conference on Computer Vision (ICCV). 2023, pp. 19640–19648. DOI: 10.1109/ICCV51070.2023.01804. | spa |
dc.relation.references | Athanasios Bimpas et al. “Leveraging pervasive computing for ambient intelligence: A survey on recent advancements, applications and open challenges”. In: Computer Networks 239 (2024), p. 110156. ISSN: 1389-1286. DOI: https://doi.org/10.1016/j.comnet.2023.110156. URL: https://www.sciencedirect.com/science/article/pii/S1389128623006011. | spa |
dc.relation.references | Wilhelm Burger and Mark J. Burge. “Color Images”. In: Digital Image Processing: An Algorithmic Introduction. Cham: Springer International Publishing, 2022, pp. 375–423. DOI: 10.1007/978-3-031-05744-1_13. URL: https://doi.org/10.1007/978-3-031-05744-1_13. | spa |
dc.relation.references | Wilhelm Burger and Mark J. Burge. “Digital Images”. In: Digital Image Processing: An Algorithmic Introduction. Cham: Springer International Publishing, 2022, pp. 3–28. DOI: 10.1007/978-3-031-05744-1_1. URL: https://doi.org/10.1007/978-3-031-05744-1_1. | spa |
dc.relation.references | Wilhelm Burger and Mark J. Burge. “Geometric Operations”. In: Digital Image Processing: An Algorithmic Introduction. Cham: Springer International Publishing, 2022, pp. 601–637. DOI: 10.1007/978-3-031-05744-1_21. URL: https://doi.org/10.1007/978-3-031-05744-1_21. | spa |
dc.relation.references | Junyi Chai et al. “Deep learning in computer vision: A critical review of emerging techniques and application scenarios”. In: Machine Learning with Applications 6 (2021), p. 100134. ISSN: 2666-8270. DOI: https://doi.org/10.1016/j.mlwa.2021.100134. URL: https://www.sciencedirect.com/science/article/pii/S2666827021000670. | spa |
dc.relation.references | Angel X. Chang et al. ShapeNet: An Information-Rich 3D Model Repository. 2015. arXiv: 1512.03012 [cs.GR]. URL: https://arxiv.org/abs/1512.03012. | spa |
dc.relation.references | Dylan Callaghan Chapell et al. “NeRF-based 3D Reconstruction and Orthographic Novel View Synthesis Experiments Using City-Scale Aerial Images”. In: 2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). 2023, pp. 1–7. DOI: 10.1109/AIPR60534.2023.10440701. | spa |
dc.relation.references | Samarth Chopra et al. AgriNeRF: Neural Radiance Fields for Agriculture in Challenging Lighting Conditions. 2024. arXiv: 2409.15487 [cs.RO]. URL: https://arxiv.org/abs/2409.15487. | spa |
dc.relation.references | Christopher B. Choy et al. “3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction”. In: Computer Vision - ECCV 2016. Ed. by Bastian Leibe et al. Cham: Springer International Publishing, 2016, pp. 628–644. ISBN: 978-3-319-46484-8. DOI: 10.1007/978-3-319-46484-8_38. | spa |
dc.relation.references | Depeng Cui et al. “3D reconstruction of building structures incorporating neural radiation fields and geometric constraints”. In: Automation in Construction 165 (2024), p. 105517. ISSN: 0926-5805. DOI: https://doi.org/10.1016/j.autcon.2024.105517. URL: https://www.sciencedirect.com/science/article/pii/S092658052400253X. | spa |
dc.relation.references | Angela Dai et al. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. 2017. arXiv: 1702.04405 [cs.CV]. URL: https://arxiv.org/abs/1702.04405. | spa |
dc.relation.references | Daniel Graupe. Principles of Artificial Neural Networks. Vol. 3. Advanced Series in Circuits and Systems. World Scientific, 2013. ISBN: 9789814522731, 9789814522748, 9789814522755. | spa |
dc.relation.references | Shaveta Dargan et al. “A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning”. In: Archives of Computational Methods in Engineering 27 (4 Sept. 2020), pp. 1071–1092. ISSN: 1886-1784. DOI: 10.1007/s11831-019-09344-w. URL: https://doi.org/10.1007/s11831-019-09344-w. | spa |
dc.relation.references | Arcangelo Distante and Cosimo Distante. “Geometric Transformations”. In: Handbook of Image Processing and Computer Vision: Volume 2: From Image to Pattern. Cham: Springer International Publishing, 2020, pp. 149–208. DOI: 10.1007/978-3-030-42374-2_3. URL: https://doi.org/10.1007/978-3-030-42374-2_3. | spa |
dc.relation.references | Chao Dong et al. Image Super-Resolution Using Deep Convolutional Networks. 2015. arXiv: 1501.00092 [cs.CV]. URL: https://arxiv.org/abs/1501.00092. | spa |
dc.relation.references | Daniel Duckworth et al. SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration. 2024. arXiv: 2312.07541 [cs.CV]. URL: https://arxiv.org/abs/2312.07541. | spa |
dc.relation.references | George Fahim, Khalid Amin, and Sameh Zarif. “Single-View 3D reconstruction: A Survey of deep learning methods”. In: Computers and Graphics (Pergamon) 94 (Feb. 2021), pp. 164–190. ISSN: 00978493. DOI: 10.1016/J.CAG.2020.12.004. | spa |
dc.relation.references | Kui Fu, Jiansheng Peng, and Hanxiao Zhang. “Single image 3D object reconstruction based on deep learning: A review”. In: Multimedia Tools and Applications 80 (1 Jan. 2021), pp. 463–498. ISSN: 15737721. | spa |
dc.relation.references | Qiancheng Fu et al. “Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction”. In: Advances in Neural Information Processing Systems. Ed. by S. Koyejo et al. Vol. 35. Curran Associates, Inc., 2022, pp. 3403–3416. | spa |
dc.relation.references | Borko Furht, Esad Akar, and Whitney Angelica Andrews. “Introduction to Digital Imaging”. In: Digital Image Processing: Practical Approach. Cham: Springer International Publishing, 2018, pp. 1–5. ISBN: 978-3-319-96634-2. DOI: 10.1007/978-3-319-96634-2_1. URL: https://doi.org/10.1007/978-3-319-96634-2_1. | spa |
dc.relation.references | Kaifeng Gao et al. “Julia language in machine learning: Algorithms, applications, and open issues”. In: Computer Science Review 37 (2020), p. 100254. ISSN: 1574-0137. DOI: https://doi.org/10.1016/j.cosrev.2020.100254. URL: https://www.sciencedirect.com/science/article/pii/S157401372030071X. | spa |
dc.relation.references | Kyle Gao et al. NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review. 2023. arXiv: 2210.00379 [cs.CV]. URL: https://arxiv.org/abs/2210.00379. | spa |
dc.relation.references | Georgia Gkioxari, Jitendra Malik, and Justin Johnson. “Mesh R-CNN”. In: CoRR abs/1906.02739 (2019). arXiv: 1906.02739. URL: http://arxiv.org/abs/1906.02739. | spa |
dc.relation.references | Melissa J. Goertzen. “Introduction to Quantitative Research and Data”. In: Library Technology Reports 53.14 (2017), p. 12. ISSN: 0024-2586. | spa |
dc.relation.references | Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. http://www.deeplearningbook.org, last visited: 2024-10-20. MIT Press, 2016. | spa |
dc.relation.references | Ian Goodfellow et al. “Generative Adversarial Nets”. In: Advances in Neural Information Processing Systems. Ed. by Z. Ghahramani et al. Vol. 27. Curran Associates, Inc., 2014. | spa |
dc.relation.references | Jiaqi Gu et al. “Deep Generative Adversarial Networks for Thin-Section Infant MR Image Reconstruction”. In: IEEE Access 7 (2019), pp. 68290–68304. DOI: 10.1109/ACCESS.2019.2918926. | spa |
dc.relation.references | Ishaan Gulrajani et al. “Improved Training of Wasserstein GANs”. In: Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc., 2017. | spa |
dc.relation.references | Xian-Feng Han, Hamid Laga, and Mohammed Bennamoun. “Image-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era”. In: CoRR abs/1906.06543 (2019). arXiv: 1906.06543. URL: http://arxiv.org/abs/1906.06543. | spa |
dc.relation.references | Moiz Hassan, Kandasamy Illanko, and Xavier N. Fernando. “Single Image Super Resolution Using Deep Residual Learning”. In: AI 5.1 (2024), pp. 426–445. ISSN: 2673-2688. DOI: 10.3390/ai5010021. URL: https://www.mdpi.com/2673-2688/5/1/21. | spa |
dc.relation.references | K. He et al. “Deep Residual Learning for Image Recognition”. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2016, pp. 770–778. DOI: 10.1109/CVPR.2016.90. | spa |
dc.relation.references | Lei He et al. Neural Radiance Field in Autonomous Driving: A Survey. 2024. arXiv: 2404.13816 [cs.CV]. URL: https://arxiv.org/abs/2404.13816. | spa |
dc.relation.references | Shahd Hejazi, Michael Packianather, and Ying Liu. “A Novel approach using WGAN-GP and Conditional WGAN-GP for Generating Artificial Thermal Images of Induction Motor Faults”. In: Procedia Computer Science 225 (2023). 27th International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES 2023), pp. 3681–3691. ISSN: 1877-0509. DOI: https://doi.org/10.1016/j.procs.2023.10.363. URL: https://www.sciencedirect.com/science/article/pii/S1877050923015211. | spa |
dc.relation.references | Andrew G. Howard et al. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications”. In: CoRR abs/1704.04861 (2017). arXiv: 1704.04861. URL: http://arxiv.org/abs/1704.04861. | spa |
dc.relation.references | Kewei Hu et al. “High-fidelity 3D reconstruction of plants using Neural Radiance Fields”. In: Computers and Electronics in Agriculture 220 (2024), p. 108848. ISSN: 0168-1699. DOI: https://doi.org/10.1016/j.compag.2024.108848. URL: https://www.sciencedirect.com/science/article/pii/S0168169924002394. | spa |
dc.relation.references | Yueyu Hu et al. Low Latency Point Cloud Rendering with Learned Splatting. 2024. arXiv: 2409.16504 [cs.CV]. URL: https://arxiv.org/abs/2409.16504. | spa |
dc.relation.references | Zexu Huang et al. “Efficient neural implicit representation for 3D human reconstruction”. In: Pattern Recognition 156 (2024), p. 110758. ISSN: 0031-3203. DOI: https://doi.org/10.1016/j.patcog.2024.110758. URL: https://www.sciencedirect.com/science/article/pii/S0031320324005090. | spa |
dc.relation.references | Zhihao Jia, Bing Wang, and Changhao Chen. Drone-NeRF: Efficient NeRF Based 3D Scene Reconstruction for Large-Scale Drone Survey. 2023. arXiv: 2308.15733 [cs.CV]. URL: https://arxiv.org/abs/2308.15733. | spa |
dc.relation.references | Zhizhong Kang et al. “A Review of Techniques for 3D Reconstruction of Indoor Environments”. In: ISPRS International Journal of Geo-Information 9.5 (2020). ISSN: 2220-9964. DOI: 10.3390/ijgi9050330. URL: https://www.mdpi.com/2220-9964/9/5/330. | spa |
dc.relation.references | Bernhard Kerbl et al. “3D Gaussian Splatting for Real-Time Radiance Field Rendering”. In: ACM Trans. Graph. 42.4 (July 2023). ISSN: 0730-0301. DOI: 10.1145/3592433. URL: https://doi.org/10.1145/3592433. | spa |
dc.relation.references | Shakiba Kheradmand et al. Accelerating Neural Field Training via Soft Mining. 2023. arXiv: 2312.00075 [cs.CV]. URL: https://arxiv.org/abs/2312.00075. | spa |
dc.relation.references | Arno Knapitsch et al. “Tanks and temples: benchmarking large-scale scene reconstruction”. In: ACM Trans. Graph. 36.4 (July 2017). ISSN: 0730-0301. DOI: 10.1145/3072959.3073599. URL: https://doi.org/10.1145/3072959.3073599. | spa |
dc.relation.references | Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks”. In: Commun. ACM 60.6 (2017), pp. 84–90. ISSN: 0001-0782. DOI: 10.1145/3065386. URL: https://doi.org/10.1145/3065386. | spa |
dc.relation.references | Y. Lecun et al. “Gradient-based learning applied to document recognition”. In: Proceedings of the IEEE 86.11 (Nov. 1998), pp. 2278–2324. ISSN: 1558-2256. DOI: 10.1109/5.726791. | spa |
dc.relation.references | Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. “Deep learning”. In: Nature 521 (7553 2015), pp. 436–444. ISSN: 1476-4687. DOI: 10.1038/nature14539. URL: https://doi.org/10.1038/nature14539. | spa |
dc.relation.references | Christian Ledig et al. “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). July 2017. | spa |
dc.relation.references | Ke Li et al. “Interacting with Neural Radiance Fields in Immersive Virtual Reality”. In: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. CHI EA ’23. Hamburg, Germany: Association for Computing Machinery, 2023. ISBN: 9781450394222. DOI: 10.1145/3544549.3583920. URL: https://doi.org/10.1145/3544549.3583920. | spa |
dc.relation.references | Xi Li et al. “3D Shape Reconstruction of Furniture Object from a Single Real Indoor Image”. In: 2020 International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). IEEE, 2020, pp. 101–104. DOI: 10.1109/ICCWAMTIP51612.2020.9317479. | spa |
dc.relation.references | You Li et al. “OptiViewNeRF: Optimizing 3D reconstruction via batch view selection and scene uncertainty in Neural Radiance Fields”. In: International Journal of Applied Earth Observation and Geoinformation 136 (2025), p. 104306. ISSN: 1569-8432. DOI: https://doi.org/10.1016/j.jag.2024.104306. URL: https://www.sciencedirect.com/science/article/pii/S1569843224006642. | spa |
dc.relation.references | Zhaoshuo Li et al. “Neuralangelo: High-Fidelity Neural Surface Reconstruction”. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023, pp. 8456–8465. DOI: 10.1109/CVPR52729.2023.00817. | spa |
dc.relation.references | Zhaoji Lin, Yutao Huang, and Li Yao. “Three-Dimensional Reconstruction of Indoor Scenes Based on Implicit Neural Representation”. In: Journal of Imaging 10.9 (2024). ISSN: 2313-433X. DOI: 10.3390/jimaging10090231. URL: https://www.mdpi.com/2313-433X/10/9/231. | spa |
dc.relation.references | Jingwang Ling, Zhibo Wang, and Feng Xu. “ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 2023, pp. 175–185. | spa |
dc.relation.references | Chen Liu et al. “PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image”. In: CoRR abs/1804.06278 (2018). arXiv: 1804.06278. URL: http://arxiv.org/abs/1804.06278. | spa |
dc.relation.references | Chen Liu et al. “PlaneRCNN: 3D Plane Detection and Reconstruction from a Single Image”. In: CoRR abs/1812.04072 (2018). arXiv: 1812.04072. URL: http://arxiv.org/abs/1812.04072. | spa |
dc.relation.references | Xiao Liu et al. “Learning disentangled representations in the imaging domain”. In: Medical Image Analysis 80 (2022), p. 102516. ISSN: 1361-8415. DOI: https://doi.org/10.1016/j.media.2022.102516. URL: https://www.sciencedirect.com/science/article/pii/S1361841522001633. | spa |
dc.relation.references | William E. Lorensen and Harvey E. Cline. “Marching cubes: A high resolution 3D surface construction algorithm”. In: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’87. New York, NY, USA: Association for Computing Machinery, 1987, pp. 163–169. ISBN: 0897912276. DOI: 10.1145/37401.37422. URL: https://doi.org/10.1145/37401.37422. | spa |
dc.relation.references | Zhaoliang Lun et al. “3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks”. In: 2017 International Conference on 3D Vision (3DV). 2017, pp. 67–77. DOI: 10.1109/3DV.2017.00018. | spa |
dc.relation.references | Xiaoyang Lyu et al. 3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting. 2024. arXiv: 2404.00409 [cs.CV]. URL: https://arxiv.org/abs/2404.00409. | spa |
dc.relation.references | Priyanka Mandikal et al. 3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image. 2019. arXiv: 1807.07796 [cs.CV]. URL: https://arxiv.org/abs/1807.07796. | spa |
dc.relation.references | Bogdan Maxim and Sergiu Nedevschi. “A survey on the current state of the art on deep learning 3D reconstruction”. In: 2021 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP). 2021, pp. 283–290. DOI: 10.1109/ICCP53602.2021.9733639. | spa |
dc.relation.references | José Tomás Palma Méndez. Inteligencia artificial: Métodos, técnicas y aplicaciones. 1st ed. Madrid, España: McGraw-Hill Interamericana de España, 2008. ISBN: 9788448156183, 9788448629793. | spa |
dc.relation.references | Ben Mildenhall et al. “Local light field fusion: practical view synthesis with prescriptive sampling guidelines”. In: ACM Trans. Graph. 38.4 (July 2019). ISSN: 0730-0301. DOI: 10.1145/3306346.3322980. URL: https://doi.org/10.1145/3306346.3322980. | spa |
dc.relation.references | Ben Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: Computer Vision - ECCV 2020. Ed. by Andrea Vedaldi et al. Cham: Springer International Publishing, 2020, pp. 405–421. ISBN: 978-3-030-58452-8. | spa |
dc.relation.references | Thomas Müller et al. “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding”. In: ACM Transactions on Graphics 41.4 (July 2022). ISSN: 0730-0301. DOI: 10.1145/3528223.3530127. URL: https://doi.org/10.1145/3528223.3530127. | spa |
dc.relation.references | Charlie Nash et al. PolyGen: An Autoregressive Generative Model of 3D Meshes. 2020. arXiv: 2002.10880 [cs.GR]. URL: https://arxiv.org/abs/2002.10880. | spa |
dc.relation.references | Yinyu Nie et al. “Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image”. In: CoRR abs/2002.12212 (2020). arXiv: 2002.12212. URL: https://arxiv.org/abs/2002.12212. | spa |
dc.relation.references | Koundinya Nouduri et al. “Deep realistic novel view generation for city-scale aerial images”. In: Proceedings - International Conference on Pattern Recognition (2020), pp. 10561–10567. ISSN: 10514651. DOI: 10.1109/ICPR48806.2021.9412844. | spa |
dc.relation.references | Yingxue Pang et al. Image-to-Image Translation: Methods and Applications. 2021. arXiv: 2101.08629 [cs.CV]. URL: https://arxiv.org/abs/2101.08629. | spa |
dc.relation.references | AKM Shahariar Azad Rabby and Chengcui Zhang. BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields. 2024. arXiv: 2306.03000 [cs.CV]. URL: https://arxiv.org/abs/2306.03000. | spa |
dc.relation.references | Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 2016. arXiv: 1511.06434 [cs.LG]. URL: https://arxiv.org/abs/1511.06434. | spa |
dc.relation.references | Carlos Alberto Ramos. “Los paradigmas de la investigación científica”. In: Avances en Psicología 23.1 (2015), pp. 9–17. DOI: 10.33539/avpsicol.2015.v23n1.167. URL: https://revistas.unife.edu.pe/index.php/avancesenpsicologia/article/view/167. | spa |
dc.relation.references | Christian Reiser et al. MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes. 2023. arXiv: 2302.12249 [cs.CV]. URL: https://arxiv.org/abs/2302.12249. | spa |
dc.relation.references | Fabio Remondino et al. “A Critical Analysis of NeRF-Based 3D Reconstruction”. In: Remote Sensing 15.14 (2023). ISSN: 2072-4292. DOI: 10.3390/rs15143585. URL: https://www.mdpi.com/2072-4292/15/14/3585. | spa |
dc.relation.references | Armando Levid Rodríguez-Santiago et al. “A deep learning architecture for 3D mapping urban landscapes”. In: Applied Sciences (Switzerland) 11 (23 Dec. 2021). ISSN: 20763417. DOI: 10.3390/APP112311551. | spa |
dc.relation.references | Philip J. Schneider and David H. Eberly. “Chapter 9 - Geometric Primitives in 3D”. In: Geometric Tools for Computer Graphics. The Morgan Kaufmann Series in Computer Graphics. San Francisco: Morgan Kaufmann, 2003, pp. 325–364. ISBN: 978-1-55860-594-7. DOI: https://doi.org/10.1016/B978-155860594-7/50012-6. URL: https://www.sciencedirect.com/science/article/pii/B9781558605947500126. | spa |
dc.relation.references | Johannes L. Schönberger and Jan-Michael Frahm. “Structure-from-Motion Revisited”. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. DOI: 10.1109/CVPR.2016.445. | spa |
dc.relation.references | Johannes L. Schönberger et al. “Pixelwise View Selection for Unstructured Multi-View Stereo”. In: Computer Vision - ECCV 2016. Cham: Springer International Publishing, 2016, pp. 501–518. ISBN: 978-3-319-46487-9. DOI: 10.1007/978-3-319-46487-9_31. | spa |
dc.relation.references | De Rosal Ignatius Moses Setiadi. “PSNR vs SSIM: imperceptibility quality assessment for image steganography”. In: Multimedia Tools and Applications 80 (6 2021), pp. 501–518. ISSN: 1573-7721. DOI: 10.1007/s11042-020-10035-z. | spa |
dc.relation.references | Pourya Shamsolmoali et al. “Image synthesis with adversarial networks: A comprehensive survey and case studies”. In: Information Fusion 72 (2021), pp. 126–146. ISSN: 1566-2535. DOI: https://doi.org/10.1016/j.inffus.2021.02.014. URL: https://www.sciencedirect.com/science/article/pii/S1566253521000385. | spa |
dc.relation.references | Connor Shorten and Taghi M. Khoshgoftaar. “A survey on Image Data Augmentation for Deep Learning”. In: Journal of Big Data 6 (1 2019). ISSN: 2196-1115. DOI: 10.1186/s40537-019-0197-0. URL: https://doi.org/10.1186/s40537-019-0197-0. | spa |
dc.relation.references | Nathan Silberman et al. “Indoor Segmentation and Support Inference from RGBD Images”. In: Computer Vision - ECCV 2012. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 746–760. ISBN: 978-3-642-33715-4. | spa |
dc.relation.references | Ivan Nunes da Silva et al. “Introduction”. In: Artificial Neural Networks: A Practical Course. Cham: Springer International Publishing, 2017, pp. 3–19. ISBN: 978-3-319-43162-8. DOI: 10.1007/978-3-319-43162-8_1. URL: https://doi.org/10.1007/978-3-319-43162-8_1. | spa |
dc.relation.references | Ivan Nunes da Silva et al. “Multilayer Perceptron Networks”. In: Artificial Neural Networks: A Practical Course. Cham: Springer International Publishing, 2017, pp. 55–115. ISBN: 978-3-319-43162-8. DOI: 10.1007/978-3-319-43162-8_5. URL: https://doi.org/10.1007/978-3-319-43162-8_5. | spa |
dc.relation.references | Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. 2014. arXiv: 1409.1556 [cs.CV]. | spa |
dc.relation.references | Vincent Sitzmann et al. “Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering”. In: Advances in Neural Information Processing Systems. Ed. by M. Ranzato et al. Vol. 34. Curran Associates, Inc., 2021, pp. 19313–19325. | spa |
dc.relation.references | Eugen Šlapak et al. “Neural radiance fields in the industrial and robotics domain: Applications, research opportunities and use cases”. In: Robotics and Computer-Integrated Manufacturing 90 (2024), p. 102810. ISSN: 0736-5845. DOI: https://doi.org/10.1016/j.rcim.2024.102810. URL: https://www.sciencedirect.com/science/article/pii/S0736584524000978. | spa |
dc.relation.references | Shuran Song, Samuel P. Lichtenberg, and Jianxiong Xiao. “SUN RGB-D: A RGB-D scene understanding benchmark suite”. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015, pp. 567–576. DOI: 10.1109/CVPR.2015.7298655. | spa |
dc.relation.references | Jia-Mu Sun, Tong Wu, and Lin Gao. “Recent advances in implicit representation-based 3D shape generation”. In: Visual Intelligence 2 (2024). ISSN: 2731-9008. DOI: 10.1007/s44267-024-00042-1. URL: https://doi.org/10.1007/s44267-024-00042-1. | spa |
dc.relation.references | Tiancheng Sun et al. NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting. 2021. arXiv: 2107.12351 [cs.CV]. URL: https://arxiv.org/abs/2107.12351. | spa |
dc.relation.references | Xingyuan Sun et al. “Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 2974–2983. | spa |
dc.relation.references | D. Sundararajan. “Geometric Transformations and Image Registration”. In: Digital Image Processing: A Signal Processing and Algorithmic Approach. Singapore: Springer Singapore, 2017, pp. 163–188. ISBN: 978-981-10-6113-4. DOI: 10.1007/978-981-10-6113-4_6. URL: https://doi.org/10.1007/978-981-10-6113-4_6. | spa |
dc.relation.references | D. Sundararajan. “Introduction”. In: Digital Image Processing: A Signal Processing and Algorithmic Approach. Singapore: Springer Singapore, 2017, pp. 1–21. ISBN: 978-981-10-6113-4. DOI: 10.1007/ 978-981-10-6113-4_1. URL: https://doi.org/10.1007/978-981-10-6113-4_1. | spa |
dc.relation.references | C. Szegedy et al. “Going deeper with convolutions”. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2015, pp. 1–9. DOI: 10.1109/CVPR.2015.7298594. | spa |
dc.relation.references | Richard Szeliski. “Image Processing”. In: Computer Vision: Algorithms and Applications. Cham: Springer International Publishing, 2022, pp. 85–151. DOI: 10.1007/978-3-030-34372-9_3. URL: https://doi.org/10.1007/978-3-030-34372-9_3. | spa |
dc.relation.references | Takafumi Taketomi, Hideaki Uchiyama, and Sei Ikeda. “Visual SLAM algorithms: a survey from 2010 to 2016”. In: IPSJ Transactions on Computer Vision and Applications 9 (1 2017). ISSN: 1882-6695. DOI: 10.1186/s41074-017-0027-2. URL: https://doi.org/10.1186/s41074-017-0027-2. | spa |
dc.relation.references | Matthew Tancik et al. “Nerfstudio: A Modular Framework for Neural Radiance Field Development”. In: SIGGRAPH ’23. Los Angeles, CA, USA: Association for Computing Machinery, 2023. ISBN: 9798400701597. DOI: 10.1145/3588432.3591516. URL: https://doi.org/10.1145/3588432.3591516. | spa |
dc.relation.references | A. Tewari et al. “Advances in Neural Rendering”. In: Computer Graphics Forum 41.2 (2022), pp. 703–735. DOI: https://doi.org/10.1111/cgf.14507. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14507. URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14507. | spa |
dc.relation.references | Athanasios Voulodimos et al. “Deep Learning for Computer Vision: A Brief Review”. In: Computational Intelligence and Neuroscience 2018.1 (2018), p. 7068349. DOI: https://doi.org/10.1155/2018/7068349. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1155/2018/7068349. URL: https://onlinelibrary.wiley.com/doi/abs/10.1155/2018/7068349. | spa |
dc.relation.references | Guangming Wang et al. NeRF in Robotics: A Survey. 2024. arXiv: 2405.01333 [cs.RO]. URL: https://arxiv.org/abs/2405.01333. | spa |
dc.relation.references | Huan Wang et al. “R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis”. In: Computer Vision – ECCV 2022. Springer Nature Switzerland, 2022, pp. 612–629. ISBN: 978-3-031-19821-2. DOI: 10.1007/978-3-031-19821-2_35. | spa |
dc.relation.references | Jiaxin Wang et al. “Neural Radiance Fields with Hash-Low-Rank Decomposition”. In: Applied Sciences 14.23 (2024). ISSN: 2076-3417. DOI: 10.3390/app142311277. URL: https://www.mdpi.com/2076-3417/14/23/11277. | spa |
dc.relation.references | Miao Wang et al. “VR content creation and exploration with deep learning: A survey”. In: Computational Visual Media 6 (1 Mar. 2020), pp. 3–28. ISSN: 2096-0662. DOI: 10.1007/s41095-020-0162-z. URL: https://doi.org/10.1007/s41095-020-0162-z. | spa |
dc.relation.references | Peng Wang et al. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. 2021. arXiv: 2106.10689 [cs.CV]. URL: https://arxiv.org/abs/2106.10689. | spa |
dc.relation.references | Xiaohui Wang et al. “MP-NeRF: More refined deblurred neural radiance field for 3D reconstruction of blurred images”. In: Knowledge-Based Systems 290 (2024), p. 111571. ISSN: 0950-7051. DOI: https://doi.org/10.1016/j.knosys.2024.111571. URL: https://www.sciencedirect.com/science/article/pii/S0950705124002065. | spa |
dc.relation.references | Xin Wang et al. Neural Radiance Fields in Medical Imaging: Challenges and Next Steps. 2024. arXiv: 2402.17797 [eess.IV]. URL: https://arxiv.org/abs/2402.17797. | spa |
dc.relation.references | Xuan Wang et al. “A Review of GAN-Based Super-Resolution Reconstruction for Optical Remote Sensing Images”. In: Remote Sensing 15.20 (2023). ISSN: 2072-4292. DOI: 10.3390/rs15205062. URL: https://www.mdpi.com/2072-4292/15/20/5062. | spa |
dc.relation.references | Zhan Wang, Shoudong Huang, and Gamini Dissanayake. Simultaneous Localization And Mapping: Exactly Sparse Information Filters. 1st ed. Singapore: World Scientific, 2011. ISBN: 9789814350310, 9789814350327. | spa |
dc.relation.references | Ying-mei Wei et al. “Applications of structure from motion: a survey”. In: Journal of Zhejiang University SCIENCE C 14 (7 2013), pp. 486–494. ISSN: 1869-196X. DOI: 10.1631/jzus.CIDE1302. URL: https://doi.org/10.1631/jzus.CIDE1302. | spa |
dc.relation.references | Laura Weihl, Bilal Wehbe, and Andrzej Wąsowski. NeRF-To-Real Tester: Neural Radiance Fields as Test Image Generators for Vision of Autonomous Systems. 2024. arXiv: 2412.16141 [cs.CV]. URL: https://arxiv.org/abs/2412.16141. | spa |
dc.relation.references | Diana Werner, Ayoub Al-Hamadi, and Philipp Werner. “Truncated Signed Distance Function: Experiments on Voxel Size”. In: Image Analysis and Recognition. Cham: Springer International Publishing, 2014, pp. 357–364. ISBN: 978-3-319-11755-3. | spa |
dc.relation.references | Di Wu et al. “Explicit 3D reconstruction from images with dynamic graph learning and rendering-guided diffusion”. In: Neurocomputing (2024). ISSN: 0925-2312. DOI: 10.1016/j.neucom.2024.128206. URL: https://www.sciencedirect.com/science/article/pii/S0925231224009779. | spa |
dc.relation.references | Jiajun Wu et al. “MarrNet: 3D Shape Reconstruction via 2.5D Sketches”. In: (2017). arXiv: 1711.03129 [cs.CV]. URL: https://arxiv.org/abs/1711.03129. | spa |
dc.relation.references | Juhao Wu et al. “Multi-view 3D reconstruction based on deep learning: A survey and comparison of methods”. In: Neurocomputing 582 (2024), p. 127553. ISSN: 0925-2312. DOI: https://doi.org/10.1016/j.neucom.2024.127553. URL: https://www.sciencedirect.com/science/article/pii/S0925231224003242. | spa |
dc.relation.references | Zhirong Wu et al. “3D ShapeNets: A Deep Representation for Volumetric Shapes”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). June 2015. | spa |
dc.relation.references | Yu Xiang et al. “Objectnet3d: A large scale database for 3d object recognition”. In: European conference on computer vision. Springer. 2016, pp. 160–176. | spa |
dc.relation.references | Haozhe Xie et al. “Toward 3D object reconstruction from stereo images”. In: Neurocomputing 463 (2021), pp. 444–453. ISSN: 0925-2312. DOI: https://doi.org/10.1016/j.neucom.2021.07.089. URL: https://www.sciencedirect.com/science/article/pii/S0925231221011711. | spa |
dc.relation.references | Mingle Xu et al. “A Comprehensive Survey of Image Augmentation Techniques for Deep Learning”. In: Pattern Recognition 137 (2023). ISSN: 0031-3203. DOI: https://doi.org/10.1016/j.patcog.2023.109347. URL: https://www.sciencedirect.com/science/article/pii/S0031320323000481. | spa |
dc.relation.references | Jingyu Yang et al. “Learning to Reconstruct and Understand Indoor Scenes From Sparse Views”. In: IEEE Transactions on Image Processing 29 (2020), pp. 5753–5766. DOI: 10.1109/TIP.2020.2986712. | spa |
dc.relation.references | Myongkyoon Yang and Seong-In Cho. “High-Resolution 3D Crop Reconstruction and Automatic Analysis of Phenotyping Index Using Machine Learning”. In: Agriculture 11.10 (2021). ISSN: 2077-0472. DOI: 10.3390/agriculture11101010. URL: https://www.mdpi.com/2077-0472/11/10/1010. | spa |
dc.relation.references | Lior Yariv et al. “BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis”. In: SIGGRAPH ’23. Los Angeles, CA, USA: Association for Computing Machinery, 2023. ISBN: 9798400701597. DOI: 10.1145/3588432.3591536. URL: https://doi.org/10.1145/3588432.3591536. | spa |
dc.relation.references | Lior Yariv et al. Volume Rendering of Neural Implicit Surfaces. 2021. arXiv: 2106.12052 [cs.CV]. URL: https://arxiv.org/abs/2106.12052. | spa |
dc.relation.references | İbrahim Yazici, Ibraheem Shayea, and Jafri Din. “A survey of applications of artificial intelligence and machine learning in future mobile networks-enabled systems”. In: Engineering Science and Technology, an International Journal 44 (2023), p. 101455. ISSN: 2215-0986. DOI: https://doi.org/10.1016/j.jestch.2023.101455. URL: https://www.sciencedirect.com/science/article/pii/S2215098623001337. | spa |
dc.relation.references | Heng Yu et al. “DyLiN: Making Light Field Networks Dynamic”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 2023, pp. 12397–12406. | spa |
dc.relation.references | Zehao Yu et al. “MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction”. In: Advances in Neural Information Processing Systems. Ed. by S. Koyejo et al. Vol. 35. Curran Associates, Inc., 2022, pp. 25018–25032. | spa |
dc.relation.references | Zhao Zhang et al. “A High-Quality Rice Leaf Disease Image Data Augmentation Method Based on a Dual GAN”. In: IEEE Access 11 (2023), pp. 21176–21191. DOI: 10.1109/ACCESS.2023.3251098. | spa |
dc.relation.references | Junhong Zhao et al. Exploring Accurate 3D Phenotyping in Greenhouse through Neural Radiance Fields. 2024. arXiv: 2403.15981 [cs.CV]. URL: https://arxiv.org/abs/2403.15981. | spa |
dc.relation.references | Xia Zhao et al. “A review of convolutional neural networks in computer vision”. In: Artificial Intelligence Review 57 (2024). ISSN: 1573-7462. DOI: 10.1007/s10462-024-10721-6. URL: https://doi.org/10.1007/s10462-024-10721-6. | spa |
dc.relation.references | Zhengli Zhao et al. Image Augmentations for GAN Training. 2020. arXiv: 2006.02595 [cs.LG]. URL: https://arxiv.org/abs/2006.02595. | spa |
dc.relation.references | Linglong Zhou et al. “A Comprehensive Review of Vision-Based 3D Reconstruction Methods”. In: Sensors 24.7 (2024). ISSN: 1424-8220. DOI: 10.3390/s24072314. URL: https://www.mdpi.com/1424-8220/24/7/2314. | spa |
dc.relation.references | Jingsen Zhu et al. “I2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs”. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023, pp. 12489–12498. DOI: 10.1109/CVPR52729.2023.01202. | spa |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | spa |
dc.rights.license | Atribución-NoComercial-SinDerivadas 4.0 Internacional | spa |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | spa |
dc.subject.ddc | 621.3996 | spa |
dc.subject.ddc | 620 - Ingeniería y operaciones afines::621 - Física aplicada | spa |
dc.subject.ddc | 006.32 | spa |
dc.subject.ddc | 000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computación | spa |
dc.subject.lemb | PROCESAMIENTO DE IMAGENES | spa |
dc.subject.lemb | Image processing | eng |
dc.subject.lemb | PROCESAMIENTO OPTICO DE DATOS | spa |
dc.subject.lemb | Optical data processing | eng |
dc.subject.lemb | SISTEMAS DE REPRESENTACION TRIDIMENSIONAL | spa |
dc.subject.lemb | Three-dimensional display systems | eng |
dc.subject.lemb | REDES NEURALES (COMPUTADORES) | spa |
dc.subject.lemb | Neural networks (Computer science) | eng |
dc.subject.lemb | INTELIGENCIA ARTIFICIAL | spa |
dc.subject.lemb | Artificial intelligence | eng |
dc.subject.proposal | 3D reconstruction | eng |
dc.subject.proposal | Artificial neural networks | eng |
dc.subject.proposal | Image augmentation | eng |
dc.subject.proposal | Deep learning | eng |
dc.subject.proposal | Computer vision | eng |
dc.subject.proposal | Reconstrucción 3D | spa |
dc.subject.proposal | Redes neuronales artificiales | spa |
dc.subject.proposal | Aprendizaje profundo | spa |
dc.subject.proposal | Visión por computador | spa |
dc.subject.proposal | Aumentación de imágenes | spa |
dc.title | A deep learning approach for 3D reconstruction of indoor scenes | eng |
dc.title.translated | Un enfoque de aprendizaje profundo para la reconstrucción 3D de escenas interiores | spa |
dc.type | Trabajo de grado - Maestría | spa |
dc.type.coar | http://purl.org/coar/resource_type/c_bdcc | spa |
dc.type.coarversion | http://purl.org/coar/version/c_ab4af688f83e57aa | spa |
dc.type.content | Text | spa |
dc.type.driver | info:eu-repo/semantics/masterThesis | spa |
dc.type.redcol | http://purl.org/redcol/resource_type/TM | spa |
dc.type.version | info:eu-repo/semantics/acceptedVersion | spa |
dcterms.audience.professionaldevelopment | Estudiantes | spa |
dcterms.audience.professionaldevelopment | Investigadores | spa |
dcterms.audience.professionaldevelopment | Maestros | spa |
dcterms.audience.professionaldevelopment | Público general | spa |
oaire.accessrights | http://purl.org/coar/access_right/c_abf2 | spa |
Files
Original bundle
- Name: gomezdiego-thesis2024.pdf
- Size: 21.12 MB
- Format: Adobe Portable Document Format
- Description: Master's thesis in Engineering - Systems and Computing Engineering
License bundle
- Name: license.txt
- Size: 5.74 KB
- Format: Item-specific license agreed upon to submission