Show simple item record

dc.rights.license: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.contributor.author: Gil González, Julián
dc.contributor.author: Álvarez Meza, Andrés Marino
dc.date.accessioned: 2023-09-11T13:36:20Z
dc.date.available: 2023-09-11T13:36:20Z
dc.date.issued: 2023
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/84685
dc.description.abstract: The increasing popularity of crowdsourcing platforms, e.g., Amazon Mechanical Turk, is changing how datasets for supervised learning are built. In these cases, instead of having datasets labeled by a single source (supposed to be an expert providing the absolute gold standard), we have datasets labeled by multiple annotators with different and unknown expertise. Hence, we face a multi-labeler scenario, which typical supervised learning models cannot tackle. For this reason, much attention has recently been given to approaches that capture multiple annotators' wisdom. However, such methods rest on two key assumptions: that each labeler's performance does not depend on the input space, and that the annotators are independent; both are hardly feasible in real-world settings. This book explores several models, based on both frequentist and Bayesian perspectives, aimed at multi-labeler scenarios. Our approaches model the annotators' behavior by considering the relationship between the input space and the labelers' performance, and by coding interdependencies among them.
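The multi-labeler scenario the abstract describes can be illustrated with a toy aggregation baseline (this is not one of the book's methods, just the kind of naive strategy such methods improve upon): plain majority voting, which ignores annotator expertise, versus a weighted vote that assumes per-annotator reliabilities are known in advance. The `reliabilities` values below are hypothetical.

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate each sample's labels by simple majority (expertise ignored)."""
    return [Counter(labels).most_common(1)[0][0] for labels in annotations]

def weighted_vote(annotations, reliabilities):
    """Aggregate labels, weighting each annotator by a known reliability score."""
    aggregated = []
    for labels in annotations:
        scores = {}
        for label, weight in zip(labels, reliabilities):
            scores[label] = scores.get(label, 0.0) + weight
        aggregated.append(max(scores, key=scores.get))
    return aggregated

# Three annotators label four samples; annotators 2 and 3 are unreliable.
annotations = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 0]]
reliabilities = [0.9, 0.2, 0.2]

print(majority_vote(annotations))                 # → [1, 1, 1, 0]
print(weighted_vote(annotations, reliabilities))  # → [1, 0, 1, 0]
```

On the second sample the two unreliable annotators outvote the reliable one under majority voting, while the weighted vote follows the trusted annotator; the book's contribution is precisely to learn such annotator behavior (including its dependence on the input) rather than assume it known.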
dc.description.tableofcontents:
1 Preliminaries
1.1 Motivation
1.2 Problem Statement
1.3 Mathematical Preliminaries
1.3.1 Methods for Supervised Learning
1.3.2 Learning from Multiple Annotators
1.4 Literature Review on Supervised Learning from Multiple Annotators
1.5 Objectives
1.5.1 General Objective
1.5.2 Specific Objectives
1.6 Outline and Contributions
1.6.1 Kernel Alignment-Based Annotator Relevance Analysis (KAAR)
1.6.2 Localized Kernel Alignment-Based Annotator Relevance Analysis (LKAAR)
1.6.3 Regularized Chained Deep Neural Network for Multiple Annotators (RCDNN)
1.6.4 Chained Gaussian Processes for Multiple Annotators (CGPMA) and Correlated Chained Gaussian Processes for Multiple Annotators (CCGPMA)
1.6.5 Book Structure
2 Kernel Alignment-Based Annotator Relevance Analysis
2.1 Centered Kernel Alignment Fundamentals
2.2 Kernel Alignment-Based Annotator Relevance Analysis
2.2.1 KAAR for Classification and Regression
2.3 Experimental Set-Up
2.3.1 Classification
2.3.2 Regression
2.4 Results and Discussion
2.4.1 Classification
2.4.2 Regression
2.5 Summary
3 Localized Kernel Alignment-Based Annotator Relevance Analysis
3.1 Localized Kernel Alignment Fundamentals
3.2 Localized Kernel Alignment-Based Annotator Relevance Analysis
3.2.1 LKAAR for Classification and Regression
3.3 Experimental Set-Up
3.3.1 Classification
3.3.2 Regression
3.4 Results and Discussion
3.4.1 Classification
3.4.2 Regression
3.5 Summary
4 Regularized Chained Deep Neural Network for Multiple Annotators
4.1 Chained Deep Neural Network
4.2 Regularized Chained Deep Neural Network for Classification with Multiple Annotators
4.3 Experimental Set-Up
4.3.1 Tested Datasets
4.3.2 Provided and Simulated Annotations
4.3.3 Method Comparison and Quality Assessment
4.3.4 RCDNN Detailed Architecture and Training
4.4 Results and Discussion
4.5 Summary
5 Correlated Chained Gaussian Processes for Multiple Annotators
5.1 Chained Gaussian Processes
5.1.1 Correlated Chained Gaussian Processes
5.2 Correlated Chained GP for Multiple Annotators (CCGPMA)
5.2.1 Classification
5.2.2 Regression
5.3 Experimental Set-Up
5.3.1 Classification
5.3.2 Regression
5.4 Results and Discussion
5.4.1 Classification
5.4.2 Regression
5.5 Summary
6 Final Remarks
6.1 Conclusions
6.2 Future Work
6.3 Repositories
Bibliography
Appendices
Appendix A CCGPMA Supplementary Material
A.1 Derivation of CCGPMA Lower Bounds
A.1.1 Gradients w.r.t. the Variational Parameters
A.2 Likelihood Functions
A.2.1 Multiclass Classification with Multiple Annotators
A.2.2 Gaussian Distribution for Regression with Multiple Annotators
Alphabetical Index
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.ddc: 620 - Engineering and allied operations
dc.title: A Supervised Learning Framework in the Context of Multiple Annotators
dc.type: Book
dc.type.driver: info:eu-repo/semantics/book
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.contributor.corporatename: Vicedecanatura de Investigación y Extensión - Facultad de Ingeniería y Arquitectura - Sede Manizales - Editorial Universidad Nacional de Colombia
dc.identifier.instname: Universidad Nacional de Colombia
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl: https://repositorio.unal.edu.co/
dc.publisher.place: Bogotá, Colombia
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.subject.proposal: Supervised learning
dc.subject.proposal: Artificial intelligence
dc.subject.proposal: Machine learning
dc.subject.proposal: Neural networks
dc.subject.proposal: Computers
dc.subject.proposal: Gaussian processes
dc.type.coar: http://purl.org/coar/resource_type/c_2f33
dc.type.coarversion: http://purl.org/coar/version/c_970fb48d4fbd8a85
dc.type.redcol: http://purl.org/redcol/resource_type/LIB
oaire.accessrights: http://purl.org/coar/access_right/c_abf2
dcterms.audience.professionaldevelopment: Students
dc.identifier.eisbn: 9789585053694

