dc.rights.license | Attribution-NonCommercial-NoDerivatives 4.0 International |
dc.contributor.author | Gil González, Julián |
dc.contributor.author | Álvarez Meza, Andrés Marino |
dc.date.accessioned | 2023-09-11T13:36:20Z |
dc.date.available | 2023-09-11T13:36:20Z |
dc.date.issued | 2023 |
dc.identifier.uri | https://repositorio.unal.edu.co/handle/unal/84685 |
dc.description.abstract | The increasing popularity of crowdsourcing platforms, e.g., Amazon Mechanical Turk, is changing how datasets for supervised learning are built. Instead of datasets labeled by a single source (supposedly an expert providing the absolute gold standard), we have datasets labeled by multiple annotators with different and unknown levels of expertise. Hence, we face a multi-labeler scenario, which typical supervised learning models cannot tackle. For this reason, much attention has recently been given to approaches that capture the wisdom of multiple annotators. However, such methods rest on two key assumptions: that each labeler's performance is independent of the input space, and that the annotators are independent of one another; both are hardly feasible in real-world settings. This book explores several models, based on both frequentist and Bayesian perspectives, that aim to face multi-labeler scenarios. Our approaches model the annotators' behavior by considering the relationship between the input space and the labelers' performance and by coding interdependencies among the annotators. |
dc.description.tableofcontents | 1 Preliminaries
1.1 Motivation
1.2 Problem Statement
1.3 Mathematical Preliminaries
1.3.1 Methods for Supervised Learning
1.3.2 Learning from Multiple Annotators
1.4 Literature Review on Supervised Learning from Multiple Annotators
1.5 Objectives
1.5.1 General Objective
1.5.2 Specific Objectives
1.6 Outline and Contributions
1.6.1 Kernel Alignment-Based Annotator Relevance Analysis (KAAR)
1.6.2 Localized Kernel Alignment-Based Annotator Relevance Analysis (LKAAR)
1.6.3 Regularized Chained Deep Neural Network for Multiple Annotators (RCDNN)
1.6.4 Chained Gaussian Processes for Multiple Annotators (CGPMA) and Correlated Chained Gaussian Processes for Multiple Annotators (CCGPMA)
1.6.5 Book Structure
2 Kernel Alignment-Based Annotator Relevance Analysis
2.1 Centered Kernel Alignment Fundamentals
2.2 Kernel Alignment-Based Annotator Relevance Analysis
2.2.1 KAAR for Classification and Regression
2.3 Experimental Set-Up
2.3.1 Classification
2.3.2 Regression
2.4 Results and Discussion
2.4.1 Classification
2.4.2 Regression
2.5 Summary
3 Localized Kernel Alignment-Based Annotator Relevance Analysis
3.1 Localized Kernel Alignment Fundamentals
3.2 Localized Kernel Alignment-Based Annotator Relevance Analysis
3.2.1 LKAAR for Classification and Regression
3.3 Experimental Set-Up
3.3.1 Classification
3.3.2 Regression
3.4 Results and Discussion
3.4.1 Classification
3.4.2 Regression
3.5 Summary
4 Regularized Chained Deep Neural Network for Multiple Annotators
4.1 Chained Deep Neural Network
4.2 Regularized Chained Deep Neural Network for Classification with Multiple Annotators
4.3 Experimental Set-Up
4.3.1 Tested Datasets
4.3.2 Provided and Simulated Annotations
4.3.3 Method Comparison and Quality Assessment
4.3.4 RCDNN Detailed Architecture and Training
4.4 Results and Discussion
4.5 Summary
5 Correlated Chained Gaussian Processes for Multiple Annotators
5.1 Chained Gaussian Processes
5.1.1 Correlated Chained Gaussian Processes
5.2 Correlated Chained GP for Multiple Annotators (CCGPMA)
5.2.1 Classification
5.2.2 Regression
5.3 Experimental Set-Up
5.3.1 Classification
5.3.2 Regression
5.4 Results and Discussion
5.4.1 Classification
5.4.2 Regression
5.5 Summary
6 Final Remarks
6.1 Conclusions
6.2 Future Work
6.3 Repositories
Bibliography
Appendices
Appendix A CCGPMA Supplementary Material
A.1 Derivation of CCGPMA Lower Bounds
A.1.1 Gradients w.r.t. the Variational Parameters
A.2 Likelihood Functions
A.2.1 Multiclass Classification with Multiple Annotators
A.2.2 Gaussian Distribution for Regression with Multiple Annotators
Alphabetical Index |
dc.format.mimetype | application/pdf |
dc.language.iso | eng |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.subject.ddc | 620 - Engineering and allied operations |
dc.title | A Supervised Learning Framework in the Context of Multiple Annotators |
dc.type | Book |
dc.type.driver | info:eu-repo/semantics/book |
dc.type.version | info:eu-repo/semantics/publishedVersion |
dc.contributor.corporatename | Vicedecanatura de Investigación y Extensión - Facultad de Ingeniería y Arquitectura - Sede Manizales - Editorial Universidad Nacional de Colombia |
dc.identifier.instname | Universidad Nacional de Colombia |
dc.identifier.reponame | Repositorio Institucional Universidad Nacional de Colombia |
dc.identifier.repourl | https://repositorio.unal.edu.co/ |
dc.publisher.place | Bogotá, Colombia |
dc.rights.accessrights | info:eu-repo/semantics/openAccess |
dc.subject.proposal | Supervised learning |
dc.subject.proposal | Artificial intelligence |
dc.subject.proposal | Machine learning |
dc.subject.proposal | Neural networks |
dc.subject.proposal | Computers |
dc.subject.proposal | Gaussian processes |
dc.type.coar | http://purl.org/coar/resource_type/c_2f33 |
dc.type.coarversion | http://purl.org/coar/version/c_970fb48d4fbd8a85 |
dc.type.redcol | http://purl.org/redcol/resource_type/LIB |
oaire.accessrights | http://purl.org/coar/access_right/c_abf2 |
dcterms.audience.professionaldevelopment | Students |
dc.identifier.eisbn | 9789585053694 |