Show simple item record

dc.rights.license: Atribución-NoComercial-SinDerivadas 4.0 Internacional
dc.contributor.advisor: Restrepo Calle, Felipe
dc.contributor.advisor: Ramírez Echeverry, Jhon Jairo
dc.contributor.author: Lozano Rojas, Hernan Dario
dc.date.accessioned: 2023-01-13T19:46:59Z
dc.date.available: 2023-01-13T19:46:59Z
dc.date.issued: 2022
dc.identifier.uri: https://repositorio.unal.edu.co/handle/unal/82923
dc.description: ilustraciones, diagramas, gráficas, tablas
dc.description.abstract: La enseñanza y aprendizaje en el contexto educativo ha requerido a través del tiempo de diferentes esfuerzos por parte de los estudiantes y de los docentes para que se pueda dar un proceso satisfactorio en el que se cumpla con el objetivo de garantizar un completo aprendizaje. En el aprendizaje de la programación de computadores los estudiantes se enfrentan a distintos retos, por lo que el rol del docente, en su constante búsqueda y actualización, opta por incluir estrategias innovadoras mediante el apoyo de tecnologías computacionales con el fin de facilitar la apropiación del conocimiento por parte del estudiante. Cada vez hay más estrategias educativas que integran herramientas tecnológicas de evaluación automática de tipo juez online en las metodologías de clase de programación puesto que permiten: motivar al estudiante, promover el interés en las temáticas a partir de la práctica y la competencia y mejorar el rendimiento académico. Estas integraciones en las metodologías de clase de programación han registrado a través de la literatura que sus efectos son positivos en lo que respecta a mejoramiento en rendimiento, habilidades, mejora en la lógica y la participación. Sin embargo, se evidencia una falta de estudios con el fin de determinar si la motivación, como factor de aprendizaje, se ve afectada por la intervención de herramientas de evaluación automática en las sesiones de clase. Es por esto que el presente artículo se centra en evaluar el impacto en la motivación por aprender de los estudiantes al usar una herramienta de evaluación automática de programas en la asignatura de programación de computadores.
A partir de un diseño de investigación cuasiexperimental se comparó el nivel de motivación en el aprendizaje de los estudiantes del grupo experimental y de los estudiantes de un grupo control (estudiantes que cursaron la asignatura sin usar la herramienta de evaluación automática seleccionada) por medio de un pretest y postest con el instrumento MSLQ-Colombia. Adicionalmente, los estudiantes del grupo experimental respondieron una encuesta con preguntas de respuesta abierta que permitió recopilar las diferentes percepciones sobre la herramienta y la integración de este tipo de estrategias en la metodología de la clase. Los datos recolectados permitieron realizar un análisis estadístico respecto a la variación de la motivación en el aprendizaje de los grupos que participaron. Por un lado, los datos cuantitativos recopilados a través del MSLQ-Colombia no evidenciaron diferencias significativas en la motivación de los estudiantes del grupo experimental en el que se integró la herramienta de evaluación automática. Por otro lado, la información cualitativa permitió contrastar el impacto hallado en el análisis cuantitativo. Los datos de la encuesta de opinión de los estudiantes permitieron observar un aumento en los niveles de autoeficacia para el rendimiento de los estudiantes. Estos resultados permiten entender de mejor manera los efectos de la integración de herramientas de evaluación automática en la motivación por aprender de los estudiantes de programación de computadores. (Texto tomado de la fuente)
dc.description.abstract: Teaching and learning in the educational context have required, over time, different efforts from students and teachers to ensure a satisfactory process that meets the objective of guaranteeing complete learning. In learning computer programming, students face a variety of challenges, so teachers, in their constant effort to improve, choose to include innovative strategies supported by computer technologies in order to facilitate the appropriation of knowledge by the student. More and more educational strategies integrate online-judge automatic assessment tools into programming class methodologies, since these tools help motivate students, promote interest in the topics through practice and competition, and improve academic performance. The literature reports that such integrations in programming class methodologies have positive effects in terms of improved performance, skills, logic, and participation. However, there is a lack of studies that determine whether motivation, as a learning factor, is affected by the intervention of automatic assessment tools in class sessions. For this reason, the present article focuses on evaluating the impact on students' learning motivation of using an automatic program assessment tool in a computer programming course. Based on a quasi-experimental research design, the level of learning motivation of students in the experimental group and of students in a control group (students who took the course without using the selected automatic assessment tool) was compared by means of a pretest and posttest with the MSLQ-Colombia instrument. Additionally, the students in the experimental group answered a survey with open-ended questions that made it possible to collect their perceptions of the tool and of the integration of this type of strategy into the course methodology.
The data collected allowed us to perform a statistical analysis of the variation in learning motivation across the participating groups. On the one hand, the quantitative data collected through the MSLQ-Colombia did not show significant differences in the motivation of the students in the experimental group, in which the automatic assessment tool was integrated. On the other hand, the qualitative information made it possible to contrast the impact found in the quantitative analysis. The data from the student opinion survey showed an increase in students' levels of self-efficacy for academic performance. These results allow a better understanding of the effects of integrating automatic assessment tools on the learning motivation of computer programming students.
dc.format.extent: xvi, 120 páginas
dc.format.mimetype: application/pdf
dc.language.iso: spa
dc.publisher: Universidad Nacional de Colombia
dc.rights: Derechos reservados al autor, 2022
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.ddc: 000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
dc.title: Evaluación del impacto en la motivación en el aprendizaje de los estudiantes de programación de computadores mediante el uso de una herramienta de evaluación automática
dc.type: Trabajo de grado - Maestría
dc.type.driver: info:eu-repo/semantics/masterThesis
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.publisher.program: Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.contributor.researchgroup: Plas Programming languages And Systems
dc.description.degreelevel: Maestría
dc.description.degreename: Magíster en Ingeniería - Ingeniería de Sistemas y Computación
dc.description.researcharea: Computación aplicada - Educación en ingeniería
dc.identifier.instname: Universidad Nacional de Colombia
dc.identifier.reponame: Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl: https://repositorio.unal.edu.co/
dc.publisher.faculty: Facultad de Ingeniería
dc.publisher.place: Bogotá, Colombia
dc.publisher.branch: Universidad Nacional de Colombia - Sede Bogotá
dc.relation.references: ACM & IEEE. (2009). Curriculum Guidelines for Graduate Degree Programs in Software Engineering (inf. téc.). ACM, IEEE.
dc.relation.references: ACM & IEEE. (2013). Computer Science Curricula 2013 Curriculum Guidelines for Undergraduate Degree Programs in Computer Science (inf. téc.). Association for Computing Machinery (ACM), IEEE Computer Society.
dc.relation.references: ACM & IEEE. (2017). Information Technology Curricula 2017 IT2017 Curriculum Guidelines for Baccalaureate Degree Programs in Information Technology (inf. téc.).
dc.relation.references: Ala-Mutka, K. M. (2005). A Survey of Automated Assessment Approaches for Programming Assignments. Computer Science Education, 15(2), 83-102.
dc.relation.references: Álvarez, B., González, C. & García, N. (2008). La motivación y los métodos de evaluación como variables fundamentales para estimular el aprendizaje autónomo. Red U. Revista de Docencia Universitaria, 5(2), 1-12.
dc.relation.references: Alves, N. D. C., Wangenheim, C. G. V., Hauck, J. C. R. & Borgatto, A. F. (2020). A large-scale evaluation of a rubric for the automatic assessment of algorithms and programming concepts. Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE, 556-562.
dc.relation.references: Aristika, A., Darhim, Juandi, D. & Kusnandi. (2021). The Effectiveness of Hybrid Learning in Improving of Teacher-Student Relationship in Terms of Learning Motivation. Emerging Science Journal, 5(4).
dc.relation.references: Azmi, N. A., Mohd-Yusof, K., Phang, F. A. & Syed Hassan, S. A. H. (2018). Motivating Engineering Students to Engage in Learning Computer Programming. Advances in Intelligent Systems and Computing, 143-157.
dc.relation.references: Barra, E., López-Pernas, S., Alonso, Á., Sánchez-Rada, J. F., Gordillo, A. & Quemada, J. (2020). Automated Assessment in Programming Courses: A Case Study during the COVID-19 Era. Sustainability, 12(18).
dc.relation.references: Barrios, T. & Marin, M. B. (2014). Aprendizaje mixto a través de laboratorios virtuales. Signos Universitarios.
dc.relation.references: Barros Barrios, R. J., Rojas Montero, J. A. & Sánchez Ayala, L. M. (2008). Diseño de instrumentos didácticos para aprendizaje activo basado en teoría de colores. Revista Educación en Ingeniería, 3(5), 11-18.
dc.relation.references: Becerra-Alonso, D., Lopez-Cobo, I., Gómez-Rey, P., Fernández-Navarro, F. & Barbera, E. (2020). Edu-Zinc: A tool for the creation and assessment of student learning activities in complex open, online and flexible learning environments. Distance Education, 41(1), 86-105.
dc.relation.references: Beltrán, J. A. (1993). Procesos, estrategias y técnicas de aprendizaje. Síntesis.
dc.relation.references: Bennedsen, J. & Caspersen, M. E. (2019). Failure Rates in Introductory Programming: 12 Years Later. ACM Inroads, 10(2), 30-36.
dc.relation.references: Benotti, L., Aloi, F., Bulgarelli, F. & Gomez, M. J. (2018). The effect of a web-based coding tool with automatic feedback on students’ performance and perceptions. SIGCSE 2018 - Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 2018-Janua, 2-7.
dc.relation.references: Bosse, Y. & Gerosa, M. A. (2017). Difficulties of Programming Learning from the Point of View of Students and Instructors. IEEE Latin America Transactions, 15(11), 2191-2199.
dc.relation.references: Brito, M. & Goncalves, C. (2019). Codeflex: A web-based platform for competitive programming. Iberian Conference on Information Systems and Technologies, CISTI, 2019-June.
dc.relation.references: Bryman, A. (2012). Social Research Methods (4a ed). Oxford University Press.
dc.relation.references: Burgos-Castillo, E. & Sánchez-Abarca, P. (2012). “Adaptación y validación preliminar del cuestionario de motivación y estrategias de aprendizaje (MSLQ)”. Universidad del Bío-Bío, Chillán, Chile.
dc.relation.references: Byrne, P. & Lyons, G. (2001). The effect of student attributes on success in programming. Proceedings of the Conference on Integrating Technology into Computer Science Education, ITiCSE, 49-52.
dc.relation.references: Cardoso, M., de Castro, A. V., Rocha, Á., Silva, E. & Mendonça, J. (2020). Use of Automatic Code Assessment Tools in the Programming Teaching Process. En R. Queirós, F. Portela, M. Pinto y A. Simões (Eds.), First International Computer Programming Education Conference (ICPEC 2020) (4:1-4:10). Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
dc.relation.references: Cardoso, M., Marques, R., De Castro, A. V. & Rocha, Á. (2020). Using Virtual Programming Lab to improve learning programming: The case of Algorithms and Programming. Expert Systems, 38(4).
dc.relation.references: Cheng, L.-C., Li, W. & Tseng, J. C. R. (2021). Effects of an automated programming assessment system on the learning performances of experienced and novice learners. Interactive Learning Environments, 0(0), 1-17.
dc.relation.references: Chi, H., Allen, C. & Jones, E. (2016). Integrating Computing to STEM Curriculum via CodeBoard, 512-529.
dc.relation.references: Chibizova, N. V. (2018). The Problems of Programming Teaching. 2018 4th International Conference on Information Technologies in Engineering Education, Inforino 2018 - Proceedings.
dc.relation.references: Christian, M. & Trivedi, B. (2016). A comparison of existing tools for evaluation of programming exercises. ACM International Conference Proceeding Series, 04-05-Marc.
dc.relation.references: Clifton, J. (2010). A Simple Judging System for the ACM Programming Contest. Computer Science and Software Engineering, University of Wisconsin – Platteville.
dc.relation.references: Codeboard. (2020). Recuperado desde: https://codeboard.io/.
dc.relation.references: Combéfis, S. & de Moffarts, G. (2019). Automated Generation of Computer Graded Unit Testing-Based Programming Assessments for Education. 6th International Conference on Computer Science, Engineering and Information Technology (CSEIT-2019).
dc.relation.references: Combéfis, S. & Saint-Marcq, V. (2012). Teaching Programming and Algorithm Design with Pythia, a Web-Based Learning Platform. 6, 31-43.
dc.relation.references: Contreras, E. (2004). Evaluación de los aprendizajes universitarios. Docencia universitaria. Orientaciones para la formación del profesorado, 129-152.
dc.relation.references: Contreras, R., Sierra, E. A., Hernández, H. D. R., Hernández, N. B. E. & Moyotl, V. J. H. (2020). Sistema de evaluación inteligente para medir habilidades de razonamiento matemático. Revista Iberoamericana de Evaluación Educativa, 13(1), 251-280.
dc.relation.references: Coolican, H. (1997). Métodos de investigación y estadística en psicología (2a ed). Editorial Manual Moderno, S.A. de C.V.
dc.relation.references: Correia, H., Leal, J. P. & Paiva, J. C. (2017). Improving diagram assessment in Mooshak. International Conference on Technology Enhanced Assessment, 69-82.
dc.relation.references: Croft, D. & England, M. (2019). Computing with CodeRunner at Coventry University Automated summative assessment of Python and C++ code. ACM International Conference Proceeding Series, 1-4.
dc.relation.references: Dalfaro, N., Cuenca Pletsch, L. & Maurel, M. (2008). La utilización del Blended-Learning como aporte a la construcción de conocimientos significativos para los alumnos de Ingeniería en Sistemas. Primera Conferencia Latinoamericana sobre el Abandono en la Educación Superior.
dc.relation.references: Daradoumis, T., Marqués Puig, J. M., Arguedas, M. & Calvet Liñan, L. (2022). Enhancing students’ beliefs regarding programming self-efficacy and intrinsic value of an online distributed Programming Environment. Journal of Computing in Higher Education.
dc.relation.references: Darejeh, A. & Salim, S. S. (2016). Gamification Solutions to Enhance Software User Engagement—A Systematic Review. International Journal of Human-Computer Interaction, 32(8), 613-642.
dc.relation.references: De Oliveira Brandão, L., Bosse, Y. & Gerosa, M. A. (2016). Visual programming and automatic evaluation of exercises: An experience with a STEM course. Proceedings - Frontiers in Education Conference, FIE, 2016-Nove.
dc.relation.references: Del Valle, S. (2004). La programación y las unidades didácticas en Secundaria Obligatoria. Curso CSICSIF Sector de enseñanza.
dc.relation.references: Derval, G., Gégo, A. & Reinbold, P. (2014). INGINIOUS [software]. Recuperado desde: https://github.com/UCLINGI/INGInious.
dc.relation.references: DomJudge-Online. (2022). Recuperado desde: https://www.domjudge.org/.
dc.relation.references: Edwards, S. H. & Pérez-Quiñones, M. A. (2008). Web-CAT: Automatically grading programming assignments. Proceedings of the Conference on Integrating Technology into Computer Science Education, ITiCSE, 328.
dc.relation.references: Elliott, S. W. (2017). Computers and the Future of Skill Demand. OECD Publishing.
dc.relation.references: Escamilla, J., Fuentes, K., Venegas, E., Fernández, K., Elizondo, J. & Román, R. (2016). EduTrends Gamificación. Observatorio de Innovación Educativa, 1-36.
dc.relation.references: Fernández, P., Vallejo, G., Livacic-Rojas, P. & Tuero, E. (2014). Validez Estructurada para una investigación cuasi-experimental de calidad. Se cumplen 50 años de la presentación en sociedad de los diseños cuasi-experimentales. An. psicol., 30(2).
dc.relation.references: Galan, D., Heradio, R., Vargas, H., Abad, I. & Cerrada, J. A. (2019). Automated Assessment of Computer Programming Practices: The 8-Years UNED Experience. IEEE Access, 7, 130113-130119.
dc.relation.references: Gallardo, K. (2021). The Importance of Assessment Literacy: Formative and Summative Assessment Instruments and Techniques. En R. Babo, N. Dey y A. S. Ashour (Eds.), Workgroups eAssessment: Planning, Implementing and Analysing Frameworks (pp. 3-25). Springer Singapore.
dc.relation.references: Gallego-Romero, J. M., Alario-Hoyos, C., Estévez-Ayres, I. & Delgado Kloos, C. (2020). Analyzing learners’ engagement and behavior in MOOCs on programming with the Codeboard IDE. Educational Technology Research and Development.
dc.relation.references: Garcia, R., Falkner, K. & Vivian, R. (2018). Systematic literature review: Self-Regulated Learning strategies using e-learning tools for Computer Science. Computers and Education, 123, 150-163.
dc.relation.references: García Méndez, M. & Rivera Aragón, S. (2012). Aplicación de la estadística a la psicología. Miguel Ángel Porrúa.
dc.relation.references: Garcia-Duncan, T. & McKeachie, W. J. (2005). The Making of the Motivated Strategies for Learning Questionnaire. Educational Psychologist, 40(2), 117-128.
dc.relation.references: Gatica-Saavedra, M. & Rubí-González, P. (2020). The master class in the context of the competency-based educational model. Revista Electrónica Educare, 25(1), 1-12.
dc.relation.references: Gomes, A. & Mendes, A. (2015). A teacher’s view about introductory programming teaching and learning: Difficulties, strategies and motivations. Proceedings - Frontiers in Education Conference, FIE.
dc.relation.references: González Jaimes, E. I., López Chau, A., Trujillo Mora, V. & Rojas Hernández, R. (2018). Estrategia didáctica de enseñanza y aprendizaje para programadores de software / Teaching and learning didactic strategy for software programmers. RIDE Revista Iberoamericana para la Investigación y el Desarrollo Educativo, 9(17), 688-712.
dc.relation.references: González-Carrillo, C. D., Restrepo-Calle, F., Ramírez-Echeverry, J. J. & González, F. A. (2021). Automatic Grading Tool for Jupyter Notebooks in Artificial Intelligence Courses. Sustainability, 13(21).
dc.relation.references: Gonzalez-Escribano, A., Lara-Mongil, V., Rodriguez-Gutiez, E. & Torres, Y. (2019). Toward improving collaborative behaviour during competitive programming assignments. 2019 IEEE/ACM Workshop on Education for High-Performance Computing (EduHPC), 68-74.
dc.relation.references: Gordillo, A. (2019). Effect of an instructor-centered tool for automatic assessment of programming assignments on students’ perceptions and performance. Sustainability (Switzerland), 11(20).
dc.relation.references: Grover, S. (2021). Toward A Framework for Formative Assessment of Conceptual Learning in K-12 Computer Science Classrooms. Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 31-37.
dc.relation.references: Guerrero, M., Guamán, D. S. & Caiza, J. C. (2015). Revisión de Herramientas de Apoyo en el Proceso de Enseñanza-Aprendizaje de Programación. Revista Politécnica, 35(1), 84.
dc.relation.references: Gupta, S. & Gupta, A. (2018). E-Assessment Tools for Programming Languages: A Review. First International Conference on Information Technology and Knowledge Management, 65-70.
dc.relation.references: Hamidah, J., Said, I. & Ratnawati. (2019). Implementing Blended Learning Toward Students’ Self Efficacy in Writing Class: Students and Teachers’ Voice. Journal of English Education and Teaching (JEET).
dc.relation.references: Hernández Sampieri, R., Fernández Collado, C. & Baptista Lucio, M. d. P. (2016). Metodología de la investigación (6a. ed). México D.F.: McGraw-Hill.
dc.relation.references: Ibrahim, M. M. & Nat, M. (2019). Blended learning motivation model for instructors in higher education institutions. International Journal of Educational Technology in Higher Education, 16(1), 12.
dc.relation.references: Ihantola, P., Ahoniemi, T., Karavirta, V. & Seppälä, O. (2010). Review of recent systems for automatic assessment of programming assignments. Proceedings of the 10th Koli Calling International Conference on Computing Education Research, Koli Calling’10, 86-93.
dc.relation.references: Ion, G., Sánchez Martí, A. & Agud Morell, I. (2019). Giving or receiving feedback: which is more beneficial to students’ learning? Assessment & Evaluation in Higher Education, 44(1), 124-138.
dc.relation.references: Janczewski, R., Kosowski, A., Malafiejski, M. & Noinski, T. (2006). Application of SPOJ cooperative contest management in the university tuition system. Annals of the Gdansk University of Technology.
dc.relation.references: Janičić, M. V. & Marić, F. (2020). Regression verification for automated evaluation of students programs. Computer Science and Information Systems, 17(1), 205-227.
dc.relation.references: Järvelä, S. & Niemivirta, M. (2001). Motivation in context: Challenges and possibilities in studying the role of motivation in new pedagogical culture. En S. Volet y S. Järvelä (Eds.), Motivation in Learning Contexts (pp. 105-127). Pergamon Press.
dc.relation.references: Jiménez-Toledo, J. A., Collazos, C. & Revelo-Sánchez, O. (2019). Consideraciones en los procesos de enseñanza-aprendizaje para un primer curso de programación de computadores: una revisión sistemática de la literatura. TecnoLógicas, 22, 83-117.
dc.relation.references: Jurado, F., Redondo, M. & Ortega, M. (2014). eLearning standards and automatic assessment in a distributed eclipse based environment for learning computer programming. Computer Applications in Engineering Education, 22(4), 774-787.
dc.relation.references: Kanika, Chakraverty, S. & Chakraborty, P. (2020). Tools and Techniques for Teaching Computer Programming: A Review. Journal of Educational Technology Systems, 49(2), 170-198.
dc.relation.references: Keuning, H., Jeuring, J. & Heeren, B. (2016). Towards a Systematic Review of Automated Feedback Generation for Programming Exercises. Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, 41-46.
dc.relation.references: Khramova, M. V., Nesterov, M. V. & Kurkin, S. A. (2019). Problems of Learning Programming in Introductory Course. International Conference “Quality Management, Transport and Information Security, Information Technologies” (IT&QM&IS), 522-525.
dc.relation.references: Kosowski, A., Malafiejski, M. & Noinski, T. (2008). Application of an Online Judge and Contester System in Academic Tuition. Annals of the Gdansk University of Technology.
dc.relation.references: Krugel, J., Hubwieser, P., Goedicke, M., Striewe, M., Talbot, M., Olbricht, C., Schypula, M. & Zettler, S. (2020). Automated Measurement of Competencies and Generation of Feedback in Object-Oriented Programming Courses. IEEE Global Engineering Education Conference (EDUCON ’20), 1(1), 10.
dc.relation.references: Krusche, S., von Frankenberg, N. & Afifi, S. (2017). Experiences of a Software Engineering Course based on Interactive Learning.
dc.relation.references: Law, K. M., Lee, V. C. & Yu, Y. T. (2010). Learning motivation in e-learning facilitated computer programming courses. Computers and Education, 55(1), 218-228.
dc.relation.references: Leal, J. P. & Silva, F. (2003). Mooshak: a Web-based multi-site programming contest system. Software: Practice and Experience, 567-581.
dc.relation.references: Lishinski, A. & Yadav, A. (2019). Motivation, Attitudes, and Dispositions. En S. A. Fincher y A. V. Robins (Eds.), The Cambridge Handbook of Computing Education Research (pp. 801-826). Cambridge University Press.
dc.relation.references: Lobb, R. & Harlow, J. (2016). Coderunner: A tool for assessing computer programming skills. ACM Inroads, 7(1), 47-51.
dc.relation.references: Loui, M. C. & Borrego, M. (2019). Engineering Education Research. En S. A. Fincher y A. V. Robins (Eds.), The Cambridge Handbook of Computing Education Research (pp. 292-322). Cambridge University Press.
dc.relation.references: Lovos, E. & González, A. H. (2014). Moodle y VPL como Soporte a las Actividades de Laboratorio de un Curso Introductorio de Programación. IX Congreso de Tecnología en Educación y Educación en Tecnología.
dc.relation.references: Manniam Rajagopal, M. B. (2018). Virtual Teaching Assistant to Support Students’ Efforts in Programming (Tesis de maestría). Virginia Polytechnic Institute and State University.
dc.relation.references: Marchisio, M., Barana, A., Fioravera, M., Rabellino, S. & Conte, A. (2018). A Model of Formative Automatic Assessment and Interactive Feedback for STEM. 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), 01, 1016-1025.
dc.relation.references: Monereo, C. (2014). Las estrategias de aprendizaje en la Educación formal: enseñar a pensar y sobre el pensar. Infancia y Aprendizaje, 13, 3-25.
dc.relation.references: Moreira, M. A. (2017). Aprendizaje significativo como un referente para la organización de la enseñanza. Archivos de Ciencias de la Educación, 11(12).
dc.relation.references: Muñoz, R., Barría, M., Nöel, R., Providel, E. & Quiroz, P. (2012). Determinando las dificultades en el aprendizaje de la primera asignatura de programación en estudiantes de ingeniería civil informática. Memorias del XVII Congreso Internacional de Informática Educativa, TISE.
dc.relation.references: Pagano, R. (2006). Estadística para Las Ciencias Del Comportamiento. Cengage Learning Latin America.
dc.relation.references: Paredes-Daza, J. D. & Sanabria-Becerra, W. M. (2015). Ambientes de aprendizaje o ambientes educativos. Una reflexión ineludible. Revista de Investigaciones UCM, 25(15), 144-158.
dc.relation.references: Patil, A. (2010). Automatic Grading of Programming Assignments (Tesis de maestría). San Jose State University.
dc.relation.references: Pérez Pino, M., Enrique Clavero, J. O., Carbó Ayala, J. E. & González Falcón, M. (2017). La evaluación formativa en el proceso enseñanza aprendizaje. Edumecentro, 9(3), 263-283.
dc.relation.references: Pham, M. T. & Nguyen, T. B. (2019). The DOMJudge based Online Judge System with Plagiarism Detection. The University of Danang - University of Science and Technology.
dc.relation.references: Pintrich, P., Smith, D., Garcia, T. & McKeachie, W. (1991). A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ).
dc.relation.references: Pringuet, P., Friel, A. & Vande Wiele, P. (2021). CodeRunner: A Case Study of the Transition to Online Learning of a Java Programming Course. Proceedings of the AUBH E-Learning Conference 2021: Innovative Learning and Teaching - Lessons from COVID-19, 1-10.
dc.relation.references: Qoiriah, A., Yamasari, Y., Asmunin, Nurhidayat, A. I. & Harimurti, R. (2021). Exploring Automatic Assessment-Based Features for Clustering of Students’ Academic Performance. En A. Abraham, Y. Ohsawa, N. Gandhi, M. Jabbar, A. Haqiq, S. McLoone y B. Issac (Eds.), Proceedings of the 12th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2020) (pp. 125-134). Springer International Publishing.
dc.relation.references: Queirós, R. & Leal, J. P. (2012). PETCHA: A programming exercises teaching assistant. Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE, 192-197.
dc.relation.references: Queirós, R. & Leal, J. P. (2018). Fostering Students-Driven Learning of Computer Programming with an Ensemble of E-Learning Tools. En Á. Rocha, H. Adeli, L. P. Reis y S. Costanzo (Eds.), Trends and Advances in Information Systems and Technologies (pp. 289-298). Springer International Publishing.
dc.relation.references: Ramírez-Dorantes, M. d. C., Echazarreta-Moreno, A., Bueno-Álvarez, J. A. & Canto-y-Rodríguez, J. E. (2013). Validación Psicométrica del Motivated Strategies for Learning Questionnaire en Universitarios Mexicanos. Electronic Journal of Research in Educational Psychology.
dc.relation.references: Ramírez-Echeverry, J. J. (2017). La competencia «aprender a aprender» en un contexto educativo de ingeniería (Tesis doctoral). Universitat Politècnica de Catalunya (UPC). Barcelona, España.
dc.relation.references: Ramírez-Echeverry, J. J., García-Carillo, A. & Olarte-Dussán, F. A. (2016). Adaptation and Validation of the Motivated Strategies for Learning Questionnaire-MSLQ-in Engineering Students in Colombia. International Journal of Engineering Education, 32(4), 1-14.
dc.relation.references: Ramírez-Echeverry, J. J., Restrepo-Calle, F. & González, F. (2022). A case study in technology-enhanced learning in an introductory computer programming course. Global Journal of Engineering Education, 24(1), 65-71.
dc.relation.references: Ramírez-Echeverry, J. J., Rosales-Castro, L. F., Restrepo-Calle, F. & Gonzalez, F. A. (2018). Self-Regulated learning in a Computer Programming Course. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, 13(2), 75-83.
dc.relation.references: Restrepo-Calle, F., Ramírez Echeverry, J. J. & González, F. A. (2019). Continuous assessment in a computer programming course supported by a software tool. Computer Applications in Engineering Education, 27(1), 80-89.
dc.relation.references: Restrepo-Calle, F., Ramírez-Echeverry, J. J. & Gonzalez, F. A. (2018). UNCODE: Interactive system for learning and automatic evaluation of computer programming skills. In Proceedings of the 10th annual International Conference on Education and New Learning Technologies EDULEARN 2018, 1, 6888-6898.
dc.relation.references: Restrepo-Calle, F., Ramírez-Echeverry, J. J. & González, F. (2020). Using an interactive software tool for the formative and summative evaluation in a computer programming course: an experience report. Global Journal of Engineering Education.
dc.relation.references: Revilla, M. A., Manzoor, S. & Liu, R. (2008). Competitive Learning in Informatics: The UVa Online Judge Experience. Olympiads in Informatics, Institute of Mathematics and Informatics.
dc.relation.references: Rodríguez, J., Rubio Royo, E. & Hernández, Z. (2011). USES OF VPL. INTED2011 Proceedings, 743-748.
dc.relation.references: Rodriguez-del-Pino, J. (2012). A Virtual Programming Lab for Moodle with automatic assessment and anti-plagiarism features.
dc.relation.references: Rubio-Sánchez, M., Kinnunen, P., Pareja-Flores, C. & Ángel Velázquez-Iturbide, J. (2012). Lessons learned from using the automated assessment tool “Mooshak”. 2012 International Symposium on Computers in Education (SIIE), 1-6.
dc.relation.references: Rubio-Sánchez, M., Kinnunen, P., Pareja-Flores, C. & Velázquez-Iturbide, Á. (2014). Student perception and usage of an automated programming assessment tool. Computers in Human Behavior, 31, 453-460.
dc.relation.references: Ruiz-de-Clavijo, B. N. (2009). Motivación, motivación en el aprendizaje, acción motivacional del profesor en el aula. Revista Digital Innovación y Experiencias Educativas.
dc.relation.references: Sangwin, C. (2019). Automatic assessment of students’ code using CodeRunner. University of Edinburgh, 1-20.
dc.relation.referencesSanmartín, V. A. G. & Pilco, W. V. Y. (2020). "Aprender haciendo": Aplicación de la metodología por ambientes de aprendizaje. Polo del Conocimiento: Revista científico-profesional, 5(7), 188-208.
dc.relation.referencesSeijo Galán, S., Freire Rodríguez, C. & Ferradás Canedo, M. d. M. (2020). Tipos de motivación en relación a la ansiedad ante los exámenes en el alumnado de educación primaria. PUBLICACIONES, 50(1), 265-274.
dc.relation.referencesShao, T., Kuang, Y., Huang, Y. & Quan, Y. (2019). PAAA: An implementation of programming assignments automatic assessing system. ACM International Conference Proceeding Series, 68-72.
dc.relation.referencesShivam, Goswami, N., Baths, V. & Bandyopadhyay, S. (2019). AES: Automated evaluation systems for computer programing course. ICSOFT 2019 - Proceedings of the 14th International Conference on Software Technologies, 508-513.
dc.relation.referencesSiegel, S. & Castellan, N. J. (1998). Estadística no paramétrica: Aplicada a las ciencias de la conducta (4a ed). Editorial Trillas.
dc.relation.referencesSkalka, J., Drlík, M. & Obonya, J. (2019). Automated Assessment in Learning and Teaching Programming Languages using Virtual Learning Environment, 689-697.
dc.relation.referencesSousa Silva, G. R. (2022). Impact of a pseudocode online judge on programming language learning. Universidade de Brasília.
dc.relation.referencesSouza, D. M., Felizardo, K. R. & Barbosa, E. F. (2016). A systematic literature review of assessment tools for programming assignments. Proceedings - 2016 IEEE 29th Conference on Software Engineering Education and Training, CSEE&T 2016, 147-156.
dc.relation.referencesSpacco, J., Hovemeyer, D., Pugh, W., Emad, F., Hollingsworth, J. K. & Padua-Perez, N. (2006). Experiences with Marmoset: Designing and Using an Advanced Submission and Testing System for Programming Courses. Proceedings of the 11th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITICSE ’06), pp. 13-17.
dc.relation.referencesSpacco, J., Strecker, J., Hovemeyer, D. & Pugh, W. (2005). Software Repository Mining with Marmoset: An Automated Programming Project Snapshot and Testing System. Proceedings of the Mining Software Repositories Workshop (MSR 2005).
dc.relation.referencesSpacco, J., Winters, T. & Payne, T. (2006). Inferring Use Cases from Unit Testing. AAAI Workshop on Educational Data Mining.
dc.relation.referencesSpacco, J. W. (2006). MARMOSET: A programming project assignment framework to improve the feedback cycle for students, faculty and researchers (Tesis doctoral). University of Maryland, College Park. Ann Arbor, United States.
dc.relation.referencesSphere Online Judge (SPOJ). (2022). Retrieved from https://www.spoj.com/.
dc.relation.referencesSun, Q., Wu, J. & Liu, K. (2020). Toward Understanding Students’ Learning Performance in an Object-Oriented Programming Course: The Perspective of Program Quality. IEEE Access, 8, 37505-37517.
dc.relation.referencesSun, Q., Wu, J., Rong, W. & Liu, W. (2019). Formative assessment of programming language learning based on peer code review: Implementation and experience report. Tsinghua Science and Technology, 24(4), 423-434.
dc.relation.referencesSweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
dc.relation.referencesTapia, J. A. (2001). Motivación y estrategias de aprendizaje: principios para su mejora en alumnos universitarios.
dc.relation.referencesTarek, M., Ashraf, A., Heidar, M. & Eliwa, E. (2022). Review of Programming Assignments Automated Assessment Systems. 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), 230-237.
dc.relation.referencesTavares, P. C., Henriques, P. R. & Gomes, E. F. (2017). A computer platform to increase motivation in programming students-PEP. CSEDU 2017 - Proceedings of the 9th International Conference on Computer Supported Education, 1, 284-291.
dc.relation.referencesThiébaut, D. (2015). Automatic Evaluation of Computer Programs Using Moodle’s Virtual Programming Lab (VPL) Plug-In. J. Comput. Sci. Coll., 30(6), 145-151.
dc.relation.referencesUllah, Z., Lajis, A., Jamjoom, M., Altalhi, A., Al-Ghamdi, A. & Saleem, F. (2018). The effect of automatic assessment on novice programming: Strengths and limitations of existing systems.
dc.relation.referencesVennila, R., Labelle, D. & Wiedenbeck, S. (2004). Self-efficacy and mental models in learning to program. ACM SIGCSE Bulletin, 36(3).
dc.relation.referencesVerdú, E., Regueras, L. M., Verdú, M. J., Leal, J. P., De Castro, J. P. & Queirós, R. (2012). A distributed system for learning programming on-line. Computers and Education, 58(1), 1-10.
dc.relation.referencesVidela, R. L. (2010). Clases pasivas, clases activas y clases virtuales. ¿Transmitir o construir conocimientos? Revista Argentina de Radiología.
dc.relation.referencesWardani, A. D., Gunawan, I., Kusumaningrum, D. E., Benty, D. D. N., Sumarsono, R. B., Nurabadi, A. & Handayani, L. (2020). Student Learning Motivation: A Conceptual Paper. Proceedings of the 2nd Early Childhood and Primary Childhood Education (ECPE 2020), 275-278.
dc.relation.referencesWilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 80-83.
dc.relation.referencesWünsche, B. C., Huang, E., Shaw, L., Suselo, T., Leung, K. C., Dimalen, D., Van Der Mark, W., Luxton-Reilly, A. & Lobb, R. (2019). CodeRunnerGL - An interactive web-based tool for computer graphics teaching and assessment. ICEIC 2019 - International Conference on Electronics, Information, and Communication.
dc.relation.referencesYusof, N., Zin, N. A. M. & Adnan, N. S. (2012). Java Programming Assessment Tool for Assignment Module in Moodle E-learning System. Procedia - Social and Behavioral Sciences, 56, 767-773.
dc.relation.referencesZimmerman, B. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329-339.
dc.rights.accessrightsinfo:eu-repo/semantics/openAccess
dc.subject.proposalHerramientas de evaluación
dc.subject.proposalMotivación en el aprendizaje
dc.subject.proposalProgramación de computadores
dc.subject.proposalAssessment tools
dc.subject.proposalComputer programming
dc.subject.proposalLearning motivation
dc.subject.unescoMétodo de aprendizaje
dc.subject.unescoLearning methods
dc.subject.unescoPrograma informático didáctico
dc.subject.unescoEducational software
dc.subject.unescoInformática educativa
dc.subject.unescoComputer uses in education
dc.title.translatedEvaluation of the impact on the learning motivation of computer programming students using an automatic assessment tool
dc.type.coarhttp://purl.org/coar/resource_type/c_bdcc
dc.type.coarversionhttp://purl.org/coar/version/c_ab4af688f83e57aa
dc.type.contentText
dc.type.redcolhttp://purl.org/redcol/resource_type/TM
oaire.accessrightshttp://purl.org/coar/access_right/c_abf2
dcterms.audience.professionaldevelopmentInvestigadores
dcterms.audience.professionaldevelopmentMaestros
dcterms.audience.professionaldevelopmentPúblico general