Author ORCID Identifier

0009-0007-4281-3666

Date of Award

Summer 6-5-2024

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Learning Technologies Division

First Advisor

Dr. Lauren Margulieux

Second Advisor

Dr. Briana Morrison

Third Advisor

Dr. Ben Shapiro

Fourth Advisor

Dr. Yin-Chan (Janet) Liao

Abstract

Self-efficacy is a reliable predictor of academic motivation and achievement across academic disciplines and age groups (Lishinski et al., 2016; Zimmerman et al., 1992). Because self-efficacy can be improved through instructional experiences and tools, identifying students with low self-efficacy, and strategies to increase it, is valuable in computing education. For this reason, measurements of self-efficacy are important for educators and researchers. The goal of this study is to replicate the findings of Steinhorst et al.'s (2020) validation study of their newly designed self-efficacy measurement for introductory programming students. Additionally, this study extends that work by incorporating new measures of general (i.e., domain-independent) self-efficacy to assess convergent validity and to explore measurements over time. Students completed several instruments: those measuring general self-efficacy once at the beginning of the semester, and those measuring programming self-efficacy at both the beginning and end of the semester. Half of the current study's scales were used in the original study, so this population can be compared directly to the original populations. New scales for general self-efficacy were added to examine convergent validity and potential differences between general and programming-specific self-efficacy. The results revealed robust internal consistency and construct validity of the Steinhorst instrument for both the introductory programming course and the data structures course, aligning with general self-efficacy theory. The findings also highlight the Steinhorst instrument's suitability across programming languages and contexts. A significant insight from the study is that researchers and educators must adapt the instrument to their setting, excluding items not covered in the curriculum.
This research enhances researchers' and educators' understanding of the Steinhorst instrument's robustness for assessing programming self-efficacy across various educational contexts.

DOI

https://doi.org/10.57709/37160748

