Understanding Computing Students’ Self-Assessments
Professor Eleanor O'Rourke and PhD students Melissa Chen and Yinmiao Li won a Best Paper Award at ICER 2024 for their study of the criteria underlying programming students' self-assessments
During the learning process, negative self-assessments can lead students to draw inaccurate conclusions about their programming ability. Novice programming students may be overly critical, for instance, when they encounter simple errors, need to restart a problem, stop coding to plan, or consult resources to explore an approach or study syntax.
Northwestern Engineering’s Eleanor O'Rourke, Melissa Chen, and Yinmiao Li are investigating how to support students through the process of learning programming and how to help novice programming students develop higher self-efficacy — a belief in their ability to complete a task or achieve a goal.
The team won the Best Paper Award at the 2024 Association for Computing Machinery Conference on International Computing Education Research, held August 12-15 in Melbourne, Australia. Their study examines the underlying reasons for students’ self-assessments to uncover why students conclude they are doing poorly in response to natural and expected programming experiences.
Through interviews with computing students, the team identified three negative and three positive underlying reasons for self-assessments. Judgments such as “I should know this” (negative) or “I can learn/recover” (positive) were tied to students’ perceptions of the “normal” learning curve for computing novices.
Chen, a third-year PhD student in computer science, explained that students often negatively self-assess when they think they should not encounter a particular situation or cannot overcome setbacks.
“Based on our results, we encourage everyone in the computing community to consider what they convey regarding what is typical or expected for programming learners when interacting with students,” Chen said. “We often aren’t clear about how students should evaluate their programming abilities, so students interpret all of the information they can get — including what we implicitly or accidentally convey through our words and course designs — to try to understand how they should be viewing their programming abilities.”
O’Rourke underscored these findings, noting that researchers and practitioners can help students develop more accurate expectations about the learning process by making evaluation criteria explicit and helping students develop the skills they need to recover from struggles.
“Our findings suggest that talking to students explicitly about what the programming learning process looks like, and normalizing common practices such as using online resources to look up syntax or planning solutions before implementing them, could help overcome the overly negative self-assessments that many students experience,” said O’Rourke, associate professor of computer science at the McCormick School of Engineering and associate professor of learning sciences at Northwestern’s School of Education and Social Policy.
Chen, O’Rourke, and Li, a PhD student in computer science and learning sciences co-advised by O’Rourke, plan to continue this work and design interventions to help students make more accurate self-assessments.
"Interventions that help students surface and reflect on their emotional and metacognitive experiences during programming are needed to increase student self-efficacy and persistence in computing,” Li said.