AUT Library
View Item 
  •   Open Research
  • AUT Faculties
  • Faculty of Design and Creative Technologies (Te Ara Auaha)
  • School of Engineering, Computer and Mathematical Sciences - Te Kura Mātai Pūhanga, Rorohiko, Pāngarau
  • View Item

How difficult are exams? A framework for assessing the complexity of introductory programming exams

Sheard, J; Simon; Carbone, A; Chinn, D; Clear, Tony; Corney, M; D'Souza, D; Fenwick, J; Harland, J; Laakso, M-J; Teague, D
View/Open
Pages 145ff from Vol136.pdf (644.4 KB)
Permanent link
http://hdl.handle.net/10292/5151
Abstract
Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
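The abstract describes scoring each exam question on six complexity measures. A minimal sketch of how such a per-question record might look in code — the field names, the low/medium/high 1-3 scale, and the unweighted mean are illustrative assumptions, not the authors' actual coding scheme:

```python
from dataclasses import dataclass, fields

# Hypothetical record for one exam question, scored on the paper's six
# complexity measures. The 1-3 (low/medium/high) scale and the field
# names are assumptions for illustration only.
@dataclass
class QuestionComplexity:
    external_domain_references: int  # reliance on knowledge outside programming
    explicitness: int                # how explicitly the task is stated
    linguistic_complexity: int       # reading difficulty of the question text
    conceptual_complexity: int       # programming concepts involved
    code_length: int                 # length of code in question and/or answer
    intellectual_complexity: int     # Bloom taxonomy level

    def overall(self) -> float:
        """Naive unweighted mean of the six measures (an assumption;
        the paper correlates each measure with difficulty separately)."""
        vals = [getattr(self, f.name) for f in fields(self)]
        return sum(vals) / len(vals)

q = QuestionComplexity(1, 2, 1, 3, 2, 3)
print(round(q.overall(), 2))
```

Any real application of the scheme would need the rubric definitions from the paper itself; this sketch only shows the shape of the data.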
Keywords
Standards; Quality; Examination papers; CS1; Introductory programming; Assessment; Question complexity; Question difficulty
Date
January 2013
Source
Australasian Computing Education Research Conference (ACE 2013), held at UniSA, Adelaide, Australia, 2013-01-29 to 2013-02-01; published in: Proceedings of the Fifteenth Australasian Computing Education Research Conference (ACE 2013), vol. 136, pp. 145-154
Item Type
Conference Contribution
Publisher
Australian Computer Society (ACS)
Publisher's Version
http://crpit.com/PublishedPapers.html
Rights Statement
Copyright © 2013, Australian Computer Society, Inc. This paper appeared at the 15th Australasian Computing Education Conference (ACE 2013), Adelaide, South Australia, January-February 2013. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 136. A. Carbone and J. Whalley, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

Contact Us
  • Admin

Hosted by Tuwhera, an initiative of the Auckland University of Technology Library
