Abstract
Competence-based (vocational) education has gained a firm foothold in our society, causing assessment practices to change accordingly, along with ideas of what constitutes good assessment. The subject of this thesis is the (1) development, (2) validation and (3) practical use of a framework of quality criteria to evaluate assessment quality. Unlike most research into assessment, this thesis focuses on Competence Assessment Programmes (CAPs) instead of single assessment methods.

First, a literature study was carried out to develop a framework of quality criteria for CAPs, which was qualitatively compared to Messick's (1994, 1995) aspects of construct validity. It is argued that psychometric criteria such as validity and reliability are important for all assessments, but need to be operationalised differently for the more qualitative assessments found in competence-based education. Moreover, they need to be complemented with new quality criteria that do justice to the changed nature of assessments, for example their authenticity and meaningfulness to students.

Second, this framework was validated by means of a teacher questionnaire and an expert focus-group meeting. The teacher questionnaire showed that teachers consider the quality criteria to be important for their own assessment practices. No difference was found between the perceived importance of traditional, psychometric quality criteria and new, competence-based criteria. In the expert focus-group meeting, the participants collaboratively generated a list of important quality criteria for assessments in competence-based education. This list was discussed and compared to the proposed framework, which resulted in a new and improved framework of 12 quality criteria for CAPs: acceptability, authenticity, cognitive complexity, comparability, costs & efficiency, educational consequences, fairness, fitness for purpose, fitness for self-assessment, meaningfulness, reproducibility of decisions, and transparency.

Third, a self-evaluation procedure was developed to study the utility of the framework. Eight vocational schools evaluated the quality of their own CAP, and the self-evaluation processes and outcomes were explored. The 12 quality criteria were operationalised into more concrete practical indicators, on which the participants gave their CAP a rating and supported this rating with a piece of evidence. A group interview was used to discuss and elaborate on the ratings and evidence given. The process results showed that different perspectives on CAP quality are aggregated in the group interview, which leads to new insights and ideas for improvement. Providing evidence appeared to be difficult, as mostly personal experiences were offered and very few empirical data were available. To study the self-evaluation outcomes, a more 'traditional' and a more 'innovative' school were contrasted. The results showed that the two schools appeared to operate from different frames of reference. For example, the innovative school explicitly checked whether its CAP was transparent, fair, and acceptable in the eyes of stakeholders, while the traditional school assumed its stakeholders to be satisfied as they expressed no complaints.

In conclusion, the framework of quality criteria for CAPs developed in this thesis was validated, and schools seem to be able to critically reflect on CAP quality with the help of the self-evaluation procedure.
Original language | English
---|---
Qualification | Doctor of Philosophy
Awarding Institution |
Supervisors/Advisors |
Award date | 24 Apr 2008
Place of Publication | Utrecht
Publisher |
Print ISBNs | 978-90-393-4773-7
Publication status | Published - 2008