
make sense when we understand that knowledge is not static, that practices are emergent, and that practice generates its own understandings and is relational. The relationship between identity and learning is relational, dynamic and provisional (Fenwick, 2000, 2004). Agency for learning is mediated by individual sense-making of the context, as the context is mediated by the actions of individuals and groups, informing “ways of knowing, doing, and feeling” or, in other words, “a way of being” (Edwards & Usher, 1996).

2.2.2 Psychometric approaches

“Knowledge of educational products and educational purposes must become quantitative, take the form of measurement” (Thorndike, 1922, p. 1, in Hodges, 2013, p. 564). Thorndike’s work on “measurement in education” has had a long and far-reaching impact on assessment practices. Central to this work is the idea that learning can be measured, resting on a behaviouristic concept of learning. This implies that being competent is the

result of following a large number of small steps or modules, each of which has to be assessed at the end. Only after successful completion of a module can the student progress to the next. It follows then logically that assessment has to take a reductionist approach as well, viewing the total only as the sum of its constituent parts. (Lambert et al., 2011, p. 478)

When assumptions are made that treat assessment as a hard science, there is a concentration on uniformity and moderation, and on being clear, fixed and precise about what counts as good performance and what does not. It is from these approaches that the current conceptualisations of validity, reliability and fairness in assessment, rooted in the psychometric tradition, have become part of our everyday language when considering assessment. The psychometric tradition posits a strong association between assessment considered as objective and assessment tools that are standardised (Hodges, 2013). Standardised testing conditions and homogenised test materials were considered reliable, resulting, as Lambert points out, in the atomisation of competencies into sub-tasks (ibid.). One result of this approach is that “practice settings were often removed to make tests equivalent for all test takers” (ibid., p. 555).

These assumptions [validity and reliability] are grounded on the ideas that phenomena are located within individuals; that there is a quantity or amount that can be measured; that this measure, or true score, is obscured by sources of statistical noise from extraneous factors that needs to be eliminated; and that the ability of tests to discriminate between individuals is something positive. (Ibid.)

This quote illustrates the assumption that what a person does, his or her behaviour or performance, is objectively observable (Holmes, 2001). But as humans, we do not objectively observe and record actions, behaviours and performance; rather, we interpret them within a context. There is now considerable literature demonstrating that, in fact, authenticity increases validity (see section 2.2.3). Vaughan & Cameron (2009) note, in relation to the medical field, that the emphasis is moving away from gaining a certain number of marks in high-stakes examinations and towards gathering evidence of clinical competence and appropriate professional behaviour and attitudes. Such evidence is seen every day in the workplace.

Not surprisingly, then, there are arguments in the assessment literature that “‘traditional’ validity frameworks operate from perspectives which are incompatible with the goals of alternative assessment” (Teasdale & Leung, 2000, p. 164), suggesting a need for a different conceptualisation of validity and reliability. Knight and Yorke (2009), for example, state that summative assessment is in disarray, that examinations have limited reliability and validity, and that cheat-proof assessment systems are often
