
Setting the Standard: Getting It Right

Frances Jackson, PRP

During the 2021 NAP Biennial Convention, members of the Commission on Credentialing (COC) presented a workshop updating members on the commission's progress in completing the RP credentialing process and beginning the PRP credentialing process. As part of that presentation, commissioner Frances Jackson described the analysis work performed by the commission to ensure the testing integrity of the new process. Since that presentation, some interest has been expressed in having Dr. Jackson expand on the presentation to describe the process of inter-rater reliability and how it is used to ensure testing integrity.


As some members are aware, one of the recurring issues with the current credentialing process, for both the Professional Qualifying Course (now called the Professional Qualifying Examination) and all versions of the Professional Renewal Course (now called the Professional Renewal Certification), is the view that how candidates are evaluated depends on who one gets as an evaluator. Rightly or wrongly, there is a persistent perception that there is a lack of consistency in how the assignments are scored. This perception, whether accurate or not, leads to considerable anxiety among candidates and raises legitimate questions about consistency of evaluation, how it can be measured, and how it can be achieved.

The bylaws that established the COC included a clause requiring psychometric analysis as part of the testing process for the new credentialing system. This provision not only reflects the concerns of NAP members but is also a requirement of the three organizations that sanction accreditation programs. To comply with this provision, the COC established an Analysis Committee. The chair of this committee is commissioner Jackson, a retired university professor. The other committee member is Dr. Mona Calhoun, PRP, who is now the national secretary for NAP. Dr. Calhoun recently completed a doctoral degree in psychometric testing, and Dr. Jackson has thirty-two graduate credits in research and analysis. This committee is responsible for designing and implementing the analysis required by the bylaws: psychometric testing to confirm the testing integrity of the credentialing process.

Brief Overview of the RP Credentialing Process

The RP credentialing process has three steps, and each step has several parts. Step One consists of eight multiple-choice exams. Each exam generates a score, and candidates must achieve 85% on an exam to complete it successfully. Step Two consists of four parts, mainly written assignments that reflect activities parliamentarians perform outside of meetings. A scoring rubric has been developed for each assignment. Step Three consists of a video meeting simulation in which candidates serve as the parliamentarian. After this mock meeting, candidates will submit a post-meeting report.

Since the multiple-choice exams in Step One generate a score, there is no need for evaluator agreement; the Schoology system scores each exam. For Steps Two and Three, however, candidates will be subject to evaluator “opinion,” or assessment of the extent to which a written assignment (Step Two) or a candidate’s performance as a meeting parliamentarian (Step Three) meets the criteria in the scoring rubrics. This naturally raises the questions of what standard is used to establish what is considered a passing assignment, who establishes this standard, and how this standard is communicated to all persons who serve as evaluators for Steps Two and Three.

Inter-Rater Reliability (IRR)

Based on the recommendation of the Analysis Committee, the COC established that the commission itself will set the scoring standard for Steps Two and Three. When the COC is ready to train evaluators to score the Step Two and Step Three assignments, evaluators will be judged on the extent to which their scores correlate with, i.e., match, the scores of the COC. Essentially, to what extent does the evaluator agree with the COC on what is a passing assignment and which assignments must be resubmitted? To make this determination, two statistics will be used: inter-rater reliability (IRR) and the correlation coefficient.

The first step is for the COC to establish the standard of what constitutes a successful assignment for each part of Steps Two and Three. To establish that standard, commissioners were given actual assignments submitted by candidates and asked to score them. Dr. Jackson collected the scores and computed the IRR statistic. Inter-rater reliability measures the extent to which two or more raters agree on the scores they have awarded. The IRR statistic ranges from 0.0 to 1.0, and for most studies an IRR score of at least 0.7 is considered strong agreement. These results were presented at a commission meeting, and areas where the IRR score was below 0.7 were discussed to identify why the disagreement occurred. To be clear, there were times when a commissioner gave a passing score on a criterion that the other commissioners felt had not been met, and times when a commissioner gave a failing score on a criterion that the other commissioners felt had been met. The discussion of these differences sometimes raised issues that helped all commissioners come to an agreement on what was a successful assignment and what was not. This process will be repeated for all parts of Steps Two and Three.
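The article does not name the specific IRR statistic the Analysis Committee computed, so the following is only an illustrative sketch: a short Python example using mean pairwise percent agreement, one common IRR measure that, like the statistic described above, ranges from 0.0 to 1.0. The commissioner scores are hypothetical.

```python
# Minimal sketch: mean pairwise percent agreement as a simple
# inter-rater reliability (IRR) measure. The commission's actual
# statistic is not named in the article; all scores are hypothetical.
from itertools import combinations

def percent_agreement(ratings_by_rater):
    """Mean pairwise agreement across raters, ranging from 0.0 to 1.0.

    ratings_by_rater: one list per rater, holding that rater's
    pass/fail score (1/0) for each rubric criterion.
    """
    pair_rates = []
    for a, b in combinations(ratings_by_rater, 2):
        matches = sum(x == y for x, y in zip(a, b))
        pair_rates.append(matches / len(a))
    return sum(pair_rates) / len(pair_rates)

# Three hypothetical commissioners scoring six rubric criteria (1 = met).
commissioners = [
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 0],
]
print(f"IRR = {percent_agreement(commissioners):.2f}")
# Prints 0.78 here; at least 0.7 is considered strong agreement.
```

In this sketch, criteria where the raters split (such as the third and fifth columns above) are exactly the disagreements the commission would discuss to refine its standard.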

With the standard established of what constitutes a successful or an unsuccessful assignment, the COC can begin training evaluators to take over the scoring of these assignments, and evaluators will be judged on the extent to which their scoring matches, or correlates with, the scoring done by the commission. During the training, potential evaluators will score the same assignments scored by the commissioners. The correlation coefficient will be used to determine the extent to which the evaluators’ scores agree (correlate) with the scores awarded by the commissioners.
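As an illustration of that comparison, here is a minimal Python sketch computing a Pearson correlation coefficient between one trainee evaluator's rubric totals and the commission's totals on the same sample assignments. The choice of Pearson's r and all of the scores are assumptions for illustration; the article does not specify which correlation statistic will be used.

```python
# Minimal sketch: Pearson correlation between a trainee evaluator's
# scores and the commission's scores on the same sample assignments.
# Pearson's r and these scores are assumptions for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient (assumes both score sets vary)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rubric totals on eight sample assignments.
commission = [18, 12, 20, 15, 9, 17, 14, 19]
evaluator  = [17, 13, 20, 14, 10, 16, 15, 19]
print(f"r = {pearson_r(commission, evaluator):.2f}")
# Prints 0.97 here: this evaluator tracks the commission's standard closely.
```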

Evaluators will not undergo IRR. Conducting IRR among the evaluators would tell us only the extent to which they agreed with each other, not the extent to which they agreed with the standard set by the commission. The commission sets the standard, and evaluators will be judged on the extent to which their scoring agrees (correlates) with the scores awarded by the commissioners. In this way, the scores awarded by evaluators should vary less depending on who is doing the scoring and therefore be more consistent. Essentially, no matter who does the scoring, the same result should occur, because the standard for evaluation is the one set by the commission, not the standard of the individual evaluators.

In conclusion, the statistics used to ensure testing integrity are IRR and the correlation coefficient. IRR assists in establishing the standard of acceptable assignments as determined by the COC. The correlation coefficient will assist in the training of evaluators by assessing the extent to which evaluators agree with the standard set by the COC. This will provide an opportunity during the training to help evaluators understand what differences, if any, exist between their scoring and the scoring done by the commission on the sample assignments. It is hoped that this process will decrease variability in the scoring of assignments and give all candidates equity and consistency in having their assignments scored. NP

Frances Jackson, PhD, RN, PRP, a member of the NAP Commission on Credentialing since 2017, worked as a registered nurse at various health care agencies before joining the faculty at Oakland University in Rochester, Michigan, where she served for thirty years. She has been a member of the Detroit Unit since 1996 and is currently president of the Michigan Unit of Registered Parliamentarians and first vice-president of the Michigan State Association of Parliamentarians.
