Should we trust students’ evaluations? A study in an Italian University
Daniel Piana, Tommaso Agasisti
Abstract: The progressive reduction of public resources for higher education has led many governments to rethink their organizations in a more managerial direction, altering the balance between central governments and academic institutions and increasing the decentralization of responsibilities. In this context an accountability system becomes fundamental, and teaching evaluation acquires two main purposes: the external certification of the results achieved by each university and the scientific support of decision-making processes. In this paper we focus on the second aspect, investigating the principal drivers of students’ satisfaction at Politecnico di Milano (Italy) through an analysis of the teaching-quality survey for the academic year 2005/06. We find that students’ evaluations focus on “core” aspects of teaching, such as the professors’ ability and clarity and the provision of teaching support, which suggests that these evaluations can actually be used to improve teaching activities. On the basis of these results, the paper also suggests using students’ evaluations for managerial purposes.
Keywords: students’ teaching evaluation, universities’ policies, accountability.
Notes. A part of this work is the outcome of Daniel Piana’s experience at the Ufficio di Supporto al Nucleo di Valutazione of Politecnico di Milano from March to July 2007. Special thanks are owed to the office members for their constant support and their help in processing the data shown in this paper. We are also grateful to prof. G. Catalano, who helped us throughout the work. Obviously, only the authors are responsible for any errors that might appear in the present work.
Corresponding author: Tommaso Agasisti, Politecnico di Milano (ITALY), Department of Economics, Management and Industrial Engineering, tommaso.agasisti@polimi.it.
1. Introduction
Teaching evaluation in Italian universities can be read as the consequence of two phenomena: the growing demand for university education and the emergence of a sort of university market. The proliferation of new universities, the resulting competition and the shortage of resources made it necessary to establish an “accountability system” capable of allocating resources according to the quality of the results achieved by each university. According to Barnabè & Riccaboni (2007), universities are under scrutiny as they are required to change rapidly. This process of reform has led to a common consequence: the role, the strategic focus, the modus operandi and even the core values of universities are changing substantially. One way to foster and facilitate this transition has been the implementation in higher education of managerial methodologies and approaches once exclusively adopted by the private sector. This process has frequently been defined as the “new public management” (Barzelay, 1999; Gruening, 2001; Hood, 1995), but it has also been labeled with many different terms, such as managerialization, corporatization, commercialization, marketization, customerization, modernization, commodification, rationalization, professionalization and accountingization (Boyce, 2002; Czarniawska & Genell, 2002; Currie & Vidovich, 2000; Davies & Thomas, 2002; Lapsley & Miller, 2004; Lawrence & Sharma, 2002; Meek, 2000; Meek & Wood, 1998; Neumann & Guthrie, 2002; Parker, 2002; Saravanamuthu & Tinker, 2002; Singh, 2002; Roberts, 2004; Willmott, 1995). To sum up, a trend towards reorganizing and restructuring modern universities as entrepreneurial universities is emerging (Etzkowitz, 2003, p. 109; Meek, 2000, p. 28). Consequently, teaching evaluation should not be considered merely an external way of certifying teaching quality: it is, indeed, a fundamental governance instrument. Teaching evaluation is also one of the main pillars of each phase of the decision-making process: from problem setting, through problem solving, to the monitoring of results. Performance evaluation schemes, budgetary controls, continuous improvement processes and focused internal reporting solutions such as the Balanced Scorecard are becoming common methodologies in higher education (Aly & Akpovi, 2001; Dillard, 2002, p. 626; Holmes & McElwee, 1995; Lawrence & Sharma, 2002; Parker, 2002, p. 605; Saravanamuthu & Tinker, 2002, p. 549). A particularly relevant role in the reforms is played by quality assurance procedures and performance evaluations. Teaching evaluation can also be used to investigate the origins of students’ satisfaction and to inspire universities’ policies.
The use of questionnaires to collect students’ opinions is not new (Ramsden, 1991; Feldman, 1996; Wachtel, 1998; Burns, 1999) but is still very widespread. Near the end of every term, students complete questionnaires about their instructors and the teaching they have experienced. The questionnaires are meant to provide feedback to teachers with a view to remediating revealed areas of weakness. Each time student feedback questionnaires are introduced into a new setting, a chorus of lecturers questions the ability of students to make appropriate judgments about teaching, as well as the reliability and validity of the questionnaires. Over the years these concerns have stimulated a substantial body of research into various aspects of the validity and reliability of student feedback questionnaires; Marsh (1987) provides a thorough and comprehensive review of this work. In addition, according to the double-loop learning model (Argyris & Schon, 1978), teaching evaluation makes it possible to assess the effects induced by the implemented policies and their consistency. Lastly, by monitoring the results of the teaching-quality survey, it is possible to validate the fishbone diagram of the drivers of students’ satisfaction, which is our theoretical framework. This work presents the results of students’ teaching evaluations at Politecnico di Milano for the academic year 2005/06. In particular, the main purpose of our empirical analysis is to identify the drivers and determinants of students’ satisfaction, by analyzing data from more than 35,000 questionnaires completed by students. The paper is organized as follows. Section 2 illustrates the theoretical framework of the analysis, section 3 the methodology, section 4 the results and section 5 the conclusions.
2. Theoretical framework
Teaching-quality evaluation can be managed through many different methods; quantitative student ratings of teaching are used more than any other method to evaluate teaching performance (Marsh & Roche, 1993; Cashin, 1999; Seldin, 1999). Ratings given by students who attend lessons play a dominant role in the operational definition of what constitutes effective teaching (Cohen, 1981; Marsh, 1984; Feldman, 1997). In accordance with the university’s choice and the Ministry’s prescriptions, the questionnaire of the CNVSU (National Evaluation Committee for the University Sector) has been used at Politecnico di Milano (CNVSU, 2002). This instrument directly elicits students’ opinions about a wide range of aspects of teaching quality, so, by careful investigation of the survey
results, it has been possible to understand the main drivers of students’ satisfaction. (A note on “satisfaction”: Locke (1976) proposed that satisfaction is the result of outcome-value congruence: outcomes that are congruent with established values are perceived as satisfying. In Locke’s theory the correct reference point is therefore the participant’s established values relative to the outcome at hand; specifically, the satisfaction evaluation is based on whether one obtains what one values, the value-matching hypothesis. Locke defined a value as “what a person consciously or sub-consciously desires, wants, or seeks to attain”.) A clarification is necessary about this point, because the CNVSU’s questionnaire monitors the quality of teaching: students’ satisfaction, as will be shown later, is only one aspect of it. Concerning the objective definition that can be achieved through quality monitoring with this questionnaire, the CNVSU has paid attention to the following hypotheses:
• the responsibility for the quality of the training process is widespread at several levels, but in particular where the competences and the power to control and manage it exist;
• the single course that provides the training service is responsible for the quality of its outcome.
The main goal pursued is therefore the identification of the factors that make students’ learning process easier or harder. This is a crucial objective also for improving the understanding of educational production processes. On the basis of those factors it is possible to:
• activate specific actions to modify and improve every single element of teaching quality that needs adjustment;
• activate a wide range of “boosting policies” which should facilitate every adjustment process aimed at quality improvement.
The CNVSU’s questionnaire, reported in Table 1, is composed of 15 questions covering 5 themes, described below. Consistently with the mainstream literature on these topics, the questionnaire attempts to monitor students’ perceptions in a multidimensional setting (Marsh & Bailey, 1993; Cranton and Smith, 1990; Marsh, 1980; Marsh and Overall, 1981).
Section 1: Whole university courses’ organization. It investigates the acceptability of the time required to attend all the courses scheduled in the considered period, as well as the acceptability of the overall organization of all scheduled courses.
Section 2: Organization of this course. It investigates the engagement necessary to attend the scheduled course activities, the clarity of the examination rules and the actual availability of the teaching staff for clarifications and explanations.
Section 3: Teaching activities and studying. It investigates students’ preliminary knowledge, the study load, the interest in the subject matter stimulated by the teaching staff, the quality of the training aid and the usefulness of integrative activities.
Section 4: Infrastructure. It investigates opinions about the adequacy of the lecture rooms and of the rooms and equipment for integrative teaching activities.
Section 5: Interest and satisfaction. It investigates students’ personal interest in the subjects of the evaluated course, regardless of how it has been run, and the level of satisfaction with “how” the course was run.
The questionnaire of Politecnico di Milano thus investigates 15 variables, each matched with a question. The university headquarters base their policies on these variables in order to monitor students’ satisfaction as a relevant objective. For this reason we paid particular attention to question number 15, which directly monitors the degree of students’ satisfaction with the evaluated course. The degree of students’ satisfaction should be maximized by leveraging the factors that produce it, so it is crucial, from a managerial point of view, to identify those factors, their mutual relations and the weight of each one. Table 1 reports the 15 questions posed in the questionnaire. Reading the CNVSU’s questionnaire and according to the scheme in Table 1, it is possible to classify the drivers of students’ satisfaction into:
• organization of the course;
• teaching activities and studying;
• infrastructure;
• student’s interest.
Deepening the analysis, we assumed that variable 15, students’ satisfaction with the evaluated course, could be treated as the dependent variable and that 12 of the other 14 variables (questions 3 to 14) could be treated as the independent variables that create and influence it (Figure 1). In this analysis we did not consider questions 1 and 2 because they concern the organization of the whole set of university courses, that is to say, they refer to another level of analysis.
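In formal terms, this amounts to a standard linear specification (a sketch; the notation is ours, not the questionnaire’s):

Q15_i = b0 + sum_{j=3..14} bj * Qj_i + e_i

where Qj_i is the numeric answer of student i to question j, the coefficients bj measure the weight of each driver, and e_i is an error term.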
Students’ backgrounds, learning habits, interests and expectations influence how they feel about and react to their learning environment (Marsh, 1983; Zoller, 1992). Our study examines students’ cognitive background and their perception of teachers’ lecturing skills and course design as the key student-level determinants. Considering the 12 variables shown in Figure 1 as drivers of students’ satisfaction, and using appropriate statistical models, we quantified and ranked the strength of the structural connections between them and students’ satisfaction.
Table 1: The CNVSU’s questionnaire

Section 1: Whole university courses’ organization
1. Is the time required to attend all courses scheduled in this period (two-month period, quarter, semester, etc.) acceptable?
2. Is the whole organization (lessons scheduling, intermediate and final examinations) of all scheduled courses in this period (two-month period, quarter, semester, etc.) acceptable?
Section 2: Organization of this course
3. Were examination conditions clearly defined?
4. Is the timetable of teaching activities respected?
5. Is the teaching staff really available for clarifications and explanations?
Section 3: Teaching activities and studying
6. Is my own preliminary knowledge sufficient for understanding the subjects covered?
7. Does the teaching staff stimulate/motivate interest in the subject matter?
8. Does the teaching staff explain subjects clearly?
9. Is the study load required by this course proportional to the assigned credits?
10. Does the training aid (suggested or supplied) fit the study of the matter?
11. Are the integrative teaching activities (trainings, laboratories, seminars, etc.) useful for learning? (If integrative activities are not scheduled, cross out “NOT scheduled”)
Section 4: Infrastructure
12. Are the lecture rooms in which lessons take place adequate? (Can you see, hear, sit down?)
13. Are the rooms and equipment for integrative teaching activities (trainings, laboratories, seminars, etc.) adequate? (If integrative activities are not scheduled, cross out “NOT scheduled”)
Section 5: Interest and satisfaction
14. Am I interested in the subjects of this course? (regardless of how it was run)
15. Am I satisfied with how this course was run?
Figure 1: Student satisfaction drivers
3. Methodology
To identify the set of variables that most powerfully affect students’ satisfaction, and adopting the theoretical framework described in section 2, an empirical analysis was conducted on the teaching evaluation survey of Politecnico di Milano for the academic year 2005/06. In particular, we worked on a database that collects the answers to the questionnaires converted into numeric values, according to the scheme in Table 2. The questionnaire collects qualitative answers, so previously only frequency and distribution analyses had been carried out; by converting the answers into numeric values we were able to use more powerful statistical tools, such as correlation and linear regression. We are aware of the problems connected with this choice, but at the same time it allows us to derive some (almost quantitative) results useful for policy and managerial implications.
Table 2: Conversion scheme of answers into numeric values

Answer              Value
Firmly NO           1
More NO than YES    2
More YES than NO    3
Firmly YES          4
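As an illustration, the conversion can be implemented with a simple lookup table. The following sketch assumes a pandas DataFrame with one row per questionnaire and columns Q1..Q15 holding the qualitative labels of Table 2; the column names and file layout are our assumptions, since the paper does not document the database schema.

import pandas as pd

# Mapping of the qualitative labels of Table 2 to their numeric values.
ANSWER_TO_VALUE = {
    "Firmly NO": 1,
    "More NO than YES": 2,
    "More YES than NO": 3,
    "Firmly YES": 4,
}

df = pd.read_csv("questionnaires_2005_06.csv")   # hypothetical file name
question_cols = [f"Q{i}" for i in range(1, 16)]  # Q1 .. Q15
df[question_cols] = df[question_cols].replace(ANSWER_TO_VALUE)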
The empirical analysis is articulated in three phases:
1. The first is the selection of relevant data. Not all the questionnaires had been correctly filled in, so the sample was restricted to those with an answer for each question: as a result, 38,432 records out of 61,108 (62.89%) could be analyzed.
2. The second is the selection of relevant variables through correlation analysis. By analyzing the correlation indexes between question 15 and questions 3 to 14, we identified which aspects of teaching quality make students satisfied.
3. The third is the consolidation of the results. In this phase we tested the consistency and randomness of the results obtained in the previous step through:
• Simulation: the study of one hundred randomly generated populations homologous to the real one. As a first test we randomly created one hundred populations with the same answer distribution as the real one. For each of them we registered the (Spearman) correlation indexes between the answers and the degree of satisfaction. Then, for each question, we compared its real correlation index with the highest homologous index registered among the random populations. A sketch of this procedure is given after the Type I error scheme below.
• Linear regression: we conducted a linear regression analysis in which question 15 was treated as the dependent variable Y, while the vector of the other 12 questions (questions 1 and 2 were excluded because they relate to the organization of the whole set of university courses) was treated as the set of independent variables. Our choice of a linear regression model to validate the results is driven by two fundamental aspects. Firstly, this statistical model allows us to quantify the weight of each variable in determining students’ satisfaction. Secondly, the consistency of the linear regression can be checked with two hypothesis tests: Fisher’s F test, to assess the degree of correlation within the studied population,
and Student’s t test, to evaluate the relevance of each single variable in determining the dependent variable. Several levels of analysis have been used to report the results: university, faculty, homogeneous clusters of students, etc. Finally, we used the analysis of the Type I error as a further check. A Type I error, also known as an “error of the first kind”, α error or “false positive”, is the error of rejecting a null hypothesis (H0) when it is actually true; plainly speaking, it occurs when we observe a difference where actually there is none. The possible cases are summarized in the following scheme:
              H0 true               H0 false
accept H0     right decision        Type II error (β)
reject H0     Type I error (α)      right decision
α = Pr(Type I error) = Pr(reject H0 | H0 true) = Pr((X1, ..., Xn) ∈ R | H0 true), where (X1, ..., Xn) is a random sample and R is the critical region, that is, the subset of the sample space that induces the rejection of the null hypothesis H0.
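As announced above, the selection and simulation phases can be sketched in a few lines of code, continuing from the DataFrame built after Table 2 (column names Q1..Q15 remain our assumption). Independently permuting each column is one simple way to obtain random populations that preserve the real answer distributions while breaking any systematic link between answers.

import numpy as np
from scipy.stats import spearmanr

# Phase 1: keep only questionnaires with an answer to every question
# (38,432 of the 61,108 records in the paper's data).
complete = df.dropna(subset=question_cols)

# Phase 2: Spearman correlation between each candidate driver (Q3..Q14)
# and the satisfaction question (Q15).
drivers = [f"Q{i}" for i in range(3, 15)]
real_corr = {}
for q in drivers:
    rho, _ = spearmanr(complete[q], complete["Q15"])
    real_corr[q] = rho

# Phase 3 (simulation): 100 random populations homologous to the real one;
# for each question, record the largest correlation observed by pure chance.
rng = np.random.default_rng(42)
max_null_corr = {q: 0.0 for q in drivers}
for _ in range(100):
    fake_q15 = rng.permutation(complete["Q15"].to_numpy())
    for q in drivers:
        rho, _ = spearmanr(rng.permutation(complete[q].to_numpy()), fake_q15)
        max_null_corr[q] = max(max_null_corr[q], abs(rho))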
4. Results and discussion
Selection of relevant variables
The correlation analysis conducted at university level is illustrated in Table 3: indexes with a value higher than 0.50 are reported in bold.
Table 3: Correlation indexes between questions 3 to 14 and question 15 at university level
Question    Correlation index with Q15
Q3          0.50
Q4          0.40
Q5          0.52
Q6          0.39
Q7          0.71
Q8          0.72
Q9          0.46
Q10         0.56
Q11         0.46
Q12         0.21
Q13         0.25
Q14         0.52
During this preliminary analysis, some elements emerge:
• The correlation indexes of questions 12 and 13, concerning the adequacy of lecture rooms and of the facilities for integrative activities, have the smallest modulus.
• The highest correlation indexes are those of questions 3, 5, 7, 8, 10 and 14, which concern, respectively: clarity in the explanation of examination conditions, availability of the teaching staff for additional clarifications and explanations, the teacher’s ability to involve students during lessons, the teacher’s clarity of exposition, the quality of the training aid and the student’s prior interest in the subject of the course.
At the end of this first phase, it is possible to assume that the main drivers of students’ satisfaction are (1) clarity in the explanation of examination rules, (2) teachers’ behavior, (3) quality of the training aid and (4) the student’s inclination towards the subject of the course. Deepening our study by adopting a single-faculty perspective, we obtained the correlation indexes reported in Table 4 (again, scores above 0.50 are reported in bold). Correlation indexes are reported for each faculty.
Table 4: Correlation indexes between questions 3 to 14 and question 15 at faculty level
        Ing 1  Ing 2  Ing 3  Ing 4  Ing 5  Ing 6  Arc 1  Arc 2  Arc 3
Q3      0.50   0.45   0.53   0.50   0.47   0.58   0.54   0.49   0.53
Q4      0.37   0.33   0.48   0.43   0.38   0.49   0.43   0.37   0.40
Q5      0.50   0.45   0.59   0.50   0.50   0.58   0.58   0.55   0.50
Q6      0.42   0.36   0.43   0.38   0.42   0.41   0.38   0.40   0.32
Q7      0.72   0.68   0.75   0.70   0.72   0.73   0.73   0.71   0.71
Q8      0.71   0.68   0.77   0.72   0.74   0.72   0.72   0.71   0.67
Q9      0.47   0.41   0.48   0.45   0.47   0.46   0.52   0.53   0.39
Q10     0.60   0.51   0.57   0.54   0.57   0.59   0.58   0.63   0.53
Q11     0.43   0.40   0.50   0.44   0.44   0.52   0.61   0.50   0.44
Q12     0.21   0.17   0.26   0.18   0.21   0.28   0.24   0.25   0.14
Q13     0.26   0.20   0.28   0.24   0.23   0.32   0.30   0.35   0.17
Q14     0.59   0.51   0.58   0.51   0.50   0.53   0.55   0.50   0.47
Setting aside the peculiarities of the individual faculties and looking at a more general level, it is possible to highlight some interesting elements:
• Question 3, about clarity in the explanation of the examination conditions, is generally characterized by high correlation indexes, although in some faculties the index falls below the threshold fixed above (0.50). This strong relation between knowledge of the examination rules and students’ satisfaction can be read as a sort of “desire for safety”: students who know how they will be evaluated can adopt the most appropriate study approach in order to maximize their outcome.
• Question 5, about the availability of the teaching staff for additional clarifications and explanations, is characterized by high correlation indexes, with the exception of Ing 2. The importance of this variable is incontrovertible. However, this indicator is hard to read, because only a few students actually request meetings with teachers, so the variable should be interpreted with caution. Our idea, in fact, is that this judgment is influenced more by the teacher’s expertise and personal appeal than by his or her actual availability.
• Questions 7 and 8, concerning the teacher’s behavior, are the most important: they have the highest correlation indexes in all faculties. These aspects appear critical to quality and should therefore be monitored with particular attention. The quality of the teacher is critical from different points of view. Firstly, the teacher’s ability to involve the class increases the satisfaction index, because students do not attend teaching activities passively. Moreover, the teacher’s clarity of exposition assures students that they are not wasting their time, because by attending the lessons they acquire a better grasp of the subject.
• Question 10, concerning the suitability of the recommended training aid, is critical in every faculty: its correlation index with students’ satisfaction is consistently high. This close relation can be read as the students’ desire for suitable aids that improve their skills and make them adequately confident in the course examinations, which, moreover, are administered by the same staff who suggested the training aid. The thematic and temporal alignment between training aid and lessons thus gives students a useful confidence (moreover, the awareness of understanding and learning makes students motivated and satisfied).
• Question 14, concerning the student’s inclination towards the subjects of the course, has a relevant role in the generation of student satisfaction. In fact, its correlation indexes are on
average high. The importance of this factor lies in the student’s approach to the course: the higher his/her own interest, the better his/her involvement will be. However, the student’s interest in the subject should be treated with care, because it can create unintended expectations. In conclusion, the considerations made for the whole university are confirmed at faculty level as well: the main drivers of students’ satisfaction are clarity in the explanation of examination rules, teachers’ behavior and communication skills, quality of the training aid and the student’s inclination towards the course subject.
Results consolidation
As anticipated in the methodology section, the results and conclusions of the variable-selection phase were consolidated in several steps. The first step was the random generation of one hundred populations homologous to the real one in terms of answer distribution. For these populations, the correlation indexes between question 15 and the other twelve questions were computed; the maximum values, for the university and for each faculty, are shown in Table 5.
Table 5: Maximum correlation indexes between questions 3 to 14 and question 15, at university and faculty level, over 100 random populations homologous to the real one
        Poli   Ing 1  Ing 2  Ing 3  Ing 4  Ing 5  Ing 6  Arc 1  Arc 2  Arc 3
Q3      0.04   0.04   0.04   0.03   0.04   0.04   0.04   0.04   0.05   0.03
Q4      0.04   0.04   0.04   0.03   0.04   0.04   0.04   0.03   0.04   0.03
Q5      0.04   0.05   0.04   0.04   0.04   0.04   0.03   0.04   0.03   0.04
Q6      0.04   0.03   0.04   0.04   0.03   0.04   0.04   0.05   0.03   0.04
Q7      0.04   0.04   0.03   0.05   0.04   0.05   0.05   0.03   0.04   0.03
Q8      0.03   0.03   0.03   0.03   0.04   0.04   0.04   0.03   0.04   0.03
Q9      0.04   0.04   0.03   0.05   0.04   0.04   0.03   0.03   0.04   0.04
Q10     0.03   0.03   0.03   0.03   0.04   0.04   0.04   0.04   0.03   0.03
Q11     0.04   0.04   0.05   0.03   0.05   0.03   0.04   0.05   0.04   0.04
Q12     0.04   0.04   0.04   0.04   0.03   0.04   0.03   0.03   0.02   0.04
Q13     0.04   0.03   0.04   0.04   0.04   0.03   0.04   0.04   0.05   0.04
Q14     0.04   0.05   0.03   0.03   0.04   0.04   0.04   0.03   0.03   0.05
The comparison between the maximum value in Table 5, which is 0.05, and the correlation indexes of the real population shown in Tables 3 and 4 clears up every doubt about the significance of the latter, since their absolute value is always an order of magnitude higher. The second step of results validation is based on a linear regression analysis for the whole university and for all faculties. At university level we obtained the results reported in Table 6.
Table 6: Regression coefficients at university level (dependent variable: Q15)

Independent variable   Coefficient m(i)   Standard error s(i)
Q3                     0.08               0.004
Q4                     0.03               0.004
Q5                     0.06               0.004
Q6                     0.03               0.003
Q7                     0.23               0.004
Q8                     0.27               0.004
Q9                     0.09               0.003
Q10                    0.11               0.004
Q11                    0.08               0.003
Q12                    -0.01              0.004
Q13                    0.01               0.004
Q14                    0.13               0.004
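The regression itself is standard and could be reproduced, for instance, with statsmodels (a sketch under the same naming assumptions as before; the paper does not state which software was actually used):

import statsmodels.api as sm

X = sm.add_constant(complete[drivers])        # Q3..Q14 plus an intercept
ols = sm.OLS(complete["Q15"], X).fit()

print(ols.params)                  # coefficients m(i), cf. Table 6
print(ols.bse)                     # standard errors s(i), cf. Table 6
print(ols.fvalue, ols.f_pvalue)    # F test of overall significance
print(ols.tvalues)                 # t statistics m(i)/s(i), cf. Table 7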
The coefficients m(i) express the connection between the causal variables, monitored by questions 3 to 14 of the survey questionnaire, and the effect variable, monitored by question 15. The considerations we can express are the same as those arising from the correlation analysis. Teachers’ behavior and communication skills (questions 7 and 8), the suitability of the training aid (question 10) and the student’s interest in the subjects of the course (question 14) are the variables with the greatest impact on students’ satisfaction, as shown by the high values of their coefficients. The emerging picture substantially agrees with our expectations and is consistent with the preliminary data analysis. The consistency of the results was further checked with the F test, which attests that the hypothesis of no relationship is not reasonable, since its p-value is nearly 0.00. Finally, the reliability of the m(i) coefficients was checked by estimating the probability of a Type I error through Student’s t test, as shown in Table 7.
Table 7: t test at university level (Politecnico di Milano)

Variable   |m(i)/s(i)|   α (Type I error)
Q3         23.50         0.00
Q4         6.81          0.00
Q5         13.81         0.00
Q6         9.75          0.00
Q7         53.79         0.00
Q8         63.15         0.00
Q9         25.77         0.00
Q10        29.54         0.00
Q11        22.78         0.00
Q12        2.39          0.01
Q13        2.95          0.00
Q14        37.07         0.00
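With roughly 38,000 observations the t distribution is essentially normal, so the α column of Table 7 can be sketched as a two-sided tail probability of the statistic m(i)/s(i) (continuing from the hypothetical ols object above; whether the paper used one- or two-sided probabilities is not stated):

import numpy as np
from scipy.stats import norm

t_stat = (ols.params / ols.bse).drop("const")  # the ratios m(i)/s(i)
alpha = 2 * (1 - norm.cdf(np.abs(t_stat)))     # probability of a Type I error
# Only the smallest statistics (Q12 at 2.39, Q13 at 2.95) yield non-negligible
# alphas; all the others round to 0.00, cf. Table 7.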
Here α is the probability of a Type I error, and the t test highlights that the m(i) coefficients are statistically reliable with a probability close to 100%. For these reasons, it is possible to confirm the considerations made in discussing Table 6: the main drivers of students’ satisfaction are teachers’ behavior and communication skills, the suitability of the training aid and the student’s interest in the subjects of the course. However, an anomaly should be mentioned. If we assume that the weight of each variable on students’ satisfaction is proportional to the modulus of m(i), then the teacher’s availability (question 5) and the clarity of the examination rules (question 3) appear “comparable” with other drivers that have smaller correlation indexes. For this reason it was necessary to investigate more deeply the role of these aspects in the creation of students’ satisfaction. We therefore moved the focus of the study to the level of each faculty and compared the results with those of the whole university. The main reason for this approach is that we wanted to understand the real structure of students’ satisfaction and to appreciate to what extent the most populous faculties influence the university-level indexes. Table 8 reports the regression coefficients recorded at faculty level.
Table 8: Regression coefficients at faculty level (dependent variable: Q15)

        Ing 1  Ing 2  Ing 3  Ing 4  Ing 5  Ing 6  Arc 1  Arc 2  Arc 3
Q3      0.50   0.45   0.53   0.50   0.47   0.58   0.54   0.49   0.53
Q4      0.37   0.33   0.48   0.43   0.38   0.49   0.43   0.37   0.40
Q5      0.50   0.45   0.59   0.50   0.50   0.58   0.58   0.55   0.50
Q6      0.42   0.36   0.43   0.38   0.42   0.41   0.38   0.40   0.32
Q7      0.72   0.68   0.75   0.70   0.72   0.73   0.73   0.71   0.71
Q8      0.71   0.68   0.77   0.72   0.74   0.72   0.72   0.71   0.67
Q9      0.47   0.41   0.48   0.45   0.47   0.46   0.52   0.53   0.39
Q10     0.60   0.51   0.57   0.54   0.57   0.59   0.58   0.63   0.53
Q11     0.43   0.40   0.50   0.44   0.44   0.52   0.61   0.50   0.44
Q12     0.21   0.17   0.26   0.18   0.21   0.28   0.24   0.25   0.14
Q13     0.26   0.20   0.28   0.24   0.23   0.32   0.30   0.35   0.17
Q14     0.59   0.51   0.58   0.51   0.50   0.53   0.55   0.50   0.47
The m(i) values for each faculty, shown in Table 8, again confirm the considerations made at university level. In particular, we set an arbitrary threshold of 0.50 and observed that only four drivers exceed it: teachers’ behavior and communication skills (questions 7 and 8), the suitability of the training aid (question 10) and the student’s interest in the course subjects (question 14).
5. Concluding remarks
The empirical results allow us to state that two of the five sections of the survey questionnaire contain the main drivers of students’ satisfaction with the teaching quality of Politecnico di Milano:
• Teaching activities and studying;
• Interest and satisfaction.
This result does not mean that the other variables have no effect on students’ satisfaction, but that their impact is lower: the absolute value of their m(i) coefficients is an order of magnitude smaller than that of the main drivers. When designing their policies, the governing bodies of Politecnico di Milano should consider the following variables, which have a high impact on students’ satisfaction with teaching quality:
• The first category of variables concerns the teaching staff. Their ability to stimulate students’ interest, to involve students during lessons, and their communication skills have a huge impact on overall satisfaction.
• Students’ interest in the subject of the course is a very relevant driver. It refers to the student’s own disposition, so from the university’s perspective it is an exogenous variable. Precisely because it is exogenous, students’ interest in the course subjects should be monitored with both a preventive and a corrective intent, in order to make the university’s policies more effective.
• Lastly, the choice of training aid also proves to be a very important driver. Students want the closest possible alignment between what is covered in class and the suggested training aid, in order to maximize the effectiveness and reduce the difficulty of their study.
The consistency of these reflections is confirmed by the stability of the coefficients and by the substantial alignment between the results obtained through different analysis methods: correlation analysis and linear regression analysis. In terms of policy implications, several themes arise. With reference to the first issue, adequate incentives could be introduced to make professors aware of the effects of students’ evaluations. Given the professional nature of teaching activities, indeed, no direct interventions are possible, only indirect incentives towards greater teachers’ awareness. With respect to students’ attitudes, a more direct intervention to stimulate correct students’ decisions could be possible: for instance, better informative actions towards high school students might help them when choosing their university path. Finally, improving tutorship could contribute to improving students’ experience; in this respect, we strongly suggest paying more attention to the theme of teaching support for students, to help them reach their best targets. Further research on this topic would also include, obviously, the extension of the analysis to other universities, as well as quantitative extensions of the analysis of Politecnico di Milano.
References
Aly, N., & Akpovi, J. (2001), “Total quality management in California public higher education”, Quality Assurance in Education, 9 (3), 127-131.
Argyris, C., & Schon, D. (1978), Organizational Learning: A Theory of Action Perspective, Addison-Wesley, Reading, MA.
Barnabè, F., & Riccaboni, A. (2007), “Which role for performance measurement systems in higher education? Focus on quality assurance in Italy”, Studies in Educational Evaluation, 33, 302-319.
Barzelay, M. (1999), “How to argue about the new public management”, International Public Management Journal, 2 (2), 183-216.
Boyce, G. (2002), “Now and then: Revolutions in higher learning”, Critical Perspectives on Accounting, 13 (5-6), 575-601.
Burns, C.W. (1999), “Teaching portfolios and the evaluation of teaching in higher education: Confident claims, questionable research support”, Studies in Educational Evaluation, 25 (2), 131-142.
Cashin, W.E. (1999), “Student ratings of teaching: Uses and misuses”, in P. Seldin (ed.), Current Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions, Anker, Bolton, MA.
CNVSU - Comitato Nazionale per la Valutazione del Sistema Universitario (2002), Proposta di un insieme minimo di domande per la valutazione della didattica da parte degli studenti frequentanti, Ministero dell’Istruzione, dell’Università e della Ricerca, Luglio 2002, Doc 9/02.
Cohen, P.A. (1981), “Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies”, Review of Educational Research, 51 (3), 281-309.
Cranton, P., & Smith, R.A. (1990), “Reconsidering the unit of analysis: A model of student ratings of instruction”, Journal of Educational Psychology, 82 (2), 207-212.
Currie, J., & Vidovich, L. (2000), “Privatization and competition policies for Australian universities”, International Journal of Educational Development, 20 (2), 135-151.
Czarniawska, B., & Genell, K. (2002), “Gone shopping? Universities on their way to the market”, Scandinavian Journal of Management, 18 (4), 455-474.
Davies, A., & Thomas, R. (2002), “Managerialism and accountability in higher education: The gendered nature of restructuring and the costs to academic service”, Critical Perspectives on Accounting, 13 (2), 179-193.
Dillard, J.F. (2002), “Dialectical possibilities of thwarted responsibilities”, Critical Perspectives on Accounting, 13 (5-6), 621-641.
Etzkowitz, H. (2003), “Research groups as ‘quasi-firms’: The invention of the entrepreneurial university”, Research Policy, 32 (1), 109-121.
Feldman, K.A. (1996), “Identifying exemplary teaching: Using data from course and teacher evaluations”, New Directions for Teaching and Learning, 65, 41-50.
Feldman, K.A. (1997), “Identifying exemplary teachers and teaching: Evidence from student ratings”, in R.P. Perry & J.C. Smart (eds.), Effective Teaching in Higher Education: Research and Practice, Agathon Press, New York.
Gruening, G. (2001), “Origin and theoretical basis of New Public Management”, International Public Management Journal, 4 (1), 1-25.
Holmes, G., & McElwee, G. (1995), “Total quality management in higher education: How to approach human resource management”, The TQM Magazine, 7 (6), 5-10.
Hood, C. (1995), “The ‘New Public Management’ in the 1980s: Variations on a theme”, Accounting, Organizations and Society, 20 (2/3), 93-109.
Lapsley, I., & Miller, P. (2004), “Transforming universities: The uncertain, erratic path, foreword”, Financial Accountability & Management, 20 (2), 103-106.
Lawrence, S., & Sharma, U. (2002), “Commodification of education and academic labour – Using the balanced scorecard in a university setting”, Critical Perspectives on Accounting, 13 (5-6), 661-677.
Locke, E.A. (1976), “The nature and causes of job satisfaction”, in M.D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology (Vol. 1, pp. 1297-1349), Rand McNally, Chicago.
Marsh, H.W. (1980), “The influence of student, course, and instructor characteristics in evaluations of university teaching”, American Educational Research Journal, 17 (1), 219-237.
Marsh, H.W. (1983), “Multidimensional ratings of teaching effectiveness by students from different academic settings and their relation to student/course/instructor characteristics”, Journal of Educational Psychology, 75 (1), 150-166.
Marsh, H.W. (1984), “Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility”, Journal of Educational Psychology, 76, 707-754.
Marsh, H.W. (1987), “Students’ evaluations of university teaching: Research findings, methodological issues, and directions for future research”, International Journal of Educational Research, 11, 253-388.
Marsh, H.W., & Bailey, M. (1993), “Multidimensional students’ evaluations of teaching effectiveness: A profile analysis”, Journal of Higher Education, vol. 63.
Marsh, H.W., & Overall, J.U. (1981), “The relative influence of course level, course type, and instructor on students’ evaluations of college teaching”, American Educational Research Journal, 18 (1), 103-112.
Marsh, H.W., & Roche, L. (1993), “The use of students’ evaluations and an individually structured intervention to enhance university teaching effectiveness”, American Educational Research Journal, 30 (1), 217-251.
Meek, V.L. (2000), “Diversity and marketisation of higher education: Incompatible concepts?”, Higher Education Policy, 13 (1), 23-39.
Meek, V.L., & Wood, F.Q. (1998), “Higher education governance and management: Australia”, Higher Education Policy, 11 (2-3), 165-181.
Neumann, R., & Guthrie, J. (2002), “The corporatization of research in Australian higher education”, Critical Perspectives on Accounting, 13 (5-6), 721-741.
Parker, L.D. (2002), “It’s been a pleasure doing business with you: A strategic analysis and critique of university change management”, Critical Perspectives on Accounting, 13 (5-6), 603-619.
Ramsden, P. (1991), “A performance indicator of teaching quality in higher education: The Course Experience Questionnaire”, Studies in Higher Education, 16 (2), 129-150.
Roberts, R.W. (2004), “Managerialism in US universities: Implications for the academic accounting profession”, Critical Perspectives on Accounting, 15 (4-5), 461-467.
Saravanamuthu, K., & Tinker, T. (2002), “The university in the new corporate world”, Critical Perspectives on Accounting, 13 (5-6), 545-554.
Seldin, P. (1999), “Current practices - good and bad - nationally”, in P. Seldin (ed.), Current Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions, Anker, Bolton, MA.
Singh, G. (2002), “Educational consumers or educational partners: A critical theory analysis”, Critical Perspectives on Accounting, 13 (5-6), 681-700.
Wachtel, H.K. (1998), “Student evaluation of college teaching effectiveness: A brief review”, Assessment & Evaluation in Higher Education, 23 (2), 191-212.
Willmott, H. (1995), “Managing the academics: Commodification and control in the development of university education in the U.K.”, Human Relations, 48 (9), 993-1027.
Zoller, U. (1992), “Faculty teaching performance evaluation in higher science education: Issues and implications (a cross-cultural case study)”, Science Education, 76 (6), 673-684.