
International Journal of Learning, Teaching and Educational Research Vol. 20, No. 2, pp. 235-250, February 2021 https://doi.org/10.26803/ijlter.20.2.13

Decision-making by Heads of Academic Department using Student Evaluation of Instruction (SEI)


Mohammed Saleh Alkathiri Imam Abdulrahman bin Faisal University, Dammam, Saudi Arabia https://orcid.org/0000-0002-4079-7347

Abstract. This study investigates the ways in which heads of academic department use student evaluation of instruction (SEI) to make decisions about individual faculty members and/or whole academic departments. The study utilized a convenience sample of 57 heads of department, who completed an online questionnaire with two main constructs, which were assessed at the interval level of measurement. The results of the study revealed significant differences between heads of department who tend to trust SEI results compared to those who tend not to trust SEI results. The findings suggest there is a significant association between how heads of department perceive SEI and how they use it to make decisions about individual faculty members and their academic departments. In addition, analysis of the respondents as per two groups, according to their attitudes of trust or distrust toward SEI, showed that disparities within these groups were greater with respect to issues or decisions that affect individuals as opposed to whole departments. Therefore, the study concludes that decisions should not be made based solely on the results of SEI; rather, multiple sources of evaluation should be utilized to make proper decisions. The author strongly recommends that academic leaders should use SEI across multiple years or courses in order to obtain more reliable information. Future research may include qualitative studies on the topic and discipline-specific studies within certain academic departments or college clusters.

Keywords: faculty members; heads of academic department; higher education; student evaluation of instruction; teaching and learning

1. Introduction

The use of student evaluation of instruction (SEI) is, and has long been, one of the most common assessment practices in higher education. In the United States, SEI is the predominant form of faculty evaluation, and approximately 88% of all liberal arts colleges use SEI for summative decisions (Seldin, 1999). In 1991, the U.S. Department of Education reported that 97% of 40,582 heads of department who participated in a survey used SEI to assess teaching (Cashin, 2003). Today, many universities still use SEI to determine whether to grant faculty tenure, promote them, raise their pay based on merit, and offer them opportunities for professional development (Kelly, Ponton & Rovai, 2007). In fact, at institutions where the emphasis is on teaching, SEI is an influential measure used in promotion decisions (Emery, Kramer & Tian, 2003). Furthermore, universities use SEI for various other purposes, such as establishing the credibility of the education they offer, planning strategy, and improving curricula (Massy & French, 2001; Scott & Hawke, 2003). Nonetheless, the use of SEI in higher education has been controversial. Whilst supporters of SEI view it as a valid and reliable tool that can be used to facilitate decisions, those who oppose SEI claim that it is biased because of many factors that influence its results.

©Authors This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

SEI has been researched more often than other topics in higher education for decades (Theall & Franklin, 2001). Whereas many studies have supported the use of SEI as a valid indicator of quality instruction, several studies have questioned its validity and reliability for faculty evaluation. This paper provides a literature review of support for and opposition to SEI. Moreover, it identifies a gap in the literature around the question of how heads of department use SEI results to make decisions about faculty members and academic departments in the context of Saudi higher education. It then presents original research devised to fill that gap. The results inform the reader of the present practical reality of SEI and show how practitioners and policy makers use SEI results in their decisions. Finally, this paper provides conclusions and recommendations for academic leaders on the use of SEI in higher education. These will enable well-informed evaluations of individual faculty members and overall academic departments, and thus ultimately facilitate better decision-making in future.

2. Literature Review

2.1 History of student evaluation of instruction

The primary purpose of using SEI is to improve the quality of teaching and learning (Delvaux et al., 2013). Since ancient times, students have had a voice in how they are taught. For example, at the time of Socrates and in the medieval period (Tucker, 2015), students expressed their opinions about their teachers. However, the use of SEI, as it is currently known and applied, to evaluate faculty teaching started in the early 1900s (Algozzine et al., 2004). Wachtel (1998) provided a brief review of SEI, indicating that “the first teacher rating scale was published in 1915” (p. 191). In the 1920s, several universities in the United States started student evaluation procedures (Wachtel, 1998). Today, SEI is used in universities worldwide. Taking into consideration the fact that faculty members today perform multiple responsibilities (Alkathiri, 2018), the purpose of SEI has been expanded by universities, as well as by quality assurance bodies. It is now used, for example, to allocate performance funding, to gather evidence to decide which faculty members to promote, and to select the winners of teaching awards (Arthur, 2009; Hendry & Dean, 2002; Massy & French, 2001; Scott & Hawke, 2003; Shah & Nair, 2012; Tucker, 2014).

According to Centra (1993), research on SEI went through four distinct periods. First, the period from 1927 to 1960 was marked by the pioneering work of Remmers and his colleagues at Purdue University. Second, in the 1960s, the use of SEI was voluntary in almost all universities. Third, the 1970s was the “golden age of research on student evaluations” (Centra, 1993, p. 49). During this period, new research on SEI evolved, including studies that showed evidence of the validity of SEI and advocated the use of SEI for formative and summative evaluations. Lastly, the fourth period started in the early 1980s, when research on SEI continued to expand, providing further illustration of research findings, including meta-analysis studies.

According to Theall and Franklin (2001), more studies have been conducted on SEI than any other topic in higher education. Many studies have been done in the United States, Australia, and Europe. In contrast, Saudi Arabia, and the Arab world more generally, has been relatively understudied. Research on SEI has covered various subtopics such as the validity, reliability, and usefulness of SEI; the dimensions of effective teaching to be evaluated; the bias in student and instructor responses; and the identification of teaching excellence (Tucker, 2014). Furthermore, many reviews of the literature and meta-analysis studies on SEI have been conducted (Alderman, Towers & Bannah, 2012; Perry & Smart, 2007; Richardson, 2005). Nonetheless, research seldom includes discussions of the use of SEI by heads of department to make decisions that may affect individual faculty members or academic departments. The current paper is intended to contribute to the field of higher education by rectifying that omission, presenting and analyzing new evidence from Saudi Arabia where the study took place.

2.2 Support for student evaluation of instruction

Many researchers have claimed that SEI is an important indicator of where quality is improving and where it needs to be improved in teaching and learning, and in student satisfaction (Alkathiri, 2020; Zineldin, Akdag & Vasicheva, 2011). In addition, for decades, scholars have suggested that SEI can be considered a valid indicator of effective instruction. For example, as indicated by Liu (2012), SEI can predict ratings gathered from other sources, such as former students and colleagues. In addition, McKeachie (1997) claimed that SEI is a source of evaluation of teaching effectiveness that is more valid than any other. According to Liu (2012), early studies on SEI acknowledged its importance in teaching and learning because of the way in which SEI can actually reflect the quality of teaching based on student perceptions. Furthermore, literature (see Liu, 2012) has urged the use of SEI since students are able to furnish information on (1) learning goals, (2) student-instructor rapport, (3) teaching methods, (4) student-instructor communication, and (5) consumer data.

Research comparing SEI in distance education and face-to-face courses has concluded that there is little difference between the two modes of teaching in terms of the ratings of whole courses, and the quality of their instruction (Kelly et al., 2007; McGhee & Lowell, 2003; Waschull, 2001). Supporters of the use of SEI claim that if students are trained in using SEI, “evaluative judgements [given] on a regular basis have strong positive impact on the improvement of [faculty’s] instructional skills” (Spooren, Mortelmans & Denekens, 2007, p. 667). With regard to factors that might be expected to affect SEI, Aleamoni (1999) indicated that there is no relationship between SEI and class size, gender of student, time of day when a course is offered, level of course, or rank of instructor. In addition, many reviews of SEI have concluded that gender roles have no effect on it, or where such effects exist, they are not significant (Liu, 2012; Radmacher & Martin, 2001).

2.3 Concerns about student evaluation of instruction

Although many studies support the use of SEI in higher education, others express opposition to it. For example, some studies have revealed a gender bias against female faculty members in SEI, with students evaluating male and female faculty members based on different dimensions (Basow, 1995; Chamberlin & Hickey, 2001; Liu, 2012). Basow (1995) analyzed 2,000 SEIs collected from undergraduate students over four years and found that male faculty members scored much better than female faculty members on most questions most of the time, aside from one year when the women scored better on two criteria (i.e., sensitivity and student comfort). Another issue of bias is that female students consistently score their female professors higher than their male peers do (Centra & Gaubatz, 2000). Overall, it is evident that the gender of faculty members and students has an effect on SEI.

Another concern about SEI is the impact of teaching mode, in that, sometimes, faculty members receive disproportionately lower ratings in face-to-face courses compared to faculty members who teach online classes (Carle, 2009). Students studying online show more diverse opinions than students in face-to-face courses when scoring the delivery mode for effectiveness (Kelly et al., 2007; Liu, 2012; McGhee & Lowell, 2003; Rovai et al., 2006). Furthermore, class size has been found to have an influence on SEI. For example, many studies examining the impact of class size on SEI have found that higher SEI scores correlate with smaller classes (Badri et al., 2006; Liaw & Goh, 2003; Liu, 2012).

Another problem with SEI is the impact of evaluation instruments on results. According to Landrum and Braitman (2008), SEI scores decrease significantly when the number of points on an evaluation scale is changed from 10 to 5. On the 10-point scale, students use a larger range of values than on the 5-point scale. Other studies have examined subtle factors that impact SEI. For example, elective courses score better than compulsory ones (Marsh & Roche, 1997); SEI at the end of a semester can be significantly predicted by students’ first impressions of the instructor (Buchert et al., 2008); undergraduate students give lower ratings than graduate students do (Marsh, 2007; Whitworth, Price & Randall, 2002); the faculty member’s rank and experience influence SEI (Rovai et al., 2006); and faculty-member characteristics such as enthusiasm and humor can positively impact SEI (Obenchain, Abernathy & Wiest, 2001).

According to Centra and Gaubatz (2000), SEI can be biased because characteristics of students and instructors that are irrelevant to teaching may potentially affect ratings. For example, a class right at the start of the day might receive a worse score than the same class at a later, less awkward time. Concerns around the reliability of SEI need to be taken seriously, considering the influences of extraneous factors on SEI. When making decisions regarding faculty members, academic leaders should take care when using SEI results from undergraduate courses or small classes. SEI from a small sample (i.e., in small classes) might not be accurate or reflect the actual quality of faculty teaching. In order to make accurate decisions, the types of courses and the reasons that students took the courses should be considered. Another suggestion for increasing the reliability of SEI and reducing distortions is to ask students to rate the extent to which they have attained their educational objectives (McKeachie, 1997; Zhao & Gallant, 2012).

3. Statement of Purpose and Research Question

Conducting research using student-satisfaction data is a common practice in higher education (Alkathiri, 2020). Moreover, SEI is a crucial aid to decision-making there. SEI is used in colleges and universities for various purposes. These include providing formative feedback to faculty for instructional improvement; measuring teaching effectiveness in order to make administrative decisions on career advancement; helping students choose classes and instructors; and for research on teaching (Zhao & Gallant, 2012). Furthermore, according to Algozzine et al. (2004), a major reason for universities to use SEI is to make decisions on salary. That said, when making decisions based on SEI, institutional administrators need to be aware of the various findings of the ongoing research concerning its validity and reliability. The present study was devised to investigate the views of heads of department concerning the use of SEI to facilitate decisions. The author investigated the attitudes of heads of department and the ways in which these affect their use of SEI. The primary research question was: Does the difference in views of heads of department, at a public university in the Eastern Province of Saudi Arabia, have a significant effect on their use of SEI to make decisions about individual faculty members and about academic departments? The author investigated the use of SEI results by heads of department to make decisions about: (1) individual faculty members (including on promotion and awarding of tenure, effectiveness of teaching, professional development needs, and contribution to student learning experience); and (2) academic departments (including on effectiveness of teaching, professional development needs, and contribution to student cohort learning experience).

4. Methodology

The present study focused on heads of department at a public university in the Eastern Province of Saudi Arabia. Its purpose was to determine how significantly the differences between the heads of department in their views on SEI affect their use of it to make decisions about individual faculty members and academic departments. This study’s main hypothesis was that the tendency of heads of department to trust or not to trust the results of SEI makes no difference to their decision-making about individual faculty members or the academic department. The criterion variable was their overall tendency to trust the use of SEI to make decisions. The author utilized a quantitative research design and a convenience sample of heads of department, who were asked to take an online questionnaire to provide data. Convenience sampling involves the sample being drawn from the population that is available to the researcher (Taherdoost, 2016). Analyzing and evaluating the data on two constructs, the author developed an evaluation scale to assess the respondents’ responses regarding their experiences using SEI to make decisions about individual faculty members (i.e., construct A) and about their academic department (i.e., construct B).

4.1 Respondents

An online survey was distributed to 112 heads of department from 20 colleges in four clusters (i.e., health, engineering, sciences and management, and arts and education). Fifty-seven heads of department completed the survey, which equates to a 50.89% response rate. Just under six out of every ten respondents were women. Table 1 displays the counts and percentages of respondents by category.

Table 1: Respondent demographic data

Category         Subgroup                  Count (n = 57)   %
Sex              Male                      24               42.1
                 Female                    33               57.9
College cluster  Health                    8                14.0
                 Engineering               19               33.3
                 Sciences and management   15               26.3
                 Arts and education        15               26.3

4.2 Procedure

The researcher sent out an email to 112 heads of department from all clusters, asking them to take an online questionnaire. Respondents completed the questionnaire voluntarily. No compensation was offered for completing the questionnaire. Respondents were shown a consent form prior to taking the questionnaire. Respondents’ completion and submission of the questionnaire were used to indicate consent.

4.3 Instrument

The author developed an online questionnaire of eight question items (see Appendix 1). The questionnaire asked respondents to indicate their attitude toward each item using a 6-point Likert-type scale with three levels of agreement and three of disagreement. The first item, “Overall, I tend to trust the SEI results to make decisions in my job,” was intended to identify the level of trust in general terms. The seven other question items were subscales of two constructs, A and B (see Table 2). The two main constructs were assessed at the interval level of measurement.

Table 2: Percentages of some form of agreement, mean scores, and standard deviations of scores for questions and constructs (strongly disagree = 1, strongly agree = 6)

Item no.  Question                                                      Agreement (%)   M    SD
1         Overall, I tend to trust the SEI results to make
          decisions in my job as a head of department                   40.3            3.2  1.0
2 (A)     As a head of department, I would use the SEI results to
          make decisions about the individual faculty members           38.2            3.2  0.72
2.1       Individual faculty members’ promotion/awarding of tenure      3.5             1.9  0.69
2.2       Individual faculty members’ effectiveness of teaching         24.6            2.9  1.2
2.3       Individual faculty members’ professional development needs    64.9            3.8  0.93
2.4       Individual student learning experience                        59.7            4.0  0.93
3 (B)     As a head of department, I would use the SEI results to
          make decisions about the academic department                  95.9            5.0  0.66
3.1       Academic department’s effectiveness of teaching               87.7            4.6  0.96
3.2       Academic department’s professional development needs          100             5.2  0.57
3.3       Student cohort learning experience                            100             5.2  0.63

Construct A, concerning the use of SEI results to make decisions about individual faculty members, is measured by questions 2.1, 2.2, 2.3, and 2.4. Construct B, concerning the use of SEI results to make decisions about an academic department, is measured by questions 3.1, 3.2, and 3.3. Using the online questionnaire, two kinds of scores were calculated for each construct: subscale scores and an overall score. The subscale score for each question item from 2.1 onwards is the mean value calculated from all of its responses. The higher the subscale score for an item, the more likely the participating heads of department would be to use SEI to make a decision about the matter in question. For instance, the high M value on question 3.2 implies that the participating academic leaders are more likely to use SEI to make decisions about an academic department’s professional development needs. Finally, an average of the subscale scores for each construct gives its overall tendency score.
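The scoring scheme just described can be sketched in a few lines of code. This is an illustrative reconstruction, not the author's instrument: the toy ratings below are invented, and only the item numbering (2.1–2.4 for construct A) follows the questionnaire.

```python
# Hypothetical scoring sketch: each item's subscale score is the mean of its
# responses on the 6-point scale, and a construct's overall tendency score is
# the average of its subscale scores.
from statistics import mean

# Toy responses: item -> ratings (1 = strongly disagree, 6 = strongly agree)
responses = {
    "2.1": [2, 1, 2],  # promotion/awarding of tenure
    "2.2": [3, 3, 2],  # effectiveness of teaching
    "2.3": [4, 4, 3],  # professional development needs
    "2.4": [4, 5, 4],  # student learning experience
}

CONSTRUCT_A = ["2.1", "2.2", "2.3", "2.4"]  # decisions about individual faculty

subscale = {item: mean(responses[item]) for item in CONSTRUCT_A}
construct_a_score = mean(subscale[item] for item in CONSTRUCT_A)
print(subscale)
print(construct_a_score)
```

Under this scheme, a higher `construct_a_score` would indicate a respondent more willing to use SEI for decisions about individual faculty members.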

4.4 Analysis

The researcher used an independent t-test to assess whether there was an association between overall trust in the results of SEI by heads of department (i.e., “Overall, I tend to trust the SEI results to make decisions in my job as a head of department”) and their willingness to use SEI to make decisions about individual faculty members and overall academic departments. As mentioned above, to obtain the overall tendency score for each construct (A or B), the relevant subscale scores were averaged. Table 2 displays percentages of some form of agreement, mean scores, and standard deviations of scores for each of the items and constructs. The Cronbach alpha coefficient of internal consistency was computed to report the reliability and correlations for each of the constructs, as presented in Table 3.
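The Cronbach alpha coefficient referred to above is a standard reliability statistic; a minimal computation is shown below for illustration. The ratings are made up for the example and are not the study's data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
# where k is the number of items and "totals" are each respondent's summed score.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]         # each respondent's total
    item_var = sum(pvariance(item) for item in items)  # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Four items answered by five respondents (rows = items, columns = respondents)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 4, 4, 3],
    [3, 4, 3, 5, 2],
]
print(round(cronbach_alpha(items), 2))
```

Values of alpha above roughly .7, such as the .76 and .87 reported in Table 3, are conventionally read as acceptable internal consistency.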

Table 3: Correlation of constructs and measures of internal consistency

Construct  Question numbers    Subscale                                      Correlation with construct A  α
A          2.1, 2.2, 2.3, 2.4  Decisions about individual faculty members    —                             .76
B          3.1, 3.2, 3.3       Decisions about overall academic department   .78*                          .87

* p < .01.

5. Results

It is remarkable that all of the responses to question items 3.2 and 3.3 indicated some form of agreement. Both concern construct B: using SEI results to make decisions about the academic department. The least agreement was with question item 2.1: using SEI to make decisions about an individual faculty member’s promotion and awarding of tenure.

Overall, the percentage of some form of agreement could be considered low for the questions within construct A (concerning individual faculty members), in contrast to the percentage of some form of agreement for the questions within construct B (concerning the academic department). The individual items within the constructs were averaged. Table 3 shows that the results of the survey have high reliability, as well as significant correlations between the constructs.

Based on the respondents’ answers to the first question, two groups of heads of department were identified: one group with a tendency to trust the use of SEI to make decisions (answering 4, 5, or 6), and another group with a tendency not to (answering 1, 2, or 3). The t-test results showed statistically significant differences between construct means for the two groups. Therefore, the study’s main hypothesis is rejected, because the tendency of the respondents to trust or not to trust the results of SEI makes a significant difference to their decision-making about individual faculty members or the academic department.

For construct A, respondents in the “trusting” first group had a mean of 3.85, whereas respondents in the “distrusting” second group had a mean of 2.68. The difference was statistically significant (t(55) = 9.749, p < .01). For construct B, respondents in the first group had a mean score of 5.58, whereas the second group’s mean score was 4.59. The difference was statistically significant (t(55) = 8.315, p < .01). Furthermore, Cohen’s effect size was computed, and the alpha level is reported in Table 3.
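The comparison reported above can be illustrated with a standard pooled-variance independent-samples t-test plus Cohen's d as the effect size. The group scores below are invented for demonstration; they are not the study's data, and the study's exact computation is not published.

```python
# Student's independent t-test with pooled variance (df = n1 + n2 - 2) and
# Cohen's d (mean difference in pooled-standard-deviation units).
from statistics import mean, variance
from math import sqrt

def independent_t_and_d(g1, g2):
    n1, n2 = len(g1), len(g2)
    m1, m2 = mean(g1), mean(g2)
    # Pooled sample variance across the two groups
    sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    d = (m1 - m2) / sqrt(sp2)
    return t, d

trusting = [4.2, 3.9, 4.0, 3.6, 3.8]     # hypothetical construct scores, "trust" group
distrusting = [2.9, 2.5, 2.7, 2.8, 2.4]  # hypothetical "distrust" group
t, d = independent_t_and_d(trusting, distrusting)
print(round(t, 2), round(d, 2))
```

The resulting t statistic would then be compared against the critical value for the relevant degrees of freedom (55 in the study) at the chosen alpha level.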

6. Discussion

According to Muammar and Alkathiri (2021), “higher education institutions are faced with a constantly evolving set of aims: to meet the needs of students while responding to societal demands and stakeholders’ expectations in a context of continually changing expectations, roles, and outcomes” (p. 1). Despite the challenges that heads of department face in their positions, they have a key role in the success of academic departments as well as the achievement of major higher education objectives (Freeman Jr., Karkouti & Ward, 2020; Reznik & Sazykina, 2017). The present study should make educators in higher education aware of how much heads of department vary in terms of their tendency to trust the results of SEI for making informed decisions.

Although the position of head of department is fundamental to implementing change in higher education institutions, the position is unattractive to many faculty members (Freeman Jr. et al., 2020). The expectations of heads of department can be ambiguous, especially as they receive limited training prior to assuming the position (Freeman Jr. et al., 2020). According to Freeman Jr. et al. (2020), “[c]hairs must balance the dual responsibilities of managing faculty and student affairs who they support and evaluate as they implement the mandates from higher administration. Similarly, they shuttle between their managerial roles and faculty roles while balancing work-life demands” (p. 895). Figure 1 shows the differences among respondents regarding the use of student evaluation of instruction to make decisions about individual faculty members and the academic department for various purposes.

[Figure 1 is a bar chart (y-axis: 0–100% some form of agreement) of responses to “As a head of department, I would use the SEI results to make decisions about ...”, with bars for the individual faculty member’s promotion/tenure awarding, effectiveness of teaching, professional development needs, and student learning experience, and for the academic department’s effectiveness of teaching, professional development needs, and student cohort learning experience.]

Figure 1: Form of agreement (%) by respondents regarding using student evaluation of instruction for various purposes

As seen in Figure 1, respondents tended to use SEI results to make decisions about individual faculty members’ professional development needs and about their contribution to the individual student learning experience. However, respondents were less likely to use SEI results to make decisions regarding the effectiveness of individual faculty members’ teaching or about promoting them or awarding them tenure. According to Smith (2005), “heads of department are overloaded with work, … large departments are difficult to manage and … collegiality is the ‘preferred’ model of decision-making” (p. 463). On the other hand, respondents were more likely to use SEI results to make decisions about their whole academic departments on matters such as effectiveness of teaching, professional development needs, and the student cohort learning experience. Learning about the significant differences amongst heads of department concerning the use of SEI to make decisions will help academic leaders to address the issue. It is hoped that this will result in specific measures that facilitate a better understanding of these different views on SEI, and promote well-informed decisions in higher education.

Many studies have reported that, in general, SEI is a valid indicator of the quality of instruction (Marsh & Roche, 2000; Theall & Franklin, 2001). Many faculty members in higher education have exhibited reasonably positive attitudes toward the validity of SEI and its usefulness for improving instruction (Nasser & Fresko, 2002). However, others have expressed concerns about SEI and its uses (Nasser & Fresko, 2002) because of various factors that may cause biases (Badri et al., 2006; Kelly et al., 2007).

This study set out to investigate the views of heads of academic department concerning SEI, and the effect of their views on the ways in which they tend to use SEI when making decisions about individual faculty members and their overall academic department. The sample of heads of department who tended to trust SEI results showed more agreement with the use of SEI to make decisions, in comparison to those who tended not to trust SEI. The comparisons were statistically significant with respect to two constructs: using SEI results to make decisions about (A) individual faculty members and (B) overall academic department. Therefore, it can be concluded that whether or not heads of department trust SEI results has an effect on their use of such information when making decisions about faculty members under their supervision as well as about the academic department that they chair.

Although there were significant differences between the group who trusted SEI and the one who did not, the mean scores for each group on the use of SEI results to make decisions were lower where those decisions concerned individual faculty members as opposed to whole academic departments. Furthermore, the disparity between the two groups was biggest when it came to decisions that affect individuals. Therefore, it is clear that decisions cannot be made based solely on the results of SEI; rather, multiple sources of evaluation should be utilized. Moreover, when evaluating the effectiveness of a faculty member, the multi-dimensional nature of SEI should be considered. Further research may include qualitative studies on the topic to further our understanding of the use of SEI by heads of department. Additionally, discipline-specific studies within certain academic departments or college clusters are recommended.

7. Conclusion

Fifty-seven heads of department completed an online questionnaire with two main constructs: using SEI results to make decisions about (A) individual faculty members and (B) the overall academic department. The study aimed to investigate the ways in which heads of academic department use SEI to make decisions about individual faculty members and their academic department. The results revealed statistically significant differences between those heads of department who tended to trust the results of SEI and those who tended not to trust the results of SEI. The study concludes that there is a significant association between how heads of department perceive SEI and how they use it to make decisions about their academic department and individual faculty members.

In addition, the disparities within the groups of respondents, according to their attitudes of trust or distrust toward SEI, were greater with respect to issues or decisions that affect individuals as opposed to the whole department. Therefore, decisions should not be made based solely on the results of SEI; rather, multiple sources of evaluation should be utilized to make proper decisions. Based on the findings of the current study, the author strongly suggests that academic leaders use SEI across multiple years or courses in order to obtain more reliable information.

8. References

Alderman, L., Towers, S., & Bannah, S. (2012). Student feedback systems in higher education: A focused literature review and environmental scan. Quality in Higher Education, 18(3), 261-280. https://doi.org/10.1080/13538322.2012.730714
Aleamoni, L. M. (1999). Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education, 13, 152-66. https://doi.org/10.1023/A:1008168421283
Algozzine, B., Gretes, J., Flowers, C., Howley, L., Beattie, J., Spooner, F., Mohanty, G., & Bray, M. (2004). Student evaluation of college teaching: A practice in search of principles. College Teaching, 52(4), 134-41. https://doi.org/10.3200/CTCH.52.4.134-141
Alkathiri, M. S. (2018). Using art-based techniques in faculty training programmes. In T. Chemi & X. Du (Eds.), Palgrave studies in business, arts and humanities: Arts-based methods and organizational learning (pp. 265–290). Palgrave Macmillan. https://doi.org/10.1007/978-3-319-63808-9_12
Alkathiri, M. S. (2020). Teaching and learning experiences of college students in Saudi higher education in the 21st century. The International Journal of Learning in Higher Education, 27(1), 15-30. https://doi.org/10.18848/2327-7955/CGP/v27i01/15-30
Arthur, L. (2009). From performativity to professionalism: Lecturers’ responses to student feedback. Teaching in Higher Education, 14(4), 441-454. https://doi.org/10.1080/13562510903050228
Badri, M., Abdulla, M., Kamali, M., & Dodeen, H. (2006). Identifying potential biasing variables in student evaluation of teaching in a newly accredited business program in the UAE. International Journal of Educational Management, 20, 43-59. https://doi.org/10.1108/09513540610639585
Basow, S. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87, 656-665. https://doi.org/10.1037/0022-0663.87.4.656
Buchert, S., Laws, E. L., Apperson, J. M., & Bregman, N. J. (2008). First impressions and professor reputation: Influence on student evaluations of instruction. Social Psychology of Education, 11(4), 397-408. https://doi.org/10.1007/s11218-008-9055-1

Carle, A. C. (2009). Evaluating college students’ evaluations of a professor’s teaching effectiveness across time and instruction mode (online vs. face-to-face) using a multilevel growth modeling approach. Computers & Education, 53(2), 429-435. https://doi.org/10.1016/j.compedu.2009.03.001 Cashin, W. E. (2003). Evaluating college and university teaching: Reflections of a practitioner. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 531–593). Kluwer Academic. https://doi.org/10.1007/978-94-010-0137-3_10 Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. Jossey-Bass. https://eric.ed.gov/?id=ED363233 Centra, J. A., & Gaubatz, N. B. (2000). Is there gender bias in student evaluations of teaching? Journal of Higher Education, 70(1), 17-33. https://doi.org/10.1080/00221546.2000.11780814 Chamberlin, M. S., & Hickey, J. S. (2001). Student evaluations of faculty performance: The role of gender expectations in differential evaluations. Educational Research Quarterly, 25(2), 3-14. https://search.proquest.com/docview/216185804?pqorigsite=gscholar&fromopenview=true Delvaux, E., Vanhoof, J., Tuytens, M., Vekeman, E., Devos, G., & Van Petegem, P. (2013). How may teacher evaluation have an impact on professional development? A multilevel analysis. Teaching and Teacher Education Journal, 36, 1-11. https://doi.org/10.1016/j.tate.2013.06.011 Emery, C. R., Kramer, T. R., & Tian, R. G. (2003). Return to academic standards: A critique of students’ evaluations of teaching effectiveness. Quality Assurance in Education: An International Perspective, 11(1), 37-47. https://doi.org/10.1108/09684880310462074 Freeman Jr., S., Karkouti, I. M., & Ward, K. (2020). Thriving in the midst of liminality: Perspectives from department chairs in the USA. Higher Education, 80, 895-911. https://doi.org/10.1007/s10734-020-00521-6 Hendry, G. D., & Dean, S. J. (2002). 
Accountability, evaluation of teaching and expertise in higher education. The International Journal for Academic Development, 26(4), 327-414. https://doi.org/10.1080/13601440210156493 Kelly, H. F., Ponton, M. K., & Rovai, A. P. (2007). A comparison of student evaluations of teaching between online and face-to-face courses. Internet and Higher Education, 10, 89-101. https://doi.org/10.1016/j.iheduc.2007.02.001 Landrum, R. E., & Braitman, K. A. (2008). The effect of decreasing response options on students’ evaluation of instruction. College Teaching, 56, 215-217. https://doi.org/10.3200/CTCH.56.4.215-218 Liaw, S., & Goh, K. (2003). Evidence and control of biases in student evaluations of teaching. The International Journal of Educational Management, 17(1), 37-43. https://doi.org/10.1108/09513540310456383 Liu, O. L. (2012). Student evaluation of instruction: In the new paradigm of distance education. Research in Higher Education, 53, 471-486. https://doi.org/10.1007/s11162-011-9236-1 Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education (pp. 319–383). Springer. https://doi.org/10.1007/1-4020-5742-3_9 Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52(11), 1187-1197. https://doi.org/10.1037/0003-066X.52.11.1187 Marsh, H. W., & Roche, L. A. (2000). Effects of grading lenience and low workload on students’ evaluations of teaching: Popular myth, bias, validity, or innocent

bystanders? Journal of Educational Psychology, 92(1), 202-228. https://doi.org/10.1037/0022-0663.92.1.202 Massy, W. F., & French, N. J. (2001). Teaching and learning quality process review: What the programme has achieved in Hong Kong. Quality in Higher Education, 1, 34-45. https://doi.org/10.1080/13538320120045067 McGhee, D. E., & Lowell, N. (2003). Psychometric properties of student ratings of instruction in online and on-campus courses. New Directions for Teaching & Learning, 96, 39-48. https://doi.org/10.1002/tl.121 McKeachie, W. J. (1997). Student ratings: The validity of use. American Psychologist, 52, 1218-1225. https://doi.org/10.1037/0003-066X.52.11.1218 Muammar, O. M., & Alkathiri, M. S. (2021). What really matters to faculty members attending professional development programs in higher education. International Journal for Academic Development. https://doi.org/10.1080/1360144X.2021.1897987 Nasser, F., & Fresko, B. (2002). Faculty views of student evaluation of college teaching. Assessment & Evaluation in Higher Education, 27(2), 187-198. https://doi.org/10.1080/02602930220128751 Obenchain, K. M., Abernathy, T. V., & Wiest, L. R. (2001). The reliability of students’ ratings of faculty teaching effectiveness. College Teaching, 49(3), 100-104. https://doi.org/10.1080/87567550109595859 Perry, R. P., & Smart, J. C. (Eds.). (2007). The scholarship of teaching and learning in higher education: An evidence-based perspective. Springer. https://doi.org/10.1007/1-40205742-3 Radmacher, S. A., & Martin, D. J. (2001). Identifying significant predictors of student evaluations of faculty through hierarchical regression analysis. The Journal of Psychology, 135(3), 259-268. https://doi.org/10.1080/00223980109603696 Reznik, S. D., & Sazykina, O. A. (2017). Head of a university department: Competence and new activity priorities. European Journal of Contemporary Education, 6(1), 126-137. https://doi.org/10.13187/ejced.2017.1.126 Richardson, J. T. E. (2005). 
Instruments for obtaining student feedback: A review of the literature. Assessment and Evaluation in Higher Education, 30(4), 387-415. https://doi.org/10.1080/02602930500099193 Rovai, A. P., Ponton, M. K., Derrick, M. G., & Davis, J. M. (2006). Student evaluation of teaching in the virtual and traditional classrooms: A comparative analysis. Internet and Higher Education, 9, 23-35. https://doi.org/10.1016/j.iheduc.2005.11.002 Scott, G., & Hawke, I. (2003). Using an external quality audit as a lever for institutional change. Assessment and Evaluation in Higher Education, 28(3), 323-332. https://doi.org/10.1080/0260293032000059667 Seldin, P. (1999). Current practices—good and bad—nationally. In P. Seldin & Associates (Eds.), Changing practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions (pp. 1–24). Anker. https://www.wiley.com/ensa/Changing+Practices+in+Evaluating+Teaching:+A+Practical+Guide+to+Impr oved+Faculty+Performance+and+Promotion+Tenure+Decisions-p9781882982288 Shah, M., & Nair, C. S. (2012). The changing nature of teaching and unit evaluations in Australian universities. Quality Assurance in Education, 20(3), 274-288. https://doi.org/10.1108/09684881211240321 Smith, R. (2005). Departmental leadership and management in chartered and statutory universities. Educational Management Administration & Leadership, 33(4), 449-464. https://doi.org/10.1177/1741143205056305

Spooren, P., Mortelmans, D., & Denekens, J. (2007). Student evaluation of teaching quality in higher education: Development of an instrument based on 10 Likert-scales. Assessment & Evaluation in Higher Education Journal, 32(6), 667-679. https://doi.org/10.1080/02602930601117191 Taherdoost, H. (2016). Sampling methods in research methodology: How to choose a sampling technique for research. International Journal of Advance Research in Management, 5(2), 18-27. https://doi.org/10.2139/ssrn.3205035 Theall, M., & Franklin, J. (2001). Looking for bias in all the wrong places: A search for truth or a witch hunt in student ratings of instruction? In M. Theall, P. C. Abrami, & L. A. Mets (Eds.), The student ratings debate: Are they valid? How can we best use them? [Special issue of New Directions for Institutional Research, 2001(109), 45-56. https://doi.org/10.1002/ir.3] Wiley. Tucker, B. M. (2015). The student voice: Using student feedback to inform quality in higher education (Doctoral dissertation). Curtin University, Perth. https://espace.curtin.edu.au/bitstream/handle/20.500.11937/2158/234289_Tuc ker%20B%202015.pdf Tucker, B. (2014). Student evaluation surveys: Anonymous comments that offend or are unprofessional. Higher Education, 68(3), 347-358. https://doi.org/10.1007/s10734014-9716-2 Wachtel, H. K. (1998). Student evaluation of college teaching effectiveness: A brief review. Assessment & Evaluation in Higher Education, 23(2), 191-212. https://doi.org/10.1080/0260293980230207 Waschull, S. B. (2001). The online delivery of psychology courses: Attrition, performance, and evaluation. Computers in Teaching, 28, 143-147. https://doi.org/10.1207/S15328023TOP2802_15 Whitworth, J., Price, B., & Randall, C. (2002). Factors that affect college of business student opinion of teaching and learning. Journal of Education for Business, 77, 282-289. https://doi.org/10.1080/08832320209599677 Zhao, J., & Gallant, D. J. (2012). 
Student evaluation of instruction in higher education: Exploring issues of validity and reliability. Assessment and Evaluation in Higher Education, 37, 227-235. https://doi.org/10.1080/02602938.2010.523819 Zineldin, M., Akdag, H. C., & Vasicheva, V. (2011). Assessing quality in higher education: New criteria for evaluating students’ satisfaction. Quality in Higher Education Journal, 17(2), 231-243. https://doi.org/10.1080/13538322.2011.582796

Appendix 1

An online questionnaire for heads of academic department regarding whether they use student evaluation of instruction to make decisions about individual faculty members and their academic departments. I am … (1) Male (2) Female

I am a head of department in the following college cluster … (1) Health (2) Engineering (3) Sciences and Management (4) Arts and Education

1. Overall, I tend to trust student evaluation of instruction results to make decisions in my job … (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

2. As a head of department, I would use the student evaluation of instruction results to make decisions about the … 2.1 Individual faculty members’ promotion/awarding of tenure (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

2.2 Individual faculty members’ effectiveness of teaching (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

2.3 Individual faculty members’ professional development needs (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

2.4 Individual student learning experience (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

3. As a head of department, I would use the student evaluation of instruction results to make decisions about the …

3.1 Academic department’s effectiveness of teaching (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

3.2 Academic department’s professional development needs (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

3.3 Student cohort learning experience (1) Strongly disagree (2) Disagree (3) Slightly disagree (4) Slightly agree (5) Agree (6) Strongly agree

Thank you for your input.

International Journal of Learning, Teaching and Educational Research Vol. 20, No. 2, pp. 251-269, February 2021 https://doi.org/10.26803/ijlter.20.2.14

Fostering Media Literacy Skills in the EFL Virtual Classroom: A Case Study in the COVID-19 Lockdown Period

Marina Bilotserkovets, Tatiana Fomenko, Oksana Gubina, Tetiana Klochkova and Oksana Lytvynko Sumy National Agrarian University, Sumy, Ukraine https://orcid.org/0000-0003-4692-3444 https://orcid.org/0000-0002-3048-7097 https://orcid.org/0000-0002-3575-5898 https://orcid.org/0000-0002-1173-6211 https://orcid.org/0000-0002-2241-3776

Maryna Boichenko Sumy State Pedagogical University, Sumy, Ukraine https://orcid.org/0000-0002-0543-8832

Olena Lazareva Kharkiv National Pedagogical University, Kharkiv, Ukraine https://orcid.org/0000-0003-4385-0139

Abstract. This investigation highlights the ways and means of students’ formation of media literacy skills under the conditions of total and emergent distance learning in the lockdown period of the COVID-19 pandemic. The case study involved 138 first-year students from Sumy National Agrarian University, Ukraine, who studied English as a foreign language (EFL). Analysis, synthesis, and generalization of scientific data were conducted to determine the requirements and materials for the survey. Media literacy of the participants in the experimental group was developed through the performance of a series of social media projects, critical analysis of social media texts, and creation of social media content. Pedagogical observation and expert estimation were employed to obtain qualitative results of participants’ progress during practical classes and extracurricular activities. Psychological techniques and mathematical methods were employed to measure and assess the quantitative data of the experiment. The outcomes of the study revealed the positive dynamics of the development of reflective-evaluative, collaborative, and searching-creative skills of participants in the experimental group as well as improvement in their English proficiency. The results of this study are potentially useful for educators who are interested in the application of media technologies in foreign-language teaching.

©Authors This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

Keywords: distance learning; media literacy skills; pandemic period; social media; teaching EFL

1. Introduction

Due to the spread of the COVID-19 pandemic, the majority of educational institutions worldwide shifted abruptly to distance learning in a virtual academic environment. Most students had to study remotely from their homes through the internet (Assalahi, 2020; Goh & Sandars, 2020). In the period of total lockdown, students and academics had to use media, especially social media, to support the learning process, which was facilitated through various digital means (Zoom, Google Meet, etc.) and educational platforms (Moodle, Google Classroom, etc.). They applied and presented the results of their scientific work through social media groups (Facebook, Instagram, Twitter, etc.), while online libraries and torrent sites were often the only sources of knowledge and information for learners (Marinoni et al., 2020; Ogbonnaya et al., 2020).

In terms of learning English as a foreign language (EFL), media sources are of the utmost importance, as they provide students with a variety of information about linguistic discourses, language structures, pronunciation, and grammatical patterns. In addition, they offer guidance on social interactions and cultural values in particular language communities. However, the spread of digital content following COVID-19 exposed students to a vast amount of information through media, often false or inaccurate (Marinoni et al., 2020). More than ever before, students are in crucial need of the media literacy skills that would allow them to identify the nature of the information surrounding them; determine whether it is useful and credible; protect themselves from misinformation; and take control over what they read, listen to, or watch in the media.

The aim of the paper was to explore how students’ media literacy skills were developed for the efficient application of social media resources in the EFL virtual classroom during the lockdown period of the COVID-19 pandemic. The objectives of the paper included revising recent scientific research to prepare the materials and requirements for the case study; implementing social media activities in the process of EFL training of the experimental group of student participants; and determining whether social media technologies have the potential to improve the EFL proficiency of students.

2. Literature review

Media now occupy a stable position in education. Ytreberg (2002) investigated the role of media technologies in constructing an educational environment involving interactive learning and student-created content. The author pointed out that media in education support both cognitive and affective practices: media provoke discussion, self-esteem, and assessment of values, because media content mostly has a strong emotional impact (Ytreberg, 2002). Furthermore, studies have shown that people learn abstract ideas and new concepts more easily when these are presented in both verbal and visual form (Schmidt, 2012). Another essential function of media, and in particular social media, is the significant transformation of teacher-student interactions and innovative support of cooperation between students and academia (Chen & Bryer, 2012).

When social media sources are employed in foreign-language learning, students become co-authors of learning information content and partners and co-producers of the educational process. They thereby contribute to the establishment of a new kind of student-centered approach, “an approach that puts the student’s ability to communicate and produce content in the focus of the educational activities” (Pfeffer, 2014, p. 93). Moreover, as a means of communication, social media include teaching technologies that can greatly contribute to the creation of an authentic linguistic environment for foreign-language learners (He, 2019). When forming communicative competence through social media, students are presented with scenarios in which they can produce their own information content from media models, applying and imitating the foreign-language patterns and intonations that most closely resemble real communicative practices and situations (Mc Dermott, 2013).

However, if educators and students want to benefit fully from media implemented in their learning process, they need to develop their analytical and critical abilities and become media literate. Modern scholars have defined media literacy as a combination of the knowledge and skills necessary for people to orient themselves in an information-based environment, search for and share information, interact with other people and computer software, create safe and reliable media content, filter media content, and solve cognitive tasks (Andriushchenko et al., 2020; Hatlevik et al., 2015). Analysis of recent scientific sources has shown that media literacy deals with the leveling of artificially created information structures and an understanding of the principles of their creation. In addition, it deals with the ability of people to interpret the meaning of media messages based on personal experience and such individual characteristics as personal requests and expectations, formed national and gender ideas, social and cultural backgrounds, etc. Media literate citizens are aware of different opinions and form their own positions on current issues (Dvorghets & Shaturnaya, 2015; Pfeffer, 2014).

Being cross-disciplinary by nature (Schmidt, 2012), media literacy has been investigated from different points of view, but pedagogic research exploring the integration of media literacy and foreign-language learning is scant. A critical media literacy approach included in the foreign-language curriculum has been pursued by academia worldwide to develop learners’ critical reading of media content, thereby enabling their perception of media messages as a distorted version of reality. Moreover, this approach has encouraged and motivated students’ progress in foreign-language learning by involving their culture and interests in the learning process. Instructors intend to assist learners in discovering ties between language and the social, cultural, and political spheres of its application, as well as materials and topics that have analogies in their native-language culture, by exploring their own identities and divergences in comparison to other peoples (Reagan & Osborn, 2002).

A critical intercultural approach to teaching EFL has been implemented to develop the intercultural communication skills of students, and their critical thinking, social, and reflexive skills (Gómez Jiménez & Gutiérrez, 2019). The National Council of Teachers of English (NCTE) formulated the core components for media literacy necessary in the process of teaching EFL. These are the abilities to navigate, process, and synthesize multiple sources of information; compile, analyze, and critically evaluate multimedia texts; create and spread messages for virtual communities worldwide; start fruitful and respectful cross-cultural relationships with others by adopting the ethical responsibilities crucial for complex environments; and improve proficiency and skillfulness with digital technologies (ACTFL, n.d.).

Exploring the scientific works on media literacy implementation, we intended to identify the skills necessary to compose students’ professionally oriented communicative competence in EFL. An overview of the latest studies (Baglari et al., 2020; Bal & Bicen, 2017) revealed the following three types of skills that are crucial for teaching EFL using a media literacy approach:
• skills to search for and choose important information from multiple diversified internet sources and to assess its status, reliability, and acceptance;
• skills to construct and present professionally oriented content in various media formats and genres, taking into consideration the target audience and the ways media algorithms influence the application of tools; and
• skills to collaborate and benefit from participation in different digitally mediated research organizations, social networks, and virtual project groups for gaining professional knowledge and intercultural experience.

3. Methodology

3.1 The goal of the investigation

This case study was conducted to answer the following questions:
1) What are the most commonly used social media amongst students, and what are they utilized for?
2) What do educators need to do to develop students’ critical media literacy skills through learning English?
3) In what way does the critical media literacy approach affect the improvement of students’ proficiency in EFL?

3.2 Participants

A sample of 138 students was determined for the survey; participation was voluntary. The experimental group included 70 first-year students, and the control group consisted of 68 first-year students. The groups were selected in such a way that the controlled parameters did not differ significantly. The features of the study participants are presented in Table 1.

Table 1: Participant features

1. Age: 17–19 years old (both groups)
2. Gender: experimental group 46% female, 54% male; control group 45% female, 55% male
3. English language knowledge level: B1 (according to the international certification levels) in both groups
4. Number of English classes per week: 2 (both groups)
5. Future specialties: Agronomy, Food Technologies, Livestock Technologies, Management, Geodesy (both groups)
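The claim that the controlled parameters did not differ significantly between groups can be illustrated with a two-proportion z-test on gender composition. This is our sketch, not a procedure the authors describe: the female counts are approximated from the reported percentages, and the test choice is an assumption.

```python
import math

n1, n2 = 70, 68                 # experimental and control group sizes
x1 = round(0.46 * n1)           # approx. female students, experimental (assumption)
x2 = round(0.45 * n2)           # approx. female students, control (assumption)

p1, p2 = x1 / n1, x2 / n2       # sample proportions of female students
p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# |z| is far below the 1.96 critical value at alpha = 0.05,
# so the gender split does not differ significantly between the groups
print(f"z = {z:.3f}")
```

The same logic applies to the other controlled parameters (age range, English level, class hours), which are identical across the two groups by design.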

3.3 Limitations

The major methodological limitation of the research was that the outcome depended highly on the context and case, along with the application of mostly exploratory and descriptive methods. The study was carried out at Sumy National Agrarian University (SNAU), Ukraine. Its findings are thus limited to the sample of participants and the institution where the research took place.

The research was conducted from March to July 2020. The time interval was limited to this term because it involved exclusive and emergent distance learning, with students and academics working under conditions of total lockdown caused by the COVID-19 pandemic. During this period, EFL classes were delivered utilizing digital and social media facilities such as Zoom, Moodle, Google Classroom, Facebook, and Instagram.

The obtained data were verified by cross-checking and member-checking. The reliability factor for the questionnaire and diagnostic techniques used in the survey produced values from 0.85 to 0.91 (at p < 0.01), indicating the high reliability of the questionnaire items. The sample, consisting of 138 participants, determined the margin of error (at the 90% confidence level) for this research at about 7%.
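The reported margin of error of about 7% is consistent with the standard formula for a proportion at the most conservative p = 0.5. A quick check is sketched below; the z-value for the 90% confidence level (1.645) and the choice of p = 0.5 are our assumptions, not values stated by the authors.

```python
import math

n = 138    # sample size reported in the study
z = 1.645  # z-score for a 90% confidence level (assumption)
p = 0.5    # most conservative proportion (assumption)

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error = {moe:.1%}")  # about 7%
```

A larger sample or a lower confidence level would shrink this figure, which is why the authors note the sample size as a limitation.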

The authors did not attempt to quantify the improvement in linguistic knowledge and skills, regarding it as a prospect for future study.

3.4 Methods and pedagogical conditions

A set of theoretical methods was used to prepare the requirements and materials for the survey. This included analysis, synthesis, explanation, and generalization of the scientific data. Pedagogical observation of the participants and the expert estimation method were employed to obtain qualitative results. The survey was administered by lecturers and tutors who were watching over the participants’ progress during practical classes and extracurricular activities. Psychological techniques and mathematical methods were applied to measure and assess the results of the study.

The research work on implementing media literacy tools into the course of learning EFL at SNAU was carried out according to the following pedagogical conditions:
1) creating a relaxed atmosphere, mutual trust, and psychological comfort in teacher-student and student-student interactions;
2) appropriately using the polysemous nature of information and the principle of improvisation;
3) encouraging different interpretations of information and recognizing the equality of all participants’ viewpoints concerning the information; and
4) focusing on close ties with the socio-cultural environment, interests, and life experience of the participants.

3.5 Organization of the experimental work

To answer the research questions, experimental work was conducted. The participants were divided into an experimental and a control group. During the experiment, both groups studied at a distance through online learning platforms. The control group used their usual textbooks, grammar books, etc. instead of social media materials, thus using no media literacy tools during their EFL classes.

Teaching EFL to the experimental group was based on authentic social media materials. Participants were asked to identify the multifunctional nature of these materials and the specifics of their presentation to different public groups, and to research the role of social media networks in the academic environment and professional socialization, as well as their educational potential.

At the beginning of each lesson, a specific survey was administered which aimed at identifying the range of student tastes, the personalized features of the academic groups, and student expectations for the lesson. Several tests were also conducted to explore whether the participants were ready to work with the media; how well they knew the modern world of media (social and mobile networks, mass media, popular sites, TV and radio programs, specialized publications, etc.); whether they possessed the skills to assess media events; whether they were able to defend their own opinions; their propensity to fall under the influence of ‘authorities’, etc. (see Appendix 1). Media literacy tools were implemented with different creative tasks that contributed to the acquisition of knowledge about the methods of perception and analysis of social media texts, the application of this knowledge in various professionally oriented situations, and the development of experience in creating competent and correct social media texts.

In the course of the experimental work, social media posts, TV and radio programs, and materials from various internet sites, on the same topic but from different sources, were discussed and compared in an attempt to detect their specific features. Such discussions were built on the principle of antithesis and paradox, organized as a dialogue, conversation, or dispute, which allowed the identification of the different points of view that existed amongst the participants on the same issue. The participants’ critical analysis abilities were elicited by employing the following key questions, formulated for the study on the basis of recommendations by Mendoza (2018) and other experts in digital and media literacy:
1) Who is the author of the article (post) under review? This point enabled participants to realize that all media content is created by a diversity of authors, each of them having a unique discourse, agenda, vision, and background.
2) What technologies are applied to draw the attention of the audience? This question helped participants to systematize their knowledge of the various tools and means used by digital media to keep the audience interested in videos, commercials, or apps.
3) What multiple interpretations of one particular message are possible? Here, the participants could reflect on the point that different communities can perceive a specific message in different ways.
4) Whose discourses, traditions, lifestyles, positions, and values are expressed or absent? Sometimes not all perspectives and voices are represented, especially those of certain strata of society. It was essential to find out what views were missing and why.
5) For what reason is this message sent? This point was brought up so participating students could try to understand the authors’ motives in creating and sharing messages and how they benefited from them.

Social media facilities, such as online platforms, forums, and chats, were used as educational tools in various learning activities and projects of the participants. Participants were required to set out the solution of the problem, express their own opinion, or argue their position on a specific aspect of the problem. The participants discussed multifaceted aspects of professional activities in the information society and developed the insights necessary to critically assess the messages embodied in the social media content. In addition, they practiced the application of strategies for analyzing, reflecting, critiquing, and interpreting social media content that consisted of both visual and textual elements (in particular professionally oriented social networks).

Some participants did not feel confident enough to report in English in front of others. Therefore, the participant groups were divided into smaller subgroups, each with its own chat room, where they discussed matters related to the project. The activities of the subgroups were organized into two parts. First, each subgroup worked on gathering information on a particular topic, whereafter its members reported their findings to the whole subgroup. Second, the collective preparation of a creative task took place as a result of mastering certain learning material. Consequently, all subgroups presented their projects to the whole group, and the participants from the other subgroups wrote comments and questions based on this. The activity thus turned into a written dialogue, in which participants who were otherwise not so talkative became engaged and took part in the discussion. The participants created multimedia presentations, presented them for discussion as part of the coursework, and posted them on social networks. Projects on the following topics were presented: “Academic environment socializing: Online versus offline”, “Security and balance in social network maintenance”, “Fake news: Who cares and who profits?”, “Gender stereotypes presented in media”, etc.

During the period of the study, three online trainings were held on the following topics: “Professional communities in a social media environment”, “Successful fundraising and project development by means of social media platforms”, and “Media literacy skills of a student as a condition of his/her successful personal and professional self-realization”.

As an interactive form of education, the trainings helped the participants to achieve the following: 1) becoming aware of interaction through social media in the professional sphere; 2) mastering skills to critically analyze information and messages received from the media; 3) developing self-awareness and self-determination of their social positions; and 4) forming skills to critically evaluate information reproduced by social media.

4. Results

It was found that most participants had more than one social media account. The outcomes of this aspect are presented in Figure 1.

[Figure 1: bar chart of the percentage of participants using each network: YouTube, Instagram, Telegram, Facebook, Blogs, Twitter, TikTok.]

Figure 1. The most commonly used social media networks amongst participants

The most commonly used social media networks amongst the participants were YouTube (81%), Instagram (73%), Telegram (38%), and Facebook (27%). Fewer participants reported that they filmed video blogs or read or wrote blogs (25%), or used Twitter (18%) or TikTok (11%). The participants used these social media networks for educational purposes (to share organizational information, learning materials, and assignments, or to discuss university-related issues in chats or comments), as well as for socialization, small talk, and entertainment, such as posting humorous photographs or video clips.

Since media literacy is perceived as a dynamic phenomenon reflecting the ability of individuals to effectively use the potential of media environments for personal and professional purposes, to consciously perceive and critically evaluate information from media sources, and to effectively apply media technology, three components of student media literacy were identified: reflective-evaluative, collaborative, and searching-creative. Each indicator was assessed at three levels – high, medium, and initial – consistent with the tradition of pedagogical research. The corresponding results are given in Table 2.

Table 2: The dynamics of participants’ critical media literacy formation (% of participants; EG = experimental group, CG = control group)

Component               Stage          High (EG / CG)   Medium (EG / CG)   Initial (EG / CG)
Reflective-evaluative   Before study   18.1 / 17.3      30.2 / 34.0        51.6 / 48.6
                        After study    30.7 / 19.3      48.4 / 43.1        20.8 / 37.5
Collaborative           Before study   22.6 / 18.7      39.1 / 37.9        38.2 / 43.3
                        After study    37.2 / 18.4      56.1 / 38.9        6.6 / 42.6
Searching-creative      Before study   18.7 / 11.4      42.8 / 42.1        38.4 / 46.4
                        After study    36.3 / 12.3      53.1 / 43.0        10.5 / 44.6

The dynamics of the formation of participants’ reflective-evaluative media literacy component was measured by means of an instrument by Karpov and Ponomareva (2000), adapted for this study. It expressed the degree of formation of emotional states and attitudes. The instrument assigned points as follows: a) proficiently formed reflective-evaluative skills – 3 points; b) satisfactorily formed reflective-evaluative skills – 2 points; c) superficially formed reflective-evaluative skills – 1 point; and d) a lack of reflective-evaluative skills – 0 points. The changes in the levels of reflective-evaluative skills are presented in Figures 2a and 2b.
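The paper does not state how the 0–3 instrument scores were mapped onto the high, medium, and initial levels. A minimal sketch of one plausible mapping, and of how the per-level percentage distribution in Table 2 could then be computed, might look as follows (the cutoffs are illustrative assumptions, not taken from the study):

```python
from collections import Counter

def level_for(score: int) -> str:
    """Map an instrument score (0-3) to a literacy level.

    The cutoffs are illustrative assumptions; the paper does not state
    how the 0-3 point scale maps onto its three levels.
    """
    if score >= 3:
        return "high"
    if score == 2:
        return "medium"
    return "initial"  # scores of 0 or 1

def level_distribution(scores: list[int]) -> dict[str, float]:
    """Percentage of participants at each level, to one decimal place."""
    counts = Counter(level_for(s) for s in scores)
    n = len(scores)
    return {lvl: round(100 * counts.get(lvl, 0) / n, 1)
            for lvl in ("high", "medium", "initial")}

# Hypothetical post-study scores for a ten-person subgroup
print(level_distribution([3, 3, 3, 2, 2, 2, 2, 2, 1, 0]))
# -> {'high': 30.0, 'medium': 50.0, 'initial': 20.0}
```

Applying such a mapping separately to each component (reflective-evaluative, collaborative, searching-creative), before and after the study, yields rows of the kind reported in Table 2.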

[Figure 2: paired bar charts of the percentage of students at the high, medium, and initial levels of reflective-evaluative skills, before and after the study.]

Figure 2. Changes in the levels of reflective-evaluative skills of participants for the a) experimental group and b) control group

The collaborative component, which measured the participants’ skills to collaborate and benefit from participation in different digitally mediated entities, was determined using the authors’ instrument for the self-assessment of collaborative skills (see Appendix 2). The instrument assigned points as follows: a) proficiently formed collaborative skills – 3 points; b) satisfactorily formed collaborative skills – 2 points; c) superficially formed collaborative skills – 1 point; and d) a lack of collaborative skills – 0 points. The changes in the levels of collaborative skills are presented in Figures 3a and 3b.

[Figure 3: paired bar charts of the percentage of students at the high, medium, and initial levels of collaborative skills, before and after the study.]

Figure 3. Changes in the levels of collaborative skills of participants for the a) experimental group and b) control group

The searching-creative component, which expresses a set of explorative searching-creative skills, was determined through a modified instrument of ‘Media research in professional activity’ (Kuzmina, 2011). The instrument assigned points as follows: a) proficiently formed searching-creative skills – 3 points; b) satisfactorily formed searching-creative skills – 2 points; c) superficially formed searching-creative skills – 1 point; and d) a lack of searching-creative skills – 0 points. The changes in the levels of searching-creative skills are presented in Figures 4a and 4b.

[Figure 4: paired bar charts of the percentage of students at the high, medium, and initial levels of searching-creative skills, before and after the study.]

Figure 4. Changes in the levels of searching-creative skills of participants for the a) experimental group and b) control group

According to Table 2 and Figures 2a, 2b, 3a, 3b, 4a, and 4b, positive dynamics of media literacy formation could be traced among the participants of the experimental group, who were taught EFL on the basis of social media materials through the implementation of social media tools. For the participants of the control group, these indicators remained almost unchanged. The generalization of the research outcomes thus supports the effectiveness of implementing social media tools and means in the process of the participants’ EFL learning.
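The experimental-versus-control contrast described above can be made explicit by computing, from the Table 2 figures, the percentage-point change in the share of students at the high level. A minimal sketch (the data are taken directly from Table 2; the dictionary layout is merely a convenience):

```python
# Table 2 data: share (%) of participants at the "high" level, before
# and after the study, for the experimental (EG) and control (CG) groups.
HIGH_LEVEL = {
    "reflective-evaluative": {"EG": (18.1, 30.7), "CG": (17.3, 19.3)},
    "collaborative":         {"EG": (22.6, 37.2), "CG": (18.7, 18.4)},
    "searching-creative":    {"EG": (18.7, 36.3), "CG": (11.4, 12.3)},
}

def gains(table):
    """Percentage-point change (after - before) per component and group."""
    return {comp: {grp: round(after - before, 1)
                   for grp, (before, after) in groups.items()}
            for comp, groups in table.items()}

for comp, delta in gains(HIGH_LEVEL).items():
    print(f"{comp}: EG {delta['EG']:+.1f} pp, CG {delta['CG']:+.1f} pp")
```

The gains of the experimental group (12.6, 14.6, and 17.6 percentage points across the three components) dwarf those of the control group (2.0, −0.3, and 0.9), which is the pattern summarized verbally above.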

The next stage of the study investigated the way in which mastering media literacy tools and means affected participants’ proficiency in EFL. The participants were administered a questionnaire to determine the priority factors that had contributed to the improvement in their EFL proficiency in the process of practicing social media activities (see Appendix 3). The data from the processed questionnaires are presented in Table 3.

Table 3: The priority of factors that influenced the participants’ improvement in English proficiency by means of social media implementation

N   Factor that influenced participants’ improvement in English proficiency   Priority for the participants (%)
1   Increasing motivation                                                     92
2   Communication skills practice                                             82
3   Authentic learning materials from social media                            76
4   ICT application in the process of learning                                73
5   Interactive teaching techniques                                           58

Most participants (92%) identified increased motivation for studying EFL as a major factor influencing their improvement in English proficiency. They highly appreciated social media projects and searching activities, singling out the abundance of information on English-language sites necessary to prepare them for fully fledged professional activities. They also highlighted the importance of being able to critically analyze the available media resources and of being capable of joining and collaborating with international professional virtual communities through social media. Communication skills practice was also identified as a prime concern by many participants (82%), who were positively disposed toward participation in Moodle and other social media forums. By communicating through social media, they could learn from each other by posting comments on other students’ forums and presenting blogs to their groupmates. Seventy-six percent (76%) of participants noted that social media offered them the opportunity to learn different speech patterns and discourse information directly from native speakers. Almost the same number of participants (73%) stressed the possibility of learning EFL with the help of ICT, various gadgets, and software. Teaching by means of social media is interactive by nature, and a considerable number of participants (58%) regarded this factor as crucial for the improvement of their EFL knowledge and skills.
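The percentages in Table 3 are shares of respondents who marked each factor on the Appendix 3 questionnaire. A minimal sketch of how such a tally could be computed is given below; the factor names mirror Table 3, but the raw responses here are hypothetical:

```python
def factor_priorities(responses: list[set[str]]) -> dict[str, int]:
    """Share of respondents (whole-number percentage) who ticked each
    factor, sorted from most to least frequently chosen.

    `responses` holds one set of ticked factors per respondent.
    """
    n = len(responses)
    factors = ["Increasing motivation", "Communication skills practice",
               "Authentic learning materials from social media",
               "ICT application in the process of learning",
               "Interactive teaching techniques"]
    shares = {f: round(100 * sum(f in r for r in responses) / n)
              for f in factors}
    return dict(sorted(shares.items(), key=lambda kv: kv[1], reverse=True))

# Four hypothetical respondents (not the study's data)
sample = [
    {"Increasing motivation", "Communication skills practice"},
    {"Increasing motivation", "ICT application in the process of learning"},
    {"Increasing motivation", "Communication skills practice",
     "Interactive teaching techniques"},
    {"Authentic learning materials from social media",
     "Increasing motivation"},
]
print(factor_priorities(sample))
```

Run over the study's 57-odd questionnaires instead of the four hypothetical ones, this tally would reproduce the 92/82/76/73/58 ordering reported in Table 3.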

In conclusion, the improvement of participants’ EFL proficiency was greatly affected by the development of their reflective-evaluative, collaborative, and searching-creative skills. These skills served to motivate and enable them to utilize social media potential for achieving professional and personal progress.

5. Discussion

The global academic community faced many challenges brought on by the COVID-19 pandemic. On the one hand, it was a test of students’ preparedness to learn independently through the internet and to cope with learning tasks and problems within virtual communities. On the other hand, it tested teachers’ and educational institutions’ awareness of and readiness for exclusive, emergency distance education and their ability to foster the student skills essential for training and functioning in the virtual media environment.

As representatives of Generation Z, or centennials (Bubich et al., 2016), modern students are accustomed to information accessibility and capable of quickly processing considerable amounts of information and of multi-tasking. They spend ample time on media and social networks for pleasure and learning (Brocca, 2020). The latest research investigating the academic process during the COVID-19 pandemic has shown that students can often act as passive receivers of knowledge, preferring to accept and internalize any presented information (Assalahi, 2020). In one study, students interviewed during the course of their studies reported that they had to learn without the permanent guidance of lecturers, searching for information and doing assignments on their own. Moreover, they regarded their studies as ineffective without discussing learning material with a teacher and groupmates (Rahiem, 2020).

To deal with the problems of the pandemic, educators encouraged learners to take advantage of a wide range of virtual learning facilities. They utilized various social media sites, amongst them Facebook, Instagram, Telegram, WhatsApp, and WeChat, to conduct synchronous or asynchronous classroom discussion, facilitate peer explanations using either voice notes or texts, and deliver assignments and quizzes. Lecturers were obliged to provide students with access to online materials, resources, links to digital libraries, repositories such as Open Educational Resources, internet streaming, torrents, or broadcasts. Academics contacted students and delivered lessons through educational platforms such as Zoom, Google Hangouts, learning management systems (LMSs), etc. (Lapada et al., 2020; Rahiem, 2020). The circumstances of the pandemic led m-learning, which involves the utilization of mobile and portable IT devices, to become widespread. Its significant features include a focus on learning mobility and an emphasis on the progressive adjustment of the mobile population by social institutions. Furthermore, the smartphone has been identified as the most popular gadget amongst young people (Lapada et al., 2020).

Such vast expansion of media, and social media in particular, into the domain of education stimulated “a social process in all educational contexts” and empowered “building effective learning communities, where collective knowledge is created and advanced while supporting the growth of individual knowledge” (Lahiri & Moseley, 2015, p. 73). Therefore, it is essential for students to develop core media literacy skills. Amongst these should be the ability to pool knowledge from the abundance of information in the virtual environment and the ability to assess the trustworthiness and authenticity of information sources. In addition, students should possess the ability to analyze and synthesize the information found and to participate in various online communities, regarding and respecting diverse points of view and grasping and following alternative norms (Lantz-Andersson, 2016).

Forming the media literacy skills of EFL students by implementing social media resources into EFL courses is a stimulating and demanding task for the language teacher, who seeks to master interdisciplinary knowledge acquisition whilst inspiring students to master it alongside them (Westman, 2019). Media content is used in the EFL classroom to connect theoretical knowledge with real-life events and policies. However, to be aware of the true sense of these phenomena, students need to be media literate. Media serve the following purposes in an EFL setting:

1) To develop different linguistic competences. Recent surveys have shown that students’ receptiveness toward the use of multiple social media tools for foreign-language learning enhances their experience of its application (Bilotserkovets & Gubina, 2019; Mc Dermott, 2013). In addition, it provides students with the training necessary to take computer-based tests of English, such as PTE Academic and TOEFL, to participate in telecollaboration in the field of language learning, and to join online academic exchange programs around the world (Kobzhev et al., 2020). Lastly, it enables successful cross-cultural communication between students (Dvorghets & Shaturnaya, 2015).

2) To focus on the values and tendencies of the foreign-language society. Gómez Jiménez and Gutiérrez (2019) noted that the implementation of social media tools within an educational context contributes to the creation of an advantageous academic environment, turning EFL classrooms into peculiar spaces where “different linguistic and cultural worlds come into contact” (p. 94). Through this, students’ linguistic and intercultural competences are promoted together with their media literacy skills.

3) To provoke the discussion of ideas reflected in media content. In the process of incorporating media literacy tools into the EFL training process, students develop their critical thinking skills by studying the influence of subjective factors on the objectivity of media content, taking an interest in different sources of media information, and comparing diverse opinions.

It is important for every ordinary citizen, especially a young one, to be able to critically comprehend media information and to identify opportunities for the positive or negative impact of information disseminated by different types of media, so as to guard against its possible manipulative impact (Van Den Beemt et al., 2020).

The outcomes of our study revealed that educators implementing media literacy tools should encourage students to participate in dialogue, teaching them not to be afraid to express and argue their opinions, in order to form their critical thinking skills. This is essential because the media environment is both a field for producing meanings for the mass consciousness and a means of manipulating it (Ytreberg, 2002). Media literate students are supposed to be able to critically and consciously evaluate media messages. They need to be taught to ‘read between the lines’, identifying the language of manipulation and persuasion and differentiating between the text and the subtext (Krakić et al., 2014). Besides, by “ignoring what students do with a foreign language outside the classroom or refusing to engage with students in social media, teachers will never truly understand their needs and never fully realize the potential of social media as a language learning tool” (Tantarangsee et al., 2017, p. 476).

6. Conclusion

The findings of the study revealed improvement in the experimental group’s media literacy skills and English-language proficiency compared to those of the control group. Although restricted to mostly digital and virtual facilities because of the lockdown during the COVID-19 pandemic, students and academics were able to express themselves and communicate through social media, search for information, and consciously perceive and critically interpret information obtained from different media. In addition, they were able to separate reality from its virtual imitation (i.e., understand the reality constructed by media sources), comprehend the manipulative tools media create, and successfully use media technologies to solve professional problems. Concurrently, positive dynamics of EFL-proficiency improvement were observed, owing to the growing motivation to apply English in real communication practice, authentic English learning materials, and the ICT and interactive teaching technologies built into the academic process with the help of social media. Beyond this study, however, additional experimentation is needed to quantify and evaluate the progress in written and spoken English of students involved in social media and in internet and web content evaluation activities via the use of critical questioning on the part of the students.

7. References

ACTFL. (n.d.). Literacy in language learning. https://www.actfl.org/guidingprinciples/literacy-language-learning

Andriushchenko, K., Rozhko, O., & Tepliuk, M. (2020). Digital literacy development trends in the professional environment. International Journal of Learning, Teaching and Educational Research, 19(7), 55-79. https://doi.org/10.26803/ijlter.19.7.4

Assalahi, H. (2020). Learning EFL online during a pandemic: Insights into the quality of emergency online education. International Journal of Learning, Teaching and Educational Research, 19(11), 203-222. https://doi.org/10.26803/ijlter.19.11.12

Baglari, H., Sharma, M. K., Marimuthu, P., & Suma, N. (2020). Pattern of social media use among youth: Implication for social media literacy. Mental Health Addiction Research, 5, 1-5. https://doi.org/10.15761/MHAR.1000189

Bal, E., & Bicen, H. (2017). The purpose of students’ social media use and determining their perspectives on education. Procedia Computer Science, 120, 177-181. https://doi.org/10.1016/j.procs.2017.11.226

Bilotserkovets, M., & Gubina, O. (2019). Target language teaching by means of e-learning: A case study. Revista Romaneasca pentru Educatie Multidimensionala, 11(4), 17-29. https://doi.org/10.18662/rrem/154

Brocca, N. (2020). Soziale Medien in Bildung und Fremdsprachendidaktik: Einleitung [Social media in education and foreign language teaching: An introduction]. heiEDUCATION Journal, 5, 9-23. https://doi.org/10.17885/heiup.heied.2020.5.24155

Bubich, O. A., Gilevich, E. G., Lushchinskoi, O. V., & Savich, E. V. (2016). Clip thinking and organization of the pedagogical process at the university. In Modeling of effective speech communication in the context of academic and professionally oriented interaction (pp. 65-71). BSU. https://elib.bsu.by/bitstream/123456789/159350/1/6571.pdf

Chen, B., & Bryer, T. (2012). Investigating instructional strategies for using social media in formal and informal learning. The International Review of Research in Open and Distributed Learning, 13(1), 87-104. https://doi.org/10.19173/irrodl.v13i1.1027

Dvorghets, O. S., & Shaturnaya, Y. A. (2015). Developing students’ media literacy in the English language teaching context. Procedia – Social and Behavioral Sciences, 200, 192-198.

Goh, P. S., & Sandars, J. (2020). A vision of the use of technology in medical education after the COVID-19 pandemic. MedEdPublish, 9(1). https://doi.org/10.15694/mep.2020.000049.1

Gómez Jiménez, M. C., & Gutiérrez, C. P. (2019). Engaging English as a foreign language students in critical literacy practices: The case of a teacher at a private university. Profile: Issues in Teachers’ Professional Development, 21(1), 91-105. https://doi.org/10.15446/profile.v21n1.71378

Hatlevik, O. E., Ottestad, G., & Throndsen, I. (2015). Predictors of digital competence in 7th grade: A multilevel analysis. Journal of Computer Assisted Learning, 31(3), 220-231. https://doi.org/10.1111/jcal.12065

He, H. (2019). Media literacy education and second language acquisition. The International Encyclopedia of Media Literacy, 1-7. https://doi.org/10.1002/9781118978238.ieml0125

Karpov, A. V., & Ponomareva, V. V. (2000). Psychology of reflexive control mechanisms. IPRAN.

Kobzhev, A., Bilotserkovets, M., Fomenko, T., Gubina, O., Berestok, O., & Shcherbyna, Y. (2020). Measurement and assessment of virtual internationalization outcomes in higher Agrarian education. Postmodern Openings, 11(1Supl1), 78-92. https://doi.org/10.18662/po/11.1sup1/124

Krakić, A.-M., Skledar Matijević, A., & Babović, N. J. (2014, September 11-12). Seeing is (not) believing: Teaching media literacy through ELT [Conference session]. The International Language Conference on The Importance of Learning Professional Foreign Languages for Communication between Cultures, Maribor, Slovenia. https://bib.irb.hr/datoteka/803370.Kraki_Skledar_Matijevi_Jurina_Babovi.pdf

Kuzmina, M. V. (2011). Diagnosis of forming of media culture of students in process of creation of educational video materials. Theory and Practice of Social Development, 8, 162-164.

Lahiri, M., & Moseley, J. L. (2015). Learning, unlearning and relearning with cutting edge technologies. International Journal of Learning, Teaching and Educational Research, 13(3), 62-78.

Lantz-Andersson, A. (2016). Embracing social media for educational linguistic activities. Nordic Journal of Digital Literacy, 11(1), 50-77. https://doi.org/10.18261/issn.1891-943x-2016-01-03

Lapada, A. A., Miguel, F. F., Robledo, D. A. R., & Alam, Z. F. (2020). Teachers’ Covid-19 awareness, distance learning education experiences and perceptions towards institutional readiness and challenges. International Journal of Learning, Teaching and Educational Research, 19(6), 127-144. https://doi.org/10.26803/ijlter.19.6.8

Marinoni, G., Van’t Land, H., & Jensen, T. (2020). The impact of Covid-19 on higher education around the world. IAU Global Survey Report. https://www.unibasq.eus/wp-content/uploads/2020/06/iau_covid19_and_he_survey_report_final_may_2020.pdf

Mc Dermott, G. (2013). The role of social media in foreign language teaching: A case study for French. Recherche et Pratiques Pédagogiques en Langues de Spécialité, 32(2), 141-157. https://doi.org/10.4000/apliut.4234

Mendoza, K. (2018, November 19). 5 questions students should ask about media. Common Sense Education. https://www.commonsense.org/education/blog/5questions-students-should-ask-about-media

Ogbonnaya, U. I., Awoniyi, F. C., & Matabane, M. E. (2020). Move to online learning during COVID-19 lockdown: Pre-service teachers’ experiences in Ghana. International Journal of Learning, Teaching and Educational Research, 19(10), 286-303. https://doi.org/10.26803/ijlter.19.10.16

Pfeffer, T. (2014). Academic media literacy and the role of universities. Perspectives of Innovations, Economics and Business, 14(2), 83-93. https://doi.org/10.15208/pieb.2014.10

Rahiem, M. D. H. (2020). The emergency remote learning experience of university students in Indonesia amidst the COVID-19 crisis. International Journal of Learning, Teaching and Educational Research, 19(6), 1-26. https://doi.org/10.26803/ijlter.19.6.1

Reagan, T., & Osborn, T. (2002). Critical curriculum development in the foreign language classroom. In T. G. Reagan & T. A. Osborn (Eds.), The foreign language educator in society: Toward a critical pedagogy (pp. 70-81).

Schmidt, H. (2012). Media literacy education at the university level. The Journal of Effective Teaching, 12(1), 64-77. https://files.eric.ed.gov/fulltext/EJ1092140.pdf

Tantarangsee, C., Kosarassawadee, N., & Sukweses, A. (2017). The use of social media in teaching and learning: A case of SSRU’s faculty members. International Journal of Innovation, Management and Technology, 8(6), 471-476. https://doi.org/10.18178/ijimt.2017.8.6.773

Van Den Beemt, A., Thurlings, M., & Willems, M. (2020). Towards an understanding of social media use in the classroom: A review. Technology, Pedagogy and Education, 29(1), 35-55. https://doi.org/10.1080/1475939X.2019.1695657

Westman, P. (2019). Using critical media literacy to support English language teaching and practice. Revista Lusófona De Estudos Culturais, 1(1). https://doi.org/10.21814/h2d.242

Ytreberg, E. (2002). Erving Goffman as a theorist of the mass media. Critical Studies in Media Communication, 19(4), 481-497. https://doi.org/10.1080/07393180216570

Appendix 1
Questionnaire “Social media in students’ life”

Dear students, please answer the questions. The questionnaire will be anonymous. The value of the research depends on how openly and in detail you answer the questions. Thank you in advance for your participation!

Section 1: Kinds of social media

1. What kind of social media do you usually use?
2. What purposes do you mostly use social media for? Choose the activities you do most often on social media.

Activities (tick √ those you do most often):
– communication with classmates, friends, relatives
– making new acquaintances
– searching for interesting information
– posting some ads
– reading blogs
– reading the news
– searching interest groups
– using multimedia (viewing photos, movies, listening to music)
– uploading photos
– learning languages
– communication with foreigners

Section 2: Social media use for educational purposes

1. What kinds of social media do you use for educational purposes?
2. How frequently do you use social media for study?
3. What kinds of social media do you use in learning English?
4. How often do you use social media in learning English?

Appendix 2
Questionnaire “Self-assessment of media competence skills”

Dear students, please evaluate your ability and readiness (on a five-point scale, where 1 – no, 2 – rather “no” than “yes”, 3 – rather “yes” than “no”, 4 – yes, 5 – rather obvious) to work with social media:

Skills (rate each from 1 to 5):
– to benefit from virtual participation in team work
– to interact with other team members in the virtual social environment
– to explore and acquire new roles, such as mentor, mediator, and group leader, in a virtual group
– to exchange practical and academic information, experiences, and social support with other participants
– to develop learning abilities through virtual peer collaboration
– to achieve mutual understanding among the members of the group
– to develop more comprehensive understandings of learning topics through discussion group chats
– to plan the schedule and the algorithms of team work in a virtual group
– to validate a virtual group’s activities
– to assess the outcomes of participation in a virtual group

Appendix 3
Questionnaire “Social media activities in learning English”

Section 1: What social media activities do you usually use for learning English?

Dear students! Please indicate the priority of the social media activities that contributed to your English-language improvement.

Social media activities for learning English (tick √ the priority ones):
– searching for information
– use of authentic learning materials
– reading news
– communication with foreigners
– social media projects
– participation in Moodle and other social media forums

Section 2: How did the implementation of social media help you in learning English?

Dear students! Read the suggested statements carefully. Assess their significance to you.

The factors that influence the students’ English proficiency improvement:
– Increasing motivation
– Communication skills practice
– Authentic learning materials from social media
– ICT application in the process of learning
– Interactive teaching techniques
