The State of First-Year Program Assessment: Recent Evidence from the 2017 NSFYE
from eSource for College Transitions Collection, Vol. 2: The First-Year Seminar
by National Resource Center for The First-Year Experience and Students in Transition
Dallin George Young, Assistant Director for Research, Grants, and Assessment, National Resource Center for The First-Year Experience and Students in Transition, University of South Carolina
The first-year experience (FYE) has been put forward as a philosophy and movement for improving first-year student transitions for most of the past four decades (Hankin & Gardner, 1996; Upcraft & Gardner, 1989) and represents a comprehensive, coordinated, and wide-reaching effort designed to support student success (Hankin & Gardner, 1996; Upcraft & Gardner, 1989; Young & Keup, 2019). Although the philosophy behind the FYE concept requires a broad approach to new-student success, research shows that it is not enough for colleges and universities to simply increase the array of educational offerings aimed at these students. Indeed, the FYE is not necessarily improved by the number of programs, but by the level of coordination and integration across them. Because first-year student success is not easily localized or specific to one functional area on campus (Young & Keup, 2019), FYE efforts must include a cohesive, comprehensive, and campuswide mix of curricular and cocurricular initiatives (Greenfield, Keup, & Gardner, 2013; Hankin & Gardner, 1996; Upcraft & Gardner, 1989).
Assessment is also critical to the success of any FYE enterprise. As Kuh (2010) noted, only through thoughtful implementation and continual evaluation will high-impact practices, such as those that make up an FYE, realize their full potential. Intentional design depends on information that allows first-year efforts to be targeted and continually improved.
This article focuses on assessment of first-year initiatives, and specifically on providing insight gathered through the 2017 administration of the National Survey on the First-Year Experience (NSFYE). A brief description of the survey precedes a presentation of data on the frequency of assessment and the formats of assessment practices of selected first-year initiatives.
2017 National Survey on the First-Year Experience
The 2017 NSFYE sought to gather information on overall institutional attention to the first year, as well as common first-year programs including academic advising, orientation, common readings, early-alert programs, first-year seminars, learning communities, and residential programs. The questionnaire sought information common to each of these seven first-year initiatives, including students served, perceived value, and assessment. An additional set of questions about assessment was asked of institutions that indicated offering first-year seminars, first-year advising, and preterm (new-student) orientation. The 2017 NSFYE asked more in-depth questions about these initiatives because of their longevity in the scholarly and practice discussion and their greater degree of professionalization, as evidenced by the presence of professional organizations (i.e., the National Resource Center, NODA, NACADA) representing these activities. In addition to asking whether these first-year initiatives were assessed, the survey asked respondents to indicate the formats used to carry out any assessment activity. Findings related to general questions on assessment and those on assessment of specific first-year initiatives follow.
Assessment of First-Year Programs by Frequency
Figure 1 shows the frequency with which institutions offering certain first-year programs reported assessing those programs within the past four academic years. First-year seminars (62.7%) and preterm orientation (54.1%) were the only programs for which more than half of respondents with these programs reported recent assessment. However, it is notable that when "I don't know" responses were removed, first-year residential programs and learning communities also crossed that threshold. The first-year initiatives least frequently reported as assessed were common-reading (27.5%) and early-alert (31.1%) programs.
There were some comparative differences in reported levels of assessment by institutional characteristic. For example, two-year institutions more frequently reported assessing early alert, while four-year schools were more likely to report assessing common readings and first-year seminars. Additionally, public institutions more frequently reported assessing first-year academic advising, first-year seminars, learning communities, orientation, and residential programs.
Assessment of First-Year Programs by Format
Figure 2 displays the frequency with which colleges and universities in the sample reported using specific formats to assess first-year seminars, preterm orientation, and first-year academic advising. The most common formats among institutions that had assessed first-year seminars were course evaluations (81.1%), analysis of institutional data (72.2%), and direct assessment of learning outcomes (63.4%). For orientation, respondents most frequently indicated the use of a survey instrument (73.1%) by a large margin, more than doubling the number of schools that reported using analysis of institutional data (36.3%) and direct assessment of learning outcomes (36.3%). Finally, nearly two thirds of colleges and universities reported analysis of institutional data (68.5%) and use of a survey instrument (66.4%) to assess first-year advising efforts, followed by nearly a third of respondents who indicated carrying out a program review (32.9%).
Discussion
These results point to at least two noteworthy patterns. First, there were substantial comparative differences in how frequently respondents indicated the use of particular assessment formats by first-year program. Institutions that recently assessed their first-year seminars reported using five of the nine types of assessment referred to in the survey at greater frequency than the other two programs. This, combined with the results reported in Figure 1, suggests that at the institutions represented in the sample, first-year seminars are not only being assessed more frequently, but also through a greater variety of formats.
Second, there are formats that, while used with varying frequency relative to one another, are among the most common for the three first-year programs on which the NSFYE gathered data. Analysis of institutional data was among the top three assessment formats for all three programs. In addition, direct assessment of learning outcomes and use of survey instruments were in the top three responses for at least two of these programs. Under the collaborative ideal of the FYE philosophy, these represent potential opportunities to engage in coordinated assessment practices. Even though an individual program might have its own priorities and salient questions to consider, similar formats allow offices to thoughtfully discuss how the data collected could be fed into a common stream for cross-functional assessment of broader institutional goals for the first year.
Assessment of FYE programs is an important step toward ensuring the effectiveness of these offerings individually and collectively. We repeat the sentiments of Kuh (2010), who highlighted first-year experiences as high-impact educational practices: "Only when they are implemented well and continually evaluated ... will we realize their considerable potential" (p. xiii).
Related Articles in E-Source
Padgett, R. D. (2011). Emerging evidence from the 2009 National Survey of First-Year Seminars. 9(1), 18-19.
Young, D. G. (2013). Research spotlight: National evidence of the assessment of first-year seminars: How and how much? 11(1), 18-19.
This article was originally published in July 2019.