UVM - Division of Student & Campus Life Assessment Guide

TABLE OF CONTENTS

INTRODUCTION
ETHICS
DIVISIONAL EXPECTATIONS
ASSESSMENT 101
INTRODUCTION

WHY ASSESSMENT?

Ongoing assessment and evaluation of programs are important principles and values in Student Affairs and other professions. While most professionals understand the merit of assessing programs and services, competing priorities and lack of time often get in the way of incorporating assessment in an ongoing and meaningful way. The Division of Student & Campus Life requires all departments to make assessment a central divisional goal and practice. In support of this goal, the Assessment Team (SCL A-Team) offers technical and strategic support around assessment.

Continuous Improvement

Assessment is an important priority for the division because continuous improvement of programs is central to success. Departments within the division need to assess in order to be certain that programs are meeting and/or exceeding their intended goals. If it is determined that programs are falling short, then staff must systematically find ways to improve. The ongoing practice of assessment is also essential given the ever-changing nature of our student population. The same programs and services that worked 10 years ago may not meet the needs of our current students.

Alignment & Accountability

Assessment helps practitioners align the institutional vision and strategic plan with program goals. Assessment is also critical in determining accountability and goal achievement.

Success & Satisfaction

Assessment is an effective tool for communicating departmental success and student satisfaction, which can be overlooked in the absence of methodical assessment approaches. Sharing divisional and departmental victories with influential constituents becomes more powerful when assessment measures support one's assertions. To say that a program 'felt like a success' holds significantly less weight than being able to demonstrate through quantitative or qualitative measures that program outcomes were achieved and students were satisfied. Communicating success on a departmental level can also be an effective tool for promoting teamwork and productivity.

Trend Analysis, Longitudinal Studies, & Benchmarking

Assessment plays an important role in identifying new issues and trends that are occurring on campus. Assessment mechanisms provide valuable data about population changes, making staff members more responsive to new trends in student preferences and changes in student culture. Longitudinal studies provide the institution with valuable data about how populations are changing over time and help staff predict future behaviors and attitudes. Benchmarking is another important assessment approach, providing professionals with valuable comparison information about similar functional areas and programs at other institutions.

Training & Staff Development

Assessment can identify areas in which staff members need improvement or additional training. Additionally, assessment can help focus staff and team members on strategic plan goals and learning outcomes. Assessment also helps managers determine where to allocate departmental training resources.

Outcomes

Given the increasing attention on learning outcomes throughout higher education and more stringent requirements from accreditation associations, practicing assessment has become a necessary component of college and university day-to-day operations. Engaging in the assessment process compels individuals to state intended goals. Articulating specific goals is the first step in meeting desired programmatic and student learning outcomes.

Diversity & Social Justice

One of the most important goals for the University of Vermont and the Division of Student & Campus Life is to diversify the campus and create an open and welcoming climate for all groups and populations. Assessment helps institutional leaders understand current campus climate issues and highlight critical areas for improvement. As new programs are implemented to support diversification, assessment helps leaders determine whether these programs achieve the desired outcomes.

Professional Standards

Both the American College Personnel Association (ACPA) and the National Association of Student Personnel Administrators (NASPA) include regular assessment as a central principle and ethical practice in student affairs work. Assessment is important because it helps practitioners do their best, be accountable, and remain student-centered.
ROLE OF ASSESSMENT TEAM

The role of the UVM Student & Campus Life Assessment Team is threefold:

1) Create recommended standards for consistency and accountability
2) Serve as a resource group for departments
3) Administer division-wide assessment on important divisional questions

The work of the assessment team ensures collaboration between departments, eliminates redundancy, and provides consistency throughout the division. While the Assessment Team will not conduct a department's assessment project(s), the team will support departments to create an assessment strategy, to demystify the process, and to empower individuals to conduct their own assessment.
PURPOSE OF ASSESSMENT GUIDE

This guide offers technical assistance in conducting and understanding assessment and basic statistical concepts. It also serves as a resource for understanding the division's philosophy around assessment. Additionally, the guide functions as a document that aligns and articulates divisional standards and expectations. Following the principles laid out in this document will ensure consistency among departments, staff accountability, and an understanding of the importance of aligning the division with the UVM Strategic Plan. The guide will also underscore the importance of shaping our efforts around 'learning outcomes.'
ETHICS

Ethics MUST be taken into consideration when designing assessment and research projects. Not only is work with human subjects a highly sensitive endeavor, but acting responsibly with collected data and assessment results is critical to the central premise of 'doing no harm.' Many people do not think about ethics when putting together what seems to be a simple survey instrument or a focus group, but it is crucial to think of the many different ways such assessment projects can harm others. To illustrate, simply asking for demographic information can further alienate and harm a participant whose identity is excluded or misidentified. Likewise, the composition of a focus group and the level of risk are important considerations, not only for the participants but also for the validity of the data collected. When designing an assessment project it is important to ask oneself these questions and to share the project design with others so that they too can help look for potential trouble areas.

Research vs. Assessment

While assessment and research are connected to each other, they are distinct practices. The statistical standards for research are considerably more stringent than assessment standards. For example, sample size and validity are greater concerns for those conducting research than for those conducting assessment. Individuals conducting assessment can still get a great deal of useful information from a less than desirable sample size, whereas those conducting research are bound to the more rigorous standard. These are important distinctions given that assessment can often be intimidating to practitioners who think that they must adhere to the more arduous standard.

To IRB or Not To IRB

All research institutions have an Institutional Review Board, also known as an IRB. The IRB is responsible for ensuring the protection of individuals who participate in research studies. The IRB must approve all participant research before it begins and provide oversight during each project. Any research on human subjects that is to be published, or that puts human subjects at 'risk,' must get IRB approval before proceeding. It is possible that an assessment practitioner's sense of what could put participants at risk differs from IRB guidelines. Therefore, it is recommended that every researcher and practitioner become familiar with the IRB process. Generally, assessment that is for the purposes of:

1. Evaluating programs or services
2. Measuring student outcomes
3. Comparing practices to those of other programs

may not require IRB approval.
It is important to note, however, that if results from an assessment project may someday be published, obtaining IRB approval is a wise and prudent course of action.

Avoiding Harm

The design of assessment projects must keep in mind the protection and interests of the individuals who will be participating in the study. This includes all aspects of their welfare: physical, emotional, psychological, intellectual, academic, professional, and social. Participation in an assessment project should cause no harm to the participant. Participants should never be forced or misled into being involved. It is important for the designers of any assessment project to take into consideration any possible negative outcome that could befall a participant and then to take all measures to prevent that outcome from occurring.

Sensitivity & Social Justice

Individuals responsible for designing assessment projects cannot do an adequate job of avoiding harm if they have not taken the time to work on their own sensitivity around different participant populations. It is expected that all staff within the division work towards increasing their own knowledge around diversity and social justice issues. Doing so helps to ensure that as our knowledge about groups and cultures increases, our assessments become more sensitive to issues related to diversity. As a result, our research questions begin to address issues around climate and social justice. Being sensitive also means that assessment designers should take into account exactly how assessment projects are being administered and who is administering them. For example, if one wants to assess how students of color perceive a department, it would be reasonable to conduct focus groups; it is important to consider who will facilitate them. Will the focus group be as frank and forthright if a white person is facilitating, or will the group be more forthcoming if a person of color facilitates? Any facilitator can be an ally and have sensitivity, but it is worth thinking about participant perceptions as they relate to one's study.

Informed Consent

Assessment participants must provide informed consent. Informed consent means that participants understand the type of assessment, any potential risks, their right not to participate, and how data will be protected and used. Informed consent may take a different format depending on your study. In some cases, participants could provide tacit agreement by completing a survey. In other cases, participants may need to sign a written consent form, depending on the nature of the study.
Freedom from Coercion

'Freedom from coercion' ensures that the wellbeing of research subjects or assessment participants is taken into consideration, and it helps to maintain the integrity of any data collected. If subjects are coerced into participating in an assessment, then that data may not be valid.

Confidentiality vs. Anonymity

Another important issue to consider is the difference between confidentiality and anonymity. While the two terms are often used interchangeably, they are distinct concepts. Promising confidentiality lets participants know that while it is possible for assessment administrators to determine from whom the data came, assessment designers nevertheless promise to keep their identities confidential. Promising anonymity ensures that participants' identities are impossible to ascertain, even for the designers of the assessment. These are important distinctions, especially when conducting higher risk projects. Designers should never promise anonymity if they cannot ensure that it would be impossible to connect any data to any individual.

Ensure Ethical Collection of Data

The ethical collection of data involves seeking to eliminate all bias in the construction of survey instruments, focus group guidelines, interview guidelines, etc. This is especially important at the evaluation stage of the process: assessment designers or other department individuals may have a vested interest in some particular outcome and design their assessment project with a view toward achieving that outcome. The environments in which data are collected are also important to consider. Contexts for data collection should be free of issues that may leave participants feeling less than forthright in their evaluations. For example, student staff may not feel comfortable being honest if their supervisor is the individual collecting the data.

Ensure Ethical Interpretation and Presentation of Data

It is often tempting for researchers to skew or hide information from an assessment that casts a negative light on the programs or services being examined. To be ethical, researchers must analyze all data and provide a fair, truthful, and impartial analysis regardless of the outcome. The purpose of assessment is not solely to highlight strengths, but also to examine areas of improvement in order to continually enhance programs and services.
Ethical Dilemmas

The following ethical dilemmas were created by Blaxter, Hughes, and Tight (1996) and are adapted from Schuh, Upcraft, and Associates' Assessment Practice in Student Affairs: An Application Manual.

Ethical Dilemma 1: In assessing the Department of Residence Life, you learn from staff interviews that a number of resident assistants do not enforce the alcohol policy. What do you do?

Ethical Dilemma 2: While conducting a confidential assessment, you learn that certain financial indiscretions have occurred in Student Government. Your informant does not want the situation investigated, since doing so would reveal their identity. What do you do?

Ethical Dilemma 3: While conducting a confidential study of campus organizations, you learn that many of these organizations engage in illegal hazing (such as running errands for active members, cleaning rooms, providing rides, etc.). While you do not feel that any of these practices puts students at risk, you do know that they violate the law. What do you do?
DIVISIONAL EXPECTATIONS

Departments within the Division of Student & Campus Life are expected to incorporate assessment into ongoing practice and program implementation. As such, it is expected that departments will:

1) Conduct both "Outcomes" and "Non Outcomes" Assessment
2) Revisit and create outcomes for each departmental program and initiative
3) Review assessments currently being implemented to determine whether these projects are valuable to the work and measuring outcomes
4) Create a 'Departmental Strategic Assessment Plan' which will organize the assessment work to be done within each department
5) Determine methods by which assessment results are shared within the department and beyond

"Non Outcomes" Assessment

"Non outcomes" assessment refers to efforts to gather, analyze, and interpret evidence pertaining to the following:

1) Program Evaluation
2) Satisfaction
3) Needs, Use
4) Climate
It is expected that departments conduct 'non outcomes' assessment in order to continue high-quality programs, determine new unmet needs, and identify departmental climate issues.

Outcomes Assessment

Outcomes assessment refers to efforts to gather, analyze, and interpret evidence which measure:

1) Cognitive or Psychosocial Development
2) Learning
3) Goal Attainment

Likewise, it is expected that departments conduct outcomes assessment to determine how students are learning and growing as a result of departmental programming efforts.

Learning Outcomes

Learning outcomes are broad statements about the attitudes, skills, and/or knowledge you want your students to have. Each department is expected to develop learning outcomes for the students served by its programs. Student affairs learning outcomes should be created while taking into consideration the mission statement and objectives of the institution and division:

Institutional Mission Statement -- Institutional Objectives -- Student Affairs Outcomes -- Departmental Outcomes -- Program Outcomes

High Quality Outcome Statements

The first step in beginning an assessment is to determine what outcomes are to be measured. High quality outcome statements do the following:

• Translate intentions into actions
• Describe what students should demonstrate or produce
• Use action verbs
• Align with other (institutional) intentions
• Are associated directly with programmatic practice
• Are authored collaboratively
• Reflect/complement existing national understanding and criteria
• Are measurable
Institutional Alignment

All of the programs (and learning outcomes) implemented by the Division of Student & Campus Life should fall into one of the following five areas, which together support the division's central purpose: TO CREATE A HIGH QUALITY STUDENT EXPERIENCE THAT SUPPORTS THE ACADEMIC MISSION.

1) Enhance Learning
2) Advance Diversity
3) Promote Health & Safety
4) Create Community
5) Manage Resources
Departmental Strategic Assessment Plan

Each department is expected to collaboratively create a strategic assessment plan. This plan will identify:

1) Programs and outcomes that will be measured
2) The means by which these outcomes are to be measured
3) The kind of assessment that will measure these outcomes
4) When the assessment will be implemented
5) How often this assessment will take place
6) How assessment results will be shared and distributed
7) Who will be responsible for implementing the assessment
Departments will create a three-year assessment strategic plan. While the intent is to approach assessment in a more long-term and institutionalized way, it is expected that this strategic plan be an active document that is consistently referred to, reviewed, and updated. Directors will incorporate this plan into their meetings with the Dean of Students Office, reviewing their plans on a regular basis to be determined by the Dean of Students Office each academic year.

For each assessment project (Assessment Project #1, #2, #3, and so on), the plan template records:

Department:
Assessment Project Name:
Outcome(s):
Pentagon Goal:
Assessment Type: Descriptive / Tracking / Satisfaction / Needs / Outcomes / Benchmarking / Climate
Assessment Method(s): Quantitative (Tracking, Survey) / Qualitative (Interview, Focus Group)
Timeline/Implementation:
Results Sharing Process:
Sharing & Dissemination of Data

Each department is expected to share data within their own department as well as with other members of the division and campus community as appropriate. Suggested mechanisms for doing so include:

1) PowerPoint Presentations
2) Executive Summary
3) Departmental Report
4) Annual Report
Assessment 101

Courtesy of Ann Groves Lloyd, University of Wisconsin
TABLE OF CONTENTS

INTRODUCTION
SELECTING ASSESSMENT TOOLS
STATISTICAL CONCEPTS
INTERPRETATION
MANAGING DATA
SURVEYS
INTERVIEWS

INTRODUCTION [1]

Because the issues and assessment needs that each unit faces are unique, we provide points to consider, primarily as ways to get started. We do not believe that "one tool fits all." We expect that you will select and modify items to fit your unit's needs. We recognize that personnel in SAA units will have varying degrees of experience and types of expertise. This section is intended to serve as a very basic introduction to program assessment. As such, some of the information in it may not be useful to those with more experience. The resources/reference section at the end of the binder includes more sophisticated sources of information for those who seek more background.

SELECTING ASSESSMENT TOOLS

There are a variety of ways to collect information to answer specific questions about your program. We refer to these different data-gathering techniques as "assessment tools." There are two general categories of assessment tools:

• Quantitative (e.g., descriptive & inferential statistics)
• Qualitative (e.g., introspection, interview, ethnography)
[1] Some parts of this section were selected, with permission, from the following: "Program Assessment Tool Kit: A Guide to Conducting Interviews and Surveys" by LEAD Center Researchers Sarah K.A. Pfatteicher, Dianne Bowcock, and Jennifer Kushner. Pilot Edition, Summer 1998; work supported by a grant from the University of Wisconsin-Madison Assessment Council to the College of Engineering. Additional resources were provided by the Office of the Dean of the College of Engineering and a Hilldale Foundation Grant from the UW-Madison Chancellor to the LEAD Center.
It is important to note, however, that qualitative and quantitative research are NOT compartmentalized, separate elements. Rather, qualitative and quantitative assessment tools are co-functioning, co-existing, complementary tools. The tools you select should be consistent with the type of information you seek. In some cases, several tools may be needed to address a breadth of issues. Using a variety of tools is often beneficial because the information gathered by some tools can be compared with information from other tools (this allows one to confirm findings from one source with another source) or can deepen your understanding about issues raised in one method. Some researchers purposely use a variety of different tools to see if they reveal similar findings and to supplement and deepen understanding about the information – this is one form of "triangulation." Many of the types of assessments that SAA units want to consider are what we call "indirect indicators." These commonly involve self-reports, through surveys or interviews, of students, alumni, or employers on their experiences and perspectives about a unit, service, or program. Below we summarize some of the advantages and disadvantages of a few selected types of tools.
Assessment Tools: Advantages & Disadvantages

Survey – In-class Questionnaire
Advantages: Inexpensive. Can be quickly administered to a group. Best suited for simple and short questions. Can reach every participant.
Disadvantages: No control for misunderstood questions, missing data, untruthful responses. Not suited for exploration of complex issues.

Survey – Mail Questionnaire
Advantages: Comparatively inexpensive. Can attempt to reach participants over a wide geographical area.
Disadvantages: No control for misunderstood questions, missing data, untruthful responses. Not suited for exploration of complex issues. Sometimes low response rate. Takes time to receive responses. Sometimes difficult to reach audience (especially alumni).

Interview – Telephone Questionnaire or Interview
Advantages: Relatively inexpensive. Best suited for relatively short and non-sensitive topics. Can attempt to reach participants over a wide geographical area.
Disadvantages: Usually requires a sample of respondents. Not suitable for lengthy questionnaires and sensitive topics. Respondents lack privacy and anonymity. Interviewer requires skills.

Interview – Structured In-person Interviews
Advantages: Interviewer can probe evasive answers, explore topics in depth, and clarify questions. With good rapport, may obtain useful open-ended comments. Usually yields richest data, details, and insights. Best if in-depth information is wanted. Permits face-to-face contact with respondents.
Disadvantages: Same as above. Often time-consuming to analyze due to volume and qualitative nature of data. Interviewee may distort information through recall error, selective perceptions, or desire to please interviewer.

Interview – Focus Group Interviews
Advantages: Useful to gather ideas, different viewpoints, new insights, and to improve question design.
Disadvantages: Must take a sample. Limits time that each individual has to contribute. Lacks individual privacy, thus could reduce accuracy of results.

Tests/Standardized Certifications
Advantages: Provide comparative information across populations. May be relatively easy to administer.
Disadvantages: Available instrument may be unsuitable for population. Tests may contain unfairness or bias. Developing and validating new project-specific tests may be expensive and time-consuming. Test may be narrow in scope and thus not allow student to demonstrate complexity of understanding.

Portfolios
Advantages: Gathers information from a variety of sources over time. Usually respondent self-reflects and rates own work.
Disadvantages: Without clear criteria, judgment is subjective. Requires development of clear criteria for judgment.

*This table is modified from the User-Friendly Handbook for Project Evaluation: Science, Mathematics, Engineering and Technology Education, Directorate for Education and Human Resources, Division of Research, Evaluation and Communication, National Science Foundation. Exhibit 5.
STATISTICAL CONCEPTS

The researcher/evaluator must be aware of some basic statistical concepts.

1. Central Tendency

The purpose of a measure of central tendency is to summarize a very large amount of data at a single glance. The measures of central tendency include the mean, the median, and the mode.

Mean: The mathematical average, calculated by adding all the scores and then dividing by the number of scores. In general the mean is a good indicator of central tendency for a group of scores; it is the measure of central tendency used most often. One exception to this general rule is when a group of scores contains a few extreme scores. In such a situation, one of the other two measures of central tendency (median or mode) would more accurately describe the data's overall characteristics.

Median: The physical middle score of the data. For example:

1, 2, 3, 100, 200, 5000, 5001

In this set of numbers the median is 100, because it is the physical middle score. The median is very useful when you have high variation in scores that may not be accurately summarized by the mathematical mean. When you have an even number of scores, add the middle two scores and divide by two to arrive at the median.

Mode: The most frequently occurring score. For example:

1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5

The mode in this case is 3. If a set of scores has two modes, it is called a bimodal distribution. It is also possible to have no mode at all. Depending on the research question being investigated, the mode may actually provide more meaningful information than either the mean or the median.

2. Measures of Variability

Measures of variability (or measures of dispersion) show how much the scores in a sample vary from one another. They help represent the individual fluctuations in scores. The two measures of variability are the range and the standard deviation.

Range: The difference between the highest and the lowest score. Generally speaking, the range is a rather simplistic estimate of variability, or dispersion, for a group of scores. Because the range involves only two scores, it can produce a misleading index of variability; thus the range is rarely used as a measure of variability. For example, for the scores 1, 2, 3, 4, 5, the range is 5 - 1 = 4.

Standard Deviation: The overall numerical distance of scores from the mean. In this case the word "standard" means overall, and "deviation" means specific numerical distance from the mean. For example, for the scores 1, 2, 3, 4, 5, the mean is 3, and the individual deviations are:

1 - 3 = -2 (the score "1" is 2 units below the mean)
2 - 3 = -1
3 - 3 = 0
4 - 3 = 1
5 - 3 = 2

Thus, the standard deviation is also called the "overall point spread." It is a measure of how much the scores vary on average around the mean of a sample, and it indicates how closely scores are clustered around the mean. The smaller the standard deviation, the less variability from the mean, and vice versa. Calculating standard deviation is not very difficult if you use a calculator capable of doing square roots. To compute a standard deviation, follow these four steps:

1) Calculate the mean of the scores.
2) From each score, subtract the mean and then square the difference (this eliminates the negative signs that result from subtracting the mean).
3) Add the squares and then divide by the number of scores.
4) Calculate the square root of the value obtained in Step 3. This is the standard deviation.
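The short Python sketch below works through these definitions. It is a minimal illustration using only the standard library, with the example score sets taken from the text above.

```python
# Minimal sketch of the central tendency and variability concepts above,
# using only Python's standard library and the example scores from the text.
import statistics

scores = [1, 2, 3, 100, 200, 5000, 5001]
print(statistics.mean(scores))    # mean: pulled upward by the extreme scores
print(statistics.median(scores))  # median: 100, the physical middle score
print(statistics.mode([1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5]))  # mode: 3

def standard_deviation(data):
    """Follows the four steps in the text (population standard deviation)."""
    mean = sum(data) / len(data)                # 1) calculate the mean
    squared = [(x - mean) ** 2 for x in data]   # 2) subtract the mean, square
    variance = sum(squared) / len(data)         # 3) add squares, divide by n
    return variance ** 0.5                      # 4) take the square root

print(standard_deviation([1, 2, 3, 4, 5]))  # about 1.414
```

Note that the four steps yield the population standard deviation (dividing by n); the standard library's statistics.stdev divides by n - 1 instead, so the hand-written function above matches the text more closely.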
INTERPRETATION

Interpretation, or analysis of data, can be the most exciting and rewarding dimension of program assessment. However, it is also the riskiest portion of the assessment: all program evaluation/assessment is vulnerable to misinterpretation. No matter how "quantitative" the data and calculations in your assessment tool, they are at the mercy of the quality of the qualitative interpretation. No matter how "accurate" the calculations on the research data are, the data can be rendered useless by poor qualitative interpretation. Some examples follow.

Example 1: Ice Cream Cone Sales Related to Drowning?
*** It is very important to note (again): even if a statistical analysis is mathematically correct, this does NOT mean it is invulnerable to misinterpretation. Anybody familiar with statistics can quantitatively tie together unrelated, non-causal factors.

When the temperature is hot and the weather sunny, sales of ice cream cones increase. At the same time, when high temperatures and sunny weather are present, people are more likely to go into the water, and thus there is a higher probability of drowning victims. If one correlates ice cream cone sales with the frequency of drowning, the coefficient will be "significant." Officials in the department of public safety might correctly interpret this data and conclude that the department will need more lifeguards on duty during hotter and sunnier days than on gloomy, colder days. A wrong conclusion would be that controlling ice cream sales may contribute to fewer people drowning.
Example 2: Less Policing Will Result in Less Crime?

Similar to the example above, an analyst could easily show higher police spending in high-crime cities and lower police spending in low-crime cities. One can easily demonstrate a strong statistical correlation between crime rate and police spending. As absurd as it may sound, would the crime rate go down if one decreased police spending?
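To make the point in Example 1 concrete, here is a small illustrative sketch (with invented numbers, not real public-safety data) showing how two series that are both driven by temperature correlate strongly without either causing the other. It uses statistics.correlation, available in Python 3.10 and later.

```python
# Hypothetical data: both series rise with temperature, so they correlate
# strongly with each other even though neither causes the other.
import statistics

ice_cream_sales = [120, 150, 180, 240, 300, 360, 420]  # cones sold per day
drownings = [0, 1, 1, 2, 3, 4, 5]                      # incidents per day

r = statistics.correlation(ice_cream_sales, drownings)
print(f"r = {r:.2f}")  # close to 1.0 -- "significant," yet not causal
```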
Example 3: Student Performance Statistics

The basic descriptive statistics below are actual overall cumulative grade point averages of degree recipients at UW-Madison.
[Chart: Cumulative G.P.A. of Degree Recipients – Targeted Minority, plotted by fall semester of entrance (1991-1997), with separate lines for students completing degrees in 4, 5, and 6 years.]

[Chart: Cumulative G.P.A. of Degree Recipients – Non-Targeted, plotted by fall semester of entrance (1991-1997), with separate lines for students completing degrees in 4, 5, and 6 years.]
One can observe that students who finished their undergraduate degrees in four years had higher overall grade point averages than those who took 5 or more years to obtain the same degree. Some policy makers may interpret this descriptive statistic to say that students who graduated in 4 years had more focus and guidance than those who "floundered" to their degrees in 5 or more years; enhancement of advising and mentoring programs may then be the answer to undergraduate student success. One misinterpretation of the data would be that if policy makers were to push students to graduate in 4 years or less, higher GPA performance would result.
Example 4: Data That Can Be Interpreted Either Way

These descriptive and inferential statistics were performed on actual data from the University of Illinois-Chicago Urban Health Program.
In plain English, analysis of variance examines whether three or more groups are statistically significantly different from one another. In this case, there were NO statistically significant differences between the 1996, 1997, and 1998 enrollments. There are two ways to interpret this analysis of variance:

1) One can conclude UHP is doing a good job of recruiting students, since enrollment is very "consistent": there were no significant statistical differences between 1996, 1997, and 1998.

2) Another way of looking at the data would be: since the 1996, 1997, and 1998 enrollment rates are not statistically significantly different, it is safe to conclude there was NO significant improvement in enrollment/recruitment.
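For readers who want to reproduce this kind of analysis, the sketch below runs a one-way ANOVA with SciPy. The enrollment figures are invented placeholders, not the actual UHP data.

```python
# Hedged sketch of a one-way ANOVA; the counts below are placeholders.
from scipy.stats import f_oneway

enrollment_1996 = [14, 12, 15, 13]
enrollment_1997 = [13, 14, 12, 15]
enrollment_1998 = [15, 13, 14, 12]

f_stat, p_value = f_oneway(enrollment_1996, enrollment_1997, enrollment_1998)
# A large p-value (e.g., > 0.05) means no significant difference among the
# years -- which, as the text notes, can be read either as "consistent
# recruiting" or as "no improvement."
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```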
MANAGING DATA

1. How will you manage your data?

An important part of assessment is planning how you will manage and maintain your data over time. As you develop student surveys or ways to count student contacts, you need to think about how you will store and enter the data consistently, accurately, and securely so that you can use it when you need to. If possible, use a database, not just a spreadsheet, to store your data. A database (such as Access) can hold more information over time than a spreadsheet, and it is a much more reliable way to handle data. Information from the database can be transferred to a spreadsheet (such as Excel) for analysis later. Consider the following:

• How do you want to use the data, and what do you want to use the data for? In most cases SAA units want to conduct periodic analysis (at least annually) and also conduct analysis over several, or even many, years. You want to design your data so that you can meet short-term and long-term needs.

• Is there other data you might want to link to your data? In most cases SAA units want to link their data to the ISIS data or to other university student records information. This type of linking usually requires a student name and ID. Before you establish your database, investigate how the data in these other formats look, and design yours so that it is an easy match. For example, if the ISIS field gives the name of the student in the following format (firstname middleinitial lastname), all in one field, then your database should have a field that matches this. Or, if the ID number is given as a string of nine or ten digits (without hyphens between them), then your data should match this number of digits and not use hyphens.

• Make sure your unit is keeping ISIS information accurate and current, because you will rely on the accuracy of this data to do student records analysis.

2. What do you need to know about data entry?

a) Establish ways to maintain the data consistently.

• Decide who will enter the data and how frequently. It is good practice to create a data codebook that shows all of the entry forms, field names, field types (e.g., text, numeric, memo), and other information. This provides a record and doesn't leave all of this essential information in the "head" of one individual. A codebook can also provide a consistent guide if you have several people entering data over time. It can contain a comment section that might address questions that arise during data entry, such as the following examples: What should I enter if the respondent gives two answers when they were only to select one? What should I enter if they don't answer a question? What should I do if they write a comment along the margin?

• It is important to keep the original survey information neatly organized. The last thing you want is to lose your surveys or have them lying around in different piles or stuck in between other paperwork. You also need to be able to distinguish between surveys that have been entered into the database and those that haven't. We suggest storing surveys in a folder or notebook labeled with information sensible for your unit (e.g., the event, semester, name of the cohort
group, and the date). Surveys should be placed in one place prior to data entry (e.g., in the front of a notebook), and then checked off after data entry (it helps to place a checkmark in an upper corner to indicate that data has been entered). If your database assigns an ID, then it is good practice to place this ID number at the top of the original survey. After surveys are entered, they should be placed in a different location from those that still need entry (e.g., at the end of the notebook in the section titled "entered"). The surveys that have been entered can be organized by the assigned ID number, alphabetically by student name (if you know this), by date of entry, or by some other factor that makes sense for the unit and type of data. At some point, you may want to return to the original data to check something further, so you need to be able to find the original survey that matches the data.

b) Enter data in a consistent way. In particular, student names and IDs should be entered very consistently to allow you to match this information with other databases such as information in ISIS. Decide whether you will use lower case or upper case, and how much spacing will occur between digits, to link easily to other data. Most database software allows you to create an entry form.

c) It is essential that data be entered accurately. Most database software allows you to set up "flags" to indicate when wrong data entry occurs; for example, if you try to enter text data into a numeric field, it will alert you. Also, think about how you will check the data after it is entered. You can establish a system in which the person doing the entry rechecks each entry after they do it, goes back and checks entries at the end of the week, or checks a sample (e.g., every other one). Perhaps have one person enter data and another check it later. You may need to develop passwords to protect entry into your database.

3. How will you keep your data secure?

Think about ways to keep the data confidential and establish a system to keep the data secure over time. Some questions to consider are:

• Who will enter the data? Teach the people who will do data entry that the information needs to remain confidential. This means they must not show or discuss the information with others, and they should not leave the information lying around where others or the public can review it or take it. Having a good system of folders or notebooks to keep the surveys in during data entry will help keep data confidential.

• Who will back up the data?

• Who will have access to the data? Decide who will have access to the data for data entry and analysis purposes.
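Pulling several of these points together (matching the student-records name and ID formats, assigning an ID to each survey, and flagging bad entries), here is a minimal sketch using Python's built-in sqlite3 module. The table and field names are illustrative assumptions, not an actual ISIS or unit schema.

```python
# Hypothetical survey database illustrating the guidance above.
import sqlite3

conn = sqlite3.connect("assessment.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS survey_responses (
        response_id  INTEGER PRIMARY KEY,  -- assigned ID, also written on the paper survey
        student_name TEXT,                 -- 'firstname middleinitial lastname', one field, to match ISIS
        student_id   TEXT CHECK (length(student_id) IN (9, 10)),  -- digits only, no hyphens
        event        TEXT,
        semester     TEXT,
        q1_rating    INTEGER CHECK (q1_rating BETWEEN 1 AND 5),   -- "flag" rejects out-of-range entries
        entered_on   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```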
behaviors. We have found that THE SURVEY KIT2 by series editor Arlene Fink is an excellent resource for conducting surveys. We highly recommend that you acquire and study this resource, and therefore we do not attempt to supply information in much detail regarding surveys. It provides some of the more sophisticated information about using surveys that can guide your survey development. THE SURVEY KIT is available through SAGE publications at: SAGE Publications, Inc., 2455 Teller Road, Thousand Oaks, CA 91320 (email: order@sagepub.com) Types of demographics to gather (This is not a complete list.) • Gender • Ethnicity/race • Age • Year in College • Classification • First-generation college attendee • In-state or out-of-state • Campus Housing Types of survey questions • Closed-ended • Open-ended • Multiple choice • Check all that apply • Ranking • Closed-ended followed by open-ended Examples of methods units may use to gather survey information • Immediately after the student receives service or participates in an event, such as a workshop. Every time a student uses the service s/he is asked to complete a survey. • Conduct surveys for a designated period of time (e.g., three months). Every student who uses the service will be asked to complete a survey at the time the service is given. • Give surveys at the end of some designated period such as at the end of the semester, at the finale of a program, or near the end of the school year. Tips on conducting surveys • Before using a survey, pilot it on a small sample of a similar audience. Then modify unclear or poorly worded questions or delete those questions that do not seem to elicit worthwhile information. • Keep the survey as short as possible. Be selective about the questions you ask. 2
[2] THE SURVEY KIT contains 10 volumes, each focusing on a specific aspect of conducting a survey. The kit includes volumes on asking survey questions, conducting self-administered and mail surveys, interviews by phone and in person, designing surveys, how to select a sample for surveys, how to measure survey reliability and validity, how to analyze survey data, and how to report on surveys.
• The survey should be easy to read (lightly colored paper, legible font, photocopy quality).
• The survey should include information at the beginning and the end about when and where it should be returned.
• Decide whether the survey will be anonymous (without the student's name) or will ask the student for their name. For SAA units, most of the time surveys should not ask students for their names.
• Use a combination of open-ended and closed-ended types of questions.
• Each question should ask only one question.
• Organize the questions into a meaningful flow – usually starting with background information on the student (demographics) and ending with questions about the future. More sensitive questions should be asked in the middle of the survey, not at the beginning.
• Plan ways for students to return the survey (drop in a slot, mail back). Avoid having the student hand the survey directly to the advisor or the program coordinator.
• Consider offering incentives to encourage students to return the survey (placing their name in a drawing for a gift certificate, book, privilege, etc.). If you do this and you want the surveys to be anonymous, you will need to keep the student's name separate from the specific survey.
• When implementing a student satisfaction survey, your office may want to prepare a Frequently Asked Questions (FAQ) sheet to inform the students about the survey. Some sample questions you may want to answer on a FAQ sheet are listed below:
  o What is this survey about?
  o Who is this survey from?
  o Why should I take the survey?
  o When can I fill out the survey?
  o How do I fill out the survey?
  o Who will read the answers to this survey?
  o How is my privacy being ensured?
  o What is the point of this survey?
  o Who do I contact for more information?

Remember, the main purpose of undertaking a student survey is to obtain information from your students on their experience of the unit. If implemented with thought and effort, a survey can help you and your staff immensely. The goal is to have all cohorts complete the survey. If 70 percent or more of your students complete a questionnaire, then you can be confident that the information gathered will be reasonably representative of the class as a whole; even if the remaining students had completed the questionnaire, it is unlikely that the results would change considerably. If only 50 percent complete the questionnaire, then your confidence in the results being representative must be lower. If less than 50 percent complete the questionnaire, then the information collected is not particularly useful for making decisions with regard to changing your unit. For units where the response rate is less than 50 percent of enrolled students, results will be qualified with the statement that these results cannot be taken as representative of this class of students and can only be said to reflect the views of those students who completed the questionnaire. (A small helper encoding these thresholds appears after the list below.)

Points About Mail Surveys:
• Allow enough time – plan for the time needed to conduct the first mailing, receive the first set of responses, conduct a follow-up mailing, receive follow-up responses, input data, conduct analysis, and create a report.
• If you are doing a mail survey, plan to do at least two mailings. The follow-up mailing can add up to 1/3 more responses.
• Sending surveys on colored paper can increase your return rate.
• Make sure the return date and sender address are bold, big, and clear on the survey.
• Use a short cover letter with the survey. When possible, personalize the letter so that the students, alumni, or employers recognize they have valuable information to contribute.
• Use incentives to increase the return rate – response rates can be improved if you give a pizza coupon when the survey is returned or enclose a card that becomes part of a drawing to win a prize. When planning these techniques, plan them in a way that keeps the anonymity of the respondent. Anticipate an average return rate of about 20% unless you do something exceptional as an incentive.
• Identify one person to keep records and do follow-up mailings.
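As a quick illustration of the response-rate guidance above, here is a small sketch; the 70% and 50% cutoffs come directly from the text, and the function name is merely illustrative.

```python
# Illustrative helper applying the response-rate thresholds in the text.
def response_rate_note(completed: int, enrolled: int) -> str:
    rate = completed / enrolled
    if rate >= 0.70:
        return "Reasonably representative of the class as a whole."
    if rate >= 0.50:
        return "Possibly representative, but with less confidence."
    return ("Cannot be taken as representative; reflects only the views "
            "of those students who completed the questionnaire.")

print(response_rate_note(35, 50))  # 70% -> reasonably representative
```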
INTERVIEWS

This section summarizes many of the important points you should consider as you use interviews to evaluate your program or unit. It presents some broad guidelines, which can be applied to a variety of interviewees, whether they are current students, graduating students, alumni, or employers. The section also gives general background information about conducting interviews in various formats: individually or in groups, over the phone or face-to-face, and tape-recorded or not. The section discusses these formats, poses questions to consider when planning interviews, provides tips for conducting interviews, and discusses methods for analyzing interviews.

Frequently asked questions to consider prior to interviewing

These questions are interrelated and should be considered as a whole instead of in a linear way.

1. What is the purpose of the interviews?

Evaluators and personnel who will be conducting the interviews need to communicate clearly with all stakeholders (faculty, deans, staff, and students) that the purpose of the interviews is to improve the program, not to evaluate personnel. The use of interviews as a data collection method begins with the assumption that the participants' perspectives are meaningful, knowledgeable, and able to be made explicit, and that their perspectives are important to the success of the program.

2. Who will do the interviews?

• Who within your unit will conduct interviews? The interviewers you select should be interested in the project, have time to participate, and be willing to cooperate with others to follow an interview plan over a specific timeframe. Ideally, the interviewer should be a person with no past, present, or likely future authority over the student/alumni. The interviewer should also be a person who is capable of facilitating an open discussion without passing judgment on student responses or leading the student to make specific responses.

• If possible, a small group of interviewers (two to four) should collaborate on the interview process. Using multiple interviewers allows interviewers to be assigned to interviewees they do not know. It also provides a range of interviewer backgrounds, which can broaden the interpretation and understanding of the findings (this is one form of triangulation).

• It is important that one person act as a "point person" for the entire interviewing process. This person will coordinate all of the activities and serve as a conduit for information and communication.
3. Should we conduct individual or group interviews?

Some factors to consider when contrasting individual and group interviews are shown below.

Which to use: focus groups or in-depth interviews?

Group interaction
– Use focus groups when: Interaction of respondents may stimulate a richer response or new and valuable thought.
– Use in-depth interviews when: Group interaction is likely to be limited or nonproductive.

Group/peer pressure
– Use focus groups when: Group/peer pressure will be valuable in challenging the thinking of respondents and illuminating conflicting opinions.
– Use in-depth interviews when: Group/peer pressure would inhibit responses and cloud the meaning of results.

Sensitivity of subject matter
– Use focus groups when: Subject matter is not so sensitive that respondents will temper responses or withhold information.
– Use in-depth interviews when: Subject matter is so sensitive that respondents would be unwilling to talk openly in a group.

Depth of individual responses
– Use focus groups when: The topic is such that most respondents can say all that is relevant, or all that they know, in less than 10 minutes.
– Use in-depth interviews when: The topic is such that a greater depth of response per individual is desirable, as with complex subject matter and very knowledgeable respondents.

Data collector fatigue
– Use focus groups when: It is desirable to have one individual conduct the data collection; a few groups will not create fatigue or boredom for one person.
– Use in-depth interviews when: It is possible to use numerous individuals on the project; one interviewer would become fatigued or bored conducting all interviews.

Extent of issues to be covered
– Use focus groups when: The volume of issues to cover is not extensive.
– Use in-depth interviews when: A greater volume of issues must be covered.

Continuity of information
– Use focus groups when: A single subject area is being examined in depth and strings of behaviors are less relevant.
– Use in-depth interviews when: It is necessary to understand how attitudes and behaviors link together on an individual basis.

Experimentation with interview guide
– Use focus groups when: Enough is known to establish a meaningful topic guide.
– Use in-depth interviews when: It may be necessary to develop the interview guide by altering it after each of the initial interviews.

Observation by stakeholders
– Use focus groups when: It is desirable for stakeholders to hear what participants have to say.
– Use in-depth interviews when: Stakeholders do not need to hear firsthand the opinions of participants.

Logistics
– Use focus groups when: An acceptable number of target respondents can be assembled in one location.
– Use in-depth interviews when: Respondents are geographically dispersed or not easily assembled for other reasons.

Cost and training
– Use focus groups when: Quick turnaround is critical and funds are limited.
– Use in-depth interviews when: Quick turnaround is not critical and the budget will permit higher cost.

Availability of qualified staff
– Use focus groups when: Focus group facilitators need to be able to control and manage groups.
– Use in-depth interviews when: Interviewers need to be supportive and skilled listeners.

Source: User-Friendly Handbook for Mixed Method Evaluations, Directorate for Education and Human Resources, Division of Research, Evaluation and Communication, National Science Foundation. August 1997, p. 3-11.
4. How many interviews should be conducted? Should we select a sample?

If you intend to conduct individual interviews, you need to conduct enough interviews that similar responses or patterns begin to occur. This can usually be accomplished with fifteen to twenty individual interviews. Because some people may choose not to participate in the interviews, and others who do agree may not actually show up for the interview, we suggest that you may need to ask about 25 people to participate in order to accomplish 20 interviews. Since most programs do not have the resources to interview every student, alumnus, or employer, you will likely need to ask a sample to participate in the interviews.
Some suggested ways to select a sample to interview are presented below, with a small illustrative sketch after the list.

• Invite a percentage: If you want to select 25 students out of 50, for example, you can choose every other name on the student roster.

• Invite everyone in the targeted group: You can ask everyone (through posters, emails, or phone calls) in the program to participate in an interview, then wait to see how many respond and confirm interviews with the first 25 who respond to your request. This is an efficient way, but it may not result in a very representative sample.

• Invite a representative sample: For a sample to be representative of the entire population, your sample should reflect proportions of the entire population based on criteria you identify as important. These criteria will vary for different groups; however, some of the most commonly used criteria are gender, ethnicity, GPA, and length of time in the program. For example: an entering class in the School of Pharmacy had 100 students who were 60% female, had an average GPA of 3.2, and half of whom had completed their first two years at UW-Madison. The evaluators purposely selected a sample of twenty students who were 60% female, had a range of GPAs above and below 3.2, and half of whom had attended UW-Madison for 2 years.
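The sketch below illustrates the first and third strategies with Python's standard library; the roster and strata are hypothetical stand-ins for real student records.

```python
# Hypothetical roster; in practice this would come from your student records.
import random

roster = [f"Student {i:02d}" for i in range(1, 51)]  # 50 students

# Invite a percentage: every other name on the roster (25 of 50).
every_other = roster[::2]

# Invite a representative sample: draw proportionally from strata.
# Here the first 30 names stand in for one stratum (e.g., 60% female).
stratum_a, stratum_b = roster[:30], roster[30:]
sample = random.sample(stratum_a, 12) + random.sample(stratum_b, 8)  # 60/40 split of 20
```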
5. How should we invite students/people to interview?

Now that most students have e-mail access, this seems to be a good medium for reaching students because it is easy for the respondent to reply. Allow several days for people to check e-mail messages. Telephone contact is tedious and often requires back-and-forth messages. If you plan to do phone interviews, call the student and ask if they will agree to an interview, and then set a convenient date and time when you can call them back.

6. Should we use a written pre-survey?

Most programs we have worked with report that it is beneficial to use a short pre-survey that students either complete quickly at the beginning of the interview or bring with them, completed, to the interview. It provides a good beginning point for the interview discussion and conserves interview time that might otherwise be spent collecting this background information. Pre-surveys can be mailed out, faxed, or picked up by students at a central location prior to the interview.

7. Should we provide questions to interviewees ahead of time?

Interviewees report that it is helpful to have the questions ahead of time. Providing questions ahead of time may take away any anxiety about interviewing that some interviewees may feel. It also allows interviewees time to prepare thoughtful answers, which can improve the quality of the responses you receive. Questions can be sent to the interviewees over e-mail at the same time you confirm the interview time, or made available at the unit office.

8. When should we interview?

An agreed-upon timetable that details the time frames for the interview process should be created. If possible, try to conduct all interviews within a concentrated two-week to one-month period. Student interviews should be done early to mid-semester and be completed by about the 11th week of the semester; toward the end of the semester, students are less likely to participate. Try to avoid interviewing during holiday seasons such as Christmas and Easter, when people are extremely busy, or during summer, when many people take vacations. If you are conducting phone interviews, it may be necessary to make calls in the evenings. We suggest that you schedule one-hour blocks for interviews with a half hour between interviews (e.g., interview from 11:00-12:00, and schedule the next interview to begin at 12:30).

9. How can we practice interviewing?

It is useful for people who are not experienced at interviewing to practice one or two interviews prior to conducting actual interviews. Practice allows the interviewer to become familiar with recording equipment, to practice setting a person at ease, to practice pacing so the interview is completed in the allotted time, and to try out questions that may be sensitive or difficult to ask.

10. Where should interviews be conducted?

Ideally you should choose an infrequently used, private location that will have minimal interruptions. The location should be private and quiet enough to hear respondents, and it should be comfortable, easily accessible, and able to be equipped with audio recording equipment. If a group of interviewers is collaborating, it is useful to choose a location where you can leave equipment in a box. Seating arrangements should encourage involvement and interaction and break down the status and power relationships that may exist when faculty or deans interview students. If possible, place seats side-by-side instead of having the interviewer sit behind a desk. Group interviews are best done around a table so that everyone can see and hear each other.
11. Should we tape record or not? Should we transcribe or not? There are three approaches to recording interview data, each with advantages and disadvantages as described below.
Approaches to Recording Interview Data: Advantages and Disadvantages

Tape record and transcribe verbatim
Description: Word-for-word transcription is complete. This requires resources and time, but is valuable when respondents' own words and phrasing are needed.
Advantages: Completeness, and the opportunity it affords the interviewer to remain attentive and focused during the interview. Allows other interviewers to read the transcript and collaborate on analysis.
Disadvantages: The amount of time and resources needed, and the inhibitory impact tape recording has on some participants. It is essential that interviewees are assured of confidentiality and that permission to tape is obtained.

Take notes, tape record, but do not transcribe
Description: This approach draws on the notes taken by the interviewer. As soon as possible after the interview, the interviewer listens to the tape to clarify certain issues and to confirm that all main points have been included in the notes.
Advantages: Recommended when resources are scarce and when the results must be provided in a short period of time.
Disadvantages: The interviewer must frequently listen, talk, and write at the same time, a skill that is hard for some to achieve.

Take notes but do not tape record
Description: The interviewer takes detailed notes during the interview and draws on memory to expand and clarify the notes immediately after the interview.
Advantages: Useful if time and resources are short, results are needed quickly, and evaluation questions are simple. Note expansion saves time and retains all the essential points of the discussion.
Disadvantages: Where more complex questions are involved, note taking alone does not allow one to document all of the intricate relationships and descriptions, and the interviewer may be more selective or biased in what he or she writes.
12. How can we assure confidentiality? A goal of interviewing is to gain adequate information about your program. Most interviewees are more inclined to participate in interviews if they know that what they say will remain confidential, if they believe there will be no negative repercussions for their opinions, and if they are assured that their interview tape will be used for analysis purposes only. Some ways to address these concerns are to use a consent form, assign ID numbers instead of student names to the audiotapes, remove the tapes from the interview area promptly after the interview, and design the analysis phase so that only the transcriptionist or interviewer listens to the tape and only one or two people read the transcripts. After analysis, completed audiotapes and typed transcripts should be stored in a safe location. Although a consent form is optional when doing program assessment activities, it is considered good practice because it documents that procedures were communicated openly and notifies interviewees about the process and confidentiality. A simple way to assign ID numbers is sketched below.
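This is a minimal Python sketch of the ID-number suggestion above; the helper name, ID format, and file name are all hypothetical. The idea is that tapes and transcripts carry only the ID, while the name-to-ID key is stored separately and securely.

import csv

# Hypothetical helper: assign sequential ID numbers to interviewees so
# recordings and transcripts never carry student names. The key file
# linking names to IDs should be stored apart from the tapes.
def assign_ids(names, key_file="interview_key.csv"):
    ids = {name: f"INT-{i:03d}" for i, name in enumerate(sorted(names), 1)}
    with open(key_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name"])
        for name, code in ids.items():
            writer.writerow([code, name])
    return ids

labels = assign_ids(["Pat Lee", "Sam Ruiz"])
print(labels["Pat Lee"])  # INT-001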
13. What problems might we anticipate? Some problems you might anticipate are summarized below.

Anticipated Problems and Solutions

Interviewee does not show up: Some interviewees will not show up, or, if you are doing phone interviews, will not be available at the time you have arranged to call them. It is appropriate to try to reschedule the interview, but discontinue efforts after one more contact.

Interviewee is very uncomfortable talking about issues or becomes emotional: Depending on the types of questions, interviewees may be uncomfortable discussing certain topics, or some may become emotional and cry or withdraw. If this occurs, tell the interviewee it is OK. If you are taping, offer to turn off the tape recorder, and offer to take a few minutes' break. Ask if the interviewee would like water. Take a break from questioning or ask whether you should proceed.

You run out of materials: Have extra materials on hand: copies of questions, paper and pencils, consent forms, batteries, and blank tapes.

Interviewee confides important information: The interviewee may reveal information related to issues such as harassment or unethical or illegal activity. In the event this occurs, refer the student to the appropriate office or counselor.

Equipment fails: Equipment failure happens. Have extra equipment on hand. Encourage interviewers to take some written notes and check the tape recorder during the session. After the interview is completed, the interviewer should also check the tape. If the machine did not work, the interviewer can create a written summary from notes while the discussion is still fresh in the interviewer's mind.
14. How does an interview differ from a survey? Interview questions differ from survey questions in that they should be open-ended rather than limited yes-no questions; for example, "What aspects of the program were most useful to you?" invites a richer response than "Were you satisfied with the program?" Their purpose is to elicit perspectives and opinions from the interviewee. Ideally, interview questions are developed with input from a cross section of stakeholders such as faculty, deans, staff, and students. An interview, rather than a paper-and-pencil survey, is selected when interpersonal contact is important and when opportunities to follow up on comments are desired. In-depth interviews are characterized by extensive probing and open-ended questions, and they are particularly appropriate when seeking an understanding of complex or highly sensitive subject matter. Typically the interviewer works from a guide listing the questions or issues to be explored and probes areas of particular interest; the guide helps the interviewer pace the interview and makes the process more systematic and comprehensive. Interviewers should encourage open responses, and it is important to capture respondents' perspectives in their own words. An interview is like a guided conversation in which the interviewer becomes an attentive listener. In contrast to a normal conversation, however, an in-depth interview is not intended to be a two-way exchange: the key to being a good interviewer is being a good listener and questioner, and it is not the interviewer's role to put forth his or her own opinions or perspectives.
Tips about how to analyze interviews Although interviews can provide a wealth of information, the analysis of interviews is challenging because responses are usually quite varied and complex, and interconnected ideas can flow throughout the entire interview. Your task throughout the analysis is to look for common and recurring ideas, issues, and themes across the set of interviews, and to recognize the range of perspectives that interviewees raise. When doing analysis, you must continually put aside your own perspective and be guided by what the interviewees said. Whether you work alone or with a small committee, and whether you work from verbatim transcriptions or written notes, the process of analysis is generally similar to that described below:
1. If interviewers used notes, have each one collect their notes together. If transcripts were completed, distribute these to the people who will undertake the analysis.
2. Read through the transcripts/notes for each interview and do the following: write a key word or phrase in the margin to capture the topic (such as advising or labs) along with a few words summarizing the perspective, noting both positive and negative factors. Highlight or underline student comments or phrases when they provide a good example or clear illustration of the interviewee's perspective. If you have time after reading the interview, write a short summary of the issues and perspectives at the very end of the transcript. Reread as needed.
3. Identify common issues and themes, then cluster and re-group the ideas. The analysis committee should hold meetings to review and summarize the perspectives presented in each transcript. The readers can summarize their ideas on a blackboard or flip chart, and patterns and common issues and perspectives will begin to be noticeable. The committee should cluster or group these issues, or try to capture the range of opinions if no common opinion is expressed. A continual process of re-grouping and synthesizing ideas and themes should occur. Quotations from interviewees that represent the prevalent perspectives can be identified (a small tallying sketch follows step 4).
4. Create a summary report.
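As a complement to steps 2 and 3, the margin keywords can be tallied across interviews to surface recurring themes. The Python sketch below is illustrative; the keywords are invented examples in the spirit of the "advising, labs" codes mentioned in step 2.

from collections import Counter

# Illustrative tally of margin keywords that readers recorded on each
# transcript; common themes surface at the top of the count.
margin_codes = [
    ["advising", "labs", "advising"],      # interview 1
    ["labs", "workload"],                  # interview 2
    ["advising", "workload", "advising"],  # interview 3
]

theme_counts = Counter(code for interview in margin_codes
                       for code in interview)
for theme, count in theme_counts.most_common():
    print(f"{theme}: noted {count} times")

A tally like this supports, but does not replace, the committee discussion: it shows how often a theme was coded, while the clustering and synthesis in step 3 determine what the theme means.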