Development of a Wellbeing Measurement Tool in the Community Service Sector
Prepared by Dr John Douglass Whyte Associate Professor Martyn Jones Project funded by Helen MacPherson Smith Trust April 2011
ACKNOWLEDGEMENTS
We acknowledge the traditional custodians of the lands on which Windermere Child and Family Services, RMIT University and the Centre for Excellence in Child and Family Welfare stand—the Peoples of the Kulin Nation, including Community members, Elders and Respected Persons, both passed and present. We acknowledge and gratefully thank the clients of Windermere Child and Family Services who participated in this project. Without their trust and openness this study would not have been possible. This project was funded by the Helen MacPherson Smith Trust.
RESEARCH TEAM Ms Cheryl DeZilwa Ms Serap Ozdemir Mr Guy Robbins Windermere Child and Family Services Dr John Douglass Whyte Associate Professor Martyn Jones Professor Catherine McDonald Dr Andrea Simpson Social Work Discipline RMIT University Ms Rebecca Jolly Centre for Excellence in Child and Family Welfare, Inc
CONTENTS

1. Introduction
2. Background and rationale
   2.1 Introduction to Windermere Child and Family Services
       2.1.1 Structure, client base, programs
       2.1.2 WC&FS's position within the human service context in Victoria
   2.2 Shifting emphases on 'accountability' and 'effectiveness'
       2.2.1 Growing valuing of 'evidence' of program effectiveness
       2.2.2 Looking at more than activity outputs
   2.3 The Pilot Project and Complementary Survey: purpose and scope
3. Conceptualisation: Wellbeing and Evaluation
   3.1 Measuring effectiveness: conceptually broad, yet specifically sensitive
       3.1.1 Encompassing aspects of the human condition
   3.2 Various approaches/measures of conceptualising 'wellbeing'
       3.2.1 Factors impacting/explaining 'wellbeing'
   3.3 Devising a single 'aggregate' instrument
       3.3.1 Enhancing validity
       3.3.2 Minimising threats to reliability
       3.3.3 Enhancing potentials for statistical analyses
   3.4 Considering constraints in operationalising such a measure
       3.4.1 Identification and recruitment processes for potential participants
       3.4.2 Administering the initial measure
       3.4.3 Administering multiple time series
4. Operationalisation of the Pilot Project
   4.1 Determining pilot programs
   4.2 Identifying program-specific particulars
       4.2.1 Cognition/age related
       4.2.2 Communication/administration of measures
       4.2.3 Trialing icon images
   4.3 Data collection – administering the instrument
   4.4 Data analysis
       4.4.1 Access to existing Windermere client data
       4.4.2 Procedures to ensure ongoing confidentiality
   4.5 Project findings
   4.6 Limitations and challenges
       4.6.1 Impact of staff turnover
       4.6.2 Pragmatics of evaluating service programs
5. The Complementary Survey: Evaluation Practices in C&FS
   5.1 Rationale
   5.2 Approach and methodology
   5.3 Data collection
   5.4 Findings
6. Discussion and Recommendations
   6.1 Themes from the Pilot Project
   6.2 Themes from the Complementary Survey
   6.3 Recommendations
   6.4 Summary
References
Appendices
   A Windermere Wellbeing Survey
   B WC&FC Invitation to Participate
   C Plain Language Statement
   D
1. INTRODUCTION
How does one achieve a sense of 'wellbeing'? Does wellbeing change over time, and can this change be measured? If so, for those of us involved in social service provision, how can we ensure that our service contributes in a meaningful way to our users' sense of wellbeing? These are some of the questions this report seeks to address by exploring the notion of wellbeing and applying it in evaluating service provision.
This report is concerned with whether wellbeing measures can be extended in application and applied to assessing the effectiveness of a service, specifically, Windermere Child and Family Services. Windermere, a Melbourne-based welfare agency, caters primarily to disadvantaged children, families, and individuals. In spite of the varied nature of Windermere's work, its programs have the common purpose of contributing to service users' wellbeing. The agency's stated main objective is "to improve wellbeing in children, families and communities by helping to realise their potential, building resilience and connecting people to the community" (Windermere, 2010).
The question posed by this report is twofold: first, can a quantitative measurement of Subjective Wellbeing (SWB) be used as an outcomes measure for service delivery and second, will this measurement show direct links to the efficacy of service provision?
2. BACKGROUND AND RATIONALE
2.1 Introduction to Windermere Child & Family Services

2.1.1 Structure, client base and programs
Windermere Child and Family Services is a well-established welfare agency situated in Melbourne's south-eastern suburbs which, together with related agencies, provides for a population of approximately 1.2 million people. The agency provides its services from 20 locations, with three major centres in Narre Warren, Cranbourne and Pakenham. The main areas Windermere services are the City of Casey, the City of Greater Dandenong, and the Shire of Cardinia. Demand for Windermere's services is expected to increase, as the area it services lies in one of Melbourne's fastest developing growth corridors. For instance, by 2031 the City of Casey alone is expected to accommodate around 148,000 additional people (GAA, 2010). An overview of these districts compared to the general population of Victoria is provided in Table 1.
Greater Dandenong, in particular, is an area of high social and economic disadvantage, with a large number of its residents living on low incomes, together with high unemployment levels and low educational attainment. Of note is that all three geographical areas that Windermere services rank lower than the greater Victorian population in educational level, jobs in skilled industries, and perception of safety. Subjective Wellbeing (SWB), the main focus of this paper, is also shown in Table 1 and refers to a percentage score in which respondents were asked to rate their satisfaction with their life as a whole across a variety of domains. The data were collated from the 2007 Community Indicators Victoria Survey using the Australian Unity Wellbeing Index (McCaughey Centre, 2007). The average Personal Wellbeing Index for persons living in Victoria was 76.4%, compared with 72.4% in Greater Dandenong, 76.8% in the City of Casey, and 77.3% in the Shire of Cardinia.
Measure                        Casey    Greater Dandenong   Shire of Cardinia   Victoria
Weekly income ($)              $610     $461                $606                $600
Non-school qualification (%)   45.2%    35.8%               47.4%               50.7%
Employed (%)                   65.6%    51.1%               66.6%               60.9%
Skilled occupation (%)         48.8%    47.9%               52.0%               56.3%
SWB (%)                        76.8%    72.4%               77.3%               76.4%
Self-reported health (%)       54.6%    43.0%               58.3%               54.3%
Perception of safety (%)       60.5%    47.4%               62.8%               66.5%

Table 1. Overview of the locale Windermere services, which consists of the City of Casey, the City of Greater Dandenong, and the Shire of Cardinia. For comparison, scores for the greater population of Victoria are shown in the far right-hand column. Areas which fall significantly below that of the general state population are highlighted. (Data from http://www.communityindicators.net.au. Reference period 2006-07.)
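The SWB percentages in Table 1 come from Personal Wellbeing Index style scoring, in which respondents rate their satisfaction with several life domains and the ratings are averaged and expressed as a percentage of the scale maximum. A minimal sketch of that conversion, assuming the usual 0-10 response scale (the domain ratings below are illustrative, not Windermere or Community Indicators Victoria data):

```python
# Sketch of a Personal Wellbeing Index style score: average the 0-10 domain
# satisfaction ratings and convert to a percentage of the scale maximum.

def pwi_score(ratings):
    """Return the mean of 0-10 domain ratings as a percentage of scale maximum."""
    if not ratings:
        raise ValueError("at least one domain rating is required")
    for r in ratings:
        if not 0 <= r <= 10:
            raise ValueError(f"rating {r} is outside the 0-10 scale")
    return 100 * sum(ratings) / (10 * len(ratings))

# Hypothetical respondent: satisfaction with standard of living, health,
# achievements, relationships, safety, community connection, future security.
ratings = [8, 7, 7, 9, 6, 7, 7]
print(round(pwi_score(ratings), 1))  # 72.9
```

A respondent averaging 7.3 out of 10 across domains thus scores about 73%, which is the scale on which the Table 1 figures (72.4% to 77.3%) can be read.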
Windermere caters to children, families, and individuals in areas of physical access, emotional and mental health, sexual abuse, family and domestic violence, housing, disability, and assisting families in caring for children with disabilities and/or developmental delay (Windermere, 2009). Programs include allied health services, counseling, general advice, education, advocacy, in home care for children and families, housing support, and group work (Windermere, 2009).
The agency has a number of collaborative relationships with educational, government and welfare agencies including the Department of Human Services, the City of Greater Dandenong, Relationships Australia, the Victoria Police, Melbourne University, Monash University, and RMIT University.

2.1.2 WC&FS within the human service context in Victoria
Windermere currently operates as a non-profit organisation. In Australia, the non-profit sector began life as privately funded charities created by churches and the benevolent wealthy (Lyons, 2001), many of which were subsidised by government funds. Historically, though, the Australian Government has not itself provided social services directly, preferring these to be provided by the non-profit sector. In turn, individual households in need of these services purchased them from non-profit organisations, and social welfare organisations were created to assist those unable to pay due to economic hardship.
The non-profit sector, also referred to as the third, voluntary, social economy, or civil sector, is an eclectic mix of organisations. Lyons (2001) defines non-profit organisations in terms of what they are not, i.e. they are not in the public or business sectors. In general, the non-profit sector is independent of government, and whilst non-profit organisations can make a profit, this is not their main purpose; in addition, under Australian law any profit made by the organisation cannot be distributed. In general, all share a common purpose of being community-driven whilst crossing domains as diverse as health, education, employment, recreation, and religious institutions.
At the time of writing, the Australian non-profit sector consisted of approximately 600,000 organisations, 59,000 of which were considered to be making a significant contribution to the economy (PC, 2010). Although the monetary sources of the non-profit sector are varied, approximately 20,000 organisations are heavily reliant on the government for income (PC, 2010). These are predominantly in the human services field, including health, housing, community, child welfare, and disability services.
Organisations also receive indirect funding support from the government via tax concessions.
Although Windermere is funded by both the Australian federal and state governments, as well as private and corporate donations, the agency is largely dependent on government funding for its existence. In 2009, federal and state government funding to Windermere was just over $11.5 million, accounting for 88% of Windermere's monetary income (Windermere, 2009). The programs receiving the majority of this funding were the family day care program, with 30% of expenditure in 2008-09, and disability services, with 28% of expenditure in 2008-09 (Windermere, 2009).
2.2 Shifting emphases on 'accountability' and 'effectiveness'

Notions of accountability and effectiveness in the provision of public sector human services have seen important shifts over the past decades. Moving from a reliance on quantifiable program activity and efficiency measures, recent trends have moved towards addressing services' subjective impact on clients. This movement has been informed and underpinned, in part, by two questions:
The first question: given that in the formation of any program or service there is a presumption that the effort is well-intentioned, is it sufficient to focus on program efficiency, such as cost-benefit ratios or client throughput, as a primary measure of accountability and effectiveness? By default, empirical evidence has been considered more valued and reliable when it identifies explicit factors by which intra- and inter-program measures can be made and compared. But the actual impact on the client strays into a more problematic realm. On one hand, it is possible to measure shifts in particular conditions, such as income, specific health symptoms and the number of involvements with social services; it is much more difficult to measure subjective experiences, including emotional, social and existential conditions or attitudes. How does one measure those attitudes and experiences that may themselves be more accurate, if elusive, indicators of a program's impact?

This leads to the second question: if we are to attempt to understand and consider those more ephemeral qualities of the human condition, how can we do so in a way that is not seen simply as an attempt to generalise an individually subjective response to broader client and program considerations?

Attempts to address the qualitative and quantitative aspects of both program and client conditions have seen the increased development and application of what has become known as 'evidence-based' practice and evaluation. As described in greater detail next, this has involved the reconciliation of sometimes competing priorities and emphases.

2.2.1 Growing valuing of 'evidence' of program effectiveness
As noted by Bloom, Fischer and Orme (2009), the evolution of evidence-based practice (EBP) across the health and human service professions, including medicine, nursing, social work, psychology and public health, is one of the most important developments in addressing those professions' practice and program evaluation. Both an ideology and a method, EBP is philosophically underpinned by the principle that clients deserve to be provided with the most effective programs and interventions possible. Pragmatically, it reflects the practitioner's commitment to make use of all possible means to identify and apply the most effective evidence related to any particular issue or problem, at all points of planning, conceptualising, operationalising and client contact.
The challenges facing the practitioner who attempts to incorporate EBP are varied and interwoven, in part because the methods of identifying and locating the most effective practices and interventions extend beyond those incorporated in empirically-based practice (Bloom, Fischer & Orme, 2009). Nowhere is this more evident than in the application of EBP principles to broadly interdisciplinary and multi-programmatic human service endeavours.

2.2.2 Looking at more than activity outputs
The first challenge is to locate and identify as many studies of effectiveness related to a particular problem as possible, and then to critically assess the studies identified for evidence of validity and utility for the problem at hand. These efforts are made more daunting in that they require the practitioner to search for studies embodying the most rigorous protocols while remaining sensitive to socially and culturally relevant considerations.

The second great challenge is that of conceptual integration. The success of efforts informed by these principles of EBP depends very much on the sensitive integration of two differing types of research: single-system designs as a core of evaluation-informed practice, and experimental and/or quasi-experimental designs that serve to inform practitioners' decisions about the most effective procedure for a particular situation. But reliance on interventions informed by the 'highest' EBP level of multiple randomised, controlled studies is no guarantee of effective service delivery with every client. What must also be considered are the characteristics of both clients and practitioners, either or both of which can have profound effects on outcomes. Agencies such as Windermere Child and Family Services, in particular, can encounter situations where cultural, socioeconomic, ethnic and various other demographic and interpersonal variables impact intervention results, regardless of the rigour of the EBP protocols employed. To minimise such occurrences, the practitioner is encouraged to incorporate regular feedback on the progress of the services being provided and to make changes accordingly. Such feedback has been shown to be a characteristic of evaluation-informed practice that can reduce deterioration and enhance overall outcomes (Bloom, Fischer & Orme, 2009).
Despite the challenges, several successful protocols to integrate the factors have been developed. One such model is the PRAISES model, shown in the adjacent figure. While at first glance daunting, the flowchart is nothing more than a compilation of the multiple steps in which many practitioners already engage.
Presented as a systematized flowchart intended to give a structured approach to those steps, the model illustrates characteristics and protocols that enhance evidence-based practice. These include:

o Empirically-based, since it incorporates both meanings of evidence-based practice: the use of results of classical evaluation research to inform the selection of demonstrably effective interventions, and the systematic evaluation of the effects of those interventions.
o Integrative, insofar as it integrates all practice and evaluation activities, with no distinctions made between evaluation and practice.
o Eclectic, since it is a framework based on the perspective that the knowledge base of practice in the helping professions is [1] pluralistic, in the sense that knowledge is derived from multiple sources, and [2] eclectic, in that it uses clear, precise and systematic criteria to select knowledge.
o Systematic, in that it clearly identifies the various phases of practice, organizing them in a logical sequence.
o Accountable, in that it reveals the entire practice process, inviting scrutiny by others.
o A way of thinking, since it illustrates and encourages an approach grounded in the ethics and values that underlie the practices of helping professions.
2.3 The Pilot Project and Complementary Survey: purpose and scope

The pilot project that examines the Windermere Child & Family Services programs in this research attempts to utilise all of the evidence-based practice and evaluation features,
protocols and processes described above, incorporating, integrating and coordinating evaluation and practice aspects in its pursuit of evidence-based practice and evaluation. But this pilot program only addresses such attitudes and attempts within the one organisation. Unaddressed are questions about the attitudes and efforts of other agencies and organisations within the Victorian human services sector: How important is the incorporation of program evaluation features across other agencies? Who determines and/or implements such efforts? How are such efforts conceptualised and operationalised?

By way of addressing these and other questions related to the pursuit of evidence-based evaluation and practice in the broader context, this report includes an analysis of a survey conducted by the Centre for Excellence in Child and Family Welfare (CFECFW), in partnership with Windermere Child & Family Services and RMIT University Social Work. This survey looked at the types of evaluation currently being utilised in the child, youth and families welfare sector in Victoria. Drawing upon 91 responses from across 21 different organisations, the CFECFW survey focuses on the exploration of evaluation processes that relate to the impact a service has on client outcomes, and provides an important background context within which the WC&FS pilot program can be understood. Conceptual and operational contrasts and commonalities between the CFECFW survey and the WC&FS pilot are analysed throughout this report. A conceptual consideration common to both is the notion of individual wellbeing as an important indicator of program effectiveness and impact. This is discussed next.
3. CONCEPTUALISATION: WELLBEING AND EVALUATION

3.1 Measuring 'wellbeing': conceptually broad, yet specifically sensitive

3.1.1 Encompassing aspects of the human condition
Wellbeing is a multi-faceted concept concerned with discovering the ingredients needed to live a happy and fulfilling life. A more precise definition has remained elusive, shaped by a variety of disciplines dating back to the writings of the ancient Greek philosophers. What most experts on the topic share is the view that wellbeing is something positive worth striving for, with the central theme of wellbeing research being the desire to somehow improve the world for those living in it.

As just described, wellbeing is not a new concept. It has been studied for centuries and from a variety of disciplines, including philosophy, psychology, medicine, economics, and the social and physical sciences. The notion of wellbeing is used interchangeably with terms such as happiness, quality of life, standard of living, and life satisfaction. For the sake of brevity, the following section provides an abridged version of a more extensive literature review carried out on subjective wellbeing.
In ancient Greek philosophy, Aristotle defined wellbeing as happiness. He stated that there are many good things in life, such as health, pleasure and family, however, the ‘highest good’ human beings can attain is happiness (Annas, 1993). He believed that wellbeing could best be achieved by carrying out worthwhile activities, such as intellectual pursuits. At the same time, Aristotle acknowledged that in order to have wellbeing one must possess certain resources, in particular, positive relationships with significant others and a source of income. Similarly, he proposed that wellbeing was more difficult to achieve if a person experienced negative events, for example, the loss of children or good friends (Annas, 1993).
Aristotle’s writings form the basis of the modern-day capability approach, made famous by economist Amartya Sen and philosopher Martha Nussbaum (Nussbaum & Sen, 1993). The approach focuses on what people are effectively able to do and to be, that is, on their capabilities. Wellbeing for Sen and Nussbaum is achieved by allowing people the freedom to choose the kind of life they would like to lead. Both argue that the focus of policy decisions should be on achieving wellbeing and the removal of barriers that restrict wellbeing (Nussbaum & Sen, 1993). The theory rejects an equal distribution of resources or goods on the grounds that this is too simplistic. It argues that different people have different needs in order to achieve a good life. For example, a person with a disability would require more resources than his/her peers to achieve a life of the same standard (Nussbaum, 1999; 2000). Wellbeing is therefore what a person is capable of doing, her access to resources and her opportunities to use her abilities.
Nussbaum (2000) in particular argues that wellbeing is a universal concept, and most of the critique of her work concerns her failure to acknowledge the role of local context (Clark & Gough, 2005). It needs to be acknowledged that the religious, spiritual, and cultural beliefs of a community can play a vital role in an individual's access to resources and thereby their general sense of wellbeing.
The differences in physical, cultural, social, and economic environments across communities and countries make a global wellbeing measure especially difficult. Nonetheless, the social indicator movement of the 1970s emerged with the aim of developing universal standards, or indicators, for wellbeing. The movement was founded in the belief that an increase in standard of living would contribute to an improved quality of life. Rather than measuring wellbeing at the individual level, indicators are recorded across communities and countries (OECD, 2009). These indicators are objective measures of wellbeing, such as the number of people in active employment, life expectancy, literacy rates, and reported violent crime (OECD, 2009). Brasher & Wiseman (2007) have added that wellbeing indicators should include the importance of our relationships not only with each other but also with our external environment, such as our perception of safety and our ability to participate in community.
In medicine and psychiatry, wellbeing is more often defined in terms of quality of life or, more specifically, 'health-related quality of life' (HRQoL). Quality of life (QoL) is a subjective measurement tool by which the individual is asked to rate their perceived
quality of life on Likert-type scales from poor to good. The field developed as a means of incorporating the patient's view in medical decisions, and is concerned with measuring a person's physical and emotional functioning (Katschnig, 2006). Quality of life instruments are now predominantly used as outcome measures in clinical trials and health research, as well as to assist in ethical dilemmas as pharmaceutical options for diseases continue to progress.
The main limitation of HRQoL measures is that they are restricted to health status and do not incorporate contextual elements of life, such as income, education, freedom, and social inclusion. Physical and mental health are only one part of the wellbeing equation, and it would be remiss not to incorporate the spiritual and societal factors that contribute to experiencing a good and meaningful life (Katschnig, 2006).

In summary, wellbeing is a loose term and means many things to different people. Overall, it is the values and experiences we hold dear, our access to resources, a feeling of purpose, the freedom to pursue goals, bodily health and integrity, and the ability to build positive relationships with others and the world in which we live.
3.2 Various approaches/measures of 'wellbeing'

3.2.1 Factors impacting/explaining 'wellbeing'
Wilson's (1967, p. 294) description of the happy person was one of the first attempts to categorise the factors that contribute to experiencing SWB. He stated that the happiest people are "young, healthy, well-educated, well-paid, extroverted, optimistic, worry-free, religious, married with high self-esteem, job morale, modest aspirations, of either sex and of a wide range of intelligence." Just over four decades later, research indicates that Wilson got many, but not all, of these indicators of SWB correct.
The main theories of why some people experience a high level of SWB whilst others do not are complementary to describing what factors impact on SWB. In general, theories of SWB can be divided into (i) 'top-down' approaches and (ii) 'bottom-up' approaches.
Top-down approaches view SWB as the product of individual internal traits and psychological processes, and maintain that we have a pre-determined ability to experience SWB that is largely independent of our environment. Top-down theorists emphasise the individual factors that influence SWB, including:

o genetic disposition, which has been shown to account for as much as 50% of the individual variance in SWB scores (Healey, 2008);
o cognitive processes under our voluntary control, such as conscious lifestyle and activity changes, positive thinking, and creating and pursuing meaningful goals (Sheldon & Lyubomirsky, 2004);
o personality type, with individuals reporting high scores on SWB ratings corresponding to the personality characteristics of extroversion, optimism, the ability to set reasonable goals and work towards them, and the ability to not dwell on past negative events (Diener et al., 1999);
o the ability to adapt our perceptions of wellbeing over time as we experience changes in our life circumstances (Cummins, 2001; 2002); and
o the ability to compare ourselves with others and adjust our expectations of wellbeing accordingly (Calman, 1984).
In contrast, bottom-up approaches give weight to the external social and situational forces shaping our perception of SWB. Bottom-up theories view SWB as the accumulation of a series of happy moments: the happy person is one who has experienced many pleasurable events. The theory also puts forward that our achievements are dependent on the opportunities in our environment. Hence, bottom-up theories hold that wellbeing is the combination of experiencing satisfaction in a number of particular domains, such as family life, intimate relationships, lifestyle, health, financial and work situation, spirituality, and housing. Bottom-up theories emphasise the social factors shaping our abilities to achieve wellbeing. The following bottom-up factors have been shown to have positive influences on subjective wellbeing:

o monetary income, which is strongly correlated with SWB for those who are unemployed or earning below the poverty line (Healey, 2008); for those living on moderate-to-high incomes sufficient to meet their needs, income becomes less important in life satisfaction ratings when compared to other factors (Healey, 2008; Helliwell, 2003);
o perceived good health (Diener et al., 1999; Helliwell, 2003);
o marriage (Diener et al., 1999; Helliwell, 2003) and positive relationships with extended family, friends and neighbours, as well as connections with the wider community (Argyle, 1999; Helliwell & Putnam, 2005; Jordan, 2007);
o partaking in a religious faith (Diener et al., 1999; Helliwell, 2003);
o living in a community with perceived trust, freedom from violence and security (Diener et al., 1999; Helliwell, 2003);
o age, with Helliwell (2003) finding that those in the middle years of their lives (ages 35-54 years) reported lower levels of SWB than the young and those over 65 years of age, assuming participants were equally healthy; and
o access to nature (Burns, 2005).
Researchers in the field now believe that top-down and bottom-up theories are not mutually exclusive (Diener, 2008). Rather, SWB is a complex combination of both approaches, incorporating physical, mental, emotional, social and environmental factors (Sheldon & Lyubomirsky, 2004). Whilst research often reports general population trends, we need to be aware that individual values, interpretations and experiences create unique ideas of what a happy and satisfied life would look like. At the time of writing, it can be said that the person with the greatest chance of achieving wellbeing has a positive temperament, is optimistic, does not dwell on negative events, is living in an economically developed society, has access to positive social connections, sufficient resources to progress towards self-determined goals, access to nature, and perceives their external environment to be safe.

With an understanding of the approaches to conceptualising SWB, it can be further understood that while social indicators, such as median income levels and crime rates, are one way of quantifying SWB, they present only a portion of the complete picture and are inadequate for defining wellbeing. Outwardly observable behaviours thought to be associated with wellbeing, such as unconscious body language or an 'energetic, friendly' appearance, have not been found to be adequate correlates of self-reported wellbeing (Strack et al., 1991). Ratings by peers and experts on others' wellbeing have been found to be only weakly correlated with self-reports (Strack et al., 1991; Pavot, 2008). Thus, it can be argued that only the individual is able to provide the information needed to determine what they feel about their own life (Diener et al., 1999).
As more overt measures have thus far proved unsuccessful, subjective self-reports are currently the only viable means of providing information about a person's perceived sense of wellbeing.
3.3 Devising a single 'aggregate' instrument
SWB is typically measured by means of a single-measure survey questionnaire. Respondents are asked to rate their feelings and perceptions about themselves and their life on a rating scale from very poor to very good. SWB is an umbrella term which incorporates affective responses (such as mood and emotions) as well as cognitive evaluations of life circumstances (life satisfaction). Affective responses are transitory, ongoing states of subjective experience.
In contrast, life satisfaction is concerned with how content people are with various aspects of their lives, such as their health and income, and as such is usually more stable than affective responses. Although mood is pertinent to SWB, researchers are more interested in longer-term effects on SWB. Life satisfaction ratings are therefore generally more popular in research, as life satisfaction is viewed as the more stable component of SWB (Diener et al., 1999). Ultimately though, an individual's affective state will influence their responses, and it is for this reason that the measurement of SWB is concerned with both affective and cognitive responses.
Overall, the evaluation of SWB is concerned with the “positive mood and emotional states within an individual’s subjective experience and a cognitive evaluation of the conditions and circumstances of his or her life in positive and satisfying terms” (Pavot, 2008 p. 125).
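The two-component structure described above can be sketched in code. The following is a purely illustrative Python sketch, not part of the Windermere instrument: the item values, names and the equal weighting of the two components are assumptions made only to show how an affective subscore and a life-satisfaction subscore might combine into a composite SWB score.

```python
# Illustrative only: composite SWB as the mean of an affective subscore and
# a life-satisfaction (cognitive) subscore, each rated on a 1-5 scale.
# Item values and the equal weighting are hypothetical assumptions.

def subscale_mean(items):
    return sum(items) / len(items)

def composite_swb(affect_items, satisfaction_items):
    # Equal weighting of the affective and cognitive components.
    return (subscale_mean(affect_items) + subscale_mean(satisfaction_items)) / 2

affect = [3, 4, 3]        # e.g. mood ratings over the last month (hypothetical)
satisfaction = [4, 4, 5]  # e.g. satisfaction with health, income, relationships
print(round(composite_swb(affect, satisfaction), 2))  # prints 3.83
```

The equal weighting is a design choice for illustration; actual instruments weight and scale their components according to their own validation studies.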
In further considering how SWB is conceptualised and measured, then, social indicators such as median income levels and crime rates provide contextual framing only. As discussed above, neither observable social indicators, outward behaviours, nor ratings by peers and experts adequately capture wellbeing, leaving subjective self-report as the necessary means of providing information about a person's perceived sense of wellbeing.
In further considering the applicability of SWB measures to Windermere programs: despite SWB's broad applications, whether it can be applied to measure the effectiveness of a service such as Windermere is largely unknown. The closest-related research bearing on this question is the use of Quality of Life (QoL) measures in the mental health field. These have become increasingly popular as a measurement of treatment outcomes as well as for assessing the quality of mental health services (Carulla, 1999; Lasalvia & Ruggeri, 2006). Many studies have found correlations between subjective QoL scores and service satisfaction, with those individuals reporting high service satisfaction also reporting high perceived QoL (Holcomb et al., 1998; Ruggeri et al., 2001; 2005).
Oliver et al. (1996) summarises the appeal of selecting a QoL measure when evaluating a service:
o It places the service user at "the centre of service planning" (Oliver et al., 1996, p. 242),
o it serves as a useful way of monitoring and reviewing the user's life circumstances,
o it can be used in clinical practice to determine individual user goals and reflect upon personal achievements, and
o it is an intuitively well-received concept understood by a variety of audiences.
Overall though, within the scope of this literature review, there is surprisingly little information on the use of SWB in measuring service effectiveness. It is however positive to learn that subjective measures, such as QoL questionnaires, which contain elements of SWB, have been used effectively in service evaluation. The correlation between service satisfaction and QoL is particularly reassuring. The studies described above share some commonalities in that they evaluate a single program within a service, with a subject group that shares some characteristics, such as a diagnosed mental illness. Windermere differs from these reports in that its service spans a heterogeneous group of service users, as well as multiple programs and interventions. Finding an SWB measure that adequately assesses change across all these domains may prove to be a challenging task.
Considering the multitude of programs offered by Windermere, there is a need for a measure of effectiveness that is conceptually broad, but still specifically sensitive. In addition, the measure must encompass the physiological, psychological, social, cultural and existential aspects of the human condition. With this in mind, a longitudinal SWB rating was proposed across the domains of emotional wellbeing, physiological wellbeing, community engagement, cultural engagement, and spiritual worth. When designing the measure the following constraints were considered:
o that it provide a reliable, sensitive and valid measure over time,
o that it incorporate the diversity of Windermere's programs,
o that it not interfere unduly with program running, service user experience or worker experience, and
o that it not require extensive training to administer.
3.3.1 Enhancing validity
In examining issues of validity related to this project, threats are encountered in two general areas: the construct and nature of the instrument itself, and the processes by which the instrument is administered. Related to the construct and nature of the instrument itself, as described above, no single existing SWB instrument is as conceptually broad, as specifically sensitive, or as contextually focused on human services as this project requires. The first need, therefore, is to create such an instrument: one that can be administered efficiently and effectively by existing staff, all with a minimum of training. Attempting to create such an instrument from scratch, however, introduces a number of additional threats to the validity of the instrument, foremost among which are internal threats related to construct and face validity.
To minimize these threats to validity, a number of multiple-domain instruments already in use and normed across populations congruent with those of this study were identified. These included the SF-36v2 Health Survey (Ware et al., 1998), Community Wellbeing (Wiseman & Brasher, 2008) and the Personal Wellbeing Index-Adult (IWG, 2006). Within these surveys, evidence of validity related to wellbeing is demonstrated across a number of measures, applications and populations. Specifically, studies of the SF-36's validity generally support the intended meaning of high and low SF-36 scores as documented in the original user's manuals, with the measure's widespread use across a variety of applications providing evidence of content, concurrent, criterion, construct, and predictive validity (Ware et al., 1998). More broadly, the content validity of the SF-36 has been compared favourably to other widely used generic health surveys, with comparisons showing the SF-36 includes eight of the most frequently measured health concepts (Ware et al., 1998). Similarly, the construct validity of the Personal Wellbeing Index-Adult, specifically related to Australian data, has been examined across a number of multiple regression analyses. Too numerous to present adequately here, the results of these analyses are available on the IWG website (http://www.deakin.edu.au/research/acqol/instruments/wellbeing_index.htm).
3.3.2 Minimising threats to reliability
Just as threats to validity involve both the instrument and its administration, so too do threats to reliability. Regarding instrument reliability, the same steps taken to minimise threats to validity apply. As an example, the reliability of the eight scales and two summary measures of the SF-36 Health Survey, which conceptually inform the aggregate survey used in the pilot, has been estimated using both internal consistency and test-retest methods. Published reliability statistics have regularly exceeded the minimum standard of 0.70 recommended for measures used in group comparisons in more than 25 studies, with most having exceeded 0.80. The median reliability coefficients for each of the eight scales were equal to or greater than 0.80, with the exception of social functioning, which had a median reliability across studies of 0.76 (Ware et al., 1998). Procedures for minimising threats to reliability encountered in the administration of the survey were also identified.
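The internal-consistency standard cited above (reliability of at least 0.70 for group comparisons) can be illustrated with a minimal Cronbach's alpha computation. The respondent-by-item scores below are hypothetical values for illustration only, not SF-36 or project data:

```python
# Minimal Cronbach's alpha sketch: alpha = k/(k-1) * (1 - sum(item variances)
# / variance of total scores). Population variances are used consistently.
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents x items score matrix."""
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # item-wise columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 1-5 responses: five respondents x four items.
scores = [
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(scores), 2))  # prints 0.95, above the 0.70 standard
```

With real data, alpha would be computed per scale and compared against the 0.70 (group comparison) benchmark discussed above.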
These procedures include the training of all potential survey administrators, to reduce threats to inter-rater reliability, and the trialing of the instrument across various program staff.
3.3.3 Enhancing potentials for statistical analyses
The potential for statistical analyses of this survey is constrained by two primary factors: sample size and the levels of measurement of the variables. The resource limitations associated with this pilot survey, together with the uncertainty of numbers of available clients within the pilot time frame, mean that the maximum reasonable overall sample size is fewer than 30 respondents. Adding the attrition normally associated with the intended follow-up survey, the reasonably expected number of complete responses falls below 25. While this overall number is sufficient for t-test comparisons, further differentiating responses across three programs leaves program-level samples of five to ten participants, at the minimum range for most means-analysis protocols. Regarding levels of measurement of the survey variables, wherever possible the highest available level of measurement (ranging from nominal through ordinal and interval to ratio) is utilised. Even so, most of the client socioeconomic variables are nominal-level measures, with the exception of client age, and the semantic differentiation response variables are ordinal.
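Given these constraints, the pre/post comparison reduces to a paired t-test on a small sample. The following is a minimal Python sketch using hypothetical response vectors, not project data; a negative t statistic here corresponds to a shift towards the 'less impact' end of the 1-5 scale:

```python
# Paired t-test sketch for small pre/post samples (hypothetical data only).
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Return the paired t statistic and degrees of freedom."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical 1-5 Likert responses for one subset question.
pre  = [3, 4, 2, 3, 5, 4, 3, 2]
post = [2, 3, 2, 3, 4, 3, 2, 2]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # prints -3.42 7
```

At program-level samples of five to ten, such a test has very little power, which is the practical force of the constraint described above.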
3.4 Considering constraints in operationalising such a measure
3.4.1 Identification & recruitment processes for potential participants
Given the differences across the three programs involved, and the need for a consistent approach compatible with all, it was envisioned that prospective participants would be identified once they reached the top of the waitlist. Once identified, the contact details of the prospective participant would be forwarded to one of the designated interviewers (one per team), who would then send an introductory 'invitation' to the participant. As a follow-up, the interviewer would telephone the person a few days later to ask if they agreed to participate. If so, an appointment would be made prior to the first service intervention, e.g. just before the first counselling session. At the beginning of the first client appointment, the participant would sign the consent form after a 'plain language' introduction letter was read out loud verbatim by the interviewer. The documents involved (Appendices C and D) were developed by Windermere in early 2010.
3.4.2 Administering the initial measure
To minimize threats to reliability in the administration of the instrument, it was envisioned that one staff member from each program would be identified as the key administrator of the survey and trained in its use. Unfortunately, as is described in greater detail later in this report, the turnover of staff in each of these team positions meant that there were significant changes to the way in which the survey was administered, the number of clients involved and the general consistency of approach.
3.4.3 Administering subsequent measures
At initial meetings between Windermere and RMIT University, considerations were made of typical wait times and program-specific service delivery time frames. As a means of simplifying the administration of subsequent measures across all three programs a target of three months post-initial survey, or at program closure, was determined.
4 OPERATIONALISATION: THE PILOT PROJECT
4.1 Determining pilot programs
Given the decision to develop a generic measurement tool that would be applicable to a number of different services, Windermere chose to include those which would provide a wide range of differing program client profiles. For this reason, the three service areas selected to participate in the survey were Counselling, Disability and Family Services.
4.2 Identifying program-specific particulars
The process by which program-specific particulars informed the details of the survey content and delivery was an active collaboration between designated Windermere staff and RMIT University staff. The intent of this process was to balance procedural and instrument research rigour with appropriateness and ease of administration as informed by the relevant Windermere staff themselves. Representative of this process, the draft outcome assessment instrument first presented to Windermere by RMIT was deemed by Windermere staff difficult to read, containing 'phrases' and 'words' that did not seem appropriate for the anticipated participants: too 'wordy' or too abstract. The instrument was refined numerous times with extensive input from Windermere staff to get the phraseology and wording right. Additionally, a draft of the instrument was trialled with RMIT Social Work students; with feedback expressing difficulty in understanding exactly what was being asked, the instrument underwent additional revision. This collaborative process continued from late 2009 to early 2010. In addition to the paper version of the outcome measurement instrument, a 'boardmaker' version was developed for use with young children and people with an intellectual disability. This version is not being trialled at this stage.
4.2.1 Cognition/age related
An augmentative communication tool, Picture Exchange Communication System (PECS), was developed by Windermere Disability Services staff to assist in conducting the survey with clients experiencing an intellectual disability. This tool was developed because a number of clients associated with Windermere's disability program have significant disabilities impacting on language, and Windermere staff felt it would be beneficial to represent the concepts/words used in the survey in a visual way.
It was deemed necessary to have a PECS tool in place if people with a language disability were to be invited to participate in the study. A number of challenges were associated with developing the PECS tool. These included:
o Choosing symbols that were both representative of the survey concepts and recognisable as such to the prospective survey recipient.
o Limited worker skill/knowledge. In an effective augmentative communication system, symbols are organised to facilitate meaningful and efficient communication. This is especially important when the individual has a large number of symbols to interpret. The tool as developed by the project team was not endorsed by a speech pathologist or other language specialist.
These considerations impacted significantly on the time taken to complete the project.
4.2.2 Communication/administration of measures
Feedback from Windermere staff and clients regarding administration of the instruments from each of the programs was solicited. Reflecting thematic commonalities as well as important program-specific differences, specific considerations encountered in the administration of the surveys included:
o Experiences in administering the tool to Disability program clients:
A number of disability respondents found the survey questions confusing and asked for clarification when being surveyed. In response, the workers reported paraphrasing the data collection tool. This may have impacted on the consistency of the tool and, in turn, the validity of the data.
Carers play a significant role in the life of a family member with a disability. Inasmuch as carers were often present when conducting the survey with people from Windermere's Disability program, the honesty of the responses/data needs to be considered. Specifically, when present at the time of conducting the survey, carers often found it difficult to refrain from commenting on their beliefs/perceptions in relation to their disabled family member's sense of wellbeing. This key variable should be considered in the write-up of the final report. A carer's perception of their child's/partner's wellbeing may have influenced the response given by the person with a disability.
Despite the development of an augmentative communication tool, clients with an intellectual disability did not participate in the study. Carers (when approached) expressed concern regarding their intellectually disabled family member's ability to interpret the survey. Within this context, the survey questions were seen as lengthy and as containing more than one concept, making them difficult for a person with an intellectual disability to comprehend.
All participants of the Disability target group were recipients of a brokerage package. During the follow-up surveys, this group noted their appreciation for the financial support they received and spoke of the difference 'that a few dollars made' to their general sense of wellbeing. Brokerage (or financial assistance) is thus a key variable present within the Disability target group. Importantly, brokerage was not available to the Counselling and Family Support clients participating in the study.
o Experiences in administering the survey to Family Services and Counselling clients:
After completing the surveys, a number of issues became evident. In some cases there were issues with language: the survey could not be administered because a suitable translator could not be found to interpret the survey for particular clients. Another issue was the inability of some clients to understand some concepts, leading them to give inconsistent answers to compensate; this inability to comprehend caused client frustration. Similarly, what were seen as repetitive questions sometimes led to repeated answers, with the client indicating the same degree for each sub-question. Often the client didn't consider each question and statement on its own, but as a whole, either being about physical health or emotional issues.
In one case a client was confused about the term 'physical health', as they thought it included mental health, and therefore included emotional issues due to their depression. Ultimately, to keep the data valid, clients could only receive a broad explanation and a reminder that responses were a matter of personal interpretation, with no right or wrong answer.
In some cases the client was receiving help for a family member, the presenting issues surrounded the relationship with that family member, and the survey could not adequately cover those problems. Some clients were upset with the service they received and refused to take the wellbeing survey, while others couldn't complete it due to ongoing problems they were suffering, particularly those attending the counselling service.
4.2.3 Trialing of icon images
An original intent of the pilot survey was to incorporate a trial of the icon imaged version of the survey, to be used with children and clients with intellectual disabilities. Given the time and resource constraints of the pilot program, coupled with the challenges described above, a full trialing of the images with clients was not completed.
4.3 Data Collection: Administering the instrument
During February 2010 Windermere commissioned an IT adaptation to its database to enable de-identified demographic data to be downloaded for survey participants. This was a relatively straightforward process and enabled the relevant data for each participant to be collected more readily. Surveys commenced with prospective Counselling program and Disability program clients in March 2010. Each survey took approximately one hour to complete, including associated administration and interview time. At a later point, Family Services entered into the process using the same approach.
4.4 Data analysis
4.4.1 Access to existing Windermere client data
Actual data analysis of the responses was conducted by RMIT University staff. The data was provided to RMIT via two separate streams/processes. Disaggregated socioeconomic data (including client number, age, gender, postcode, reasons for involvement in the program and the Windermere program involved) was provided via Excel file. Separate to this, individual paper copies of the actual surveys were also provided to RMIT University by WC&FS. Aside from individual responses, the only client information included in these was the client number, the Windermere program, and whether the survey represented the first or follow-up survey.
4.4.2 Procedures to ensure ongoing confidentiality
Confidentiality was enhanced by the de-identification of responses, linked to sociodemographics only by the client number. In this way, only the intake worker knew the identity of the client and only the survey administrator(s) knew the client's responses. All information related to data analysis was de-identified.
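The linkage described above can be sketched as a join on the client number alone. The field names and records below are hypothetical, not Windermere's actual schema; the sketch simply illustrates that no identifying information needs to enter the analysis data set:

```python
# Sketch of de-identified linkage: survey responses and sociodemographic
# records are joined only on a shared client number. All names and values
# here are hypothetical, for illustration only.

demographics = {
    101: {"age": 42, "gender": "F", "postcode": "3805", "program": "Counselling"},
    102: {"age": 35, "gender": "M", "postcode": "3977", "program": "Disability"},
}

surveys = [
    {"client": 101, "iteration": 1, "responses": [3, 2, 4, 2]},
    {"client": 101, "iteration": 2, "responses": [2, 2, 3, 2]},
    {"client": 102, "iteration": 1, "responses": [4, 4, 3, 5]},
]

def link(surveys, demographics):
    # Merge each survey with its demographics; no names or contact details
    # ever appear in the analysis data set.
    return [{**s, **demographics[s["client"]]} for s in surveys]

linked = link(surveys, demographics)
print(len(linked), linked[0]["program"])  # prints 3 Counselling
```

In the actual project the demographic stream was an Excel export and the responses were paper surveys, but the linkage principle (join on client number only) is the same.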
4.5 Project findings
Throughout the survey administration a total of 29 clients consented to be part of the pilot project. Of these, 25 actually completed at least one survey. These included nine from the Counselling Program, eight from the Disability Program and nine from the Family Services Program. Consisting of 17 female and nine male respondents, the sample's ages ranged from 28 to 61 years, with a mean age of 42.4 and a standard deviation of 9.299. Nine different postcodes were represented in the respondent sample. Aside from sociodemographic and Windermere-related information, the balance of the survey consisted of eight questions related to wellbeing. As shown in the Windermere Wellbeing Survey (Appendix B), these questions, including their respective subset questions, were:
o Question 1: Impact of physical health problems on social activities.
o Question 2: Impact of emotional issues on social activities.
o Question 3: Impact of physical health problems on work or other activities.
o Question 4: Impact of emotional issues on work or other activities.
o Question 5: General attitude towards life over the past month.
o Question 6: Attitudes/experiences of living in their neighbourhood.
o Question 7: How they feel about their community.
o Question 8: General sense of wellbeing now and compared to three months ago.
The response options for each of the above were via a semantic differentiation five-point Likert scale. This was the case for each of the individual questions, together with all subset questions. As described earlier, the limitations regarding sample size and levels of measurement severely restrict the statistical analyses that can be conducted. However, it is possible to examine the responses more broadly, looking at response trends for each question across both first- and second-iteration surveys. For example:
o Question 1: Across the entire aggregated all-program sample, the initial survey responses describing physical health impact on social activities across all subsets were similar in their distribution. As shown below, on a 1-5 Likert scale ranging from 'not at all' to 'extremely', the means across all questions of [Q1, 1A, 1B, 1C, 1D] ranged from a low of 2.17 to a high of 2.65.
TABLE 4.1.1 QUESTION 1 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Looking at the initial survey responses for individual programs,
o The means for Counselling responses were generally at the lower range of the overall responses, ranging from 1.78 to 2.22. The most notable example was the 1.78 mean for the question related to being less careful [Q1C].
o The responses of those in the Disability program were generally consistent with the overall responses, ranging from 2.25 to 2.86. This is only slightly higher on the scale than the means range across all programs.
o Those respondents from the Family Services program had means ranging from 2.29 to 3.0, slightly higher than the means for all programs.
The responses in the follow-up survey across all programs were similarly consistent across all subset questions, ranging from a low of 2.25 to a high of 2.63. Comparing these pre- and post-intervention means shows a slight overall shift towards the 'less impact' end of the scale. Comparing the individual programs' follow-up surveys,
o The means range for Counselling respondents was generally significantly lower than that of the overall responses, ranging from 1.0 to 1.5, with the most notable example being a 1.0 for time spent on physical activities [Q1A]. These represent a significant shift to the 'less impact' end of the response scale.
o In contrast, the responses of those in the Disability program were generally higher than the overall responses, ranging from 2.6 to 3.25. This represents a shift to the 'greater impact' end of the scale.
o Those responses from the Family Services program had means ranging from 2.60 to 2.80. As with Disability program respondents, this range represents a shift towards the 'greater impact' end of the scale relative to the aggregate means across all programs.
TABLE 4.1.2 QUESTION 1 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = not at all, 2 = a little bit, 3 = moderately, 4 = quite a bit, 5 = extremely

QUESTION 1 SUBSET QUESTIONS:
1  Have physical health problems interfered with my usual social activities over the last month?
1A Reduced the overall amount of time spent?
1B Achieved less than I would have liked?
1C Didn't do them as well or carefully as usual?
1D Had difficulty because of physical pain?

      All Programs   Counselling    Disability     Family Svcs
      1st    2nd     1st    2nd     1st    2nd     1st    2nd
1     2.65   2.29    2.21   1.25    2.86   2.60    3.00   2.80
1A    2.48   2.25    2.00   1.00    2.50   2.71    3.00   2.60
1B    2.25   2.63    2.11   1.50    2.38   3.14    2.29   2.80
1C    2.17   2.38    1.78   1.50    2.25   2.57    2.57   2.60
1D    2.50   2.31    2.11   1.25    2.63   2.71    2.86   2.80
Interpreting the results for Question 1:
o Considering all aggregated program responses, the means patterns across first and follow-up surveys showed relatively minor differences, with all means within the 'a little bit' to 'moderately' semantic category.
o Within the individual programs, across first and follow-up surveys:
The Counselling responses showed consistent and marked shifts toward the 'less impact' end of the scale.
Disability program responses varied, with some showing means increases and others means decreases. All shifts remained within the 'a little bit' to 'moderately' semantic range.
Family Services program responses showed consistent and sometimes marked shifts towards the 'less impact' end of the scale across all subset questions.
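The pre/post comparisons interpreted above reduce to differences of means on the 1-5 scale. The following is a minimal Python sketch using hypothetical stand-in responses, not the actual survey data; a negative shift indicates movement towards the 'less impact' end of the scale:

```python
# Pre/post means comparison sketch: shift = follow-up mean minus initial mean.
# Response values are hypothetical, for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

first  = [3, 2, 4, 2, 3, 3, 2, 3]   # initial-survey responses, one subset question
second = [2, 2, 3, 2, 3, 2, 2, 2]   # follow-up responses for the same question
shift = mean(second) - mean(first)
print(mean(first), mean(second), shift)  # prints 2.75 2.25 -0.5
```

This is exactly the comparison tabulated for each subset question and program; with the pilot's small samples, such shifts describe trends rather than statistically significant differences.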
o Question 2: Across the entire sample, in the initial survey the responses describing emotional issues related to social activities were also along a 1-5 Likert scale ranging from 'not at all' to 'extremely'. As with the responses to Question 1, the range of means was very consistent across all subset questions [Q2, 2A, 2B, 2C, 2D, 2E], from a low of 2.71 to a high of 3.00, reflecting a perceived impact of emotional issues on social activities as between 'a little bit' and 'moderately'.
TABLE 4.2.1 QUESTION 2 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Comparing the initial survey results for individual programs,
o The means range for Counselling respondents was from 2.33 to 3.00, only slightly lower than the means range for the aggregate of all programs.
o Those in the Disability program ranged from 2.00 to 2.75. As with the results from the Counselling program, these means are slightly lower (towards the 'less impact' end of the Likert scale) than those of the aggregated programs.
o In contrast to the two previous programs, responses from the Family Services program had means ranging from 3.43 to 4.00, significantly higher in their range than those of both the Counselling and Disability programs and those of the aggregated programs, indicating a perception of greater impact of emotional issues on social activities.
The follow-up survey responses for the aggregated programs were also consistent with each other, with the means ranging more tightly from 2.38 to 2.63. Compared to the first survey, this indicates a slight shift towards the 'less impact' end of the scale. Examining the follow-up survey responses for the individual programs,
The mean responses of all subsets from the Counselling program ranged from 1.25 to 1.75. This is significantly 'lower' on the response scale than either the aggregated program means or those from the Disability or Family Services programs. These means reflect a perceived impact of emotional issues on social activities between 'not at all' and 'a little bit'.
Disability program responses were more in line with those of the aggregated programs, with means ranging from 2.14 to 2.57, indicating 'a little bit' to 'moderately' impact.
Mean Family Services responses ranged from 3.00 to 3.80. These were significantly higher on the scale than either the aggregated program means or either of the other individual programs, indicating a greater self-reported sense of impact of emotional issues on social activities, firmly in the 'moderately' category.
Development of a Wellbeing measurement Tool in the Community Service Sector
April 2011
31
TABLE 4.2.2 QUESTION 2 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = not at all, 2 = a little bit, 3 = moderately, 4 = quite a bit, 5 = extremely

QUESTION 2 SUBSET QUESTIONS:
2  Have emotional issues interfered with my usual social activities over the last month?
2A Reduced the overall amount of time spent?
2B Achieved less than I would have liked?
2C Didn't do them as well or carefully as usual?
2D Found myself feeling anxious?
2E Found myself feeling sad?

      All Programs   Counselling    Disability     Family Svcs
      1st    2nd     1st    2nd     1st    2nd     1st    2nd
2     2.86   2.50    2.56   1.50    2.67   2.33    3.43   3.40
2A    2.71   2.63    2.56   1.75    2.00   2.43    3.71   3.60
2B    2.88   2.56    2.44   1.25    2.75   2.57    3.57   3.60
2C    2.75   2.56    2.33   1.75    2.13   2.14    4.00   3.80
2D    2.96   2.56    2.89   1.50    2.25   2.29    4.00   3.80
2E    3.00   2.38    3.00   1.25    2.25   2.57    3.71   3.00
Interpreting the results for Question 2:
o Examining the aggregated responses for all programs, the shifts across first and follow up surveys were consistently towards the ‘less impact’ end of the semantic scale, albeit remaining within the ‘a little bit’ to ‘moderately’ range.
o Within each of the individual programs’ responses across first and follow up surveys:
- Counselling program means showed consistent and marked shifts towards the ‘less impact’ end of the semantic scale.
- Disability program means were inconsistent in differences between the two surveys, with some increasing and some decreasing. All means, though, remained within the ‘a little bit’ to ‘moderately’ range of the semantic differentiation scale.
- Family Services program responses showed consistent shifts towards the ‘less impact’ end of the scale, although their range was higher than that of any of the other programs or the aggregated program responses, all within the ‘moderately’ to ‘quite a bit’ category of the scale.
o Question 3: Question 3 relates to the impact of physical health on work activities. As with Questions 1 and 2, responses to all of the Question 3 subset questions [Q3, 3A, 3B, 3C, 3D] were along a 1-5 Likert scale ranging from ‘not at all’ to ‘extremely’. The range of means in the first survey was very consistent, tightly clustered just below the ‘moderately’ response, with a low of 2.50 and a high of 2.96.
TABLE 4.3.1 QUESTION 3 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Comparing individual programs’ first survey results:
o Counselling program mean responses ranged from 2.22 to 2.56. These responses are significantly ‘lower’ on the response scale than the aggregated program means and more narrowly grouped than those of either the Disability or Family Services programs.
o The Disability program response means ranged from 2.38 to 3.29. These responses were consistent with the means range of the aggregated programs, albeit spread across a wider range.
o The mean Family Services responses ranged from 2.57 to 3.29. This range is consistent with the broader aggregated responses across all programs.
The responses in the follow up survey across all programs showed all Question 3 subset questions were similarly consistent to those of the initial survey, with means ranging from 1.93 to 2.31. This represents an appreciable shift towards the ‘less impact’ end of the scale compared to the first survey results. Examining individual programs’ follow up survey results:
o Counselling program mean scores were consistent with those of the aggregated program scores, though narrower in range, ranging from 1.20 to 1.50, between the ‘not at all’ and ‘a little bit’ categories.
o The means of the Disability responses ranged from 1.80 to 2.71. Consistent with the aggregated means, they were nonetheless wider in their range across the different subset questions.
o The means of the Family Services responses were the only individual program results to fall outside of the aggregated group’s, varying from 2.40 to 2.80. This narrow range sits in the ‘a little bit’ to ‘moderately’ semantic categories.
TABLE 4.3.2 QUESTION 3 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = not at all; 2 = a little bit; 3 = moderately; 4 = quite a bit; 5 = extremely

Subset questions:
3   Have physical health problems interfered with my usual work or other activities over the last month?
3A  Reduced the overall amount of time spent?
3B  Achieved less than I would have liked?
3C  Didn’t do them as well or carefully as usual?
3D  Had difficulty because of physical pain?

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
3       2.96   1.93     2.44   1.25     3.29   1.80     3.29   2.60
3A      2.75   2.12     2.44   1.20     3.13   2.57     2.71   2.40
3B      2.83   2.31     2.44   1.25     2.88   2.71     3.29   2.60
3C      2.50   2.31     2.56   1.25     2.38   2.57     2.57   2.80
3D      2.50   2.31     2.22   1.50     2.75   2.57     2.57   2.60
Interpreting the results for Question 3:
o Examining the aggregated responses for all programs, the shifts across first and follow up surveys were consistently towards the ‘less impact’ end of the semantic scale, albeit remaining within the ‘a little bit’ to ‘moderately’ range.
o Within each of the individual programs’ responses across first and follow up surveys:
- Counselling program means showed consistent and marked shifts towards the ‘less impact’ end of the semantic scale.
- Disability program means, with one exception [Q3C], showed consistent shifts towards the ‘less impact’ end of the spectrum, with all but one of the second survey means falling between the ‘a little bit’ and ‘moderately’ semantic categories.
- With the exception of two subset questions which remained essentially unchanged, the Family Services program response means showed shifts towards the ‘less impact’ end of the scale, all higher than the aggregated program means, but still between the ‘a little bit’ and ‘moderately’ points of the scale.
o Question 4: Question 4 asks about the impact of emotional issues on work or other activities over the previous month. The response range is the same 1-5 Likert scale used in Questions 1 through 3. As with the questions above, the response means for the first survey of aggregated programs across all subset questions [Q4, 4A, 4B, 4C, 4D, 4E] were within a relatively small range between the ‘a little bit’ and ‘moderately’ responses, from a low of 2.71 to a high of 2.92.
TABLE 4.4.1 QUESTION 4 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Looking at individual programs’ first survey responses:
o The Counselling program’s response means, varying from 2.89 to 3.22, were marginally shifted towards the ‘greater impact’ end of the scale compared to those of the aggregated programs.
o The Disability program’s response means were significantly shifted from those of the aggregated programs and of any other individual program. Ranging from 1.75 to 2.25, these means were markedly closer to the ‘less impact’ end of the semantic scale.
o In contrast to the Disability program, the Family Services means, varying from 3.43 to 3.86, were significantly higher towards the ‘greater impact’ end of the scale.
The aggregate program responses in the follow up survey were also relatively consistent across all subset questions, with means ranging from 2.36 to 2.63, between the ‘a little bit’ and ‘moderately’ scale responses. As with the follow up surveys for Questions 1, 2 and 3, this represents a collective movement towards the ‘less impact’ end of the scale. Looking at individual programs’ second survey means:
Development of a Wellbeing measurement Tool in the Community Service Sector
April 2011
35
o Counselling program means were very consistent with each other, ranging from 1.25 to 1.50, and were significantly lower on the scale than those of the aggregate programs.
o Disability program means were consistent with those of the aggregated programs, varying from 2.00 to 2.43.
o The means of the Family Services responses varied from 3.20 to 3.80. This is markedly outside the range of the aggregated programs’ means, towards the ‘greater impact’ end of the scale, with results consistently between the ‘moderately’ and ‘quite a bit’ points of the scale.
TABLE 4.4.2 QUESTION 4 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = not at all; 2 = a little bit; 3 = moderately; 4 = quite a bit; 5 = extremely

Subset questions:
4   Have emotional issues interfered with my usual work or other activities over the last month?
4A  Reduced the overall amount of time spent?
4B  Achieved less than I would have liked?
4C  Didn’t do them as well or carefully as usual?
4D  Found myself feeling anxious?
4E  Found myself feeling sad?

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
4       2.91   2.36     3.00   1.25     1.86   2.00     3.86   3.60
4A      2.71   2.50     2.89   1.50     1.88   2.29     3.43   3.60
4B      2.75   2.63     2.89   1.50     1.75   2.43     3.71   3.80
4C      2.88   2.44     3.00   1.50     2.13   2.57     3.57   3.20
4D      2.92   2.44     3.22   1.50     1.75   2.57     3.86   3.40
4E      2.92   2.38     3.00   1.25     2.25   2.00     3.57   3.20
Interpreting the results for Question 4:
o Examining the aggregated responses for all programs, the shifts across first and follow up surveys were consistently towards the ‘less impact’ end of the semantic scale, albeit remaining between the ‘a little bit’ and ‘moderately’ points of the scale.
o Within each of the individual programs’ responses across first and follow up surveys:
- Counselling program means showed consistent and marked shifts towards the ‘less impact’ end of the semantic scale, all of which were significantly ‘lower’ than the aggregated programs’ means.
- Contrary to the results of the aggregated programs and the other individual programs, the Disability program means showed consistent shifts towards the ‘greater impact’ end of the spectrum.
- With the exception of two subset questions which shifted somewhat towards the ‘greater impact’ end of the scale, the Family Services program response means showed shifts towards the ‘less impact’ end of the scale, all higher than the aggregated program means, between the ‘moderately’ and ‘quite a bit’ points of the scale.
o Question 5: Question 5 asks about the respondent’s general feelings over the previous month. The responses are along a 1-5 Likert scale ranging from ‘none of the time’, ‘a little bit of the time’, ‘some of the time’ and ‘a good bit of the time’ to ‘all of the time’. Across all of the programs, the means of responses for all subset questions in the aggregated programs in the initial survey [Q5A, 5B, 5C, 5D] were relatively consistent, ranging from a low of 2.58 to a high of 2.92.
TABLE 4.5.1 QUESTION 5 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Examining the individual programs’ first survey responses:
o The response means from Counselling program participants ranged from 2.22 to 3.22. This is roughly consistent with the broader aggregated program means, but with a wider range.
o Disability program response means ranged from 2.25 to 3.38. As with those from the Counselling program, these responses were roughly consistent with the aggregated program means.
o The Family Services program response means ranged from 2.29 to 2.71, broadly consistent with those of the aggregated programs and with the other individual programs.
The responses to all question subsets in the aggregated programs’ follow up survey were less tightly consistent than those of the first survey, with means ranging from 2.13 to 3.63. Examined individually, the respondents reported feeling generally content more of the time [5A], feeling more positive about the future more of the time [5B], feeling that bad things keep happening less of the time [5C] and feeling calm and peaceful more of the time [5D]. Considering individual programs’ follow up survey responses:
o The means of the Counselling program responses were consistent with the aggregated program responses across each of the individual subset questions, although broader in range, varying from 2.00 to 4.00.
o Disability program response means were generally consistent with the aggregated program means, as well as with those of the Counselling program, ranging from 1.86 to 3.86.
o The responses of Family Services participants were consistent with the aggregated program responses, although narrower in range, varying from 2.60 to 3.40.
TABLE 4.5.2 QUESTION 5 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = none of the time; 2 = a little bit of the time; 3 = some of the time; 4 = a good deal of the time; 5 = all of the time

Subset questions (Over the last month, have I been:):
5A  Generally content with life
5B  Feeling positive about the future
5C  Feeling that bad things keep happening to me
5D  Feeling calm and peaceful

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
5A      2.75   3.44     2.38   3.75     3.25   3.86     2.29   2.60
5B      2.92   3.63     2.89   4.00     3.13   3.57     2.71   3.40
5C      2.58   2.13     3.22   2.00     2.25   1.86     2.71   2.60
5D      2.70   3.25     2.22   3.00     3.38   3.71     2.29   2.80
Interpreting the results for Question 5:
o Examining the aggregated responses for all programs, the shifts across first and follow up surveys were numerically mixed but thematically consistent: questions related to general contentment, positive feelings and feeling calm increased, while the question referring to the feeling of ‘bad things happening’ decreased.
o Within each of the individual programs’ responses across first and follow up surveys:
Individual Counselling, Disability and Family Services program means all showed thematically consistent shifts similar to those of the aggregated programs’.
o Question 6: Question 6 enquires about the respondent’s perceptions of living in their community. The responses along the 1-5 Likert scale range from ‘strongly disagree’, ‘generally disagree’, ‘not sure’ and ‘generally agree’ to ‘strongly agree’. First survey aggregated programs’ means of the Question 6 subset questions [Q6A, 6B, 6C, 6D, 6E, 6F, 6G, 6H] were in the ‘not sure’ to ‘generally agree’ range, varying from a low of 3.17 to a high of 4.13.
TABLE 4.6.1 QUESTION 6 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
The individual programs’ first survey results include:
o The Counselling program’s response means varied from 2.89 to 4.22. Generally congruent with the aggregated programs’ means, these results were wider in their overall range.
o The Disability program’s response means varied from 2.88 to 3.75. These results were marginally shifted towards the ‘disagreement’ end of the scale relative to those of the aggregated programs, but consistent with those of the Counselling program.
o The Family Services response means varied from 3.14 to 4.14. This range was nearly identical to the aggregated programs’ means.
The responses to all question subsets in the aggregated programs’ follow up survey were less tightly consistent than those of the first survey, with means ranging from 2.69 to 4.13. Examined individually, the responses to questions 6A, 6B, 6C, 6F, 6G and 6H were relatively consistent with those of the first survey, while the responses to questions 6D and 6E indicated greater disagreement that there were sufficient community services and that community services were accessible. Examining the results of the individual programs’ follow up surveys:
o The Counselling program response means ranged from 2.50 to 4.25. These means were consistent with those of the aggregated programs and with those of the Disability program.
o The Disability program’s response means of 2.29 to 4.29 were consistent with those of the aggregated programs and the Counselling program.
o Across a narrower range than the aggregated programs’ but consistent with their position on the response scale, the Family Services response means varied from 3.00 to 4.20.
TABLE 4.6.2 QUESTION 6 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = strongly disagree; 2 = generally disagree; 3 = not sure; 4 = generally agree; 5 = strongly agree

Subset questions (About living in my neighbourhood):
6A  I generally like living here
6B  I generally feel safe here
6C  It is easy to get around
6D  There are enough community services
6E  Community services are accessible
6F  I have access to further education
6G  Public places are pleasant and appealing
6H  I am able to participate in arts, sports & cultural activities

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
6A      4.13   4.13     4.00   4.25     4.50   4.00     3.86   4.20
6B      3.79   3.81     4.11   4.00     3.75   3.57     3.43   4.00
6C      3.96   3.81     4.22   3.00     3.50   4.29     4.14   3.80
6D      3.25   2.66     3.56   3.00     3.13   2.29     3.57   3.00
6E      3.17   2.88     2.89   2.50     3.13   3.00     3.57   3.00
6F      3.33   3.69     3.22   4.00     3.63   3.57     3.14   3.60
6G      3.58   3.56     3.33   3.25     3.00   3.29     4.00   4.20
6H      3.42   3.38     3.78   3.75     2.86   2.57     3.57   4.20
Interpreting the results for Question 6:
o Examining the aggregated responses for all programs, the shifts across first and follow up surveys were numerically mixed. For example, responses for liking living in their community [6A], feeling safe [6B], ease of travel [6C], pleasant public places [6G] and ability to participate in activities [6H] remained fairly unchanged, but the perception of sufficient community services [6D] and their accessibility [6E] saw increased disagreement.
o Within each of the individual programs’ responses across first and follow up surveys:
- Counselling program response means were similarly mixed compared to the aggregated programs’ means. Specifically, shifts to ‘greater agreement’ were noted in responses related to liking living in the community [6A] and access to further education [6F], while reports of feeling safe [6B], ease of travel [6C], sufficiency of community services [6D] and accessibility of community services [6E] all shifted towards ‘greater disagreement’ in the follow up survey. Other subset questions remained essentially unchanged.
- The shifts in Disability program responses were as mixed as those of the aggregated and other individual programs. Responses related to liking living in the community [6A], feeling safe [6B], access to further education [6F], pleasant public places [6G] and participation in activities [6H] shifted towards the ‘greater agreement’ end of the scale. In contrast, those related to ease of travel [6C], sufficiency of community services [6D] and accessibility of community services [6E] all shifted towards ‘greater disagreement’ in the follow up survey.
- The follow up responses for Family Services also reflected mixed shifts from the first survey. Liking living in the community [6A], feeling safe [6B], access to further education [6F], pleasant public places [6G] and participation in activities [6H] all reflected shifts toward the ‘greater agreement’ end of the scale. The others, in contrast, reflected shifts towards the ‘greater disagreement’ end.
o Question 7: Question 7 continues the community focus of Question 6, asking respondents about their feelings of acceptance, respect, engagement and pride in their community. The 1-5 Likert scale is the same as that used in Question 6. The response means across all subset questions [Q7A, 7B, 7C, 7D] were in the ‘not sure’ to ‘generally agree’ categories, ranging from a low of 3.17 to a high of 4.00.
TABLE 4.7.1 QUESTION 7 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Examining individual programs’ first survey responses:
o Counselling program mean responses were generally consistent with those of the aggregated programs and with Disability, ranging from 3.33 to 4.11.
o The Disability program response means ranged from 3.13 to 4.25, generally consistent with the aggregated program and Counselling program patterns.
o The Family Services program responses ranged from 3.00 to 3.86, also generally consistent with the aggregated program and other individual program ranges.
The follow up survey responses for the aggregated programs were similarly consistent to those of the first survey, but with means ranging from 3.50 to 4.06 there was a shift towards greater agreement that respondents feel accepted, respected, able to engage and proud of their community. Examining individual programs:
- The shifts between first and follow up survey means for the Counselling program, ranging from 3.25 to 4.00, showed the least movement of all the individual programs. In fact, the question regarding ‘feeling accepted’ [7A] actually shifted towards the ‘greater disagreement’ end of the scale. All other responses shifted toward the ‘generally agree’ end of the scale.
- Disability program response means, ranging from 3.57 to 4.14, showed consistent shifts towards the ‘greater agreement’ end of the scale across all subset questions.
- The Family Services second survey means, ranging from 3.60 to 4.40, showed consistent shifts towards the ‘greater agreement’ end of the scale, with the exception of the question related to feeling culturally respected [7B], which shifted slightly towards the ‘greater disagreement’ end of the scale.
TABLE 4.7.2 QUESTION 7 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = strongly disagree; 2 = generally disagree; 3 = not sure; 4 = generally agree; 5 = strongly agree

Subset questions (About how I feel in my community):
7A  I feel accepted
7B  I feel culturally respected
7C  I am able to have a say about community matters
7D  I feel pride in my community

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
7A      3.83   4.06     4.11   3.50     3.75   4.14     3.57   4.40
7B      4.00   4.00     3.89   4.00     4.25   4.14     3.86   3.80
7C      3.17   3.50     3.33   3.25     3.13   3.57     3.00   3.60
7D      3.50   3.67     3.38   3.43     4.20   3.81     3.50   3.71
Interpreting the results for Question 7:
o With only two question-specific exceptions, all means for the aggregated programs and individual programs showed shifts towards the ‘greater agreement’ end of the scale between the first and follow up surveys.
o Question 8: Question 8 is the most conceptually general of all the questions, asking the respondent to rate their general sense of wellbeing, both at present and in comparison to three months ago. Across all of the programs, the initial survey mean response to the question of current general wellbeing [Q8A] was 2.71, indicating a ‘fair’ to ‘good’ sense of wellbeing. The mean response to the question comparing this to three months previous [Q8B] was 2.83, closer to the ‘good’ category.
TABLE 4.8.1 QUESTION 8 INITIAL SURVEY AGGREGATED PROGRAMS RESPONSE DISTRIBUTIONS
Examining individual programs:
- The Counselling program means, 2.78 for 8A and 2.89 for 8B, were consistent with those of the aggregated programs.
- The Disability program means of 3.38 for 8A and 3.25 for 8B were significantly shifted towards the ‘positive’ end of the semantic scale relative to the aggregate program means and all other individual program means.
- Family Services response means, in contrast to those of the aggregated programs and other individual programs, were significantly shifted towards the ‘poor’ end of the semantic scale, at 1.86 for 8A and 2.29 for 8B.
The aggregated programs’ follow up survey responses indicated a significant positive shift in general wellbeing [Q8A], with a mean of 3.19 (the ‘good’ response), and in the comparison to three months previous [Q8B], with a mean of 3.88 (a ‘good’ to ‘very good’ response). Examining individual program means for the follow up survey:
o Each of the individual program means for the follow up survey reflected shifts towards the ‘positive’ end of the scale, with Counselling reporting 3.25 for 8A and 3.50 for 8B, Disability 3.57 for 8A and 4.29 for 8B, and Family Services 2.60 for 8A and 3.60 for 8B.
TABLE 4.8.2 QUESTION 8 INITIAL & FOLLOW UP SURVEYS MEANS COMPARISONS
Response Scale: 1 = poor; 2 = fair; 3 = good; 4 = very good; 5 = excellent

Subset questions:
8A  How do I rate my general sense of wellbeing now?
8B  How does this compare to 3 months ago?

        All Programs    Counselling     Disability      Family Svcs
        1st    2nd      1st    2nd      1st    2nd      1st    2nd
8A      2.71   3.19     2.78   3.25     3.38   3.57     1.86   2.60
8B      2.83   3.88     2.89   3.50     3.25   4.29     2.29   3.60
Interpreting the results for Question 8:
o Most broadly, all program means (aggregated programs, Counselling, Disability and Family Services) showed shifts between the first and follow up surveys towards the ‘positive’ end of the semantic scale across both subset questions.

Interpreting results across all questions

A critical consideration in interpreting the results of the surveys is the ‘starting point’ of the participants. Are they in crisis? Are they facing unique situations? Has life been stable or fractured? All of these can affect the results and their interpretation. In the course of this pilot program, surveys of 40 different questions (subsets of eight general questions) were given to participants of three different programs over two iterations. Of the 160 mean responses (including those of the aggregated programs) for the first survey, none reached the first or fifth points of the five-point Likert scale and only 15 fell outside the second to fourth points. Further, none of these 15 reflected a semantic identifier of an exacerbated condition or perception. This is an important consideration in viewing changes in means between first and second surveys, since when dealing with clients experiencing crises one might expect response outliers indicating highly skewed perceptions/experiences, ones that regress to a central mean over time regardless of intervention. With this understanding, the shifts between first- and follow up-survey means for each program’s participants can be examined. In the survey instrument, the first four questions dealt with how the individual’s physical or emotional issues might have impacted their social or work lives/activities.
Consistent across these questions, Counselling and Family Services program response shifts indicated decreased impacts of physical or emotional issues on both social and work activities. Also consistent across the four questions, Disability program responses indicated either mixed or increased impacts.

The respondent’s attitudes about life are the focus of Question 5. The shifts in response means were consistent across all programs, including the aggregated programs, with all indicating increases in positive attitudes/feelings across three of the four subset questions.

Questions 6 and 7 focussed on the individual’s attitudes, experiences and feelings about their communities. While all program responses showed mixed differences in perceptions of their community, all showed shifts towards agreement on feeling accepted, respected and being able to have a say in community matters.

The most existentially broad question is Question 8, which asks about the individual’s general sense of wellbeing, both in the present and compared to three months previous. All program means showed positive shifts in individual perceptions of wellbeing.

In summary, in matters concerning the individual’s perceptions of the impact of physical or emotional issues on their social or work lives, virtually all of the changes in responses from first to follow up surveys were positive, indicating enhanced senses of coping ability and empowerment. The same can be said of individuals’ senses of personal place in their community and, more broadly, of their general sense of wellbeing. The only domain where first-to-follow up survey shifts were mixed was in the respondents’ perceptions of the characteristics of their community.
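The means comparisons used throughout this chapter can be sketched in code. The snippet below is an illustrative sketch only: the data structure and response values are hypothetical, not the project’s actual dataset or analysis scripts. It shows how per-program subset means and first-to-follow-up shifts of the kind reported above could be computed.

```python
# Illustrative sketch only: hypothetical Likert responses, not the survey
# results reported in this chapter.
from statistics import mean

# responses[program][iteration][subset question] -> list of 1-5 responses
responses = {
    "Counselling": {
        "first":  {"2A": [3, 2, 3], "2B": [2, 3, 2]},
        "second": {"2A": [2, 1, 2], "2B": [1, 2, 1]},
    },
    "Disability": {
        "first":  {"2A": [2, 2, 2], "2B": [3, 3, 2]},
        "second": {"2A": [2, 3, 2], "2B": [3, 2, 3]},
    },
}

def subset_means(program_data):
    """Mean response per subset question for one survey iteration."""
    return {q: round(mean(vals), 2) for q, vals in program_data.items()}

def shifts(first, second):
    """Follow-up mean minus first-survey mean; negative = 'less impact'."""
    return {q: round(second[q] - first[q], 2) for q in first}

for program, surveys in responses.items():
    m1 = subset_means(surveys["first"])
    m2 = subset_means(surveys["second"])
    print(program, m1, m2, shifts(m1, m2))
```

Aggregated-program means of the kind shown in the tables would simply pool all programs’ response lists before taking the mean.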
4.6 Limitations and challenges

The limitations and challenges of this pilot project relate primarily to methodology, social desirability of responses, sample size and operationalisation.

Related to methodology, by its nature this was a quasi-experimental research project. As such, it could not utilise the protocols associated with experimental designs, such as random sampling, control groups and multiple time series surveys. This means that issues involving self-selection in or out of the initial survey, as well as self-selection in or out of the follow-up survey, must be considered as threats to the outcome results.

The social desirability of responses is a factor that cannot be ignored as potentially influencing participants’ willingness to participate and their responses. All participants were inherently involved in obtaining services from a community agency. While the plain language statement associated with this project made it clear that clients’ access to services was not related to their participation, the inherent authority/power asymmetry between worker and client cannot be ignored.

Another limiting factor is the sample size. As described earlier, the pilot project’s total sample size, fragmented as it was across three programs, meant that only descriptive statistical analysis could be appropriately conducted. The sample size fell beneath the threshold for more rigorous and comprehensive statistical analyses, including correlation and multivariate effect analyses.

As also described in detail earlier, the operationalisation of the survey administration, particularly with clients experiencing cognitive disabilities, encountered significant challenges and potential threats to the outcomes.
The presence of the nominal participant’s carer impacted results, as did questions of full awareness of the concepts being addressed in the survey questions.
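The sample-size constraint noted above can be made explicit when preparing analyses. The following is a hypothetical sketch (the threshold value, function name and data are illustrative, not drawn from the project): it reports descriptive summaries and flags when a group is too small to support inferential testing.

```python
from statistics import mean, stdev

# Hypothetical cut-off below which only descriptive statistics are reported;
# the pilot project set no fixed threshold, so this value is illustrative.
MIN_N_FOR_INFERENCE = 30

def summarise(scores):
    """Descriptive summary of one program's Likert responses (1-5)."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "sd": round(stdev(scores), 2) if len(scores) > 1 else None,
        # Flag whether the group is large enough for inferential analyses
        "inference_ok": len(scores) >= MIN_N_FOR_INFERENCE,
    }

# Example with a small, made-up group of responses
print(summarise([2, 3, 3, 4]))
```

A guard of this kind keeps small pilot samples, like those here, confined to descriptive reporting rather than correlation or multivariate analyses.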
4.6.1 Impact of staff turnover

The different programs involved in this pilot project all experienced turnover of key staff. This certainly had a negative impact on the project’s conceptual continuity and timely operationalisation. In larger organisations the turnover of one or two staff might be accommodated. In smaller organisations, however, such staff turnover can have a serious impact on evaluation projects such as this.
4.6.2 Pragmatics of evaluating service programs

A number of considerations related to the constraints placed on evaluation processes in human services contexts have already been discussed: the impact on the research design that can be implemented, the recruitment of participants and their willingness to participate in longer-term projects. In addition to these factors are those related to the positioning of such human service organisations in the larger State human services arena.

One such pragmatic consideration is the fact that any program in the human service arena selected to participate in such surveys will likely be funded by the Department of Human Services. With this funding come operational parameters, including target numbers of clients and numbers of hours of service delivery. With many programs within this funded model stretched to capacity just in delivering the service, it is very difficult to add a significant additional evaluation task to an already heavy workload. Further, small to medium sized organisations often lack the capacity to employ additional resources for such a task.

Another consideration is the alignment of any such evaluation with relevant informing policies, Acts and Frameworks. For instance, for any survey examining the quality of services to people with disabilities, the survey tool’s alignment with the Disability Act and the accompanying Quality Framework is essential. A further factor is the compatibility of the organisation’s data collection/entry protocols with those of the larger funding body. In Victoria, compatibility with the Department of Human Services Client Relationship Information System for Service Providers (CRISSP) can help to avoid inefficiencies in data entry, reporting and retrieval.
5 THE COMPLEMENTARY SURVEY: EVALUATION PRACTICES IN CHILD AND FAMILY SERVICES
To further contextualise the implications of the specific evaluation project examined here, it is useful to consider a broader survey conducted by the Centre for Excellence in Child and Family Welfare (CFECFW) in 2010. Exploring the types of evaluation being conducted within the welfare sector in Victoria, Australia, that study's key focus was how evaluation processes relate to, and impact on, client outcomes.
5.1 Rationale
In December 2010 Windermere Child and Family Services, in partnership with the Royal Melbourne Institute of Technology (RMIT) Social Work team and the Centre for Excellence in Child & Family Welfare (the Centre), undertook an Evaluation Profile Survey. The aims of the survey were to:
o Examine what evaluation processes currently occur in the child, youth and family welfare sector
o Explore what evaluation processes currently record client outcomes
o Gauge the potential to share an evaluation and/or measurement tool across the sector
5.2 Approach and methodology
The survey was delivered using the online survey tool SurveyMonkey, administered by the Centre. The survey was promoted through:
o Professional networks of Windermere and the Centre
o The CFECFW website
o The CFECFW research symposium 'What counts as evidence', held on 1 December 2010 (www.cwav.asn.au/projects/Pages/SharingInnovativePractice.aspx)
The survey comprised a total of six questions yielding both quantitative and qualitative results: five questions allowed respondents to choose one or more options, and one question invited a
descriptive response about an evaluation tool. Respondents could choose to skip any question.
5.3 Data collection
The survey was completed by 91 respondents from the professional networks of Windermere and the Centre. Responses were collected over a four-week period, from 17 November to 20 December 2010. The 91 respondents came from 21 different organisations, representing both large and small agencies. Results were processed and reported via the SurveyMonkey software, with data analysis conducted by the Centre.
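Because respondents could skip any question, percentages of this kind are most naturally computed against the respondents who actually answered each question rather than the full sample. A minimal sketch of that tallying logic is below; the `tally` helper and the response data are invented for illustration and do not reflect the actual SurveyMonkey export format.

```python
# Hypothetical sketch: tallying multiple-choice survey responses when
# respondents may skip questions. Data and names are illustrative only.

def tally(responses, question):
    """Return {option: (count, percent_of_those_answering)} for a question.

    `responses` is a list of dicts mapping question -> chosen option;
    a question absent from a dict means that respondent skipped it.
    """
    answered = [r[question] for r in responses if question in r]
    counts = {}
    for choice in answered:
        counts[choice] = counts.get(choice, 0) + 1
    total = len(answered)  # base excludes respondents who skipped
    return {opt: (n, round(100.0 * n / total, 1)) for opt, n in counts.items()}

responses = [
    {"measures_impact": "yes"},
    {"measures_impact": "yes"},
    {"measures_impact": "no"},
    {},  # this respondent skipped the question
]
print(tally(responses, "measures_impact"))  # → {'yes': (2, 66.7), 'no': (1, 33.3)}
```

Note that the choice of denominator (those answering versus all respondents) can shift reported percentages noticeably when skip rates are high.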
5.4 Findings
The findings centred on what was learned, the types of evaluation tools in use, who uses those tools, and perceived needs in evaluating programs and agencies.
As to what was learned, a majority of respondents (83.5%) reported that they measure and/or evaluate the impact of their services on clients, and more than two thirds (69.7%) offered further information about the processes and tools they use. Two thirds (67.9%) conduct evaluations at key times throughout a program, and a similar proportion (60.4%) reported evaluating programs upon their completion.
As to the types of evaluation tools in use, 62.2% reported the most common tools to be a questionnaire or survey conducted by phone, mail or email. All of the 69.7% of respondents who offered further information identified some form of structured feedback process. They described the most commonly used tool as a questionnaire/survey, with other tools identified as forms, plans, interviews, groups, consultations, suggestion boxes, workshops and tests. Types of tools commonly mentioned included 'scales', 'consumer participation/focus/client groups', 'workshops' and 'care/family plans'. Less frequently identified as evaluation tools were informal processes, including anecdotal evidence, observations and informal verbal feedback or conversations, reported by 10% of respondents.
In answer to the question of who uses the evaluation tools, 79.2% reported they are used mainly by Program Employees, 52.8% described Management as the most frequent users of such tools, and 28.3% reported Quality Assurance personnel. But is there a need for more evaluation tools in the sector? The responses are interesting, if sometimes seemingly contradictory: a majority of respondents (59.3%) indicated they would be interested in using an evaluation tool to measure the impact of their services on clients, yet 32.2% stated they 'don't know' whether they would.
What are the primary purposes of evaluations? A large majority (88.1%) reported that the reason they would collect client data was to inform and improve practice, followed closely by demonstrating the effectiveness of a program, reported by 86.4% of respondents. Other reasons specified by respondents include 'for continuing accreditation purposes', 'to identify quality assurance', 'to build (an) evidence base', 'to give parent feedback' and 'to see if (the) service is delivering what it said it would'.
It is important to note that all commonly identified reasons for conducting evaluation (as determined by the Centre) ranked above average to high, demonstrating respondents' awareness of the various needs for evaluation.
6 DISCUSSION AND RECOMMENDATIONS
6.1 Themes from the Pilot Project
The themes which emerged from the pilot project relate to (1) the outcomes of the surveys and (2) the feasibility and operational considerations encountered in its administration.
Regarding survey outcomes:
• The outcomes gave evidence that the programs involved did, indeed, have an impact on clients' sense of agency, empowerment and wellbeing.
• The outcomes supported the feasibility of exploring a wide range of factors related to wellbeing.
Regarding feasibility and operational considerations:
• Any attempt to use such a survey tool would require consideration of its applicability across a wide range of settings. For instance, attention would have to be paid to simplifying the wording and to avoiding multiple questions with only subtle distinctions between them.
• Consideration would also have to be given to clients who might experience challenges in completing the survey. Such clients might include:
o Those from culturally and linguistically diverse backgrounds.
o Those with intellectual or cognitive challenges.
• The most successful approach to administering such surveys would be one incorporated into the organisation's operational routine rather than perceived as an additional task.
6.2 Themes from the Complementary Survey
The themes from the complementary survey address considerations of the need for, desirability of, utilisation and operationalisation of various forms of evaluation. These themes include:
• There is evidence of strong support within the sector for the greater use of evaluations.
o The majority of service workers would support the use of such evaluations.
• The use of a variety of evaluation tools, including survey data, should be regarded as an essential component of practice evidence, providing a more accurate understanding.
• Evaluations should be conducted throughout a program.
o Pre-program data allows tracking of the changes effected by the service.
• With an abundance of various evaluation tools and processes in use, sharing of knowledge across the sector would support stronger approaches to evaluation.
• With the main users of evaluation tools being program employees, specific training and development can be targeted to this group.
• Organisations are less likely to make use of external evaluators.
6.3 Recommendations
Based upon the experiences and findings of the pilot project and the findings of the complementary survey, a number of recommendations can be made for further research in this area. The pilot project has shown enormous potential in connecting program evaluations with the exploration of client wellbeing. Future efforts should:
o Be broader in scope, encompassing multiple organisations and their respective programs. This would provide a broader contextual base within which to research applications and approaches.
o Be longer in time-frame, preferably including a more comprehensive pre-program component and multiple time-series re-iterations to explore long-term program effectiveness.
o Include a greater number of clients within any given program; a greater number of cases would allow for stronger statistical examination of relationships and correlations.
o Consultations should be conducted with nominated future programs to ensure that program-specific client considerations, including cultural and cognitive considerations, inform evaluation content and protocols.
o Consultations should be conducted with DHS to explore greater integration/compatibility between program and CRISSP data protocols.
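As an illustration of the kind of statistical examination that a larger client pool and pre-program data would support, the sketch below computes a paired pre/post comparison of wellbeing scores. The scores and sample size are invented; a real analysis would use the full client dataset and an appropriate significance test against the paired t distribution.

```python
# Illustrative sketch only: paired comparison of pre- and post-program
# wellbeing scores for the same clients. All numbers are hypothetical.
from statistics import mean, stdev
from math import sqrt

pre  = [55, 60, 48, 62, 50, 58]   # hypothetical scores at intake
post = [63, 66, 52, 70, 57, 61]   # hypothetical scores at follow-up

diffs = [b - a for a, b in zip(pre, post)]   # per-client change
d_mean = mean(diffs)                          # average change
d_se = stdev(diffs) / sqrt(len(diffs))        # standard error of the mean change
t_stat = d_mean / d_se                        # paired t statistic (df = n - 1)
print(f"mean change = {d_mean:.1f}, t = {t_stat:.2f} on {len(diffs) - 1} df")
```

With more clients per program, the standard error shrinks and such tests can distinguish genuine program effects from chance variation, which is precisely what the small pilot samples could not do.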
6.4 Summary
The questions posed at the beginning of the report were twofold: first, can a quantitative measurement of Subjective Wellbeing (SWB) be used as an outcomes measure for service delivery, and second, will this measurement show direct links to the efficacy of service provision? This pilot project has shown qualified potential to answer both. On one hand, the pilot gave evidence of quantitatively measured changes in subjective wellbeing over the course of a program; on the other, its scope did not support a more rigorous linking of wellbeing and service delivery outcomes. Similarly, the pilot gave evidence of the effectiveness of the service program, but it did not attempt to link this with actual program efficiency. These questions are at the core of current debates around effective, efficient and appropriate service delivery in human service programs. This pilot project has begun to point to important ways in which they can be explored further.
APPENDICES
APPENDIX A
REFERENCES
ANNAS, J. 1993. The Morality of Happiness. Oxford: Oxford University Press.
ARGYLE, M. 1999. Causes and correlates of happiness. In: KAHNEMAN, D. (ed.) Well-being: The Foundations of Hedonic Psychology. New York: Russell Sage Foundation.
BLOOM, M., FISCHER, J. & ORME, J. 2009. Evaluating Practice: Guidelines for the Accountable Professional (6th ed.). Boston: Allyn and Bacon.
BRASHER, K. & WISEMAN, J. 3 July 2007. Community wellbeing in an unwell world: Trends, challenges, and opportunities. Faculty of Medicine, Dentistry, and Health Sciences' Deans Lecture Series. McCaughey Centre, Melbourne University.
BRICKMAN, P. & CAMPBELL, D. T. 1971. Hedonic relativism and planning the good society. In: APPLEY, M. H. (ed.) Adaptation-Level Theory: A Symposium (pp. 287-302). New York: Academic Press.
BURNS, G. W. 2005. Naturally happy, naturally healthy: the role of the natural environment in well-being. In: HUPPERT, F. A., BAYLIS, N. & KEVERNE, B. (eds.) The Science of Well-being. Oxford: Oxford University Press.
CALMAN, K. C. 1984. Quality of life in cancer patients - an hypothesis. Journal of Medical Ethics, 10, 124-127.
CLARK, D. & GOUGH, I. 2005. Capabilities, needs, and wellbeing: Relating the universal and the local. In: MANDERSON, L. (ed.) Rethinking Wellbeing. Netley: Griffin Press.
CUMMINS, R. A. 2001. Self-rated quality of life scales for people with an intellectual disability: A reply to Ager and Hatton. Journal of Applied Research in Intellectual Disabilities, 14, 1-11.
CUMMINS, R. A. 2002. The validity and utility of SWB: A reply to Hatton and Ager. Journal of Applied Research in Intellectual Disabilities, 15, 261-268.
CURELLA, L. S. 1999. Routine outcome assessment in mental health research. Current Opinion in Psychiatry, 12, 207-210.
DIENER, E. 2008. Myths in the science of happiness, and directions for future research. In: EID, M. & LARSON, R. J. (eds.) The Science of Subjective Well-being. New York: The Guilford Press.
DIENER, E., LUCAS, E. M. & SMITH, H. L. 1999. Subjective well-being: Three decades of progress. Psychological Bulletin, 125(2), 276-302.
DIENER, E., LUCAS, R. E. & SCOLLON, C. N. 2006. Beyond the hedonic treadmill: Revising the adaptation theory of well-being. American Psychologist, 61, 305-314.
GROWTH AREAS AUTHORITY. 2010. http://www.gaa.vic.gov.au/ga_caseycardinia [Online].
Victoria: Growth Areas Authority. [Accessed 2 December 2010].
HEALEY, J. 2008. Happiness and life satisfaction. Issues in Society, 276.
HELLIWELL, J. F. 2003. How's life? Combining individual and national variables to explain subjective well-being. Economic Modelling, 20, 331-360.
HELLIWELL, J. F. & PUTNAM, R. D. 2005. The social context of well-being. In: HUPPERT, F. A., BAYLIS, N. & KEVERNE, B. (eds.) The Science of Well-being. Oxford: Oxford University Press.
HOLCOMB, W. R., PARKER, J. C., LEONG, G. B., THIELE, J. & HIGDON, J. 1998. Customer satisfaction and self-reported treatment outcomes among psychiatric patients. Psychiatric Services, 49, 929-934.
INTERNATIONAL WELLBEING GROUP. 2006. Personal Wellbeing Index. Melbourne: Australian Centre on Quality of Life, Deakin University. (http://www.deakin.edu.au/research/acqol/instruments/wellbeing_index.htm)
JORDAN, B. 2007. Social Work and Well-being. Dorset: Russell House Publishing.
KATSCHNIG, H. 2006. How useful is the concept of quality of life in psychiatry? In: KATSCHNIG, H., FREEMAN, H. & SARTORIUS, N. (eds.) Quality of Life in Mental Disorders. Chichester: John Wiley & Sons Ltd.
LASALVIA, A. & RUGGERI, M. 2006. Quality of life in mental health service research. In: KATSCHNIG, H., FREEMAN, H. & SARTORIUS, N. (eds.) Quality of Life in Mental Disorders. Chichester: John Wiley & Sons Ltd.
LYONS, M. 2001. The Contribution of Nonprofit and Cooperative Enterprises in Australia. Crows Nest, NSW: Griffin Press.
MCCAUGHEY CENTRE. 2007. Community Indicators Victoria Survey. Melbourne: McCaughey Centre, VicHealth Centre for the Promotion of Mental Health and Community Wellbeing, The University of Melbourne.
NUSSBAUM, M. & SEN, A. 1993. The Quality of Life. Oxford: Clarendon Press.
NUSSBAUM, M. 1999. Sex and Social Justice. New York: Oxford University Press.
NUSSBAUM, M. 2000. Women and Human Development. New York: Cambridge University Press.
OLIVER, J., HUXLEY, P., BRIDGES, K. & MOHAMAD, H. 1996.
Quality of Life and Mental Health Services. London: Routledge.
OECD. 2009. Society at a Glance 2009: OECD Social Indicators. OECD.
PRODUCTIVITY COMMISSION. 11 February 2010. Research Report on the Contribution of the Not-for-Profit Sector. Australian Government.
PAVOT, W. 2008. The assessment of subjective well-being: Successes and shortfalls. In: EID, M. & LARSON, R. J. (eds.) The Science of Subjective Well-being. New York: The Guilford Press.
RUGGERI, M., BISOFFI, G., FONTECEDRO, L. & WARNER, R. 2001. Subjective and objective dimensions of quality of life in psychiatric patients: A factorial approach. The South Verona Outcome Project 4. British Journal of Psychiatry, 178, 268-275.
RUGGERI, M., NOSE, M., BONETTO, C., CRISTOFALO, D., LASALVIA, A., SALVI, G., STEFANI, B., MALCHIODI, F. & TANSELLA, M. 2005. The impact of mental disorders on objective and subjective quality of life. A multiwave follow-up study. British Journal of Psychiatry, 187, 121-130.
SHELDON, K. M. & LYUBOMIRSKY, S. 2004. Achieving sustainable new happiness: prospects, practices, and prescriptions. In: LINLEY, P. A. & JOSEPH, S. (eds.) Positive Psychology in Practice.
Hoboken, NJ: Wiley.
STRACK, F., ARGYLE, M. & SCHWARZ, N. 1991. Subjective Well-being: An Interdisciplinary Perspective. Pergamon Press.
WARE, J. E., Jr., GANDEK, B., KOSINSKI, M., AARONSON, N. K., APOLONE, G., BRAZIER, J. et al. 1998. The equivalence of SF-36 summary health scores estimated using standard and country-specific algorithms in 10 countries: results from the IQOLA Project. International Quality of Life Assessment. Journal of Clinical Epidemiology, 51, 1167-1170.
WILSON, W. 1967. Correlates of avowed happiness. Psychological Bulletin, 67, 294-306.
WINDERMERE CHILD AND FAMILY SERVICES. 2009. Windermere Annual Report. Narre Warren, Victoria.
WINDERMERE CHILD AND FAMILY SERVICES. 2010. Narre Warren, Victoria. Accessed 7 October 2010, http://www.windermere.org.au/
WISEMAN, J. & BRASHER, K. 2008. Community wellbeing in an unwell world: Trends, challenges and possibilities. Journal of Public Health Policy, 29(3), 353-366.
APPENDIX B
WINDERMERE WELLBEING SURVEY
APPENDIX C
WC&FS INVITATION TO PARTICIPATE
APPENDIX D
PLAIN LANGUAGE STATEMENT
School of Global Studies, Social Science and Planning
GPO Box 2476, Melbourne VIC 3001, Australia
Tel. +61 (03) 9925 3650, +61 (03) 9925 2328
http://www.rmit.edu.au/gsssp
Dear Windermere Client,
You are being asked to consider being a part of a research project involving Windermere Child and Family Services and RMIT staff. The title of this research is: Development of an instrument for
evaluating Community Services interventions.
About the project
Windermere has been working with researchers from the Social Work program at RMIT University and has designed a questionnaire which asks about your sense of wellbeing. The purpose of this is to help Windermere Child and Family Services understand whether their work will have assisted you to feel better about yourself in general, and to understand how Windermere contributes to the community. Windermere wants to do this to improve its services to the people it serves. This is the pilot phase and we are starting with two Windermere programs. The questionnaire will be refined after a six-month trial for use across the whole organisation. The project will involve all the people who present at Windermere in their programmes over a six-month period and who agree to be involved.
What it involves for you
The project involves Windermere asking you the normal questions you would be asked when you first come, as well as some additional questions about wellbeing. We expect that the first interview will take less than an hour, and a second interview three months later around twenty minutes.
Possible Risks and Benefits
It is unlikely, but possible, that the extra questions about wellbeing might distress you. If that is the case, please tell us quickly and we will stop and assess whether you need any assistance. The benefits are that Windermere has the opportunity and the capacity to work on improving the quality of its services to you and people like you.
Privacy and Disclosure of Information
Any information which could identify you will be removed from any data files forwarded to RMIT. Windermere has provisions in place to manage privacy and disclosure issues. Participation is voluntary and you are free to withdraw from the project at any time and to withdraw any unprocessed data previously supplied, without affecting any other services to you.
Further information for any questions/problems: Cheryl De Zilwa, Windermere Child & Family Services, Phone No 03 9705 3200, or Assoc Professor Martyn Jones, RMIT University, Phone No. 03 9925 3788. Any complaints about your participation in this project may be directed to the Executive Officer, RMIT Human Research Ethics Committee, Research & Innovation, RMIT, GPO Box 2476, Melbourne 3001. Details of the complaints procedure are available at http://www.rmit.edu.au/rd/hrec_complaints. Thank you.