
Evaluation and Program Planning 34 (2011) 273–282


Evidence-based practices in the field of intellectual and developmental disabilities: An international consensus approach§

Robert L. Schalock a,*, Miguel Angel Verdugo b, Laura E. Gomez c

a Hastings College, NE, United States
b University of Salamanca, Spain
c University of Valladolid, Spain

ARTICLE INFO

Article history: Received 29 June 2010; received in revised form 22 October 2010; accepted 31 October 2010; available online 10 November 2010.

Keywords: Evidence-based practices; Measurement framework; Model; Application guidelines; Interpretation guidelines

ABSTRACT

As evidence-based practices become increasingly advocated for and used in the human services field, it is important to integrate issues raised by three perspectives on evidence: empirical–analytical, phenomenological–existential, and post-structural. This article presents and discusses an evidence-based conceptual model and measurement framework that integrates these three perspectives and results in: multiple perspectives on evidence-based practices that involve the individual, the organization, and society; and multiple interpretation guidelines related to the quality, robustness, and relevance of the evidence. The article concludes with a discussion of five issues that need to be addressed in the future conceptualization, measurement, and application of evidence-based practices. These five are the need to: expand the concepts of internal and external validity, approach evidence-based practices from a systems perspective, integrate the various perspectives regarding evidence-based practices, develop and evaluate evidence-based practices within the context of best practices, and develop a set of guidelines related to the translation of evidence into practice.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction and overview

The concept and application of evidence-based practices originated in medicine in the 1990s and has spread rapidly to many social and behavioral disciplines including education and special education, aging, criminal justice, nursing, public health, mental and behavioral health, and intellectual and closely related developmental disabilities. Representative references for each of these areas are found in Appendix A. Across these broad areas, evidence-based practices generally refer to the use of current best evidence in making clinical decisions about the interventions and/or supports that service recipients receive in specific situations. Despite their widespread advocacy and use, there are at least three different perspectives on evidence and evidence-based practices: the empirical–analytical, the phenomenological–existential, and the post-structural (Broekaert, Autreque, Vanderplasschen, & Colpaert, 2010). These three perspectives relate to different approaches to intervention and the conceptualization, measurement, and application of evidence-based practices. For example, the empirical–analytical perspective places a premium

§ See the list of contributors in Appendix B.
* Corresponding author at: PO Box 285, Chewelah, WA 99109, United States. E-mail addresses: rschalock@ultraplix.com (R.L. Schalock), Verdugo@usal.es (M.A. Verdugo), lauragomez@psi.uva.es (L.E. Gomez).
0149-7189/$ – see front matter © 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.evalprogplan.2010.10.004

on experimental or scientific evidence as the basis for evidence-based practices (e.g., Blayney, Kalyuga, & Sweller, 2010; Brailsford & Williams, 2001; Cohen, Stavri, & Hersh, 2004). In distinction, the phenomenological–existential perspective approaches treatment or intervention success based on the reported experiences of well-being concerning the intervention (e.g., Kinash & Hoffman, 2009; Mesibov & Shea, 2010; Parker, 2005). From a third, post-structural perspective, treatment or intervention decisions and intervention success should be based on an understanding of public policy principles such as inclusion, self-determination, participation, and empowerment (e.g., Broekaert, D'Oosterlinck, & van Hove, 2004; Goldman & Azrin, 2003; Shogren et al., 2009). As evidence-based practices become increasingly advocated for and used in fields such as intellectual and closely related developmental disabilities (ID/DD), it is important to address and integrate the issues raised by these three perspectives. To that end, the purpose of this article is to present and discuss an evidence-based conceptual and measurement framework that integrates these three perspectives and results in: (a) multiple perspectives on evidence-based practices that involve the individual, organization, and society; and (b) multiple interpretation guidelines related to the quality of the evidence, the robustness of the evidence, and the relevance of the evidence. Subsequent articles by the authors expand on the interpretation guidelines (Claes, van Hove, Vandevelde, Broekaert, & Decramer, in preparation), and application to individuals (Buntinx & Didden,



in preparation) and organizations (Van Loon & Bonham, in preparation). The five sections of the article: (a) present an operational definition of evidence-based practices that integrates the various definitions found in the literature; (b) present an evidence-based practices conceptual model that integrates the sequential component steps involved in moving from the practices in question to interpretation guidelines for the evidence produced; (c) summarize the parameters of an evidence-based practices measurement framework that aligns each major component of the conceptual model to three perspectives: the individual, the organization, and society; (d) present a number of guidelines that can be used to evaluate the quality, robustness, and relevance of the evidence; and (e) discuss the utility of the proposed conceptual model and measurement framework in reference to the challenges posed by evidence-based practices in any field, but especially ID/DD. As suggested in the title, the material presented reflects an international consensus approach. Additionally, the authors want to stress the on-going nature of this work and the need for continued dialog among all stakeholders.

Throughout the article we suggest there are three valid uses of current evidence-based practices. These purposes are to make:

- Clinical decisions about the interventions, services, or supports that clients receive in specific situations. Such decisions should be consistent with the client's values and beliefs.
- Managerial decisions about the strategies used by an organization to increase its effectiveness, efficiency, and sustainability.
- Policy decisions regarding strategies for enhancing an organization or system's effectiveness, efficiency, and sustainability.

2. Defining evidence-based practices

Evidence-based practices are practices that are based on current best evidence that is obtained from credible sources that used reliable and valid methods and that is based on a clearly articulated and empirically supported theory or rationale. This operational definition developed by the authors is consistent with both the multiple perspectives on evidence and the following four core aspects of evidence-based practices definitions found in the literature:

1. Experimental or empirical basis. For example, Kazdin and Weisz (2003) stress that interventions must be evaluated in well-controlled experiments and must show replications of the effects so there are assurances that any effect or outcome can be reproduced, ideally by others.

2. Multiple research designs. Chaffin and Friedrich (2004), for example, stress that evidence can also include information from qualitative studies, and even information from interactions with clients. Thus, interventions can be qualified as evidence-based when they receive qualitative, theoretical, or clinical support. Similarly, Rathvon (2008) indicates that evidence-based practices are based on the application of rigorous, systematic, and objective procedures or experiments, rigorous data analysis, and [those] accepted by a peer-reviewed journal or approved by a panel of independent experts. Even single-case research studies are an important source for evidence-based practice (Parker & Hagan-Burke, 2007).

3. Practice-driven evaluation.
As discussed by Veerman and van Yperen (2007), practice-driven evaluation as a research enterprise involves researchers and providers working jointly to gather information about the effects of an intervention—and thus conducting 'transdisciplinary evaluations.' As summarized by these authors, practice-driven evaluation is contrasted with

methods-driven evaluation, which treats randomized control trials as the gold standard.

4. Aid to decision making. For example, Sackett, Richardson, Rosenberg, and Haynes (2005) and Scott and McSherry (2008) approach evidence-based practices on the basis of conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individuals and the services/supports they receive.

3. Evidence-based practices conceptual model

3.1. Context

Three types of models have been formulated recently to address the complexity of evidence-based practices. We refer to these models as sequential, developmental, and transdisciplinary. Cooley, Jones, Imig, and Villaruel (2009), for example, have developed a five-step, sequential model that involves: (a) determine question(s) to be answered to inform the client-specific decision; (b) search for research evidence related to the question(s); (c) evaluate the research evidence for its validity, relevance, and clinical applicability; (d) integrate the research evidence with clinical experience and client preferences to answer the question; and (e) assess performance of the previous steps as well as outcomes in order to improve future decisions. In distinction to a sequential model, Veerman and van Yperen (2007) have suggested a four-stage developmental model that requires one to: (a) specify the core elements of an intervention; (b) explicate the rationale and theory underlying an intervention; (c) obtain preliminary evidence that the intervention works in actual practice; and (d) present clear evidence that the intervention is responsible for the observed effect(s) and involves randomized control trials and well-designed repeated case studies. A third model focuses on both transdisciplinary research and an ecological framework. Satterfield, Spring, and Brownson (2009), for example, present a transdisciplinary model of evidence-based practices that includes an ecological framework that emphasizes shared decision making and focuses on the environmental and organization context, best available scientific evidence, practitioner's expertise, clinical expertise, decision making, and client preferences.

3.2. Authors' conceptual model

The evidence-based practices conceptual/process model proposed by the authors reflects aspects of each of the three types of models just described, as well as a systems perspective towards the establishment of evidence-based practices. As shown in Fig. 1, the first step of the model focuses on a clear understanding, from a systems perspective, of the practices in question. Such practices typically relate to assessment, intervention, and the provision of individualized supports and/or the organization's use of quality strategies. Each of these practices has intended effects at the level of the individual (e.g., enhanced personal outcomes), the organization (e.g., enhanced effectiveness and efficiency, or improved service quality), and society (e.g., people with disabilities achieving a higher social-economic status, more positive community attitudes towards persons with ID/DD, changes in education and training programs, changes in resource allocation patterns, or changes in public policies). These intended effects are evaluated on the basis of behavior change indicators and changes in personal outcomes, organization outputs, and societal-level indicators reflective of the above-referenced societal intended effects.
As discussed later in reference to Fig. 2, a number of evidence-gathering strategies can be used to evaluate the evidence indicators and thus 'produce evidence.' This model component, which emphasizes multiple evidence-gathering strategies, is essential since it underscores the value of different research designs that can be used to address a problem that has long plagued the field of evidence-based practices: even though one might not be able to do experimental/control or randomized control trials, one can evaluate the possible effects of a specific intervention without a treatment control condition. However, any application beyond the treated sample is speculative. The final stage of the proposed conceptual model is the use of interpretation guidelines to evaluate the quality, robustness, and relevance of the evidence.


Fig. 1. Evidence-based practices model.

4. Evidence-based measurement framework

Three major components depicted in Fig. 1 are operationalized in Fig. 2: the practices in question, the evidence indicators, and the evidence-gathering strategy. These three components address evidence-based practices from a multiple systems perspective representing the individual, the organization, and society at large. This systems perspective is responsive to the drivers of evidence-based practices, which Scott and McSherry (2008) identify as professional (e.g., specific practices that lead to better outcomes), organizational (i.e., increasing an organization's effectiveness and efficiency), and societal (i.e., the public's demand for best practices).

4.1. Individual perspective

At the individual level, the practices in question typically relate to assessment, diagnosis, and interventions that can vary from medical treatment to person-centered support strategies, to the use of positive behavior supports. Regardless of the practices in question, objective and measurable indicators need to be developed that can be used to determine whether the practice in question leads to positive personal outcomes.

Fig. 2. Evidence-based practices measurement framework.



For assessment-related questions, the indicators might well be that the purpose of the assessment is aligned with the actual instruments used. For diagnosis, the indicator might well be consistency and agreement across diagnosticians. For interventions/supports, one typically uses change indicators that are targeted to behavioral, physical, psychological, or subjective well-being. The measurement of these indicators requires assessment instruments that have demonstrated reliability and validity. Evidence-gathering strategies at the individual level can range from anecdotes or opinions of respected authorities (based on clinical evidence, descriptive studies, or reports of expert committees) to strong evidence from systematic reviews of well-designed multiple baseline studies and randomized controlled trials (National Health and Medical Research Council, 1999). It is important to point out, however, that if all controlled trials/multiple baseline studies are equally well designed and implemented, the quality of evidence increases as one uses randomized controlled trials or multiple baseline studies (Sackett et al., 2005).
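For readers who want to see what the diagnostic-agreement indicator mentioned above might look like in practice, the following minimal sketch computes Cohen's kappa (chance-corrected agreement) between two diagnosticians rating the same cases. The clinician labels and ratings are hypothetical and purely illustrative; they are not drawn from the framework or from any study cited here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnostic classifications assigned by two clinicians.
clinician_1 = ["ID", "ID", "no ID", "ID", "no ID", "ID", "ID", "no ID"]
clinician_2 = ["ID", "no ID", "no ID", "ID", "no ID", "ID", "ID", "ID"]
print(f"kappa = {cohens_kappa(clinician_1, clinician_2):.2f}")
```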
4.2. Organization perspective

Human service organizations are currently facing a number of challenges that provide the context for the organization perspective summarized in Fig. 2. Three of the most important of these challenges are: dwindling resources with increasing need for services and supports, a focus on quality strategies that enhance personal outcomes, and increasing social and political expectations and requirements for service/support organizations to be effective in terms of outcomes, efficient in terms of resource allocation, and evidence-based. As a result of these challenges, the organization practices in question typically focus on the five quality strategies listed in Fig. 2, and how these strategies impact evidence regarding organization outputs and personal outcomes. Although these indicators are discussed in more detail elsewhere (e.g., Schalock, Bonham, & Verdugo, 2008; Schalock, Verdugo, Bonham, Fantova, & van Loon, 2008), the following material provides an overview/summary.

Quality strategies are the techniques that organizations use to enhance personal outcomes and organization outputs. As listed in Fig. 2, these include person-centered planning, systems of supports (that involve a standardized assessment of support needs, the alignment of the person's assessed support needs to the individualized supports provided, and the application of a system of supports), support staff techniques (e.g., facilitating personal use of assistive technology, developing participation opportunities, and fostering consumer empowerment), program options (e.g., community living options and employment opportunities), and consumer involvement.

Personal outcomes are defined as the benefits to program recipients that are the result, directly or indirectly, of program activities, services, and supports. Personal outcomes can be approached from two perspectives. The first is a delineation of valued life domains as reflected in the Convention on the Rights of Persons with Disabilities (United Nations, 2006). The second, and complementary, perspective is based on recent work in the field of individual-referenced quality of life that focuses on the measurement of core quality of life domains (Gómez, Verdugo, Arias, & Arias, in press; Schalock, Gardner, & Bradley, 2007; Verdugo, Arias, Gómez, & Schalock, 2010; Wang, Schalock, Verdugo, & Jenaro, 2010). Consistent with the articles contained within the UN Convention, these core quality of life domains are rights, participation, self-determination, physical well-being, material well-being, social inclusion, emotional well-being, and personal development.

Organization outputs are the products that result from the resources a program uses to achieve its goals and the actions and/or processes implemented by an organization to produce these products. Commonly used output indicators include effort measures (e.g., units of service/support provided), efficiency measures (e.g., unit cost, indirect/overhead cost), staff-related measures (e.g., retention rates and satisfaction measures), program options (e.g., employment opportunities and community living alternatives), and network indicators (shared organization functions).

In reference to evidence gathering, organizations have a number of options depending on their evaluation capability and the availability of a real-time management information system. In the authors' opinion, the best evidence-gathering strategy is to use a multivariate research design that allows the organization to determine statistically which of the quality strategies and/or organization outputs (as described above and in Fig. 2) best predict short-term personal outcomes. Identifying these significant outcome predictors produces 'evidence' for 'evidence-based practices' as well as the ability for the organization to target the significant predictors for increased attention and resource allocation. These uses are described more fully in Schalock, Bonham, et al. (2008) and Schalock, Verdugo, et al. (2008).
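As a rough illustration of the multivariate strategy just described, the sketch below fits an ordinary least squares regression of a personal-outcome index on three quality-strategy indicators and reports which predictors carry the most weight. The variable names and simulated data are hypothetical; an organization would substitute its own outcome measures and strategy indicators, and would normally use a dedicated statistics package for inference.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients = 200

# Hypothetical quality-strategy indicators scored per client (0-10 scales).
person_centered_planning = rng.uniform(0, 10, n_clients)
support_needs_alignment = rng.uniform(0, 10, n_clients)
community_options = rng.uniform(0, 10, n_clients)

# Hypothetical personal-outcome index; in practice this would come from
# a validated quality-of-life instrument, not a simulation.
outcome = (0.6 * person_centered_planning
           + 0.3 * support_needs_alignment
           + 0.1 * community_options
           + rng.normal(0, 1.0, n_clients))

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones(n_clients),
                     person_centered_planning,
                     support_needs_alignment,
                     community_options])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

for name, b in zip(["intercept", "person-centered planning",
                    "support needs alignment", "community options"], coef):
    print(f"{name:>26s}: {b:+.2f}")
```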
4.3. Societal perspective

As discussed more fully in Schalock et al. (2010), Shogren et al. (2009), and Shogren and Turnbull (2010), multiple social factors influence public policy and its adoption and implementation. The goals and purposes of public policy and public service systems for people with disabilities have changed significantly over time due to changes in both ideology and increased knowledge regarding the nature of disability. These changes have been impacted significantly by social and political movements, attitudinal changes, judicial decisions, statutory changes, participatory research and evaluation frameworks, and advances in research regarding the nature of disability. National and international disability policy is currently premised on a number of concepts and principles that are: (a) person-referenced, such as self-determination, inclusion, empowerment, individual and appropriate services, productivity and contribution, and family integrity and unity; and (b) organization- and system-referenced, such as antidiscrimination, coordination and collaboration, and accountability (Shogren & Turnbull, 2010; Stowe, Turnbull, & Sublet, 2006). Over time, as our understanding of disability and human functioning has deepened and become more progressive, these evolving core concepts and principles have fostered public policy that promotes change based on various types of information (e.g., research, evaluation, quality assurance). They have also increased our interest in generating outcome data that can be used both to establish evidence indicators (such as those summarized in Fig. 2) and dependent variables that can be used in evidence-based practices research.

Although there are a number of evidence-gathering strategies available at the societal level, the authors feel that meta-analyses are potentially the most beneficial in identifying evidence-based practices. For example, a recently published review (Walsh et al., 2010) reports the following three policy-related factors as highly predictive of enhanced quality of life-related personal outcomes: (a) participation opportunities (e.g., increased community activities and contact with family members and friends); (b) living arrangement (e.g., more normalized community living arrangements are generally associated with enhanced personal outcomes); and (c) support staff strategies (e.g., facilitative assistance in communication techniques, assistive technology devices, and ensuring a sense of basic security).
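To make the idea of pooling societal-level evidence concrete, here is a minimal random-effects meta-analysis sketch using the DerSimonian–Laird estimator. The study effect sizes and variances are invented for illustration only; they are not the Walsh et al. (2010) data.

```python
import numpy as np

# Hypothetical standardized mean differences and their variances from
# five studies of a policy-related factor (e.g., community living arrangement).
effects = np.array([0.45, 0.30, 0.62, 0.18, 0.51])
variances = np.array([0.04, 0.02, 0.06, 0.03, 0.05])

# Fixed-effect weights and the heterogeneity statistic Q.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)

# DerSimonian-Laird estimate of between-study variance (tau^2).
k = len(effects)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled estimate and its standard error.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```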



5. Interpretation guidelines

The multiple perspectives on evidence-based practices discussed in the Introduction and Overview section necessitate multidimensional interpretation guidelines. The interpretation guidelines discussed in this section of the article also need to be understood within the context of the following three current uses of evidence-based practices. These three involve making: (a) clinical decisions about the interventions, services, or supports that clients receive in specific situations; (b) managerial decisions about the strategies used by an organization to increase its effectiveness (i.e., achieving the organization's intended results related to personal outcomes and organization outputs) and efficiency (i.e., producing the planned outcomes and outputs in relation to the expenditure of resources); and (c) public policy decisions regarding the provision of supports and services to people with disabilities and the promulgation of strategies that enhance an organization or system's effectiveness, efficiency, and sustainability.

Two of the evidence-based practices interpretation guidelines discussed next—the quality of the evidence and the robustness of the evidence—are based on the empirical–analytical perspective and the following definition of evidence-based practices: practices that are based on current best evidence that is obtained from credible sources that used reliable and valid methods and based on a clearly articulated and empirically supported theory or rationale. The third interpretation guideline—the relevance of the evidence—is based on the emphasis


within the phenomenological–existential perspective on reported experiences of well-being concerning the intervention, and the emphasis within the post-structural perspective on an understanding of the impact of public policy concepts and principles.

5.1. The quality of the evidence

The quality of evidence is related to the methodology used. Based on the methodology used, the quality of evidence can be ranked from high to low (Sackett et al., 2005): randomized trials and experimental/control designs are ranked higher than quasi-experiments; quasi-experiments are ranked higher than pre–post comparisons; pre–post comparisons are ranked higher than correlational studies; correlational studies are ranked higher than case studies; and case studies are ranked higher than anecdotes, satisfaction surveys, or opinions of respected authorities.

5.2. The robustness of the evidence

The interpretation of evidence requires more than understanding the methodology used. As discussed by Wade (1999), before trying to determine how or why some intervention works, or who benefits, it is essential to know whether it has any effect at all. Table 1 summarizes interpretation guidelines regarding evidence from quantitative and qualitative research studies. As noted in Table 1, there are five broad levels of research designs related to evidence-based practices: experimental, quasi-experimental, non-experimental, single case study, and qualitative. Table 1 also provides examples of specific research/statistical techniques used within each design, and its right column lists specific examples of effectiveness criteria. A further discussion of these effectiveness criteria can be found in American Psychological Association (2010), Carter and Little (2007), Cesario, Morin, and Santa-Domato (2002), Claes et al. (in preparation), Cohen and Crabtree (2008), Ferguson (2009), Franzblau (1958), Lipsey (1998), Parker and Hagan-Burke (2007), and Wilkinson and the APA Task Force on Statistical Inference (1999, p. 599).

Table 1
Interpretation guidelines: the robustness of the evidence for quantitative and qualitative research.

Experimental designs (between-subjects designs; within-subjects designs) and quasi-experimental designs (between-subjects nonequivalent group: pretest–posttest nonequivalent control group design; within-subjects pre–post: time-series)
- Examples of techniques: bivariate (ANOVA; χ²; t; Mann–Whitney test; Wilcoxon signed-rank test, etc.); multivariate (MANOVA; ANCOVA; multiple discriminant analysis; multiple regression analysis; structural equation modeling; growth curve analysis, etc.).
- Examples of effectiveness criteria: effect size—correlation (R², Kendall's τ), Cohen's d, Glass's Δ, Hedges' g, Cohen's f², φ, Cramér's φ or Cramér's V, odds ratio (OR), relative risk or risk ratio (RR), eta-squared (η²), partial eta-squared (partial η²), omega-squared (ω²), Cohen's f; goodness of fit; confidence intervals; magnitude or value of the test statistic; degrees of freedom; p value; measure of variability (e.g., standard error).

Non-experimental designs (between-subjects nonequivalent group: differential research, posttest-only nonequivalent control group; within-subjects pre–post: one-group pretest–posttest; developmental research: cross-sectional, longitudinal)
- Examples of techniques: common language effect size; percentile rank; binomial effect size display.
- Examples of effectiveness criteria: CLES (probability of higher/lower); PR (percentile rank); BESD (% success rate).

Single case study
- Examples of techniques: percent of all non-overlapping data; Cohen's percent of non-overlapping data.
- Examples of effectiveness criteria: % improvement; PAND (% non-overlapping data); CPND (% non-overlapping data).

Qualitative (grounded theory; ethnographics; participatory action research; case study)
- Examples of effectiveness criteria: descriptive vividness; methodological congruence; analytical preciseness; theoretical consideration.
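As a concrete illustration of two of the group-design effectiveness criteria listed in Table 1, the following sketch computes Cohen's d (using the pooled standard deviation) and the small-sample Hedges' g correction for two hypothetical groups of outcome scores; the data are invented for illustration only.

```python
import math

def cohens_d(group_1, group_2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group_1), len(group_2)
    m1 = sum(group_1) / n1
    m2 = sum(group_2) / n2
    var1 = sum((x - m1) ** 2 for x in group_1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group_2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(group_1, group_2):
    """Cohen's d with the small-sample bias correction."""
    d = cohens_d(group_1, group_2)
    df = len(group_1) + len(group_2) - 2
    return d * (1 - 3 / (4 * df - 1))

# Hypothetical outcome scores for an intervention and a comparison group.
intervention = [14, 17, 15, 19, 16, 18, 20, 15]
comparison = [12, 14, 11, 15, 13, 14, 12, 13]
print(f"Cohen's d = {cohens_d(intervention, comparison):.2f}, "
      f"Hedges' g = {hedges_g(intervention, comparison):.2f}")
```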



5.3. The relevance of the evidence

Determining the relevance of the evidence requires three critical thinking skills that are increasingly being recognized as the cognitive engine driving the process of knowledge development and use: analysis, evaluation, and interpretation (Schalock et al., 2010). Analysis involves examining the evidence and its component parts and reducing the complexity of the evidence into simpler or more basic components or elements. The focus of the analysis should be determining the degree of alignment among the practices in question, the evidence indicators, and the evidence-gathering strategy used (see Figs. 1 and 2). Evaluation involves determining the precision, accuracy, and integrity of the evidence through careful appraisal of the results of the evidence-gathering strategy. An essential part of this evaluation involves determining the level of confidence that one has in the evidence as reflected in the previously discussed guidelines regarding the quality and robustness of the evidence. Interpretation involves evaluating the evidence in light of the practices in question, the intended application purpose(s), and the intended effect(s). Such interpretation should be guided by the person's perception of benefit vs. cost, field congruence models (e.g., United Nations, 2006), and clinical judgment (Schalock & Luckasson, 2005). As defined by Schalock et al. (2010), clinical judgment is a special type of judgment that is rooted in a high level of clinical experience and emerges from extensive data. It is based on the clinician's explicit training, direct experience with those with whom the clinician is working, and specific knowledge of the person and the person's environment. Clinical judgment is characterized by its being systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly).

Guidelines for evaluating the relevance of the evidence are just emerging in the ID/DD field. To facilitate an active dialog, the authors suggest that the following three guidelines will assist decision makers in evaluating the relevance of the evidence. For those making clinical decisions related to diagnosis, classification, and planning supports, relevant evidence is that which enhances the congruence between the specific problem or issue and the available evidence. Such congruence will facilitate more accurate diagnoses, the development of more functional and useful classification systems, and the provision of a system of supports based on the person's assessed support needs. From the service recipient's perspective, information regarding specific evidence-based practices should also assist the person in making personal decisions that are consistent with his/her values and beliefs.
Examples include decisions regarding informed consent, placement options, selection of service/support providers, agreeing to interventions such as medication, and/or opinions regarding the intensity and duration of individualized supports.

For those making managerial decisions, relevant evidence identifies those practices that enhance a program's effectiveness and efficiency. As summarized in Fig. 2, these practices relate to implementing quality strategies that have been shown to significantly affect personal outcomes and organizational outputs.

For those making policy decisions, relevant evidence is that which: (a) supports and enables organizations to be effective, efficient, and sustainable; (b) impacts public attitudes towards people with disabilities; (c) enhances long-term outcomes for persons with disabilities; (d) changes education and training strategies; and (e) encourages efficient resource allocation patterns.

6. Discussion

Despite the popular appeal of the concept and application of evidence-based practices in many fields, there continues to be both a lack of application of evidence-based practices and a gap between research and practice (Chen, 2010; Craig, Douglas, Farrell, & Taxman, 2009; Donohue, Allen, & Romero, 2009; Ferriter & Huband, 2005; Jung & Newton, 2009; Kutash, Duchnowski, & Lynn, 2009; Newnham & Page, 2010; Verdugo, 2009; Veerman & van Yperen, 2007). Thus, it is important to discuss a number of issues that need to be addressed as clinicians, managers, and policy makers incorporate evidence-based research into their decision making, while simultaneously accommodating the individual needs and personal goals of their clientele. In this final section of the article we focus on five issues related to the future understanding and use of evidence-based practices. These issues and our proposed approaches are based on current literature and the authors' work in the field of disabilities generally and ID/DD specifically. These five issues are the need to: expand the concept of internal and external validity, approach evidence-based practices from a systems perspective, integrate the various perspectives regarding evidence-based practices, develop and evaluate evidence-based practices within the context of best practices, and develop a set of guidelines to translate evidence into practice.

6.1. Expand the concept of internal and external validity

Regardless of one's perspective on evidence, it is widely accepted that validity is an essential element of any evidence-gathering strategy. In 1963, Campbell and Stanley proposed a validity model that continues to impact how evidence is interpreted. Their model involved two principal types of validity: internal and external. Internal validity asks whether a particular intervention makes a difference; external validity asks whether an experimental effect/intervention can be generalized to other populations, settings, or treatment and measurement variables. As discussed by Chen (2010), due to the primacy of internal validity, most validity issues have been addressed by a top-down approach in which a series of evaluations begin by maximizing internal validity through efficacy evaluations, progressing to effectiveness evaluations aimed at strengthening external validity. Within this paradigm, efficacy evaluations or studies assess treatment effects in an ideal, highly controlled, clinical research setting, with randomized controlled trials considered the gold standard. If the efficacy evaluations find the treatment/intervention has the desired effect on a small, homogeneous sample, effectiveness evaluations then estimate treatment or intervention effects in real-world environments.
According to Chen (2010), and consistent with the focus of the other perspectives on evidence-based practices described in this article, the problem with a top-down approach that focuses primarily on internal validity is whether that model and approach is universally suitable for either program evaluation or, in the case of the present article, the interpretation of evidence based on multiple evidence-gathering strategies. To overcome the practical problems associated with an overreliance on internal validity, and to address the multiple perspectives on evidence, a future issue among proponents of



evidence-based practices is to integrate and expand the internal and external types of validity to ensure both the proper interpretation of evidence and the application of best evidence to policies and practices. One promising approach is the 'bottom-up approach to integrative validity' proposed by Chen (2010). Chen's proposed approach expands the Campbell and Stanley model's internal and external validity into three types: internal, external, and viable. This integrative validity model slightly modifies Campbell and Stanley's definition of internal validity by stressing objectivity. Thus, internal validity becomes the extent to which an evaluation ('evidence-gathering strategy' in the present article) provides objective evidence that an intervention causally affects specific outcomes. This expanded definition of internal validity is also consistent with the framework presented in this article regarding objective and measurable evidence indicators and interpretative guidelines regarding the quality, robustness, and relevance of the evidence (Table 1 and adjacent material). In expanding the conception of external validity, Chen's model (2010) and formulations by Weiss (1998) address the perspectives of different stakeholders, and the need for evidence-based practices to meet political, organizational, and community requirements for valid evidence. This expanded conception of external validity—referred to by Chen as viable validity—focuses on the extent to which evaluation of effectiveness can be generalized from a research setting to a real-world setting or from one real-world setting to another targeted setting (Chen, 2010, p. 208). Such generalization incorporates contextual factors, stakeholder views and interests, and components of an action model-change model (Chen, 2005). As found in the relevance guidelines proposed in the present article, the evaluation of viable validity incorporates stakeholder views and experiences regarding whether an intervention or program is practical, aids in decision making, is suitable and affordable, and is helpful in the real world. By stressing viable validity as it relates to evidence-based practices, one can also address the huge gap between intervention research and practice (Green & Glasgow, 2006), the utility of using mixed methods research (Chen, 2006), and the need for evidence-based practices to be perceived as feasible, acceptable, and responsive to the drivers of evidence-based practices (Scott & McSherry, 2008).

6.2. A systems perspective

Fig. 2 employs a systems perspective to outline the practices in question, the evidence indicators, and the evidence-gathering strategies. Furthermore, we suggest that a systems perspective is critical in evaluating the relevance of the evidence, as reflected in the relevance of evidence guidelines discussed earlier. Two principles formed the basis for Fig. 2 and the proposed interpretation guidelines. The first principle is that changing educational and human service programs in regard to using evidence-based practices is complex and requires many simultaneous changes in individual practices, organization structures and practices, and public policy. The second principle is that translating evidence into practice is most successful when the practices are targeted to the three systems that affect human functioning: the microsystem, the mesosystem, and the macrosystem.
Specifically, evidence-based practices related to the microsystem would focus on the assessment of client needs, intervention or support strategies, and/or the approach used regarding client-referenced outcomes evaluation (including the instruments used). Analogously, evidence-based practices related to the mesosystem would focus on organization policies and practices, community inclusion strategies, desired outcome categories, leadership and management strategies, client and family advocacy activities, and the assessment of service quality. Third, evidence-based practices related to the macrosystem would focus on disability core concepts,


quality assurance systems, resource allocation strategies, compliance with (inter)national laws and conventions, and consumer assessment strategies.

6.3. Integrate perspectives

Historically, the empirical–analytical perspective on evidence-based practices has been the basis for their understanding and application. This influence is reflected in the authors' conceptual model and measurement framework and interpretation guidelines presented earlier in this article. However, one cannot overlook the importance of: (a) incorporating the perspectives of the individual's well-being and the role of organization-based services and supports, and the impact of public policies on personal outcomes and organization outputs; and (b) being sensitive to the drivers of evidence-based practices that include political, organizational, and societal factors (Scott & McSherry, 2008). Consistent with the suggestions of Broekaert et al. (2010), we suggest that the best way to integrate these multiple perspectives is to use a systems perspective in one's conceptualization and measurement framework (Figs. 1 and 2), and multiple judgment criteria that incorporate the quality, robustness, and relevance of the evidence (Table 1 and the adjacent discussion related to interpretation guidelines).

6.4. Evidence-based practices within the context of best practices

It is the authors' opinion that evidence-based practices need to be developed and evaluated within the context of professional best practices that are based on professional ethics, professional standards, and informed clinical judgment. For example, within the field of ID/DD, current best practices can be characterized by their: (a) incorporating current models of human functioning/disability; (b) emphasizing human potential, social inclusion, empowerment, equity, and self-determination; (c) using individualized supports to enhance personal outcomes; and (d) evaluating the impact of interventions, services, and supports on personal outcomes and using that information for multiple purposes that include reporting, monitoring, evaluation, and quality improvement.

Best practices and evidence-based practices will always involve professional judgment. Professional practices in assessment, diagnosis, interventions, and evaluation only deserve the qualification of 'professional' if they are well validated and based on sound knowledge. Knowledge from scientific/empirical evidence is the 'best' knowledge for professional practice. However, since empirically validated knowledge is not 100% complete (in any field), practices must carefully use other sound knowledge such as that provided by consensus models (e.g., disability models such as ICF and AAIDD; Buntinx & Schalock, 2010), procedures (e.g., program logic models), and documents such as program and quality assurance standards. Furthermore, in practices concerning individual decisions, the application of knowledge in a particular case will require tacit knowledge rooted in the person's experiences, explicit knowledge based on empirically based best practices, and clinical judgment.

6.5. Translating evidence into practice

Five guidelines have been published identifying critical elements in the translation of evidence into practice (Pronovost, Berenholtz, & Needham, 2008; Scott & McSherry, 2008). The first guideline relates to the documentation of the effectiveness of the evidence. Earlier sections of this article discussed three sets of guidelines regarding the quality, robustness, and relevance of the evidence.
The second guideline suggests that evidence-based practices should be consistent with an ecological perspective. This guideline allows for a broader range of targets for intervention and encourages the design of interventions that are minimally intrusive. Third, evidence-based practices should relate to practices that are capable of application across all stakeholders and are relevant to the perspective of the individual, organization, or society. Fourth, evidence-based practices should be capable of being easily taught via consultation and learning teams but within the constraints of resources (time, money, expertise). A potentially useful model to implement this guideline involves what Pronovost et al. (2008, p. 963) refer to as the "four Es": engage (i.e., explain why the intervention(s) is/are important); educate (i.e., share the evidence supporting the intervention); execute (i.e., design an intervention 'toolkit' targeted at barriers, standardization, independent checks, reminders, and learning from mistakes); and evaluate (i.e., regularly assess for performance measures and unintended consequences). The fifth guideline is that evidence-based practices are capable of being evaluated by reliable, valid, and practical methods. This requires a clear alignment among the components shown in Figs. 1 and 2. Implementing this guideline also requires clearly stated outcomes that are targeted to concrete, observable behavior that can be objectively measured over time.

7. Conclusion

As indicated in the title, this article is part of a series of articles that address the conceptualization, measurement, and application of evidence-based practices in the field of disabilities from an international perspective. Our purposes in developing this series of articles include providing a framework for further discussions regarding: (a) the need to make more effective and efficient use of all the resources that are going into disability-related services and supports; (b) the belief that one should not just hope for particular results and accept on blind faith that good results will occur; (c) the fact that professionals (including program/organization managers) will need to use evidence-based practices so they can guarantee they are using good methods to improve the quality of their services and be able to answer questions related to the rationale and effectiveness of interventions, services, and supports; and (d) the importance of levels of evidence, realizing that even though one might not be able to do experimental/control studies or randomized controlled studies, one can still evaluate the effectiveness of the intervention(s). In the end, we believe that such a dialog will result in the availability and use of evidence-based practices that enhance clinical, managerial, and policy decisions.

Appendix A. Representative evidence-based practices references

Criminal justice: Craig et al. (2009), Ferriter and Huband (2005), and Knoll and Resnick (2008).
Education and special education: Blayney et al. (2010), Kutash et al. (2009), Marsh (2005), Rathvon (2008), Smylie and Corcoran (2009), Etscheidt and Curran (2010), and Arthur-Kelly, Bochner, Center, and Mok (2007).
Intellectual and developmental disabilities: Burton and Chapman (2004), Nehring (2005), Perry and Weiss (2007), Rudkin and Rowe (1999), and Wehmeyer et al. (2007).
Medicine: Brailsford and Williams (2001), Cooley et al. (2009), Sackett et al. (2005), and Shaneyfelt et al. (2006).

Mental/behavioral health: Center for Evidence-Based Practices (2010), Chaffin and Friedrich (2004), Dixon et al. (2001), Drake, Goldman, and Leff (2001), Jung and Newton (2009), Kazdin (2008), and Kazdin and Weisz (2003).
Nursing: Praeger (2009) and Scott and McSherry (2008).
Public health: Kohatsu, Robinson, and Torner (2004).
Substance abuse: Amodeo, Ellis, and Samet (2006) and Broekaert et al. (2004).

Appendix B. Contributors

The following persons are part of a larger international consensus group that had input into the formulation and final editing of the current manuscript. They are also involved in developing subsequent articles in this series regarding interpretation guidelines and the application of evidence-based practices at the individual and organizational level.

Claudia Claes is a lecturer at the Faculty of Social Work and Welfare Studies of University College Gent and a researcher at the Department of Orthopedagogics, Ghent University (Belgium). Her research interests include quality of life in the field of intellectual disability, person-centered planning, and individualized supports; e-mail address: claudia.claes@ugent.be.

Wil H.E. Buntinx, Ph.D. is Director of Buntinx Training and Consultancy and Research Assistant Professor, Governor Kremers Center, Maastricht University (Netherlands). He has published widely in the field of disability policy and organizational management and change; e-mail address: whebuntinx@maastricht.edu.nl.

Gordon S. Bonham, Ph.D. is President of Bonham Research (Cranberry Twp., PA). He has been the researcher for the Maryland Ask Me! Project since its beginning in 1997. He also conducted research at the National Center for Health Statistics and currently publishes in the area of human services businesses.

Jos van Loon, Ph.D. is an orthopedagogue and manager at Arduin, a service provider in the Netherlands. He has been instrumental in the process of deinstitutionalization within the Netherlands. He has an adjunct appointment at the University of Gent and has published widely in the field of community-based programs for persons with disabilities; e-mail address: jloon@arduin.nl.

Geert van Hove, Ph.D. is a professor and senior lecturer at the Department of Orthopedagogics (Special Education), Gent University (Belgium). He has authored and co-authored numerous publications in the field of intellectual disability. His research interests are inclusive education, quality of life, and self-advocacy; e-mail address: geert.vanhove@ugent.be.

Stijn Vandevelde, Ph.D. is a lecturer and researcher at the Faculty of Social Work and Welfare Studies at University College Ghent and a visiting professor at the Department of Orthopedagogics, Gent University (Belgium). His research interests include theoretical orthopedagogics, quality of life, and treatment for special target groups, including offenders who are mentally ill and substance abuse in persons with intellectual disability; e-mail address: Stijn.vandevelde@ugent.be.

R. Didden, Ph.D. is Professor of Intellectual Disabilities, Learning and Behavior at the Behavior Science Institute and Department of Special Education of the Radboud University Nijmegen (Netherlands). His research and clinical interests include treatment of behavioral and mental disorders and evidence-based practices in the field of



intellectual and closely related developmental disabilities; e-mail: ridden@unymger.edu.nl.

Adelien Decramer is a lecturer and researcher at the Faculty of Business Administration and Public Administration, University College Ghent, and the Faculty of Economics and Business Administration, Ghent University (Belgium). Her research interests include general management and individual and organization performance management in the public, non- and social profit sector; e-mail address: adelien.decramer@hogent.be.

Eric Broekaert, Ph.D. is a full professor at the Department of Orthopedagogics (Special Education), Ghent University (Belgium). He has authored and co-authored numerous articles, books, and book chapters on substance abuse treatment, therapeutic communities, and theoretical orthopedagogics; e-mail address: eric.broekaert@ugent.be.

References

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Amodeo, M., Ellis, M. A., & Samet, J. H. (2006). Introducing evidence-based practices into substance abuse treatment using organization development methods. The American Journal of Drug and Alcohol Abuse, 32, 555–560.
Arthur-Kelly, M., Bochner, S., Center, Y., & Mok, M. (2007). Socio-communicative perspectives on research and evidence-based practice in the education of students with profound and multiple disabilities. Journal of Developmental and Physical Disabilities, 19(3), 161–176.
Blayney, P., Kalyuga, S., & Sweller, J. (2010). Interactions between the isolated-interactive elements effect and levels of learner expertise: Experimental evidence from an accountancy class. Instructional Science, 38(3), 277–287.
Brailsford, E., & Williams, P. L. (2001). Evidence based practice: An experimental study to determine how different working practice affects eye radiation dose during cardiac catheterization. Radiography, 7(1), 21–30.
Broekaert, E., Autreque, M., Vanderplasschen, W., & Colpaert, K. (2010). The human prerogative: A critical analysis of evidence-based and other paradigms of care in substance abuse treatment. Psychiatric Quarterly (published online: April 2, 2010).
Broekaert, E., D'Oosterlinck, F., & van Hove, G. (2004). The search for an integrated paradigm of care models for people with handicaps, disabilities, and behavioral disorders. Education and Training in Developmental Disabilities, 39, 206–216.
Buntinx, W. H. E., & Didden, R. (in preparation). Evidence-based practices: Applications to individuals with disabilities.
Buntinx, W. H. E., & Schalock, R. L. (2010). Models of disability, quality of life, and individualized supports: Implications for professional practices. Journal of Policy and Practice in Intellectual Disabilities, 7(4), 283–294.
Burton, M., & Chapman, M. J. (2004). Problems of evidence-based practices in community based services. Journal of Intellectual Disabilities, 8, 56–70.
Campbell, D. T., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17, 1316–1328.
Center for Evidence-Based Practices. (2010, May). Young children with challenging behavior. http://www.challengingbehavior.org Accessed 01.05.10.
Cesario, S., Morin, K., & Santa-Domato, A. S. (2002). Evaluating the level of evidence. Qualitative Research, 31, 708–714.
Chaffin, M., & Friedrich, B. (2004). Evidence-based treatments in child abuse and neglect. Children and Youth Services Review, 26, 1097–1113.
Chen, H. T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage.
Chen, H. T. (2006). A theory-driven evaluation perspective on mixed methods. Research in the Schools, 13(1), 75–83.
Chen, H. T. (2010). The bottom-up approach to integrative validity: A new perspective for program evaluation. Evaluation and Program Planning, 33(3), 205–214.
Claes, C., van Hove, G., Vandevelde, S., Broekaert, E., & Decramer, A. (in preparation). Evidence-based practices: Interpretation guidelines.
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. Annals of Family Medicine, 6, 331–339.
Cohen, A. M., Stavri, P. Z., & Hersh, W. R. (2004). A categorization and analysis of the criteria of evidence-based medicine. International Journal of Medical Information, 73, 35–43.
Cooley, H. M. J., Jones, R. S., Imig, D. R., & Villaruel, F. A. (2009). Using family paradigms to improve evidence-based practice. American Journal of Speech-Language Pathology, 18, 212–221.
Craig, E. H., Douglas, W. Y., Farrell, J., & Taxman, F. S. (2009). Associations among state and local organizational contexts: Use of evidence-based practices in the criminal justice system. Drug and Alcohol Dependence, 103S, S23–S32.
Dixon, L., McFarlane, W. R., Lefley, H., Lucksted, A., Cohen, M., & Falloon, I. (2001). Evidence-based practices for services to families of people with psychiatric disabilities. Psychiatric Services, 52, 903–910.


Donohue, B., Allen, D. N., & Romero, V. (2009). Description of a standardized treatment center that utilizes evidence-based clinic operations to facilitate implementation of an evidence-based treatment. Behavior Modification, 33, 411–436.
Drake, R. E., Goldman, H. H., & Leff, H. S. (2001). Implementing evidence-based practices in routine mental health service settings. Psychiatric Services, 52, 179–182.
Etscheidt, S., & Curran, C. M. (2010). Reauthorization of the Individuals with Disabilities Education Improvement Act (IDEA, 2004). The peer-reviewed research requirement. Journal of Disability Policy Studies, 21(1), 29–39.
Ferguson, C. F. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40(5), 532–538.
Ferriter, M., & Huband, N. (2005). Does the non-randomized controlled study have a place in the systematic review? A pilot study. Criminal Behaviour and Mental Health, 15, 111–120.
Franzblau, A. (1958). A primer of statistics for non-statisticians. New York: Harcourt, Brace & World.
Goldman, H. H., & Azrin, S. T. (2003). Public policy and evidence-based practice. Psychiatric Clinics of North America, 26(4), 899–917.
Gómez, L. E., Verdugo, M. A., Arias, B., & Arias, V. B. (in press). A comparison of alternative models of individual quality of life. Social Indicators Research.
Green, L. W., & Glasgow, R. E. (2006). Evaluating the relevance, generalization, and applicability of research: Issues in translation methodology. Evaluation and the Health Professions, 29, 126–153.
Jung, X. T., & Newton, R. (2009). Cochrane reviews of non-medication-based psychodisorders: A systematic literature review. International Journal of Mental Health Nursing, 18, 239–249.
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63(1), 146–159.
Kazdin, A. E., & Weisz, J. R. (2003). Evidence-based psychotherapies for children and adolescents. New York: The Guilford Press.
Kinash, S., & Hoffman, M. (2009). Children's wonder-initiated phenomenological research: A rural primary school case study. Evaluation, 6(3), 1–14.
Knoll, J. L., & Resnick, P. J. (2008). Insanity defense evaluations: Toward a model for evidence-based practice. Brief Treatment and Crisis Intervention, 8(1), 92–110.
Kohatsu, N. D., Robinson, J. G., & Torner, J. C. (2004). Evidence based public health: An evolving concept. American Journal of Preventive Medicine, 27(5), 417–421.
Kutash, K., Duchnowski, A. J., & Lynn, N. (2009). The use of evidence-based instructional strategies in special education settings in secondary schools: Development, implementation and outcome. Teaching and Teacher Education, 25, 917–923.
Lipsey, M. W. (1998). Design sensitivity: Statistical power for applied experimental research. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods (pp. 39–68). Thousand Oaks, CA: Sage.
Marsh, R. (2005). Evidence-based practice for education. Educational Psychology, 25(6), 701–704.
Mesibov, G., & Shea, B. V. (2010). The TEACCH program in the era of evidence-based practices. Journal of Autism and Developmental Disorders, 40, 570–579.
National Health and Medical Research Council. (1999). A guide to the development, implementation and evaluation of clinical practice guidelines. Canberra: NHMRC. http://www.nhmrc.gov.au/_files_nhmrc/file/publications/synopses/cp30.pdf Accessed 02.06.10.
Nehring, W. M. (2005). Health promotion for persons with intellectual/developmental disabilities: The state of scientific evidence. Washington, DC: American Association on Mental Retardation.
Health promotion for persons with intellectual/developmental disabilities: The state of scientific evidence. Washington, DC: American Association on Mental Retardation. Newnham, E. A., & Page, A. C. (2010). Bridging the gap between best evidence and best practice in mental health. Clinical Psychology Review, 30(1), 27–142. Parker, M. (2005). False dichotomies: EBM, clinical freedom, and the art of medicine. Medical Humanities, 31, 23–30. Parker, R. I., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case. Research Behavior Therapy, 38(1), 95–105. Perry, A., & Weiss, A. (2007). Evidence-based practice in developmental disabilities: What it is and why does it matter? Journal of Developmental Disabilities, 13, 167– 172. Praeger, S. (2009). Applying findings to practice. The Journal of School Nursing, 25(2), 173–175. Pronovost, P., Berenholtz, S., & Needham, D. (2008). Translating evidence into practice: A model for large scale knowledge translation. British Medical Journal, 337(25), 963–965. Rathvon, N. (2008). Effective school interventions: Evidence-based strategies for improving student outcomes (2nd ed.). New York: The Guilford Press. Rudkin, A., & Rowe, D. (1999). A systematic review of the evidence base for lifestyle planning in adults with learning disabilities: Implications for other disabled populations. Clinical Rehabilitation, 13, 453–455. Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM. London: Churchill-Livingstone. Satterfield, J. S., Spring, B., & Brownson, R. C. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 87, 368–390. Schalock, R. L., Bonham, G. S., & Verdugo, M. A. (2008a). The conceptualization and measurement of quality of life: Implications for program planning and evaluation in the field of intellectual disabilities. Evaluation and Program Planning, 31, 181– 190. Schalock, R. L., Borthwick-Duffy, S. A., Bradley, V. J., Buntinx, W. H., Coulter, D. L., Craig, E., et al. (2010). Intellectual disability: Definition, classification, and systems of supports. Washington, DC: American Association on Intellectual and Developmental Disabilities. Schalock, R. L., Gardner, J. F., & Bradley, V. J. (2007). Quality of life of persons with intellectual and other developmental disabilities: Applications across individuals,


282

R.L. Schalock et al. / Evaluation and Program Planning 34 (2011) 273–282

organizations, communities, and systems. Washington, DC: American Association on Intellectual and Developmental Disabilities. Schalock, R. L., & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Mental Retardation. Schalock, R. L., Verdugo, M. A., Bonham, G. S., Fantova, F., & van Loon, J. (2008b). Enhancing personal outcomes: Organizational strategies, guidelines, and examples. Journal of Policy and Practice in Intellectual Disability, 5(4), 276–285. Scott, K., & McSherry, R. (2008). Evidence-based nursing: Clarifying the concepts for nurses in practice. Journal of Clinical Nursing, 18(8), 1085–1095. Shaneyfelt, T., Baum, K. D., Bell, D., Feldstein, D., Houston, T. K., Kaatz, S., et al. (2006). Instruments for evaluating education in evidence-based practice: A systematic review. Journal of the American Medical Association, 296(9), 1116–1127. Shogren, K. A., Bradley, V. J., Gomez, S. C., Yaeger, M. H., Schalock, R. L., BorthwickDuffy, S. A., et al. (2009). Public policy and the enhancement of desired outcomes for persons with intellectual disabilities. Intellectual and Developmental Disabilities, 47, 307–319. Shogren, K. A., & Turnbull, R. (2010). Public policy and outcomes for persons with intellectual disability: Extending and expanding the public policy framework of the 11th edition of Intellectual disability: Definition, classification and systems of supports. Intellectual and Developmental Disabilities, 48(5), 375–386. Smylie, M. A., & Corcoran, (2009). Nonprofit organizations and the promotion of evidence-based practice in education. In J. Bransford, D. Stipek, N. Vye, & L. Gomez (Eds.), The role of research in educational improvement (pp. 111–135). Cambridge, MA: Harvard Education Press. Stowe, M. J., Turnbull, H. R., & Sublet, C. (2006). The Supreme Court ‘‘our town’’, and disability policy: Boardrooms and bedroom, courtrooms, and cloakrooms. Mental Retardation, 44, 83–99. United Nations. (2006). Convention on the rights of persons with disabilities http:// www.un.org/disabilities/convention Accessed 02.06.10. Van Loon, J., & Bonham, G. S. (in preparation). Evidence-based practices: Applications to organizations. Verdugo, M. A. (2009). Quality of life, R + D + I and social policies. Siglo Cero, 40(1), 5–21. Verdugo, M. A., Arias, B., Go´mez, L. E., & Schalock, R. L. (2010). Development of an objective instrument to assess quality of life in social services, Reliability and validity in Spain. International Journal of Clinical and Health Psychology, 10(1), 105–123. Veerman, J. W., & van Yperen, T. A. (2007). Degrees of freedom and degrees of certainty: A developmental model for the establishment of evidence-based youth care. Evaluation and Program Planning, 30, 212–221.

Wade, D. T. (1999). Randomized controlled trials—a gold standard? Clinical Rehabilitation, 13, 453–455.
Walsh, P. N., Emerson, E., Lobb, C., Hatton, C., Bradley, V., Schalock, R. L., et al. (2010). Supported accommodation for people with intellectual disability and quality of life: An overview. Journal of Policy and Practice in Intellectual Disabilities, 7(2), 137–142.
Wang, M., Schalock, R. L., Verdugo, M. A., & Jenaro, C. (2010). Examining the factor structure and hierarchical nature of the quality of life construct. American Journal on Intellectual and Developmental Disabilities, 115(3), 218–233.
Wehmeyer, M. L., Agran, M., Hughes, C., Martin, J. E., Mithaug, D. E., & Palmer, S. B. (2007). Promoting self-determination in students with developmental disabilities. New York: Guilford Press.
Weiss, C. (1998). Evaluation (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Wilkinson, L., & the APA Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.

Robert L. Schalock, Ph.D. is Professor Emeritus at Hastings College (Nebraska) and Adjunct Research Professor at the Universities of Kansas (Beach Center on Disabilities), Salamanca, Gent, and Chongqing (Mainland China). His national and international work has focused on the conceptualization and measurement of quality of life and the supports paradigm. He has been involved in the development and evaluation of community-based programs for persons with intellectual and closely related developmental disabilities.

Miguel Angel Verdugo, Ph.D. is Director of the INICO Research Center on Community Integration and Professor of Psychology at the University of Salamanca (Spain). He has published widely in the areas of quality of life, individualized supports, and public policy.

Laura E. Gomez, Ph.D. received her doctorate in Psychology from the University of Salamanca and is a Teaching Assistant at the University of Valladolid and a researcher on disabilities at the Institute on Community Integration (INICO) at the University of Salamanca. She has published internationally in the areas of public policy, model development, and the assessment of quality of life.

