Outcome-Directed Thinking


The e-Advocate Quarterly Magazine Luke 5:4-11 | Habakkuk 2:2

Outcome-Directed Thinking

“Helping Individuals, Organizations & Communities Achieve Their Full Potential”

Vol. XII, Issue LIII – Q-2 April | May | June 2026



The Advocacy Foundation, Inc. Helping Individuals, Organizations & Communities Achieve Their Full Potential

Since its founding in 2003, The Advocacy Foundation has become recognized as an effective provider of support to those who receive our services, having real impact within the communities we serve. We are currently engaged in many community and faith-based collaborative initiatives, with the overall objective of eradicating all forms of youth violence and correcting injustices everywhere. In carrying out these initiatives, we have adopted the evidence-based strategic framework developed and implemented by the Office of Juvenile Justice & Delinquency Prevention (OJJDP). The stated objectives are:

1. Community Mobilization;
2. Social Intervention;
3. Provision of Opportunities;
4. Organizational Change and Development;
5. Suppression [of illegal activities].

Moreover, it is our most fundamental belief that in order to be effective, prevention and intervention strategies must generally be Community Specific, Culturally Relevant, Evidence-Based, and Collaborative. The Violence Prevention and Intervention programming we employ in implementing this community-enhancing framework includes the programs further described throughout our publications, programs, and special projects, both domestically and internationally.

www.TheAdvocacyFoundation.org ISBN: ......... .........

../2015 Printed in the USA

Advocacy Foundation Publishers 3601 N. Broad Street, Philadelphia, PA 19140 (878) 222-0100 | Voice | Fax | SMS





Dedication ______

Every publication in our many series is dedicated to everyone, absolutely everyone, who by virtue of their calling, by Divine inspiration, direction and guidance, is on the battlefield day after day, striving to follow God's will and purpose for their lives. And this is with particular affinity for those Spiritual warriors who are being transformed into excellence through daily academic, professional, familial, and other challenges.

We pray that you will bear in mind:

Matthew 19:26 (NIV)
Jesus looked at them and said, "With man this is impossible, but with God all things are possible." (Emphasis added)

To all of us who daily look past our circumstances, and naysayers, to what the Lord says we will accomplish: Blessings!

- The Advocacy Foundation, Inc.

______

for Naomi!





The Advocacy Foundation, Inc. Helping Individuals, Organizations & Communities Achieve Their Full Potential

The e-Advocate Quarterly

Outcome-Directed Thinking

“Helping Individuals, Organizations & Communities Achieve Their Full Potential”

1735 Market Street, Suite 3750, Philadelphia, PA 19102 | 100 Edgewood Avenue, Suite 1690, Atlanta, GA 30303

John C Johnson III Founder & CEO

(878) 222-0100 Voice | Fax | SMS www.TheAdvocacyFoundation.org





Biblical Authority ______

Luke 5:4-11 (NIV)

4 When he had finished speaking, he said to Simon, “Put out into deep water, and let down the nets for a catch.”

5 Simon answered, “Master, we’ve worked hard all night and haven’t caught anything. But because you say so, I will let down the nets.”

6 When they had done so, they caught such a large number of fish that their nets began to break. 7 So they signaled their partners in the other boat to come and help them, and they came and filled both boats so full that they began to sink.

8 When Simon Peter saw this, he fell at Jesus’ knees and said, “Go away from me, Lord; I am a sinful man!” 9 For he and all his companions were astonished at the catch of fish they had taken, 10 and so were James and John, the sons of Zebedee, Simon’s partners. Then Jesus said to Simon, “Don’t be afraid; from now on you will fish for people.” 11 So they pulled their boats up on shore, left everything and followed him.

______

Habakkuk 2:2 (NIV)
The LORD’s Answer

2 Then the LORD replied:

“Write down the revelation and make it plain on tablets so that a herald may run with it.”





Table of Contents
The e-Advocate Quarterly: Outcome-Directed Thinking

Biblical Authority

I. Introduction
II. Evaluation
III. Program Evaluation
IV. Monitoring
V. The Logical Framework Approach
VI. Logic Models
VII. Strategic Planning
VIII. Lessons Learned: Counterfactual Policy Analysis
IX. References

Attachments
A. The SOCRATES Model
B. Self-Directed Guide to Outcome Mapping
C. Outcome Thinking Glossary

Copyright © 2015 The Advocacy Foundation, Inc. All Rights Reserved.





Introduction

Outcomes Theory provides the conceptual basis for thinking about, and working with, outcomes systems of any type. An outcomes system is any system that identifies, prioritizes, measures, attributes, or holds parties to account for outcomes of any type in any area. Outcomes systems go under various names, such as strategic plans, management-by-results, results-based management systems, outcomes-focused management systems, accountability systems, evidence-based practice systems, and best-practice systems. In addition, outcomes issues are dealt with in traditional areas such as strategic planning, business planning, and risk management.

Outcomes theory theorizes a sub-set of topics covered in diverse ways in other disciplines, such as performance management, organizational development, program evaluation, policy analysis, economics, and the other social sciences. Because outcomes issues are treated in different technical languages in these different disciplines, it is hard for those building outcomes systems to gain quick access to a generic body of principles about how to set up outcomes systems and fix issues with existing ones.

Outcomes theory is made up of several key conceptual frameworks and a set of principles. The most important framework is Duignan's Outcomes System Diagram. This diagram identifies seven building-blocks of outcomes systems, analogous to the building-blocks that make up accounting systems (e.g., the general ledger and the assets register). In the case of an outcomes system, they are a different set of building-blocks, which are necessary for outcomes systems to function properly. The building blocks are:

1. A model of the high-level outcomes being sought within the outcomes system, the steps which it is believed are necessary to get to these outcomes, previous evidence linking such steps to such outcomes, current priorities, and whether current activity is focused on these priorities. Within outcomes theory these models are, for convenience, conceived of as visual models. They are used, for example, in visual strategic planning.

2. 'Controllable' indicators - measures of at least some of the boxes within the model. Controllable indicators have the feature that their mere measurement is proof that they have been caused by the project, organization, or intervention that controls them. This means that they are ideal for use as accountability measures (e.g., Key Performance Indicators, or KPIs).


3. 'Not-necessarily-controllable' indicators - indicators that are influenced by factors in addition to the intervention. These have the feature that their mere measurement does not say anything about what has caused them.

4. Non-impact evaluation - while 2 and 3 above are usually routinely collected information, outcomes systems can also utilize more one-off studies (referred to as types of 'evaluation'). Non-impact evaluation focuses on improving the 'lower-level' steps within the outcomes model; it is often included within aspects of developmental, formative, process, and implementation evaluation.

5. Impact evaluation - evaluation that makes a claim about what has caused high-level outcomes to occur (i.e., whether or not the intervention has improved them).

6. Comparative and economic evaluation - evaluation that compares different interventions, or translates their benefits into dollar terms, so that different interventions focusing on different issues can be compared.

7. Contracting, accountability, and performance management arrangements - the arrangements that are in place (e.g., in the form of a contract between a funder and a provider) as to what information will be collected regarding 1-6, and what parties will be held to account for, rewarded for, and punished for.
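As a rough, hypothetical illustration only (not part of Duignan's framework), the building blocks can be sketched as a simple data structure; every class and field name below is an invented assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One box in the visual outcomes model (building block 1)."""
    name: str
    is_high_level: bool = False  # does it sit at the top of the model?
    controllable_indicators: list[str] = field(default_factory=list)    # block 2
    uncontrollable_indicators: list[str] = field(default_factory=list)  # block 3

@dataclass
class OutcomesSystem:
    """Minimal container for the seven building blocks."""
    model: list[Step]                    # 1. the visual outcomes model
    non_impact_evaluations: list[str]    # 4. one-off 'lower-level' studies
    impact_evaluations: list[str]        # 5. causal claims about high-level outcomes
    comparative_evaluations: list[str]   # 6. comparative / economic studies
    accountability_arrangements: str = ""  # 7. contracting & performance management
```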

Example of a Principle

An example of a principle within outcomes theory is the principle that impact evaluation is the only option for high-level outcome attribution if no controllable indicators sit at the top of the outcomes model. That is, in an instance where building-block two (controllable indicators) does not connect to the high-level outcomes in building-block one (the outcomes model), building-block five (impact evaluation) offers the only way to obtain more information about whether changes in high-level indicators can be attributed to an intervention.
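Continuing the hypothetical sketch above, the principle might be expressed as a simple check (again, an invented illustration, not an official formulation of the principle):

```python
def impact_evaluation_required(system: OutcomesSystem) -> bool:
    """Return True when no controllable indicator (block 2) connects to a
    high-level step of the outcomes model (block 1); in that case impact
    evaluation (block 5) is the only way to attribute changes in
    high-level outcomes to the intervention."""
    return not any(
        step.controllable_indicators
        for step in system.model
        if step.is_high_level
    )
```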

Uses of Outcomes Theory

Williams and Hummelbrunner (2009) summarize some of the uses of outcomes theory: "Outcomes theory intends to improve outcomes system architecture, that is, related systems that deal in one way or another with outcomes, by providing a clear common technical language, thus helping to avoid unnecessary duplication and identify gaps to be filled. Outcomes theory also specifies the structural features and the key principles of well-constructed outcomes systems. ... This helps people without significant background in outcomes thinking to construct sound and sustainable outcomes systems."

Practical Application of Outcomes Theory

Duignan's Outcomes-Focused Visual Strategic Planning is an applied implementation of outcomes theory. It is based on building a visual strategic plan and then using it for prioritization, performance management, and assessing organizational impact.





Evaluation

Evaluation is a systematic determination of a subject's merit, worth, and significance, using criteria governed by a set of standards. It can assist an organization, program, project, or any other intervention or initiative to assess any aim, realizable concept/proposal, or any alternative, to help in decision-making; or to ascertain the degree of achievement or value in regard to the aim, objectives, and results of any such action that has been completed. The primary purpose of evaluation, in addition to gaining insight into prior or existing initiatives, is to enable reflection and assist in the identification of future change.

Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises, including the arts, criminal justice, foundations, non-profit organizations, government, health care, and other human services.

Definition

Evaluation is the structured interpretation and giving of meaning to predicted or actual impacts of proposals or results. It looks at original objectives, and at what is either predicted or what was accomplished and how it was accomplished. So evaluation can be formative, that is, taking place during the development of a concept or proposal, project, or organization, with the intention of improving the value or effectiveness of the proposal, project, or organization. It can also be summative, drawing lessons from a completed action or project, or from an organization at a later point in time or circumstance.

Evaluation is inherently a theoretically informed approach (whether explicitly or not), and consequently any particular definition of evaluation would have to be tailored to its context – the theory, needs, purpose, and methodology of the evaluation process itself. Having said this, evaluation has been defined as:

• A systematic, rigorous, and meticulous application of scientific methods to assess the design, implementation, improvement, or outcomes of a program. It is a resource-intensive process, frequently requiring resources such as evaluator expertise, labor, time, and a sizable budget.

• "The critical assessment, in as objective a manner as possible, of the degree to which a service or its component parts fulfills stated goals" (St Leger and Wordsworth-Bell). The focus of this definition is on attaining objective knowledge, and scientifically or quantitatively measuring predetermined and external concepts.

• "A study designed to assist some audience to assess an object's merit and worth" (Stufflebeam). In this definition the focus is on facts as well as value-laden judgments of the program's outcomes and worth.

Purpose

The main purpose of a program evaluation can be to "determine the quality of a program by formulating a judgment" (Marthe Hurteau, Sylvain Houle, and Stéphanie Mongiat, 2009). An alternative view is that "projects, evaluators, and other stakeholders (including funders) will all have potentially different ideas about how best to evaluate a project since each may have a different definition of 'merit'. The core of the problem is thus about defining what is of value." From this perspective, evaluation "is a contested term", as "evaluators" use the term evaluation to describe an assessment or investigation of a program, whilst others simply understand evaluation as being synonymous with applied research.

Two functions are commonly distinguished according to the evaluation's purpose:

• Formative evaluations provide information for improving a product or a process.
• Summative evaluations provide information on short-term effectiveness or long-term impact, for deciding whether to adopt a product or process.

Not all evaluations serve the same purpose; some evaluations serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of types of evaluations would be difficult to compile. This is because evaluation is not part of a unified theoretical framework, drawing instead on a number of disciplines, which include management and organisational theory, policy analysis, education, sociology, social anthropology, and social change.

Discussion

The strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but this adherence can prevent evaluators from developing new strategies for dealing with the myriad problems that programs face.

It is claimed that only a minority of evaluation reports are used by the evaluand (client) (Datta, 2006). One justification of this is that "when evaluation findings are challenged or utilization has failed, it was because stakeholders and clients found the inferences weak or the warrants unconvincing" (Fournier and Smith, 1993). Some reasons for this situation may be the failure of the evaluator to establish a set of shared aims with the evaluand, or creating overly ambitious aims, as well as failing to compromise and incorporate the cultural differences of individuals and programs within the evaluation aims and process. None of these problems are due to a lack of a definition of evaluation; rather, they arise from evaluators attempting to impose predisposed notions and definitions of evaluations on clients. The central reason for the poor utilization of evaluations is arguably the failure to tailor evaluations to suit the needs of the client, due to a predefined idea (or definition) of what an evaluation is, rather than what the client needs (House, 1980). The development of a standard methodology for evaluation will require arriving at applicable ways of asking and stating the results of questions about ethics, such as agent-principal, privacy, stakeholder definition, limited liability, and could-the-money-be-spent-more-wisely issues.

Standards

Depending on the topic of interest, there are professional groups that review the quality and rigor of evaluation processes. Evaluating programs and projects, regarding their value and impact within the context they are implemented, can be ethically challenging. Evaluators may encounter complex, culturally specific systems resistant to external evaluation. Furthermore, the project organization or other stakeholders may be invested in a particular evaluation outcome. Finally, evaluators themselves may encounter conflict-of-interest (COI) issues, or experience interference or pressure to present findings that support a particular assessment.

General professional codes of conduct, as determined by the employing organization, usually cover three broad aspects of behavioral standards: inter-collegial relations (such as respect for diversity and privacy), operational issues (due competence, documentation accuracy, and appropriate use of resources), and conflicts of interest (nepotism, accepting gifts, and other kinds of favoritism). However, specific guidelines particular to the evaluator's role, which can be utilized in the management of unique ethical challenges, are required.

The Joint Committee on Standards for Educational Evaluation has developed standards for program, personnel, and student evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare.


The American Evaluation Association has created a set of Guiding Principles for evaluators. The order of these principles does not imply priority among them; priority will vary by situation and evaluator role. The principles run as follows:

• Systematic Inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated. This requires quality data collection, including a defensible choice of indicators, which lends credibility to findings. Findings are credible when they are demonstrably evidence-based, reliable, and valid. This also pertains to the choice of methodology employed, such that it is consistent with the aims of the evaluation and provides dependable data. Furthermore, utility of findings is critical, such that the information obtained by evaluation is comprehensive and timely, and thus serves to provide maximal benefit and use to stakeholders.

• Competence: Evaluators provide competent performance to stakeholders. This requires that evaluation teams comprise an appropriate combination of competencies, such that varied and appropriate expertise is available for the evaluation process, and that evaluators work within their scope of capability.

• Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process. A key element of this principle is freedom from bias in evaluation, and this is underscored by three principles: impartiality, independence, and transparency.

Independence is attained through ensuring that independence of judgment is upheld, such that evaluation conclusions are not influenced or pressured by another party, and through avoidance of conflict of interest, such that the evaluator does not have a stake in a particular conclusion. Conflict of interest is at issue particularly where funding of evaluations is provided by particular bodies with a stake in the conclusions of the evaluation, and this is seen as potentially compromising the independence of the evaluator. Whilst it is acknowledged that evaluators may be familiar with agencies or projects that they are required to evaluate, independence requires that they not have been involved in the planning or implementation of the project. A declaration of interest should be made where any benefits or association with the project are stated. Independence of judgment is required to be maintained against any pressures brought to bear on evaluators, for example, by project funders wishing to modify evaluations such that the project appears more effective than findings can verify.

Impartiality pertains to findings being a fair and thorough assessment of strengths and weaknesses of a project or program. This requires taking due input from all stakeholders involved, and presenting findings without bias and with a transparent, proportionate, and persuasive link between findings and recommendations. Thus evaluators are required to delimit their findings to evidence. A mechanism to ensure impartiality is external and internal review. Such review is required of significant (determined in terms of cost or sensitivity) evaluations. The review is based on quality of work and the degree to which a demonstrable link is provided between findings and recommendations.

Transparency requires that stakeholders are aware of the reason for the evaluation, the criteria by which evaluation occurs, and the purposes to which the findings will be applied. Access to the evaluation document should be facilitated through findings being easily readable, with clear explanations of evaluation methodologies, approaches, sources of information, and costs incurred.

• Respect for People: Evaluators respect the security, dignity, and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact. This is particularly pertinent with regard to those who will be impacted by the evaluation findings. Protection of people includes ensuring informed consent from those involved in the evaluation, upholding confidentiality, and ensuring that the identity of those who may provide sensitive information towards the program evaluation is protected. Evaluators are ethically required to respect the customs and beliefs of those who are impacted by the evaluation or program activities. Examples of how such respect is demonstrated include respecting local customs (e.g., dress codes), respecting people's privacy, and minimizing demands on others' time. Where stakeholders wish to place objections to evaluation findings, such a process should be facilitated through the local office of the evaluation organization, and procedures for lodging complaints or queries should be accessible and clear.

• Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare. Access to evaluation documents by the wider public should be facilitated such that discussion and feedback are enabled.

Furthermore, international organizations such as the IMF and the World Bank have independent evaluation functions. The various funds, programmes, and agencies of the United Nations have a mix of independent, semi-independent, and self-evaluation functions, which have organized themselves as a system-wide UN Evaluation Group (UNEG) that works together to strengthen the function and to establish UN norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards. The independent evaluation units of the major multinational development banks (MDBs) have also created the Evaluation Cooperation Group to strengthen the use of evaluation for greater MDB effectiveness and accountability, share lessons from MDB evaluations, and promote evaluation harmonization and collaboration.



Perspectives of Evaluation

The word "evaluation" has various connotations for different people, raising issues related to this process that include: what type of evaluation should be conducted; why there should be an evaluation process; and how the evaluation is integrated into a program for the purpose of gaining greater knowledge and awareness. There are also various factors inherent in the evaluation process, for example, critically examining influences within a program that involve the gathering and analyzing of relevant information about a program. Michael Quinn Patton motivated the concept that the evaluation procedure should be directed towards:

• Activities
• Characteristics
• Outcomes
• The making of judgments on a program
• Improving its effectiveness
• Informed programming decisions

Based on another perspective of evaluation, offered by Thomson and Hoffman in 2003, it is possible to encounter a situation in which the evaluation process could not be considered advisable; for instance, in the event of a program being unpredictable or unsound. This would include it lacking a consistent routine, or the concerned parties being unable to reach an agreement regarding the purpose of the program. It would also include a situation where an influencer or manager refuses to incorporate relevant, important, central issues within the evaluation.

Approaches

Evaluation approaches are conceptually distinct ways of thinking about, designing, and conducting evaluation efforts. Many of the evaluation approaches in use today make truly unique contributions to solving important problems, while others refine existing approaches in some way.

Classification of Approaches

Two classifications of evaluation approaches, by House and by Stufflebeam and Webster, can be combined into a manageable number of approaches in terms of their unique and important underlying principles.

House considers all major evaluation approaches to be based on a common ideology entitled liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. He also contends that they are all based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which "the good" is determined by what maximizes a single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is assumed and such interpretations need not be explicitly stated nor justified.

These ethical positions have corresponding epistemologies—philosophies for obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic; in general, it is used to acquire knowledge that can be externally verified (intersubjective agreement) through publicly exposed methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic and is used to acquire new knowledge based on existing personal knowledge, as well as experiences that are (explicit) or are not (tacit) available for public inspection.

House then divides each epistemological approach into two main political perspectives. Firstly, approaches can take an elite perspective, focusing on the interests of managers and professionals; or they can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups, according to their orientation toward the role of values and ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually is and might be—they call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object—they call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of an object—they call this true evaluation.

When the above concepts are considered simultaneously, fifteen evaluation approaches can be identified in terms of epistemology, major perspective (from House), and orientation. Two pseudo-evaluation approaches, politically controlled and public relations studies, are represented. They are based on an objectivist epistemology from an elite perspective. Six quasi-evaluation approaches use an objectivist epistemology. Five of them—experimental research, management information systems, testing programs, objectives-based studies, and content analysis—take an elite perspective. Accountability takes a mass perspective. Seven true evaluation approaches are included. Two approaches, decision-oriented and policy studies, are based on an objectivist epistemology from an elite perspective. Consumer-oriented studies are based on an objectivist epistemology from a mass perspective. Two approaches—accreditation/certification and connoisseur studies—are based on a subjectivist epistemology from an elite perspective. Finally, adversary and client-centered studies are based on a subjectivist epistemology from a mass perspective.

Summary of Approaches

The following table summarizes each approach in terms of four attributes: organizer, purpose, key strengths, and key weaknesses. The organizer represents the main considerations or cues practitioners use to organize a study. The purpose represents the desired outcome for a study at a very general level. Strengths and weaknesses represent other attributes that should be considered when deciding whether to use the approach for a particular study. The narrative following the table highlights differences between approaches grouped together.

Summary of Approaches for Conducting Evaluations

Politically Controlled
Organizer: Threats.
Purpose: Get, keep, or increase influence, power, or money.
Key strengths: Secures evidence advantageous to the client in a conflict.
Key weaknesses: Violates the principle of full & frank disclosure.

Public Relations
Organizer: Propaganda needs.
Purpose: Create a positive public image.
Key strengths: Secures evidence most likely to bolster public support.
Key weaknesses: Violates the principles of balanced reporting, justified conclusions, & objectivity.

Experimental Research
Organizer: Causal relationships.
Purpose: Determine causal relationships between variables.
Key strengths: Strongest paradigm for determining causal relationships.
Key weaknesses: Requires a controlled setting, limits the range of evidence, focuses primarily on results.

Management Information Systems
Organizer: Scientific efficiency.
Purpose: Continuously supply evidence needed to fund, direct, & control programs.
Key strengths: Gives managers detailed evidence about complex programs.
Key weaknesses: Human service variables are rarely amenable to the narrow, quantitative definitions needed.

Testing Programs
Organizer: Individual differences.
Purpose: Compare test scores of individuals & groups to selected norms.
Key strengths: Produces valid & reliable evidence in many performance areas; very familiar to the public.
Key weaknesses: Data usually only on testee performance; overemphasizes test-taking skills; can be a poor sample of what is taught or expected.

Objectives-Based
Organizer: Objectives.
Purpose: Relate outcomes to objectives.
Key strengths: Common-sense appeal; widely used; uses behavioral objectives & testing technologies.
Key weaknesses: Leads to terminal evidence often too narrow to provide a basis for judging the value of a program.

Content Analysis
Organizer: Content of a communication.
Purpose: Describe & draw conclusions about a communication.
Key strengths: Allows for unobtrusive analysis of large volumes of unstructured, symbolic materials.
Key weaknesses: Sample may be unrepresentative yet overwhelming in volume; analysis design often overly simplistic for the question.

Accountability
Organizer: Performance expectations.
Purpose: Provide constituents with an accurate accounting of results.
Key strengths: Popular with constituents; aimed at improving the quality of products and services.
Key weaknesses: Creates unrest between practitioners & consumers; politics often forces premature studies.

Decision-Oriented
Organizer: Decisions.
Purpose: Provide a knowledge & value base for making & defending decisions.
Key strengths: Encourages use of evaluation to plan & implement needed programs; helps justify decisions about plans & actions.
Key weaknesses: Necessary collaboration between evaluator & decision-maker provides opportunity to bias results.

Policy Studies
Organizer: Broad issues.
Purpose: Identify and assess potential costs & benefits of competing policies.
Key strengths: Provide general direction for broadly focused actions.
Key weaknesses: Often corrupted or subverted by politically motivated actions of participants.

Consumer-Oriented
Organizer: Generalized needs & values, effects.
Purpose: Judge the relative merits of alternative goods & services.
Key strengths: Independent appraisal to protect practitioners & consumers from shoddy products & services; high public credibility.
Key weaknesses: Might not help practitioners do a better job; requires credible & competent evaluators.

Accreditation / Certification
Organizer: Standards & guidelines.
Purpose: Determine if institutions, programs, & personnel should be approved to perform specified functions.
Key strengths: Helps the public make informed decisions about the quality of organizations & qualifications of personnel.
Key weaknesses: Standards & guidelines typically emphasize intrinsic criteria to the exclusion of outcome measures.

Connoisseur
Organizer: Critical guideposts.
Purpose: Critically describe, appraise, & illuminate an object.
Key strengths: Exploits highly developed expertise on the subject of interest; can inspire others to more insightful efforts.
Key weaknesses: Dependent on a small number of experts, making evaluation susceptible to subjectivity, bias, and corruption.

Adversary Evaluation
Organizer: "Hot" issues.
Purpose: Present the pros & cons of an issue.
Key strengths: Ensures balanced presentation of represented perspectives.
Key weaknesses: Can discourage cooperation, heighten animosities.

Client-Centered
Organizer: Specific concerns & issues.
Purpose: Foster understanding of activities & how they are valued in a given setting & from a variety of perspectives.
Key strengths: Practitioners are helped to conduct their own evaluation.
Key weaknesses: Low external credibility; susceptible to bias in favor of participants.

Note. Adapted and condensed primarily from House (1978) and Stufflebeam & Webster (1980).

Pseudo-Evaluation

Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective. Although both of these approaches seek to misrepresent value interpretations about an object, they function differently from each other. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder, whereas public relations information creates a positive image of an object regardless of the actual situation. Despite the application of both studies in real scenarios, neither of these approaches is acceptable evaluation practice.

Objectivist, Elite, Quasi-Evaluation

As a group, these five approaches represent a highly respected collection of disciplined inquiry approaches. They are considered quasi-evaluation approaches because particular studies legitimately can focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations. These approaches can produce characterizations without producing appraisals, although specific studies can produce both. Each of these approaches serves its intended purpose well. They are discussed roughly in order of the extent to which they approach the objectivist ideal.



Experimental research is the best approach for determining causal relationships between variables. The potential problem with using this as an evaluation approach is that its highly controlled and stylized methodology may not be sufficiently responsive to the dynamically changing needs of most human service programs.

Management information systems (MISs) can give detailed information about the dynamic operations of complex programs. However, this information is restricted to readily quantifiable data, usually available at regular intervals.

Testing programs are familiar to just about anyone who has attended school, served in the military, or worked for a large company. These programs are good at comparing individuals or groups to selected norms in a number of subject areas or to a set of standards of performance. However, they only focus on testee performance and they might not adequately sample what is taught or expected.

Objectives-based approaches relate outcomes to prespecified objectives, allowing judgments to be made about their level of attainment. Unfortunately, the objectives are often not proven to be important, or they focus on outcomes too narrow to provide the basis for determining the value of an object.

Content analysis is a quasi-evaluation approach because content analysis judgments need not be based on value statements. Instead, they can be based on knowledge. Such content analyses are not evaluations. On the other hand, when content analysis judgments are based on values, such studies are evaluations.

Objectivist, Mass, Quasi-Evaluation

Accountability is popular with constituents because it is intended to provide an accurate accounting of results that can improve the quality of products and services. However, this approach quickly can turn practitioners and consumers into adversaries when implemented in a heavy-handed fashion.

Objectivist, Elite, True Evaluation

Decision-oriented studies are designed to provide a knowledge base for making and defending decisions. This approach usually requires close collaboration between an evaluator and decision-maker, allowing it to be susceptible to corruption and bias.

Policy studies provide general guidance and direction on broad issues by identifying and assessing potential costs and benefits of competing policies. The drawback is that these studies can be corrupted or subverted by the politically motivated actions of the participants.

Objectivist, Mass, True Evaluation

Consumer-oriented studies are used to judge the relative merits of goods and services based on generalized needs and values, along with a comprehensive range of effects. However, this approach does not necessarily help practitioners improve their work, and it requires a very good and credible evaluator to do it well.



Subjectivist, Elite, True Evaluation

Accreditation/certification programs are based on self-study and peer review of organizations, programs, and personnel. They draw on the insights, experience, and expertise of qualified individuals who use established guidelines to determine if the applicant should be approved to perform specified functions. However, unless performance-based standards are used, attributes of applicants and the processes they perform often are overemphasized in relation to measures of outcomes or effects.

Connoisseur studies use the highly refined skills of individuals intimately familiar with the subject of the evaluation to critically characterize and appraise it. This approach can help others see programs in a new light, but it is difficult to find a qualified and unbiased connoisseur.

Subjectivist, Mass, True Evaluation

The adversary approach focuses on drawing out the pros and cons of controversial issues through quasi-legal proceedings. This helps ensure a balanced presentation of different perspectives on the issues, but it is also likely to discourage later cooperation and heighten animosities between contesting parties if "winners" and "losers" emerge.

Client-Centered

Client-centered studies address specific concerns and issues of practitioners and other clients of the study in a particular setting. These studies help people understand the activities and values involved from a variety of perspectives. However, this responsive approach can lead to low external credibility and a favorable bias toward those who participated in the study.

Methods and Techniques

Evaluation is methodologically diverse. Methods may be qualitative or quantitative, and include case studies, survey research, statistical analysis, model building, and many more, such as: Accelerated aging; Action research; Advanced product quality planning; Alternative assessment; Appreciative Inquiry; Assessment; Axiomatic design; Benchmarking; Case study; Change management; Clinical trial; Cohort study; Competitor analysis; Consensus decision-making; Consensus-seeking decision-making; Content analysis; Conversation analysis; Cost-benefit analysis; Data mining; Delphi Technique; Design Focused Evaluation; Discourse analysis; Educational accreditation; Electronic portfolio; Environmental scanning; Ethnography; Experiment; Experimental techniques; Factor analysis; Factorial experiment; Feasibility study; Field experiment; Fixtureless in-circuit test; Focus group; Force field analysis; Game theory; Goal-free evaluation; Grading; Historical method; Inquiry; Interview; Iterative design; Marketing research; Meta-analysis; Metrics; Most significant change technique; Multivariate statistics; Naturalistic observation; Observational techniques; Opinion polling; Organizational learning; Outcome mapping; Outcomes theory; Participant observation; Participatory impact pathways analysis; Policy analysis; Post occupancy evaluation; Process improvement; Project management; Qualitative research; Quality audit; Quality circle; Quality control; Quality management; Quality management system; Quantitative research; Questionnaire; Questionnaire construction; Root cause analysis; Rubrics; Sampling; Self-assessment; Six Sigma; Standardized testing; Statistical process control; Statistical survey; Statistics; Strategic planning; Structured interviewing; Systems theory; Student testing; Theory of change; Total quality management; Triangulation; and the Wizard of Oz experiment.





Program Evaluation

Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies, and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, receiving, or objecting to are producing the intended effect. While program evaluation first focuses on this definition, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful. Evaluators help to answer these questions, but the best way to answer them is for the evaluation to be a joint project between evaluators and stakeholders.

The process of evaluation is considered to be a relatively recent phenomenon. However, planned social evaluation has been documented as dating as far back as 2200 BC. Evaluation became particularly relevant in the U.S. in the 1960s during the period of the Great Society social programs associated with the Kennedy and Johnson administrations. Extraordinary sums were invested in social programs, but the impacts of these investments were largely unknown.

Program evaluations can involve both quantitative and qualitative methods of social research. People who do program evaluation come from many different backgrounds, such as sociology, psychology, economics, social work, and public policy. Some graduate schools also have specific training programs for program evaluation.

Doing an Evaluation

Program evaluation may be conducted at several stages during a program's lifetime. Each of these stages raises different questions to be answered by the evaluator, and correspondingly different evaluation approaches are needed. Rossi, Lipsey and Freeman (2004) suggest the following kinds of assessment, which may be appropriate at these different stages:

• Assessment of the need for the program
• Assessment of program design and logic/theory
• Assessment of how the program is being implemented (i.e., is it being implemented according to plan? Are the program's processes maximizing possible outcomes?)
• Assessment of the program's outcome or impact (i.e., what it has actually achieved)
• Assessment of the program's cost and efficiency



Assessing Needs

A needs assessment examines the population that the program intends to target, to see whether the need as conceptualized in the program actually exists in the population; whether it is, in fact, a problem; and, if so, how it might best be dealt with. This includes identifying and diagnosing the actual problem the program is trying to address, who or what is affected by the problem, how widespread the problem is, and what measurable effects are caused by the problem. For example, for a housing program aimed at mitigating homelessness, a program evaluator may want to find out how many people are homeless in a given geographic area and what their demographics are. Rossi, Lipsey and Freeman (2004) caution against undertaking an intervention without properly assessing the need for one, because this might result in a great deal of wasted funds if the need did not exist or was misconceived.

Needs assessment involves the processes or methods used by evaluators to describe and diagnose social needs. This is essential for evaluators because they need to identify whether programs are effective, and they cannot do this unless they have identified what the problem/need is. Programs that do not do a needs assessment can have the illusion that they have eradicated the problem/need when in fact there was no need in the first place. Needs assessment involves research and regular consultation with community stakeholders, and with the people that will benefit from the project, before the program can be developed and implemented. Hence it should be a bottom-up approach. In this way potential problems can be realized early, because the process would have involved the community in identifying the need and thereby allowed the opportunity to identify potential barriers.

The important tasks of a program evaluator are thus as follows.

First, construct a precise definition of what the problem is. Evaluators need to first identify the problem/need. This is most effectively done by collaboratively including all possible stakeholders: the community impacted by the potential problem, the agents/actors working to address and resolve the problem, funders, and so on. Including buy-in early in the process reduces the potential for push-back, miscommunication, and incomplete information later on.

Second, assess the extent of the problem. Having clearly identified what the problem is, evaluators need to then assess the extent of the problem. They need to answer the 'where' and 'how big' questions: where the problem is located, and how big it is. Pointing out that a problem exists is much easier than having to specify where it is located and how rife it is. Rossi, Lipsey & Freeman (2004) give an example: identifying some battered children may be enough evidence to persuade one that child abuse exists, but indicating how many children it affects, and where it is located geographically and socially, would require knowledge about abused children, the characteristics of perpetrators, and the impact of the problem throughout the political authority in question. This can be difficult considering that child abuse is not a public behavior, and estimates of the rates of private behavior are usually not possible because of factors like unreported cases. In this case evaluators would have to use data from several sources and apply different approaches in order to estimate incidence rates.

There are two more questions that need to be answered: the 'how' and the 'what'. The 'how' question requires that evaluators determine how the need will be addressed. Having identified the need, and having familiarized oneself with the community, evaluators should conduct a performance analysis to identify whether the proposed plan in the program will actually be able to eliminate the need. The 'what' question requires that evaluators conduct a task analysis to find out what the best way to perform would be; for example, whether the job performance standards are set by an organization, or whether some governmental rules need to be considered when undertaking the task.

Third, define and identify the target of interventions and accurately describe the nature of the service needs of that population. It is important to know what or who the target population is: individuals, groups, communities, etc. There are three units of the population:

• Population at risk: people with a significant probability of developing the risk; e.g., the population at risk for birth control programs is women of child-bearing age.
• Population in need: people with the condition that the program seeks to address; e.g., the population in need for a program that aims to provide ARVs to HIV-positive people is people who are HIV positive.
• Population in demand: the part of the population in need that acknowledges the need and is willing to take part in what the program has to offer; e.g., not all HIV-positive people will be willing to take ARVs.

Being able to specify what or who the target is will assist in establishing appropriate boundaries, so that interventions can correctly address the target population and be feasible to apply.

There are four steps in conducting a needs assessment:

1. Perform a 'gap' analysis. Evaluators need to compare the current situation to the desired or necessary situation. The difference, or gap, between the two situations will help identify the need, purpose, and aims of the program.



2. Identify priorities and importance. In the first step above, evaluators would have identified a number of interventions that could potentially address the need, e.g., training and development, organization development, etc. These must now be examined in view of their significance to the program's goals and constraints. This must be done by considering the following factors: cost-effectiveness (consider the budget of the program and assess the cost/benefit ratio), executive pressure (whether top management expects a solution), and population (whether many key people are involved).

3. Identify causes of performance problems and/or opportunities. When the needs have been prioritized, the next step is to identify specific problem areas within the need to be addressed, and also to assess the skills of the people who will be carrying out the interventions.

4. Identify possible solutions and growth opportunities. Compare the consequences of the interventions if they were to be implemented or not.

Needs analysis is hence a very crucial step in evaluating programs, because the effectiveness of a program cannot be assessed unless we know what the problem was in the first place. A minimal sketch of the step 1 'gap' analysis follows.
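As a rough illustration of step 1 only, a 'gap' analysis can be reduced to comparing current and desired indicator values; the metric names and all numbers below are invented for the example.

```python
# Hypothetical gap analysis: compare the current situation to the
# desired situation on a few illustrative community indicators.
current = {"youth_violence_rate": 12.4, "program_enrollment": 340, "graduation_rate": 0.71}
desired = {"youth_violence_rate": 5.0, "program_enrollment": 500, "graduation_rate": 0.90}

# The gap (desired minus current) points to the needs the program must address.
gaps = {metric: desired[metric] - current[metric] for metric in desired}

for metric, gap in gaps.items():
    print(f"{metric}: gap of {gap:+.2f}")
```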

Assessing Program Theory

The program theory, also called a logic model or impact pathway, is an assumption, implicit in the way the program is designed, about how the program's actions are supposed to achieve the outcomes it intends. This 'logic model' is often not stated explicitly by people who run programs; it is simply assumed. An evaluator will therefore need to draw out from the program staff how exactly the program is supposed to achieve its aims, and assess whether this logic is plausible.

For example, in an HIV prevention program, it may be assumed that educating people about HIV/AIDS transmission, risk, and safe sex practices will result in safer sex being practiced. However, research in South Africa increasingly shows that in spite of increased education and knowledge, people still often do not practice safe sex. Therefore, the logic of a program which relies on education as a means to get people to use condoms may be faulty. This is why it is important to read research that has been done in the area. Explicating this logic can also reveal unintended or unforeseen consequences of a program, both positive and negative. The program theory drives the hypotheses to test for impact evaluation.

Developing a logic model can also build common understanding amongst program staff and stakeholders about what the program is actually supposed to do and how it is supposed to do it, which is often lacking (see Participatory impact pathways analysis). Of course, it is also possible that during the process of trying to elicit the logic model behind a program the evaluators may discover that such a model is either incompletely developed, internally contradictory, or (in the worst cases) essentially nonexistent. This decidedly limits the effectiveness of the evaluation, although it does not necessarily reduce or eliminate the program.


Creating a logic model is a valuable way to help visualize important aspects of programs, especially when preparing for an evaluation. An evaluator should create a logic model with input from many different stakeholders. Logic models have five major components: Resources or Inputs, Activities, Outputs, Short-term outcomes, and Long-term outcomes. Creating a logic model helps articulate the problem, the resources and capacity that are currently being used to address the problem, and the measurable outcomes from the program. Looking at the different components of a program in relation to the overall short-term and long-term goals allows for illumination of potential misalignments. Creating an actual logic model is particularly important because it helps clarify for all stakeholders the definition of the problem, the overarching goals, and the capacity and outputs of the program.
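To make the five components concrete, here is a minimal sketch of a logic model as a simple data structure; the example program and all of its entries are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """The five major components of a program logic model."""
    resources: list[str] = field(default_factory=list)   # inputs: funding, staff, facilities
    activities: list[str] = field(default_factory=list)  # what the program does
    outputs: list[str] = field(default_factory=list)     # direct products of activities
    short_term_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

# Hypothetical logic model for a youth mentoring program.
mentoring = LogicModel(
    resources=["grant funding", "trained volunteer mentors"],
    activities=["weekly one-on-one mentoring sessions"],
    outputs=["200 mentoring sessions delivered per year"],
    short_term_outcomes=["improved school attendance"],
    long_term_outcomes=["higher graduation rates"],
)
```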

Rossi, Lipsey & Freeman (2004) suggest four approaches and procedures that can be used to assess the program theory. These approaches are discussed below.

Assessment in Relation to Social Needs

This entails assessing the program theory by relating it to the needs of the target population the program is intended to serve. If the program theory fails to address the needs of the target population, it will be rendered ineffective even if it is well implemented.





Assessment of Logic and Plausibility

This form of assessment involves asking a panel of expert reviewers to critically review the logic and plausibility of the assumptions and expectations inherent in the program's design. The review process is unstructured and open-ended so as to address certain issues on the program design. Rutman (1980), Smith (1989), and Wholey (1994) suggested the questions listed below to assist with the review process:

• Are the program goals and objectives well defined?
• Are the program goals and objectives feasible?
• Is the change process presumed in the program theory feasible?
• Are the procedures for identifying members of the target population, delivering service to them, and sustaining that service through completion well defined and sufficient?
• Are the constituent components, activities, and functions of the program well defined and sufficient?
• Are the resources allocated to the program and its various activities adequate?

Assessment Through Comparison with Research and Practice

This form of assessment requires gaining information from research literature and existing practices to assess various components of the program theory. The evaluator can assess whether the program theory is congruent with research evidence and the practical experiences of programs with similar concepts.

Assessment via Preliminary Observation

This approach involves incorporating firsthand observations into the assessment process, as this provides a reality check on the concordance between the program theory and the program itself. The observations can focus on the attainability of the outcomes, circumstances of the target population, and the plausibility of the program activities and the supporting resources.

These different forms of assessment of program theory can be conducted to ensure that the program theory is sound.

Assessing Implementation

Process analysis looks beyond the theory of what the program is supposed to do and instead evaluates how the program is being implemented. This evaluation determines whether the components identified as critical to the success of the program are being implemented. The evaluation determines whether target populations are being reached, people are receiving the intended services, and staff are adequately qualified. Process evaluation is an ongoing process in which repeated measures may be used to evaluate whether the program is being implemented effectively.

This problem is particularly critical because many innovations, particularly in areas like education and public policy, consist of fairly complex chains of action, many of which rely on the prior correct implementation of other elements and will fail if the prior implementation was not done correctly. This was conclusively demonstrated by Gene V. Glass and many others during the 1980s. Since incorrect or ineffective implementation will produce the same kind of neutral or negative results that would be produced by correct implementation of a poor innovation, it is essential that evaluation research assess the implementation process itself. Otherwise, a good innovative idea may be mistakenly characterized as ineffective, where in fact it simply had never been implemented as designed.

Assessing the Impact (Effectiveness)

The impact evaluation determines the causal effects of the program. This involves trying to measure whether the program has achieved its intended outcomes.

Program Outcomes

An outcome is the state of the target population or the social conditions that a program is expected to have changed. Program outcomes are the observed characteristics of the target population or social conditions, not of the program. Thus the concept of an outcome does not necessarily mean that the program targets have actually changed or that the program has caused them to change in any way.



There are two kinds of outcomes, namely outcome level and outcome change, along with the associated concept of program effect:

 Outcome level refers to the status of an outcome at some point in time.
 Outcome change refers to the difference between outcome levels at different points in time.
 Program effect refers to that portion of an outcome change that can be attributed uniquely to a program, as opposed to the influence of some other factor.

Measuring Program Outcomes

Outcome measurement is a matter of representing the circumstances defined as the outcome by means of observable indicators that vary systematically with changes or differences in those circumstances. It is a systematic way to assess the extent to which a program has achieved its intended outcomes. According to Mouton (2009), measuring the impact of a program means demonstrating or estimating the accumulated differentiated proximate and emergent effects, some of which might be unintended and therefore unforeseen. Outcome measurement serves to help you understand whether the program is effective, and it further helps you clarify your understanding of your program. But the most important reason for undertaking the effort is to understand the impact of your work on the people you serve. With the information you collect, you can determine which activities to continue and build upon, and which you need to change in order to improve the effectiveness of the program. This can involve using sophisticated statistical techniques to measure the effect of the program and to identify causal relationships between the program and the various outcomes. More information about impact evaluation is found under the heading 'Determining Causation'.
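The distinction between outcome level, outcome change, and program effect can be illustrated with a small calculation. The following Python sketch is purely illustrative: all numbers, group names, and the difference-in-differences estimator are assumptions on our part, not a method prescribed by the text.

```python
# Minimal sketch: outcome level, outcome change, and program effect.
# All numbers are hypothetical; a real impact assessment would add
# statistical inference and a defensible comparison group.

participants_before = [52.0, 48.0, 55.0, 50.0]  # outcome levels at baseline
participants_after = [61.0, 58.0, 66.0, 60.0]   # outcome levels at follow-up
comparison_before = [51.0, 49.0, 54.0, 50.0]
comparison_after = [54.0, 52.0, 57.0, 53.0]

def mean(values):
    return sum(values) / len(values)

# Outcome level: the status of the outcome at a point in time.
level_before = mean(participants_before)
level_after = mean(participants_after)

# Outcome change: the difference between outcome levels over time.
change_participants = level_after - level_before
change_comparison = mean(comparison_after) - mean(comparison_before)

# Program effect: the portion of the outcome change attributable to the
# program, estimated here as a simple difference-in-differences.
program_effect = change_participants - change_comparison

print(f"Outcome level before/after: {level_before:.1f} -> {level_after:.1f}")
print(f"Outcome change (participants): {change_participants:+.1f}")
print(f"Estimated program effect: {program_effect:+.1f}")
```

The sketch only shows how the three concepts relate arithmetically; the statistical techniques mentioned above would be layered on top of such comparisons.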

Assessing Efficiency

Finally, cost-benefit or cost-effectiveness analysis assesses the efficiency of a program. Evaluators outline the benefits and costs of the program for comparison. An efficient program has a lower ratio of costs to benefits.
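As a minimal illustration of this comparison (the programs, costs, and benefit units below are invented), a cost-to-benefit ratio can be computed for each program, and the lower ratio identifies the more efficient one:

```python
# Hypothetical comparison of two programs by cost-to-benefit ratio:
# the lower the cost per unit of benefit, the more efficient the program.

programs = {
    "Program A": {"cost": 120_000.0, "benefit_units": 300},
    "Program B": {"cost": 90_000.0, "benefit_units": 200},
}

for name, figures in programs.items():
    ratio = figures["cost"] / figures["benefit_units"]  # cost per unit of benefit
    print(f"{name}: {ratio:,.0f} per unit of benefit")
# Program A: 400 per unit; Program B: 450 per unit -> A is more efficient.
```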

Determining Causation

Perhaps the most difficult part of evaluation is determining whether the program itself is causing the changes that are observed in the population it was aimed at. Events or processes outside of the program may be the real cause of the observed outcome (or the real prevention of the anticipated outcome). Causation is difficult to determine, and one main reason for this is self-selection bias: people select themselves to participate in a program. For example, in a job training program, some people decide to participate and others do not. Those who do participate may differ from those who do not in important ways. They may be more determined to find a job or have better support resources. These characteristics may actually be causing the observed outcome of increased employment, not the job training program. Evaluations conducted with random assignment are able to make stronger inferences about causation. Randomly assigning people to participate or not participate in the program reduces or eliminates self-selection bias, so the group of people who participate is likely to be comparable to the group who did not.

However, since most programs cannot use random assignment, causation cannot be established definitively. Impact analysis can still provide useful information. For example, the outcomes of the program can be described; the evaluation can show that people who participated in the program were more likely to experience a given outcome than people who did not participate. If the program is fairly large and there are enough data, statistical analysis can be used to make a reasonable case for the program by showing, for example, that other causes are unlikely.
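The effect of self-selection bias on a naive comparison can be demonstrated with a simulation. The Python sketch below is entirely hypothetical: the outcome formula, the +5 "true effect", and the motivation variable are invented for illustration. The self-selected comparison overstates the program effect, while the randomized comparison recovers it:

```python
# Illustrative simulation: why self-selection bias distorts a naive
# participant-vs-non-participant comparison, and how random assignment
# avoids the problem. All quantities are invented.
import random

random.seed(0)

def outcome(motivation, treated):
    # Employment score rises with motivation; the program adds a true
    # effect of +5 points regardless of motivation.
    return 50 + 30 * motivation + (5 if treated else 0) + random.gauss(0, 2)

people = [random.random() for _ in range(10_000)]  # latent motivation

# Self-selection: more motivated people opt in to the program.
self_selected = [(m, m > 0.5) for m in people]
# Random assignment: participation is a coin flip.
randomized = [(m, random.random() < 0.5) for m in people]

def naive_effect(assignments):
    treated = [outcome(m, True) for m, t in assignments if t]
    control = [outcome(m, False) for m, t in assignments if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print("True program effect:      +5.0")
print(f"Self-selected comparison: {naive_effect(self_selected):+.1f}")  # inflated, ~ +20
print(f"Randomized comparison:    {naive_effect(randomized):+.1f}")    # ~ +5
```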

Reliability, Validity and Sensitivity in Program Evaluation

It is important to ensure that the instruments (for example, tests, questionnaires, etc.) used in program evaluation are as reliable, valid and sensitive as possible. According to Rossi et al. (2004, p. 222), 'a measure that is poorly chosen or poorly conceived can completely undermine the worth of an impact assessment by producing misleading estimates. Only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible'.



Reliability

The reliability of a measurement instrument is the 'extent to which the measure produces the same results when used repeatedly to measure the same thing' (Rossi et al., 2004, p. 218). The more reliable a measure is, the greater its statistical power and the more credible its findings. If a measuring instrument is unreliable, it may dilute and obscure the real effects of a program, and the program will 'appear to be less effective than it actually is' (Rossi et al., 2004, p. 219). Hence, it is important to ensure the evaluation is as reliable as possible.
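The text does not prescribe a particular reliability statistic. One common choice (an assumption here, not drawn from the source) is test-retest reliability, estimated as the correlation between two administrations of the same instrument. A minimal Python sketch with invented scores:

```python
# Test-retest reliability estimated as the Pearson correlation between
# two administrations of the same instrument (hypothetical scores).
from statistics import mean, stdev

first_administration = [12, 15, 11, 18, 14, 16, 13, 17]
second_administration = [13, 14, 10, 19, 15, 16, 12, 18]

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(first_administration, second_administration)
print(f"Test-retest reliability: r = {r:.2f}")  # closer to 1.0 = more reliable
```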

Validity

The validity of a measurement instrument is 'the extent to which it measures what it is intended to measure' (Rossi et al., 2004, p. 219). This concept can be difficult to measure accurately: in general use in evaluations, an instrument may be deemed valid if it is accepted as valid by the stakeholders (who may include, for example, funders and program administrators).

Sensitivity

The principal purpose of the evaluation process is to measure whether the program has an effect on the social problem it seeks to redress; hence, the measurement instrument must be sensitive enough to discern these potential changes (Rossi et al., 2004). A measurement instrument may be insensitive if it contains items measuring outcomes which the program couldn't possibly affect, or if the instrument was originally developed for application to individuals (for example, standardized psychological measures) rather than to a group setting (Rossi et al., 2004). These factors may result in 'noise' which can obscure any effect the program may have had.

Only measures which adequately achieve the benchmarks of reliability, validity and sensitivity can be said to be credible evaluations. It is the duty of evaluators to produce credible evaluations, as their findings may have far-reaching effects. A non-credible evaluation that fails to show that a program is achieving its purpose, when the program is in fact creating positive change, may cause the program to lose its funding undeservedly.

Steps to Program Evaluation Framework

According to the Centers for Disease Control and Prevention (CDC), there are six steps to a complete program evaluation: engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, and ensure use and share lessons learned. These steps can happen in a cyclical framework to represent the continuing process of evaluation.



Evaluating Collective Impact

Though the program evaluation processes mentioned here are appropriate for most programs, highly complex non-linear initiatives, such as those using the collective impact (CI) model, require a dynamic approach to evaluation. Collective impact is "the commitment of a group of important actors from different sectors to a common agenda for solving a specific social problem" and typically involves three stages, each with a different recommended evaluation approach:

Early Phase: CI participants are exploring possible strategies and developing plans for action. Characterized by uncertainty.

Recommended evaluation approach: Developmental evaluation to help CI partners understand the context of the initiative and its development: "Developmental evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change." 

Middle Phase: CI partners implement agreed upon strategies. Some outcomes become easier to anticipate.

Recommended evaluation approach: Formative evaluation to refine and improve upon the progress, as well as continued developmental evaluation to explore new elements as they emerge. Formative evaluation involves "careful monitoring of processes in order to respond to emergent properties and any unexpected outcomes."

Later Phase: Activities achieve stability and are no longer in formation. Experience informs knowledge about which activities may be effective.

Recommended evaluation approach: Summative evaluation “uses both quantitative and qualitative methods in order to get a better understanding of what [the] project has achieved, and how or why this has occurred.”

Planning a Program Evaluation

Planning a program evaluation can be broken up into four parts: focusing the evaluation, collecting the information, using the information, and managing the evaluation (see http://learningstore.uwex.edu/assets/pdfs/g3658-1.pdf). Program evaluation involves reflecting on questions about the evaluation's purpose, what questions are necessary to ask, and what will be done with the information gathered. Critical questions for consideration include:

 What am I going to evaluate?
 What is the purpose of this evaluation?
 Who will use this evaluation? How will they use it?
 What questions is this evaluation seeking to answer?
 What information do I need to answer the questions?
 When is the evaluation needed?
 What resources do I need?
 How will I collect the data I need?
 How will data be analyzed?
 What is my implementation timeline?

Methodological Constraints and Challenges

The Shoestring Approach

The "shoestring evaluation approach" is designed to assist evaluators operating under limited budget, limited access or availability of data, and limited turnaround time to conduct effective evaluations that are methodologically rigorous (Bamberger, Rugh, Church & Fort, 2004). This approach responds to the continued need for evaluation processes that are more rapid and economical under difficult circumstances of budget, time constraints, and limited availability of data. However, it is not always possible to design an evaluation to achieve the highest standards available. Many programs do not build an evaluation procedure into their design or budget. Hence, many evaluation processes do not begin until the program is already underway, which can result in time, budget, or data constraints for the evaluators, which in turn can affect the reliability, validity, or sensitivity of the evaluation. The shoestring approach helps to ensure that the maximum possible methodological rigor is achieved under these constraints.

Budget Constraints

Frequently, programs are faced with budget constraints because most original projects do not include a budget to conduct an evaluation (Bamberger et al., 2004). As a result, evaluations are often allocated budgets that are inadequate for a rigorous evaluation. Due to the budget constraints, it might be difficult to apply the most appropriate methodological instruments effectively, and these constraints may consequently reduce the time available in which to do the evaluation (Bamberger et al., 2004). Budget constraints may be addressed by simplifying the evaluation design, revising the sample size, exploring economical data collection methods (such as using volunteers to collect data, shortening surveys, or using focus groups and key informants), or looking for reliable secondary data (Bamberger et al., 2004).

Time Constraints

The most common time constraints arise when an evaluator is summoned to conduct an evaluation after a project is already underway, is given limited time to do the evaluation relative to the life of the study, or is not given enough time for adequate planning. Time constraints are particularly problematic when the evaluator is not familiar with the area or country in which the program is situated (Bamberger et al., 2004). Time constraints can be addressed by the methods listed under budget constraints above, and also by careful planning to ensure effective data collection and analysis within the limited time available.

Data Constraints

If the evaluation is initiated late in the program, there may be no baseline data on the conditions of the target group before the intervention began (Bamberger et al., 2004). Another possible cause of data constraints is that the data have been collected by program staff and contain systematic reporting biases or poor record-keeping standards, making them of little use (Bamberger et al., 2004). A further source of data constraints arises when the target group is difficult to reach for data collection, for example homeless people, drug addicts, or migrant workers (Bamberger et al., 2004). Data constraints can be addressed by reconstructing baseline data from secondary data or through the use of multiple methods.

Multiple methods, such as the combination of qualitative and quantitative data, can increase validity through triangulation and save time and money. These constraints may also be dealt with through careful planning and consultation with program stakeholders: by clearly identifying and understanding client needs ahead of the evaluation, the costs and time of the evaluative process can be streamlined and reduced while still maintaining credibility. All in all, time, monetary and data constraints can have negative implications for the validity, reliability and transferability of the evaluation. The shoestring approach was created to assist evaluators to correct these limitations by identifying ways to reduce costs and time, reconstruct baseline data, and ensure maximum quality under existing constraints (Bamberger et al., 2004).

Five-Tiered Approach

The five-tiered approach to evaluation further develops the strategies that the shoestring approach is based upon. It was originally developed by Jacobs (1988) as an alternative way to evaluate community-based programs and was first applied to a statewide child and family program in Massachusetts, U.S.A. The five-tiered approach is offered as a conceptual framework for matching evaluations more precisely to the characteristics of the programs themselves, and to the particular resources and constraints inherent in each evaluation context. In other words, it seeks to tailor the evaluation to the specific needs of each evaluation context. The earlier tiers (1-3) generate descriptive and process-oriented information, while the later tiers (4-5) determine both the short-term and the long-term effects of the program. The five tiers are organized as follows:

 Tier 1: Needs assessment (sometimes referred to as pre-implementation)
 Tier 2: Monitoring and accountability
 Tier 3: Quality review and program clarification (sometimes referred to as understanding and refining)
 Tier 4: Achieving outcomes
 Tier 5: Establishing impact

For each tier, purpose(s) are identified, along with corresponding tasks that enable the identified purpose of the tier to be achieved. For example, the purpose of the first tier, needs assessment, would be to document a need for a program in a community; the task for that tier would be to assess the community's needs and assets by working with all relevant stakeholders. While the tiers are structured for consecutive use, meaning that information gathered in the earlier tiers is required for tasks on higher tiers, the approach acknowledges the fluid nature of evaluation. It is therefore possible to move from later tiers back to preceding ones, or even to work in two tiers at the same time. It is important for program evaluators to note, however, that a program must be evaluated at the appropriate level.


The five-tiered approach is said to be useful for family support programs which emphasise community and participant empowerment. This is because it encourages a participatory approach involving all stakeholders and it is through this process of reflection that empowerment is achieved.

Methodological Challenges Presented by Language and Culture

The purpose of this section is to draw attention to some of the methodological challenges and dilemmas evaluators potentially face when conducting a program evaluation in a developing country. In many developing countries the major sponsors of evaluation are donor agencies from the developed world, and these agencies require regular evaluation reports in order to maintain accountability and control of resources, as well as to generate evidence for the program's success or failure. However, there are many hurdles and challenges evaluators face when attempting to implement an evaluation program that makes use of techniques and systems not developed within the context to which they are applied. Some of the issues include differences in culture, attitudes, language and political process.

Culture is defined by Ebbutt (1998, p. 416) as a "constellation of both written and unwritten expectations, values, norms, rules, laws, artifacts, rituals and behaviors that permeate a society and influence how people behave socially". Culture can influence many facets of the evaluation process, including data collection, evaluation program implementation, and the analysis and understanding of the results of the evaluation. In particular, instruments which are traditionally used to collect data, such as questionnaires and semi-structured interviews, need to be sensitive to differences in culture if they were originally developed in a different cultural context. The understanding and meaning of the constructs the evaluator is attempting to measure may not be shared between the evaluator and the sample population, so the transference of concepts is an important notion; it influences the quality of the data collection carried out by evaluators as well as the analysis and results generated from the data.

Language also plays an important part in the evaluation process, as language is tied closely to culture. Language can be a major barrier to communicating the concepts the evaluator is trying to access, and translation is often required. There are a multitude of problems with translation, including the loss of meaning as well as the exaggeration or enhancement of meaning by translators. For example, terms which are contextually specific may not translate into another language with the same weight or meaning. In particular, data collection instruments need to take meaning into account, as subject matter that may not be considered sensitive in one context might prove to be sensitive in the context in which the evaluation is taking place. Thus, evaluators need to take into account two important concepts when administering data collection tools: lexical equivalence and conceptual equivalence. Lexical equivalence asks: how does one phrase a question in two languages using the same words? This is a difficult task to accomplish, and techniques such as back-translation may aid the evaluator but may not result in perfect transference of meaning. This leads to the next point, conceptual equivalence: it is not a common occurrence for concepts to transfer unambiguously from one culture to another. Data collection instruments which have not undergone adequate testing and piloting may therefore render results which are not useful, as the concepts measured by the instrument may have taken on a different meaning, rendering the instrument unreliable and invalid. Thus, evaluators need to take into account the methodological challenges created by differences in culture and language when attempting to conduct a program evaluation in a developing country.

Utilization of Results

There are three conventional uses of evaluation results: persuasive utilization, direct (instrumental) utilization, and conceptual utilization.

Persuasive Utilization

Persuasive utilization is the enlistment of evaluation results in an effort to persuade an audience to either support an agenda or to oppose it. Unless the 'persuader' is the same person who ran the evaluation, this form of utilization is not of much interest to evaluators, as they often cannot foresee possible future efforts of persuasion.

Direct (Instrumental) Utilization

Evaluators often tailor their evaluations to produce results that can have a direct influence on the improvement of the structure, or of the process, of a program. For example, the evaluation of a novel educational intervention may produce results indicating no improvement in students' marks. This may be because the intervention lacks a sound theoretical background, or because it is not conducted as originally intended. The results of the evaluation would hopefully prompt the creators of the intervention to go back to the drawing board to re-create the core structure of the intervention, or even to change the implementation processes.



Conceptual Utilization

Even if evaluation results do not have a direct influence in the re-shaping of a program, they may still be used to make people aware of the issues the program is trying to address. Going back to the example of an evaluation of a novel educational intervention, the results can also be used to inform educators and students about the different barriers that may influence students' learning difficulties. A number of studies on these barriers may then be initiated by this new information.

Variables Affecting Utilization

There are five conditions that seem to affect the utility of evaluation results: relevance, communication between the evaluators and the users of the results, information processing by the users, the plausibility of the results, and the level of involvement or advocacy of the users.

Guidelines for Maximizing Utilization

Quoted directly from Rossi et al. (2004, p. 416):

 Evaluators must understand the cognitive styles of decision-makers
 Evaluation results must be timely and available when needed
 Evaluations must respect stakeholders' program commitments
 Utilization and dissemination plans should be part of the evaluation design
 Evaluations should include an assessment of utilization

Internal Versus External Program Evaluators

The choice of evaluator may be regarded as equally important as the process of the evaluation. Evaluators may be internal (persons associated with the program to be executed) or external (persons not associated with any part of the execution or implementation of the program) (Division for Oversight Services, 2004). The following provides a brief summary of the advantages and disadvantages of internal and external evaluators, adapted from the Division for Oversight Services (2004); for a more comprehensive list, see that source.

Internal Evaluators

Advantages:

 May have better overall knowledge of the program and possess informal knowledge of the program
 Less threatening, as they are already familiar with staff
 Less costly

Disadvantages:

 May be less objective
 May be more preoccupied with other activities of the program and not give the evaluation complete attention
 May not be adequately trained as an evaluator

External Evaluators

Advantages:

 More objective about the process; offer new perspectives and different angles from which to observe and critique the process
 May be able to dedicate a greater amount of time and attention to the evaluation
 May have greater expertise and evaluation experience

Disadvantages:

 May be more costly and require more time for the contract, monitoring, negotiations, etc.
 May be unfamiliar with program staff and create anxiety about being evaluated
 May be unfamiliar with organization policies and certain constraints affecting the program

Three Paradigms

Positivist

Potter (2006) identifies and describes three broad paradigms within program evaluation. The first, and probably most common, is the positivist approach, in which evaluation can only occur where there are "objective", observable and measurable aspects of a program, requiring predominantly quantitative evidence. The positivist approach includes evaluation dimensions such as needs assessment, assessment of program theory, assessment of program process, impact assessment and efficiency assessment (Rossi, Lipsey and Freeman, 2004). A detailed example of the positivist approach is a Public Policy Institute of California report titled "Evaluating Academic Programs in California's Community Colleges", in which the evaluators examine measurable activities (i.e. enrollment data) and conduct quantitative assessments such as factor analysis.

Interpretive

The second paradigm identified by Potter (2006) is that of interpretive approaches, where it is argued to be essential that the evaluator develop an understanding of the perspective, experiences and expectations of all stakeholders. This leads to a better understanding of the various meanings and needs held by stakeholders, which is crucial before one can make judgments about the merit or value of a program. The evaluator's contact with the program is often over an extended period of time and, although there is no standardized method, observation, interviews and focus groups are commonly used. A report commissioned by the World Bank details eight approaches in which qualitative and quantitative methods can be integrated, perhaps yielding insights not achievable through any one method alone.

Critical-Emancipatory

Potter (2006) also identifies critical-emancipatory approaches to program evaluation, which are largely based on action research for the purposes of social transformation. This type of approach is much more ideological and often includes a greater degree of social activism on the part of the evaluator. The approach is appropriate for qualitative and participative evaluations. Because of its critical focus on societal power structures and its emphasis on participation and empowerment, Potter argues this type of evaluation can be particularly useful in developing countries.

Whichever paradigm is used in a program evaluation, whether positivist, interpretive or critical-emancipatory, it is essential to acknowledge that evaluation takes place in specific socio-political contexts. Evaluation does not exist in a vacuum, and all evaluations, whether their authors are aware of it or not, are influenced by socio-political factors. It is important to recognize that evaluations, and the findings which result from the evaluation process, can be used in favor of or against particular ideological, social and political agendas (Weiss, 1999). This is especially true in an age when resources are limited and there is competition between organizations for certain projects to be prioritized over others (Louw, 1999).

Empowerment Evaluation

Empowerment evaluation makes use of evaluation concepts, techniques, and findings to foster improvement and self-determination of a particular program aimed at a specific target population or set of program participants. Empowerment evaluation is value-oriented towards getting program participants involved in bringing about change in the programs they are targeted for. One of the main focuses in empowerment evaluation is to incorporate the program participants in the conducting of the evaluation process. This process is then often followed by some sort of critical reflection on the program. In such cases, an external evaluator serves as a consultant, coach, or facilitator to the program participants and seeks to understand the program from the perspective of the participants. Once a clear understanding of the participants' perspective has been gained, appropriate steps and strategies can be devised (with the valuable input of the participants) and implemented in order to reach desired outcomes. According to Fetterman (2002), empowerment evaluation has three steps:

 Establishing a mission
 Taking stock
 Planning for the future

Establishing a Mission

The first step involves evaluators asking the program participants and staff members (of the program) to define the mission of the program. Evaluators may opt to carry this step out by bringing such parties together and asking them to generate and discuss the mission of the program. The logic behind this approach is to show each party that there may be divergent views of what the program mission actually is.



Taking Stock

Taking stock, the second step, consists of two important tasks. The first task is concerned with program participants and program staff generating a list of current key activities that are crucial to the functioning of the program. The second task is concerned with rating the identified key activities, also known as prioritization. For example, each party member may be asked to rate each key activity on a scale from 1 to 10, where 10 is the most important and 1 the least important. The role of the evaluator during this task is to facilitate interactive discussion amongst members in an attempt to establish some baseline of shared meaning and understanding pertaining to the key activities. In addition, relevant documentation (such as financial reports and curriculum information) may be brought into the discussion when considering some of the key activities.
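A minimal sketch of the rating task just described (the activity names and scores below are invented for illustration), averaging each activity's 1-to-10 ratings to produce a prioritized list for discussion:

```python
# Hypothetical "taking stock" ratings: each participant rates each key
# activity from 1 (least important) to 10 (most important); averaging
# the ratings yields a simple prioritization to discuss as a group.

ratings = {
    "Outreach visits": [9, 8, 10, 7],   # one rating per participant
    "Weekly workshops": [6, 7, 5, 8],
    "Progress reports": [4, 5, 3, 6],
}

prioritized = sorted(
    ((sum(scores) / len(scores), activity) for activity, scores in ratings.items()),
    reverse=True,
)

for average, activity in prioritized:
    print(f"{activity}: average rating {average:.1f}")
```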

Planning for the Future

After prioritizing the key activities, the next step is to plan for the future. Here the evaluator asks program participants and program staff how they would like to improve the program in relation to the key activities listed. The objective is to create a thread of coherence whereby the mission generated (step 1) guides the stock-take (step 2), which in turn forms the basis for the plans for the future (step 3). Thus, in planning for the future, specific goals are aligned with relevant key activities. In addition, it is important for program participants and program staff to identify possible forms of evidence (measurable indicators) which can be used to monitor progress towards specific goals. Goals must be related to the program's activities, talents, resources and scope of capability; in short, the goals formulated must be realistic.

These three steps of empowerment evaluation produce the potential for a program to run more effectively and more in touch with the needs of the target population. Empowerment evaluation, as a process facilitated by a skilled evaluator, equips as well as empowers participants by providing them with a 'new' way of critically thinking about and reflecting on programs. Furthermore, it empowers program participants and staff to recognize their own capacity to bring about program change through collective action.

Transformative Paradigm

The transformative paradigm is integral in incorporating social justice in evaluation. Donna Mertens, a primary researcher in this field, states that the transformative paradigm "focuses primarily on viewpoints of marginalized groups and interrogating systemic power structures through mixed methods to further social justice and human rights". The transformative paradigm arose after marginalized groups, who have historically been pushed to the side in evaluation, began to collaborate with scholars to advocate for social justice and human rights in evaluation. The transformative paradigm introduces many different paradigms and lenses to the evaluation process, leading it to continually call the evaluation process into question.

Both the American Evaluation Association and the National Association of Social Workers call attention to the ethical duty to possess cultural competence when conducting evaluations. Cultural competence in evaluation can be broadly defined as a systematic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluation endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder-generated, interpretive means to arrive at the results and further use of the findings. Many health and evaluation leaders are careful to point out that cultural competence cannot be determined by a simple checklist; rather, it is an attribute that develops over time. The root of cultural competence in evaluation is a genuine respect for the communities being studied and openness to seek depth in understanding different cultural contexts, practices and paradigms of thinking. This includes being creative and flexible in capturing different cultural contexts, and a heightened awareness of the power differentials that exist in an evaluation context. Important skills include the ability to build rapport across difference, to gain the trust of community members, and to self-reflect and recognize one's own biases.

Paradigms

The paradigms of axiology, ontology, epistemology, and methodology are reflective of social justice practice in evaluation. These examples focus on addressing inequalities and injustices in society by promoting inclusion and equality in human rights.

Axiology (Values and Value Judgments)

The transformative paradigm's axiological assumption rests on four primary principles:

 The importance of being culturally respectful
 The promotion of social justice
 The furtherance of human rights
 Addressing inequities

Ontology (Reality)

Differences in perspectives on what is real are determined by diverse values and life experiences. In turn, these values and life experiences are often associated with differences in access to privilege, based on such characteristics as disability, gender, sexual identity, religion, race/ethnicity, national origin, political party, income level, age, language, and immigration or refugee status.

Epistemology (Knowledge)

Knowledge is constructed within the context of power and privilege, with consequences attached to which version of knowledge is given privilege. "Knowledge is socially and historically located within a complex cultural context".

Methodology (Systematic Inquiry)

Methodological decisions are aimed at determining the approach that will best facilitate use of the process and findings to enhance social justice; identify the systemic forces that support the status quo and those that will allow change to happen; and acknowledge the need for a critical and reflexive relationship between the evaluator and the stakeholders.

Lenses

When operating through a social justice orientation, it is imperative to be able to view the world through the lens of those who experience injustices. Critical Race Theory, Feminist Theory, and Queer/LGBTQ Theory are frameworks for how we should think about providing justice for marginalized groups. These lenses create the opportunity to make each theory a priority in addressing inequality.

Critical Race Theory

Critical Race Theory (CRT) is an extension of critical theory that is focused on inequities based on race and ethnicity. Daniel Solorzano describes the role of CRT as providing a framework to investigate and make visible those systemic aspects of society that allow the discriminatory and oppressive status quo of racism to continue.

Feminist Theory

The essence of feminist theories is to "expose the individual and institutional practices that have denied access to women and other oppressed groups and have ignored or devalued women".

Queer/LGBTQ Theory

Queer/LGBTQ theorists question the heterosexist bias that pervades society in terms of power over, and discrimination toward, sexual-orientation minorities. Because of the sensitivity of issues surrounding LGBTQ status, evaluators need to be aware of safe ways to protect such individuals' identities and to ensure that discriminatory practices are brought to light in order to bring about a more just society.

Government Requirements

Given the Federal budget deficit, the Obama Administration moved to apply an "evidence-based approach" to government spending, including rigorous methods of program evaluation. The President's 2011 Budget earmarked funding for 19 government program evaluations for agencies such as the Department of Education and the United States Agency for International Development (USAID). An inter-agency group works toward the goal of increasing transparency and accountability by creating effective evaluation networks and drawing on best practices. A six-step framework for conducting evaluation of public health programs, published by the Centers for Disease Control and Prevention (CDC), initially increased the emphasis on program evaluation of government programs in the US. The framework is as follows:

1. Engage stakeholders
2. Describe the program
3. Focus the evaluation
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned



CIPP Model of Evaluation

History of the CIPP Model

The CIPP model of evaluation was developed by Daniel Stufflebeam and colleagues in the 1960s. CIPP is an acronym for Context, Input, Process and Product. CIPP is an evaluation model that requires the evaluation of context, input, process and product in judging a programme's value. It is a decision-focused approach to evaluation and emphasises the systematic provision of information for programme management and operation.

CIPP Model

The CIPP framework was developed as a means of linking evaluation with programme decision-making. It aims to provide an analytic and rational basis for programme decision-making, based on a cycle of planning, structuring, implementing, reviewing and revising decisions, each examined through a different aspect of evaluation: context, input, process and product evaluation. The CIPP model is an attempt to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a programme. Stufflebeam's context, input, process, and product (CIPP) evaluation model is recommended as a framework to systematically guide the conception, design, implementation, and assessment of service-learning projects, and to provide feedback and judgment of the project's effectiveness for continuous improvement.

Four Aspects of CIPP Evaluation

These aspects are context, inputs, process, and product. The four aspects of CIPP evaluation assist a decision-maker to answer four basic questions:

What should we do?

This involves collecting and analyzing needs assessment data to determine goals, priorities and objectives. For example, a context evaluation of a literacy program might involve an analysis of the existing objectives of the literacy program, literacy achievement test scores, staff concerns (general and particular), literacy policies and plans, and community concerns, perceptions or attitudes and needs.

How should we do it?

This involves the steps and resources needed to meet the new goals and objectives and might include identifying successful external programs and materials as well as gathering information.



Are we doing it as planned?

This provides decision-makers with information about how well the programme is being implemented. By continuously monitoring the program, decision-makers learn such things as how well it is following the plans and guidelines, what conflicts are arising, the level of staff support and morale, the strengths and weaknesses of materials, and delivery and budgeting problems.

Did the program work?

By measuring the actual outcomes and comparing them to the anticipated outcomes, decision-makers are better able to decide if the program should be continued, modified, or dropped altogether. This is the essence of product evaluation.

Using CIPP in the Different Stages of the Evaluation

The CIPP model is unique as an evaluation guide in that it allows evaluators to evaluate the program at different stages: before the program commences, by helping evaluators to assess the need, and at the end of the program, by assessing whether or not the program had an effect. The CIPP model allows you to ask formative questions at the beginning of the program, and later gives you a guide for evaluating the program's impact by allowing you to ask summative questions on all aspects of the program:

 Context: What needs to be done? vs. Were important needs addressed?
 Input: How should it be done? vs. Was a defensible design employed?
 Process: Is it being done? vs. Was the design well executed?
 Product: Is it succeeding? vs. Did the effort succeed?
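As an illustrative aid only (the structure around the questions is invented, not part of the CIPP literature), the paired formative and summative questions above can be encoded as a simple checklist generator:

```python
# The four CIPP aspects paired with their formative (planning) and
# summative (judging) questions, as listed above.
CIPP = {
    "Context": ("What needs to be done?", "Were important needs addressed?"),
    "Input": ("How should it be done?", "Was a defensible design employed?"),
    "Process": ("Is it being done?", "Was the design well executed?"),
    "Product": ("Is it succeeding?", "Did the effort succeed?"),
}

def checklist(stage):
    """Return the question set for a given stage of the evaluation."""
    index = 0 if stage == "formative" else 1
    return {aspect: questions[index] for aspect, questions in CIPP.items()}

for aspect, question in checklist("formative").items():
    print(f"{aspect}: {question}")
```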



Monitoring

Monitoring and Evaluation

Monitoring and Evaluation (M&E) is a process that helps improve performance and achieve results. Its goal is to improve current and future management of outputs, outcomes and impact. It is mainly used to assess the performance of projects, institutions and programs set up by governments, international organizations and NGOs, and it establishes links between past, present and future actions.

Monitoring and evaluation processes can be managed by the donors financing the assessed activities, by an independent branch of the implementing organization, by the project managers or implementing team themselves, or by a private company. The credibility and objectivity of monitoring and evaluation reports depend very much on the independence of the evaluator or evaluating team in charge; their expertise and independence are of major importance for the process to be successful. Many international organizations, such as the United Nations, the World Bank Group and the Organization of American States, have been utilizing this process for many years. The process is also growing in popularity in developing countries, where governments have created their own national M&E systems to assess development projects, resource management and government activities or administration. Developed countries use this process to assess their own development and cooperation agencies.



Evaluation

M&E is, as its name indicates, separated into two distinct categories: evaluation and monitoring. An evaluation is a systematic and objective examination concerning the relevance, effectiveness, efficiency and impact of activities in the light of specified objectives. The idea in evaluating projects is to isolate errors so as not to repeat them, and to underline and promote the successful mechanisms for current and future projects. An important goal of evaluation is to provide recommendations and lessons to the project managers and implementation teams that have worked on the projects, and to those who will implement and work on similar projects. Evaluations are also, indirectly, a means of reporting to the donor about the activities implemented: a means of verifying that the donated funds are being well managed and transparently spent. Evaluators are expected to check and analyse the budget lines and to report their findings.

Monitoring

Monitoring is a continuous assessment that aims at providing all stakeholders with early, detailed information on the progress or delay of the ongoing assessed activities. It is an oversight of the activity's implementation stage. Its purpose is to determine whether the outputs, deliveries and schedules planned have been reached, so that action can be taken to correct deficiencies as quickly as possible.

Differences between Monitoring and Evaluation

The common ground between monitoring and evaluation is that they are both management tools. In monitoring, data and information for tracking progress against the terms of reference are collected periodically, whereas in an evaluation the data and information are collected during, or in view of, the evaluation itself. Monitoring is a short-term assessment that does not take outcomes and impact into consideration, unlike evaluation, which also assesses outcomes and sometimes longer-term impact. Such impact assessment sometimes occurs after the end of a project, though this is rare because of its cost and of the difficulty of determining whether the project is responsible for the observed results.

Importance of Monitoring and Evaluation

Although evaluations are often retrospective, their purpose is essentially forward-looking. Evaluation applies the lessons and recommendations to decisions about current and future programs. Evaluations can also be used to promote new projects, get support from governments, raise funds from public or private institutions, and inform the general public about the different activities. The Paris Declaration on Aid Effectiveness in February 2005 and the follow-up meeting in Accra underlined the importance of the evaluation process and of ownership of its conduct by the projects' hosting countries. Many developing countries now have M&E systems, and the tendency is growing.

Performance Measurement

The credibility of findings and assessments depends to a large extent on the manner in which monitoring and evaluation are conducted. To assess performance, it is necessary to select, before the implementation of the project, indicators that make it possible to rate the targeted outputs and outcomes. According to the United Nations Development Programme (UNDP), an outcome indicator has two components: the baseline, which is the situation before the programme or project begins, and the target, which is the expected situation at the end of the project. An output indicator does not have a baseline, as the purpose of an output is to introduce something that does not yet exist.
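As a minimal sketch of the UNDP-style outcome indicator described above (the indicator name, field names, and figures are invented for illustration), progress can be computed as the share of the baseline-to-target distance covered so far:

```python
# An outcome indicator has a baseline (situation before the project)
# and a target (expected situation at the end). Progress is the share
# of the baseline-to-target distance covered by the current value.
from dataclasses import dataclass

@dataclass
class OutcomeIndicator:
    name: str
    baseline: float  # value before the programme begins
    target: float    # expected value at the end of the project
    current: float   # latest measured value

    def progress(self) -> float:
        return (self.current - self.baseline) / (self.target - self.baseline)

literacy = OutcomeIndicator("Adult literacy rate (%)", baseline=62.0,
                            target=75.0, current=68.5)
print(f"{literacy.name}: {literacy.progress():.0%} of the way to target")
# -> Adult literacy rate (%): 50% of the way to target
```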

In the United Nations

The most important agencies of the United Nations have a monitoring and evaluation unit. All these agencies are supposed to follow the common standards of the United Nations Evaluation Group (UNEG). These norms concern the institutional framework and management of the evaluation function, competencies and ethics, and the way evaluations are conducted and reports presented (design, process, team selection, implementation, reporting and follow-up). The group also provides guidelines and relevant documentation to all evaluation organs, whether part of the United Nations or not. Most agencies implementing projects and programs, even while following the common UNEG standards, have their own handbooks and guidelines on how to conduct M&E. Indeed, the UN agencies have different specializations and have different needs and ways of approaching M&E. The M&E branches of every UN agency are monitored and rated by the Joint Inspection Unit of the United Nations.



The Logical Framework Approach

The Logical Framework Approach (LFA) is a management tool mainly used for designing, monitoring, and evaluating international development projects. Variations of this tool are known as Goal Oriented Project Planning (GOPP) or Objectives Oriented Project Planning (OOPP). The Logical Framework Approach was developed in 1969 for the U.S. Agency for International Development (USAID). It is based on a worldwide study by Leon J. Rosenberg, a principal of Fry Consultants Inc. From 1970 to 1971, 30 countries adopted the method under the guidance of Practical Concepts Incorporated, founded by Rosenberg. It has been widely used by multilateral donor organizations such as AECID, GIZ, SIDA, NORAD, DFID, SDC, UNDP, the EC and the Inter-American Development Bank. Some non-governmental organizations offer LFA training to ground-level field staff, and the approach has also gained popularity in the private sector. Terry Schmidt has been active in extending the LFA.

The Logical Framework Approach continues to gain adherents, though it is a management tool invented more than 40 years ago; this longevity has been the subject of several doctoral theses. In the 1990s, it was often mandatory for aid organizations to use the LFA method in their project proposals, though its use has become increasingly optional in recent years. The Logical Framework Approach is sometimes confused with the Logical Framework (LF or logframe): the Logical Framework Approach is a project design methodology, whereas the Logical Framework is a document.

Description

The Logical Framework takes the form of a four-by-four project table. The four rows describe four different types of events that take place as a project is implemented: Activities, Outputs, Purpose and Goal (from bottom to top on the left-hand side). The four columns provide different types of information about the events in each row. The first column is used to provide a narrative description of the event. The second column lists one or more Objectively Verifiable Indicators (OVIs) of these events taking place. The third column describes the Means of Verification (MoV), where information will be available on the OVIs, and the fourth column lists the Assumptions. Assumptions are external factors that could have an influence, whether positive or negative, on the events described in the narrative column. The list of assumptions should include the factors that have a potential impact on the success of the project but cannot be directly controlled by the project or program managers. In some cases these may include what could be killer assumptions, which, if proved wrong, will have major negative consequences for the project. A good project design should be able to substantiate its assumptions, especially those with a high potential to have a negative impact.

Temporal Logic Model

The core of the Logical Framework is the "temporal logic model" that runs through the matrix. This takes the form of a series of connected propositions:

 If these Activities are implemented, and these Assumptions hold, then these Outputs will be delivered.
 If these Outputs are delivered, and these Assumptions hold, then this Purpose will be achieved.
 If this Purpose is achieved, and these Assumptions hold, then this Goal will be achieved.
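To make the chain of propositions concrete, here is a small illustrative Python sketch (the structure and the failing assumption are invented, not drawn from the LFA literature) that walks the Activities, Outputs, Purpose, Goal chain and stops at the first step whose assumptions fail:

```python
# Walk the temporal logic chain: each step succeeds only if the previous
# step succeeded and the assumptions attached to that step hold.
steps = [
    # (premise, conclusion, do the assumptions for this step hold?)
    ("Activities implemented", "Outputs delivered", True),
    ("Outputs delivered", "Purpose achieved", False),  # a "killer assumption" failed
    ("Purpose achieved", "Goal achieved", True),
]

reached = True  # assume the Activities were carried out
for premise, conclusion, assumptions_hold in steps:
    reached = reached and assumptions_hold
    status = "yes" if reached else "no (chain broken)"
    print(f"{premise} -> {conclusion}: {status}")
```

Once the Purpose-level assumptions fail, the Goal is unreachable regardless of the later assumptions, which is exactly the hierarchy-of-hypotheses reading described below.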

These propositions are viewed as a hierarchy of hypotheses, with the project or program manager sharing responsibility with higher management for the validity of hypotheses beyond the output level. In this way, Rosenberg brought the essence of the scientific method to non-scientific endeavors. The Assumptions column is important in clarifying the extent to which the project or program objectives depend on external factors, and it greatly clarifies "force majeure", of particular interest when the Canadian International Development Agency (CIDA) at least briefly used the LFA as the essence of contracts.

The LFA is also used in other contexts, both personal and corporate. When developed within an organization, it can articulate a common interpretation of the objectives of a project and how they will be achieved. The indicators and means of verification force clarifications, as in a scientific endeavor: "you haven't defined it until you say how you will measure it." Tracking progress against carefully defined output indicators provides a clear basis for monitoring progress; verifying purpose-level and goal-level progress then simplifies evaluation. Given a well-constructed logical framework, an informed skeptic and a project advocate should be able to agree on exactly what the project attempts to accomplish and how likely it is to succeed, in terms of programmatic (goal-level) as well as project (purpose-level) objectives.

One of the LFA's purposes in its early uses was to identify the span of control of project management. In some countries with less than perfect governance and managerial systems, it became an excuse for failure: externally sourced technical assistance managers were able to say that all foreseen activities had been implemented and all required outputs produced, but that, because of the sub-optimal systems in the country, which were beyond the control of the project's management, the purpose(s) had not been achieved and so the goal had not been attained.



Logic Models

A Logic Model (also known as a logical framework, theory of change, or program matrix) is a tool used by funders, managers, and evaluators of programs to evaluate the effectiveness of a program. Logic models are usually a graphical depiction of the logical relationships between the resources, activities, outputs and outcomes of a program. While there are many ways in which logic models can be presented, the underlying purpose of constructing a logic model is to assess the "if-then" (causal) relationships between the elements of the program: if the resources are available for a program, then the activities can be implemented; if the activities are implemented successfully, then certain outputs and outcomes can be expected. Logic models are most often used in the evaluation stage of a program, though they can also be used during planning and implementation.

Versions

In its simplest form, a logic model has four components:

 Inputs: what resources go into a program (e.g. money, staff, equipment)
 Activities: what activities the program undertakes (e.g. development of materials, training programs)
 Outputs: what is produced through those activities (e.g. number of booklets produced, workshops held, people trained)
 Outcomes/impacts: the changes or benefits that result from the program (e.g. increased skills, knowledge or confidence, leading in the longer term to promotion, a new job, etc.)
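As an illustration of the "if-then" reading of these components (a sketch with invented example content, not a standard notation), a simple logic model can be represented as an ordered chain:

```python
# A minimal logic model as an ordered chain of components; the ordering
# encodes the if-then reading: inputs -> activities -> outputs -> outcomes.
logic_model = [
    ("Inputs", ["funding", "two trainers", "training venue"]),
    ("Activities", ["develop materials", "run weekly workshops"]),
    ("Outputs", ["500 booklets produced", "12 workshops held",
                 "90 people trained"]),
    ("Outcomes/impacts", ["increased skills and confidence",
                          "longer term: promotion or a new job"]),
]

print(" -> ".join(stage for stage, _ in logic_model))
for stage, examples in logic_model:
    print(f"{stage}:")
    for example in examples:
        print(f"  - {example}")
```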

Following the early development of the logic model in the 1970s by Carol Weiss, Joseph Wholey and others, many refinements and variations have been added to the basic concept. Many versions of logic models set out a series of outcomes/impacts, explaining in more detail the logic of how an intervention contributes to intended or observed results. This often includes distinguishing between short-term, medium-term and long-term results, and between direct and indirect results. Some logic models also include assumptions: beliefs the prospective grantees have about the program, the people involved, and the context, and the way the prospective grantees think the program will work. Some also include external factors: the environment in which the program exists, including a variety of outside influences that interact with and affect the program's action. University Cooperative Extension Programs in the US have developed a more elaborate logic model, called the Program Action Logic Model, which includes six steps:



 Inputs (what we invest)
 Outputs:
  o Activities (the actual tasks we do)
  o Participation (who we serve; customers & stakeholders)
  o Engagement (how those we serve engage with the activities)
 Outcomes/Impacts:
  o Short Term (learning: awareness, knowledge, skills, motivations)
  o Medium Term (action: behavior, practice, decisions, policies)
  o Long Term (consequences: social, economic, environmental, etc.)

In front of Inputs, there is a description of a Situation and Priorities. These are the considerations that determine what Inputs will be needed. The University of Wisconsin Extension offers a series of guidance documents on the use of logic models. There is also an extensive bibliography of work on this program logic model.

Advantages

By describing work in this way, managers have an easier way to define the work and measure it. Performance measures can be drawn from any of the steps. One of the key insights of the logic model is the importance of measuring final outcomes or results, because it is quite possible to waste time and money (inputs), "spin the wheels" on work activities, or produce outputs without achieving desired outcomes. These outcomes (impacts, long-term results) are the only justification for doing the work in the first place. For commercial organizations, outcomes relate to profit; for not-for-profit or governmental organizations, outcomes relate to successful achievement of mission or program goals.

Uses of the Logic Model

Program Planning

One of the most important uses of the logic model is for program planning. Here it helps managers to 'plan with the end in mind' (Stephen Covey), rather than just consider inputs (e.g. budgets, employees) or just the tasks that must be done. In the past, program logic has been justified by explaining the process from the perspective of an insider. Paul McCawley (no date) outlines how this process was approached:

1. We invest this time/money so that we can generate this activity/product.
2. The activity/product is needed so people will learn how to do this.
3. People need to learn that so they can apply their knowledge to this practice.
4. When that practice is applied, the effect will be to change this condition.
5. When that condition changes, we will no longer be in this situation.

While logic models have been used in this way successfully, Millar et al. (1999) suggested that following the above sequence, from inputs through to outcomes, could limit one's thinking to existing activities, programs, and research questions. Instead, by using the logic model to focus on the intended outcomes of a particular program, the questions change from "What is being done?" to "What needs to be done?" McCawley (no date) suggests that with this new reasoning, a logic model for a program can be built by asking the following questions in sequence:

1. What is the current situation that we intend to impact?
2. What will it look like when we achieve the desired situation or outcome?
3. What behaviors need to change for that outcome to be achieved?
4. What knowledge or skills do people need before the behavior will change?
5. What activities need to be performed to cause the necessary learning?
6. What resources will be required to achieve the desired outcome?

By placing the focus on ultimate outcomes or results, planners can think backwards through the logic model to identify how best to achieve the desired results. Planners therefore need to understand the difference between the categories of the logic model.
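This backward, outcome-first sequence can also be expressed as a simple procedure. The sketch below is a hypothetical illustration: the question wording paraphrases McCawley's sequence above, while the function and data are our own invention.

    # Backward ("outcome-first") planning: start from the desired result and
    # work back to the resources required. Prompts paraphrase McCawley's sequence.
    BACKWARD_QUESTIONS = [
        ("desired_outcome", "What will it look like when we achieve the desired situation?"),
        ("behaviors",       "What behaviors need to change for that outcome to be achieved?"),
        ("learning",        "What knowledge or skills do people need before behavior will change?"),
        ("activities",      "What activities need to be performed to cause that learning?"),
        ("inputs",          "What resources will be required to achieve the desired outcome?"),
    ]

    def plan_backwards(answers: dict) -> list:
        """Assemble logic-model categories in planning order (outcomes first)."""
        return [(category, answers.get(category, "TBD")) for category, _ in BACKWARD_QUESTIONS]

    for category, answer in plan_backwards({
        "desired_outcome": "participants hold stable jobs",
        "behaviors": "participants apply for and accept job offers",
    }):
        print(f"{category}: {answer}")

Unanswered categories print as "TBD," which mirrors how planners often leave later rows of the model open until the earlier questions are settled.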

Performance Evaluation

The logic model is often used in government and not-for-profit organizations, where the mission and vision are not aimed at achieving a financial benefit. In such situations, where profit is not the intended result, it may be difficult to monitor progress toward outcomes. A program logic model provides such indicators, in terms of output and outcome measures of performance. It is therefore important in these organizations to carefully specify the desired results and consider how to monitor them over time. Often, as in education or social programs, the outcomes are long-term and mission success is far in the future. In these cases, intermediate or shorter-term outcomes may be identified that provide an indication of progress toward the ultimate long-term outcome.

Traditionally, government programs were described only in terms of their budgets. It is easy to measure the amount of money spent on a program, but this is a poor indicator of mission success. Likewise, it is relatively easy to measure the amount of work done (e.g., number of workers or number of years spent), but the workers may have just been "spinning their wheels" without getting very far in terms of ultimate results or outcomes. The production of outputs is a better indicator that something was delivered to customers, but it is still possible that the output did not really meet the customer's needs, was not used, and so on. Therefore, the focus on results or outcomes has become a mantra in government and not-for-profit programs. The President's Management Agenda is an example of the increasing emphasis on results in government management. It states: "Government likes to begin things — to declare grand new programs and causes. But good beginnings are not the measure of success. What matters in the end is completion. Performance. Results."

However, although outcomes are used as the primary indicators of program success or failure, they are still insufficient on their own. Outcomes may be achieved through processes independent of the program, and an evaluation of those outcomes would then suggest program success when in fact external factors were responsible (Rossi, Lipsey and Freeman, 2004). In this respect, Rossi, Lipsey and Freeman (2004) suggest that a typical evaluation study should concern itself with measuring how the process indicators (inputs and outputs) have had an effect on the outcome indicators. A program logic model would need to be assessed or designed in order for an evaluation of these standards to be possible. The logic model can, and indeed should, be used in both formative evaluations (during implementation, offering the chance to improve the program) and summative evaluations (after the completion of the program).
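The difference between process indicators and outcome indicators can be made concrete with a small example. The sketch below is a hypothetical illustration with invented figures; it shows why counting outputs alone would overstate success:

    # Hypothetical monitor: outputs show work was delivered, but only outcome
    # indicators speak to mission success, as the text argues. Figures invented.
    indicators = {
        "inputs":   {"budget_spent_usd": 250_000},
        "outputs":  {"workshops_held": 40, "people_trained": 600},
        "outcomes": {"trainees_employed_after_6_months": 210},
    }

    def outcome_rate(trained: int, employed: int) -> float:
        """Share of trainees who reached the intended result (an outcome measure)."""
        return employed / trained if trained else 0.0

    rate = outcome_rate(indicators["outputs"]["people_trained"],
                        indicators["outcomes"]["trainees_employed_after_6_months"])
    print(f"Outcome rate: {rate:.0%}")  # 35% -- a figure the output counts alone would hide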

The Logic Model and Other Management Frameworks

Numerous other popular management frameworks have been developed in recent decades. This often causes confusion, because the various frameworks have different functions, and it is important to select the right tool for the job. The following list of popular management tools indicates where each is most appropriate (the list is by no means complete).

Organizational Assessment Tools

Fact-gathering tools for a comprehensive view of the as-is situation in an organization, but without prescribing how to change it:

 Baldrige Criteria for Performance Excellence (United States)
 EFQM (Europe)
 SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats)
 Skills audits
 Customer surveys

Strategic Planning Tools

For identifying and prioritizing major long-term desired results in an organization, and strategies to achieve those results:

 Strategic Vision (writing a clear "picture of the future" statement)
 Strategy maps
 Portfolio Management (managing a portfolio of interdependent projects)
 Participatory Impact Pathways Analysis (an approach for project staff and stakeholders to jointly agree on a vision, develop a logic model, and create an evaluation plan)
 Weaver's Triangle (simply asks organisations to identify inputs, outcomes and outputs)

Program Planning and Evaluation Tools

For developing details of individual programs (what to do and what to measure) once overall strategies have been defined:

 Program logic model (described above)
 Work Breakdown Structure
 Managing for Results model
 Earned Value Management
 PART - Program Assessment Rating Tool (US federal government)

Performance Measurement Tools

For measuring, monitoring, and reporting the quality, efficiency, speed, cost, and other aspects of projects, programs, and/or processes:

 Balanced scorecard systems
 KPIs - key performance indicators
 Critical success factors

Process Improvement Tools

For monitoring and improving the quality or efficiency of work processes:

 PDCA - Plan-Do-Check-Act (Deming)
 TQM - Total Quality Management (Shewhart, Deming, Juran); a set of TQM tools is available
 Six Sigma
 BPR - Business Process Reengineering
 Organizational Design

Process Standardization Tools

For maintaining and documenting processes or resources to keep them repeatable and stable:

 ISO 9000
 CMMI - Capability Maturity Model Integration
 Business Process Management (BPM)
 Configuration management
 Enterprise Architecture



Strategic Planning

Strategic planning is an organization's process of defining its strategy, or direction, and making decisions on allocating its resources to pursue this strategy. It may also extend to control mechanisms for guiding the implementation of the strategy. Strategic planning became prominent in corporations during the 1960s and remains an important aspect of strategic management. It is executed by strategic planners or strategists, who involve many parties and research sources in their analysis of the organization and its relationship to the environment in which it competes.

Strategy has many definitions, but generally involves setting goals, determining actions to achieve the goals, and mobilizing resources to execute the actions. A strategy describes how the ends (goals) will be achieved by the means (resources). The senior leadership of an organization is generally tasked with determining strategy. Strategy can be planned (intended) or can be observed as a pattern of activity (emergent) as the organization adapts to its environment or competes.

Strategy includes processes of formulation and implementation; strategic planning helps coordinate both. However, strategic planning is analytical in nature (i.e., it involves "finding the dots"); strategy formation itself involves synthesis (i.e., "connecting the dots") via strategic thinking. As such, strategic planning occurs around the strategy formation activity.

Process

[Figure: strategic management processes and activities]

Overview

Strategic planning is a process and thus has inputs, activities, and outputs. It may be formal or informal and is typically iterative, with feedback loops throughout the process. Some elements of the process may be continuous, and others may be executed as discrete projects with a definitive start and end during a period. Strategic planning provides inputs for strategic thinking, which guides the actual strategy formation. The end result is the organization's strategy, including a diagnosis of the environment and competitive situation, a guiding policy on what the organization intends to accomplish, and key initiatives or action plans for achieving the guiding policy.

Michael Porter wrote in 1980 that formulation of competitive strategy includes consideration of four key elements:

1. Company strengths and weaknesses;
2. Personal values of the key implementers (i.e., management and the board);
3. Industry opportunities and threats; and
4. Broader societal expectations.

The first two elements relate to factors internal to the company (i.e., the internal environment), while the latter two relate to factors external to the company (i.e., the external environment).[3] These elements are considered throughout the strategic planning process.

Inputs

Data is gathered from a variety of sources, such as interviews with key executives, review of publicly available documents on the competition or market, primary research (e.g., visiting or observing competitor places of business or comparing prices), industry studies, etc. This may be part of a competitive intelligence program. Inputs are gathered to help support an understanding of the competitive environment and its opportunities and risks. Other inputs include an understanding of the values of key stakeholders, such as the board, shareholders, and senior management. These values may be captured in an organization's vision and mission statements.

Activities

"The essence of formulating competitive strategy is relating a company to its environment." – Michael Porter

Strategic planning activities include meetings and other communication among the organization's leaders and personnel to develop a common understanding regarding the competitive environment and what the organization's response to that environment (its strategy) should be. A variety of strategic planning tools (described in the section below) may be completed as part of strategic planning activities. The organization's leaders may have a series of questions they want answered in formulating the strategy and gathering inputs, such as:

 What is the organization's business or interest?
 What is considered "value" to the customer or constituency?
 Which products and services should be included or excluded from the portfolio of offerings?
 What is the geographic scope of the organization?
 What differentiates the organization from its competitors in the eyes of customers and other stakeholders?
 Which skills and resources should be developed within the organization?

Outputs

The output of strategic planning includes documentation and communication describing the organization's strategy and how it should be implemented, sometimes referred to as the strategic plan. The strategy may include a diagnosis of the competitive situation, a guiding policy for achieving the organization's goals, and specific action plans to be implemented. A strategic plan may cover multiple years and be updated periodically. The organization may use a variety of methods of measuring and monitoring progress towards the objectives and measures established, such as a balanced scorecard or strategy map. Companies may also plan their financial statements (i.e., balance sheets, income statements, and cash flows) for several years when developing their strategic plan, as part of the goal-setting activity. The term budget is often used to describe the expected financial performance of an organization for the upcoming year.

Tools and Approaches

A variety of analytical tools and techniques are used in strategic planning. These were developed by companies and management consulting firms to help provide a framework for strategic planning. Such tools include:

 PEST analysis, which covers the remote external environment elements such as political, economic, social, and technological (PESTLE adds legal/regulatory and ecological/environmental);
 Scenario planning, which was originally used in the military and more recently by large corporations to analyze future scenarios;
 Porter five forces analysis, which addresses industry attractiveness and rivalry through the bargaining power of buyers and suppliers and the threat of substitute products and new market entrants;
 SWOT analysis, which addresses internal strengths and weaknesses relative to the external opportunities and threats;
 Growth-share matrix, which involves portfolio decisions about which businesses to retain or divest;
 Balanced scorecards and strategy maps, which create a systematic framework for measuring and controlling strategy; and
 The Nine Steps to Success(TM), the Balanced Scorecard Institute's framework for strategic planning and management.
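As a small illustration of how one of these tools organizes its inputs, a SWOT analysis can be captured as four labeled lists whose categories align with Porter's internal/external split. The sketch below is a hypothetical illustration; SWOT prescribes no data format, and the entries are invented:

    # SWOT as a data structure: strengths/weaknesses are internal factors,
    # opportunities/threats are external. All entries are invented examples.
    swot = {
        "strengths":     ["experienced staff"],        # internal, helpful
        "weaknesses":    ["limited funding"],          # internal, harmful
        "opportunities": ["new state grant program"],  # external, helpful
        "threats":       ["competing providers"],      # external, harmful
    }

    internal = swot["strengths"] + swot["weaknesses"]
    external = swot["opportunities"] + swot["threats"]
    print("Internal factors:", internal)
    print("External factors:", external)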

Strategic Planning vs. Financial Planning

Simply extending financial statement projections into the future, without consideration of the competitive environment, is a form of financial planning or budgeting, not strategic planning. In business, the term "financial plan" is often used to describe the expected financial performance of an organization for future periods. The term "budget" is used for a financial plan covering the upcoming year. A "forecast" is typically a combination of actual performance year-to-date plus expected performance for the remainder of the year, and so is generally compared against the plan or budget and prior performance. The financial plans accompanying a strategic plan may include 3-5 years of projected performance.

McKinsey & Company developed a capability maturity model in the 1970s to describe the sophistication of planning processes, with strategic management ranked the highest. The four stages are:

1. Financial planning, which is primarily about annual budgets and a functional focus, with limited regard for the environment;
2. Forecast-based planning, which includes multi-year financial plans and more robust capital allocation across business units;
3. Externally oriented planning, where a thorough situation analysis and competitive assessment is performed;
4. Strategic management, where widespread strategic thinking occurs and a well-defined strategic framework is used.

Stages 3 and 4 constitute strategic planning, while the first two are non-strategic, essentially financial, planning. Each stage builds on the previous ones; that is, a stage 4 organization completes activities in all four categories.

Criticism

Strategic Planning vs. Strategic Thinking

Strategic planning has been criticized for attempting to systematize strategic thinking and strategy formation, which Henry Mintzberg argues are inherently creative activities involving synthesis, or "connecting the dots," which cannot be systematized. Mintzberg argues that strategic planning can help coordinate planning efforts and measure progress on strategic goals, but that it occurs "around" the strategy formation process rather than within it. Further, strategic planning functions remote from the "front lines" or contact with the competitive environment (i.e., in business, facing the customer, where the effect of competition is most clearly evident) may not be effective at supporting strategy efforts.



Lessons Learned: Counterfactual Policy Analysis

Counterfactual thinking is a concept in psychology that involves the human tendency to create possible alternatives to life events that have already occurred: something contrary to what actually happened. Counterfactual means, literally, "counter to the facts." These thoughts consist of the "What if?" and the "If I had only..." that arise when we imagine how things could have turned out differently. Because they pertain solely to events that have already occurred, counterfactual scenarios can never actually happen.

A counterfactual thought occurs when a person modifies a factual prior event and then assesses the consequences of that change. A person may imagine how an outcome could have turned out differently if the antecedents that led to it had been different. For example, a person may reflect on how a car accident could have gone by imagining how some of the factors could have been different: "If only I hadn't been speeding...." These alternatives can be better or worse than the actual situation, yielding improved or more disastrous imagined outcomes: "If only I hadn't been speeding, my car wouldn't have been wrecked," or "If I hadn't been wearing a seatbelt, I would have been killed."

Counterfactual thoughts have been shown to produce negative emotions; however, they may also produce functional or beneficial effects. Thoughts that create a more negative imagined outcome are downward counterfactuals, and those that create a more positive imagined outcome are upward counterfactuals. These thoughts of what could have happened can affect people's emotions, causing them to experience regret, guilt, relief, or satisfaction. They can also affect how people view social situations, such as who deserves blame and responsibility.

History

The origin of counterfactual thinking has philosophical roots and can be traced back to early philosophers such as Aristotle and Plato, who pondered the epistemological status of subjunctive suppositions and their nonexistent but feasible outcomes. In the seventeenth century, the German philosopher Leibniz argued that there could be an infinite number of alternate worlds, so long as they were not in conflict with the laws of logic. The well-known philosopher Nicholas Rescher, among others, has written about the interrelationship between counterfactual reasoning and modal logic. This relationship may also be exploited in literature and Victorian studies, painting, and poetry. Ruth M.J. Byrne, in The Rational Imagination: How People Create Alternatives to Reality (2005), proposed that the mental representations and cognitive processes underlying the imagination of alternatives to reality are similar to those underlying rational thought, including reasoning from counterfactual conditionals.

More recently, counterfactual thinking has gained interest from a psychological perspective. Cognitive scientists have examined the mental representations and cognitive processes that underlie the creation of counterfactuals. Daniel Kahneman and Amos Tversky (1982) pioneered the study of counterfactual thought, showing that people tend to think "if only" more often about exceptional events than about normal events. Many related tendencies have since been examined, e.g., whether the event is an action or inaction, whether it is controllable, its place in the temporal order of events, and its causal relation to other events. Social psychologists have studied cognitive functioning and counterfactuals in a larger social context.

Early research on counterfactual thinking took the perspective that these kinds of thoughts were indicative of poor coping skills, psychological error or bias, and generally dysfunctional in nature. As research developed, a new wave of insight beginning in the 1990s took a functional perspective, holding that counterfactual thinking serves as a largely beneficial behavioral regulator. Although negative affect and biases arise, the overall benefit is positive for human behavior.

Activation

There are two portions to counterfactual thinking. First, there is the activation portion: whether we allow the counterfactual thought to enter conscious thought. The second portion involves content: the content portion creates the end scenario for the antecedent.

The activation portion raises the question of why we allow ourselves to think of alternatives that could have been beneficial or harmful to us. Humans tend to think counterfactually when exceptional circumstances led to an event that could therefore have been avoided, and when we feel guilty about a situation and wish to exert more control. For example, in a study by Davis et al., parents who had suffered the death of an infant were more likely to think counterfactually 15 months later if they felt guilty about the incident or if there were odd circumstances surrounding the death. When the death was from natural causes, parents tended to think counterfactually to a lesser extent over time.

Another factor in how much we use counterfactual thought is how close we came to an alternative outcome. This is especially true when a negative outcome was very close to a positive one. For example, in a study by Meyers-Levy and Maheswaran, subjects were more likely to imagine counterfactual alternatives for a target whose house burned down three days after he forgot to renew his insurance than for one whose house burned down six months after he forgot. The sense that a different final outcome almost occurred thus plays a role in why we emphasize that outcome.

Functional Basis

One might wonder why we continue to think in counterfactual ways if these thoughts tend to make us feel guilty or negative about an outcome. One functional reason is to correct mistakes and avoid making them again. If a person can consider another outcome based on a different path, they may take that path in the future and avoid the undesired outcome. The past obviously cannot be changed; however, similar situations are likely to occur in the future, so we treat our counterfactual thoughts as a learning experience. For example, a person who has a terrible interview and thinks about how it might have gone better had they responded more confidently is more likely to respond confidently in their next interview.

Risk Aversion

Another reason we continue to use counterfactual thinking is to avoid situations that may be unpleasant to us, which is part of our approach and avoidance behavior. People often make a conscious effort to avoid situations that make them feel unpleasant, yet despite our best efforts we sometimes find ourselves in them anyway. In these situations, we use counterfactual thinking to imagine how the event could have been avoided, and in turn learn to avoid such situations in the future. For example, a person who finds hospitals uncomfortable, but who ends up in one after cutting a finger while doing dishes, may think of ways they could have avoided the hospital by tending to the wound themselves or doing the dishes more carefully.

Behavior Intention

We also use counterfactual thoughts to change our future behavior in a more positive direction, known as behavior intention. This can involve making a change in our behavior immediately after the negative event has occurred. By actively making a behavioral change, we avoid the problem altogether in the future. An example is forgetting Mother's Day and immediately writing the date on the calendar for the following year, so as to be sure to avoid the problem.

Goal-Directed Activity

In the same vein as behavior intention, people tend to use counterfactual thinking in goal-directed activity. Past studies have shown that counterfactuals serve a preparative function at both the individual and the group level. When people fail to achieve their goals, counterfactual thinking is activated (e.g., studying more after a disappointing grade). When they engage in upward counterfactual thinking, people are able to imagine alternatives with better outcomes; the actual outcome seems worse when compared to those positive alternatives. This realization motivates them to take positive action in order to meet their goal in the future.

Markman, Gavanski, Sherman, and McMullen (1993) identified the repeatability of an event as an important factor in determining which function will be used. For events that happen repeatedly (e.g., sports games) there is increased motivation to imagine alternative antecedents in order to prepare for a better future outcome. For one-time events, however, the opportunity to improve future performance does not exist, so it is more likely that the person will try to alleviate disappointment by imagining how things could have been worse. The direction of the counterfactual statement also indicates which function may be used: upward counterfactuals have a greater preparative function and focus on future improvement, while downward counterfactuals serve as a coping mechanism in an affective function. Furthermore, additive counterfactuals have shown greater potential to induce behavioral intentions to improve performance. Hence, counterfactual thinking motivates individuals to take goal-oriented actions to attain their (failed) goal in the future.

Collective Action

At the group level, on the other hand, counterfactual thinking can lead to collective action. According to Milesi and Catellani (2011), political activists exhibit group commitment and are more likely to re-engage in collective action following a collective defeat when they engage in counterfactual thinking. Unlike the cognitive processes involved at the individual level, abstract counterfactuals lead to an increase in group identification, which is positively correlated with collective action intention; the increase in group identification affects people's emotions. Abstract counterfactuals also lead to an increase in group efficacy, which translates into a belief that the group has the ability to change outcomes. This in turn motivates group members to take group-based actions to attain their goal in the future.


Benefits and Consequences

With downward counterfactual thinking, imagining ways the situation could have turned out worse, people tend to feel a sense of relief. For example, after a car accident somebody might think, "At least I wasn't speeding; otherwise my car would have been totaled." This allows for consideration of the positives of the situation rather than the negatives. With upward counterfactual thinking, people tend to feel more negative affect (e.g., regret, disappointment) about the situation, focusing on ways it could have turned out better: for example, "If only I had studied more, then I wouldn't have failed my test."

Current Research

As with many cognitive processes, current and upcoming research seeks better insight into the functions and outcomes of how we think. Recent research on counterfactual thinking has investigated various effects and how they might alter or contribute to it. One study by Rim and Summerville (2014) investigated the temporal distance of an event and how that distance can affect the process by which counterfactual thinking occurs. Their results showed that "people generated more downward counterfactuals about recent versus distant past events, while they tended to generate more upward counterfactuals about distant versus recent past events," a pattern that replicated for social distance as well. They also examined the possible mechanism of manipulating social distance and the effect this could have on responding to negative events with either self-improvement or self-enhancement motivations.

Research by Scholl and Sassenberg (2014) looked at how perceived power in a situation can affect counterfactual thought and the process of understanding future directions and outlooks. The research examined how manipulating the perceived power of the individual in a given circumstance can lead to different thoughts and reflections, noting that the study "demonstrated that being powerless (vs. powerful) diminished self-focused counterfactual thinking by lowering sensed personal control." These results may show a relationship between how the self perceives events and how it determines the best course of action for future behavior.

Types

Upward and Downward

Upward counterfactual thinking focuses on how the situation could have been better. Often, people think about what they could have done differently. For example, "If I had started studying three days ago, instead of last night, I could have done better on my test." Since people often think about what they could have done differently, it is not uncommon for them to feel regret during upward counterfactual thinking.

Downward counterfactual thinking focuses on how the situation could have been worse. In this scenario, a person can make themselves feel better about the outcome by realizing that the situation is not the worst it could be. For example, "I'm lucky I earned a 'C' on that; I didn't start studying until last night."

Additive/Subtractive

A counterfactual statement may involve the action or inaction of an event that originally took place. An additive statement involves engaging in an event that did not originally occur (e.g., "I should have taken medicine"), whereas a subtractive statement involves removing an event that did take place (e.g., "I should never have started drinking"). Additive counterfactuals are more frequent than subtractive counterfactuals.

Additive and upward counterfactual thinking focuses on "What else could I have done to do well?" Subtractive and upward counterfactual thinking focuses on "What shouldn't I have done so I could do well?" In contrast, an additive and downward scenario would be, "If I had gone drinking last night as well, I would have done even worse," while a subtractive and downward scenario would be, "If I hadn't started studying two days ago, I would have done much worse."

Self vs. Other

This distinction simply refers to whether the counterfactual concerns one's own actions (e.g., "I should have slowed down") or someone else's (e.g., "The other driver should have slowed down"). Self counterfactuals are more prevalent than other-focused counterfactuals. Construal level theory explains that self counterfactuals are more prevalent because the event in question is psychologically closer than an event in which others are involved.

Theories

Norm Theory

Kahneman and Miller (1986) proposed norm theory as a theoretical basis for the rationale behind counterfactual thoughts. Norm theory suggests that the ease of imagining a different outcome determines the counterfactual alternatives created. Norms involve a pairwise comparison between a cognitive standard and an experiential outcome; a discrepancy elicits an affective response whose strength is influenced by the magnitude and direction of the difference. For example, if a server makes twenty dollars more than on a standard night, a positive affect will be evoked. If a student earns a lower grade than is typical, a negative affect will be evoked. Generally, upward counterfactuals are likely to result in a negative mood, while downward counterfactuals elicit positive moods.

Kahneman and Miller (1986) also introduced the concept of mutability to describe the ease or difficulty of cognitively altering a given outcome. An immutable outcome (e.g., gravity) is difficult to modify cognitively, whereas a mutable outcome (e.g., speed) is easier to modify. Most events lie somewhere between these extremes. The more mutable the antecedents of an outcome, the greater the availability of counterfactual thoughts.

Wells and Gavanski (1989) studied counterfactual thinking in terms of mutability and causality. An event or antecedent is considered causal if mutating that event would undo the outcome. Some events are more mutable than others: exceptional events (e.g., taking an unusual route and then getting into an accident) are more mutable than normal events (e.g., taking the usual route and getting into an accident). This mutability, however, may pertain only to exceptional cases (e.g., a car accident). Controllable events (e.g., an intentional decision) are typically more mutable than uncontrollable events (e.g., a natural disaster). In short, the greater the number of alternative outcomes constructed, the more unexpected the event and the stronger the emotional reaction elicited.

Rational Imagination Theory

Byrne (2005) outlined a set of cognitive principles that guide the possibilities people think about when they imagine an alternative to reality. Experiments show that people tend to think about realistic rather than unrealistic possibilities, and about few possibilities rather than many. Counterfactuals are special in part because they require people to think about at least two possibilities (reality, and an alternative to reality), including a possibility that is false but temporarily assumed to be true. Experiments have corroborated the proposal that the principles guiding the possibilities people think about most readily explain their tendencies to focus on, for example, exceptional events rather than normal events, actions rather than inactions, and more recent rather than earlier events in a sequence.

Functional Theory

The functional theory looks at how counterfactual thinking and its cognitive processes benefit people. Counterfactuals serve a preparative function, helping people avoid past blunders. Counterfactual thinking also serves an affective function, making a person feel better: by comparing one's present outcome to a less desirable one, the person may feel better about the current situation (1995). For example, a disappointed runner who did not win a race may feel better by saying, "At least I did not come in last."

Although counterfactual thinking is largely adaptive in its functionality, there are exceptions. For individuals experiencing severe depressive symptoms, perceptions of control are diminished by negative self-perceptions and low self-efficacy. As a result, motivation for self-improvement is weakened. Even when depressed individuals focus on controllable events, their counterfactuals are less reasonable and feasible. Epstude and Roese (2008) propose that excessive counterfactual thoughts can lead people to worry more about their problems and increase distress. Individuals heavily focused on improving outcomes are more likely to engage in maladaptive counterfactual thinking. Other behavior, such as procrastination, may lead to less effective counterfactual thinking: procrastinators show a tendency to produce more downward than upward counterfactuals, and as a result tend to become complacent and lack motivation for change. Perfectionists are another group for whom counterfactual thinking may not be functional.

Rational Counterfactuals

Tshilidzi Marwala introduced the rational counterfactual: a counterfactual which, given the factual, maximizes the attainment of the desired consequent. For example, given the factual statement "Qaddafi supported terrorism, and consequently Barack Obama declared war on Libya," its counterfactual is "If Qaddafi had not supported terrorism, then Barack Obama would not have declared war on Libya." The theory of rational counterfactuals identifies the antecedent that yields the desired consequent, as needed for rational decision making. For example, suppose there is an explosion at a chemical plant; the rational counterfactual asks what the situation should have been to ensure that the possibility of an explosion was minimized.
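The idea can be paraphrased as an optimization: among candidate antecedents, pick the one that maximizes the chance of the desired consequent. The sketch below is a hypothetical toy model, not Marwala's actual formulation; the candidate antecedents and probabilities are invented for illustration.

    # Toy "rational counterfactual": choose the antecedent that maximizes the
    # probability of the desired consequent (here, avoiding an explosion).
    # All values are invented for illustration.
    candidate_antecedents = {
        "install pressure-relief valves": 0.95,  # P(no explosion | antecedent)
        "increase inspection frequency":  0.85,
        "retrain operators only":         0.70,
    }

    def rational_counterfactual(candidates: dict) -> str:
        """Return the antecedent most likely to yield the desired consequent."""
        return max(candidates, key=candidates.get)

    print(rational_counterfactual(candidate_antecedents))
    # -> "install pressure-relief valves"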

Examples

In the case of Olympic medalists, counterfactual thinking explains why bronze medalists are often more satisfied with the outcome than silver medalists. Silver medalists' counterfactual thoughts tend to focus on how close they came to the gold medal (upward counterfactual thinking), whereas bronze medalists tend to think about how they might not have received a medal at all (downward counterfactual thinking).

Another example is college students' satisfaction with their grades. Medvec and Savitsky studied students' satisfaction based on whether their grade just missed, or just made, the cutoff for a category. Students who just made it into a grade category tended to think counterfactually downward and were more satisfied, reasoning that it could have been worse; these students tended to think in terms of "At least I...." Students who were extremely close to making the next highest category, however, showed higher dissatisfaction and tended to think counterfactually upward, focusing on how the situation could have been better; these students tended to think in terms of "I could have...."



References ______

1. http://en.wikipedia.org/wiki/Outcomes_theory
2. http://en.wikipedia.org/wiki/Evaluation
3. http://en.wikipedia.org/wiki/Monitoring_and_Evaluation
4. http://en.wikipedia.org/wiki/Program_evaluation
5. http://en.wikipedia.org/wiki/Logical_framework_approach
6. http://en.wikipedia.org/wiki/Logic_model
7. http://en.wikipedia.org/wiki/Strategic_planning
8. http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CDsQFjAF&url=http%3A%2F%2Fwww.crenyc.org%2F_literature_47804%2FIntro_to_Outcome_Thinking&ei=E9pMVYMKaLbsAT454GQDg&usg=AFQjCNFrCs3u2QZv2JT0ztCitPQuglyFsw
9. http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CB8QFjAA&url=http%3A%2F%2Fpeople.terry.uga.edu%2Fbostrom%2FOutmodel.doc&ei=E9pMVYMKaLbsAT454GQDg&usg=AFQjCNE3Q4_BHp6XDXU_nHh6860fLt6oDQ
10. http://vievolvelearninganddevelopment.vievolve.com/outcome-thinking/
11. http://www.thekennedygroup.com/_pdfs/ot_model.pdf





Attachment A
The SOCRATES Model

Outcome Thinking with SOCRATES

Purpose: To provide an easy-to-remember summary of the key criteria and questions to help you develop a really "well-formed outcome." Using this model will significantly increase personal achievement over time, and can also increase the immediate effectiveness of all your activities, for example in meetings, negotiations, and personal coaching sessions.

S – Specify your goal: What is the specific goal you really want to achieve? Is it stated in positive language? If "moving away from" the Present State, what do you want instead of the problem?

O – Own it: Is it within your control to make happen? If not, what can you bring within control?

C – Check your evidence for having achieved it: Step into it, "as if" it is already achieved, and allow it to become fully associated in your mind: What are you seeing? What are you hearing? What are you feeling? What are you thinking?

R – Remember how you've achieved this particular goal: Looking back, how have you done this? What skills and resources have you used?

A – Add in your higher-level "interests": What does achieving this do for you? What does it mean to you? What else is important to you about this?

T – Test against the needs of others most closely affected: How does your achievement "dovetail" with the needs of others affected by it? Is this acceptable to you? What do they need at the same time?

E – Ecology and other effects: Looking at the bigger systems that may be affected, what is the impact of this achievement? What are the further consequences? What other ripples can you identify?

S – Step out and start: Looking back, what were your first steps? Dissociate: is the image truly compelling? Test commitment with a score out of 10: 10 = committed; 8-9 = ask what has to be true to make it a 10; 7 or less = check for any hidden positive by-products of the Present State and incorporate them, or change the goal.
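The final commitment test lends itself to a small decision function. The sketch below is hypothetical code of our own devising (the SOCRATES material itself prescribes none); it simply encodes the scoring rule from the last step:

    # Encodes the commitment test from the final SOCRATES step ("Step out and start").
    def commitment_check(score: int) -> str:
        """Map a 0-10 commitment score to the model's suggested next action."""
        if score >= 10:
            return "Committed: step out and start."
        if score >= 8:
            return "Ask: what has to be true to make it a 10?"
        return ("Check for hidden positive by-products of the Present State "
                "and incorporate them, or change the goal.")

    for s in (10, 8, 6):
        print(s, "->", commitment_check(s))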


Attachment B
Self-Directed Guide to Outcome Mapping

Getting Started: A Self-Directed Guide to Outcome Map Development
GUIDE, EXERCISES and EXAMPLES

Prepared for the Annie E. Casey Foundation by Organizational Research Services Anne Gienapp, Jane Reisman, Sarah Stachowiak August 2009

1100 Olive Way, Suite 1500 | Seattle, WA 98101 | T: 206.728.0474 | F: 206.728.8984
ors@organizationalresearch.com | www.organizationalresearch.com


The Annie E. Casey Foundation: Don Crary, KIDS COUNT Coordinator; Jann Jackson, KIDS COUNT Senior Fellow; and Tom Kelly, Evaluation Manager. Thanks also to the following KIDS COUNT grantees for helping to inform the content of this guide: Action for Children North Carolina, Children First For Oregon, Connecticut Association of Human Services, Georgia Family Connection Partnership, Kentucky Youth Advocates, Voices for Illinois Children, Voices for Vermont's Children, and the Human Services Policy Center/Washington KIDS COUNT.



INTRODUCTION

A THEORY OF CHANGE clearly expresses the relationships between actions and hoped-for results. It provides an explanation of the belief systems (e.g., assumptions, "best practices," experiences) about what makes positive change in the lives of individuals and the community. A theory of change can be articulated as a visual diagram, such as an OUTCOME MAP, that depicts the sequential relationships between initiatives, strategies, and intended outcomes and goals.


Context

ORS has been providing ongoing evaluation consultation to the Annie E. Casey Foundation KIDS COUNT Initiative since 2007. The focus of evaluation support has been individualized capacity-building and guidance to KIDS COUNT grantees related to developing a theory of change, identifying interim outcomes, developing data collection processes and tools, and using data to strengthen advocacy efforts. Much of this work has been based on A Guide to Measuring Advocacy and Policy, which ORS developed for the Foundation in 2007 in collaboration with Tom Kelly, Director of Evaluation, and Don Crary, Director of KIDS COUNT.[1] The intent of individualized grantee capacity building has been a multi-way learning enterprise: to test the ideas described in the Guide in real-life advocacy settings as well as to fine-tune those lessons. A number of these lessons are captured in a recently published ORS brief (2009).[2] Going forward, ORS and Casey are attempting to find ways to advance the knowledge and application of advocacy evaluation approaches in broader and more accessible ways, including the use of webinars, trainings, and resource materials. The Getting Started Guide is part of this approach.

Purpose and Format of the Guide

Getting Started offers step-by-step guidance and tools that can help KIDS COUNT grantees and other advocacy organizations interested in expressing their theory of change to enhance communication and to serve as a framework for evaluation planning. This guide offers a template for advocates to express a theory of change through an outcome map. The guide lays out steps associated with three main aspects of outcome map development:

Part One: Identify Approach for Developing a Theory of Change Outcome Map, including defining the opportunity, determining the timeframe, and identifying stakeholders to involve in the process

Part Two: Identify Needs, Purposes, Frames for Communication and Evaluation, including identifying audiences, vantage point(s), and priorities to highlight

Part Three: Design a Useful Theory of Change Outcome Map, including identifying goals, strategies, and interim outcomes.

Each Part includes the relevant steps, along with considerations and key questions to answer. The companion, Getting Started Exercises, includes exercises and tools to support documentation of decisions and specific components of an outcome map.

[1] A Guide for Evaluation of Advocacy and Policy (2007). Organizational Research Services on behalf of the Annie E. Casey Foundation. Available at: www.organizationalresearch.com and www.aecf.org.
[2] Ten Considerations for Advocacy Evaluation Planning: Lessons Learned from KIDS COUNT Grantee Experiences (2009). Organizational Research Services on behalf of the Annie E. Casey Foundation. Available at: www.organizationalresearch.com and www.aecf.org.



Defining STRATEGIES, OUTCOMES and GOALS

While these terms are often used differently by different groups or fields, ORS defines Strategies, Outcomes and Goals as follows:

STRATEGIES: A related set of activities, e.g., those connected with implementation of a program, a campaign, or a collaborative effort.

OUTCOMES: Short, intermediate, or long-term changes that can occur among individuals, families, communities, organizations, or systems. Individual, family, and community outcomes can include changes in knowledge, attitudes, skills, behaviors, health, or conditions. Organizational and system outcomes can include changes in institutional structures, capacity, service delivery systems, regulations, service practices, issue visibility, norms, partnerships, public will, and policies.

GOALS: Sizeable, lasting, positive long-term changes.

Getting Started is intended to be an easy‐to‐use resource for advocacy organizations that seek to develop and use a theory of change outcome map to simply articulate and effectively communicate their work to a variety of audiences (e.g., Board members, staff, funders, constituents, donors, partners, or other stakeholders) and to help them think about how and what to evaluate. In addition, we expect that use of this guide will offer more multi‐way learning opportunities regarding how theory of change is developed and used in advocacy settings.

Background

As noted above, one way to express a theory of change is via an outcome map; we have found this visual product to be particularly useful for advocacy organizations. Simply put, an outcome map is a roadmap or a blueprint for articulating strategies and their relationship to outcomes. It provides a focused view of the landscape for advocacy activities, as well as the progression of outcomes that describe how you get from "here" to "there."

In the context of advocacy, this roadmap is especially important. While the focus of advocacy work is often on policy wins and improved conditions for populations and the environment, much of the progress occurs in the landscape along the way. We characterize advocacy outcomes as, on the one hand, interim structural change outcomes (e.g., changes in institutions, systems, beliefs, commitments) and, on the other hand, policy change outcomes. Both are essential to advocacy and policy change work, but the former has been under-emphasized and the latter over-emphasized in the planning, funding, and evaluation of advocacy efforts. Changes in public will, political will, base of support, capacity of advocacy organizations, and strengthened alliances are the crucial structural changes that must happen on the way to policy wins. These interim changes are equally crucial for "holding the line" and defending bedrock legislation. An outcome map lifts up the importance of advocacy's "interim outcomes" at the same time that it sharpens the focus on the type of policy changes of greatest interest and relevance. (Several examples of advocacy organizations' outcome maps are included in the Exercises that accompany this guide.)

Advocacy organizations that have worked to develop a theory of change outcome map have found both the process and the product to be useful. The process allows advocates and their partners to clarify thinking and build consensus about how strategies are expected to lead to desired outcomes. The outcome map product is a useful tool to help advocates communicate about their efforts.

"Development of a theory of change (outcome map) has moved our work forward significantly. The process of defining our strategies, outcomes and goals gave our team a framework for discussing the values and direction of our organization in the coming years. We are better positioned to advocate for a system that effectively serves children." —Director of Policy and Research, Action for Children, NC
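Since an outcome map is essentially a directed graph running from strategies through interim structural changes to policy wins and goals, it can be sketched in code. The representation below is our own hypothetical illustration, not a format prescribed by ORS or the Casey Foundation; the node names are invented:

    # An outcome map as a small directed graph: strategies lead to interim
    # structural-change outcomes, which lead to policy changes and goals.
    outcome_map = {
        "coalition building":     ["strengthened alliances"],
        "media campaign":         ["shift in public will"],
        "strengthened alliances": ["policy win: expanded child health coverage"],
        "shift in public will":   ["policy win: expanded child health coverage"],
        "policy win: expanded child health coverage": ["goal: improved child health"],
    }

    def downstream(node: str, graph: dict) -> list:
        """List everything a strategy is expected to contribute to, in order."""
        results, queue = [], list(graph.get(node, []))
        while queue:
            nxt = queue.pop(0)
            if nxt not in results:
                results.append(nxt)
                queue.extend(graph.get(nxt, []))
        return results

    print(downstream("media campaign", outcome_map))
    # -> ['shift in public will', 'policy win: expanded child health coverage',
    #     'goal: improved child health']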



"We worked to develop an organizational theory of change (outcome map) and so far, the payoff has been wonderful. Not knowing that we would be facing a major state budget crisis this year, it was absolutely the right and most timely thing we could have done! We are able to clearly show, describe and defend our work with our funders, the legislature, our partners and our board. People say 'Oh, now I really get it. I see what you do.'" —Executive Director, Georgia Family Connection Partnership

ORS' experience working with KIDS COUNT grantees and other advocates shows that there is not a neat, linear, "one size fits all" set of steps that results in a completed outcome map. To help advocates, Getting Started outlines basic guiding questions that support outcome map development, along with accompanying exercises and tools to support documentation of decisions and the specific components of an outcome map. (See Getting Started Exercises.) Moving through the guide's steps and questions related to the three main aspects of outcome map development will help grantees better articulate their strategies and their relationships to outcomes, and will ultimately help advocates enhance communication and engage in evaluation planning to document the results of their work. However, there are no "right" answers. Answers to the guiding questions will likely differ for each advocacy organization depending on numerous contextual factors.

The guide is intended to be self-directed, though occasionally organizations may benefit from having an outside consultant work with them through some of the steps of articulating their theory of change. Having an outside perspective can sometimes help tease out the logic and assumptions inherent in your thinking.

GOALS: Sizeable, lasting, positive long‐term changes.


CHECKPOINTS

It is not uncommon for development of an outcome map to surface issues related to consensus, compatibility and capacity. While these issues can be challenging at times, further exploration can result in enhanced clarity and agreement about what an advocacy organization is seeking to accomplish, as well as what might realistically be required to get there. As advocates work through each part of outcome map development and its related steps and questions, it may help to periodically consider the following questions as “checkpoints.”

1. To what degree is there clarity and consensus among key stakeholders regarding beliefs and assumptions, audiences, models of change, strategies and key outcome areas? Addressing some of the guide’s questions may expose places where there are different ideas about an organization’s work or how it leads to expected changes. Sometimes differences can be easily resolved. However, if different assumptions are exposed about how the advocacy work happens, it can sometimes be challenging to find agreement. If it is hard to achieve consensus or arrive at answers to particular questions, it may be best to make a brief note of what the differences or challenges are and simply move on. Questions can always be revisited later in the process. Depending on the situation, it may be helpful to work with an outside facilitator to sort out issues standing in the way of agreement or consensus.

2. To what degree is the emerging picture of change compatible with the organization’s beliefs, approaches and overall culture (e.g., need for confidentiality, beliefs about how change happens, timeframe represented, implied roles and relationships)? It is a good idea to make sure that the outcome map reflects a view of change that is consistent with an organization’s strategic plan, overall beliefs and philosophies (e.g., community engagement, grassroots democracy).

3. To what degree does the emerging theory of change have implications for organizational capacity, roles and resources dedicated to advancing it (e.g., does the organization have adequate capacity to fully implement key strategies)? It is a good idea to make sure that the outcome map reflects an amount of work and results expectations that are realistic and in line with an organization’s resources and capacity.


PART ONE

STEP 1: Define the current opportunity or question and why this is the right time to develop a theory of change outcome map.

STEP 2 Determine timeframe for development of a theory of change outcome map.


Identify Approach for Developing a Theory of Change Outcome Map

There are times when an organization is in the best position to begin this process. (See examples in the Part One Exercises.) Organizations should be strategic about when and why they engage in outcome map development.

Answer the Question: Why are we embarking on this process now? What is our goal and purpose for creating an outcome map?

While the purpose of a theory of change outcome map is similar for most advocacy groups – to define and communicate how strategies will lead to expected changes – ORS’ experience shows that the process for building an outcome map will vary across groups. Variations are partly due to differences in the contexts, timing, organizational culture and leadership present across organizations. Even with these variations, however, there are two basic processes that ORS has seen work well; both are described in Getting Started Exercises, Part One.

One process will work best if an organization is developing its outcome map in a 3-6 month timeframe. This is a likely process if there is a limited appetite for planning, and the preferred approach involves having a few key representatives do most of the work, with vetting and review by a broader group of stakeholders. The second process is likely to be effective in a 6-12 month outcome map development process. This is the likely process when it is determined that an outcome map must be created based on the direct input of many stakeholders and partners.

Answer the Question: What timeframe will be appropriate for our process?


STEP 3 Determine stakeholders in the outcome map design process — e.g., staff only, staff and others (e.g., board), broader partners / stakeholder input and feedback, designated work group.

All of these factors will need to be considered together in order to select the process that will work best for your situation:

“Appetite” for planning among staff and stakeholders. Much advocacy work occurs through partnerships across different organizations, sectors and sometimes—in the case of unlikely allies—across political or other lines. While involving partners in planning or theory of change development processes can lead to a more complete picture of how desired goals may be achieved, it may be prohibitive or difficult to involve all partners aligned around one campaign or strategy in broader planning efforts. Instead, it may be best to consult partners as interim outcomes and/or priority measures are identified. This could be especially important if support or cooperation from partners is needed to implement strategies that are directed at certain outcomes, or if there is a need to rely on partners to help with documentation of outcome achievement.

Who must be at the table? Sometimes there is a strategic reason for involving certain parties in a planning process, e.g., to further ownership and buy-in, to build good will, or to deepen relationships or partnerships.

Time available. Advocates operate in a fast-paced, dynamic environment with intense periods of hectic activity. This can make finding regular time to meet and plan challenging. Taking steps to conceptualize a theory of change is more than a one-day “event.” It can be challenging, but advocacy organizations need to determine how they can dedicate the needed time and bandwidth to this activity. It is also important to consider timing: if an organization is about to develop, revise or revisit its strategic plan or do other significant planning work, or if advocates are heading into the busiest times of the year (e.g., legislative session), it may be best to put theory of change development on hold.

Leadership. Because development of a theory of change outcome map will typically be done “out of hide,” that is, in addition to all other efforts and without any additional resources, it will be best accomplished if there is leadership to keep the process moving.

Answer the Question: Who will lead/contribute to the process of developing your theory of change outcome map?

CHECKPOINT: Before moving on, it may be useful to reflect on questions related to clarity and consensus (see Checkpoints, p. 4).


EXERCISES

EXERCISE 1: Define the current opportunity or question and why this is the right time to develop a theory of change outcome map.

Identify Approach for Developing a Theory of Change Outcome Map

Examples include:

An organization is just beginning or has just finished strategic planning and is hoping to lift up and be able to communicate important aspects of its work.

There has been a recent leadership transition and an outcome map could help clarify the organization’s current and/or future work and focus areas.

The organization wishes to learn how it might strengthen its capacity to influence policy and budget decisions at the state or local level.

The organization is moving towards evaluation of some or all of its efforts and needs to more specifically articulate relationships between strategies and outcomes.

EXERCISE 2 Determine a timeframe for development of an outcome map.

Example: 3-6 month outcome map development process.

1-2 individuals are identified to facilitate the process.

Staff or another identified small (5-8 person) work group develops an initial draft outcome map [typically accomplished in 2-5 work sessions, 2-4 hours each]. The work group should include the facilitators, representatives of the organization’s executive/management team, some members with solid knowledge of strategies and implementation, and a variety of perspectives.

The draft outcome map is shared and vetted with a broader group of stakeholders (e.g., other staff, Board, partners, funders) and feedback is collected and documented.

Example: 6-12 month outcome map development process.

1-2 individuals are identified to facilitate the process.

A list of all key stakeholders is developed, and input regarding elements of the outcome map is sought from key stakeholders [typically accomplished via multiple meetings or work sessions that occur over a 1-3 month time frame].

Stakeholders’ input regarding initial outcome map development is summarized.


Example: 6-12 month outcome map development process (continued).

A small work group is identified (5-8 people). Based on input from key stakeholders, the work group prepares a draft outcome map [typically accomplished in 1-3 work sessions, 2-4 hours each]. The work group should include some of those who participated in the broad input-gathering process.

The draft outcome map is shared back with those stakeholders who provided initial input. Feedback is collected and documented.

The draft outcome map is refined by the work group based on feedback received [typically accomplished in 1-2 work sessions, 2-3 hours each]. A second draft is shared for feedback.

The second draft outcome map is refined by the work group based on feedback received [typically accomplished in 1-2 work sessions, 2-3 hours each]. A third draft is shared for minor comments and adoption.

Formal adoption follows minor revisions (revisions at this point are primarily to clarify or amplify).

EXERCISE 3: Determine stakeholders in the outcome map design process—e.g., staff only, staff and others (e.g., board), broader partner/stakeholder input and feedback, designated work group.

Identify who will lead/contribute to the process of developing your theory of change outcome map.

Consider:
- “Appetite” for planning
- Strategic choices: Who must be at the table?
- Time available
- Leadership


PART TWO

STEP 1: Brainstorm relevant audiences for a theory of change outcome map and identify their needs and interests.


Identify Needs, Purposes and Frames for Communication and Evaluation

Answer the Question: Who are the main audiences with whom you will communicate your work via an outcome map? Possible audiences include funders, board, staff, constituents, partners and donors.

Identifying your organization’s main audiences and their interests regarding your work can help you determine how best to communicate about your work, what areas to emphasize, and how you may approach documentation and evaluation of your work. Some audiences have strong or specific interests and needs to which your organization may want or need to respond. If so, consider these the target audience(s) for your outcome map. When creating the outcome map, it will be important to clarify to what extent the map will address a particular audience’s needs and interests in relation to the interests of other audiences or to a more general picture of your work.

Answer the Question: Who is your target audience for a theory of change outcome map?

STEP 2 Determine the best vantage point(s) for depicting a theory of change outcome map.

Different audiences may view your work from different perspectives or vantage points. Before articulating the strategies and outcomes that you want to make clear and prominent in your map, it will be helpful to determine the vantage point that can best communicate your theory of change.

30,000-foot vantage point. An outcome map from this high-level vantage point is a “zoomed out” view, like looking out of an airplane window. This viewpoint shows the broad landscape of what is being done to advance towards and achieve a long-term goal, typically a policy-related goal or a change in population or environmental conditions. This view would likely include the multiple efforts of different partners that contribute towards the long-term goal, and is most useful when seeking to describe work happening over a long-term time frame, e.g., multiple partners implementing a broad set of efforts directed at different areas leading to change in the health/well-being of all children birth to 18. This vantage point may be most relevant for general communication with multiple funders, for partner alignment, and for those who care about long-term results.


10,000-foot vantage point. An outcome map from this vantage point shows a slightly lower-to-the-ground view and would likely encompass the breadth of work of one organization. This vantage point could be most useful if an organization is seeking to define its particular role or contribution within a broad effort (i.e., what the organization itself brings to a partnership effort), or if an organization wishes to express how its own mix of internal strategies and outcomes are related and connected. This vantage point may be most relevant for board members, staff teams, close-in partners and funders.

1,000-foot vantage point. An outcome map from this vantage point is like a view from the roof of a small building and would likely illustrate the activities and intended results connected with a single strategy or related set of actions. This view would be most useful if an organization is involved in evaluation planning, or is trying to get a picture of what is likely to happen or change in a distinct near-term period (e.g., the next 1-2 years). This vantage point may be most relevant for close-in partners, staff teams, or constituents.

Another option is to create several “nested” outcome maps that show different views of strategies within a multi-faceted campaign or broad effort. This option can be quite useful, but makes most sense if the organization has the time, appetite, and leadership for doing this work.

Answer the Question: What vantage point(s) will allow you to best communicate your work and intended results to your target audience(s)?

When selecting a vantage point for your outcome map, consider that there is no right answer. Answering this question for your organization will involve thinking about what is important to your target audiences, about your strategies and what your organization ultimately hopes to achieve, and about the degree to which your work happens in the context of collaboration and partnership with others who share similar goals.


STEP 3 Prioritize relevant strategies and outcome areas to highlight in your outcome map.


Think about what your target audience(s) cares most about. This, along with your identified vantage point, can help you determine what needs to be clear or prominent in your theory of change. Some audiences may care most about implementation of certain activities (e.g., media advocacy/communications, lobbying, community education and outreach, data and research). Some audiences may care most about achieving certain outcomes (e.g., increased organizational capacity to do good media advocacy, policy wins, the health/well-being of a particular population).

Considerations: Some audiences’ interests may relate to your organization’s operations (e.g., organizational capacity, types of actions and the quantity/breadth of actions). Some audiences’ interests may relate more to the effectiveness of your actions (e.g., the quality, results or outcomes of your actions). And some audiences may have strategic interests (e.g., how your organization’s efforts contribute to broad outcome areas or goals). Organizational capacity, types of actions, effectiveness and strategic interests are particularly important to consider and reflect as part of your outcome map. Often, information about the quantity or breadth of actions fits better into a work plan or implementation plan.

Audiences’ interests may be either to get a clearer understanding of your organization’s current work, or to get a clearer view of what your organization’s work could look like in the future. It is important to be as clear as possible in determining whether the outcome map will present a picture of “what is” or “what could be.”

Answer the Question: What activities or outcomes does your target audience(s) care most about?

CHECKPOINT: Before moving on, it may be useful to reflect on questions related to clarity and consensus (see Checkpoints, p. 4).


Example Outcome Map: Georgia Family Connection Partnership

[Outcome map diagram. Strategies: provide technical assistance, support and training; convene and connect local, regional, state and national partners; provide data, research and evaluation. These lead (“So That”) to interim outcomes: increased knowledge regarding current conditions of children and families in Georgia, data and research, and best practices; increased organizational capacity to provide coordinated services to children and families; increased/enhanced coordination among collaborative partners; additional partners aligned to support system change; increased public will/commitment to improve conditions for Georgia’s children and families; children and families have access to appropriate services; Georgia has strong systems to support families; increased resources to improve conditions for Georgia’s children and families; children and families demonstrate improved outcomes. These flow to the long-term outcomes (healthy children; children ready to start school; children succeeding in school; stable, self-sufficient families; strong communities) and ultimately to the goal: improved conditions for children and families in Georgia.]

EXERCISES

EXERCISE 1: Brainstorm relevant audiences for a theory of change outcome map and identify their needs and interests.

Identify Needs, Purposes and Frames for Communication and Evaluation

Below is a table that may be useful to complete as you think about the exercises below.

Identify Audiences and Their Interests (table columns):
- Main audiences for your work
- What are the primary interests/needs of the audience? (You are successful if...)
- Do audiences’ interests align most with a 30,000-foot, 10,000-foot or 1,000-foot view of your work?
- Target audience for outcome map? (Y/N)

Considerations: When creating the outcome map, it will be important to clarify to what extent the map will address a particular audience’s needs and interests in relation to the interests of other audiences or to a more general picture of your work. If you have identified many target audiences, where are their interests the same? Where are they different? If the interests of identified target audiences are significantly different, it may be helpful to narrow your focus. Are the target audiences’ needs/interests likely to be addressed in the short term or long term? Are they likely to be addressed by your organization alone or by many organizations, groups and efforts working in partnership?


EXERCISE 2 Determine vantage point(s) for depicting a theory of change outcome map.

This could be your vantage point if SOME or ALL of the following are true...

30,000-foot view
- Your organization is working toward impact or has a social change model1
- Your organization typically works in a context of collaboration and partnership to achieve shared goals
- You want your outcome map to show how your organization’s strategies connect with those of other groups, and with a broad, long-term goal
See examples: Connecticut Association for Human Services; Children First For Oregon, “Fostering Success”

10,000-foot view
- Your organization engages in multiple strategies directed towards a broad, long-term goal (e.g., a policy-related goal)
- Your organization has adopted a social change or policy-change model (see footnote)
- You want your outcome map to portray the strategies and expected outcomes reflected by the whole of your organization’s work, and the connections among strategies/outcomes
- You want your outcome map to help express your organization’s particular role or contribution within a broader effort
See examples: Georgia Family Connection Partnership; Action for Children North Carolina

1,000-foot view
- Your organization is engaged in a specific strategy directed at a specific policy-related goal
- Your organization is interested in the results of specific advocacy tactics
- You want your outcome map to portray the set of related activities that are encompassed within a particular strategy and the resulting short- and intermediate-term outcomes
See examples: Children First for Oregon, “Fostering Success” Strategic Communications Campaign; Georgia Family Connection Partnership, Strategy 1 Map

1 For more description of the social change model and models of change in advocacy and policy work, see: The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach. Blueprint Research & Design, Inc. (2005). Prepared for The California Endowment.



EXERCISE 3: Prioritize relevant strategies and outcome areas to highlight in your outcome map.

Below is a table that could be useful to complete as you think about the exercise below.

Identify Target Audiences and Relevant Headline(s) (table columns): Target Audiences | Relevant Headline(s)

Imagine that a major news source is putting together a summary of your past year of work. Thinking about your target audiences’ interests, what is the headline that would best communicate success to your target audience(s)? What would your target audience most want to read or hear?

The headline could address...

What work you have done (type of activities). Examples: development of data products; development of media spots/press releases; provision of training/technical assistance; sponsoring/facilitating meetings and events; conducting research/evaluation; legislative advocacy; identifying strategies and tactics for a Universal Pre-K campaign.

OR

What you have accomplished as a result of your work (how much have you done?). Examples: # of hits on CLIKs; # of downloads of policy/issue briefs/newsletters; # of public and nonprofit organizations receiving products; # and types of attendees at conferences/meetings; open rate of our KC news alert email message; # of policy makers who received the data book; # of press releases sent to daily newspapers, TV stations and radio stations.



What you have accomplished as a result of your work (continued)

Organizational capacity. Examples: the amount of capacity your organization has to implement or engage in certain strategies or activities.

Effectiveness (outcomes) of your work. Examples: legislative report tracking policy changes; public citation of use of KC products by policymakers; # of instances where products are cited in policy debates (legislative record search); evidence of policymaker engagement (i.e., press releases, citations in bill language); # of child advocacy groups that use data/products (State Child Advocate Survey); # of research-proven initiatives used in the state.

Strategic accomplishments. Examples: selected message to frame a key issue; selected topic/frame/approach for development of data products; selected approaches to disseminate messages/data products.


PART THREE


Design a Useful Theory of Change Outcome Map

Advocacy organizations are generally clear on their strategies and tactics and their end goals. End goals are often expressed as policy changes, or as changes in population or environmental conditions. Developing meaningful evaluation of advocacy and policy efforts requires definition of the “middle”: what happens between the implementation of strategies and tactics and the ultimate policy impact?

STEP 1 Start at the END by clarifying the goal(s).

The goal(s) is the “bottom line” of your outcome map. For KIDS COUNT grantees, this ultimate change will generally be:

A policy-related change. In other words, results of strategies and activities may include policy development, new or revised policy, policy agendas, policy adoption or policy blocking, policy monitoring, policy enforcement or the like.

OR

An impact statement. Results of strategies lead to a specific condition for individuals, families, a particular population, neighborhood(s) or communities. For example:
- Children in our state are healthy
- All families are strong and self-sufficient
- Communities are prosperous

Answer the Question: What is the ultimate goal of your work? Or if you are working with partners, what is your overall common goal? Where is there mission congruence?

STEP 2 Identify the main strategies that your organization/ partnership will implement towards the goal(s).

Consider specific strategies that address your ultimate goal. Strategies are related sets of activities and can include public awareness efforts, capacity‐building efforts, or community mobilization efforts. Strategies can describe programs, campaigns, initiatives, or collaborations. Answer the Question: What work will we do to reach our ultimate goal?


STEP 3 Determine the length of time between strategy implementation and outcome achievement that will be depicted in your outcome map.

The length of time identified will suggest the types of outcomes that will likely make up “the middle” of the theory of change (see Step 4).

Considerations: Think about your activities. For how long are current activities likely to be sustained? Your map should reflect a view of your work that you feel relatively certain about. Think about your approach. For example, if you have a social change approach, your outcome map will likely present quite a long-term view. However, if your approach is advocacy in order to bring about strategic alliances around a particular issue, the time frame for achievement may be much shorter. If you have a policy change approach, where are you in the policy process? How long will it take to achieve desired policy “wins”?

Answer the Question: When are you likely to achieve desired outcomes and goals? What kinds of things might need to happen first or “on the way”?

STEP 4 Begin filling in “the middle.” Identify meaningful interim outcomes that are likely to occur on the way to the goal(s).

One very effective approach is to develop “So That” chains. So-That chains help connect strategies to the ultimate goal through a series of logical, sequential changes. Creating So-That chains for each strategy allows for effective articulation and communication of the expected changes resulting from each strategy, and of how the strategies together contribute to ultimate goals. In developing an outcome map, however, it is important to note that multiple strategies are also likely to lead to common intermediate outcomes on the pathway to ultimate goals.

Fill in the Statement: We do _________________ [Strategy] So That ________________________ [Outcome/Change] results.


Example:

We provide technical assistance (to child abuse prevention/family support programs) [Strategy]
So That providers increase their knowledge about best practices [Outcome]
So That providers provide high-quality programs [Outcome]
So That programs are more likely to result in positive outcomes for parents and families served [Outcome]
So That children are less likely to experience abuse and neglect [Outcome]
So That all children are healthy and safe [Goal]

Notice that this chain of statements moves from the knowledge to the behavior of providers, and from the health status of children in programs to the health status of children in the community. Each link is a logical step in a sequence of events showing how implementation of a specific strategy contributes to broad changes. For tips about constructing So-That chains and ideas about interim outcomes, see the table included with the Part Three Exercises, which describes several outcome areas likely to be related to advocacy. Consider that you will likely need to characterize the structural changes (e.g., changes in institutions, beliefs, commitments) that happen on the way to the policy changes you are seeking.
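For teams that like to draft chains digitally before committing them to paper or a diagram, the structure above can be captured in a few lines of code. The following is a minimal, purely illustrative sketch and is not part of the original ORS guide; the function name render_chain and all the example text are hypothetical.

# Illustrative sketch only (not from the ORS guide): a "So That" chain
# modeled as a strategy, an ordered list of interim outcomes, and a goal.

def render_chain(strategy, outcomes, goal):
    """Print a So-That chain in the guide's fill-in-the-blank format."""
    lines = [f"We do: {strategy} [Strategy]"]
    for outcome in outcomes:
        lines.append(f"  So That {outcome} [Outcome]")
    lines.append(f"  So That {goal} [Goal]")
    return "\n".join(lines)

print(render_chain(
    strategy="provide technical assistance to family support programs",
    outcomes=[
        "providers increase their knowledge about best practices",
        "providers deliver high-quality programs",
        "children are less likely to experience abuse and neglect",
    ],
    goal="all children are healthy and safe",
))

Writing each chain this way forces every link to be stated explicitly, which makes gaps in the logic easy to spot before the map is shared.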


STEP 5 Prepare to share, refine and/or adopt your theory of change outcome map.

Once So-That chains are completed and a draft outcome map has been created, it is a good idea to test logic and relevance.

Answer the Questions: Are there logical linkages between strategies, outcomes and the goal? Are the most relevant outcomes included (i.e., those that are of highest interest/importance to target audiences)? Revisit Part Two, Step 1 to review your audiences’ needs and interests, and consider whether the outcome map sufficiently addresses them.

CHECKPOINT: As you prepare to share or adopt your outcome map, it may be helpful to reflect on questions related to compatibility and capacity (see Checkpoints, p. 4)

Next Steps

Outcome maps can be incredibly useful for advocates; many have found outcome maps to be valuable for effective communication about advocacy work and as a fundamental part of evaluation planning. This guide presents an approach and specific steps to support KIDS COUNT grantees and other advocates as they think about and create a theory of change outcome map. Those who follow the steps and engage in the associated exercises should have a good understanding of how to clearly articulate their theory of change via a graphic outcome map. Those who use the guide will also add to ongoing learning about what it takes to plan and undertake evaluation efforts in advocacy settings.

Once groups have worked through the steps and exercises in this guide, they can draw on their thinking as well as the outcome map itself to inform communication and messages about the organization’s work. Organizations may also periodically refer to the questions presented in this guide to reflect on the ongoing logic and relevance of their theory of change map or to support planning efforts. Organizations can also use the outcome map as a platform for more detailed evaluation planning. Moving ahead with evaluation would involve identification of priority areas for measurement, selection of an appropriate evaluation design and measurement approaches, development of a comprehensive evaluation plan and implementation of evaluation efforts.3

3 For more information about steps involved in evaluation planning, see: A User’s Guide to Advocacy Evaluation Planning (2009). Harvard Family Research Project, with support from the David and Lucile Packard Foundation.


EXERCISES

EXERCISE 1: Start at the END: Clarify goal(s).

Design a Useful Theory of Change Outcome Map

Consider the following: the purpose of the outcome map; the needs/interests of your target audience(s); your organization’s core work.

What is the “bottom line” or ultimate goal of your work? List the ultimate goal/impact in the “Goal” shape at the bottom of the “Outcome Map Template” on page 15. Examples of goals include:

Policy-related changes: policy development, new or revised policy, agenda setting, policy adoption or policy blocking, policy monitoring, policy enforcement.

Impact statement: a specific condition for individuals, families, a particular population, neighborhood(s) or community.

It is important to achieve consensus about this goal. Typically, goal(s) are broad enough to make everyone feel comfortable, included and inspired.

EXERCISE 2 Identify the main strategies that your organization/ partnership will implement towards the goal(s).

Consider: the needs/interests of your target audience(s); your organization’s core work.

Identify the specific strategies which address your ultimate impact. These strategies may include program strategies, campaigns, initiatives, collaborations, public awareness efforts, capacity-building efforts, community mobilization efforts and so on. Here are some examples:
- Media campaign
- Facilitate Alliance for Education
- Community organizing
- Provide technical assistance
- Conduct research and program evaluation
- Develop data products


EXERCISE 3 Determine the length of time between strategy implementation and outcome achievement that will be depicted in the outcome map.

EXERCISE 4 Begin filling in “the middle.” Identify meaningful interim outcomes that are likely to occur on the way to the goal(s).

Consider: the needs/interests of your target audience(s); the vantage point you identified for your outcome map (e.g., 30K-foot, 10K-foot, 1K-foot view); your organization’s capacity or partnerships.

How long will it take to implement the strategies and/or achieve the range of desired accomplishments, outcomes and goal(s)? Is it likely to take 1-3 years? 3-5 years? 5-10 years? 10 years or more? What implications does your working timeframe have for the particular strategies and activities that will be implemented and/or the sequence of outcomes (changes, results) that will be achieved in the short term, intermediate term and longer term?

Create “So That” chains. Take the first strategy identified on your outcome map and create a “so that” chain based on the following question: “We do [strategy] so that _____________________ results for individuals, families, organizations or communities.” The answer should be the direct change, result or outcome of the strategy. Repeat this question until you have linked each strategy to your goal.

TIP: It is helpful to create So-That chains and begin assembling the picture of your theory of change outcome map on a large wall. You can use colored half-sheets of paper to write strategies and outcomes, and these sheets can be arranged sequentially on the wall to reflect the connections between strategies and outcomes, as well as the flow of outcomes towards the ultimate goal. A worksheet template follows. Once you have begun to craft So-That chains, you can begin to fill in the “Outcomes” rectangles in the middle part of the “Outcome Map Template” on page 15. See below for an example So-That chain, a sketch of how merged chains form a map, and additional information about the types of outcomes likely to be associated with advocacy and policy work.
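Because several strategies often feed the same intermediate outcomes, it can help to think of the assembled outcome map as a small directed graph rather than a set of independent chains. The sketch below is illustrative only and not from the guide; the chains dict and all its contents are hypothetical. It merges two chains and reports the outcomes that more than one strategy flows into.

# Illustrative sketch (not from the guide): merge several So-That chains
# into one outcome map and find outcomes shared by multiple strategies.
from collections import defaultdict

chains = {
    "Provide data and research": [
        "increased knowledge of current conditions",
        "increased public will",
        "improved conditions for children and families",
    ],
    "Convene partners": [
        "enhanced coordination among partners",
        "increased public will",
        "improved conditions for children and families",
    ],
}

reached_by = defaultdict(set)  # outcome -> strategies whose chain passes through it
edges = set()                  # (from_node, to_node) links in the map
for strategy, outcomes in chains.items():
    path = [strategy] + outcomes
    for src, dst in zip(path, path[1:]):
        edges.add((src, dst))
    for outcome in outcomes:
        reached_by[outcome].add(strategy)

shared = {o: s for o, s in reached_by.items() if len(s) > 1}
print(f"{len(edges)} links in the map")
print("Common intermediate outcomes:", ", ".join(sorted(shared)))

The shared outcomes this surfaces are exactly the places where, on the wall, two paper chains would converge on a single half-sheet.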


“So That” Chain Worksheet

We implement [STRATEGY/ACTIVITY]
So That [OUTCOME]
So That [OUTCOME]
So That [OUTCOME]

TYPES OF OUTCOMES ASSOCIATED WITH ADVOCACY AND POLICY CHANGE WORK

In A Guide To Measuring Advocacy and Policy, ORS identified several outcome areas that represent the interim steps and infrastructure that create the conditions for changes in society and the environment, as well as outcome areas that reflect the end-game: policy adoption, funding or enforcement in various jurisdictions (e.g., local, state, federal). ORS then distilled these outcomes into six distinct categories representing the essential changes in lives, community conditions, institutions and systems that result from advocacy and policy work. These outcome categories are as follows:2

1. SHIFT IN SOCIAL NORMS
Description: the knowledge, attitudes, values and behaviors that comprise the normative structure of culture and society. Advocacy and policy work has become increasingly focused on this area of change in recognition of the importance of aligning advocacy and policy goals with core and enduring social values and behaviors.

2 Descriptions of Outcome Areas and the Table on pages 5-7 excerpted from: A Guide to Measuring Advocacy and Policy (2007). Organizational Research Services on behalf of the Annie E. Casey Foundation. Available at: www.organizationalresearch.com and www.aecf.org.


2. STRENGTHENED ORGANIZATIONAL CAPACITY
Description: the skill set, staffing and leadership, organizational structure and systems, finances, and strategic planning of the non-profit organizations and formal coalitions that plan and carry out advocacy and policy work. The development of these core capacities creates the critical organizational conditions for implementing and sustaining advocacy and policy change efforts.

3. STRENGTHENED ALLIANCES
Description: the level of coordination, collaboration and mission alignment among community and system partners, including nontraditional alliances (e.g., bipartisan alliances, unlikely allies). These structural changes in community and institutional relationships and alliances have become essential forces in presenting common messages, pursuing common goals, enforcing policy changes and ensuring the protection of policy “wins” in the event that they are threatened.

4. STRENGTHENED BASE OF SUPPORT
Description: the grassroots, leadership and institutional support for particular policy changes. The breadth and depth of support among the general public, interest groups and opinion leaders for particular issues provides a major structural condition for supporting changes in policies. This outcome category spans many layers of culture and societal engagement, including increases in civic participation and activism, “allied voices” among informal and formal groups, the coalescence of dissimilar interest groups, actions of opinion-leader champions, and positive media attention.

5. IMPROVED POLICIES
Description: the stages of policy change in the public policy arena. These stages include policy development, adoption, implementation and funding. This has frequently been the past focus of measuring the success of advocacy and policy work. It is certainly the major focus of such work, but it is rarely achieved without changes in the preconditions to policy change identified in the other outcome categories.

6. CHANGES IN IMPACT
Description: the ultimate changes in social and physical lives and conditions, i.e., changes in individuals, populations and physical environments, that motivate policy change efforts. Changes in impact are long-term outcomes and goals. They would be important to monitor and evaluate in those funding situations in which grant makers and advocacy organizations view themselves as partners in social change. These types of changes are influenced by policy change but typically involve far more strategies, including direct interventions, community support, and personal and family behaviors, than policy change alone.


The table below presents these outcome categories along with samples of the outcomes and strategies that are associated with these broad outcome areas. Please note that the order of outcomes is not intended to represent their importance or priority.

Menu of Outcomes for Advocacy and Policy Work

1. SHIFT IN SOCIAL NORMS
Examples of Outcomes: changes in awareness; increased agreement on the definition of a problem (e.g., common language); changes in beliefs; changes in attitudes; changes in values; changes in the salience of an issue; increased alignment of campaign goals with core societal values; changes in public behavior.
Examples of Strategies: media campaign; message development (e.g., defining the problem, framing, naming); development of trusted messengers and champions.
Unit of Analysis (Who or What Changes?): individuals at large; specific groups of individuals; population groups.

2. STRENGTHENED ORGANIZATIONAL CAPACITY
Examples of Outcomes: improved organizational capacity of organizations involved with advocacy and policy work (e.g., non-profit management, strategic abilities, capacity to communicate and promote advocacy messages, stability); increased ability of coalitions working toward policy change to identify the policy change process (e.g., venue of policy change, steps of policy change based on a strong understanding of the issue and barriers, jurisdiction of policy change).
Examples of Strategies: leadership development; organizational capacity building; communication skill building; strategic planning.
Unit of Analysis (Who or What Changes?): advocacy organizations; not-for-profit organizations; advocacy coalitions; community organizers and leaders.


3. STRENGTHENED ALLIANCES
Examples of Outcomes: increased number of partners supporting an issue; increased level of collaboration (e.g., coordination); improved alignment of partnership efforts (e.g., shared priorities, shared goals, common accountability system); strategic alliances with important partners (e.g., stronger or more powerful relationships and alliances).
Examples of Strategies: partnership development; coalition development.
Unit of Analysis (Who or What Changes?): individuals; groups; organizations; institutions.

4. STRENGTHENED BASE OF SUPPORT
Examples of Outcomes: increased public involvement in an issue; increased level of actions taken by champions of an issue; increased voter registration; changes in voting behavior; increased breadth of partners supporting an issue (e.g., number of “unlikely allies” supporting an issue); increased media coverage (e.g., quantity, prioritization, extent of coverage, variety of media “beats,” message echoing); increased awareness of campaign principles and messages among selected groups (e.g., policy makers, general public, opinion leaders); increased visibility of the campaign message (e.g., engagement in debate, presence of campaign message in the media); changes in public will.
Examples of Strategies: community organizing; media campaigns; outreach; public/grassroots engagement campaigns; voter registration campaigns; coalition development; development of trusted messengers and champions; policy analysis and debate; policy impact statements.
Unit of Analysis (Who or What Changes?): individuals; groups; organizations; institutions.


5. IMPROVED POLICIES
Examples of Outcomes: policy development; policy adoption (e.g., ordinance, ballot measure, legislation, legally-binding agreements); policy implementation (e.g., equity, adequate funding and other resources for implementing policy); policy enforcement (e.g., holding the line on bedrock legislation).
Examples of Strategies: scientific research; development of “white papers”; development of policy proposals; pilots/demonstration programs; educational briefings of legislators; watchdog function.
Unit of Analysis (Who or What Changes?): policy planners; administrators; policy makers; legislation/laws/formal policies.

6. CHANGES IN IMPACT
Examples of Outcomes: improved social and physical conditions (e.g., poverty, habitat diversity, health, equality, democracy).
Examples of Strategies: combination of direct service and systems-changing strategies.
Unit of Analysis (Who or What Changes?): population; ecosystem.

Definition of outcomes is a crucial step of your evaluation design. We suggest that advocacy and policy efforts can be viewed in the context of one or more of these broad outcome categories, or “outcome rectangles.”
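If an organization keeps an inventory of its outcome statements, the six categories above can double as simple tags, which later makes it easy to see whether a draft map leans heavily on policy outcomes while neglecting the structural ones. The sketch below is a hypothetical illustration, not part of the ORS material; the OutcomeCategory names and the draft_map entries are invented for the example.

# Illustrative sketch (not from the guide): tag outcome statements with
# the six ORS outcome categories and count how a draft map is distributed.
from collections import Counter
from enum import Enum

class OutcomeCategory(Enum):
    SOCIAL_NORMS = "Shift in social norms"
    ORG_CAPACITY = "Strengthened organizational capacity"
    ALLIANCES = "Strengthened alliances"
    BASE_OF_SUPPORT = "Strengthened base of support"
    POLICIES = "Improved policies"
    IMPACT = "Changes in impact"

draft_map = [
    ("Increased media coverage of the issue", OutcomeCategory.BASE_OF_SUPPORT),
    ("New bipartisan coalition formed", OutcomeCategory.ALLIANCES),
    ("Pre-K funding bill adopted", OutcomeCategory.POLICIES),
    ("More children enter school ready to learn", OutcomeCategory.IMPACT),
]

counts = Counter(category for _, category in draft_map)
for category in OutcomeCategory:
    print(f"{category.value}: {counts.get(category, 0)}")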


“So That” Chain Example: “Large Leaps” Approach to Policy Change*

Redefine issue/issue framing
So That
New actors are mobilized (new/unexpected allies, legislators, the public)
So That
Increased number of allies/partners
So That
Increased agreement about issue definition and need for change; increased salience and prioritization of the issue
So That
“Significant” changes in institutions; “significant” change in policy
So That
Changes in social and/or physical conditions

*From Pathways for Change: 6 Theories about How Policy Change Happens (2007). S. Stachowiak, Organizational Research Services. Available at: www.organizationalresearch.com.



Discuss your organization’s outcomes as they fit into the areas described in the table on pages 11‐13, and add relevant outcomes to the map below.

Exercise: Outcome Map Template

[Template diagram: “Strategies” shapes at the top, “Outcomes” rectangles in the middle (complete the chain from outcomes to goals), and a “Goal(s)” shape at the bottom.]


EXERCISE 5 Prepare to share, refine and/or adopt your theory of change outcome map.

Logic and relevance test: Once you have completed So-That chains or a draft outcome map, conduct a logic and relevance test by addressing the following questions:

- Do the strategies reflect aspects of your organization’s core work?
- Do short-term outcomes logically flow from identified strategies? Are the short-term outcomes appearing in the map the changes that are most likely to happen first?
- Does the sequence of outcomes flow logically? Can you reasonably expect that things will change as shown in the map?
- Are the outcomes realistic and reasonable? Does it seem logical to assert that the identified strategies will influence the outcomes shown in the map?
- Are the strategies and outcomes shown on the map meaningful and compelling to your target audience(s)? Are your target audience’s needs and interests sufficiently addressed?

If the answer to any of these questions is “No,” or if you are uncertain, it may be useful to review the steps in Parts Two and Three, as well as the Checkpoints (Getting Started Guide, page 4).



OUTCOME MAP EXAMPLES




[Outcome map example (diagram). Levels shown: Resources → Strategies → Intermediate Outcomes → Long-term Outcome → Impact/Goals.]



[Outcome map example (diagram), 2009-10/2010. Levels shown: Activities → Intermediate Outcomes → Long-term Outcome → Impact/Goals.]



Georgia Family Connection Partnership Theory of Change Outcome Map

[Diagram; this map repeats the Georgia Family Connection Partnership example summarized in the Guide above.]



Georgia Family Connection Partnership Theory of Change Outcome Map – Strategy 1

[Diagram. Resources: funding; staff and consultants; leadership and innovation; data, publications, CCBS; technology; collaborations and partnerships. Strategy: provide technical assistance, support and training. Activities: professional development and peer support; technical assistance; training; communications support. Short-term outcomes: improved collaborative processes and structures; increased quality of strategic plans; increased implementation and evaluation of plans; increased capacity to document progress. Intermediate-term outcomes: enhanced collaborative work; increased accountability; increased communication among partners; increased implementation of strategies and initiatives relevant to the needs of children and families. These lead (“So That”) to: increased organizational capacity to provide coordinated services to children and families; increased partners aligned to support systems change; increased public will/commitment to improve conditions for Georgia’s children and families; increased resources to improve conditions for Georgia’s children and families. Goal: healthy children; children ready to start school; children succeeding in school; stable, self-sufficient families; strong communities; so that conditions for children and families in Georgia are improved.]



Action for Children North Carolina Theory of Change Outcome Map

[Diagram. Values: sound operations; accountability; equity; public-private engagement. Resources: staff, fellows; leadership and innovation; office procedures and technology. Strategies: research; advocacy and lobbying; tools; communication; community engagement; budget and policy analysis. Short-term outcomes include: increase Action for Children’s capacity; build local capacity to address children’s issues; advocate for state policy that addresses children’s issues; increase the organization’s visibility; strengthen partnerships and alliances; increase availability and use of data and evidence in policymaking. Intermediate outcomes include: increase public and policymaker awareness of children’s issues; increase public and political will to address children’s issues; increase alignment and collaboration on identification of solutions and policymaking efforts. Long-term outcomes include: increase implementation of evidence-based practice and policies to address children’s issues; increase state and private investment to improve children’s outcomes; improve child-serving institutions uniformly across the state to improve conditions for children on the ground. Goal: all children (0 to 21) are healthy; all children (0 to 21) are safe in their homes, schools and communities; all children (0 to 21) have economic security; all children (0 to 21) are provided the opportunity and resources to succeed in their education.]



Organizational Research Services

1100 Olive Way, Suite 1500 | Seattle, WA 98101 | 206.728.0474 | www.organizationalresearch.com


Attachment C Outcome Thinking Glossary



Outcome Thinking: Glossary

Activities – Methods, techniques or strategies employed in carrying out a program.

Indicators – Observable and measurable evidence that the intended outcomes are being achieved; must be specific and able to be seen, heard or demonstrated.

Inputs – Resources needed to carry out a program’s activities; may include staff and volunteers, time/hours devoted to planning and implementing program activities, money, facilities, even the program participants.

Logic model – A conceptual framework that links a program’s inputs, activities, outputs, and outcomes to the desired end goal (or result) that an organization works to achieve in its community; provides a foundation for identifying key program elements as a basis for program assessment.

Outcome – The change in a program’s participants, or its community’s conditions, that is anticipated as a result of program activities; might include a change in knowledge, attitude, behavior, skills, or condition. Outcomes may be short-term or immediate (the direct result of program activities), or may be intermediate (achieved as a result of other short-term outcomes).

Outcome map – A visual representation of the assumptions inherent in a program’s theory of change, presenting the connections between a program’s activities and its short-term outcomes, intermediate outcomes, and the end goal (or result) that the organization seeks to achieve in its community.

Outcome thinking – A mindset or orientation that is focused on measuring the changes or impact that programs have on clients, rather than measuring or counting the services provided by the agency.

Outputs – Units of service, counts of activities, and/or products that a program delivers, and which are intended to lead to desired outcomes (e.g., number of brochures distributed or hours of tutoring provided).

Program assessment – A process by which a systematic examination of an agency’s program is conducted in order to answer questions about effectiveness and/or efficiency.

Results – End goal(s) that an agency strives to achieve in its community; these are often greater than what a single program can achieve, but a useful point of reference to guide program planning and evaluation.

Theory of change – Articulation of an organization’s assumptions about how its program activities will ultimately contribute to the long-term change (or results) it strives to achieve in the community at large. A theory of change can often best be defined by following a sequence of “so that...” explanations that connect activities to short-term outcomes, to intermediate and longer-term outcomes, to the desired end goal for the community.
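The glossary’s logic-model vocabulary maps naturally onto a small record type, which some groups find handy when keeping program definitions in a shared file. The sketch below is a hypothetical illustration, not part of the CRE material; the LogicModel class and the tutoring example are invented for the purpose.

# Illustrative sketch (not from the CRE material): the glossary's logic-model
# elements as a small data structure, linking inputs through to results.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list[str]        # resources needed to carry out activities
    activities: list[str]    # methods or strategies employed
    outputs: list[str]       # units of service or products delivered
    outcomes: list[str]      # changes in participants or conditions
    result: str              # end goal the organization works toward
    indicators: list[str] = field(default_factory=list)  # observable evidence

tutoring = LogicModel(
    inputs=["volunteer tutors", "classroom space"],
    activities=["weekly after-school tutoring"],
    outputs=["hours of tutoring provided"],
    outcomes=["improved reading skills"],
    result="children succeeding in school",
    indicators=["share of participants reading at grade level"],
)
print(tutoring.result)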


Outcome Thinking: Resources

Publications

The Web sites noted as a means to order publications are also very informative sources of further guidance and resources.

Measuring Program Outcomes: A Practical Approach (United Way of America, 1996) – Provides a comprehensive and user-friendly guide to tackling outcome measurement, with a more in-depth look at the concepts and frameworks of logic modeling, indicators and data collection. Includes guidance for how to assemble and organize people to undertake program assessment. Order by phone from Sales Service/America at 800-772-0008.

Outcomes for Success! (The Evaluation Forum, 2000) – A very solid guide and workbook for developing a theory of change and logic model. Includes exercises, many examples, and in-depth guidance for learning and understanding the concepts and applying them to any organization. Order by phone (206-269-0717) or online at www.evaluationforum.com.

Outcome Frameworks (The Rensselaerville Institute, 2004) – A theoretical overview of how outcomes measurement for not-for-profits has developed, along with many of the major frameworks that have been developed for program assessment/evaluation. Good background information to stimulate ideas for additional ways to assess programs. Order by phone (518-797-3783) or online at www.RInstitute.org.

Additional Web sites

The Evaluation Toolkit (W.K. Kellogg Foundation) – A wealth of resource materials on logic models and program evaluation, with downloadable handbooks in both English and Spanish. Includes guidance on how to develop an evaluation plan (next-level work for organizations pursuing this in a substantive way), as well as links to other resources that provide live examples of theories of change and logic models. http://www.wkkf.org/Programming/Overview.aspx?CID=281

Outcome Measurement Resource Network (United Way of America) – Background information generated in the United Way’s efforts to foster outcome measurement, and a large set of downloadable resources – articles, samples, guides, videos – to guide organizations undertaking outcome measurement. http://national.unitedway.org/outcomes/

This material was developed by Community Resource Exchange (CRE), a not-for-profit consulting group that provides strategic advice and technical services each year to more than 350 community-based organizations fighting poverty and HIV/AIDS. For over 25 years, CRE has been committed to providing these front-line community groups with the information, skills and leadership training they need to leverage their resources within their organizations and communities.




Advocacy Foundation Publishers
The e-Advocate Quarterly

Vol. I – 2015: The Fundamentals
I. The ComeUnity ReEngineering Project Initiative – Q-1 2015
II. The Adolescent Law Group – Q-2 2015
III. Landmark Cases in US Juvenile Justice (PA) – Q-3 2015
IV. The First Amendment Project – Q-4 2015

Vol. II – 2016: Strategic Development
V. The Fourth Amendment Project – Q-1 2016
VI. Landmark Cases in US Juvenile Justice (NJ) – Q-2 2016
VII. Youth Court – Q-3 2016
VIII. The Economic Consequences of Legal Decision-Making – Q-4 2016

Vol. III – 2017: Sustainability
IX. The Sixth Amendment Project – Q-1 2017
X. The Theological Foundations of US Law & Government – Q-2 2017
XI. The Eighth Amendment Project – Q-3 2017
XII. The EB-5 Investor Immigration Project* – Q-4 2017

Vol. IV – 2018: Collaboration
XIII. Strategic Planning – Q-1 2018
XIV. The Juvenile Justice Legislative Reform Initiative – Q-2 2018
XV. The Advocacy Foundation Coalition for Drug-Free Communities – Q-3 2018
XVI. Landmark Cases in US Juvenile Justice (GA) – Q-4 2018



Vol. V – 2019: Organizational Development
XVII. The Board of Directors – Q-1 2019
XVIII. The Inner Circle – Q-2 2019
XIX. Staff & Management – Q-3 2019
XX. Succession Planning – Q-4 2019
XXI. The Budget* – Bonus #1
XXII. Data-Driven Resource Allocation* – Bonus #2

Vol. VI – 2020: Missions
XXIII. Critical Thinking – Q-1 2020
XXIV. The Advocacy Foundation Endowments Initiative Project – Q-2 2020
XXV. International Labor Relations – Q-3 2020
XXVI. Immigration – Q-4 2020

Vol. VII – 2021: Community Engagement
XXVII. The 21st Century Charter Schools Initiative – Q-1 2021
XXVIII. The All-Sports Ministry @ ... – Q-2 2021
XXIX. Lobbying for Nonprofits – Q-3 2021
XXX. Advocacy Foundation Missions, Domestic – Q-4 2021
XXXI. Advocacy Foundation Missions, International – Bonus

Vol. VIII – 2022: ComeUnity ReEngineering
XXXII. The Creative & Fine Arts Ministry @ The Foundation – Q-1 2022
XXXIII. The Advisory Council & Committees – Q-2 2022
XXXIV. The Theological Origins of Contemporary Judicial Process – Q-3 2022
XXXV. The Second Chance Ministry @ ... – Q-4 2022

Vol. IX – 2023: Legal Reformation
XXXVI. The Fifth Amendment Project – Q-1 2023
XXXVII. The Judicial Re-Engineering Initiative – Q-2 2023
XXXVIII. The Inner-Cities Strategic Revitalization Initiative – Q-3 2023
XXXIX. Habeas Corpus – Q-4 2023



Vol. X – 2024: ComeUnity Development
XL. The Inner-City Strategic Revitalization Plan – Q-1 2024
XLI. The Mentoring Initiative – Q-2 2024
XLII. The Violence Prevention Framework – Q-3 2024
XLIII. The Fatherhood Initiative – Q-4 2024

Vol. XI – 2025: Public Interest
XLVIII. Public Interest Law – Q-1 2025
XLIX. Spiritual Resource Development – Q-2 2025
L. Nonprofit Confidentiality In The Age of Big Data – Q-3 2025
LI. Interpreting The Facts – Q-4 2025

Vol. XII – 2026: Poverty In America
LII. American Poverty In The New Millennium – Q-1 2026
LIII. Outcome-Directed Thinking – Q-2 2026
LIV. Servant Leadership – Q-3 2026
LV. ... – Q-4 2026

The e-Advocate Journal of Theological Jurisprudence

Vol. I – 2017
The Theological Origins of Contemporary Judicial Process
Scriptural Application to The Model Criminal Code
Scriptural Application for Tort Reform
Scriptural Application to Juvenile Justice Reformation

Vol. II – 2018
Scriptural Application for The Canons of Ethics
Scriptural Application to Contracts Reform & The Uniform Commercial Code
Scriptural Application to The Law of Property
Scriptural Application to The Law of Evidence



Legal Missions International

Vol. I – 2015
I. God’s Will and The 21st Century Democratic Process – Q-1 2015
II. The Community Engagement Strategy – Q-2 2015
III. Foreign Policy – Q-3 2015
IV. Public Interest Law in The New Millennium – Q-4 2015

Vol. II – 2016
V. Ethiopia – Q-1 2016
VI. Zimbabwe – Q-2 2016
VII. Jamaica – Q-3 2016
VIII. Brazil – Q-4 2016

Vol. III – 2017
IX. India – Q-1 2017
X. Suriname – Q-2 2017
XI. The Caribbean – Q-3 2017
XII. United States/Estados Unidos – Q-4 2017

Vol. IV – 2018
XIII. Cuba – Q-1 2018
XIV. Guinea – Q-2 2018
XV. Indonesia – Q-3 2018
XVI. Sri Lanka – Q-4 2018

Vol. V – 2019
XVII. Russia – Q-1 2019
XVIII. Australia – Q-2 2019
XIX. South Korea – Q-3 2019
XX. Puerto Rico – Q-4 2019



Vol. VI – 2020
XXI. Trinidad & Tobago – Q-1 2020
XXII. Egypt – Q-2 2020
XXIII. Sierra Leone – Q-3 2020
XXIV. South Africa – Q-4 2020
XXV. Israel – Bonus

Vol. VII – 2021
XXVI. Haiti – Q-1 2021
XXVII. Peru – Q-2 2021
XXVIII. Costa Rica – Q-3 2021
XXIX. China – Q-4 2021
XXX. Japan – Bonus

Vol. VIII – 2022
XXXI. Chile – Q-1 2022

The e-Advocate Juvenile Justice Report

Vol. I – Juvenile Delinquency in The US
Vol. II – The Prison Industrial Complex
Vol. III – Restorative/Transformative Justice
Vol. IV – The Sixth Amendment Right to The Effective Assistance of Counsel
Vol. V – The Theological Foundations of Juvenile Justice
Vol. VI – Collaborating to Eradicate Juvenile Delinquency



The e-Advocate Newsletter

2012 – Juvenile Delinquency in the US
Genesis of the Problem
Family Structure
Societal Influences
Evidence-Based Programming
Strengthening Assets v. Eliminating Deficits

2013 – Restorative Justice in the US
Introduction/Ideology/Key Values
Philosophy/Application & Practice
Expungement & Pardons
Pardons & Clemency
Examples/Best Practices

2014 – The Prison Industrial Complex
25% of the World’s Inmates Are In the US
The Economics of Prison Enterprise
The Federal Bureau of Prisons
The After-Effects of Incarceration/Individual/Societal

2015 – US Constitutional Issues In The New Millennium
The Fourth Amendment Project
The Sixth Amendment Project
The Eighth Amendment Project
The Adolescent Law Group

2016 – The Theological Law Firm Academy
The Theological Foundations of US Law & Government
The Economic Consequences of Legal Decision-Making
The Juvenile Justice Legislative Reform Initiative
The EB-5 International Investors Initiative

2017 – Organizational Development
The Board of Directors
The Inner Circle
Staff & Management
Succession Planning
Bonus #1: The Budget
Bonus #2: Data-Driven Resource Allocation

2018 – Sustainability
The Data-Driven Resource Allocation Process
The Quality Assurance Initiative
The Advocacy Foundation Endowments Initiative
The Community Engagement Strategy

2019 – Collaboration
Critical Thinking for Transformative Justice
International Labor Relations
Immigration
God’s Will & The 21st Century Democratic Process

2020 – Community Engagement
The Community Engagement Strategy
The 21st Century Charter Schools Initiative

Extras

The NonProfit Advisors Group Newsletters
The 501(c)(3) Acquisition Process
The Board of Directors
The Gladiator Mentality
Strategic Planning
Fundraising
501(c)(3) Reinstatements

The Collaborative US/International Newsletters
How You Think Is Everything
The Reciprocal Nature of Business Relationships
Accelerate Your Professional Development
The Competitive Nature of Grant Writing
Assessing The Risks



About The Author
John C (Jack) Johnson III
Founder & CEO

Having become disillusioned with the inner workings of the “Cradle-to-Prison” pipeline, former practicing attorney Jack Johnson set out in 2001 to help usher in fundamental changes in the area of Juvenile and Transformative Justice. Educated at Temple University in Philadelphia, Pennsylvania, and Rutgers Law School in Camden, New Jersey, Jack moved to Atlanta, Georgia to pursue greater opportunities to provide Advocacy and Preventive Programmatic services for at-risk/at-promise young persons, their families, and the Justice Professionals embedded in the Juvenile Justice process, in order to help facilitate its transcendence into the 21st Century.

There, along with a small group of community and faith-based professionals, he conceived “The Advocacy Foundation, Inc.” and implemented it over roughly a thirteen-year period. The organization was originally chartered as a Juvenile Delinquency Prevention and Educational Support Services organization consisting of Mentoring, Tutoring, Counseling, Character Development, Community Change Management, Practitioner Re-Education & Training, and a host of related components.

The Foundation’s Overarching Mission is “To help Individuals, Organizations, & Communities Achieve Their Full Potential.” It pursues that mission by implementing a wide array of evidence-based, proactive, multi-disciplinary “Restorative & Transformative Justice” programs and projects throughout the northeast, southeast, and western international-waters regions, providing prevention and support services to at-risk/at-promise youth, young adults, their families, and Social Service, Justice and Mental Health professionals everywhere. The Foundation has since relocated its headquarters to Philadelphia, Pennsylvania, and expanded to include a three-tier mission.

In addition to his work with the Foundation, Jack served as an Adjunct Professor of Law & Business at National-Louis University of Atlanta, where he taught Political Science, Business & Legal Ethics, Labor & Employment Relations, and Critical Thinking courses to undergraduate and graduate students. Jack has also served as Board President for a host of well-established and up-and-coming nonprofit organizations throughout the region, including “Visions Unlimited Community Development Systems, Inc.,” a multi-million dollar, award-winning Violence Prevention and Gang Intervention Social Service organization in Atlanta, and as Vice-Chair of the Georgia/Metropolitan Atlanta Violence Prevention Partnership, a statewide violence-prevention group of some 300 member organizations led by the Morehouse School of Medicine, Emory University, and the Atlanta-based Martin Luther King Center.

Attorney Johnson’s prior accomplishments span a wide array of professional legal practice areas, including Private Firm, Corporate and Government postings, nearly all of which yielded significant professional awards and accolades, the history and chronology of which are available for review online.

www.TheAdvocacyFoundation.org

Clayton County Youth Services Partnership, Inc. – Chair; Georgia Violence Prevention Partnership, Inc. – Vice Chair; Fayette County NAACP – Legal Redress Committee Chairman; Clayton County Fatherhood Initiative Partnership – Principal Investigator; Morehouse School of Medicine School of Community Health Feasibility Study – Steering Committee; Atlanta Violence Prevention Capacity Building Project – Project Partner; Clayton County Minister’s Conference – President, 2006-2007; Liberty In Life Ministries, Inc. – Board Secretary; Young Adults Talk, Inc. – Board of Directors; ROYAL, Inc. – Board of Directors; Temple University Alumni Association; Rutgers Law School Alumni Association; Sertoma International; Our Common Welfare Board of Directors – President, 2003-2005; River’s Edge Elementary School PTA – Co-President; Summerhill Community Ministries; Outstanding Young Men of America; Employee of the Year; Academic All-American – Basketball; Church Trustee.



www.TheAdvocacyFoundation.org


