Developing the Learning Door: A case study in youth participatory program planning


Evaluation and Program Planning 30 (2007) 55–65 www.elsevier.com/locate/evalprogplan

Justus J. Randolph, Pasi J. Eronen
Department of Computer Science, University of Joensuu, P.O. Box 111, FIN-80101, Joensuu, Finland
Received 23 November 2005; received in revised form 22 March 2006; accepted 23 June 2006

Abstract

This article presents the results of a case study in youth participatory program planning conducted in the context of a nonformal technology-education program in eastern Finland. The purpose of the program was to have youth, university, and business stakeholders work together to create the Learning Door, a door that would meet the needs of older people and people with disabilities. The participatory program planning process that was used involved clarifying the mission, roles, and modes of collaboration as well as creating stakeholder matrices, logic models, program plans, and implementation plans. It was found that the observed program planning process was similar to the intended planning process and that the process was well received by the planning participants. The lessons learned include clarifying the nature of collaboration before the program gets underway, reviewing program planning steps often, and making clear distinctions between logic models and implementation plans. © 2006 Elsevier Ltd. All rights reserved.

Keywords: Youth participatory evaluation; Youth participatory program planning; Technology education

Corresponding author: J.J. Randolph. Tel.: +358 13 251 7924; fax: +358 13 251 7955. E-mail address: justus.randolph@cs.joensuu.fi. doi:10.1016/j.evalprogplan.2006.06.004

Introduction

In 1989 the United Nations Convention on the Rights of the Child, which states that children and young people have civil, political, and economic rights, was ratified by all U.N. member states, excluding the United States. Accordingly, there has been increased interest in including youth as participants in development activities such as program planning, implementation, and evaluation (Rennenkamp, 2001; Sabo, 1999, 2003c; Smith, 2001). Although a "field in the making," the findings from research and practice in youth participatory evaluation (YPE) are promising. Sabo (2003b), in a special issue of New Directions for Evaluation, summarized five of the major findings in the dialogue about YPE:

YPE is important for youth development for two reasons. First, according to Goodyear, "it gives young people meaningful leadership opportunities and a skill base on which to build" (Youth Participation in Community Research and Evaluation, 2002). Second, YPE fundamentally changes relationships (youth to youth and youth to adult) and supports all participants to perform in advance of their current level of development (Sabo, 2001).

YPE is important for the field of evaluation and social research as a whole in several ways. First, it is valuable because evaluation "holds the potential to be a launching point of democratic dialogue. Involving youth is one way of changing the definition of evaluation and modeling a process by which humanity may be brought back into the practice of evaluation" (Youth Participation in Community Research and Evaluation, 2002). Second, youth are better able to collect data from other youth: "Youth can blend into programs, see everything, and gain the trust of other youth easier than can adults" (p. 5), and therefore, data might be more valid and reliable (Youth Participation in Community Research and Evaluation, 2002).

YPE is positive for programs and organizations that serve youth. Because young people are involved in the process of evaluation, they can use the data to change



their program according to their needs. "Who knows what youth want more than youth?" (Youth Participation in Community Research and Evaluation, 2002, p. 5).

YPE improves the quality of life within communities in several ways. First, it changes attitudes and relationships to young people and increases social capital (Chawla, 2001). Second, young people are oppressed, and the process of engaging in evaluation is empowering because "organized knowledge is a form of power" (Youth Participation in Community Research and Evaluation, 2002, p. 5). Participation in decision making, particularly in environments that directly affect young people, is a fundamental right (Chawla, 2001). (pp. 6–7)

The forms that YPE can take are numerous. They include intergenerational practical participatory evaluation (Greer & Martinez, 2001; Lau, Netherland, & Haywood, 2003; London, Zimmerman, & Erbstein, 2003; Quintanilla & Packard, 2002; Zimmerman & Erbstein, 1999), youth transformative participatory evaluation (Barnes & Espinoza, 1999; Hart, 1997; Hart & Rajbhandary, 2003), youth councils (Mathews, 2001), youth–adult conferences (Voakes, 2003), funder–youth participation (Gilden, 2003), intergenerational design of technology (Druin, 1999; Guha et al., 2004, 2005; Knudtzon et al., 2003), and many others. In this case, we used a participatory program planning process whereby youths, university students, and a representative from an international security company worked together to plan a program to design a "Learning Door"—an adaptive, electronically operated door that would meet the needs of older people and people with disabilities.
The primary purposes of this article are to (a) document the process and content of the participatory program planning that occurred, (b) compare and contrast the intended and observed participatory program planning processes, (c) infer why the observed participatory program planning process did or did not go as planned, and (d) present recommendations for improving the participatory program planning process that was used. We hope that the lessons learned here will benefit other practitioners in the field of youth participatory program planning.

This case study report is organized according to Fishman and Neigher's guidelines for "publishing systematic, pragmatic case studies in program evaluation" (Fishman & Neigher, 2003, 2004a, 2004b). We begin with a description of the organizational site, the context of the case, the program that was evaluated, and the theoretical model guiding the program and the program planning process. We then briefly discuss the program results and present the findings from a pattern-matching analysis between the intended and observed participatory program planning models. We end with a summary of the lessons that were learned.

This case study report, which serves as the narrative hub of this case study, is supported by several appendices available in Elsevier's on-line version of this paper. The case study protocol is included in Appendix A; it describes in detail the case study's rationale, study design, and study procedures. Appendix B is the case study coding book. Appendix C provides evidence tables that link our claims to sections of documents in the case study database; Appendix C also provides estimates of interrater reliability for the various coding categories. Appendix D provides a description of the documents included in the case study database.

The case study methodology used here followed the guidelines of Yin (2001). In short, the findings in this case study were based on data collected from documents made by program planning participants, from meeting logs, and from interviews. See Appendix A for detailed information about the case study methodology, including its study design, study questions, plans for linking data to propositions, criteria for interpreting findings, data collection procedures, methods of data analysis, and information about reliability and validity.

In the context of this case study, the term youth participatory program planning is used instead of the more general term YPE since program planning, not evaluation, was done. (By program planning we mean the process whereby a program is planned, which includes creating a priori plans for evaluating the program. By evaluation we mean the process whereby the evaluation plans are actually carried out.)

1. Organizational site and context of the case

This case study was set within the Kids' Club program of the Educational Research Group (Sutinen, 2006) at the Department of Computer Science, University of Joensuu. (Joensuu is a city in eastern Finland.)
The Learning Door program, one of the programs subsumed by the Kids' Club program, was partly sponsored by ASSA Abloy [hereafter Abloy] (Abloy, n.d.), an international security company with a large presence in Joensuu. According to the Kids' Club website,

Kids' Club is a combined research laboratory and technology club, where children of age 10–17 work together with university students and researchers of Computer Science [Educational Technology]. Altogether 20 children meet at the laboratory of educational technology twice a month for learning about technology and having fun with it. The reason for launching the Kids' Club project at the Department of Computer Science (CS) in the University of Joensuu, in fall 2001, was to arouse young pupils' interest in information and communication technology (ICT) and help them to become active doers of the future society based on information technology. From the research point of view we aim at developing novel



ICT-related tools and methods for learning and teaching. (Virnes, 2006, n.p.)

Kids' Club was modeled after technology education programs in the United States, "where children interested in academically flavored topics are encouraged to join clubs run on university campuses… [where] children have an opportunity to study skills of their interests in a playful, nonschool like environment, [and] where there is room for innovative ideation and alternative approaches" (Eronen, Sutinen, Vesisenaho, & Virnes, 2002, p. 61). Kids' Club might be considered what Norland (2005) calls a "nonformal" educational program. Typical activities in Kids' Club include building and programming robots, model making, creating applications for mobile phones, movie making, and, for older youth-participants, collaborating on design projects with local businesses. For more information about Kids' Club see Eronen et al. (2002); Eronen, Jormanainen, Sutinen, and Virnes (2005a, 2005b); or Virnes (2006).

Abloy, according to their website, is "one of the leading manufacturers of locks, locking systems and architectural hardware and the world's leading developer of products in the field of electromagnetic locking technology" (Abloy, n.d.). Abloy is one of the largest employers in Joensuu and has a history of collaboration with educational organizations both within Finland and abroad (see, e.g., Public Risk Management Association, 2002).

The primary participants in the program were six adult-participants (a planning facilitator, a program manager, and four Kids' Club tutors) and five youth-participants, who were students at Joensuu middle schools. (Kids' Club personnel use the word tutor to refer to adults who supervise the educational activities of youth-participants; the youth-participants in this program used the term makers to refer to themselves.) The first author of this case study was the program planning facilitator; the second author was the program manager.

The planning facilitator, a Ph.D. candidate in an education research and evaluation program, was in charge of carrying out the program planning part of the Learning Door program. The program manager, in addition to having


tutoring tasks, was in charge of writing the funding proposal to Abloy and of the overall implementation and administration of the program. There were several Kids' Club tutors present off and on during the program planning sessions; their time was split between this program and other Kids' Club programs. The Kids' Club tutors were computer science graduate students whose research related to educational technology. The planning facilitator and Kids' Club tutors volunteered their time to the Learning Door program in exchange for the opportunity to gain research and development experience.

The five youth-participants involved in the Learning Door program were male Finnish teens (aged 13–15) who had been active, volunteer members of Kids' Club for several years. Like other youth-participants in Kids' Club, these youths attended the program, after school, two Friday afternoons per month. Through their participation in Kids' Club, this particular group of youth-participants had won international awards for designing and programming soccer-playing robots (RoboCup Junior, 2006) and had received recognition for participating in the design of an electronic door (Kostamo-Hirvola, 2005). According to the program manager, these youth-participants were motivated to take part in the Kids' Club program because it enabled them to pursue their technologically oriented interests to a much higher degree than was possible in their public schools.

An Abloy representative had intended to regularly attend the program planning sessions; however, the representative was absent from the sessions because of an extended illness. The Abloy representative did, however, stay in regular contact with the program manager during the planning process. Had the Abloy representative been able to attend program planning sessions, the representative's role in the sessions would have been (a) to provide ongoing input about what Abloy expected from the Learning Door and (b) to provide additional support to the youth-participants.

It was planned that older people and people with disabilities would also take part in the program planning process and in the development of the Learning Door prototype. However, because of time and resource

Table 1
The stakeholder matrix that the youth-participants created

Stakeholder | How are they affected by the program? | What do they want from the program? | What are the benefits and risks to stakeholders?
Makers (youth-participants) | Learning | Money, fame | Failure, tutors are disappointed
Users (older people and people with disabilities) | They get help | They want their life to be easier | If the Learning Door does not work as it should, it could harm them
Abloy | Innovations | New product ideas, stimulation | Can lose money
Product | Gets created, advances | Wants to be good | Gets spread around the world, might not get finished
Tutors | Get power and learn | Salary, scientific results, experience, fame, Nobel prize | No results, lose job



constraints, unfortunately, these critical groups of end-user stakeholders were not actually included in the process. They were, however, taken into consideration as stakeholders (see the stakeholder matrix in Table 1). All but one of the program participants were native Finnish speakers; however, the program planning process was conducted in English because it was a language in which all of the program participants were fluent.

2. Program to be planned for: Learning Door

The Learning Door program was a follow-up to a similar program, called Intelligent Door (Eronen et al., 2005a), in which Kids' Club participants designed a technology-enabled door that was commissioned by Abloy. (The Intelligent Door was designed to allow, or disallow, certain users to gain entry to a room at certain times, based on preset user profiles.) The Intelligent Door program was completed in 2003 and, based on its success, Abloy solicited a proposal for a follow-up program.

The Learning Door program's youth-participants and adult-participants collaboratively decided that the main activity of the Learning Door program should be to design a prototype of an adaptive, automatic door that would expand on the functionalities of the Intelligent Door. Not only would the Learning Door be designed to grant or deny access to certain users at certain times, as the Intelligent Door did, it would also be able to remotely identify users and to adapt some of its functions, such as how long the door would stay open, based on previous user behaviors. For example, if a certain user had a disability and tended to take longer to open a door and enter a room than the average user, the Learning Door would "learn" how long the door should stay open and how long before an "open door" alarm should be signaled.

In order to get funding from Abloy for the Learning Door program, the Kids' Club tutors and youth-participants needed to develop detailed program plans.
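The per-user adaptation described above can be sketched in code. The following Python sketch is ours, not from the article: the class, parameter values, and update rule (an exponential moving average of each user's observed entry durations) are illustrative assumptions about how such "learning" might work, not the prototype's actual implementation.

```python
# Hypothetical sketch of adaptive hold-open timing for a "Learning Door".
# All names and parameter values are illustrative assumptions.

class LearningDoorTimer:
    def __init__(self, default_hold=5.0, alpha=0.3, margin=1.5):
        self.default_hold = default_hold  # seconds for unknown users
        self.alpha = alpha                # weight given to each new observation
        self.margin = margin              # safety factor over the learned average
        self.learned = {}                 # user_id -> smoothed entry duration

    def record_entry(self, user_id, observed_seconds):
        """Update the learned entry duration for a user after each passage."""
        prev = self.learned.get(user_id)
        if prev is None:
            self.learned[user_id] = observed_seconds
        else:
            # Exponential moving average: recent passages count more.
            self.learned[user_id] = (1 - self.alpha) * prev + self.alpha * observed_seconds

    def hold_time(self, user_id):
        """How long the door should stay open for this user."""
        prev = self.learned.get(user_id)
        if prev is None:
            return self.default_hold
        return prev * self.margin


timer = LearningDoorTimer()
timer.record_entry("user-42", 8.0)   # a user who needs more time than average
timer.record_entry("user-42", 10.0)
print(timer.hold_time("user-42"))    # longer than the 5.0 s default
print(timer.hold_time("user-7"))     # unknown user: falls back to the default
```

The same learned duration could also drive when the "open door" alarm is signaled, e.g., by alarming only after the adapted hold time has elapsed.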

Fig. 1. Synthesized logic model for the Learning Door program.



(Learning Door was partly funded by the Kids' Club program too.) Although the program manager did the actual writing of the grant proposal to Abloy, the proposal was based almost entirely on the program plans made by the youth-participants.

3. Theoretical model guiding the program

Fig. 1 illustrates the theoretical model guiding the program, which is a synthesis of the youths' and tutors' logic models. The logic model in Fig. 1 plots the theoretical, causal steps that would have to occur for the program to meet the needs of each of the identified stakeholder groups. Basically, the logic was that proposal planning would lead to funding for the production of the Learning Door, which, through a series of causal events, eventually would lead to the program's meeting the needs of the various stakeholder groups. The steps in the boxes in bold were considered to be the responsibility of the Learning Door program. The steps in the boxes not in bold were not the responsibility of the Learning Door program, but were expected to come about as a result of the program.

4. Theoretical model guiding the program planning process

The conceptual model for how the program planning was expected to be carried out is presented in Fig. 2. The theoretical model, which the authors had planned a priori, was a variation of the models described in Randolph and Eronen (2004) and Randolph, Virnes, and Eronen (2005).

The first step in the theoretical model is to have the group come to an agreement about the mission, each program planning participant's role, and the mode of collaboration that will be adopted. To arrive at the mission statement, the program facilitator explains to the group (i.e., to the adult-participants and the youth-participants) what a mission statement is, provides examples of mission statements, and facilitates a discussion in which the group arrives at a mission statement by consensus. To determine each participant's role in the program, the group generates a matrix in which labels for the participants are listed in columns and participants' responsibilities are listed in rows. To decide how collaboration should take place, the facilitator presents different modes of collaboration

Fig. 2. The expected youth participatory planning model: Clarifying mission, roles, & collaboration → Creating a stakeholder matrix → Creating logic models → Creating program plans → Creating implementation plans → Creating a proposal.


and the group decides, by consensus, which mode will be adopted. In this case, for example, it was decided whether (a) the tutors, in addition to the youth-participants, would take part in the actual planning of the program (i.e., a codesign mode of collaboration); (b) the tutors would serve only as supports for the youth-participants, who would do all of the actual planning and would have final decision-making power in terms of the program planning and design processes (i.e., an empowerment mode of collaboration); or (c) the tutors would simply tell the youth-participants what to design and how to design it (i.e., an authoritarian mode of collaboration).

The next step in the theoretical model is to complete a stakeholder matrix, which has the names of stakeholder groups in rows and key stakeholder questions in columns. Some examples of key stakeholder questions, adapted from Preskill and Torres (1999), are listed below:

1. How are the stakeholders affected by the program?
2. What do the stakeholders want from the program?
3. What are the benefits and risks to the stakeholders?

See Table 1 for an example of a stakeholder matrix.

The following step is to have each program participant create a logic model, "an explanation of the causal links that tie program inputs to expected program outputs" (Weiss, 1998, p. 55). The planning facilitator then synthesizes each participant's logic model into a single logic model. (We use the term logic model synonymously with what others call a program theory.) The planning facilitator synthesizes the model by keeping only the elements that two or more participants' logic models have in common, presenting the synthesized logic model to the group for member checking, and then revising the group's logic model until consensus is reached. See Fig. 1 for an example of a synthesized logic model, the group logic model for the Learning Door program. See Harris (2001) or Monroe et al. (2005) for practical information about creating logic models for nonformal participatory programs.

The next step is to fill out a program plan for each of the logic model steps that the program is responsible for. The program plan consists of a matrix that has the objectives for the step in the logic model in rows and measures, baselines, targets, and activities in columns; see Randolph and Eronen (2004) for more details about creating program plans.

The following step is to create one or more implementation plans based on an implementation theory, "a theory about what is required to translate objectives into ongoing service delivery and program implementation" (Weiss, 1998, p. 58). The implementation plan is a description of how the activities in the program plans will be carried out. According to Weiss (1998), "the assumption [behind an implementation theory] is that if the activities are conducted as planned, the desired results will be



forthcoming… Implementation theory does not deal with the processes that mediate between program services and achievement of program goals but focuses on the delivery of program services" (p. 58).

In this particular case, the final step of the theoretical model guiding the program planning process involved creating a grant proposal to secure funding to implement program activities. The proposal was written by the program manager.

5. Program results

In August of 2004, after four program planning sessions, each of which lasted two and one-half hours, the program plans/proposal for the Learning Door were presented to Abloy, which subsequently agreed to fund the program. The program plans were implemented over the course of the next 10 months. In June of 2005, the Learning Door participants presented a working prototype of the Learning Door to Abloy. As planned, the prototype was able to recognize, by way of a radio frequency ID pin, who was approaching the door, and it was able to adapt to a user's patterns, based on past behavior. An article that described the Learning Door prototype, and the Learning Door program in general, was published in the Abloy employee magazine (Kostamo-Hirvola, 2005).

Concerning the actual development of the Learning Door prototype, the program manager mentioned that development had been slower than planned because the youth-participants and tutors had begun to tire of the program, since this was the second "door" program in 2 years. Two of the five students had gradually quit attending the program after the construction of the Learning Door prototype got underway.

6. Observed results of the program planning model

Fig. 3 illustrates how the program planning model was actually carried out. Fig. 3, the observed model, is similar to Fig. 2, the expected model, except that the mode of collaboration that was agreed upon changed over time and the participants created implementation plans at the same time as they created logic models. In the rest of this section, we describe what occurred in each step of the model and the differences between the process that was intended and the process that actually occurred.

6.1. Clarifying mission, roles, and modes of collaboration

The mission statement that the group arrived at was to create a good Learning Door that meets the needs of older people and the needs of people with disabilities. It was relatively easy for the group to come to a consensus about the mission statement; however, clarifying the collaborative process, and keeping it clarified, turned out to be problematic. It was found that there were conflicting views of how collaboration was going to be conducted. Based on case study documents and interviews, three modes of collaboration came into play during the program planning process: the empowerment mode, the codesign mode, and the authoritarian mode. When the youth-participants were asked to draw an illustration of the collaboration that actually occurred between the youth-participants (i.e., the makers), the tutors, and Abloy, one youth-participant drew an illustration representing the empowerment mode, one drew an illustration representing the codesign mode, and one drew two illustrations representing the authoritarian mode. Two youth-participants did not draw an illustration.

In the empowerment mode, the tutors and Abloy are envisioned as forces that empower the students to create the Learning Door. The adults are guides during the planning and production process. The youths are the actual decision makers and hold responsibility for the success, or lack of success, of the Learning Door. Fig. 4 is a student's

Fig. 3. The observed youth participatory planning model (steps: Clarifying mission, roles, & collaboration; Creating a stakeholder matrix; Creating logic models; Creating program plans; Creating implementation plans; Creating a proposal).

Fig. 4. A youth-participant’s illustration of the empowerment mode of collaboration. Note. The ‘‘makers’’ in this figure are youth-participants.



Fig. 5. A youth-participant’s illustration of the codesign mode of collaboration. Note. The ‘‘makers’’ in this figure are youth-participants.

illustration of the empowerment mode. In Fig. 4, Abloy provides the input and resources for the program. The tutors also provide input to the makers (i.e., the youth-participants), but the makers are the ones who actually create the Learning Door.

In the codesign mode, the tutors, Abloy, and the youth-participants are on equal ground in terms of planning and creating the Learning Door. The decision-making responsibility is shared between the program participants. The codesign mode in this case could also be framed as what Druin (1999) calls cooperative inquiry. Fig. 5 is a student's illustration of the codesign mode. In this figure, three characters, labeled Abloy, Makers (i.e., the youth-participants), and Tutors, are on equal ground in the creation of the Learning Door. Note that the relative sizes of the characters in Fig. 5 differ even though they are on a relatively equal plane. The character labeled Abloy is the largest, the character labeled Tutors is the second largest, and the character labeled Makers is the smallest. We hypothesize that the sizes of the figures correspond with that youth-participant's perceptions of how much decision-making power the various types of program participants actually had. It could be argued that this illustration also hints at an authoritarian mode of collaboration.


In the authoritarian mode, the adult-participants in the program are perceived to be the decision makers; in this mode the adult-participants specify what the youth-participants are supposed to do and how they are supposed to do it. One youth-participant drew two comical illustrations of the authoritarian mode. In one of those illustrations, which is not shown here, one of the tutors was jokingly characterized as a high-ranking military figure giving orders to the youth-participants. During exit interviews, one of the tutors admitted that even though the participants had agreed on using the codesign and empowerment modes, it was still sometimes the tutor's responsibility to be authoritarian.

When each of the program participants was asked which mode of collaboration they espoused for the program, the answers varied between types of participants. The empowerment mode was espoused by the planning facilitator and the youth-participants. The codesign mode was espoused by the program manager and the tutors. No one agreed that the authoritarian mode should be used within the planning process; however, the tutors and the program manager agreed, in hindsight, that the authoritarian mode of collaboration sometimes needs to be used for issues outside the planning process.

In the clarifying mission, roles, and modes of collaboration stage of program planning, an agreement was originally made by the tutors, planning facilitator, and youth-participants that the empowerment mode would be used. However, under the direction of the program manager, the mode of collaboration issue was brought up again, and an agreement was made between the tutors and youth-participants that the stated mode of collaboration be changed from the empowerment mode to the codesign mode. The document that specified the participants' roles and the relationships between roles originally had the youth-participants doing "programming, graphics, planning, and reporting," the tutors acting as "organizers and supporters," and Abloy acting as "experts, funders, and evaluators." After the change, however, the tutors' roles also became "programming, graphics, planning, and reporting." The program manager explained that he desired the change so that there would be "equal collaboration between all participants on equal terms, like in a real-life design task."

Regardless of which mode of collaboration was ultimately used or which mode worked best in this case, it is clear that the agreement about how collaboration was to be carried out was changed midway through the planning process. That midprogram change in the mode of collaboration was a deviation from what was expected to happen.

6.2. Creating stakeholder matrices

The creation of the stakeholder matrix went as planned. Table 1 presents the actual stakeholder matrix that the youth-participants created. The various stakeholder groups that were identified were the makers (i.e., the youth-participants),



the users (i.e., older people and people with disabilities), Abloy (i.e., the funder), the product (i.e., the Learning Door), and the tutors. It is interesting to note that the youth-participants insisted on including the Learning Door itself, although not a living being, as a stakeholder.

6.3. Creating and synthesizing logic models

One deviation from what was expected was that the tutors and youth-participants had difficulty creating a logic model (i.e., a causal model of how the program is expected to lead to its desired outcomes) that was not an implementation plan (i.e., a plan for how the steps in the program are to be carried out). For example, Fig. 1, which is a synthesis of the youth-participants' and tutors' logic models, has elements of both an implementation plan and a logic model. Fig. 6 shows a comical logic model that was more or less typical of the logic models drawn by youth-participants. It is clearly an implementation plan that has elements of a causal model.

The tutors explained that the logic models were difficult for youth-participants because they were outside the context of what Finnish students are normally expected to do. They mentioned that flowcharts, on the other hand, are very common in the Finnish educational system and in computer science alike. The tutors mentioned that more time should have been spent on distinguishing a causal model from an implementation plan because causal modeling was fairly new to the youth-participants. The tutors also reported that it was difficult to create a logic model without more feedback from Abloy about the specific goals for the Learning Door. They also mentioned that it was important to explain carefully to the youth-participants that the stakeholders should be taken into consideration when creating the logic models.

Despite the difficulty of creating logic models, many youth-participants and tutors reported that creating logic models for the program was the best part of the program planning process. Also, the students and tutors both reported that the creating-logic-models step was the step at which they bought into the planning process.

6.4. Creating program plans

The youth-participants created a separate program plan for each of the boxes in bold in the logic model of Fig. 1. (The boxes in bold indicate which steps of the logic model the program would be responsible for effecting.) An example of one of the actual program plans, the one concerning production planning, is shown in Table 2. The tutors commented that the program plans should have been simplified so that there were not so many cells. They also mentioned that the concepts of baselines, measures, and targets were hard for the program participants, including themselves, to understand.

6.5. Creating an implementation plan

In this case, only one implementation plan was made, although one could have been made for each of the program plans. Fig. 7 shows the flow-chart-oriented implementation plan that was made for the program.

6.6. Overall results of the observed model

In terms of the program planning process, although the observed model deviated from the theoretical model in several ways, overall the program planning process was deemed to have been a success by the program planning participants.
The tutors reported that they would use the planning process again with slight modifications: they would simplify the process and reframe the model so that it would be nonlinear, more open to modification, and less time-consuming. The tutors also reported that they noticed that the youth-participants had bought into the planning process.

7. Lessons learned

Fig. 6. A youth-participant’s logic model for the Learning Door program.

We suggest that others who carry out similar youth participatory planning models be aware of the lessons that we have learned, which are listed below:



Table 2
A program plan created by youth-participants

Objective | Measure | Baseline | Target | Activity
Programming | 3rd party review | Old program (intelligent door) | Good and working program | Makers program with Visual Basic, Java, etc.
Graphics | 3rd party review | Old program (intelligent door) | Cool looking graphics | Create graphics with Microsoft Paint, maybe
Reporting | 3rd party review | No baseline | To make an understood and utilized report | The team will periodically write up results and make a presentation at the end
Physical design | 3rd party review | Intelligent door testing strategy | Better testing strategy | Makers test and 3rd party tests too
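Each row of the program plan in Table 2 pairs an objective with a measure, a baseline, a target, and an activity. A minimal sketch of that row structure as a record type is shown below; the class name, field comments, and the `ProgramPlanRow` type itself are ours for illustration and are not part of the original planning documents:

```python
from dataclasses import dataclass


@dataclass
class ProgramPlanRow:
    """One row of a program plan: what to achieve and how to check progress."""
    objective: str  # what the program aims to accomplish
    measure: str    # how progress toward the objective is assessed
    baseline: str   # the current state before the program
    target: str     # the desired state after the program
    activity: str   # what the makers will actually do

# The first row of Table 2, encoded for illustration:
programming = ProgramPlanRow(
    objective="Programming",
    measure="3rd party review",
    baseline="Old program (intelligent door)",
    target="Good and working program",
    activity="Makers program with Visual Basic, Java, etc.",
)

print(programming.target)
```

Encoding rows this way makes the tutors' complaint concrete: every row demands five cells, and participants can get bogged down filling in each one.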

1. Be sure that planning participants clearly understand the different modes of collaboration that are possible, so that they can make informed decisions about which mode is most appropriate for their program before the program gets well underway.
2. Realize that program planning might be out of the context of what youth-participants normally do, so extra guidance may be needed, at least in the first stages.
3. Spend extra time teaching the creation of logic models. At least in this case, this was the point at which youth-participants and tutors bought into the planning process.
4. Review and clarify mission goals, program steps, stakeholder needs, and the agreed-upon mode of collaboration often.
5. Simplify the model where appropriate. It is easy for program participants to get bogged down in details, like filling out every cell in a program-planning matrix.
6. Realize that a nonlinear approach to the development of logic models and to the program implementation process is not necessarily a hindrance to the program.
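Lesson 3 turns on the distinction that gave participants the most trouble: a logic model is a causal graph over outcomes, whereas an implementation plan is an ordered sequence of steps. A minimal sketch of the two structures follows; the node names and step names are illustrative (loosely based on the program's proposal-funding-production flow), not taken verbatim from the actual Learning Door documents:

```python
# A logic model is a causal graph: an edge means "is expected to lead to."
logic_model = {
    "youth learn design skills": ["working Learning Door prototype"],
    "working Learning Door prototype": ["door meets needs of older users"],
}

# An implementation plan is an ordered list of steps to carry out.
implementation_plan = [
    "write proposal",
    "secure funding",
    "plan production",
    "produce and test",
    "present results",
]


def outcomes_reachable(model, start):
    """All downstream outcomes the model claims `start` contributes to."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in model.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


print(outcomes_reachable(logic_model, "youth learn design skills"))
```

The difference is structural: the plan answers "what do we do next?", while the graph answers "what do we expect this step to cause?", which is why a flowchart, however familiar, is not a logic model.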

8. Conclusion

Fig. 7. The youth-participants' implementation plan for the Learning Door program. (The flow chart runs: proposal writing → Abloy funding or other funding (approved / not approved) → production planning → Abloy reviews (approved / not approved) → production and testing, based on the production plan → presentation → Abloy reviews (approved / not approved) → new project.)

In summary, we conducted a case study in youth participatory program planning in the context of a nonformal technology-education program in eastern Finland. The goal of the program was to create a Learning Door that would meet the needs of older people and people with disabilities. The planning process went more or less as intended, except that the collaborative mode changed from the empowerment mode to the codesign mode during the middle of the program and that, in general, implementation plans were made in place of logic models. Tutors and youth-participants seemed to have bought into the program, especially after creating logic models. The tutors reported that, if they were to use the program-planning model again, they would spend more time clarifying the model and would make the model less linear and more dynamic.



The results of this program parallel the results of other youth participatory programs. For example, they confirm Sabo's finding that youth who are given the proper training, motivation, and responsibility are capable of competently performing complex program development tasks (Sabo, 2003a), such as program implementation, evaluation, and planning. In this case, youth-participants, with adult mentoring, were able to do sophisticated program planning tasks (creating logic models, program plans, and implementation plans) for a program whose mission was to create a Learning Door for older people and people with disabilities.

Acknowledgements

This research was supported in part by a special programs grant from the Association for Computing Machinery's Special Interest Group on Computer Science Education and by Abloy. We would like to thank the Kids' Club tutors (Osku Kannusmäki, Ilkka Jormanainen, Mauri Heinonen, and Marjo Virnes) and the youth-participants in the Learning Door program for their participation in and contributions to this case study.

Appendix A. Supplementary materials

Supplementary data associated with this article can be found in the online version at doi:10.1016/j.evalprogplan.2006.06.004.

References

Abloy. (n.d.). Abloy: Leading the way in worldwide security. Retrieved May 10, 2005 from http://www.abloy.com/.
Barnes, R., & Espinoza, K. (1999). The juvenile justice evaluation project. Evaluation Exchange, 5(1), 5.
Chawla, L. (2001). Evaluating children's participation: Seeking areas of consensus. PLA Notes. Children's participation: Evaluating effectiveness. London: International Institute for Environment and Development.
Druin, A. (1999). Cooperative inquiry: Developing new technologies for children with children. In Proceedings of the SIGCHI conference on human factors in computing systems: The CHI is the limit (pp. 592–599). New York: ACM Press.
Eronen, P. J., Jormanainen, I., Sutinen, E., & Virnes, M. (2005a). A Kids' Club model for innovation creation between business life and school students: The Intelligent Door project. In Proceedings of the 5th IEEE international conference on advanced learning technologies (ICALT 2005) (pp. 30–32). Los Alamitos, CA: IEEE Computer Society Press.
Eronen, P. J., Jormanainen, I., Sutinen, E., & Virnes, M. (2005b). Kids' Club reborn: Evolution of activities. In Proceedings of the 5th IEEE international conference on advanced learning technologies (ICALT 2005) (pp. 545–547). Los Alamitos, CA: IEEE Computer Society Press.
Eronen, P. J., Sutinen, E., Vesisenaho, M., & Virnes, M. (2002). Kids' Club as an ICT-based learning laboratory. Informatics in Education, 1(1), 61–72.
Fishman, D. B., & Neigher, W. D. (2003). Publishing systematic, pragmatic case studies in program evaluation: Rationale and introduction to the special issue. Evaluation and Program Planning, 26, 421–428.
Fishman, D. B., & Neigher, W. D. (2004a). Publishing systematic, pragmatic case studies in program evaluation: Collateral on a "promissory note". Evaluation and Program Planning, 27, 105–113.
Fishman, D. B., & Neigher, W. D. (2004b). Publishing systematic, pragmatic case studies in program evaluation: Rationale and introduction to part II of the special section. Evaluation and Program Planning, 27, 59–63.
Gilden, B. L. (2003). All Stars Talent Show Network: Grassroots funding, community building, and participatory evaluation. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 77–86). New York: Jossey-Bass.
Greer, L. C., & Martinez, T. (2001). All for our future. Evaluation Exchange, 7(2), 12.
Guha, M. L., Druin, A., Chipman, G., Fails, J. A., Simms, S., & Farber, A. (2004). Mixing ideas: A new technique for working with young children as design partners. In Proceedings of the 2004 conference on interaction design and children: Building a community (pp. 35–42). New York: ACM Press.
Guha, M. L., Druin, A., Chipman, G., Fails, J. A., Simms, S., & Farber, A. (2005). Working with young children as technology design partners. Communications of the ACM, 48(1), 39–42.
Harris, J. (2001). Logic models in real life: After school at the YWCA of Asheville. Evaluation Exchange, 7(2), 13–14.
Hart, R. (1997). Children's participation: The theory and practice of involving young citizens in community development and environmental care. New York: Earthscan.
Hart, R., & Rajbhandary, J. (2003). Using participatory methods to further the democratic goals of children's organizations. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 61–76). New York: Jossey-Bass.
Kostamo-Hirvola, E. (2005). Kids' Club esitteli oppivan oven [Kids' Club presented the Learning Door]. Abloy: Abloy Oy:n Henkilöstölehti [Abloy: The magazine for Abloy employees] (p. 16). Joensuu, Finland: Abloy.
Knudtzon, K., Druin, A., Kaplan, N., Summers, K., Chisik, Y., Kulkarni, R., et al. (2003). Starting an intergenerational design team: A case study. In Proceedings of the 2003 conference on interaction design and children (pp. 51–58). New York: ACM Press.
Lau, G., Netherland, N. H., & Haywood, M. L. (2003). Collaborating on evaluation for youth development. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 47–60). New York: Jossey-Bass.
London, J. K., Zimmerman, K., & Erbstein, N. (2003). Youth-led research and evaluation: Tools for youth, organizational and community development. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 33–46). New York: Jossey-Bass.
Mathews, H. (2001). Citizenship, youth councils and young people's participation. Journal of Youth Studies, 4(3), 299–318.
Monroe, M. C., Fleming, M. L., Bowman, R. A., Zimmer, J. F., Marcinkowski, T., Washburn, J., et al. (2005). Evaluators as educators: Articulating program theory and building evaluation capacity. In E. Norland, & C. Somers (Eds.), Evaluating nonformal education programs and settings. New directions for evaluation, Vol. 98 (pp. 57–71). New York: Jossey-Bass.
Norland, E. (2005). The nuances of being "non": Evaluating nonformal education programs and settings. In E. Norland, & C. Somers (Eds.), Evaluating nonformal education programs and settings. New directions for evaluation, Vol. 98 (pp. 5–12). New York: Jossey-Bass.
Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Public Risk Management Association (2002). School safety at the forefront of a new national campaign [press release]. Retrieved May 4, 2005 from http://www.primacentral.org/news/1002/0000-1002-08.php.
Quintanilla, G., & Packard, T. (2002). A participatory evaluation of an inner-city science enrichment program. Evaluation and Program Planning, 25, 15–22.


Randolph, J. J., & Eronen, P. J. (2004). Program and evaluation planning lite: Planning in the real world. Paper presented at Kasvatustieteen Päivät 2004 [Finnish Education Research Days 2004], Joensuu, Finland, November 25–26, 2004 (ERIC Document Reproduction Service No. ED490461).
Randolph, J. J., Virnes, M., & Eronen, P. J. (2005). A model for designing and evaluating teacher training programs in technology education. In J.-P. Courtiat, C. Davarakis, & T. Villemur (Eds.), Technology enhanced learning (pp. 69–79). New York: Springer.
Rennenkamp, R. A. (2001). Youth as partners in program evaluation. Retrieved May 2, 2005 from http://danr.ucop.edu/eee_aea/AEA_Hear_It_From_The_Board_2001.pdf.
RoboCup Junior (2006). RoboCup Junior official website. Retrieved March 10, 2006 from http://www.robocupjunior.org/.
Sabo, K. (1999). Young people's involvement in evaluating programs that serve them. Dissertation Abstracts International, 60(4-B), 1918 (UMI No. 9924845).
Sabo, K. (2001). The benefits of participatory evaluation for children and youth. PLA Notes. London: International Institute for Environment and Development.
Sabo, K. (2003a). A Vygotskian perspective on youth participatory evaluation. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 13–24). New York: Jossey-Bass.


Sabo, K. (2003b). Editor's notes. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 1–12). New York: Jossey-Bass.
Sabo, K. (Ed.) (2003c). Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98. New York: Jossey-Bass.
Smith, J. C. (2001). Pizza, transportation and transformation: Youth involvement in evaluation and research. Evaluation Exchange, 7(2), 10–11.
Sutinen, E. (2006). Educational technology in the Department of Computer Science: Ed Tech. Retrieved March 10, 2006 from http://cs.joensuu.fi/edtech/index.php.
Virnes, M. (2006). Welcome to the Kids' Club webpages. Retrieved March 6, 2006 from http://cs.joensuu.fi/kidsclub/.
Voakes, L. (2003). Listening to the experts. In K. Sabo (Ed.), Youth participatory evaluation: A field in the making. New directions for evaluation, Vol. 98 (pp. 25–32). New York: Jossey-Bass.
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Yin, R. K. (2001). Case study research: Design and methods. London: Sage.
Youth Participation in Community Research and Evaluation (2002). Symposium proceedings. Wingspread, WI.
Zimmerman, K., & Erbstein, N. (1999). Youth empowerment evaluation. Evaluation Exchange, 5(1), 4.

