
The Logic Model: A Tool for Incorporating Theory in Development and Evaluation of Programs

Riki Savaya, PhD
Mark Waysman, PhD

ABSTRACT. The role of program theory in developing, operating and evaluating programs is gaining increased emphasis in recent years. A clear grasp of a program's theory can contribute to the successful performance of many important developmental and evaluative tasks along its life span. Although there is growing recognition of the importance of program theory in the development and evaluation of programs, it should be noted that this is not a simple task. Programs are often complex, comprising many different types of interlinking components. What is needed is a relatively simple instrument that can help the practitioner explicate and present program theory, by guiding and structuring the process. The logic model is such a tool, whose purpose is to describe and articulate program theory. Drawing upon examples from the authors' work with both nonprofit and governmental organizations, this paper presents potential uses of the logic model tool in explicating program theory for a variety of purposes throughout the life span of programs: for assessing the feasibility of proposed programs and their readiness for evaluation, for program development, for developing performance monitoring systems, and for building knowledge. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <[email protected]> Website: <http://www.HaworthPress.com> © 2005 by The Haworth Press, Inc. All rights reserved.]

Riki Savaya is affiliated with the Bob Shapell School of Social Work, Tel Aviv University, Israel, and the Center for Evaluation of Human Services, Rishon Lezion, Israel. Mark Waysman is affiliated with the Center for Evaluation of Human Services, Rishon Lezion, Israel.

Address correspondence to: Dr. Riki Savaya, Bob Shapell School of Social Work, Tel Aviv University, P.O.B. 39040, Ramat Aviv 69978, Israel (E-mail: [email protected]).

This article is based on a paper presented at the International Conference on Evaluation for Practice, University of Huddersfield, England, 12-14 July, 2000.

Administration in Social Work, Vol. 29(2) 2005. http://www.haworthpress.com/web/ASW

© 2005 by The Haworth Press, Inc. All rights reserved. Digital Object Identifier: 10.1300/J147v29n02_06

KEYWORDS. Logic model, program theory, program evaluation, performance monitoring systems

Over the last decades, the literature on programs and interventions in the field of social work reflects a growing emphasis on incorporating theoretical frameworks in practice and in program evaluation (Alter & Egan, 1997; Alter & Murty, 1997; Mizrahi, 1992; Reid, 1994; Rosen, 1993). This implies a clear definition of the population, problems and outcomes that are the focus of any program, a clear presentation of the theoretical assumptions that guide the choice of intervention, and systematic assessment of effectiveness. Current professional standards for judging the quality of a program and assessing its readiness for dissemination thus require a high level of clarity in the conceptualization and presentation of programs.

Concurrently, literature in the field of program evaluation shows a growing emphasis on the role of program theory in developing, operating and evaluating programs (Bickman, 1987, 1990; Chen, 1990; Weiss, 1997). Wholey (1987) states that program theory identifies "program resources, program activities, and intended program outcomes, and specifies a chain of causal assumptions linking program resources, activities, intermediate outcomes, and ultimate goals" (p. 88). As such, program theory lays out the assumptions that are held regarding the transformative mechanisms that are considered to be active in creating change and growth (Lipsey, 1987; Pawson & Tilley, 1997).

Several authors have pointed to the fact that, in many cases, program theories exist but are never made explicit (Cook, 2000; Leeuw, 2003). Schon (1983, 1995), for example, points out that even experienced practitioners may well be unable to transcend their concrete actions and translate their tacit knowledge into abstract concepts that may be examined and applied beyond their own setting. Although their activities may reveal an inner logic, in which the outcome of one action sets in motion a process of reflection which prompts the next action, they may well be unable to explain the connections. The knowledge they demonstrate in their actions remains implicit. The series of activities that unfolded on the basis of their tacit considerations and knowledge is difficult to transfer to others, so that they can learn from them, and difficult to replicate elsewhere.

Thus, while there is growing recognition of the importance of program theory in the development and evaluation of programs, and there is often an expectation that theory be addressed, this is clearly not a simple task. Programs are often complex, comprising many different types of components. Attempting to integrate them all into a single coherent framework may prove daunting.

As such, what is needed is a relatively simple instrument that can help the practitioner by guiding and structuring the process. The logic model framework is an ideal tool for this purpose. The logic model is a diagram that describes "how the program theoretically works to achieve benefits for participants. It is the 'If-Then' sequence of changes that the program intends to set in motion through its inputs, activities, and outputs" (United Way of America, 1996, p. 38). The logic model captures the logical flow and linkages that exist in any program. Even in cases where the theory of a program has never been made explicit, the logic model approach can help to uncover, articulate, present and examine a program's theory.

Although there are several ways to represent the logic model, it is usually set forth as a diagram, resembling a flow chart, that contains a series of boxes linked via connecting one-way arrows, as presented in detail, for example, in recent manuals published by the United Way (1996) and the W. K. Kellogg Foundation (2000). The basic logic model includes the following components:

Inputs: the human, financial, organizational, and community resources that need to be invested in a program so that it will be able to perform its planned activities.

Activities: what the program does with the inputs; the processes, events, and actions that are an intentional part of the program implementation.

Outputs: the direct products of program activities, usually measured in terms of the volume of work accomplished (e.g., the number of classes taught, number of group meetings held, number of pamphlets distributed, number of announcements broadcast on radio or TV) and the number of people reached (number who attended each meeting, number who received written materials, etc.).

Outcomes: the benefits or changes in the program's target population; for example, changes in knowledge, perceptions, attitudes, behavior, or status. Programs often posit a chain of outcomes that are linked to each other in a logical sequence over time, with immediate outcomes leading to intermediate outcomes, which in turn lead to long-term outcomes. For instance, it may be expected that new knowledge and increased skills (immediate outcomes) will lead to modified behavior (intermediate outcomes), which will lead, in turn, to improved condition (long-term outcome).

Inputs or resources to the program usually appear in the first box at the left of the model, and the longer-term outcomes are presented on the far right. In between these ends, the major program activities are boxed, followed by intended outputs and outcomes from each activity (McLaughlin & Jordan, 1999). Figure 1 presents the basic format of a generic logic model. Actual models are usually much more complex, since programs often contain a variety of activities, each of which may require its own inputs and also may lead to specific outputs, that are directed at either common or separate outcomes. Graphic presentation of this complexity may thus require more boxes to represent the different components and more arrows to depict the linkages between them (for examples, see Bickman, 1990; Wandersman et al., 1997; Weiss, 1997).
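As a minimal illustration, the component chain described above can be represented as a simple data model. The following Python sketch is illustrative only and not part of the original article; the class, field names, and sample entries (loosely echoing the parenting program discussed later in this paper) are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Hypothetical representation of a basic generic logic model."""
    situation: str                      # target population and needs
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    initial_outcomes: List[str] = field(default_factory=list)
    intermediate_outcomes: List[str] = field(default_factory=list)
    long_term_outcomes: List[str] = field(default_factory=list)

    def chain(self) -> str:
        """Render the left-to-right If-Then sequence as a single line."""
        stages = [
            ("Inputs", self.inputs),
            ("Activities", self.activities),
            ("Outputs", self.outputs),
            ("Initial outcomes", self.initial_outcomes),
            ("Intermediate outcomes", self.intermediate_outcomes),
            ("Long-term outcomes", self.long_term_outcomes),
        ]
        return " -> ".join(f"{name}: {', '.join(items) or 'unspecified'}"
                           for name, items in stages)

# Illustrative (hypothetical) example, loosely based on the parenting program below.
model = LogicModel(
    situation="Parents separated from their children, preparing to resume contact",
    inputs=["group facilitators", "meeting space", "program funding"],
    activities=["12 weekly two-hour group meetings"],
    outputs=["number of meetings held", "number of parents attending"],
    initial_outcomes=["improved knowledge of parenting skills"],
    intermediate_outcomes=["modified parenting behavior"],
    long_term_outcomes=["successful resumption of contact with children"],
)
print(model.chain())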

The logic model is a tool that practitioners and evaluators are finding increasingly useful in explicating and presenting program theory for many purposes, as witnessed by the publication of user manuals (e.g., W. K. Kellogg Foundation, 2000; United Way of America, 1996) and professional papers (for example, Alter & Egan, 1997; Cooksy, Gill, & Kelly, 2001; Cozzens, 1997; Julian, 1997; McLaughlin & Jordan, 1999). The structured nature of the end product (the logic model diagram) helps to structure and focus the process of articulating the program theory by dividing it up into discrete units (inputs, outputs, outcomes, etc.) that are connected via links that can be readily examined for logic and feasibility.

[Figure 1. Basic generic logic model: Situation (target population and needs) → Inputs → Activities → Outputs → Initial Outcomes → Intermediate Outcomes → Long-Term Outcomes.]

Most existing publications, however, have focused on a limited range of uses. In this paper, we demonstrate potential uses of the logic model tool in explicating program theory for a variety of purposes, via an integrative framework based on the life span of programs (see Figure 2). The paper consists of a series of case studies, taken from our work, that illustrate the different uses of logic models at different points in time throughout the life cycle of programs.

[Figure 2. Uses of the logic model (LM) along the life span of a program: program inception, LM for assessing program feasibility and its readiness for evaluation; program modification, LM for program development; program operation, LM for developing a performance monitoring system; program dissemination, LM for knowledge building.]

PROGRAM INCEPTION STAGE: USING LOGIC MODELS FOR ASSESSING PROGRAM FEASIBILITY AND READINESS FOR EVALUATION

Purpose

A clear and concise presentation of program theory can be extremely useful prior to program implementation, and even prior to the detailed planning of programs, at the early stage when the idea for a program is first being considered. When programs are first being conceived and examined, feasibility and evaluability assessment help us to determine (a) whether there is a good chance that the program can, in fact, be implemented with the given resources, (b) whether the program has a reasonable chance of achieving its intended outcomes, and (c) whether the program has been presented in a sufficiently clear and detailed manner to allow for the planning of its evaluation. A proper assessment can prevent investment of resources in programs that, with some forethought, could have been seen in advance to be impractical or ineffective. For this purpose, program planners need to clarify several things: What does the program intend to achieve (outcomes)? For whom (target population)? Via which activities? And with what resources? The resources available must be sufficient to implement all the planned activities, these activities must be appropriate to the target population (age-appropriate, culturally sensitive, etc.), and they must also logically have a reasonable chance of leading to the desired outcomes. In the following example, the authors demonstrate the use of the logic model to assess the evaluability of a program prior to its commencement.

Context

The Special Projects (SP) department within the Planning and Research division of a large Israeli government agency funds pilot testing of innovative new social welfare programs that are subsequently to be disseminated by other public agencies. One of the conditions for funding any program is that its initiators and managers agree for it to be accompanied by external evaluation. As each program is approved for funding, qualified evaluators are approached and asked to submit evaluation proposals.

Example

In this case, we were sent a copy of the program proposal that a government agency had submitted to the SP department for running a program geared to helping a specific group of parents, who had been separated from their children. The program's aim was to help these parents prepare for resuming contact with their children and to help them develop their parenting skills. We were asked to study the document and submit a proposal of our own for evaluation of this program that had been approved for funding and was about to commence operation.

Review of this agency's proposal raised a number of questions about the program's readiness for both implementation and evaluation. We therefore decided that, prior to preparing a formal evaluation proposal, we would attempt to systematically examine its feasibility and evaluability (Wholey, 1987) by assessing the program theory with the help of a logic model. Using the components of the logic model described above (target populations, problems, resources, activities, outputs, outcomes), we read through the proposal and marked any statement that could be classified as pertaining to any one of them. We then assembled all the statements in each category and tried to see if we could arrive at a clear picture of each component. Finally, we examined whether there appeared to be logical linkages connecting these components to each other. By the end of this process, we had some serious concerns that this program, at least as it appeared in writing, might not be ready for implementation or for evaluation: Some of the activities appeared to be very difficult to implement, and the intervention itself did not seem to have a reasonable chance of leading to its stated outcomes.
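As an illustration of this coding step, the classification and review could be sketched roughly as follows. This is a hypothetical toy, not a tool used in the study; the tagged statements below are abbreviated examples and the function names are our own.

from collections import defaultdict

# Logic model components used to code the proposal (as listed above).
COMPONENTS = ["target population", "problems", "resources",
              "activities", "outputs", "outcomes"]

# Hypothetical example: each proposal statement hand-tagged with the
# component it appears to describe.
tagged_statements = [
    ("target population", "mothers or fathers of children up to age 17 ..."),
    ("target population", "any father about to return home"),
    ("activities", "12 weekly group meetings, each lasting two hours"),
    ("outcomes", "prepare the parents for their return home"),
]

def assemble(statements):
    """Group tagged statements by logic model component."""
    by_component = defaultdict(list)
    for component, text in statements:
        by_component[component].append(text)
    return by_component

def review(by_component):
    """Flag components with no statements; list the rest for manual comparison."""
    for component in COMPONENTS:
        entries = by_component.get(component, [])
        if not entries:
            print(f"MISSING  {component}: no statements found in the proposal")
        else:
            print(f"FOUND    {component}: {len(entries)} statement(s); "
                  "compare definitions for consistency")

review(assemble(tagged_statements))

In our case this coding was, of course, done by hand on the written proposal; the sketch merely makes the logic of the step explicit.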

For example, the program's target population was defined in different ways throughout the proposal: "mothers or fathers of children up to age 17, who have not lived with them for at least six months"; "any father who has at least one child under the age of 17 or whose wife is pregnant"; "any father about to return home [no specification of children's ages]." It is apparent that the target population was defined differently in different places, such that it was not clear to us whether the program was intended for fathers only or for both fathers and mothers; whether a minimum period of separation was required; and whether or not the program was limited to actual parents or was also to include parents-to-be.

The program's planned activities were not specified in the proposal, beyond a general statement that they would include 12 weekly group meetings, each lasting two hours. It was, therefore, not possible to infer from the proposal what experiences these parents were going to undergo that were expected to lead to the desired outcomes, which, in turn, were also not specified beyond general statements about preparing the parents for their return home and enhancing their parenting skills. Figure 3 illustrates how the logic model helped us to identify and clarify the problems in this program proposal and to present them in a succinct and clear visual schema.

[Figure 3. Example of a logic model used to help assess program feasibility and readiness for evaluation. Situation: different and contradictory definitions of the target population; no specification of needs. Inputs: OK. Activities: group meetings, contents unspecified. Outputs: 12 weekly meetings, each lasting two hours. Outcomes: general, non-specific statements, not operationally defined.]

While the reader may suspect that we have singled out here an extreme example of poor or vague program planning, from our experience this is unfortunately not the case. Programs are usually planned by practitioners who have little experience in planning and developing new interventions and are not usually aware of the importance of explicating the program theory when developing a program proposal. Without the aid of a simple and structured tool for reviewing programs, such as the logic model, it is easy to "get lost" and end up with a plan that is full of holes or contradictions.

Although we were unable in this case to specify a clear logic model, our attempt to do so guided us in converting a long, diffuse proposal into a clear, concise, structured and focused document that highlighted the lacunae in the theory underlying this program. This document was sent to the SP department, along with our reply, in which we declined their offer to submit an evaluation proposal at this point in time, since it appeared to us that the program was not yet ready for evaluation. Instead, we recommended that the agency should work on clarifying and specifying the program theory prior to its implementation and prior to planning any evaluation.

Although we do not know whether this recommendation was accepted, interestingly, the SP department later decided to contract with evaluation consultants to help them assess program proposals prior to their final approval. Previously, evaluators had been included in the process only after approval of funding, at which point it can be very difficult to make major changes in a program.

PROGRAM MODIFICATION STAGE: USING LOGIC MODELS FOR IMPROVEMENT OF EXISTING PROGRAMS

Purpose

In many cases, initial program planning does not include a clear specification of all program components and a systematic examination of the logical linkages between them. Although this may not necessarily be a problem, there may be cases where difficulties become apparent after a program has been in operation for a while. Logic models can also be beneficial in these cases, where they may focus formative efforts at program improvement. In the following example, we constructed and elaborated the logic model of an existing program with management and staff of the Department of Social Welfare in a large Israeli city, in a collaborative process that lasted almost a year, based on the findings of an implementation evaluation.

Context

The Welfare Department established a network of Family Aid Centers (FACs), aimed at improving services to those segments of the population that had not been considered a high priority in the past, in particular: (a) clients who had applied only for material or instrumental assistance and (b) those for whom traditional approaches (such as psychotherapy and counseling) had been exhausted but who were still in need of other forms of assistance. These neighborhood-based FACs were to provide instrumental and material help in a fast, efficient and courteous manner, without requiring an in-depth psychosocial assessment and without the prior prerequisite that receipt of instrumental aid would be dependent on the client's willingness to participate in counseling.


The FACs were also supposed to develop new and innovative services for these groups and, in particular, were to promote the use of advocacy strategies to help these clients exercise their rights, within both the social welfare system and other public agencies.

Example

After the FACs had been operating in six neighborhoods within the city for a period of about two years, it was decided to evaluate their implementation and generate recommendations for enhancing the model prior to expanding the program to additional sites throughout the city. The evaluation had four stages:

1. to uncover and elucidate the FACs' existing conceptual model (mission, policies, and practices);

2. to assess the implementation and effectiveness of this model;

3. to identify gaps between the conceptual model and its implementation and results in practice; and

4. to facilitate development, operationalization and consolidation of a revised model geared to narrow the above gaps.

Construction of a logic model was the tool that we used in the fourth stage. First, we presented our findings from the previous stages, regarding the FACs' existing conceptual model and its implementation, which revealed disagreements and lack of clarity among stakeholders regarding program components, such as target populations and required inputs, as well as discrepancies between the program plan and its implementation in practice.

Based on these findings, we initiated an interactive, participatory process involving ourselves and two separate groups within the department (administration and program staff). Over a period of eight months, we held a total of 15 meetings, each lasting approximately three hours, during which we utilized the logic model tool to develop and refine the FAC program. The components of the logic model guided and structured this group process. Each meeting was devoted to defining and elaborating one component of the logic model, which in this case began with the program's "mission." Only after the two groups had reached agreement on the written formulation of a component did we progress to the next one. Since each component in the logic model stems logically from the previous component, the process flowed in a systematic and organized manner.

For example, the program's intended outcomes stemmed clearly from the activities that were to be implemented by the FACs, which in turn were targeted at addressing specific client problems. One problem that the group agreed should be addressed by the FACs was the difficulty that some clients have in realizing their rights to services and benefits that they are entitled to receive from government agencies. Several types of activities were planned to address this problem, including provision of information about rights and entitlements, representing clients before governmental agencies, physical accompaniment of clients to agencies, and development of advocacy campaigns to promote utilization and expansion of rights. These activities were expected to lead to an increase in utilization of benefits and rights among clients who had failed to do so in the past.

The discussions were often very heated, suggesting a high degree of involvement and interest on the part of group members. The end result of this group process was a written program plan that differed from the original conception in several ways: It was more explicit, more detailed and specific, more operational, more consensual and, most of all, much tighter; all components were clearly and logically connected and integrated into a coherent program theory.

This product served a number of purposes within the organization and also between the organization and external agencies. First, the existence of a clear and consensual logic model facilitated communication, cooperation, and shared understanding among different levels of management and staff within the Welfare Department and also promoted greater clarity in role division (see Coffman, 2000). It also specified the inputs that each FAC needed and was entitled to for implementing the planned activities, and who was responsible for allocating these resources. Previously, there had been considerable confusion and disagreement regarding key elements in the program, which created tension and frustration among the stakeholders, limited the scope of activities, and prevented the FACs from realizing their full potential to help their target clients.

Furthermore, the existence of a clearly articulated document presenting the program model helped the Welfare Department to present the program to external agencies, such as government ministries and foundations, in order to raise funds for running and evaluating the program.

Finally, this program was developed in a large multicultural city, and was to be implemented in a variety of different neighborhoods, each with different sociodemographic characteristics: different ethnic groups, with different religious affiliations and levels of religiosity (from secular to ultra-orthodox), as well as immigrants from many different countries. As such, it was clear that any model developed might have to be adapted to suit the characteristics of the local residents. The existence of the logic model helped to make decisions about the components and elements that were core to the program versus those that could be adapted (for more details on this application, see Hacsi, 2000).

PROGRAM OPERATION STAGE: USING LOGIC MODELS FOR DEVELOPING PERFORMANCE MONITORING SYSTEMS

Purpose

Performance monitoring (PM) has been widely promoted as an effective way to ensure a focus on program results and accountability, to determine what publicly funded programs are actually doing, and to provide for control over expenditures (Perrin, 1998). Performance monitoring systems are currently being implemented, and sometimes even built into legislative requirements, by governments in developed countries throughout the world (e.g., the United States 1993 Government Performance and Results Act; see Wholey, 1997). In the private and nonprofit sectors, performance monitoring is also gaining popularity as a means for keeping programs on track and improving the quality of program implementation and results by providing ongoing periodic measurement of key indicators (Newcomer, 1997).

Along with this, critics have claimed that performance monitoring is often implemented inappropriately, focusing on trivial indicators, providing data that are irrelevant to stakeholders (and especially program staff), and not focusing sufficiently on outcomes (Bernstein, 1999; Greene, 1999; Winston, 1999). Many of these shortcomings may be prevented when the measures are selected and developed after the program theory has been formulated in collaboration with the system's intended users.

Context

The example that we present here stems from our work with the New Israel Fund (NIF), a philanthropic foundation that helps to fund local NGOs active in achieving social change. The NIF also offers its grantees two additional services via its capacity-building branch, SHATIL: (1) technical assistance and training in a number of areas (such as organizational development and resource development); and (2) help in establishing and maintaining coalitions of NGOs to promote common goals. While SHATIL is an extension of the NIF, it also receives independent funding from other sources, some of whom have begun to require greater accountability and, in particular, assessment of the results of SHATIL's work. The director of SHATIL approached us to help the organization develop a PM system, with the specific requirement that the development process be in accord with SHATIL's organizational culture, which tends to be more egalitarian and less hierarchical, thus involving a high degree of staff participation in many processes. When the decision to develop the PM system was presented to staff, with the understanding that they would be actively involved in the development process, they saw it also as an opportunity to obtain a set of tools that would help them to learn from their accumulated experience in a systematic manner so as to improve practice.

Example

We present here the work that we did in developing a PM system for one of the SHATIL teams: consultants in organizational development (OD). In approaching this task, we decided that the first step should be to develop a clear and comprehensive picture of their work. The rationale was that the PM system should be based on a set of indicators that truly reflect the core aspects of this team's work (the program theory that stems from the practice wisdom they have developed over the years), and not necessarily the indicators that are most salient or easiest to measure. We thought that the ideal way to arrive at this picture was with the aid of a logic model.

In developing the model, we decided to engage in a group dialogue with the team, not only because management had requested this, but also because we believed that, in order to promote future utilization of the system, it should have broad support from within and reflect a consensus that the system is relevant and useful. The information to be provided should be generally perceived as bearing the potential to help the OD consultants learn from their experience and derive ideas for improving the quality of the consultation that they provide their clientele.

Although the group process with this team proved to be considerably more difficult and more prolonged than we had anticipated, it was clearly justified. It required the consultants to explicate and formulate their activities, their working assumptions and the outcomes that they strive to achieve with their clients. As often occurs in other participatory processes with people who are invested in their work, here, too, the process involved a number of very heated discussions. Consensus was, however, reached eventually, leading to a clear model depicting all the components of their work. For example, it included a detailed statement of six major outcome dimensions, formulated as areas of organizational health: It was conceptualized that any NGO that functions well in all of these dimensions can be deemed to be healthy and not in need of help from SHATIL. These outcomes cover all the main target areas to which SHATIL OD consultants direct their interventions, such as establishing "a clear and coherent organizational identity," "a clear and functional organizational structure," or "an organizational culture that promotes growth and effectiveness." Each outcome dimension was further broken down into several more specific outcomes. For example, the outcome dimension "existence of a clear and coherent organizational identity" comprises the following three specific outcomes: "the organization has a clear and coherent vision and mission," "the organization's target populations are clearly defined," and "the organization's goals and objectives are clearly specified and linked to its mission."

Once agreement was reached on all the components of the logic model, the next step was to determine which components, and which parts within each component, would be routinely monitored as part of the PM system, since it is neither feasible nor advisable to monitor them all. This was achieved by asking the team to review the entire model and to prioritize and choose the areas of most value for reporting to external sources, such as donors, and for internal purposes, such as organizational learning. Thus, they were asked to decide whether or not they wished to monitor inputs and, if so, which inputs. They were then asked the same question regarding activities, outputs, and outcomes. The existence of a written, clearly conceptualized and consensual model, depicting the consultants' work as a whole and the theory underlying it, made this step of the process relatively easy, and agreement on this was thus reached rapidly, leading to the following step: development of indicators and measures.
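As an illustration, the nested structure that emerged (outcome dimensions broken into specific outcomes, with only a prioritized subset carried forward for monitoring) might be represented along the following lines. The data structure, the "monitor" flags, and the empty indicator lists in this Python sketch are hypothetical, though the dimension and outcome wording follows the example above.

# Hypothetical sketch of one outcome dimension from the SHATIL OD logic
# model, with a flag marking which specific outcomes the team chose to
# monitor and a placeholder list of indicators still to be developed.
outcome_dimensions = [
    {
        "dimension": "Existence of a clear and coherent organizational identity",
        "specific_outcomes": [
            {"outcome": "The organization has a clear and coherent vision and mission",
             "monitor": True, "indicators": []},
            {"outcome": "The organization's target populations are clearly defined",
             "monitor": True, "indicators": []},
            {"outcome": "The organization's goals and objectives are clearly "
                        "specified and linked to its mission",
             "monitor": False, "indicators": []},
        ],
    },
]

# Pull out only the outcomes selected for the performance monitoring system.
selected = [
    (dim["dimension"], spec["outcome"])
    for dim in outcome_dimensions
    for spec in dim["specific_outcomes"]
    if spec["monitor"]
]
for dimension, outcome in selected:
    print(f"{dimension} -> {outcome}")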

In this example, we were invited to help an organization develop a PM system to monitor its work. We used a clearly articulated logic model as the basis for selecting the aspects of program performance that would be monitored and thereby ensured that the aspects most important for accountability and for organizational learning were selected, rather than those that are easiest to measure, as is often the case.

We believe that the direct work on the development of the system went smoothly and quickly because of our previous investment of time and effort in constructing the logic model. We further believe that this PM system has a greater chance of being accepted within the organization and utilized over time, due to the fact that staff were part of the process and that it focuses on aspects of their work that are meaningful and relevant to them and clearly linked to their overall mission.


PROGRAM DISSEMINATION STAGE: USING LOGIC MODELS FOR KNOWLEDGE BUILDING

Purpose

Since development of a new program usually involves a large investment of effort, time and money, program funders, initiators and implementers often want their investments to yield benefits beyond the immediate context. In addition, prominent figures in the evaluation field (e.g., Cronbach et al., 1980) have suggested that evaluation should "facilitate the transfer of knowledge from some programs or sites to other programs or sites, through explaining the processes that cause or prevent the outcomes achieved" (Shadish, Cook, & Leviton, 1991, p. 323). Logic models provide one of the most efficient ways to examine the applicability and generalizability of programs to other settings and populations.

Context

This example also stems from our work with SHATIL. Whereas the previous example illustrated the work of SHATIL's consultants that work with NGOs on capacity-building, this example pertains to the second main service that SHATIL provides its clients: assistance in establishing and maintaining coalitions.

From its consulting work with Arab organizations active in the area of education, it became increasingly clear to SHATIL that attempts to solve problems of inequality in the allocation of government resources by the traditional method of providing alternative services had many limitations. A different approach is embodied in programs that aim to change government policy rather than to provide substitutes for its areas of neglect. These programs employ a variety of social change strategies, such as advocacy, lobbying, pressure groups, public education, etc. Based on these ideas, SHATIL launched its Equal Access Initiative in 1991 to empower Arab leadership and community-based organizations to achieve concrete policy changes in the area of early childhood education. The project based its activities on a coalition task force comprised of representatives from existing Arab NGOs active in the field of early childhood education, which was established, coordinated and supported by SHATIL.

The evaluation had two components. The first component (see Savaya & Waysman, 1999), which assessed the program's effectiveness, revealed that it was highly effective in achieving its intended outcomes. This success prompted the second component of this evaluation, aimed at conceptualizing a program model that could be examined to determine its generalizability to other populations and problems and, where appropriate, taught and disseminated to other settings and organizations. This conceptualization was needed since the program was essentially experimental, and many of its interventions were developed during its implementation, in accord with changing circumstances and responses from stakeholders to prior interventions. In this example, we describe our application of the logic model tool to conceptualize this advocacy program and its use in determining the program's generalizability to other settings.

In this case, the logic model was developed retrospectively, via an attempt to induce the principles underlying this program from a study of its products and activities. To this end, we collected data from three sources: written materials generated by the project, in-depth interviews with project stakeholders, and observations of task force meetings. These data were the basis for a collaborative, interactive effort, in which the authors used the logic model framework to conceptualize the program's theory, followed by feedback from stakeholders and revisions (this process is described in detail in Waysman & Savaya, 2003).

Once the model was complete, and we had a clear and concise representation of this complex program, we were able to utilize the logic model for assessment of the program's generalizability to other settings.

To achieve this aim, we first identified, via professional literature and personal acquaintance, a group of five expert consultants with sufficient knowledge and experience, from different perspectives, to evaluate the generalizability of the program. The experts were from different fields (education, psychology, social work, and sociology) and ethnic backgrounds (Arabs and Jews), from universities in Israel and the U.S., and each is well-known for his or her expertise in one or more areas relevant to this endeavor. The five experts all received a report, which presented the methods employed to develop the model, the graphic diagram of the model and the narrative text describing it. They were asked to read these materials and to respond in writing to a set of questions regarding the generalizability of this model to other settings. We thus received five written expert opinions that we reviewed and analyzed to identify their main themes.

Interestingly, although the five experts were from different disciplines, cultures and countries, they all agreed that the SHATIL model has great potential for solving social problems in many different areas. They also specified additional prerequisite conditions for its success, particularly highlighting the role of the wider sociopolitical context in enabling a project of this kind to occur and succeed (e.g., they suggested that a program of this kind requires an open democratic political system in order to work).

Although it was not feasible in this case, it is possible to add an additional component to this process: direct interaction among the experts, to enable them to relate to each other's opinions, either via a group meeting, video conference or a real-time Internet discussion. Such interaction among the experts could help to focus and enrich the ideas that emerge from this process.

Use of the logic model served here as a powerful tool for building knowledge about a successful program: It helped us to organize and structure a very large amount of information from a variety of sources into a clearly conceptualized model, presented graphically and summarized in writing, that accurately reflected the program from the point of view of its initiators and implementers. In addition, due to its concise and concrete nature and its visual layout, the model was very easy to comprehend, thus also making it very easy to elicit feedback from stakeholders, who could readily identify gaps and mistakes requiring revision, as well as from experts, who were able to use the model for judging program generalizability.

This model was also subsequently used by SHATIL for an additional purpose: to teach and disseminate their methods and strategies for establishing and maintaining coalitions for social change.

CONCLUSIONS

This paper has presented a series of examples illustrating how the process of explicating program theory can contribute to the planning of programs and to their development, implementation, evaluation and dissemination. The theorizing process was demonstrated with the aid of the logic model framework, which serves to guide, structure and simplify the activity. Our examples demonstrate that this framework can be useful in helping us to think about programs at various points throughout their life cycle.

From the examples presented here, spanning from the earliest stage of program inception through program modification, operation, and dissemination, it is clear that both the process of developing a logic model and the existence of the product itself can contribute in many ways to program success. For example, it can be used to:

• assess the feasibility of a program idea;
• assess the readiness of a program for evaluation;
• facilitate communication, cooperation, and shared understanding among stakeholders;
• promote greater clarity in role division among staff;
• specify the resources needed to run the program;
• present the program to external agencies for different purposes, such as fundraising;
• aid in making decisions about adaptations and modifications of a program to fit with the needs and characteristics of different implementation sites;
• help select the aspects of program performance to be monitored, when developing a performance measurement system, so as to ensure that the aspects most important for accountability and for organizational learning are selected (rather than those that are easiest to measure);
• generate knowledge about successful programs;
• elicit feedback from stakeholders and experts, regarding different issues, such as relevance, innovation, and generalizability to other settings; and
• teach and disseminate the program.

Despite its many advantages, some limitations of the logic model approach should be noted. First, developing a logic model can, at times, be a lengthy and costly endeavor. From our experience, the process can take anywhere from a single sitting lasting just a few hours to many sittings over a period of months. Sometimes, depending on the purpose, it is enough for just one person who knows the program well to be involved in the process; at other times it is essential that many people and groups, both from within and outside the organization, take part. At times, discussions regarding a logic model may reveal conflict or power struggles within the organization that had been dormant as long as things were left vague. The process of specifying a logic model may also direct participants to the conclusion that, to run the program, changes will need to be made in organizational structure (e.g., changes in role definitions and hierarchies). These issues require considerable investment of time and effort on the part of the organization and, as such, should be kept in mind when deciding to embark on this process. Finally, it should be noted that logic models are merely conceptual representations of programs, which should always be attentive to changes in their environment and clientele. As such, we need to be aware of the risk of rigidly adhering to a model as it was originally conceived, when external circumstances have changed and may require rethinking of the model.

Although other frameworks are available for conceptualizing programs, from our experience the logic model is the most comprehensive and easy to use. Cooksy and her colleagues (2001) have reached the same conclusion, after comparing the logic model to other approaches, including path diagrams, program templates, concept maps and textual descriptions.


REFERENCES

Alter, C., & Egan, M. (1997). Logic modeling: A tool for teaching critical thinking in social work practice. Journal of Social Work Education, 33, 85-102.

Alter, C., & Murty, S. (1997). Logic modeling: A tool for teaching practice evaluation. Journal of Social Work Education, 33, 103-117.

Bernstein, D. J. (1999). Comments on Perrin's "Effective use and misuse of performance measurement." American Journal of Evaluation, 20, 85-93.

Bickman, L. (Ed.). (1987). Using program theory in evaluation. New Directions for Evaluation, Number 33. San Francisco: Jossey-Bass.

Bickman, L. (Ed.). (1990). Advances in program theory. New Directions for Evaluation, Number 47. San Francisco: Jossey-Bass.

Chen, H. (1990). Theory-driven evaluations. Newbury Park, CA: Sage Publications.

Coffman, J. (2000). Simplifying complex initiative evaluation. The Evaluation Exchange (Harvard Family Research Project Quarterly Newsletter), 5(2/3), 2-3.

Cook, T. D. (2000). The false choice between theory-based evaluation and experimentation. In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities. New Directions for Evaluation (Vol. 87, pp. 27-35). San Francisco: Jossey-Bass.

Cooksy, L. J., Gill, P., & Kelly, P. A. (2001). The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning, 24, 119-128.

Cozzens, S. E. (1997). The knowledge pool: Measurement challenges in evaluating fundamental research programs. Evaluation and Program Planning, 20, 77-89.

Cronbach, L. J., Ambron, S., Dornbusch, S. M., Hess, R. D., Hornik, R. C., Phillips, D. C., Walker, D. F., & Weiner, S. S. (1980). Toward reform of program evaluation. San Francisco: Jossey-Bass Publishers.

Greene, J. C. (1999). The inequality of performance measurements. Evaluation, 5, 160-172.

Hacsi, T. A. (2000). Using program theory to replicate successful programs. In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities. New Directions for Evaluation (Vol. 87, pp. 71-78). San Francisco: Jossey-Bass.

Julian, D. A. (1997). The utilization of the logic model as a system-level planning and evaluation device. Evaluation and Program Planning, 20, 251-257.

Leeuw, F. L. (2003). Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation, 24, 5-21.

Lipsey, M. W. (1987). Theory as method: Small theories of treatments. Paper presented at the National Center for Health Services Research conference: Strengthening causal interpretations of non-experimental data, Tucson, Arizona.

McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22, 65-72.

Mizrahi, T. (1992). The future of research utilization in community practice. In A. J. Grasso & I. Epstein (Eds.), Research utilization in social services. New York, London and Norwood: The Haworth Press, Inc.


Newcomer, K. E. (Ed.). (1997). Using performance monitoring to improve public and nonprofit programs. New Directions for Evaluation, Number 75. San Francisco: Jossey-Bass Publishers.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, CA: Sage Publications.

Perrin, B. (1998). Effective use and misuse of performance measurement. American Journal of Evaluation, 19, 367-379.

Reid, W. J. (1994). The empirical practice movement. Social Service Review, 68(2), 165-184.

Rosen, A. (1993). Systematic planned practice. Social Service Review, 67(1), 84-100.

Savaya, R., & Waysman, M. (1999). Outcome evaluation of an advocacy program to promote early childhood education for Israeli Arabs. Evaluation Review, 23(3), 281-303.

Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Schon, D. A. (1995). Reflective inquiry in social work practice. In P. M. Hess & E. J. Mullen (Eds.), Practitioner-researcher partnerships: Building knowledge from, in, and for practice. Washington, DC: NASW Press.

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage Publications.

United Way of America. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.

Wandersman, A., Goodman, R. M., & Butterfoss, F. D. (1997). Understanding coalitions and how they operate: An "open systems" organizational framework. In M. Minkler (Ed.), Community organizing and community building for health (pp. 261-277). New Brunswick, NJ: Rutgers University Press.

Waysman, M., & Savaya, R. (2003, under review). A program model for coalition-based social change initiatives: Case study of the SHATIL Equal Access Project in early childhood education.

Weiss, C. H. (1997). Theory-based evaluation: Past, present, and future. In D. J. Rog (Ed.), Progress and future directions in evaluation: Perspectives on theory, practice, and methods. New Directions for Evaluation, Number 76. San Francisco: Jossey-Bass.

Wholey, J. S. (1987). Evaluability assessment: Developing program theory. In L. Bickman (Ed.), Using program theory in evaluation. New Directions for Evaluation, Number 33. San Francisco: Jossey-Bass.

Wholey, J. S. (1997). Trends in performance measurement. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 124-133). Thousand Oaks, CA: Sage Publications.

Winston, J. A. (1999). Performance indicators–Promises unmet: A response to Perrin. American Journal of Evaluation, 20, 95-99.

W. K. Kellogg Foundation. (2000). Logic model development guide. Battle Creek, Michigan: Author.

