There are a number of evaluation models that an agency may use. Again, the purpose of the evaluation should be the foremost consideration when choosing how to conduct it; that is, the evaluator must always tie the chosen model to the overall problem the evaluation is meant to address. For example, an importance–performance evaluation allows program personnel to identify gaps in services that participants consider important, making it the ideal model when the purpose of the evaluation is to determine whether a program meets participants' expectations. A satisfaction survey, by contrast, provides data on how happy participants are with a program or facility but often does not capture what matters most to participants or why. Satisfaction data are certainly important, but if the agency wants to know which areas matter most to its patrons, it needs to take the data collection a step further.
The following are descriptions of various evaluation models. In addition, the purpose of each is outlined so that researchers and evaluators can better determine the best match for a given assessment.
Importance–Performance Evaluation

Importance–performance analysis is used to measure the worth of an agency's performance with respect to program and facility attributes. These attributes may include friendliness of staff, knowledge of staff, equipment, cleanliness, and other variables of interest. Measurement of worth is based on participants' perceptions of how important each attribute is as well as their perceptions of how well the agency provides each attribute. The importance score and performance score for each attribute are then compared and placed on a matrix divided into four quadrants: "Concentrate here," where importance is high and performance is low; "Keep up the good work," where importance and performance are both high; "Possible overkill," where importance is low but performance is high; and "Low priority," where both importance and performance are low. The crosshairs that divide the matrix into the four quadrants are typically placed where the midpoints of the importance and performance measurement scales cross.
For example, if the evaluator uses a five-point scale ranging from "Strongly agree" to "Strongly disagree," then 3 is the midpoint of that scale. In some instances it is more helpful to use the mean scores of the two data sets rather than the scale midpoints, because ratings are often inflated; that is, participants tend to respond more positively than negatively. If the scale midpoints are used to set the center of the matrix, much of the data ends up in the "Keep up the good work" quadrant, and the agency receives little direction for improvement even when improvement is warranted, as nothing is ever perfect (Rossman and Schlatter 2008).
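To make the quadrant logic concrete, the following is a minimal sketch in Python. The attribute names, the mean scores, and the choice of 4 as the crosshairs (the midpoint of a seven-point scale) are all invented for illustration and are not from the evaluation literature.

```python
# Hypothetical importance–performance quadrant classification.
# Scores are invented mean ratings on a seven-point scale.

def classify(importance, performance, i_cross, p_cross):
    """Place one attribute into an importance–performance quadrant."""
    if importance >= i_cross and performance < p_cross:
        return "Concentrate here"
    if importance >= i_cross and performance >= p_cross:
        return "Keep up the good work"
    if importance < i_cross and performance >= p_cross:
        return "Possible overkill"
    return "Low priority"

# Invented (importance, performance) means for four attributes
scores = {
    "coaching":    (6.5, 3.2),
    "equipment":   (6.1, 6.4),
    "concessions": (2.8, 5.9),
    "parking":     (3.0, 3.1),
}

# Using the scale midpoint (4 on a seven-point scale) as the crosshairs;
# the means of the two data sets could be substituted to offset inflated ratings.
i_cross = p_cross = 4.0
for attr, (imp, perf) in scores.items():
    print(f"{attr}: {classify(imp, perf, i_cross, p_cross)}")
```

Replacing `i_cross` and `p_cross` with the means of the importance and performance data sets implements the alternative centering described above.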
Figure 4.2 provides an example of an importance–performance matrix created from data collected for an evaluation of a youth soccer program. Items are rated on a seven-point scale. As you can see, the agency seems to be doing an outstanding job with its equipment and fields. It is also doing a good job with concessions; however, given the importance of concessions to the participants, it is perhaps spending too much time and effort on this aspect of the program. On the other hand, both coaching and officiating fell into the “Concentrate here” quadrant, suggesting that these are areas for improvement. As with all types of evaluation, the variables being measured must be ones about which the participants can make a knowledgeable judgment.
Satisfaction-Based Evaluation

The purpose of the satisfaction-based evaluation is to measure the different potential outcomes of leisure engagement. These potential outcomes are tied to 10 identified domains of satisfaction: achievement, physical fitness, social enjoyment, family escape, environment, risk, family togetherness, relaxation, fun, and autonomy (Rossman 1983; Rossman and Schlatter 2008, 387). Not all of the domains are relevant to every type of recreation experience, but for those that are, the satisfaction-based evaluation indicates whether the program helped participants engage in a satisfying leisure experience. Research indicates that participant satisfaction with a program is tied to the provision of a leisure experience, so measuring the satisfaction domains can help programmers know whether their participants are experiencing leisure engagement. For example, an item measuring achievement might ask participants how satisfied they were with opportunities to learn new things or develop new skills, while an item measuring relaxation might assess satisfaction with the opportunity to relax or de-stress. Figure 4.3 lists items that can be used to measure a variety of domains of satisfaction with a recreation program.
Service Hour Evaluation
Service hour evaluation is primarily concerned with output—how many people are being served and to what extent. For example, an after-school program that serves 40 children for 2 hours a day has a daily service hour measurement of 80 (40 children × 2 hours). The meaning and implications of this number are tied to the context and goals of the program. This measurement examines not only how many people are participating but also how much time they are spending participating. Thus the service hour evaluation measures the amount of service that the agency provides the community. One purpose of the service hour measurement is to determine ways to improve organizational management: Does the input-to-output ratio show efficiency in programming, and if so, does it do so while providing quality programming?
For a service hour measurement to give a more complete picture of how well the agency is serving the public, the evaluator must look at the outputs for specific variables such as participant age, program area, program format, participant gender, geographic location, activity, operating division, ethnic background of participants, program fee, and special populations. Through these data the agency gains a very concrete and quantitative picture of how widely the community is utilizing its services and how much service it is providing.
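The service hour arithmetic above can be sketched in a few lines of Python, including the kind of breakdown by variable just described. All program names and attendance figures below are invented for illustration.

```python
# Hypothetical service hour tally, broken down by program area.
# Service hours = participants × hours of programming.
from collections import defaultdict

# Each record: (program area, participants, hours of programming)
sessions = [
    ("after-school care", 40, 2),
    ("after-school care", 38, 2),
    ("senior swim",       15, 1),
]

by_program = defaultdict(int)
for program, participants, hours in sessions:
    by_program[program] += participants * hours

print(dict(by_program))  # → {'after-school care': 156, 'senior swim': 15}
```

The same tally could be keyed on any of the variables listed above (participant age, program format, geographic location, and so on) to show how widely the community is using the agency's services.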
Goals and Objectives (Process) Evaluation
A goals and objectives evaluation, or process evaluation, compares the plan of a program with the actual operation or outcome of the program. Was the program run in such a way that it met its goals and outcomes? Were the processes designed to facilitate the successful realization of the outlined goals and objectives actually carried through? If they were not, it should not be surprising when the goals and objectives fall short.
According to Rossman and Schlatter (2008), the goals and objectives evaluation examines the program design (inputs and resources, animation process, and desired goals and outcomes) together with the program operation (actual inputs and resources, actual animation process, and observed outcomes) to determine whether they are congruent. Inputs and resources are items such as staffing, supplies, budget support, and facilities. The animation process is how the programmer puts the program components into motion. For example, if the program is a round-robin tournament, how is it structured and how should it unfold? Once the tournament begins, do all teams play each other, or does something prevent that from occurring? Desired outcomes encompass planned expectations for the program and are what participants should experience as a result of participating. In some instances a desired outcome may be as simple as "Have fun," while other outcomes may be more detailed, such as "Increase cardiorespiratory fitness."
In order for an agency to use the goals and objectives evaluation model, the programmer must write clear goals and objectives for the evaluated program, as those are what the agency ultimately uses to determine whether the program is effective. If there are discrepancies between the program goals and objectives and the program outcomes, the agency can examine two primary areas to determine what produced them: program inputs or unrealistic program expectations. Chapter 2 provides a detailed description of writing quality goals and objectives.
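The design-versus-operation comparison can be sketched as a simple congruence check. The program elements, counts, and outcome labels below are invented for illustration and are not from Rossman and Schlatter's model.

```python
# Hypothetical congruence check between program design and program operation.
# All elements and values are invented examples.
design = {
    "inputs":   {"coaches": 6, "fields": 3},
    "outcomes": {"have fun": True, "all teams play each other": True},
}
operation = {
    "inputs":   {"coaches": 4, "fields": 3},
    "outcomes": {"have fun": True, "all teams play each other": False},
}

def discrepancies(design, operation):
    """List every planned element the actual operation failed to match."""
    gaps = []
    for section, planned in design.items():
        for item, expected in planned.items():
            actual = operation.get(section, {}).get(item)
            if actual != expected:
                gaps.append((section, item, expected, actual))
    return gaps

for gap in discrepancies(design, operation):
    print(gap)
```

Each reported gap points the evaluator at either an input shortfall (here, only 4 of 6 planned coaches) or an unmet outcome, the two areas the model asks the agency to examine.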
Triangulation

Much of the time, one form of evaluation cannot tell the entire story of a program's success. Triangulation involves combining different types of evaluation in order to obtain data from different perspectives. For example, it might mean using focus groups, interviews, a survey, and a suggestion box, or pairing an importance–performance survey with a process evaluation. Because it collects data in different formats from different perspectives, triangulation allows an agency to gain a much clearer understanding of program success. In addition, a program supervisor can use the triangulated data to test conclusions drawn from the data by looking for verification and discrepancy among the different data sets. These different perspectives help ensure that the decision made regarding the future of a program (whether to drop, modify, or continue it) is that much more valid.
Needs Assessment

Needs assessments are a specific type of evaluation designed to gain input and ideas from the public as well as to gauge public reactions to issues affecting the community. The assessor can use any of the data collection methods already outlined for this purpose. A needs assessment can serve multiple purposes, including determining satisfaction with the quantity, quality, and management of parks, programs, and facilities. Other purposes include determining usage rates, determining acceptable rates of funding (taxes, fees, and so on), identifying a community's interest in future programming and facilities, determining how effectively the agency communicates, identifying how well registration procedures work, gathering community demographic information, identifying reasons for nonuse, and identifying needs for new programs and facilities.
For example, the YMCA gathers input from its members and the community when deciding to build a new facility, expand a current facility, or develop new programs. Specifically, the Armed Services YMCA of San Diego (ASYMCASD) received a grant to conduct a needs assessment. To gather the information for the assessment, the ASYMCASD and consultants inventoried the offerings of the ASYMCASD, met with focus groups of the target population, developed a survey, and interviewed 27 leaders of the community from health care, housing, military, family, and child support groups. The ASYMCASD’s purpose for the assessment was to evaluate the effectiveness of teen and family programs and to determine concerns that current or future programs could address (Merrick and Steffens 2008).
A needs assessment can easily be mistaken for a wants assessment, so it is important to keep in mind the expectations the assessment may cultivate. Community members may assume that if they indicate on the assessment that they want a service or program, the agency will provide it. Therefore, the agency must take care that unrealistic expectations are not an outcome of the needs assessment.