Program Evaluation: Colorado Healthy People 2010 Initiative


The program under consideration for this dissertation is the Colorado Healthy People 2010 Initiative: Obesity Prevention, carried out by The Colorado Trust, a grant-making foundation committed to promoting the health and well-being of Colorado residents. The literature for the study is provided by the Colorado Healthy People 2010 Initiative Obesity Prevention Evaluation report, prepared by Erin M. Caldwell, MSPH, of the National Research Center, Inc.

As experts have identified, obesity has become a major health concern in recent years owing to its notable adverse effects on individual health. The problem can be attributed to sedentary lifestyles and unhealthy diets, in addition to broader societal changes such as increased dependence on technology, shifts in employment patterns, and many other factors. Although some studies rank Colorado as the fittest state in the United States, more than 50 percent of its adult population is either overweight or obese, and about 20 percent of its youth population suffers from problems caused by being overweight or obese. Co-morbidities associated with obesity, such as Type 2 diabetes, coronary heart disease, hypertension, and elevated cholesterol, have become a growing concern for the community as a whole (The Colorado Trust, 2007).

The report notes that various programs have tried to address this issue, primarily by concentrating on increased physical activity or nutritional modifications. However, although these interventions produce satisfactory short-term outcomes, they fail to deliver long-term benefits, as participants regain the lost weight over time. The factors that contribute to lasting weight loss are multilayered: they include individual factors, such as increased physical activity and controlled dietary habits, as well as the policies, standards, and practices of the social and environmental context. This framework is known as the socio-ecological model (Lipsey, 1993).

The Colorado Healthy People 2010 intervention program was designed to help the people of Colorado understand these issues and recognize the practices required to lead healthier, longer lives. The Colorado Trust selected 5 regional coordinating organizations and 43 community groups to shoulder the responsibility of implementing the Initiative. The literature under consideration concentrates on two regions: the Northwest (Region 1) and the Southeast (Region 5). In Region 1 the primary focus was on increasing physical activity, whereas in Region 5 the Initiative concentrated on preventing the growth of diabetes (The Colorado Trust, 2007).

Evaluation Design

The National Research Center, Inc., assigned by The Colorado Trust to evaluate the Colorado Healthy People 2010 Initiative, directed its effort toward two questions.

  1. Do participants of relevant programs achieve sustained change in terms of dietary and physical activity behavior?
  2. What community, programmatic, and individual characteristics act as facilitators and barriers to sustainable behavior changes? (The Colorado Trust, 2007, p. 4).

Several evaluation instruments were developed. To address the first question, carefully constructed surveys were carried out. Answering the second question required a more complex research mechanism: three levels of data were gathered on the factors that assist or impede behavioral modification, using both qualitative and quantitative approaches. The three levels reflected the layers of the previously discussed socio-ecological model, here divided into the individual, program, and community levels (Rogers, 2007).

At the individual level, elements such as demographics, social support, self-perception, and perceived surroundings, along with factors that either aid or hinder behavioral modification, were evaluated by means of participant surveys and focus-group analysis. Carefully structured questionnaires were expected to yield data on physical activity, practices adopted, the number of fruits and vegetables consumed, and effective weight loss or maintenance. A slight variation of the before-after approach was adopted: if O denotes an observation and X the program or initiative, an O-X-O-O model was used. The first survey, the pretest, was conducted at the commencement of the program; the second, the first posttest, was done just after the conclusion of the program; and the third, the second posttest, was carried out as a one-year follow-up (Stufflebeam, 2001).
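The logic of the O-X-O-O design can be sketched in code: a change is "sustained" only if the improvement seen at the first posttest is still present at the one-year follow-up. This is an illustrative sketch, not taken from the report; the function name and the scoring rule are assumptions.

```python
# Illustrative sketch of the O-X-O-O design's three observations.
# The rule below (improvement at posttest that persists at follow-up)
# is an assumption for illustration, not the report's actual criterion.

def sustained_change(pretest, posttest, followup):
    """Return True if a posttest improvement is retained at the 1-year follow-up."""
    improved = posttest > pretest
    retained = followup >= posttest or followup > pretest
    return improved and retained

# Hypothetical weekly servings of fruits and vegetables at each observation.
print(sustained_change(pretest=10, posttest=18, followup=16))  # → True
print(sustained_change(pretest=10, posttest=18, followup=9))   # → False
```

The second call illustrates the pattern the report warns about: a short-term gain that disappears by the follow-up observation.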

At the program level, the factors to be analyzed were valuable program components, programmatic theories, the extent of the program's association with the community, and the tracking of initiative-related data. Here, convening with the program staff was of immense importance: ground staff are best equipped with in-depth, first-hand knowledge of how the program works, so interviewing them about the above-mentioned factors was the most logical course of action. In addition, the proposed structure and monitoring progress reports were analyzed to gather further data and information on program workings (Issel, 2009).

At the community level, the issues of interest were health status, geographical reach, environmental conditions, walkability, and the extent and ease of access to recreational facilities and to a healthy, nutritious diet. These aspects were analyzed through secondary project data, information provided by the local administration, and specifically designed assessments. In this context, it is relevant to cite Kellogg: "Effective evaluation and program success rely on the fundamentals of clear stakeholder assumptions and expectations about how and why a program will solve a particular problem, generate new possibilities, and make the most of valuable assets" (Kellogg, 2004, p. 5).

Data Collection Methods

Self-administered questionnaires were selected as the primary instrument for data gathering, and both adult and youth populations were sampled. For youth participants below the age of 18, a consent form had to be signed by a parent or legal custodian. Program staff played a major role in data collection: they were responsible for distributing and collecting all survey forms, including questionnaires, consent forms, and contact forms, from the appropriate survey participants at the start of the program. Alternatively, some forms were filled out by the participants themselves and mailed to the NRC in a provided postage-paid business envelope. At the end of the program, either the program staff managed the post-program survey or the NRC mailed questionnaires directly to the participants (Leviton & Lipsey, 2007).

The final follow-up survey, one year later, was conducted by the NRC itself by mail, using the participant contact information provided. To encourage participation, incentives of up to $25 were offered. Seventeen sites were screened to select appropriate programs for evaluation; approximately 2,900 participants from 16 program sites were selected out of about 12,600 eligible participants. The program start survey received a response rate of 51%. Of these respondents, 63% took part in the program-end assessment, and of those who completed both the program start and program end surveys, 56% completed the 1-year follow-up survey. The survey data were entered into an electronic dataset for evaluation with the Statistical Package for the Social Sciences (SPSS) (McQuiston, 2007).
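The cascading response rates imply substantial attrition by the final follow-up. A minimal sketch of that arithmetic, using the percentages stated in the report (the rounding to whole participants is my own assumption):

```python
# Illustrative attrition funnel: successive response rates applied to the
# ~2,900 selected participants. Rates are from the report; rounding each
# stage to a whole participant count is an assumption for illustration.

def retention_funnel(starting_n, rates):
    """Apply successive response rates and return the count remaining at each stage."""
    counts = []
    n = starting_n
    for rate in rates:
        n = round(n * rate)
        counts.append(n)
    return counts

# 51% answered the start survey, 63% of those completed the program-end
# survey, and 56% of dual completers returned the 1-year follow-up.
print(retention_funnel(2900, [0.51, 0.63, 0.56]))  # → [1479, 932, 522]
```

Roughly 500 participants with complete three-wave data, out of about 12,600 eligible, underscores the generalizability concerns raised later in the Limitations section.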

Beyond questionnaires, much effort went into gathering qualitative data through focus groups. Between July 2005 and August 2006, sixteen one-and-a-half-hour focus groups were held in eight geographic regions, with representation from nine different programs (The Colorado Trust, 2007, p. 8). More than 120 participants took part in the discussions, which were recorded for later qualitative analysis and covered all aspects of the socio-ecological model, including individual, program, and community components. Initially, participants were recruited by telephone from those who had completed the program start surveys. The focus groups were scheduled after peak working hours at strategic venues such as centrally located hotels or public libraries, both for the participants' convenience and to maximize participation. The participants were categorized into three segments.

Those who reported a change in their dietary habits, in terms of fruit and vegetable consumption, or who had engaged in increased levels of physical activity were categorized as the "successful completers" group; the others were placed in the "less successful completers" category, while the third category was termed "non-completers." Later in 2006, the focus groups comprised mostly successful completers, and recruitment became two-phased: first, an invitational letter was sent to likely participants; approximately two weeks later, the telephone process was initiated. Youth participants in this case, too, were required to bring signed consent forms. Participants were encouraged to take part actively and interactively in the discussions.
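The three-way segmentation above can be sketched as a simple decision rule. The category names come from the report; the record fields (`completed_program`, `reported_change`) are assumptions for illustration.

```python
# Illustrative sketch of the focus-group segmentation. Category labels are
# from the report; the two boolean inputs are hypothetical survey fields.

def classify(completed_program, reported_change):
    """Assign a participant to one of the three focus-group segments."""
    if not completed_program:
        return "non-completers"
    # Completers who reported a dietary or physical-activity change.
    if reported_change:
        return "successful completers"
    return "less successful completers"

print(classify(True, True))    # → successful completers
print(classify(True, False))   # → less successful completers
print(classify(False, False))  # → non-completers
```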

To scrutinize the effect of program-level factors, selected program staff were interviewed in two rounds. The interview outline and script were crafted by experts from Colorado Health Outcomes (CoHO). Interviews were conducted in a conversational, qualitative fashion, discussing the aspects that positively influence the fostering of behavioral modification. The program staff were adequately briefed about the rationale behind the evaluation in order to secure their cooperation. A second phase of interviews was carried out one year after the closure of the grant programs, focusing primarily on the sustainability of the initiatives. All interviews were recorded and documented for further analysis.

Numerous community-related factors, such as walkability (i.e., the presence of sidewalks, parks, etc.), ease of access to healthy food, the presence of recreational amenities, the geographical environment, and the social surroundings and prevalent practices, are either observed or hypothesized to be linked to the physical activity levels and nutritional behavior of the population being studied. Gathering data on such factors therefore became an essential part of the evaluation process. Information was drawn from secondary data sources such as U.S. Census records, the yellow pages, and the Internet, and was also collected by means of carefully designed forms circulated among local administrative personnel, which inquired about the relevant policies and infrastructure offered by each administrative institution. In addition, specially designed assessments were carried out, including a grocery store assessment that examined the availability of nutritious food and a walkability assessment that analyzed conditions conducive to walking (Frechtling, 2007).

Limitations and Drawbacks

As with any other evaluation endeavor, this program evaluation suffers from its own set of limitations and drawbacks. The first notable limitation is the self-administered nature of the selected data collection instruments. Self-administered forms are often subject to social desirability bias: an inclination to answer questions in a manner that is socially acceptable and favored. This leads to overstated positive outcomes and understated negative outcomes, an effect that is amplified if the trend persists across all three observations, yielding unreliable data (Weiss, 2001).

Secondly, participation itself may be biased. The evaluation design does not randomize its subjects; the participants choose to participate. The participating population may experience different outcomes than the fraction that chose not to partake, which undermines the generalizability of the results. Identifying a control group was also difficult, and the validity of the surveillance data may be questioned, as the measures selected for evaluation, such as BMI, physical activity levels, and consumption of a nutritious diet, may not truly reflect the outcomes. The timeframe of the evaluation may also be considered insufficient: a one-year follow-up is not enough to judge long-term issues, which require at least a two- to five-year timeline to be evaluated properly (Rogers, 2004).

Furthermore, the evaluation considered three different levels: individual, program, and community. Designs that consider so many features grow exponentially in complexity; the focus shifts away from any particular issue, the direction of the study diversifies, and the result is insufficient data collection and mounting costs. In this case, the individual-level aspects were each judged by just one or two questions to limit the complexity of the questionnaires. Finally, the vastly heterogeneous nature of the program itself complicated the evaluation design: the varied initiatives and diverse target populations left only a fragile framework for evaluating the underlying association between community and Initiative characteristics (Mark, 2000).

Process Format Analysis

Nevertheless, the evaluation did productively investigate the connections between programs and their anticipated consequences, and a good deal of attention was paid to the level of detail. Based on the selected parameters and the data produced by the surveys and interviews, detailed individual, program, and community factors were statistically formulated, and the analysis was carried out in a methodical and systematic fashion. Both quantitative and qualitative data were analyzed to enhance the reliability of the results. The report explains the complex evaluation design lucidly and avoids excessive jargon, making it easy to understand (Mark, 1998).

Recommendations

"In general, randomized experiments produce more credible estimates of treatment effects than quasi-experiments do, but they tend to be far more difficult to implement than quasi-experiments" (Wholey et al., 2004, p. 126). As the authors establish, had a randomized approach been adopted, the generalizability of this evaluation's outcomes would have been considerably enhanced. Instead of relying on focus groups for the first-hand perspective of the participants, a confidential interview process might have been a better approach to mitigating social desirability bias. The relationship between the programs and the community as a whole could have been assessed further by adding family variables and analyzing the work environment in more depth. Internal validity was threatened by the self-reported data collection instruments; supplementing self-administration with a clinical assessment could have strengthened the internal validity of the evaluation process (Morris, 1999).

References

  1. Frechtling, J.A. (2007). Logic modeling methods in program evaluation. San Francisco: Jossey-Bass.
  2. Issel, L.M. (2009). Health program planning and evaluation: A practical, systematic approach for community health (2nd ed.). Jones and Bartlett.
  3. Kellogg, W.K. (2004). Logic model development guide. W.K. Kellogg Foundation.
  4. Leviton, L.C., & Lipsey, M.W. (2007). A big chapter about small theories: Theory as method: Small theories of treatments. New Directions for Evaluation, 2007(114), 27-62.
  5. Lipsey, M.W. (1993). Theory as method: Small theories of treatments. New Directions for Program Evaluation, 1993(57), 5-38.
  6. Mark, M.M. (2000). From program theory to tests of program theory. New Directions for Program Evaluation, 22(45), 37-51.
  7. Mark, M.M. (1998). A realist theory of evaluation practice. New Directions for Evaluation, 11(78), 3-32.
  8. McQuiston, T.H. (2007). Empowerment evaluation of worker safety and health education programs. American Journal of Industrial Medicine, 38(5), 584-597.
  9. Morris, M. (1999). The role of single evaluation courses in evaluation training. New Directions for Program Evaluation, 14(62), 51-59.
  10. Rogers, P.J. (2004). Program theory evaluation: Practice, promise, and problems. Evaluation Methods, 2(8), 5-13.
  11. Rogers, P., & Weiss, C.H. (2007). Theory-based evaluation: Reflections ten years on: Theory-based evaluation: Past, present, and future. New Directions for Evaluation, 2007(114), 63-81.
  12. Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 2001(89), 7-98.
  13. The Colorado Trust. (2007). Colorado Healthy People 2010 Initiative: Obesity Prevention.
  14. Weiss, C.H. (2001). Theory-based evaluation: Past, present, and future. New Directions for Program Evaluation, 09(7), 41-55.
  15. Wholey, J.S., Hatry, H.P., & Newcomer, K.E. (Eds.). (2004). Handbook of practical program evaluation (2nd ed.). San Francisco: Jossey-Bass.