Characteristics of a program evaluation

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.

Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries. Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.

The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners.

Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.

You determine the market for evaluation results by focusing the evaluation on the questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where those questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.

The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups:

Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them?

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Propriety: Does the evaluation protect the rights of individuals and the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community?

Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

Sometimes the standards broaden your exploration of choices. Often, they help reduce the options at each step to a manageable number.

For example, when engaging stakeholders (Step 1), the standards prompt questions such as these. Feasibility: How much time and effort can be devoted to stakeholder engagement? Propriety: To be ethical, which stakeholders need to be consulted, those served by the program or the community in which it operates? Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program? Similarly, there are unlimited ways to gather credible evidence (Step 4).

Asking these same kinds of questions as you approach evidence gathering will help identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework supports the fundamental insight that there is no such thing as the "right" program evaluation.

Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation. Good evaluation requires a combination of skills that are rarely found in one person. The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise.

An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts. This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants.

The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks.

However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.

Seek additional evaluation expertise in other programs within the health department or through external partners. You can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view.

Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs.

Be sure to check all references carefully before you enter into a contract with any consultant. To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from local, regional, or national experts who would otherwise be difficult to access.

Such an advisory panel will lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.

Evaluation team members should clearly define their respective roles.

Second, we look for people who are smart, intellectually curious, and good at solving problems. This is harder to judge, but we consider college grades, standardized test scores, academic honors, the interview results, references, and a work exercise we administer as part of the process. This exercise is a case study that asks candidates to review background information on a fictional state program and to identify potential issues, research methodologies, and potential recommendations.

Third, we look for people who are self-starters and hard workers. This again is hard to judge, but we consider college success, prior jobs, references, the work exercise, and interview results.

Fourth, we look for people who can successfully work as part of a research team. Prior team experience in college and jobs, references, and interview results give us some insight into this.

Hiring smart, hardworking researchers who are committed to doing a good job and who can work well with others, both within OPPAGA and in the outside world, makes this job a lot easier. We don't try to hire people based on their writing skills. We have never found a good way to determine whether someone is a good writer during the selection process; prior work products are not a good evidence source because most are team projects that have been edited by others.

We will note if someone seems to be a weak writer based on the work exercise and may decide not to hire them on that basis. However, we concentrate on training and developing writing skills once good people are on board. One quality that I believe is very helpful is the ability to communicate well.

We get so much of our information through personal contact with others: auditees, colleagues, consultants, etc.

The results of evaluation are often used by stakeholders to improve or increase the capacity of the program or activity. This type of evaluation needs to identify the relevant community and establish its perspectives so that the views of engagement leaders and all the important components of the community are used to identify areas for improvement.

This approach includes determining whether the appropriate persons or organizations are involved; the activities they are involved in; whether participants feel they have significant input; and how engagement develops, matures, and is sustained.

Research is hypothesis-driven, often initiated and controlled by an investigator, concerned with research standards of internal and external validity, and designed to generate facts, remain value-free, and focus on specific variables.

Research establishes a time sequence and controls for potential confounding variables. Often, the research is widely disseminated. Evaluation, in contrast, may or may not contribute to generalizable knowledge. The primary purposes of an evaluation are to assess the processes and outcomes of a specific initiative and to facilitate ongoing program management.

Formative evaluation provides information to guide program improvement, whereas process evaluation determines whether a program is delivered as intended to the targeted recipients (Rossi et al.).

Summative evaluation informs judgments about whether the program worked. Outcome evaluation focuses on the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed. Whereas outcome evaluation tends to focus on the conditions or behaviors that the program was expected to affect most directly and immediately, impact evaluation examines more distal, longer-term effects.

For example, assessing the strategies used to implement a smoking cessation program and determining the degree to which it reached the target population are process evaluations. Reduction in morbidity and mortality associated with cardiovascular disease may represent an impact goal for a smoking cessation program (Rossi et al.). Several institutions have identified guidelines for an effective evaluation.


