Program Evaluation in Sport | AASP Conference 2015

AMERICAN EVALUATION ASSOCIATION

Main Website – eval.org

AEA Guiding Principles for Evaluators

TYPES OF EVALUATIONS

  1. Needs Assessment
  2. Assessment of Program Theory
  3. Assessment of Program Process (process evaluation)
  4. Impact Assessment (outcome evaluation)
  5. Efficiency Assessment

TYPES OF QUESTIONS WELL-SUITED TO PROGRAM EVALUATION

  1. What is the nature and scope of the problem? Where is it located, whom does it affect, and how does it affect them?
  2. What is it about the problem or its effects that justifies new, expanded, or modified social programs?
  3. What feasible interventions are likely to significantly ameliorate the problem?
  4. What are the appropriate target populations for interventions?
  5. Is a particular intervention reaching its target population?
  6. Is the intervention being implemented well? Are the intended services being provided?
  7. Is the intervention effective in attaining the desired goals or benefits?
  8. How much does the program cost?
  9. Is the program cost reasonable in relation to effectiveness and benefits?

EVALUATION THEORETICAL FRAMEWORKS

From betterevaluation.org

Utilization-Focused Evaluation

Participatory Evaluation

GRAPHICS

USA Swimming (USAS) Foundations of Coaching Logic Model

CHALLENGES IN PROGRAM EVALUATION AND APPLICABLE SPORT PSYCHOLOGY SKILLS

Challenges in program evaluation and sport psychology skills that can be applied

REFERENCES

  1. Alkin, M. C., & Christie, C. A. (2004). An evaluation theory tree. In M. C. Alkin (ed.), Evaluation roots: Tracing theorists’ views and influences. Thousand Oaks, CA: Sage.
  2. American Evaluation Association. (2004). American Evaluation Association Guiding Principles for Evaluators. Retrieved November 2013, from http://www.eval.org/p/cm/ld/fid=51.
  3. American Evaluation Association. (2013). The Program Evaluation Standards. Joint Committee for Standards in Educational Evaluation. Retrieved October 2013, from http://www.eval.org/p/cm/ld/fid=103.
  4. Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators’ self-reported opinions, practices, and consequences. Evaluation Practice, 17(3), 207-225.
  5. Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5-23.
  6. Kaner, S., Lind, L., Toldi, C., Fisk, S., & Berger, D. (2007). Facilitator’s guide to participatory decision-making. San Francisco, CA: Jossey-Bass.
  7. Linnan, L., & Steckler, A. (2002). Process evaluation for public health interventions and research. In A. Steckler & L. Linnan (Eds.) Process evaluation for public health interventions and research (pp. 1-23). San Francisco, CA: Jossey-Bass.
  8. Miller, R.L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal of Evaluation, 31, 390-399.
  9. Patton, M. Q. (2002). Qualitative Research & Evaluation Methods (3rd edition). Thousand Oaks, CA: Sage.
  10. Patton, M. Q. (2011). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.
  11. Patton, M. Q. (2014). What brain sciences reveal about integrating theory and practice. American Journal of Evaluation, 35(2), 237-244. DOI: 10.1177/1098214013503700
  12. Patton, M. Q. (2015). Qualitative Research & Evaluation Methods (4th edition). Thousand Oaks, CA: Sage.
  13. Poister, T.H. (2004). Performance monitoring. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (2nd edition) (pp. 98-125). San Francisco: Jossey Bass.
  14. Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.
  15. Scriven, M. (1991). Evaluation Thesaurus. Newbury Park, CA: Sage.
  16. Wholey, J.S. (1996). Formative and summative evaluation: Related issues in performance measurement. Evaluation Practice, 17, 145-149.
  17. Wholey, J. S. (2004). Evaluability assessment. In J. S. Wholey, H. P. Hatry, K. E. Newcomer (eds.), Handbook of Practical Program Evaluation. San Francisco, CA: Jossey-Bass.
  18. Witkin, B.R., & Altschuld, J.W. (1995). Planning and conducting needs assessments: A practical guide. Thousand Oaks, CA: Sage.
  19. W. K. Kellogg Foundation. (2004). Logic model development guide. Battle Creek, MI. Available free from wkkf.org.
  20. Yarbrough, D. B., Shulha, L. M., Hopson, R. K., and Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.