November 19, 2015
These slides may be useful for those looking to plan a systematic review.
Program Evaluation: A scientific approach to assessing and building capacity with sport communities and athletes
October 15, 2015
AMERICAN EVALUATION ASSOCIATION
Main Website – eval.org
TYPES OF EVALUATIONS
- Needs Assessment
- Assessment of Program Theory
- Assessment of Program Process (process evaluation)
- Impact Assessment (outcome evaluation)
- Efficiency Assessment
TYPES OF QUESTIONS WELL-SUITED TO PROGRAM EVALUATION
- What is the nature and scope of the problem? Where is it located, whom does it affect, and how does it affect them?
- What is it about the problem or its effects that justifies new, expanded, or modified social programs?
- What feasible interventions are likely to significantly ameliorate the problem?
- What are the appropriate target populations for interventions?
- Is a particular intervention reaching its target population?
- Is the intervention being implemented well? Are the intended services being provided?
- Is the intervention effective in attaining the desired goals or benefits?
- How much does the program cost?
- Is the program cost reasonable in relation to effectiveness and benefits?
EVALUATION THEORETICAL FRAMEWORKS
- Patton: “Confusing empathy with bias” – musing on neutrality in research
CHALLENGES IN PROGRAM EVALUATION AND APPLICABLE SPORT PSYCHOLOGY SKILLS
REFERENCES
- Alkin, M. C., & Christie, C. A. (2004). An evaluation theory tree. In M. C. Alkin (ed.), Evaluation roots: Tracing theorists’ views and influences. Thousand Oaks, CA: Sage.
- American Evaluation Association. (2004). American Evaluation Association Guiding Principles for Evaluators. Retrieved November 2013, from http://www.eval.org/p/cm/ld/fid=51.
- American Evaluation Association. (2013). The Program Evaluation Standards. Joint Committee for Standards in Educational Evaluation. Retrieved October 2013, from http://www.eval.org/p/cm/ld/fid=103.
- Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators’ self-reported opinions, practices, and consequences. Evaluation Practice, 17(3), 207-225.
- Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5-23.
- Kaner, S., Lind, L., Toldi, C., Fisk, S., & Berger, D. (2007). Facilitator’s guide to participatory decision-making. San Francisco, CA: Jossey-Bass.
- Linnan, L., & Steckler, A. (2002). Process evaluation for public health interventions and research. In A. Steckler & L. Linnan (Eds.) Process evaluation for public health interventions and research (pp. 1-23). San Francisco, CA: Jossey-Bass.
- Miller, R.L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal of Evaluation, 31, 390-399.
- Patton, M. Q. (2002). Qualitative Research & Evaluation Methods (3rd edition). Thousand Oaks, CA: Sage.
- Patton, M. Q. (2011). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.
- Patton, M. Q. (2014). What brain sciences reveal about integrating theory and practice. American Journal of Evaluation, 35(2), 237-244. DOI: 10.1177/1098214013503700
- Patton, M. Q. (2015). Qualitative Research & Evaluation Methods (4th edition). Thousand Oaks, CA: Sage.
- Poister, T.H. (2004). Performance monitoring. In J.S. Wholey, H.P. Hatry, & K.E. Newcomer (Eds.), Handbook of practical program evaluation (2nd edition) (pp. 98-125). San Francisco: Jossey Bass.
- Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.
- Scriven, M. (1991). Evaluation Thesaurus. Newbury Park, CA: Sage.
- Wholey, J.S. (1996). Formative and summative evaluation: Related issues in performance measurement. Evaluation Practice, 17, 145-149.
- Wholey, J. S. (2004). Evaluability assessment. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass.
- Witkin, B.R., & Altschuld, J.W. (1995). Planning and conducting needs assessments: A practical guide. Thousand Oaks, CA: Sage.
- W. K. Kellogg Foundation. (2004). Logic model development guide. Battle Creek, MI: Author. Available free from wkkf.org.
- Yarbrough, D. B., Shulha, L. M., Hopson, R. K., and Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
February 21, 2015
To follow up on a previous post that examined swimmer performance in the 50 freestyle at the Women’s B1G Championship meet, I wanted to examine the performance improvement from prelims to finals in a different event. I chose the women’s 400 IM, because it is tough to find an event more different from the 50 freestyle! First, the IM requires technical proficiency in all four strokes. Second, it requires a unique blend of endurance and power to sustain performance over 400 yards. (For non-swimmers, IM stands for individual medley, in which a swimmer swims 100 yards of each of the four competitive strokes: butterfly, backstroke, breaststroke, and freestyle, in that order.)
I used the same analytical procedure as in the 50-freestyle analysis, comparing the prelims and finals times for the 24 swimmers who made it back for one of the three heats of finals. Each swimmer appears on her own horizontal line, in rank order from top to bottom, with the first-place finisher from finals on the top line. The blue square denotes the performance in the night swim (finals, consols, or bonus consols); the grey square denotes the performance in the morning prelims session. Unlike the 50 freestyle, where only four of 16 swimmers improved their time in the night session, here in the 400 IM, 13 of 24 swimmers improved at night.
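The comparison logic itself is simple enough to sketch. A minimal version in Python, using made-up times rather than the actual meet results:

```python
# Sketch of the prelims-vs-finals comparison. Times (in seconds) are
# hypothetical stand-ins, not the actual B1G results.
def count_improvers(swims):
    """Count swimmers whose finals time beat their prelims time.

    swims: list of (prelims_time, finals_time) pairs, one per swimmer.
    """
    return sum(1 for prelims, finals in swims if finals < prelims)

# Hypothetical 400 IM times for three swimmers:
swims = [
    (256.31, 251.09),  # improved at night
    (254.88, 254.90),  # essentially identical
    (255.40, 256.72),  # slower at night
]
print(count_improvers(swims))  # -> 1
```

Running the same comparison over all 24 prelims/finals pairs would reproduce the 13-of-24 count reported above.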
This phenomenon will probably not surprise most swimming coaches. The finals heats during the night session set up three new heats, or three new sets of races. Some swimmers improve, others swim slower... but what governs this process?
The effects appear to differ between the final heat and the consolation and bonus consolation heats. In the finals heat (1st through 8th place), five swimmers improved on their times, two posted performances nearly identical to their prelim swims, and one swimmer was nearly a second slower in finals. The general picture is a leftward shift on the graph, and it gets more pronounced at the top, with the first-place finisher (Brooke Zeiger of Minnesota) improving by over five seconds.
That picture does not appear in the consolation or bonus consolation finals. In those heats, the grey squares representing the prelim swims cluster roughly in the center, while the blue squares representing the finals swims spread out in both directions. The top half of each heat appears more likely to improve, while the bottom half tends to slow down.
The effect is most remarkable in the consolation finals heat (9th through 16th place), where the top half of finishers all improved on their prelims swims, and the bottom half went slower than prelims. If prelim swims are an indicator of potential, then technically this should have been any swimmer’s race to win. All eight swimmers were within two seconds of each other after prelims (4:14.50 to 4:16.46), and in a long race like the 400 IM, a two-second gap can be made up much more easily than in shorter races (like the 50 or 100 free). In fact, the winner of the consolation finals heat (Danielle Valley of Wisconsin) had the 15th-place time in prelims. Valley’s improvement moved her up six places, earning nine points for her team (instead of the two points she would have earned had she stayed in 15th place).
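The point swing from that place change follows from the individual scoring table. The table below is a common 16-place NCAA-style table, assumed here only because it matches the nine- and two-point figures above; the official B1G table may differ:

```python
# A common 16-place NCAA-style individual scoring table (finals places
# score 20 down to 11, consolation finals 9 down to 1). Assumption: it
# matches the 9- and 2-point values cited in the post, but the official
# B1G table may differ.
POINTS = {1: 20, 2: 17, 3: 16, 4: 15, 5: 14, 6: 13, 7: 12, 8: 11,
          9: 9, 10: 7, 11: 6, 12: 5, 13: 4, 14: 3, 15: 2, 16: 1}

def point_swing(prelims_place, finals_place):
    """Points gained (or lost) by finishing in a different place at night."""
    return POINTS[finals_place] - POINTS[prelims_place]

# Valley: 15th after prelims, 9th at night (winning the consolation final).
print(point_swing(15, 9))  # -> 7
```

A seven-point swing from one swim can easily matter in a close team race.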
So what does all of this mean? Well, I’m reminded of an interview I did years ago for my master’s thesis on mental toughness development in swimming. An accomplished (and now-retired) swimming coach told me that he had only seen about four or five true races in his career:
Usually, one swimmer says, “it’s mine,” and goes out and takes the win. Another says, “I’d be happy with second.” Another says, “third’s okay by me.” The decision is made before the race begins.
These data, especially from the consolation heat, seem to suggest there might be some truth to that statement. In all fairness, the 400 IM is a difficult and exhausting event, and perhaps the performances at the night session have more to do with physical conditioning and recovery than they have to do with a swimmer’s conscious decision that she would go out and take the win. In addition, a swimmer might strategically hold back in one event, in order to save energy for another event later in the session. But the picture is stunningly clear – the swimmers, by virtue of their performances in prelims, are all within striking distance of winning the heat, yet not all swimmers appear to go after the win.
One big caveat to this analysis is that this is only a descriptive analysis of performances. Theoretically, a swimmer’s performance should be limited by their seed-time coming into the meet; there is only so much time-improvement that a swimmer can experience in one season. A swimmer’s time in prelims probably correlates strongly with their season-best seed-time, but the relationship between the prelims time and the finals time is probably more complicated.
To my knowledge, there is no hard-and-fast metric that determines how much time-improvement a swimmer should expect – that remains one of the mysteries of swimming. However, given the vast amount of swimming data (times) that exist in the public realm, it is certainly possible to devise a metric for expected time improvements.
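As a rough illustration of what such a metric might look like, one could fit a least-squares line predicting finals time from prelims time across many historical swims, then score each new swim by its residual (negative means faster than the model expected). The sketch below uses hypothetical times, not real meet data:

```python
# Naive sketch of an "expected time" metric: fit an ordinary least-squares
# line predicting finals time from prelims time, then score a swim by its
# residual. All times (seconds) are hypothetical.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def residual(prelims, finals, slope, intercept):
    """Actual finals time minus the model's expected finals time."""
    return finals - (slope * prelims + intercept)

# Hypothetical prelims/finals pairs from past meets:
prelims = [255.0, 256.0, 257.0, 258.0]
finals = [254.0, 255.5, 256.5, 258.5]
slope, intercept = fit_line(prelims, finals)

# A new swim: prelims 256.0, finals 254.8. Faster than expected?
print(residual(256.0, 254.8, slope, intercept) < 0)  # -> True
```

A serious version would control for seed time, taper, event, and time of season, but the residual idea is the core of any "expected improvement" benchmark.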
Possible, yes. But it also takes some of the fun out of the sport. As anyone who has ever coached swimming can attest, there is probably no thrill greater than a breakthrough performance, especially one that you’re not expecting.
February 20, 2015
I have always contended that most 50-freestylers swim faster in the morning prelims session of a championship meet, and these data from today’s women’s B1G championship would seem to support that notion. Amongst the top 16 scoring swimmers, only four improved on their prelims times; two of those four were the first- and ninth-place finishers. I speculate that this is because those swimmers were racing to win their respective finals (and consolation finals) during the night-time session.
To those unfamiliar with swimming, in the prelims/finals system, a swimmer who finishes between 1st and 8th place in prelims swims in the final heat at night. In the finals, it’s a whole new race for 1st place. Swimmers who place between 9th and 16th place in prelims swim in the consolation final heat at night, and duke it out for 9th-16th place. These night-time swims matter, because the night-time results are used to determine the points achieved by each swimmer, which then tally up for their respective teams, thus determining the meet champion.
During the night-time finals swims, athletes are “locked in” to their respective heats: even if the 8th-place swimmer in the finals heat swims slower than the 9th-place swimmer in the consolation finals heat, the finals swimmer still scores as 8th place.
The graph shows the top 16 women’s 50 freestyle swimmers, with the morning prelims time displayed in grey and the night-time finals time displayed in blue. To speculate on what accounts for the night-time slow-down: perhaps 50-freestylers are fresher in the morning and get their best swim then. Or perhaps it owes to the short duration of the event, since the 50 is not as taxing on the body as longer events. Perhaps most swimmers simply go all-out and hold nothing back in the morning prelims swim. Although coaches, parents, and fans love to speculate on why this might be happening, I’m interested to see whether this relationship between morning and night swims holds up throughout the course of the meet.
The data visualization idea comes courtesy of Stephanie Evergreen (Evergreen Data | Kalamazoo, MI).
April 21, 2014
Supplemental material for this presentation can be accessed by clicking the link below. At this link, you will find a full list of references used in the preparation of this systematic review. In the same file, you will find a two-page chart that breaks down each of the 31 articles included in the systematic review.
Feel free to click through these slides if there is something you have missed.
March 25, 2013
What are the essential components of an ideal youth sport climate? Should the coach focus on teaching athletes the fundamental skills needed for the sport, focusing on the individual mastery for each player? Should the coach focus on building a caring climate for the players so that they feel like they are a part of the team, and that they are able to take calculated risks to improve their skills?
At the 2012 conference of the Association for Applied Sport Psychology, these questions arose several times for my colleagues and me. Based on my coaching experience, I firmly believe that a coach should focus on mastery of fundamental sport skills. In swimming, this meant mastering each of the four strokes in a way that did not produce injury. I felt that if a swimmer could improve his or her skills, this would lead to higher motivation and a sense of self-accomplishment.
But what about the sport climate? Is promoting mastery part of building a caring climate? What about intensive sport experiences, such as training camps? If the training camp creates a challenging (maybe even threatening) environment that enables athletes to reach new levels of skill mastery, how would athletes rate how “caring” that climate was? For instance, “tough love” can get results, and athletes can perceive the coach’s tough love as his way of showing that he cares. This stands in contrast to a climate where the coach provides a non-threatening atmosphere but never pushes athletes to improve.
Like so many cases in sport, much lies in the perception that the athlete has. A mastery-oriented athlete might perceive challenge as an opportunity to improve, while an ego-oriented athlete might perceive challenge as a threat that could show his weakness as an athlete. If a coach builds a sport climate that values mastery, that will involve challenging the athletes. How will his athletes perceive these challenges?
Sport psychology researchers will need to clarify what a caring climate looks like at each stage of athlete development, and clarify the best ways to push athletes to improve within this climate. I don’t think these two concepts are mutually exclusive. Indeed, when we look at surveys of why children play youth sports, they report that they want to be with their friends and have fun, and at the same time, they want to improve their skills.
To borrow a thought I have heard Dan Gould use many times, we need to figure out how to “dose adversity” so that we get the best response from the athlete. What’s appropriate at each stage of athlete development? What might be an appropriate dose for one 16-year-old but not for another? And how can we make this process simpler, so that coaches – new coaches, especially – can actually apply these scientific principles in their daily work with youth athletes?
November 10, 2012
Covered walkways appear to solve a problem of human movement in a busy, wintry, urban climate. They stretch from one building to the next, delivering humans unscathed from point A to point B. They seem to appear in places undergoing sudden redevelopment, almost as a precondition for a large hotel chain agreeing to build a branch in a city that is trying to build its convention industry.

What else do they communicate about the surrounding environment? These “hotel tubes” can deliver a hotel guest to a convention without the guest ever once interacting with the citizens of the town holding the convention. The hotel tube also tells would-be users that it is a place for hotel guests, a private space that bypasses the public space of the sidewalk. This hotel tube says more about the developer’s view of the social environment of Lansing than about the developer’s view of Lansing weather. The design’s failure is that the walkway traps its user and shuttles him through quickly. There is no opportunity to embrace the outdoor environment – you never leave the air-conditioning.
Quick fix: tear the roof off. Compare this design to NYC’s High Line project, which exposes an elevated walkway to the elements, making the walkway all the more inviting.