Resolutions, Goals, and Data

A favorite moment from a spring run

Happy New Year and Happy MLK Day!

It's a new year and a time of reflection and new resolutions. I took a little time to look back at 2015. I'm thankful for many wonderful professional and personal experiences last year and for the work I get to do. Later today, I'm joining my community in this day of service.

As I was reviewing some data I'd tracked about myself (would you expect anything less?), I started thinking about connections to program evaluation.

Context matters. It helps you make meaning of the numbers.

Last year, I ran 698 miles across 174 runs. Is that a lot? A little? If you are a runner, and an elite one at that, then 698 may not seem like many. If you hate running, it may seem like a lot.

Here are just a few ways I analyzed my data and put my numbers in context.

Time as context

  • I ran an average of 1.9 miles per day, 13.3 miles per week, and 58.2 miles per month. On average, I ran about 3.3 days per week. 
  • In my best month (May), I ran 20 times, covering more than 93 miles. In my worst (September), I ran 6 times and only about 20 miles. For more than half of the year, I ran at least 15 times per month.
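These time-based figures all fall out of the two annual totals. As a quick illustration (a sketch in Python using the numbers from this post; small differences from my rounded figures come down to how you count weeks in a year):

```python
# Annual totals from the post.
TOTAL_MILES = 698
TOTAL_RUNS = 174

DAYS_PER_YEAR = 365
WEEKS_PER_YEAR = DAYS_PER_YEAR / 7  # ~52.1 weeks
MONTHS_PER_YEAR = 12

miles_per_day = TOTAL_MILES / DAYS_PER_YEAR      # ~1.9
miles_per_week = TOTAL_MILES / WEEKS_PER_YEAR    # ~13.4
miles_per_month = TOTAL_MILES / MONTHS_PER_YEAR  # ~58.2
runs_per_week = TOTAL_RUNS / WEEKS_PER_YEAR      # ~3.3

print(f"{miles_per_day:.1f} mi/day, {miles_per_week:.1f} mi/week, "
      f"{miles_per_month:.1f} mi/month, {runs_per_week:.1f} runs/week")
```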

Past performance

  • In 2015, I increased the number of miles I ran by 69% compared to 2014.
  • One month in 2014, I ran more than 15 times, but I never ran 20 times in a month the way I did multiple times in 2015.
  • For both 2014 and 2015, I ran the most in the month of May.
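The year-over-year comparison is another simple ratio; here is where that 69% figure comes from, computed from the two annual totals above:

```python
# Annual mileage totals from the post.
miles_2014 = 412
miles_2015 = 698

# Percent change relative to the earlier year.
pct_change = (miles_2015 - miles_2014) / miles_2014 * 100
print(f"2014 to 2015: {pct_change:.0f}% increase")  # 69% increase
```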

Distance

  • 412 miles (my total for 2014) is about the distance from Boston to Washington, DC.
  • 698 miles (my total for 2015) is more than the distance from Boston to Columbus, OH.
    • (Btw, this context is kind of depressing. After a year, I'd only get as far as Columbus!)

Evidence

  • I compared my miles to research on how many miles lead to different outcomes.
    • Improved health? Some research like this suggests that 20-30 minutes 2-3 times per week would suffice. So I did well in relation to this goal.
    • Improved time for a half-marathon? A lot of research-based training plans suggest a minimum of about 20-30 miles per week, so in the months I trained, I aimed for that. 

Goals

  • I compared my results to goals I'd set. At one point in 2015, I'd hoped to hit 730 miles for the year, or about an average of 2 miles per day. I didn't quite make it :( That's another way to make meaning of my results.
    • I've got company. You may have seen that in 2016, Mark Zuckerberg plans to run 365 miles, or 1 mile a day, for the year, and you can join this pledge.
    • This month, I'm running or walking outdoors 3 miles every day as part of a local running store's Winter Warrior Challenge. So far, I'm on track. Brrr.
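Performance against a goal is yet another ratio. A quick sketch with the numbers above:

```python
# Goal and result from the post.
goal_miles = 730   # about 2 miles per day for the year
actual_miles = 698

pct_of_goal = actual_miles / goal_miles * 100  # ~96%
shortfall = goal_miles - actual_miles          # 32 miles

print(f"Reached {pct_of_goal:.0f}% of the goal, {shortfall} miles short")
```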

Bringing in other contextual data to make meaning

  • So, I know that May has two long-distance races that I've done and trained for in both 2014 and 2015--the 10-mile Broad Street Run in Philly and the Run to Remember half-marathon in Boston. That helped explain why I ran more miles in May than other months in both of those years, but also....
  • May was a peak month for other activities I track. Using my Goodreads data, I saw that I read the most books in the month of May--5 books that month out of 26 finished for the year. What's up with me and May? Spring fever?

How does this apply to program evaluation?

  • Context matters, and some of the ways I've looked at my "quantified self" data are ways that we commonly track and explore data in program evaluation.
  • Some kinds of context can make sense to lots of people, even if they aren't experts on the particular kind of data you are presenting. Time or performance against goals are just two examples. Consider your audience.
  • Now, I have some information and insights that will help me plan the goals I'm setting for 2016. Ideally, evaluation helps us learn and improve.
  • And one study or set of analyses often leads to new questions like....

How can I overcome the instinct to hibernate in January and infuse it with some May spring-fever-like activity?

How do I make sure my program is ready for an evaluation study?

Last week, I attended the American Evaluation Association's conference, and I was reminded of a common challenge and some tools to help. Often an organization or a funder wants a randomized control trial or other rigorous study of a program. Of course, studies that provide a comparison or control group, or some kind of counterfactual, are important. What is also important? That these studies actually study "the program" compared to "not the program."

What do I mean by this? Well, imagine you are still designing a program when you start it. Imagine you have just started to implement it in a new place--maybe a place you don't know so well yet. So then, maybe in the first days or months there, you aren't able to actually deliver much of the program as intended. You are still working out kinks in your partnerships or training staff who are new to your model. If we compare your performance in that scenario to a control situation, we may not yet be measuring your program. If we see poor results, what have we learned? We haven't really learned that your program doesn't work, or which elements are most effective for whom, because there is so much noise in the way, and your program wasn't yet really your program.

Is the first time you ever rehearse a play with a new cast in a new location a true reflection of the production's quality? If we watched you flub your lines and your cast miss cues, without costumes or lights, would we really be seeing the play as written? Wouldn't it be better to give you a little time and structure to rehearse, so that we can compare your performance to another play we've seen?

So, I'm not saying a newer program can't be evaluated. But wouldn't it be helpful to know, assess, and then establish some pre-conditions before spending time and resources on a study? Recently, a colleague from an evaluation firm I've worked with shared that she often gets approached to do a study, only to find that the program isn't ready. The program may not yet have a logic model or theory of change, a defined intended duration, data on participation, or a sense of the staff time it will take to work with the evaluators.

There are a few tools out there that can help. This evaluability assessment from the Corporation for National and Community Service (CNCS)/Social Innovation Fund (SIF), developed by Lily Zandniapour and Nicole Vicinanza, is one example that I like and that you could modify. It helps you think about organizational readiness, program readiness, and evaluation readiness.

Have you seen or used similar tools? How are you getting ready? Can I help? Let me know.

Best,

Gretchen

American Evaluation Association conference next week

Next week, I'm headed to the annual American Evaluation Association (AEA) conference in Chicago. This year's conference focuses on exemplary evaluation--examples of the best "to inspire and energize evaluation professionals from around the world."

If you don't know about AEA, they are one of the largest professional associations for evaluators, and they share lots of great resources, such as tips of the day. If you become a member, then you have access to webinars, their journal, and additional learning materials.

I'm excited to meet up with old and new colleagues. I get to take a data visualization workshop with one of my favorites, Ann K. Emery. I'm also leading a symposium with City Year colleagues on our journey to measure students' social emotional learning.

Look forward to sharing highlights with you!

Gretchen