Sheila B Robinson

Reflections of an everyday educator/program evaluator/professional developer…LEARNER

#Eval13: #omgmqp, ESM, DataViz, Program Design, Blogging, and the Great Big Nerd Project

12 Comments

Here it is, less than a week after returning home from Evaluation 2013, and I’ve already used what I’ve learned in all three of my workplace settings. I’ve also enjoyed reading other bloggers’ conference highlights (see below for links), as they, in a sense, let me peer vicariously into sessions I didn’t attend, or enhance my own experience by offering a different perspective on sessions I did attend.

Here’s a recap (in a “longform” post, which, I’m told, is an effective blogging strategy) of what resonated most with me:

1.) #omgmqp

I’ve never met an evaluator who wasn’t a fan of Michael Quinn Patton (Whatsoever would we have to talk about?). I eagerly attended several sessions with this exceptionally brilliant evaluator and captivating presenter – the only evaluator I know with his own hashtag.

In his State of Developmental Evaluation in the Early 21st Century, Patton presented his analysis of the Evaluation 2013 catalog with respect to DE. He found 38 sessions that included DE in the title or abstract and shared that a content analysis of their descriptions revealed that sessions aligned with several themes: complexity and systems change, DE in practice and action (from theory to practice), DE applied to a specific area (e.g. human services, schools, etc.), DE combined with other approaches, and the role and position of the DE evaluator.

Patton emphasized that DE is responsive to culture and context, and as an evaluation approach, it is a “cultural chameleon” in that it takes on and is sensitive to local context. DE is not method prescriptive, and outcomes and indicators may not be predetermined for a program undergoing DE. In fact, in some cases, it may even be culturally insensitive to predetermine outcomes. A traditional logic model, Patton claims, cannot capture the complexity inherent in an innovative program where the destination is not predetermined, nor is the journey itself. DE, however, can offer a depth of understanding of a complex, dynamic program where traditional approaches simply cannot.

In his State of Qualitative Methods in the 21st Century, Patton highlighted some straightforward timely trends including powerful qualitative data analysis software (along with support systems and training); social media as data, for data collection, and for sharing findings; ethical challenges such as anticipating impact on participants, confidentiality with small sample sizes, appropriate compensation, and IRB constraints; the use of mixed methods; and data visualization.

He also spoke of “more nuanced” aspects of current practice in qualitative evaluation including that it is driven by evaluation practice. People are doing evaluation with methods vs. the theoretical traditions of inquiry. Qualitative evaluators are doing more “tip of the iceberg” findings in order to get them out fast, and in doing so, are relying more on interviews and document reviews, and less on observation. He claims that observation is underutilized and site visits less common. Patton laments, “we’ve cut to a bottom line – a small part of what [qualitative evaluation] is about. We’re losing a major part of the work.”

He spoke of qualitative evaluation as an intervention in the context of process use. Reflective practice experiences are “forms of engagement where the evaluation functions and the inquiry functions are merged and can’t be disentangled from the intervention.” He also spoke of valuing deep contextual understanding and maintains that qualitative inquiry has a high degree of sensitivity to context. “Sensitivity to context,” he proffers, “becomes a value-added dimension of qualitative inquiry.”

The qualitative evaluator as the instrument was his next point, as he emphasized that “who does the work matters.” What the evaluator brings to the work becomes more important as experience, expertise, and cultural competence are developed in the individual. Patton continued with an articulation of purposeful sampling options: “What you have something to say about is what you sample,” and admitted that this is a huge area of misunderstanding and controversy. He explained the terms purposeful vs. purposive sampling, sharing that he prefers the former and finds the latter somewhat nonsensical, though there is no conceptual difference between the terms.

Finally, Patton closed this session with his own take on Hamlet’s ubiquitous soliloquy: “To sample or not to sample…” If anyone has a recording or transcription I’d love to have it!

Check out the cartoon by Chris Lysy on the “Michael Quinn Patton system.” Lysy aptly prognosticated in anticipation of the conference, “the development of star systems caused by evaluators being pulled in by the gravity of their evaluation heroes.” I’m pretty sure that’s me just southeast of MQP, almost to the inner circle.

2.) ESM = Evaluation Specific Methodology

I registered for this pre-conference professional development workshop because I feel no evaluator should go without the extraordinary experience of learning from Michael Scriven himself. You don’t have to agree with Scriven on all points, but you simply must hear him speak and read at least some of his work (the man has 450+ publications; his CV is 41 pages!) to appreciate his unique perspicacity and unfailing dedication to the field.

The icing on the cake of this workshop was that Jane Davidson presented alongside Scriven, creating a dynamic duo, to say the least. When Professor Scriven launched into the history of evaluation, I thought he would do so with what seemed to me a natural starting point – Ralph Tyler and the “Eight Year Study.” Not a chance. No, Scriven gave us the history of evaluation starting about two million years ago, his estimate of the time Homo sapiens emerged. Yes, evaluation is truly the oldest profession (my words, not his). Early humans made everyday decisions (read: evaluation) about what foods to eat and what tools to use, and thus had to assess the quality therein and subsequently share this evaluation know-how with other humans. Simply put, they had to make evaluative claims in order to survive.

“Each major development for humans,” Scriven claimed, “increased the ease with which evaluative knowledge could be disseminated…the roots of evaluation came from the concerns of early humans.”

Professor Scriven is a devout advocate for evaluation as the alpha discipline, and he readily shared his line of thinking on this with the group. He claims evaluation is most certainly a discipline, and it is quite easy to prove that it is also a transdiscipline. Transdisciplines are key “tool disciplines” and have two roles: 1.) they are disciplines in their own right; and 2.) they are tools used by other disciplines on a grand scale. In fact, they are crucial to other disciplines. Logic, Scriven argues, is a powerful example of a transdiscipline, as is statistics. As the alpha discipline, evaluation “checks the credentials of other disciplines.” It “controls the direction of the pack.” Accordingly, “evaluation has to be high status because there is high payoff in benefits to humankind.”

With regard to ESM, Scriven and Davidson maintained that while we get non-evaluation-specific coursework in graduate school (e.g. RCTs, statistics, interviews, surveys, content analysis, causal inference methods), we don’t tend to get ESM (e.g. needs and values assessment, merit determination, importance weighting, evaluative synthesis, value-for-money analysis). And, they claim, if we are not using ESM, we are not doing evaluation.

Davidson and Scriven continued the course with an extended discussion of evaluative tasks – 1.) critical description of evaluand; 2.) point of view; 3.) identify and define relevant values; 4.) dimensions of merit; 5.) weight the values; 6.) validate values & weights; 7.) fieldwork / gather evidence; and 8.) convert, synthesize via rubrics – with special emphasis on point of view. Point of view, Davidson claims, goes beyond just getting different perspectives. She offered the example of buying a watch as a metaphor for approaching an evaluation. “Are you evaluating a watch for its ability to keep time, or as a piece of jewelry? What is the frame or angle of the evaluation? In what ways are you looking at the evaluand?” Scriven jumped in and stated, “the point of view determines what will be the relevant evaluation questions. You need to be clear what point of view you are investigating.” Davidson then launched into an explanation of rubrics (illustrated by a richly described New Zealand example) as one evaluative tool to assist evaluators in interpreting evidence in evaluative ways. “Rubrics help us define what quality and value should be,” she explained.

The ESM workshop was a heady, theoretical, inspiring first day of what proved to be a stellar conference week.

3.) DataViz – Smart Data Presentation

I attended yet another session with evaluator-turned-information-designer Stephanie Evergreen. I never tire of Evergreen’s energetic enumeration of graphic design principles that support the effective communication of evaluation evidence. I attended her full-day session at AEA’s Summer Institute last June, and found then, as I did last week, that there is always more for me to learn; as she stays on top of her game, so do her learners. This time around she shared a data presentation theory of change: REPORT (input) > SEE (activity) > THINK (outcome) > REMEMBER (outcome) > USE (outcome). “The better the communication,” Evergreen avers, “the more likely we will get to use.” She described the “pictorial superiority effect” (many references exist online), which, very simply described, is “what we see is what we know” and forms the basis for her argument for using high-quality visuals in presentations. She deems effective data presentation “the last frontier of evaluation use.”

After a brief go-round in true “What Not to Wear” fashion, Evergreen took us through some font-fitting exercises, with special attention to feet. Serif (or footed) fonts are for reports, while sans serif (without feet) fonts are for slides. Decorative fonts, like all accessories (my word, not hers), can be used selectively for impact or emphasis, as can an action color against an otherwise grayscale chart. Evergreen cautioned, “if the visuals don’t pull them in, they’re not going to read [the report].”

She then took participants through the steps of working from a default Excel chart to one that meets the Evergreen standard for “simplification and then emphasis”: removing the legend, tick marks, axis labels, gridlines, and bar colors (except for that action color); reordering and widening the bars; and adding data labels and a headline, among a few other minor tweaks. Essentially, she eliminated “chartjunk” before dressing up the chart for presentation, smartly, of course. After extolling the virtues and pitfalls of various chart types and the use of small multiples, she moved on to the topic of “message” and challenged participants to create their six-word presentation story. “Memorable messages,” Evergreen maintains, “are logical and emotional.”
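For evaluators who build charts in code rather than Excel, here is a rough sketch of the same “simplification and then emphasis” idea using Python and matplotlib. This is my own illustration with made-up data and labels, not Evergreen’s Excel workflow: strip the chartjunk, gray out the bars, and reserve a single action color, data labels, and a headline for the point you want readers to take away.

import matplotlib.pyplot as plt

# Hypothetical survey results, already ordered from largest to smallest
labels = ["Mentoring", "Workshops", "Coaching", "Webinars"]
values = [62, 48, 33, 21]

# Gray bars everywhere except one "action color" on the key finding
colors = ["#b0b0b0"] * len(values)
colors[0] = "#d95f02"

fig, ax = plt.subplots()
# Reverse the lists so the largest value sits at the top of the chart
ax.barh(labels[::-1], values[::-1], color=colors[::-1], height=0.6)

# Simplification: remove gridlines, tick marks, spines, and the x-axis scale
ax.grid(False)
ax.tick_params(length=0)
for spine in ax.spines.values():
    spine.set_visible(False)
ax.set_xticks([])

# Emphasis: data labels carry the values, and a headline states the takeaway
for y, v in enumerate(values[::-1]):
    ax.text(v + 1, y, f"{v}%", va="center")
ax.set_title("Mentoring rated most useful by participants", loc="left")

plt.tight_layout()
plt.show()

The result is the same kind of pared-down, headlined bar chart Evergreen built up from the Excel default.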

I strongly recommend getting information on information design for evaluation from the designer herself: check out stephanieevergreen.com and look for her new book, Presenting Data Effectively. I was quite fond of telling folks at the conference that I practically knocked someone over at the SAGE Publications vendor table competing for one of the limited copies for sale there.

4.) Beyond the Basics of Program Design: A Theory Driven Approach

I rounded out my week with the last remaining #eval13 diehards on Sunday with a professional development course with Stewart Donaldson and John Gargani. Their premise as they opened the course was that evaluators are often program designers, or program re-designers. Using a particular program as a case study, they posed the question: “What are the most important actions that [the program] must make to achieve its desired impacts?” A systematic approach to program design is necessary, they claimed, because many programs are not put together in a systematic, logical way. No surprises here! They offered their definitions of program – “A reliable way of producing impact” – and of program design – “Everything that someone needs to implement a close replication of a program and produce the desired impact without ever speaking to the designer.” They readily admitted that the latter definition is more aspiration than reality.

In order to help participants understand Program Design, the presenters shared a diagram with program design at the center. Out from that center are spokes for Impact Design, Process Design, Business Design, Values Design, and Evaluation Design. Donaldson and Gargani warned participants that the values being promoted by the program may change over time; this may be purposeful and by design, or by accident. Sometimes programs end up promoting values they don’t, in fact, want to promote.

Program design, they explained, is a way to operationalize your program theory – it’s a representation of program theory. They went on to elucidate the rules for their “Very Simple Process Flow Diagram” – 3 boxes, 3 rules. The boxes are IN, DO, and OUT. DO is expressed as a subject-verb-object, as in “staff enroll participants” or “facilitators train participants.” The three rules are: 1.) Every box in the impact design corresponds to at least one DO box; 2.) Every box in the process design has a subject-verb-object label; and 3.) Every subject has at least one path from IN to OUT (there may be multiple ways for people to move in or out). The presenters called the conduits between boxes “pipes” as they explained that the pipes may be leaky (i.e. people may drop out or take different paths). The boundaries of a program, they admitted, may not be clear, and they advised that we should stop when it’s no longer helpful to pursue a path.
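To make those three rules concrete, here is a small sketch of how a process design could be represented and checked programmatically in Python. It is entirely my own illustration with hypothetical boxes, pipes, and impact boxes, not a tool from the workshop, and rule 3 is simplified to a single path check from IN to OUT rather than a per-subject check.

# Toy process design: each box carries a subject-verb-object label;
# "pipes" connect boxes. All names here are hypothetical.
process_boxes = {
    "IN":   ("participants", "enter", "program"),
    "DO-1": ("staff", "enroll", "participants"),
    "DO-2": ("facilitators", "train", "participants"),
    "OUT":  ("participants", "complete", "program"),
}
pipes = [("IN", "DO-1"), ("DO-1", "DO-2"), ("DO-2", "OUT")]

# Impact design: each impact box maps to the DO boxes meant to produce it
impact_boxes = {"participants gain skills": ["DO-2"]}

# Rule 1: every box in the impact design corresponds to at least one DO box
rule1 = all(len(do_list) >= 1 for do_list in impact_boxes.values())

# Rule 2: every box in the process design has a subject-verb-object label
rule2 = all(len(label) == 3 for label in process_boxes.values())

# Rule 3 (simplified): at least one path through the pipes from IN to OUT
def path_exists(start, end, edges):
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop()
        if node == end:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(dst for src, dst in edges if src == node)
    return False

rule3 = path_exists("IN", "OUT", pipes)
print(rule1, rule2, rule3)  # all True for this toy design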

5.) Blogging for fun and sport!

I had the distinct and unique pleasure to present a session with three of my favorite evaluation folks – Ann K. Emery, Chris Lysy, and Susan Kistler. The four of us have been corresponding for quite some time and share an interest in and enthusiasm for evaluation blogging. We come from diverse perspectives, professions, and levels of interest and experience in blogging, along with different goals and future plans for our own blogs. I thought we made an excellent team to offer insight and advice to the potential, new, and experienced bloggers who attended. In fact, I continue to be awed and humbled by their combined wisdom, creativity, insight, and generosity, as each has offered me support, encouragement, and advice as the newest blogger of the group.

Here are the slides we used to present: 

6.) The Great Big Nerd Project

Decades after realizing I just wasn’t one of the cool kids, I’ve come to terms with being nerdy. In fact, I do believe it’s the “in” thing now (housewives and other ill-behaved reality TV stars notwithstanding), making a welcome resurgence after our famed, aptly titled 1980s movie.

Just before #eval13, I purchased a copy of The Future of Evaluation in Society: A Tribute to Michael Scriven, and, as I mentioned in an earlier post, it arrived just in time to tuck into the suitcase with the intent of having it autographed by the tributee. After a successful bid for the professor’s signature, I opened the cover to peruse the table of contents, only to discover that, to my knowledge, just about every contributing author was there in DC. It was then I established my quest and, with a little green sticky-note list in hand, began hunting evaluators. For the next 50+ hours I peered around corners, squinted at name tags, and enlisted the help of some well-connected colleagues. One shouted at me from down the hotel hallway, pointing wildly, “Sheila! There’s Jennifer Greene! Go get her!” Another stole me away from a friend at the reception to point out Ernie House. One by one, I approached them all, any remnants of bashful inhibition melting away with each subsequent signature. Only Daniel Stufflebeam was not to be found, and I understood later that he was unable to attend the conference after all. I would have loved the opportunity to meet him, and I fantasize about sending him a little blank white card to sign and send back so that I can paste it into my treasure.

Each evaluator was incredibly kind, affable, and accommodating as I interrupted conversations, stopped them in hallways, or followed them out of elevators. After all, I’m hardly alone in my interest in chatting with “the big names” and had to compete for time and space with more experienced and accomplished evaluators. Would you like to know who is on the list of contributing authors to this terrific tome? Of course you would! I strongly suggest you purchase the book, though. It’s fabulously well-written and a fascinating read. No surprise there, if you’re a devout evaluation nerd.


My #eval13 treasure – The Great Big Nerd Project!

As I started typing this list, a song popped into my head. With apologies to Kylie Hutchinson, who, as far as I can tell, is in charge of all evaluation-related carols, this one is sung to the tune of Rudolph the Red-Nosed Reindeer:

You know Donaldson, Patton, and Hopson and Kirkhart

Stufflebeam, Christie, House, Greene, Stake, and Mel Mark.

But do you recaaaaaaaall the most famous evaluator of all?

Scriven, the —

Well, that’s just about enough singing for one day, now. Anyone care to finish the line? I just couldn’t bear to be so irreverent.

SO…did you make it to the end (or even skip to the end)? If so, I’d love to know your thoughts on this first longform post.

To enjoy different perspectives on #eval13, check out these creative and insightful conference reflections from fellow bloggers: Ann K Emery, Chris Lysy, John Gargani, Chi Yan Lam, James Pann, Ann Price, Bronwyn Mauldin, and Mary S Nash. If you are aware of others who have blogged about their experiences at #eval13, please let me know!

Author: Sheila B Robinson, Ed. D

Custom Professional Learning, LLC sheilabrobinson.com

12 thoughts on “#Eval13: #omgmqp, ESM, DataViz, Program Design, Blogging, and the Great Big Nerd Project”

  1. Sheila, this is an OUTSTANDING POST!! It is great for someone like me who couldn’t attend #eval13. I was really surprised that we like/love the same kind of evaluation topics, as I would have chosen the same workshops you did… and I’m awaiting the arrival of “The Future of Evaluation in Society.” If I can attend #eval14 I will chase the authors too!

    • Muchas gracias Pablo! It was difficult for me to stop writing as I attended so many other great sessions as well. Of note were two fascinating sessions on the topic of evaluative thinking. I also attended Susan Kistler’s 25 Low-Cost / No-Cost Tech Tools, a perennial conference favorite. I’ll have to write about that after actually trying some of them. 🙂 Hope to meet you in Denver in 2014!

  2. Fantastic Sheila, of course you know I like the long ones 🙂

    • Thanks for the encouragement Chris! I took a quick look around and found a lot of references to and interesting articles on long-form blogging and journalism, most from just the last few months. I like that! I’m so used to writing in 400-word sound bites. I like the idea of composing longer pieces for the blog, when the topic calls for it. Please continue to keep me posted on the latest trends in blogging! 🙂

  3. I enjoyed reading your conference recap Sheila! #omgmqp is like my favorite hashtag now!

  4. Sheila — first, thanks for the motivating blog panel…and I was also at the MQP State of Dev Eval session, where I was most impressed both by the questions from a few audience skeptics and by Michael’s responses.

  5. Thank you for sharing (and for following through with sharing what you’ve learned at Patton’s qual session!) :D.

  6. Thanks Sheila for the conference reflections. Back home in NZ from AEA13, I’m only now beginning to think about pulling some conference reflections together for the anzea (Aotearoa New Zealand Evaluation Association) newsletter – and maybe even resurrecting my blog, which has been largely dormant for the last 18 months.

    Thanks also for the links to the other AEA13 conference bloggers. It’s a good way to get a sense of how others experienced the conference and also to get feedback on sessions and presenters.

    It’s really interesting that, with over 600 sessions, there are so few double-ups. I think I only had one session in common with all of the AEA13 conference blogs, and that was with Chi Yan Lam: Michael Quinn Patton’s state of the nation on Developmental Evaluation.

    Jane Davidson had one blog post on the conference: http://genuineevaluation.com/case-studies-of-evaluators-lives-a-cultural-perspective-yes-culture/ Only one this year; normally, with Patricia Rogers, she has a few more, but I think she had a busy program of workshops and presentations – on her own account and with Michael Scriven.

    Roll on AEA14. See you in Denver.

    • Thanks for your comments! I truly enjoyed reading others’ accounts as well. You may find a few additional links over at emeryevaluation.com, Ann K. Emery’s blog, as she collected conference reflections as well. Do resurrect your blog, and good luck! 🙂
