Did this meeting meet your expectations?

Reviewing a Presentation or Short Course

Conrad Weisert, April 19, 2007
©2007, Information Disciplines, Inc.

NOTE: This article may be reproduced and circulated freely, as long as the copyright credit is included.


We've all attended presentations where we were asked to fill out an evaluation form and turn it in at the end. The information is tallied and supposedly used by the presenters to improve aspects of their work and by the organizers to choose future speakers.1 Such forms, unfortunately, are often poorly designed and don't elicit the information the presenters and the sponsor-organizers really need.

Superlatives

I went to another awful meeting yesterday, a blend of a professional society meeting and a vendor pitch. As I dutifully dropped my evaluation form on the pile, my eye fell on the form on top. The participant ahead of me had checked 5, the highest rating, in every category. "A colleague of the speaker, no doubt," I thought.

But with my curiosity piqued, I sneaked a look at a couple of the other forms. There were loads of 5s, perhaps half of all the responses! How could that be? They couldn't all be friends of the speaker.

The explanation seems to be a combination of:
  • rating inflation, analogous to academic grade inflation. The speaker's pants didn't fall down while he was talking, so he gets an A.

  • insufficient granularity in the scale. Five levels are too few, and anything less than the top score is seen as an insult to the speaker.

We want the top rating to reflect a bona fide superlative, as if to say: "That was one of the finest presentations I've ever heard. It's hard to imagine that I'll ever witness a better one."

An 11-level scale (0 to 10) conveys a more realistic invitation to the participant. You can give a well-qualified speaker 7 or 8 without insulting him or her. A rare 10 denotes a stunningly memorable program.

Addendum, May 2013:
Meetup adds to the confusion

The popular web platform Meetup is being used by a growing number of small organizations that present programs featuring some expert speaker or panel. Unfortunately, they, too, ask meeting participants to rate programs on a scale of 1 to 5.

I recently got scolded by a program organizer for giving a 4 rating to a competent speaker. "What was wrong with his presentation?" the complainer demanded. I tried to explain that nothing at all was "wrong" and that 4 is an excellent rating, but my challenger would have none of that. To him anything below the top 5 grade was a negative assessment, and my rating had insulted the speaker!

Aggregating multiple speakers

I've felt frustrated, especially at vendor presentations, by a form that asks me to rate various aspects of "the speaker", when multiple speakers have participated. How do I tell the vendor that one of the speakers was superb and another was a bumbling incompetent? When I'm given such a form I assume that the careless session sponsors don't really care about my opinions.

This problem is easy to solve: Either distribute a separate form for each part of the program or (better) design a single form with a block for each speaker or a matrix of speakers and characteristics.
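
A matrix form also keeps the tallying simple. Below is a minimal sketch, in Python, of how per-speaker, per-characteristic ratings might be aggregated from such forms; the speaker names, the characteristics, the 0-to-10 scores, and the data layout are all hypothetical illustrations, not anything prescribed by this article.

  # Hypothetical tally of a speaker-by-characteristic rating matrix.
  # Names, characteristics, and scores below are illustrative only.
  from collections import defaultdict

  # Each completed form: {speaker: {characteristic: rating on a 0-10 scale}}
  forms = [
      {"Speaker A": {"content": 8, "delivery": 9},
       "Speaker B": {"content": 3, "delivery": 2}},
      {"Speaker A": {"content": 7, "delivery": 8},
       "Speaker B": {"content": 4, "delivery": 3}},
  ]

  # Collect every score for each (speaker, characteristic) cell.
  scores = defaultdict(lambda: defaultdict(list))
  for form in forms:
      for speaker, ratings in form.items():
          for characteristic, score in ratings.items():
              scores[speaker][characteristic].append(score)

  # Report the mean rating per cell of the matrix.
  for speaker, ratings in sorted(scores.items()):
      for characteristic, cell in sorted(ratings.items()):
          print(f"{speaker:10s} {characteristic:10s} mean {sum(cell)/len(cell):.1f}")

Each line of the printed summary corresponds to one cell of the matrix on the form, so the sponsor can see at a glance which speaker earned which marks.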

Distinguishing between style and substance

In these times of fad methodologies, we attend more and more programs where some "expert" gives a slick, well-organized presentation, with excellent graphics and handout material, in support of a seriously flawed practice. I've reported on several such programs2 recently.

I'd like to give the speaker credit for being an excellent, well-prepared presenter. But I also need to report that he or she failed to persuade me and to express my concern about conveying dangerous notions to less experienced and more impressionable audience members. Some evaluation forms elicit that information, but many don't.

Inviting comments

It's extremely difficult to design multiple-choice questions that faithfully capture the respondent's full opinion. It's worth trying to do so only for very high-volume surveys. Asking me to rate the meeting location on a scale of 0 to 10 can't elicit my feelings if I think it's a lovely place with world-class food, but it took me an hour and a half to get there in rush-hour traffic.

Providing space for comments, not just for the questionnaire as a whole but for each major subject, may complicate the tallying, but you'll get more useful information.

Feedback to the speakers

Most seminar sponsors and professional organizations that use evaluation forms at all will send a summary tally to the speakers, but, surprisingly, a few don't. As a speaker I find it extremely frustrating to know that information was collected that I don't get to see. I'd like a chance to correct the negative aspects and to use the positive reviews for promotion.

Meeting expectations

A common question, often the first one, is:
  Did the program meet your expectations?   [ ] Yes   [ ] No

It's sometimes tempting to check the Yes box and append a comment: "I expected it to be an utter waste of time, and it was, so it fully met my expectations."

That's an extreme and somewhat flippant response, but it calls attention to the problem with the question. We have no idea what the participant's expectations may have been. If that's relevant, we should ask questions that specifically elicit that information.


1—And sometimes also by a vendor to generate sales leads.

2—Some examples are:
- It's Easier for Programmers to Talk to Each Other than to Document,
- Have We Lost Our Minds?,
- and How to Condemn the Traditional Approach.

Last modified April 26, 2013
