© Conrad Weisert, Information Disciplines, Inc., August, 2001
NOTE: This document may be circulated or quoted from freely, as long as the copyright credit is included.
Suppose we need to train our 12-person programming staff in a new language or tool, say C#. We get proposals for "Introductory C#" from two course vendors:
With vendor A, our programmers will get 50 hours of instruction at an out-of-pocket expense of $80/hour. With vendor B's course, they'll get only 18 hours of instruction at an out-of-pocket expense of $200/hour. If our criterion is the amount of classroom time or the cost per instructor-hour, then vendor A's proposal is by far the better deal.
Of course as experienced managers we know that the money we pay the instructor is but a fraction of the real cost of conducting a course. The main cost component is the time of the participants. Tying up 12 programmers for 50 hours is a huge cost, one we're willing to pay if it's worth it. Suppose we value our programmers' time internally at $60/hour. Every hour each programmer spends in class represents $60 worth of other work that's not getting done for our organization.
The real cost to our organization of the two courses, therefore, is:
| Course A | Course B |
|---|---|
| 12 × 50 × $60 = $36,000 | 12 × 18 × $60 = $12,960 |
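The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original article: the helper name `real_cost` is invented here, and it also folds in the vendors' out-of-pocket instructor fees from the proposals, since those are part of the total the organization pays.

```python
def real_cost(participants, hours, instructor_rate, internal_rate=60):
    """Total organizational cost of a course: the vendor's fee for the
    instruction hours plus the internal value of the participants' time."""
    instructor_fee = hours * instructor_rate          # out-of-pocket expense
    participant_time = participants * hours * internal_rate
    return instructor_fee + participant_time

# Figures from the two proposals: 12 programmers, $60/hour internal rate.
course_a = real_cost(participants=12, hours=50, instructor_rate=80)
course_b = real_cost(participants=12, hours=18, instructor_rate=200)

print(course_a)  # 50*80 + 12*50*60 = 40000
print(course_b)  # 18*200 + 12*18*60 = 16560
```

Even with vendor B's much higher hourly fee included, the participants' time dominates, so course B remains far cheaper in total.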
This more complete analysis shows that vendor B's course is actually much less expensive, if we assume that both courses instill the same degree of mastery of the concepts and techniques covered. But how do we know? How can we compare the results of putting our programmers through the two courses?
An abstract or course description for a professional-level course usually specifies a list of detailed objectives. Ideally those objectives should state what the participant will be able to do upon successfully completing the course, not just what he or she will know or understand. (For an example on this web site, see Project Planning and Control Concepts and Techniques.)
Furthermore, especially for short advanced courses, the behavioral objectives should be quite detailed. An objective that simply promises mastery of some specific tool, software product, or programming language is much too broad to evaluate. (See "C++" is not a Binary Skill). If one course specifies a list of well-defined behavioral objectives while another simply promises to cover "Introductory C#", we can almost always conclude that the first course will do a better job of imparting skills to our participants.
Most courses these days, both professional and academic, end by collecting students' evaluations. Such evaluations rate the content, the instructor, the textbook, the handout materials, etc., and are useful in diagnosing weak spots and making improvements to a course. Are they equally useful in assessing how well the course met its objectives for the students?
Actually, negative reviews are more useful than rave reviews. If the participants hated some aspect of the course or felt that their time wasn't well spent, then the course must be judged a failure. On the other hand, it's common for students, especially at introductory levels, to find the concepts stimulating, the instructor entertaining, the handouts and visual aids polished, and the whole environment highly competent and professional, but then, upon returning to their desks, to be unable to apply the skills the course taught them.
The ultimate criterion, then, is not how much the participants liked the course but rather how well they can perform the tasks that the course was supposed to have prepared them for.