Saturday, May 11

(originally posted by Clark Quinn)

Well, I really want to reply to Peter, but right now I've got a bee up my bonnet, and I want to vent (how's that for mixing my metaphors?). I'll get to Peter's comments in a moment...


In recent work, I've reliably been coming up against a requirement for a pre-test. And I can't for the life of me figure out why: they're not using the data for anything but comparing it to the post-test! This didn't make any sense to me, so I did a Google search to see what came up. In "Going Beyond Smile Sheets... How Do We Know If Training Is Effective?" by Jeanie McKay, NOVA Quality Communications, I came across this quote:


[Level Two] To evaluate learning, each participant should be measured by quantitative means. A pre-test and post-test should be administered so that any learning that takes place gets attributed to the training program. Without a baseline for comparison of the as-is, you will never be able to reveal exactly how much knowledge has been obtained.


Now I don't blame Jeanie here, I'm sure this is the received wisdom, but I want to suggest two reasons why this is ridiculous. First, from the learner's point of view, having to do a pre-test for content you're going to have to complete anyway is just cruel. Particularly if the test is long (in a particular case, it's 20 items). The *only* reason I can see to do this is if you use that information to drop out any content that the learner already knows. That would make sense, but it's not happening in this case, and probably in too many others.


Second, it's misleading to claim that the pre-test is necessary to assess learning. In the first place, you should have done the work to justify that this training is needed, and know your audience, so you should have already established that they require this material. Then, you should design your post-test so that it adequately measures whether they know the material at the end. Consequently, it doesn't matter how well they knew it beforehand. It might make sense as a way to justify the quality of the content, but even that's fallacious: we expect improvement in pre-post test designs regardless (which is why psychology doesn't accept a pre-post comparison without a control group as evidence that an intervention works), so the gain doesn't really measure the quality of the content. Though the pre-test itself could arguably benefit the learning outcome, there are better ways to accomplish that. There is no value in the pre-test in these situations, and consequently it's cruel and unusual punishment for the learner and should be outlawed.


OK, I feel better now, having gotten that off my chest. So, on to Peter's comments. I agree that we want rich content, but if we carry the current level of redundancy to address all learners, we risk boring everyone in order to make sure each learner's style is covered. We *could* provide navigation support through the different components of content to allow learners to choose their own path (and I have). That works fine with empowered learners, but they currently make up no more than about half the population. The rest want hand-holding (and that's what we did), but that leaves the redundancy.


Which, frankly, is better than most content (although UNext had/has a similar scheme). However, I'm suggesting that we optimize the learning to the learner. I'm not arguing to assess their cultural identity, but to understand the full set of capabilities they bring to bear as a learner (my cultural point is that we're better off understanding them as individuals, not using a broad cultural stereotype to assume we understand them). That is, for some we might start with an example, rather than the 'rule' or 'concept'. For some we might even start with practice. We might also present some with stories, others with comic strips or videos. Moreover, we drop out bits and pieces. A rallying cry: Use what we know to choose what to show. Yes, additional steps in content development are required to do this (see my IFETS paper), but the argument is that the payoff is huge...
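To make "use what we know to choose what to show" a little more concrete, here's a minimal sketch, in Python, of what the selection logic might look like. It's entirely hypothetical: the profile fields, component names, and thresholds are invented for illustration (they're not the 31 dimensions, and not any system we built).

```python
# Hypothetical sketch: pick which content components to present, and in what
# order, from a simple learner profile. All names and thresholds are invented.

from dataclasses import dataclass

# The component types discussed above: concept/rule, example, practice, story.
COMPONENTS = ["concept", "example", "practice", "story"]

@dataclass
class LearnerProfile:
    prefers_concrete: bool = False   # starts better from examples than from rules
    active_learner: bool = False     # benefits from diving into practice early
    prior_knowledge: float = 0.0     # 0.0 (novice) to 1.0 (expert)

def sequence_content(profile: LearnerProfile) -> list[str]:
    """Return an ordered list of components to present for one objective."""
    order = list(COMPONENTS)
    if profile.active_learner:
        # Lead with practice, then back-fill the concept and an example.
        order = ["practice", "concept", "example", "story"]
    elif profile.prefers_concrete:
        # Lead with an example before the abstract rule.
        order = ["example", "concept", "practice", "story"]
    # Drop out bits and pieces for learners who already know most of it.
    if profile.prior_knowledge > 0.7:
        order = [c for c in order if c in ("practice", "example")]
    return order

if __name__ == "__main__":
    novice = LearnerProfile(prefers_concrete=True)
    expert = LearnerProfile(active_learner=True, prior_knowledge=0.9)
    print(sequence_content(novice))  # ['example', 'concept', 'practice', 'story']
    print(sequence_content(expert))  # ['practice', 'example']
```

The point isn't the particular rules, which are toy ones, but that once the content is developed as separable components, the choice of what to show becomes a small piece of logic driven by the learner model.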


The assessment is indeed a significant task, but in a long-term relationship with the learner, we can do something particularly valuable. If we know what their strengths and weaknesses are, as a learner, we can use the former to accelerate their learning, and we can also take time and address the latter. A simple approach would be to present 'difficult' content with some support that, over time, would be internalized and improve the learner's capabilities. Improving the learner as a learner, now THAT's a worthwhile goal!
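As a purely illustrative sketch of the fading-support idea (again, assumed names and thresholds, not a published design), the amount of scaffolding attached to difficult content could simply shrink as the learner demonstrates competence:

```python
# Hypothetical illustration of fading scaffolding: the support attached to
# difficult content decreases as the learner's demonstrated competence grows.

def support_level(difficulty: float, competence: float) -> str:
    """Map item difficulty (0-1) and learner competence (0-1) to a support level."""
    gap = difficulty - competence
    if gap > 0.5:
        return "worked example plus step-by-step hints"
    if gap > 0.2:
        return "hints available on request"
    return "no added support"

# As competence grows across sessions, the same difficult item gets less support.
for competence in (0.1, 0.5, 0.9):
    print(competence, "->", support_level(difficulty=0.8, competence=competence))
```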


I strongly support Peter's suggestion that using a rich world as a source for embedding (or extracting) learning to make it meaningful is ultimately valid; it's the basis of much of my work on making learning engaging. We may be agreeing furiously, except that I may not have made clear what I meant by learner assessment.


In answer to Peter's query, I'm sad to report that we have not, and cannot, publish on the 31 dimensions. I can only suggest the path we took: using Jonassen & Grabowski's Handbook of Individual Differences as an uncritical survey of potential candidates, along with other likely suspects from any other source your research uncovers, and then making some sensible inferences to remove redundancies (much as the 'Big 5' personality factor work attempts to make sense of personality constructs). Make sure you cover the gamut of things that might influence learning, including cognitive, affective, and personality factors.
