Monday, May 13

(originally posted by Peter Isackson)

Curiously, Clark and I don't seem to know for certain whether we "furiously" agree or not. This seems rather typical of the whole learning business. I tend to agree with Clark that we do agree! The problem is that at different times we are probably referring to different phenomena. My suggestions were very general, pointing towards the overall strategy for handling a variety of content, which I see as a process (transforming input into output). I also glanced at questions of content selection in the light of cultural variation. When we focus on specific content needs, particularly the "learning objects" we hope to find somewhere or need to produce ourselves, we are faced with these cultural problems, which, as Clark points out, constitute helps or hindrances depending on 1) the profile of the individual learner and 2) the trainer's awareness or even real knowledge of that profile. I think a lot of work needs to be done on both at the same time. I don't believe we have any valid human models yet for dealing with this efficiently (i.e. converting information into effective strategy), and everyone else (i.e. the knowledge management specialists) seems to be focused on structuring the information. I believe that this is only the first step and may need some guidance from the strategy side to develop the right structural models.

A new theme occurred to me today and I have no idea what it's worth or how far it can be taken, so for the sake of my own ongoing reflection I'll state it here (I need to set it down somewhere!) and await any constructive or, why not, destructive criticism. It is curiously linked to the bee in Clark's bonnet, but inverted (the stinger is on the other end!). The notion has to do with the teacher’s or trainer’s state of knowledge -- not the learner’s -- before and after a course. I am not, however, suggesting pre- and post-testing! I am suggesting that it should evolve, almost as much as the learner’s state of knowledge, and that we should take an interest in tracking this evolution. The context I am referring to is that of collaborative online learning. This wouldn’t be the same thing for traditional face-to-face teaching (but see my final remarks below), and even less so for pre-programmed eLearning (which I see increasingly as isolated or modular learning objects, whose meaning and impact derive from the variable contexts in which they are used more than from their internal merits).

My notion is that of a kind of open or “improvisational teaching”, a strategy that specifically aims at learning to teach a particular course by teaching it, after defining its overall structure and logic. It proceeds from two observations:

1) no one can fully anticipate what will happen in the learning process, particularly in distance learning,
2) we do not necessarily know in advance what resources, among all that are available, will prove the most productive for real learners (in all their cultural variety).

My notion of improvisation is borrowed from jazz, one of my previous occupations*. To be good at improvising, you have to learn not only the art of soloing (which you at least partly invent), but you must also know the chord changes (+ variations) of the tunes you are playing, the chosen style for each number, your precise role in the ensemble sections and, especially if you are accompanying rather than just soloing, have a good idea of the style and system of each of the other players. These multiple constraints nevertheless leave you free to discover through playing the things that work and don’t work both in general and specifically with regard to each type of musical event. The most interesting thing about working with other musicians is what you learn from them each time you rehearse or play. And of course the more you play a particular tune, the easier it gets to keep it going and to find ways of innovating and surprising without upsetting the underlying logic and the other musicians.


In short, I’m in favor of under-planning one’s course strategies and leaving room for us to learn from the learners themselves. Actually it’s less under-planning than avoiding over-planning. This means, without sacrificing one’s “authority”, learning how to encourage the learners to bring things to you (discovery of appropriate resources you may not have been aware of, new ideas or ways of looking at the material, patterns or sequences of behavior that produce learning more effectively than your initial game plan). In other words, we should seek to be instructional co-designers rather than instructional designers.

It might be said that what I’m describing is a form of beta testing. But its implications are very different. You beta test something that is fully designed down to the last detail. What I’m suggesting is a system in which we as trainers and designers are actively concerned, at least the first time around, to integrate elements that come from the learners, or rather our own interaction with the learners. This can obviously only apply to collaborative training. But it can lead to strategies for producing learning objects. Much needs to be said on how to conduct this approach (how to create the overall model, how to manage events, how to communicate with learners, how to react to embarrassing mistakes, how to make permanent or replicable everything one learns, etc.).

After a brief search on the web, I found that David Hammer of the University of Maryland, in a context of traditional face-to-face instruction, calls a similar approach “discovery teaching” and identifies some of the areas of resistance to it by teachers. My contention is that it is less risky and more appropriate in an online environment. It is also easier to structure, plan and capitalize on.



* I ended up living in Paris because, after participating in a free-for-all jam session organized by Steve Lacy at the American Center nearly 30 years ago, I was offered a permanent job as a pianist (accompanying dance classes at the Université de Paris) and accepted it in order to become fluent in French!

Saturday, May 11

(originally posted by Clark Quinn)

Well, I really want to reply to Peter, but right now I've got a bee up my bonnet, and I want to vent (how's that for mixing my metaphors?). I'll get to Peter's comments in a moment...


In recent work, I've reliably been coming up against a requirement for a pre-test. And I can't for the life of me figure out why: they're not using the data to do anything but compare it to the post-test! This didn't make any sense to me, so I did a Google search to see what came up. In "Going Beyond Smile Sheets... How Do We Know If Training Is Effective?" by Jeanie McKay, NOVA Quality Communications, I came across this quote:


[Level Two] To evaluate learning, each participant should be measured by quantitative means. A pre-test and post-test should be administered so that any learning that takes place gets attributed to the training program. Without a baseline for comparison of the as-is, you will never be able to reveal exactly how much knowledge has been obtained.


Now I don't blame Jeanie here, I'm sure this is the received wisdom, but I want to suggest two reasons why this is ridiculous. First, from the learner's point of view, having to do a pre-test for content you're going to have to complete anyway is just cruel. Particularly if the test is long (in a particular case, it's 20 items). The *only* reason I can see to do this is if you use that information to drop out any content that the learner already knows. That would make sense, but it's not happening in this case, and probably in too many others.


Second, it's misleading to claim that the pre-test is necessary to assess learning. In the first place, you should have done the work to justify that this training is needed, and know your audience, so you should have already established that they require this material. Then, you should design your post-test so that it adequately measures whether they know the material at the end. Consequently, it doesn't matter how well they knew it beforehand. It might make sense as a way to justify the quality of the content, but even that's fallacious. We expect improvement in pre-post test designs (in psychology, a pre-post comparison without a control group is not accepted as a way to determine the effectiveness of an intervention), so it doesn't really measure the quality of the content. Though it could be considered a benefit to the learning outcome, there are better ways to accomplish this. There is no value to the pre-test in these situations, and consequently it's cruel and unusual punishment for the learner and should be considered unlawful.


OK, I feel better now, having gotten that off my chest. So, on to Peter's comments. I agree that we want rich content, but if we keep the current redundancy needed to address all learners, we risk boring everyone just to make sure each learning style is covered. We *could* provide navigation support through the different components of content to allow learners to choose their own path (and I have). That works fine with empowered learners, but that currently characterizes no more than about half the population. The rest want hand-holding (and that's what we did), but that leaves the redundancy.


Which, frankly, is better than most content (although UNext had/has a similar scheme). However, I'm suggesting that we optimize the learning to the learner. I'm not arguing to assess their cultural identity, but to understand the full set of capabilities they bring to bear as a learner (my cultural point is that we're better off understanding them as individuals, not using a broad cultural stereotype to assume we understand them). That is, for some we might start with an example, rather than the 'rule' or 'concept'. For some we might even start with practice. We might also present some with stories, others with comic strips or videos. Moreover, we drop out bits and pieces. A rallying cry: Use what we know to choose what to show. Yes, additional steps in content development are required to do this (see my IFETS paper), but the argument is that the payoff is huge...
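
To make "use what we know to choose what to show" slightly more concrete, here is a minimal sketch in Python of how a learner profile might drive which content components are shown and in what order. The dimension names, thresholds, and component types are purely hypothetical illustrations; they are not the actual 31 dimensions or any published scheme.

    # Minimal illustrative sketch: pick and order content components from a learner profile.
    # All dimension names and thresholds here are hypothetical, invented for illustration.

    def sequence_content(profile, components):
        # profile: dict of learner dimensions, e.g. {"prefers_concrete": 0.8, "already_known": ["concept"]}
        # components: dict mapping component names ("concept", "example", "story", "practice") to content
        order = []
        # A learner who responds better to concrete material sees an example before the concept.
        if profile.get("prefers_concrete", 0.5) > 0.6:
            order += ["example", "concept"]
        else:
            order += ["concept", "example"]
        # Some learners get a story as a lead-in; everyone ends with practice.
        if profile.get("likes_stories", False):
            order.insert(0, "story")
        order.append("practice")
        # "Drop out bits and pieces": skip anything the profile says is already known.
        skip = set(profile.get("already_known", []))
        return [components[name] for name in order if name in components and name not in skip]

    # Example: a concrete-preferring learner who already knows the concept
    profile = {"prefers_concrete": 0.8, "already_known": ["concept"]}
    components = {"concept": "rule text", "example": "worked example", "story": "anecdote", "practice": "quiz"}
    print(sequence_content(profile, components))  # -> ['worked example', 'quiz']

The point of the sketch is only that the same components can be re-sequenced or dropped per learner, rather than shown redundantly to everyone.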


The assessment is indeed a significant task, but in a long-term relationship with the learner, we can do something particularly valuable. If we know what their strengths and weaknesses are, as a learner, we can use the former to accelerate their learning, and we can also take time and address the latter. A simple approach would be to present 'difficult' content with some support that, over time, would be internalized and improve the learner's capabilities. Improving the learner as a learner, now THAT's a worthwhile goal!


I strongly support Peter's suggestion that using a rich world as a source for embedding (or extracting) learning to make it meaningful is ultimately valid, and it is the basis of much of my work on making learning engaging. We may be agreeing furiously, except that I may not have made what I meant by learner assessment clear.


In answer to Peter's query, I'm sad to report that we have not published, and cannot publish, on the 31 dimensions. I can only suggest the path we took: using Jonassen & Grabowski's Handbook of Individual Differences as an uncritical survey of potential candidates, as well as other likely suspects from any other source your research uncovers, and then making some sensible inferences to remove redundancies (much as the 'Big 5' personality factor work attempts to make sense of personality constructs). Make sure you cover the gamut of things that might influence learning, including cognitive, affective, and personality factors.