The Reusability Paradox
A CONFERENCE ROOM FULL OF PEOPLE
R: Good to see everyone again! S called ahead to say he was running a little late.
O: Good to be here again!
D: I don't think I've ever been part of a group where everyone showed up on time.
V: Well, this *is* only our second meeting. (General laughter)
R: I'm just glad you all came back. I was wondering if you got anything out of that first meeting.
V: Yes, of course! We wouldn't have taken time away from work to come back if we didn't expect this meeting to be extremely valuable.
C and D: (In unison) Riiiiiight.
D: (Trying to look angelic) I would *never* take time away from work if it weren't for something 'extremely valuable'... like World of Warcraft. (More laughter)
R: Ok, ok. I want to bring us back to the topic of learning objects tonight, and specifically, to the question of the size of learning objects. Also known as the granularity question.
O: I thought we agreed last time that our definition of learning object was going to depend on the specific situation we find ourselves in?
C: Are we going to find out tonight how big a learning object should be? My manager keeps asking me that question.
R: Well, I think we're going to talk through some of the key issues that help people make that decision...
V: But we're not going to make the decision for you.
R: I want to suggest a topic to get our conversation going tonight: the fundamental tension between using and reusing.
C: What on earth is that supposed to mean?
O: If I could put my head on the chopping block, I think R's suggesting that there's some kind of tension between using a learning object the first time and reusing it later.
D: How could using a learning object one time be any different from using it another time?
C: Using it the first time should be no problem at all. At least, it's never a problem for us.
V: How can that be?
C: Since we're still new at this and don't have any learning objects in our database, we're building all our learning objects to spec. The specs come from this big contract we have now... So everything works perfectly - by design. (Huge laughter)
O: Yes, things generally work pretty well when you design them specifically to fit your needs.
D: (With a look of realization) But that's the tension, isn't it? If you only pay attention to meeting the needs you have currently, how can you expect your learning object to be useful to you next time around?
C: Why wouldn't it be useful next time around?
O: Have you ever actually reused anything you designed on one project in another project?
C: Well, we frequently reuse stock photography and clip art across projects. And that's the point with stock photography, isn't it?
R: Yes, but O asked if you'd ever reused anything *you* designed on a project other than the one you designed it for.
C: Uh... (thinking hard).. I don't think so.
D: And why not?
C: I guess we just haven't had the opportunity yet.
V: What kind of training do you design for your corporation?
C: Everything. Leadership training and human relations stuff, product support materials and process documentation.
R: Let's start with leadership training. When was the last time you developed some leadership training?
C: We just wrapped a project like that about four months ago.
D: And how different was it from the last leadership training you developed?
C: (Somewhat confused) How different?
O: Think about it as a percentage - what percentage of this material was significantly different from the last leadership training you developed?
C: I don't know. Maybe sixty percent? Seventy percent? I mean, some of the training covers the company's own special approach to quality improvement. It's a variant of "six sigma" or "lean manufacturing" principles applied to software development.
D: And that part of the instruction is the same every time?
C: Well, they haven't updated that process in about six years. But the instruction is different every time. See, we use a scenario-based model for our instruction, and so we have to create new stories every time we build new leadership training.
V: Do the learners have access to some documentation about the QA process as they're reading through the scenarios?
C: Yes, they can access a special version of the QA process documentation in PDF format.
V: And what's special about that version of the documentation?
C: It's specially annotated. It contains notes that help learners connect the salient elements in the scenario with the relevant portions of the QA process.
R: Ding! I think we have a winner. (General looks of confusion) C, since you have to provide learners with a copy of that documentation every time you train, why don't you just reuse this specially annotated copy of the documentation?
C: It's like I said, isn't it? That copy of the documentation is annotated with details specific to the scenario the learners are working on. And we write a new scenario every time.
R: Maybe if you left those annotations out, you'd have a learning object you could easily reuse. What I mean is, if you just created a PDF version of the QA process documentation without these special annotations, it would be a lot easier to reuse across training projects.
C: Yeah, it would be a lot easier to reuse - and a lot less effective.
R: Exactly - and that's the tension. Some of the things you do to make a learning object more effective today make it harder to reuse tomorrow. Wiley called it The Reusability Paradox. That's the short version; we'll consider a much more thorough treatment in another meeting.
D: But which should you really do? Make your learning object maximally effective today, or build it to be maximally reusable tomorrow?
O: And what does it mean to be maximally reusable? Isn't that just another way of saying maximally effective in an unknown future scenario?
S: Reusability is a great goal to keep in mind while creating instruction, but don't let it stop you from creating something great.
R: Welcome, S!
S: Evening, all. Sorry I'm late.
V: (Ignoring S) If a learning object doesn't actually help people *learn*, is it good for anything?
S: Back to that, are we?
C: No, this is something else. R has just told us about "The Reusability Paradox."
S: What's that?
C: Just the point that the designing you do to make a learning object work well in one instructional situation may actually keep it from working in another.
V: I liked the way R said it: "Some of the things you do to make a learning object more effective today make it harder to reuse tomorrow."
S: "Some" things, eh? Which things are those?
R: An excellent question.
O: (Defiantly) You're not going to claim that having an objective and actually working to teach a specific outcome in a learning object make it harder to reuse.
R: I'm not going to claim anything.
D: (Exasperated) Academics...
V: Let's think about it. We just got a contract to build some safety training for McFastFood. One objective we will definitely cover is how to deal with grease fires. In instructional design speak, this will be the terminal objective, and we'll cover several enabling objectives, including when to use a fire extinguisher and when not to, alternative methods for putting out fires, and when to call in outside support (like city or county firefighters). I would argue that without understanding our terminal and enabling objectives we wouldn't be able to build training that would be effective for anyone.
O: So we should ask ourselves if this learning object has the potential to be reused. Could King Burger or Mindy's use training of this kind?
S: Seems like it. I'm sure federal and state governments have set training standards all fast food establishments have to follow. If this training meets those requirements, then it should be reusable.
R: C, what kinds of things would you do to this training to make it more effective?
C: I guess I would make sure that the people in the pictures had on McFastFood uniforms, that the fire extinguishers were in the right places relative to the boilers and shake machines, that the menu items were accurate, etc.
R: And you think that uniforms, shake machines, and fast food menus make instruction more effective? It's not the audience analysis, the terminal objectives, the practice opportunities, or the feedback?
V: He didn't say that the shake machines were the only thing that made the instruction effective. He said that adding those would make it *more* effective. It's like the difference between a store-bought suit and a custom tailored one. They both fit; one just fits better.
C: So we're talking about the personalizations? The things you do to make a learner feel like the instruction is actually relevant to them, instead of dry encyclopedia info?
R: That seems to be what we're talking about. Tell us more about what you call "personalizations."
C: Well, I think instruction is all about relevance. Don't you remember algebra or geometry in high school and thinking "why should I care what the third corollary to Bob's theorem of ninety degree angles is?" When learners don't care about instruction - that is, when they can't see *why* they should care about it - they're much less engaged.
O: And I like what you said about an encyclopedia. The difference between a great teacher and a boring one is how much spice they add to the content. Anybody can stand up there and read their lecture notes or the encyclopedia entry. The amazing teachers are the ones who can connect the content to your life, show you how it actually affects you and is useful to you, and make you see why you should care. There's an art to it - connecting inert content to vibrant, flowing life.
R: The notion of "spice" is interesting. It's about personalizing?
D: It's about understanding the context of the learner and meeting them there, speaking their language, and using their examples. When my computer science professor used examples from his time during the war, I couldn't have cared less. When he used examples from consulting gigs he'd had in the past, I connected with those - since I'm always looking for those gigs myself. When my literature teacher used examples from foreign language literature I'd never heard of, I didn't care. When she drew connections with last night's Seinfeld rerun, things actually almost made sense. I had this one poli sci professor who always used examples from The Daily Show...
R: (Cutting in) Let's stick to the topic.
D: Anyway, it's about context.
C: So, which is better? Put in some context that connects with the immediate set of learners and miss out on the next batch? Or... (Drifts off, confused)
D: (Incredulously) Or what? Just how exactly is one supposed to design materials to be as effective as possible in a future scenario whose nature is unknown? If you don't know the context of that future audience of learners, how can you match their context in your examples? How can you be sure to use the right spice?
S: Perhaps you just leave all the context out of your instruction.
V: Even if that were possible to do - which it isn't - it would make your instruction about as dry as the Sahara and about as interesting as watching golf on TV.
(S sits up suddenly)
R: (Chuckling) I think your attempt to personalize there failed with at least one golf-loving member of our little group.
V: Anyway, pulling out all the context would make the instruction much less effective, because it would impair transfer out of the classroom and into the real world. Those examples and pictures and little details are what help people connect instruction to their lives. If we're not going to have that context, why not just sell everyone encyclopedias?
O: It seems like encyclopedias are getting kind of a bad rap here...
R: So - how big should a learning object be? Big enough to convey its instructional message, but no bigger? Big enough to convey its message and some spice of context? Does context belong in a learning object or not?
I: Why couldn't we have huge libraries of "parts is parts" which can then be reused and revamped into new, useful learning resources? Wouldn't that solve the question of how big? Just take all the little pieces and make them as big as you want. "Wrap" the content or reusable widgets in the context that is desired.
S: Well, I know no one cares, but the standards are silent on this question.
(Everyone looks around at each other slightly nervously)
R: Let's leave the context view of the question for a moment. We'll come back to this perspective in more detail in a few weeks.
C: Are we really ready to leave this question?
R: I'm not suggesting we leave the question, just that view of it. There are many other ways we might think about the size of a learning object. In some cases, the domain itself may give us hints.
V: The domain itself?
R: Imagine that you're Boldwing Airlines. You've got hundreds of planes in your fleet that need to be maintained regularly and diagnosed and repaired as needed. What does that suggest about how large a learning object should be?
D: How many people need to be trained, and where are they?
C: I heard them say on the news tonight on the way in... they were talking about the potential strike. It's a group of about 8,000 mechanics and repairpersons distributed over several continents.
R: No, no. That's the person-centered view of granularity again... the one that relies on the learners' contexts. Think about the domain itself - airplane maintenance and repair. What does the domain suggest?
O: I'm having a real problem with you anthropomorphizing the domain like this, by the way.
D: It suggests that we need more automated systems involved in airplane repair. Can you imagine more tedious, mind numbing work?
S: Why would you say that?
D: Think of it. Come in every morning, and your little schedule says (in a high pitched voice) "It's time to service the 757!" And then the fun really begins... Take the covers off, pull the parts out, scrub them down, whatever they do. I don't know. It just sounds pretty tedious.
R: How would you characterize a domain like this?
D: Mind numbing! (General laughter)
R: No, if you had to use Merrill's classification system?
C: I guess it would be procedural.
V: You mean that the whole domain is just one long series of steps to be learned?
C: No, not that the domain is just one big process. But the domain does sound like it's probably comprised mostly of smaller processes. "How to repair this" and "How to maintain that."
I: Unless you subscribe to Brown and others' view of social learning. They would suggest that copy machine repair is not "just procedural" but also a social knowledge and practitioner domain.
R: Ok. Let's say that this is a mostly procedural domain. What would that suggest about the size of learning objects Boldwing would choose for their training?
O: Well, if it's mostly procedures, and procedures are just a series of steps to learn and carry out, then maybe they should design each step as its own learning object?
V: Impressive, O! That actually makes some sense...
D: Of course. If the first step in either repairing or fixing anything were removing a panel from the side of the plane...
C: (Cutting in) Or grounding yourself appropriately even before that...
D: (Picking up) Then that step would occur at the front of almost every process in the domain, meaning that you could reuse that object in every single bit of training you ever built!
C: So why don't people just think about granularity this way, and avoid the context nastiness altogether?
R: Well, when you're Boldwing you sort of can. You know exactly who your audience is. You can be sure that even future, currently unknown reuse scenarios are going to involve the same audience. However, if you're V's company, and you're building a learning object for Boldwing but might want to reuse it on a contract with Untied, the whole problem comes back again.
O: So what other ways are there to think about the size of a learning object?
R: Wiley points to Gibbons' work on layers of instructional design as a source of five or six additional ways of thinking about grain size. Give that a read for a completely different perspective on how the size of learning objects might be considered. But I think we've carried on just about long enough for tonight. Let's pull out the key questions from tonight's discussion. Thoughts?
C: Well, I'd definitely say whether or not to put context inside a learning object.
V: Or it might be more accurate to say, how much context should be put inside a learning object.
O: And what the domain can tell us about how big a learning object should be.
R: Ok. So I'll just draw a line here and continue:
(R draws a horizontal line on the whiteboard and continues writing)
- How much context-related information should go in a learning object?
- What can the domain being taught suggest about how big a learning object should be?
R: Anything else?
S: I'm saving my piece for later.
D: Sounds good, then.
R: Alright! See you next week.