Wow, there’s been some great writing lately. I’ve been particularly reinvigorated by Brian Lamb, Mike Caulfield, and Bracken Mosbacker. And Audrey Watters’ ongoing work on the history of educational technology is vastly more important than anyone seems to realize. It should be absolutely mandatory reading for every student in a graduate program on educational technology or learning sciences, period.

Audrey’s constant refrain that “no one seems to remember our history” proved true yet again this week, when McGraw-Hill and Microsoft announced a new project based around – I kid you not – learning objects. Reading the news gave me an irresistible urge to join Audrey in reminding the field of its not-at-all-distant and yet already-forgotten history regarding the Reusability Paradox.

Learning objects are meant to be aggregated into a wide range of larger instructional structures. Over a decade ago I worked through the problems implied by this statement in excruciating detail. The tl;dr is that any learning object needs to fit into the aggregation in which you want to reuse it. The degree of fit is purely a function of two contexts: (1) the internal context of a specific learning object and (2) the external context created by the juxtaposition of the other learning objects in the aggregation. Unfortunately, it turns out that the amount of internal context of any learning object is directly correlated with its educational efficacy, while that same amount of internal context is inversely correlated with the number of aggregations the learning object “fits” into. This is the famous Reusability Paradox – the pedagogical effectiveness of a learning object and its potential for reuse are completely at odds with one another. (While the Reusability Paradox was singlehandedly sufficient to quash the realization of the learning objects ambition – I spent over five years of my life on this stuff – there are also numerous other flaws underlying the model as it is traditionally conceived. See sections 2 and 3 in that link, but be warned – it contains Vygotsky, Leontiev, Wertsch, Freire, and M. Night Shyamalan references.)

The Reusability Paradox typically leads designers of learning objects to attempt to “strike a balance” between effectiveness and reusability. This generally results in materials that are neither particularly effective NOR particularly reusable across contexts. No one wants to trade efficacy for reusability (or for lower cost, or for anything else – as the recent Babson survey showed, faculty want proven efficacy more than anything else). And yet we do this all the time without really realizing it. Instead of targeting a specific audience and a specific context, almost all teaching materials adopt their own version of Wikipedia’s Neutral Point of View. Educational materials – and learning objects specifically – try to be just generic enough so as to not be offensive to anyone. They lack what Giant Robot Dinosaur calls a Minimum Viable Personality.

For example, take teaching materials about the Ruby programming language. Here are the first three results that come up for me after searching for Ruby tutorials – Ruby in 20 Minutes, My First Ruby Program, and Ruby Quick Reference Guide. They’re each so bland and generic as to be almost indistinguishable from each other. Contrast these resources with Why’s Poignant Guide to Ruby. You can actually distinguish this resource from the others. It was obviously written with a specific audience in mind – and they love it. But the internal context created in Why’s Poignant Guide – the cartoon foxes, Blix the cat, etc. – is SIMULTANEOUSLY what makes it awesome for a specific audience and what prevents it from being reused more broadly.
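To make the contrast concrete, here is a sketch of the two registers side by side. The fox class below is my own invention for illustration – it is in the spirit of Why’s guide, not taken from it or from any of the tutorials above.

```ruby
# The generic tutorial opener: correct, reusable in any context, and bland.
puts "Hello, world!"

# A contextualized alternative, in the spirit of Why's cartoon foxes.
# (This class is a hypothetical illustration, not from the Poignant Guide.)
# It teaches the same basics -- classes, instance variables, string
# interpolation -- but only "fits" readers who are along for the story.
class Fox
  def initialize(name)
    @name = name
  end

  def chunky_bacon!
    "#{@name} the fox yells: CHUNKY BACON!"
  end
end

puts Fox.new("Rhubarb").chunky_bacon!
```

Both snippets are runnable as-is; the point is that the second carries the kind of internal context that makes it memorable for its intended audience and awkward to drop into anyone else’s aggregation.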

So what are we to do? We have three choices: (1) create highly decontextualized resources that can be reused broadly but teach very little, (2) build highly contextualized resources that teach effectively in a single setting but are very difficult to reuse elsewhere, or (3) shoot for the mediocre middle.

There’s actually a fourth choice. The Reusability Paradox is only a paradox as long as your thinking about educational materials is caught in the ambient copyright trap. “Everyone knows” you’re not allowed to make changes to textbooks, learning objects, videos, and other educational media, and so the learning objects model is built partly in response to that “reality.” But the Reusability Paradox only arises when “reuse” means “reuse exactly as is.” According to this pervasive view, learning objects can never be altered after they’re created – so the author has to make a trade-off between effectiveness and reusability and the rest of us have to live with their choice.

The way to escape from the Reusability Paradox is simply by using an open license. If I publish my educational materials using an open license, I can produce something deeply contextualized and highly effective for my local context AND give you permission to revise and remix it until it is equally effective to reuse in your own local context. Poof! The paradox disappears. I’ve produced something with a strong internal context which you have permission to make fit into other external contexts.

This brings us full circle back to the Remix Hypothesis. Learning objects that are published using open licenses – also known as open educational resources – eliminate the Reusability Paradox. However, making something possible is not the same as actually doing it. OER make it possible for us to contextualize our resources and customize our pedagogies to support more effective learning, but they don’t do the work for us. We have to take advantage of the 5R permissions and actually do the work of contextualizing and customizing our open educational resources and open pedagogies. Thus, the Remix Hypothesis states that changes in student outcomes that occur in conjunction with OER adoption will correlate positively with faculty revising and remixing activities.

Which brings me back to the announcement of the McGraw-Hill and Microsoft partnership. The press release reads, in part:

A key component of McGraw-Hill Education’s relationship with Microsoft is the ability of educators to develop compound learning objects through Office Mix, a media-rich extension of Microsoft PowerPoint that is free to the education community, and combine them with McGraw-Hill Education content and technology…. Compound learning objects will serve as the basis for all of [McGraw-Hill]’s K-12 products going forward starting next year, with its higher education portfolio soon to follow.

From these and other statements in the release, it sounds like a core goal of the partnership is to get teachers to make their own learning objects in PowerPoint and then upload them into McGraw-Hill’s platform, to be used side-by-side with MH’s own learning object-ized content. This is an “adaptive and analytics” platform, consistent with MH’s goal of “becoming the world’s foremost learning science company.” At the end of the day, this is a technical partnership that focuses on platform – Microsoft’s Office Mix platform and McGraw-Hill’s (unnamed in the release) adaptive and analytics platform.

This emphasis on platform betrays a belief that innovations in platform can solve the problems that have beset earlier learning objects initiatives. For 15 years now, organization after organization has made the mistake of thinking that the reason past learning objects initiatives failed is that the platforms supporting revise / remix were too hard for faculty to use (or were broken in some other way). How many times have we heard someone exclaim, “Our breakthrough platform will finally make it possible!”? While ease of use will certainly play a role in making a learning objects initiative successful, the platform is not the fundamental issue that must be resolved – it is a secondary issue. The Reusability Paradox is the primary issue to overcome, and this can only be done by means of the 5R permissions granted by open licenses. If the Reusability Paradox is not resolved, there’s very little need for a platform – no one wants to reuse decontextualized resources that don’t teach effectively, it’s very difficult to reuse highly contextualized resources that do, and there’s no real difference between all the content that shoots for the middle (so there’s no point substituting one learning object for another).

Oh how I wish the field could remember that.


Being Clear on “High Quality”

Some readers are misinterpreting my critique of the phrase “high quality” as it’s used with regard to textbooks and other educational resources. Let me reiterate my points and then provide a new example that hopefully sheds more light on what I’m criticizing and what I’m not.

My problems with the phrase “high quality” are two-fold: (1) how the phrase gets equated with a single authoring process to the exclusion of all other authoring processes, and (2) how that usage distracts us from the efficacy conversation.

I do not have any issues with the traditional authoring process itself. As I wrote yesterday, “I fully believe that resources created through the ‘traditional process’ can effectively support learning.” Not only do I not have an issue with the traditional process itself, I don’t have an issue with the people or organizations who employ the traditional process – in fact I count some of them as good friends.

To be clear, my first issue is with the way “high quality” is often equated with the traditional process and that process only. According to this usage, if you don’t follow the traditional authoring process it is literally impossible for you to create “high quality” materials. This restrictive usage serves to lock out alternative processes from competing in the marketplace. The usage makes it impossible, for example, for commons-based peer production models to result in “high quality” resources (because “high quality” was previously defined as “adhering to the traditional process”). That’s just plain wrong. We need to create more openness to alternative authoring models in the minds of faculty. The traditional process can create effective learning materials, but so can other processes. These alternative processes likely share some common functions with the traditional process – like quality reviews – but the specifics of how those functions are performed can differ significantly across processes (e.g., focused review by a smaller number of experts versus ongoing review by a much larger number of peers) and still result in effective educational materials.

My second issue is the way this misuse of “high quality” omits any consideration of results, and tends to distract us from the efficacy conversation we need to be having. As this series of posts demonstrates, “high quality” can mean many things. But for the sake of our students, the one thing we most desperately need it to mean is “effective.” It’s quite simple – in order to be considered “high quality” educational materials must be effective. We need to help faculty understand that the phrase “high quality” is often disconnected from measures of efficacy when used by publishers to describe textbooks and other instructional materials, and we need them to help bring effectiveness back into the conversation.

Let me give an example to clarify exactly what I’m calling out and exactly what I’m not.

Take OpenStax as an example. These guys are doing God’s own work, creating a catalog of open textbooks for high-enrollment courses using the most open of all the Creative Commons licenses. They seem to use a fairly traditional authoring process, and that’s absolutely fine. Again, the process that any person or organization uses is entirely outside the scope of this critique.

What is absolutely relevant to the critique is this – I’ve never heard OpenStax badmouth OER solely because they were created using alternative processes. Even though they appear to use a fairly traditional process internally, they don’t try to perpetuate the “high quality if and only if traditional process” myth. As you would expect from leaders in the open educational resources community, they’re open-minded about the possibility of other authoring processes resulting in effective materials.

The second point to make in the context of my critique of the way “high quality” gets misused is that OpenStax actually cares about efficacy. In fact, right now there’s a huge banner on their homepage inviting people to participate in an efficacy study being conducted by a highly respected third party (the OER Research Hub). OpenStax isn’t trying to distract people from the efficacy conversation; on the contrary, they’re dedicating homepage real estate to getting people more involved in it.

I provide the OpenStax example in order to clarify that (1) I’m not criticizing the traditional authoring process itself, and (2) just because an organization uses a more traditional authoring process doesn’t mean that they’re making the mistakes that I think the field at large needs to stop making. My critique of “high quality” is simply that it is being defined far too narrowly – unnecessarily excluding alternate authoring processes and omitting effectiveness. Perhaps rather than needing to stop using the phrase “high quality” we instead need to engage in a campaign to “redefine high quality” so that the phrase emphasizes efficacy and is more accepting of diversity in authoring processes.


Last week I wrote that we should stop saying “high quality” when discussing learning materials. Some have questioned whether or not that’s true. It is true, and here’s why.


The problem with the phrase “high quality” as used by traditional publishers is that it puts process over outcome. If publishers were basketball players, they would say, “When I shoot free throws, I align my toes with the foul line and square my shoulders to the basket. I slow my breathing and count to 5. I dribble three times, exhale once more, and then shoot, making sure to keep my elbow in and fully extend my arm.” Honestly, who cares? What you really want to know about a basketball player is whether or not he makes his foul shots. You aren’t going to draft him based on his free throw shooting process – you’re going to draft him based on his free throw shooting percentage. If the player you’re vetting shoots underhanded but makes over 90% of his foul shots, you’re going to draft him. The same is true with a salesperson – you don’t care about her sales process, you care about the number of sales she closes. Or with a baseball player – you don’t care about his batting process, you care about his batting average. Or with a network engineer, you don’t care about her specific troubleshooting process, you care about whether your employees can reach the internet or not.

So why, why, why, would we accept a publisher telling us that “high quality” is a function of process and not a function of results? Publishers want “high quality” to mean educational resources that are “written by experts, copyedited by professionals, reviewed by peers, laid out by graphic designers, and provided in multiple formats,” with literally no reference to results. What we need to know is how much students who use the resources learn. But as a community, we faculty largely accept publishers’ claim that “process = high quality” and don’t ask for outcomes or results data as part of our textbook and materials adoption processes.

And this notion is absolutely critical for the field of open education to understand: it is clearly in publishers’ best interest to focus faculty on process rather than outcome. By (1) equating “high quality” with process rather than results, and then (2) creating extremely complex authoring processes they proclaim to be “the industry standard,” publishers are attempting to create a barrier to entry for other would-be creators of educational resources (like many OER authors). “Oh, you can’t afford to replicate our elaborate publication process? That’s too bad, because our process is synonymous with high quality. Ergo, your materials are low quality.” And see? There’s literally no appeal to results in this argument, only slavish devotion to process. It’s a blatant attempt by publishers at keeping fresh competition – including OER – at bay.

In this bizarro world where results don’t matter, resources that produce better learning results than content produced using the traditional process are described as low quality. Huh?!? Encouraging people to talk about results instead of process – encouraging them to avoid nebulous phrases like “high quality” in favor of words like results, outcomes, or efficacy – is about taking back the conversation from publishers and focusing it where it belongs.

Now, I fully believe that resources created through the “traditional process” can effectively support learning. But there are two things I don’t believe:

  1. That conformance to the traditional process guarantees that every resource created that way will effectively support learning, and
  2. That the traditional process is the only process that can result in resources that effectively support learning.

There has to be a recognition by faculty – if not an admission by publishers – that alternate development processes can result in highly effective educational materials. But currently there isn’t. It feels a bit like we’re trapped in 2005, still arguing over whether or not the Wikipedia authoring process can create writing as accurate as the Encyclopedia Britannica process. We settled this argument ten years ago. What are we still arguing about?

Postscript. In the comments on my first “Stop Saying High Quality” post, one commenter asked for a concrete example of OER significantly outperforming commercial materials. His comment makes the point of the argument while seeming to completely miss it – “Oh yeah? Show us proof we should stop saying high quality!” To oblige the commenter: Educause Review published an example of a college abandoning a Pearson textbook and MyMathLab bundle in favor of OER and the open source MyOpenMath practice system. (Spoiler alert: pass rates rose from 48% in Spring 2011 to 60% in Spring 2013.)