Last week I wrote that we should stop saying “high quality” when discussing learning materials. Some have questioned whether or not that’s true. It is true, and here’s why.

Photo by Shira Golding, CC BY NC.

The problem with the phrase “high quality” as used by traditional publishers is that it puts process over outcome. If publishers were basketball players, they would say, “When I shoot free throws, I align my toes with the foul line and square my shoulders to the basket. I slow my breathing and count to 5. I dribble three times, exhale once more, and then shoot, making sure to keep my elbow in and fully extend my arm.” Honestly, who cares? What you really want to know about a basketball player is whether or not he makes his foul shots. You aren’t going to draft him based on his free throw shooting process – you’re going to draft him based on his free throw shooting percentage. If the player you’re vetting shoots underhanded but makes over 90% of his foul shots, you’re going to draft him. The same is true with a salesperson – you don’t care about her sales process, you care about the number of sales she closes. Or with a baseball player – you don’t care about his batting process, you care about his batting average. Or with a network engineer, you don’t care about her specific troubleshooting process, you care about whether your employees can reach the internet or not.

So why, why, why, would we accept a publisher telling us that “high quality” is a function of process and not a function of results? Publishers want “high quality” to mean educational resources that are “written by experts, copyedited by professionals, reviewed by peers, laid out by graphic designers, and provided in multiple formats,” with literally no reference to results. What we actually need to know is how much students who use the resources learn. But as a community, we faculty largely accept publishers’ claim that “process = high quality” and don’t ask for outcomes or results data as part of our textbook and other materials adoption processes.

And this notion is absolutely critical for the field of open education to understand: it is clearly in publishers’ best interest to focus faculty on process rather than outcome. By (1) equating “high quality” with process rather than results, and then (2) creating extremely complex authoring processes they proclaim to be “the industry standard,” publishers are attempting to create a barrier to entry for other would-be creators of educational resources (like many OER authors). “Oh, you can’t afford to replicate our elaborate publication process? That’s too bad, because our process is synonymous with high quality. Ergo, your materials are low quality.” And see? There’s literally no appeal to results in this argument, only slavish devotion to process. It’s a blatant attempt by publishers at keeping fresh competition – including OER – at bay.

In this bizarro world where results don’t matter, resources that produce better learning results than content produced using the traditional process are described as low quality. Huh?!? Encouraging people to talk about results instead of process – encouraging them to avoid nebulous phrases like “high quality” in favor of words like results, outcomes, or efficacy – is about taking back the conversation from publishers and focusing it where it belongs.

Now, I fully believe that resources created through the “traditional process” can effectively support learning. But there are two things I don’t believe:

  1. That conformance to the traditional process guarantees that every resource created that way will effectively support learning, and
  2. That the traditional process is the only process that can result in resources that effectively support learning.

There has to be a recognition by faculty – if not an admission by publishers – that alternate development processes can result in highly effective educational materials. But currently there’s not. It feels a bit like we’re trapped in 2005, still arguing over whether or not the Wikipedia authoring process can create writing as accurate as the Encyclopedia Britannica process. We settled this argument ten years ago. What are we still arguing about?

Postscript. In the comments on my first “Stop Saying High Quality” post, one commenter asked for a concrete example of OER significantly outperforming commercial materials. His comment makes the point of the argument while seeming to completely miss it – “Oh yeah? Show us proof we should stop saying high quality!” To oblige the commenter, here is an example, published in Educause Review, of a college abandoning a Pearson textbook and MyMathLab bundle in favor of OER and the open source MyOpenMath practice system. (Spoiler alert: pass rates rose from 48% to 60% between Spring 2011 and Spring 2013.)


Stop Saying “High Quality”

Recently the phrase “high quality” has come up several times in discussions of educational materials, and I’ve been surprised by what a strong, negative reaction I’ve had each time I heard the phrase.

After some reflection I think the reason the phrase gets my goat is that “high quality” sounds like it’s dealing with a core issue, while actually dodging the core issue. The phrase is sneaky and deceptive. (Now I don’t mean that the people who were using it were trying to be deceptive; they weren’t. But the phrase itself tends to blind people.) And by “core issue” I mean this – the core issue in determining the quality of any educational resource is the degree to which it supports learning. But confusingly, that’s not what people mean when they say that a textbook or other educational resource is “high quality.”

It’s very easy to demonstrate that “the degree to which it supports learning” is the only characteristic of an educational resource that matters. If an educational resource is written by experts, copyedited by professionals, reviewed by peers, laid out by graphic designers, contains beautiful imagery, and is provided in multiple formats, but fails to support learning, is it appropriate for us to call it “high quality”? No. No, no, no. A thousand times no. Despite this fact, which is intuitively obvious, when people say “high quality” they actually mean all these things (author credentials, review by faculty, copyediting, etc.) except effectiveness. In the world of textbooks and other educational materials, “high quality” describes the authoring and editorial process, and is literally unrelated to whether or not the educational resource supports learning.

In this way, saying “high quality” obscures the issue we should care about most. Instead of letting people and companies off the hook by checking boxes during the pre-publication process, we should care about whether or not a particular resource supports learning for each of our particular students. Seen this way, the true desideratum for educational materials is effectiveness. I really don’t care what the pre-publication process was like as long as my students are learning (unless the process was unethical in some way).

So please – let’s stop saying “high quality.” We don’t want “high quality” educational materials – we want “effective” educational materials. In the future, when you catch yourself saying “high quality,” stop and correct yourself. When you hear others say “high quality,” take that teachable moment to help them understand that the phrase is a ruse. If we can change this one element of the education conversation, we’ll have done something powerful.

(And don’t forget – when materials are so expensive that students can’t afford them, they are perfectly ineffective.)


The Remix Hypothesis

For several years my colleagues and I have been conducting and reviewing empirical research on the impact on student outcomes when OER are adopted in place of commercial materials. Suffice it to say the research results are highly variable. Some studies of OER adoption show essentially no change in student outcomes. Many of these studies report small positive and negative changes in outcomes that, aggregated across several courses, fail to achieve statistical significance. Some studies including larger numbers of students find small changes in student outcomes that achieve statistical significance while failing to achieve practical significance. Based on these studies, we can say that sometimes OER save students significant amounts of money while obeying the “do no harm” rule in terms of student outcomes. Achieving the same outcomes for free, or for 95% less than students were previously paying, is a solid “win” for OER.
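
To make the distinction between statistical and practical significance concrete, here is a minimal sketch with invented numbers (not drawn from any of the studies mentioned here): with a large enough enrollment, even a two-point difference in pass rates clears the conventional p < .05 bar, while the question of whether that difference matters for students remains open.

    # Minimal sketch, hypothetical numbers only: statistical vs. practical significance.
    import math

    def two_proportion_ztest(passes_a, n_a, passes_b, n_b):
        """Two-sided z-test comparing two pass rates (pooled standard error)."""
        p_a, p_b = passes_a / n_a, passes_b / n_b
        p_pool = (passes_a + passes_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
        return p_a - p_b, z, p_value

    # Invented example: 10,000 students per group, pass rates of 71% vs. 69%.
    diff, z, p = two_proportion_ztest(7100, 10_000, 6900, 10_000)
    print(f"difference = {diff:+.1%}, z = {z:.2f}, p = {p:.4f}")
    # Prints a "significant" result (p < .05) for a 2-point gap; whether that
    # gap is practically meaningful for students is a separate judgment.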

However, there are other studies of OER adoption that show large positive changes in student outcomes. (For sake of completeness, we are unaware of any studies showing large negative impacts of OER adoption on student outcomes.) Of these studies we can say that OER save students significant amounts of money while actually improving their learning outcomes. Clearly, supporting more learning at lower cost is also a solid “win” for OER.

Seeing two solid wins for OER, and no clear losses, reassures those of us pursuing this line of work that we are on the right track. However, as this variability in research results persists – some studies showing essentially no change in student outcomes while others show large improvements in student outcomes – you have to begin asking yourself, “Self, what is going on here? Why is there such a large difference in the results of many of these studies?” For the past several months I have been pondering this question and speaking to some of the faculty who taught the courses reported in the studies. While I don’t have any research to report on the issue today, I have developed an initial working understanding of the discrepancy. Until I think of something more descriptive I’ll call it “The Remix Hypothesis.”

In its simplest form, The Remix Hypothesis states that changes in student outcomes occurring in conjunction with OER adoption correlate positively with faculty remixing activities. Specifically, I hypothesize relationships between (at least) three levels of remix activity by faculty who adopt OER and changes in student outcomes, based on what I’ve seen in the research to date.

Level 0 – Replace
At this level faculty engage in no remixing whatsoever. They simply adopt OER (most often an open textbook) in place of a commercial textbook and preserve other aspects of the course as they taught it previously. I hypothesize no changes in student outcomes when faculty Replace – except possibly in one special case. In the case of students who are particularly financially disadvantaged, where faculty were previously assigning very expensive textbooks, there may be a small positive effect attributable to the increased percentage of students who can access the core instructional materials of the course.

Level 1 – Realign
At this level faculty remix their open course materials. In my work to date, this has most often involved faculty stripping a course’s content down to its bare learning outcomes, and then selecting, from multiple sources, the OER they feel will best support student learning of specific course outcomes. I hypothesize small to modest positive changes in student outcomes when faculty Realign.

Level 2 – Rethink
At this level faculty remix both course materials and pedagogy. In conjunction with the Realign activities described above, faculty create or select new learning activities and assessments – possibly inviting students to co-create and openly share them – often leveraging the unique pedagogical possibilities provided by the 5R permissions of OER. (This is what I refer to as open pedagogy.) I hypothesize modest to large positive changes in student outcomes when faculty Rethink.

Remixing and revising at the Rethink level will be significantly more effective if faculty make those decisions after gathering, analyzing, and reflecting on empirical data about their courses. Helping them ground these decisions in their own data is extremely important. I’ve personally seen several cases where faculty’s own memories about what is and isn’t working in their own classes are contradicted by data in their own gradebooks. It’s an amazing process to watch faculty struggle to understand the conflict between their intuitions and their data. We need to support faculty in this reflection process every time they teach the course – supporting ongoing, continuous improvement. Rethink is ideally an iterative process by which pedagogy and supporting materials come into increasing harmony, supporting deeper and deeper student learning.
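
As a purely hypothetical illustration of the kind of gradebook check I mean (the CSV layout and the “outcome” and “score” column names are invented, not taken from any particular system), a faculty member might summarize average scores by learning outcome and set the result next to their intuitions about which outcomes students are struggling with:

    # Hypothetical sketch: average gradebook scores per learning outcome.
    import csv
    from collections import defaultdict

    def outcome_averages(path):
        """Mean score per learning outcome from a simple gradebook CSV export."""
        totals = defaultdict(lambda: [0.0, 0])  # outcome -> [score sum, row count]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["outcome"]][0] += float(row["score"])
                totals[row["outcome"]][1] += 1
        return {outcome: s / n for outcome, (s, n) in totals.items()}

    # Expects columns named "outcome" and "score" (assumed layout).
    for outcome, avg in sorted(outcome_averages("gradebook.csv").items()):
        print(f"{outcome}: {avg:.1f}")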

Very roughly, we might say in terms of the 5Rs that at Level 0 faculty take advantage of their permissions to Retain, Reuse, and Redistribute educational materials. At Level 1 they add Remix. At Level 2 they add Revise, and expand from open materials into open pedagogy. Strictly speaking this characterization isn’t completely accurate, but I think it provides a good approximation to get our thinking going.

The three levels build on one another. First, a faculty member decides to Replace her textbook with OER. A logical next step a semester or three later is to Realign, making more sophisticated choices about which specific OER to use to support specific student learning outcomes. Finally, as faculty become more familiar with the benefits the 5Rs provide them as teachers, they may begin to grasp the potential benefits the 5Rs can provide to learners. This will lead them to Rethink their assignments and assessments so that they maximize the learning-related benefits of openness to their students.

As I said above, I don’t have empirical data from a specifically designed study to corroborate The Remix Hypothesis yet, but I hope to either validate or disprove it empirically in the next few years in collaboration with my awesome partners in the Open Education Group.
